On Mon, Aug 5, 2019 at 12:55 PM Sage Weil <sage(a)newdream.net> wrote:
On Mon, 5 Aug 2019, Alfredo Deza wrote:
On Mon, Aug 5, 2019 at 9:48 AM Sage Weil
<sage(a)newdream.net> wrote:
On Mon, 5 Aug 2019, Alfredo Deza wrote:
I think we are deviating a bit in this discussion:
1) We are not in a position to add any other new distro to our
build toolchain, for development or releases.
2) We can own the effort of adding the mechanisms needed in shaman
(shaman.ceph.com) so that community-built Ceph packages/repos can
report there. This will entail adding authentication so that updates
can verify the source.
For #2 specifically, the shaman dashboard allows updating the status
of a build (started, building, failed, succeeded) as well as the repo.
Tools like teuthology query shaman for the state and location
of repos. The API is detailed here:
https://github.com/ceph/shaman
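To make the authentication idea concrete, a community builder's status update could be an HMAC-signed POST against shaman. This is a minimal sketch only: the payload fields, header name, and signing scheme here are assumptions for illustration, not the actual shaman API (see the repo above for the real endpoints).

```python
import hashlib
import hmac
import json

SHAMAN_URL = "https://shaman.ceph.com"

def build_status_update(project, ref, sha1, distro, distro_version, status, repo_url):
    """Assemble a build-status payload of the kind shaman tracks
    (started, building, failed, succeeded) plus the repo location."""
    return {
        "project": project,
        "ref": ref,
        "sha1": sha1,
        "distro": distro,
        "distro_version": distro_version,
        "status": status,
        "url": repo_url,
    }

def sign_payload(payload, secret):
    """Sign the JSON body with HMAC-SHA256 so the server could verify
    the update came from a registered community builder (hypothetical
    scheme; the header name is made up for this sketch)."""
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    signature = hmac.new(secret.encode("utf-8"), body, hashlib.sha256).hexdigest()
    return body, {"X-Shaman-Signature": signature}

payload = build_status_update(
    "ceph", "master", "0" * 40, "debian", "buster", "succeeded",
    "https://example.org/repos/ceph/buster",
)
body, headers = sign_payload(payload, secret="community-builder-key")
print(headers["X-Shaman-Signature"])
```

Shaman would only need to keep a per-builder secret (revocable, much like the signing keys discussed further down) and recompute the HMAC over the received body to accept or reject the update.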
I think this is misunderstanding who "we" are. What distributions are
tested and built for shaman/chacra and download.ceph.com is a
community decision and depends on who is able to invest the effort.
I agree here; I am not implying that the decision is not (or should
not be) driven by the community.
If someone
shows up willing to do the work, whoever was doing the work before doesn't
get to just say no--especially if they don't want to be stuck with that
responsibility for all time.
Our infrastructure (Jenkins, repo builders, OVH services, shaman)
wasn't built with the ability to allow community contributions.
Again, I'm unclear what "our" and "community" mean here. All of the
build infrastructure (with the exception of the signing host) is in
the sepia lab, which is a shared community (upstream) resource.
Adjusting access to individual hosts within the lab is trivial.
At some point we tried this with CERN, who graciously spared a few
nodes to build Ceph, but it had lots of issues on both ends and we
ended up not moving forward with that way of extending how we build.
Agree on the external jenkins workers, or integration with anything that
is outside of sepia--that's a different ball of wax!
I think that we may have been discussing different parts of the
infrastructure... so yes, I agree that the lab should have the
ability to run tests and to grant access to others; it seems like
that has always been in place. My concerns are more with what
produces and hosts packages, which I think can be solved by allowing
shaman to receive reports.
It also seems that I may have come across as a sort of gate-keeper
here, and I want to ensure that is not the case. Having introduced
the build system a while ago, I can tell that there are issues if we
wanted to go monolithic (vs. federated). I am happy to assist where
I can.
>
> sage
>
>
> > Note that this is different from "what distro is going to be built" -
> > adding nodes is just a piece of the puzzle.
> >
> > >
> > > I see two paths forward: (1) we continue with a monolithic approach to
> > > builds and expand the pool of people who understand and contribute to
> > > maintaining the build infra, or (2) we rearchitect to a federated
> > > approach. *Both* paths require knowledge transfer to new people,
> > > especially if the old team is too busy with other projects (as I keep
> > > hearing). (FWIW, the first path sounds like a lot less effort, and the
> > > two presumably also aren't mutually exclusive.)
> >
> > I am all for anyone wanting to step up and do the work, but I see that
> > functioning well if we stick with #2 which is what I am proposing:
> > allow the community to build and maintain what is needed for other
> > builds/distro combinations
> > and allow those to be reported in shaman. Again, this will take some
> > effort that I am happy to assist with.
> >
> > >
> > > I talked to David a couple weeks back about getting a walk-through of
> > > bringing Debian Buster up in fog so that we could document the process, and
> > > the response I got was that it is all already documented. It then took
> > > him the rest of the day to get a working fog image. :) Where is this
> > > documentation?
> > >
> > > Thanks!
> > > sage
> > >
> > >
> > >
> > > >
> > > >
> > > >
> > > > On Mon, Aug 5, 2019 at 4:21 AM Marcin Juszkiewicz
> > > > <marcin.juszkiewicz(a)linaro.org> wrote:
> > > > >
> > > > > W dniu 02.08.2019 o 19:55, Alfredo Deza pisze:
> > > > > > On Fri, Aug 2, 2019 at 12:20 PM Lars Marowsky-Bree <lmb(a)suse.com> wrote:
> > > > >
> > > > > >> Kyrylo wanted to add openSUSE to the set of distributions that get
> > > > > >> built for and used by teuthology, for example:
> > > > >
> > > > > >> However, it seems that officially there are "no plans" to add other
> > > > > >> platforms than Ubuntu and CentOS, so Alfredo closed the PRs.
> > > > >
> > > > > At Linaro we would like to help with adding Debian to the list of
> > > > > distributions Ceph is built for. We can provide aarch64 (arm64)
> > > > > machines for it.
> > > > >
> > > > > > Hey Lars. Adding a distro for builds is a very involved problem to
> > > > > > solve. We don't keep a detailed list of everything that is needed,
> > > > > > but I will try to go over some of the well-known items:
> > > > >
> > > > > > 2) A new distribution added *must* exist in the cloud provider (OVH
> > > > > > in this case) that can spin up a VM for builds (as of this writing,
> > > > > > there is only an opensuse42 image available, from 2016)
> > > > >
> > > > >
> > > > > https://www.ovh.co.uk/dedicated_servers/distributions/ lists Debian 10
> > > > > as available. Not that this page is up-to-date, as it does not even
> > > > > list Ubuntu 18.04.
> > > > >
> > > > > > 3) *All* the building scripts must be revised to ensure that the new
> > > > > > distribution is accounted for. I did some of this work when adding
> > > > > > Ubuntu Bionic and it was non-trivial, error-prone, and it took about
> > > > > > two weeks to really get it right with the help of other people.
> > > > >
> > > > > I can probably work on it.
> > > > >
> > > > > > 4) The services that ensure that images come up and are prepared to
> > > > > > build Ceph have to be updated as well, to ensure that the minimum
> > > > > > requirements are installed so that the machine is operational
> > > > >
> > > > > > 5) If the new distro is Python3-only we will need to update all
> > > > > > tooling that interacts with a Jenkins node - we are not there yet,
> > > > > > as all our tooling is Python2-exclusive.
> > > > >
> > > > > Debian 10 'buster' has both Python 2.7 and Python 3.7, so it should
> > > > > not be a problem - we can start with py2 and then update to py3 once
> > > > > Ceph moves.
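[As an aside, dual-runtime tooling of the sort item 5 asks for can usually be bridged with `__future__` imports and a couple of small shims. A minimal sketch of the pattern, illustrative only and not code from the actual build tooling:]

```python
# Sketch of a script written to run unchanged under Python 2.7 and 3.x,
# the situation described above for the Jenkins-facing tooling.
from __future__ import print_function

import subprocess
import sys

def run(cmd):
    """Run a command and return its output as text on py2 and py3 alike."""
    out = subprocess.check_output(cmd)
    # check_output returns bytes on py3 and str on py2; normalize to text.
    if not isinstance(out, str):
        out = out.decode("utf-8")
    return out.strip()

greeting = run([sys.executable, "-c", "print('hello from the build node')"])
print(greeting)
```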
> > > > >
> > > > > > At Cephalocon, Ken Dreyer and I gave a presentation on what exactly
> > > > > > building Ceph for both development and releases entails, the
> > > > > > problems we've faced, and where we would like to head next. It might
> > > > > > be useful to go through if you haven't already:
> > > > > > https://www.youtube.com/watch?v=seHyiQT8YJM
> > > > >
> > > > > Will watch it later.
> > > > >
> > > > > > A few of the things we brought up are that we (the Ceph
> > > > > > infrastructure team and our services) aren't prepared to accommodate
> > > > > > multiple other distributions, and that we are trying to get away
> > > > > > from taking on the maintenance load in our own systems and are
> > > > > > looking to other build/repo solutions. One of these solutions is the
> > > > > > CentOS storage-sig, which we are coordinating with to build and
> > > > > > host repos for us.
> > > > >
> > > > > > In addition to that, we mentioned that we would like to see a wider
> > > > > > community effort go into building and hosting Ceph in separate
> > > > > > systems, maybe with a special signing key (our release signing
> > > > > > process is pretty inflexible!) so that others who are building
> > > > > > development repositories can ensure their authenticity, while giving
> > > > > > us the ability to revoke keys as needed.
> > > > >
> > > > > > In the past, we've been asked to re-enable the Debian builds, which
> > > > > > puts us in a similar position (maintenance burden, script updates,
> > > > > > and the other items already mentioned), and we've had to turn that
> > > > > > down. As Ken mentions in the presentation, we really want to be
> > > > > > helpful and accommodating to the wider community, but we can't do it
> > > > > > on our own with our infrastructure as it is today - we are maxed
> > > > > > out.
> > > > >
> > > > > > Distributing community signing keys, or allowing other builders to
> > > > > > submit status updates into shaman.ceph.com for test scheduling, is
> > > > > > yet-to-be-done work, but I am open to having those conversations so
> > > > > > that we can move forward with more distros and better testing.
> > > > >
> > > > > At Linaro we can arrange machines for building and testing, and space
> > > > > for hosting Debian/arm64 repos too. We are fine with those machines
> > > > > also being used to build packages for other arm64 distributions.
> > > > > _______________________________________________
> > > > > Dev mailing list -- dev(a)ceph.io
> > > > > To unsubscribe send an email to dev-leave(a)ceph.io