On Aug 2, 2019, at 7:55 PM, Alfredo Deza
<adeza(a)redhat.com> wrote:
On Fri, Aug 2, 2019 at 12:20 PM Lars Marowsky-Bree <lmb(a)suse.com> wrote:
Hi,
Kyrylo wanted to add openSUSE to the set of distributions that get built
for and used by teuthology, for example:
https://github.com/ceph/ceph-build/pull/1356
https://github.com/ceph/ceph-build/pull/1355
However, it seems there are officially "no plans" to add platforms other
than Ubuntu and CentOS, so Alfredo closed the PRs.
Hey Lars. Adding a distro for builds is a very involved problem to
solve. We don't keep a detailed list of everything that is needed but
I will try to go over some of the well-known items:
1) For RPM-based distros, we must have an actual machine running that
distro (for example, CentOS 7 for CentOS 7 RPMs) - at least until we start
using mock, which can produce RPMs from any base distro.
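For reference, the mock route mentioned here would look roughly like this. This is a command-line sketch only; the config name and SRPM name are assumptions, not taken from the actual ceph-build scripts:

```shell
# Sketch: building an openSUSE RPM from any host that has mock installed.
# opensuse-leap-15.1-x86_64 is an assumed config name; list /etc/mock/ on
# the build host to see which configs your mock version actually ships.
mock -r opensuse-leap-15.1-x86_64 --init
mock -r opensuse-leap-15.1-x86_64 --rebuild ceph-*.src.rpm  # placeholder SRPM
```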
We are not interested in «mock» on CentOS or whatever Red Hat uses;
we are interested in a pure SUSE environment.
2) An image for a newly added distribution *must* exist in the cloud
provider (OVH in this case) so that a VM can be spun up for builds (as of
this writing, there is only an opensuse42 image available, from 2016).
This image is outdated and can be dropped. We at SUSE want to add
a new image, like opensuse15 or leap15. I can help with this.
Where can I find instructions on how to do this, or someone who can
tell me what to do? Who can help us get access to OVH?
I've already spent some time adding support for a recent openSUSE distro
to the ansible scripts. One of the patches to ansible/slave for ceph-build was
rejected without a clear reason and without reasonable discussion.
3) *All* the building scripts must be revised to ensure that the new
distribution is accounted for. I did some of this work when adding
Ubuntu Bionic; it was non-trivial and error-prone, and it took about
two weeks to really get it right, with the help of other people.
How can we contribute to adding the distro, beyond providing patches
and human resources?
4) The services that ensure that images come up and are prepared to
build Ceph have to be updated as well, to ensure that the minimum
requirements are installed and the machine is operational.
Where are those services described? How can we contribute to this?
5) If the new distro is Python3-only, we will need to update all
tooling that interacts with a Jenkins node - we are not there yet, as
all our tooling is Python2-exclusive.
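A minimal sketch of the kind of dual-interpreter shim this migration implies; nothing here is from the actual ceph-build tooling, it only illustrates what each module would need while both interpreters are in use:

```python
# Straddling Python 2 and 3: guard the renamed stdlib modules and keep
# print() usable under both interpreters.
from __future__ import print_function

try:
    from urllib.parse import urljoin  # Python 3 location
except ImportError:
    from urlparse import urljoin      # Python 2 location

def repo_url(base, path):
    """Join a repo path onto a base URL identically on both interpreters."""
    return urljoin(base, path)

print(repo_url("https://shaman.ceph.com/", "api/repos/"))
```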
At Cephalocon, Ken Dreyer and I gave a presentation on what exactly
building Ceph entails for both development and releases, the problems
we've faced, and where we would like to head next. It might be useful
to go through if you haven't already:
https://www.youtube.com/watch?v=seHyiQT8YJM
I haven't seen in this video any exact recommendation on how to proceed if
one wishes to contribute and add support for one's own system.
I haven't seen any reference on where one can start.
A few of the things we brought up are that we (the Ceph infrastructure
team and our services) aren't prepared to accommodate multiple other
distributions, and that we are trying to get away from taking on the
maintenance load in our systems and are looking to other build/repo
solutions. One of these solutions is the CentOS storage-sig, which we are
trying to coordinate with to build and host repos for us there.
In addition to that, we mentioned that we would like to see a wider
community effort go into building and hosting Ceph in separate
systems, maybe with a special signing key (our release signing process
is pretty inflexible!) so that
others who are building development repositories can ensure their
authenticity, while giving us the ability to revoke keys as needed.
Do you have a clear procedure for how to do this? How can we help develop
one?
In the past, we've been asked to re-enable the Debian builds, which
puts us in a similar position (maintenance burden, script updates,
and other items already mentioned), and we've had to turn that down.
As Ken mentions in the presentation,
we really want to be helpful and accommodating to the wider community,
but we can't do it on our own and with our infrastructure as it is
today - we are maxed out.
Distributing community signing keys, or allowing other builders to
submit status updates into shaman.ceph.com for test scheduling, is
yet-to-be-done work, but I am open to having those conversations so that
we can move forward with more distros and better testing.
Taking into account that SUSE can provide resources to help improve the Ceph
infrastructure, can we start with the topic of what the clear steps are, so that in the
end we can:
1) Submit a PR whose make check test runs on a SUSE-based distro and is reported back
to GitHub.
2) Have builds scheduled for our lovely distro from a PR or any branch we demand, with
artifacts exposed on shaman so they can be used in teuthology.
3) Same as (2) but for the nightly/daily builds.
Finally, do we need to improve the existing ceph/ceph-build system, or do we need to
build our own system on some other site?
If so, can we use Shaman/Chacra, or do we have to develop our own infrastructure?
If Shaman/Chacra is the way, who can provide us the corresponding credentials/keys to
use it, or should we try to grab them from the Jenkins logs? Is there any official
documentation on how to integrate with Shaman/Chacra?
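For context, a minimal sketch of how a client might locate build artifacts on shaman. The /api/search endpoint and its parameter names here are assumptions drawn from how teuthology queries shaman, not official documentation:

```python
# Build a shaman search query for ready repos of a given project/ref/distro.
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2

SHAMAN = "https://shaman.ceph.com/api/search"  # assumed endpoint

def search_url(project, ref, distro, flavor="default"):
    """distro is 'name/version', e.g. 'opensuse/15' (assumed convention)."""
    params = urlencode([
        ("status", "ready"),
        ("project", project),
        ("ref", ref),
        ("distros", distro),
        ("flavor", flavor),
    ])
    return "%s?%s" % (SHAMAN, params)
```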
4) Let's say we already have some CI (an onsite Jenkins setup) and resources running,
for example, «make check» for PRs against our openSUSE system. We wish to enable it for
github.com/ceph/ceph. We need the right access to the repo in order to report the
status to the PR; whom should we ask to grant that access for our robot user? Or how
does it work?
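To make the ask concrete, here is a hypothetical sketch of what such a robot user would send to report a make-check result to a PR's head commit. The endpoint and payload shape follow GitHub's public commit-status API; the context string is a made-up placeholder, and actually POSTing it requires a token the Ceph org would have to grant:

```python
# Commit-status report for POST /repos/:owner/:repo/statuses/:sha.
STATUS_API = "https://api.github.com/repos/ceph/ceph/statuses/{sha}"

def status_payload(state, target_url, context="suse/make-check"):
    """Build the request body; 'context' names the check shown on the PR."""
    assert state in ("pending", "success", "failure", "error")
    return {
        "state": state,
        "target_url": target_url,  # link back to the Jenkins job
        "description": "make check on openSUSE",
        "context": context,        # placeholder name for the robot's check
    }
```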
What requirements does a platform need to meet to be added? What's the
process for that?
(I hope this is the right list)
Regards,
Lars
--
SUSE Linux GmbH, GF: Felix Imendörffer, Mary Higgins, Sri Rasiah, HRB 21284 (AG
Nürnberg)
"Architects should open possibilities and not determine everything." (Ueli
Zbinden)
_______________________________________________
Dev mailing list -- dev(a)ceph.io
To unsubscribe send an email to dev-leave(a)ceph.io