From dgallowa@redhat.com Fri Apr 3 21:28:13 2020 From: David Galloway To: sepia@ceph.io Subject: [sepia] RHEL7.8 available in the lab Date: Fri, 03 Apr 2020 17:28:14 -0400 Message-ID: <5f6a89dc-6b3a-81b9-9227-5c5816ffc4fb@redhat.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0492509243299328801==" --===============0492509243299328801== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit RHEL7.8 GAed on March 31 and is available in the Sepia lab now. We have FOG images for smithi and mira. -- David Galloway Systems Administrator, RDU Ceph Engineering IRC: dgalloway --===============0492509243299328801==-- From dgallowa@redhat.com Wed Apr 8 14:38:18 2020 From: David Galloway To: sepia@ceph.io Subject: [sepia] Outage this Friday (?) Date: Wed, 08 Apr 2020 10:38:21 -0400 Message-ID: <5ccdf2c5-56cf-8030-74de-a1794faab2a7@redhat.com> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4332795037980587479==" --===============4332795037980587479== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Hey all, As the list of active Sepia users grows, the free space on the teuthology VM continues to decrease.
Instead of continuing to e-mail the users with the largest home dirs, I'm going to increase the size of the teuthology VM's disk. This will require about a 12 hour outage. I'll need to instruct teuthology workers to die after their running job, shut the VM down, grow the disk and start everything back up. If there are no objections, I can start this Thursday night at midnight Eastern time and hopefully do the maintenance in the morning. Are there any pressing releases or a need to keep things running or does this work for everyone? -- David Galloway Systems Administrator, RDU Ceph Engineering IRC: dgalloway --===============4332795037980587479==-- From tchaikov@gmail.com Thu Apr 9 03:37:22 2020 From: kefu chai To: sepia@ceph.io Subject: [sepia] Re: Outage this Friday (?) Date: Thu, 09 Apr 2020 11:37:18 +0800 Message-ID: In-Reply-To: 5ccdf2c5-56cf-8030-74de-a1794faab2a7@redhat.com MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============7093002412377683179==" --===============7093002412377683179== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On Wed, Apr 8, 2020 at 10:38 PM David Galloway wrote: > > Hey all, > > As the list of active Sepia users grows, the free space on the > teuthology VM continues to decrease. Instead of continuing to e-mail > the users with the largest home dirs, I'm going to increase the size of > the teuthology VM's disk. > > This will require about a 12 hour outage. I'll need to instruct > teuthology workers to die after their running job, shut the VM down, > grow the disk and start everything back up. > > If there are no objections, I can start this Thursday night at midnight > Eastern time and hopefully do the maintenance in the morning. > > Are there any pressing releases or a need to keep things running or > does this work for everyone? David, is it possible to take this opportunity to upgrade the VM to bionic? so we can start using python3.6 offered by bionic in teuthology test suites.
xenial was shipped with python3.5. =( > > -- > David Galloway > Systems Administrator, RDU > Ceph Engineering > IRC: dgalloway > _______________________________________________ > Sepia mailing list -- sepia(a)ceph.io > To unsubscribe send an email to sepia-leave(a)ceph.io -- Regards Kefu Chai --===============7093002412377683179==-- From dgallowa@redhat.com Thu Apr 9 16:17:49 2020 From: David Galloway To: sepia@ceph.io Subject: [sepia] Re: Outage this Friday (?) Date: Thu, 09 Apr 2020 12:17:53 -0400 Message-ID: <361bfc1d-f708-c3ae-bd79-04bd27f6fc25@redhat.com> In-Reply-To: CAJE9aONB7XOPkY+cSHQbM88m=30TcewjEzg3fE=9MQd9HU_MsA@mail.gmail.com MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============6725608681612990049==" --===============6725608681612990049== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On 4/8/20 11:37 PM, kefu chai wrote: > On Wed, Apr 8, 2020 at 10:38 PM David Galloway wrote: >> >> Hey all, >> >> As the list of active Sepia users grows, the free space on the >> teuthology VM continues to decrease. Instead of continuing to e-mail >> the users with the largest home dirs, I'm going to increase the size of >> the teuthology VM's disk. >> >> This will require about a 12 hour outage. I'll need to instruct >> teuthology workers to die after their running job, shut the VM down, >> grow the disk and start everything back up. >> >> If there are no objections, I can start this Thursday night at midnight >> Eastern time and hopefully do the maintenance in the morning. >> >> Are there any pressing releases or a need to keep things running or >> does this work for everyone? > > > David, is it possible to take this opportunity to upgrade the VM to > bionic? so we can start using python3.6 offered by bionic in > teuthology test suites. xenial was shipped with python3.5. =( > > That's more of a "Tuesday" type outage than a "Friday" type outage :) I will put it on my to-do list.
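(For reference, the "grow the disk" procedure David describes earlier in this thread might look roughly like the following on a libvirt-based hypervisor. This is a sketch only; the domain name, image path, size, and ext4 root filesystem are assumptions, not the lab's actual setup.)

```shell
# Drain first: tell the teuthology workers to exit after their running
# job (via the lab's own tooling), then shut the VM down cleanly.
virsh shutdown teuthology            # "teuthology" domain name is assumed

# Grow the backing image while the VM is off.
qemu-img resize /var/lib/libvirt/images/teuthology.img +500G   # path and size assumed

# Boot the VM, then extend the partition and filesystem inside the guest.
virsh start teuthology
# On the guest (assuming an ext4 root on /dev/vda1):
sudo growpart /dev/vda 1   # extend partition 1 to fill the larger disk
sudo resize2fs /dev/vda1   # grow ext4 online to fill the partition
```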
--===============6725608681612990049==-- From tchaikov@gmail.com Tue Apr 14 06:40:06 2020 From: kefu chai To: sepia@ceph.io Subject: [sepia] Re: Outage this Friday (?) Date: Tue, 14 Apr 2020 14:40:02 +0800 Message-ID: In-Reply-To: 361bfc1d-f708-c3ae-bd79-04bd27f6fc25@redhat.com MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1508307410211800834==" --===============1508307410211800834== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Fri, Apr 10, 2020 at 12:17 AM David Galloway wrote: > > > > On 4/8/20 11:37 PM, kefu chai wrote: > > On Wed, Apr 8, 2020 at 10:38 PM David Galloway wrote: > >> > >> Hey all, > >> > >> As the list of active Sepia users grows, the free space on the > >> teuthology VM continues to decrease. Instead of continuing to e-mail > >> the users with the largest home dirs, I'm going to increase the size of > >> the teuthology VM's disk. > >> > >> This will require about a 12 hour outage. I'll need to instruct > >> teuthology workers to die after their running job, shut the VM down, > >> grow the disk and start everything back up. > >> > >> If there are no objections, I can start this Thursday night at midnight > >> Eastern time and hopefully do the maintenance in the morning. > >> > >> Are there any pressing releases or a need to keep things running or > >> does this work for everyone? > > > > > > David, is it possible to take this opportunity to upgrade the VM to > > bionic? so we can start using python3.6 offered by bionic in > > teuthology test suites. xenial was shipped with python3.5. =( > > > > > > That's more of a "Tuesday" type outage than a "Friday" type outage :) ahh, indeed! > > I will put it on my to-do list. thank you David, as always!
> -- Regards Kefu Chai --===============1508307410211800834==-- From tchaikov@gmail.com Tue Apr 14 07:39:39 2020 From: kefu chai To: sepia@ceph.io Subject: [sepia] teuthology is now python3 Date: Tue, 14 Apr 2020 15:39:34 +0800 Message-ID: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1631261013877620041==" --===============1631261013877620041== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit hi folks, we just migrated ceph:teuthology and all tests under qa/ in ceph:ceph to python3. and from now on, the teuthology-worker runs in a python3 environment by default unless specified otherwise using "--teuthology-branch py2". which means: - we need to write tests in python3 in master now - teuthology should be python3 compatible. - teuthology bug fixes should be backported to "py2" branch. if you run into any issues related to python3 due to the above changes, please let me know. and i will try to fix it ASAP. currently, the tests under qa/ directories in ceph:ceph master branch are python2 and python3 compatible. but since we've moved to python3, there is no need to be python2 compatible anymore. since the sepia lab is still using ubuntu xenial, we cannot use features offered by python3.6 at this moment yet. but we do plan to upgrade the OS to bionic soon. before that happens, the tests need to be compatible with Python3.5. the next step is to - drop python2 support in ceph:ceph master branch, and - drop python2 support in ceph:teuthology master.
- backport python3 compatible changes to octopus and nautilus to ease the pain of backport -- Regards Kefu Chai --===============1631261013877620041==-- From kyrylo.shatskyy@suse.com Tue Apr 14 16:39:26 2020 From: Kyrylo Shatskyy To: sepia@ceph.io Subject: [sepia] Re: teuthology is now python3 Date: Tue, 14 Apr 2020 16:32:30 +0000 Message-ID: <3D98779B-D42B-4D93-A058-8B26B1760164@suse.com> In-Reply-To: CAJE9aOMKiThK3bAzroOEK42ECT5hai1E1j-Ucm7eaS+AbsE-WQ@mail.gmail.com MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4776488193271399232==" --===============4776488193271399232== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable This is sweet, Kefu, thanks for the great job you've done here. I honestly thought we can have py2 branch minimal and just rebase it regularly when something merged to the master. The backporting looks good to me too though, do we have a recommendation page, or should we create a devguide how to backport teuthology patches? Kyrylo Shatskyy -- SUSE Software Solutions Germany GmbH Maxfeldstr. 5 90409 Nuremberg Germany > On Apr 14, 2020, at 9:39 AM, kefu chai wrote: > > hi folks, > > we just migrated ceph:teuthology and all tests under qa/ in ceph:ceph > to python3. and from now on, the teuthology-worker runs in a python3 > environment by default unless specified otherwise using > "--teuthology-branch py2". > > which means: > > - we need to write tests in python3 in master now > - teuthology should be python3 compatible. > - teuthology bug fixes should be backported to "py2" branch. > > if you run into any issues related to python3 due to the above > changes, please let me know. and i will try to fix it ASAP. > > currently, the tests under qa/ directories in ceph:ceph master branch > are python2 and python3 compatible. but since we've moved to python3, > there is no need to be python2 compatible anymore.
since the sepia lab > is still using ubuntu xenial, we cannot use features offered by > python3.6 at this moment yet. but we do plan to upgrade the OS to > bionic soon. before that happens, the tests need to be compatible with > Python3.5. > > the next step is to > > - drop python2 support in ceph:ceph master branch, and > - drop python2 support in ceph:teuthology master. > - backport python3 compatible changes to octopus and nautilus to ease > the pain of backport > > -- > Regards > Kefu Chai > _______________________________________________ > Sepia mailing list -- sepia(a)ceph.io > To unsubscribe send an email to sepia-leave(a)ceph.io --===============4776488193271399232==-- From tchaikov@gmail.com Wed Apr 15 12:34:55 2020 From: kefu chai To: sepia@ceph.io Subject: [sepia] Re: teuthology is now python3 Date: Wed, 15 Apr 2020 20:34:49 +0800 Message-ID: In-Reply-To: 3D98779B-D42B-4D93-A058-8B26B1760164@suse.com MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0007547045077902462==" --===============0007547045077902462== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable On Wed, Apr 15, 2020 at 12:39 AM Kyrylo Shatskyy wrote: > > This is sweet, > > Kefu, thanks for the great job you've done here. thank you! just tried to help and to continue based on your ground work. > > I honestly thought we can have py2 branch minimal and just rebase it regularly when something merged to the master. i am trying to avoid using the word of "rebase". as i want to drop python2 support in master. to rebase a python2 compatible branch on a python3 only branch sounds dangerous to me. > The backporting looks good to me too though, do we have a recommendation page, or should we create a devguide how to backport teuthology patches? not yet. i have not got a chance to create one. > > Kyrylo Shatskyy > -- > SUSE Software Solutions Germany GmbH > Maxfeldstr.
5 > 90409 Nuremberg > Germany > > > > On Apr 14, 2020, at 9:39 AM, kefu chai wrote: > > > > hi folks, > > > > we just migrated ceph:teuthology and all tests under qa/ in ceph:ceph > > to python3. and from now on, the teuthology-worker runs in a python3 > > environment by default unless specified otherwise using > > "--teuthology-branch py2". > > > > which means: > > > > - we need to write tests in python3 in master now > > - teuthology should be python3 compatible. > > - teuthology bug fixes should be backported to "py2" branch. > > > > if you run into any issues related to python3 due to the above > > changes, please let me know. and i will try to fix it ASAP. > > > > currently, the tests under qa/ directories in ceph:ceph master branch > > are python2 and python3 compatible. but since we've moved to python3, > > there is no need to be python2 compatible anymore. since the sepia lab > > is still using ubuntu xenial, we cannot use features offered by > > python3.6 at this moment yet. but we do plan to upgrade the OS to > > bionic soon. before that happens, the tests need to be compatible with > > Python3.5. > > > > the next step is to > > > > - drop python2 support in ceph:ceph master branch, and > > - drop python2 support in ceph:teuthology master. 
> > - backport python3 compatible changes to octopus and nautilus to ease > > the pain of backport > > > > -- > > Regards > > Kefu Chai > > _______________________________________________ > > Sepia mailing list -- sepia(a)ceph.io > > To unsubscribe send an email to sepia-leave(a)ceph.io > -- Regards Kefu Chai --===============0007547045077902462==-- From kdreyer@redhat.com Mon Apr 20 16:52:41 2020 From: Ken Dreyer To: sepia@ceph.io Subject: [sepia] DreamCompute does not support nested virtualization Date: Mon, 20 Apr 2020 10:52:37 -0600 Message-ID: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============0847024943519649291==" --===============0847024943519649291== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit DreamHost has given us a "ceph" tenant in their OpenStack environment (DreamCompute). After some stability problems, we moved almost all the workloads to OVH's OpenStack instead a few years ago. I have been using this tenant for some testing recently, and I found that some of our old compute nodes were stuck in "Error" state. I asked DreamHost support to delete these, and they deleted them today. It sounds like we will hit this problem again if we try to use nested virtualization on DreamCompute. This might have been a result of running the ceph-ansible Vagrant tests there a long time ago. On Mon, Apr 20, 2020 at 2:43 AM DreamHost Customer Support Team wrote: > Okay! After much hard work by our DreamCompute engineers (they had to > completely evacuate each of the hypervisors those instances were on of > all other instances so that they could be rebooted), those instances > have been successfully deleted! They also asked me to pass along a > request that you and your team not try to do nested virtualization on > DreamCompute at this time as it currently leads to problems like this > (where a complete hypervisor reboot is required).
- Ken --===============0847024943519649291==-- From dgallowa@redhat.com Mon Apr 20 17:39:08 2020 From: David Galloway To: sepia@ceph.io Subject: [sepia] Re: DreamCompute does not support nested virtualization Date: Mon, 20 Apr 2020 13:38:59 -0400 Message-ID: <7df3c1aa-c33e-9eeb-8293-7ec10eb3aedc@redhat.com> In-Reply-To: CALqRxCzPzk9TKVP3Oo3rnYyHjim6unH=TUF7Xt5C2nAiC4gn_g@mail.gmail.com MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1746598763596590009==" --===============1746598763596590009== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit That explains why those Jenkins slaves locked up and went offline almost immediately after I added them to Jenkins and they ran their first job. On 4/20/20 12:52 PM, Ken Dreyer wrote: > DreamHost has given us a "ceph" tenant in their OpenStack environment > (DreamCompute). After some stability problems, we moved almost all the > workloads to OVH's OpenStack instead a few years ago. > > I have been using this tenant for some testing recently, and I found > that some of our old compute nodes were stuck in "Error" state. I > asked DreamHost support to delete these, and they deleted them today. > > It sounds like we will hit this problem again if we try to use nested > virtualization on DreamCompute. This might have been a result of > running the ceph-ansible Vagrant tests there a long time ago. > > On Mon, Apr 20, 2020 at 2:43 AM DreamHost Customer Support Team wrote: >> Okay! After much hard work by our DreamCompute engineers (they had to >> completely evacuate each of the hypervisors those instances were on of >> all other instances so that they could be rebooted), those instances >> have been successfully deleted! They also asked me to pass along a >> request that you and your team not try to do nested virtualization on >> DreamCompute at this time as it currently leads to problems like this >> (where a complete hypervisor reboot is required). 
If you have any >> additional questions, please let us know. > > - Ken --===============1746598763596590009==-- From kdreyer@redhat.com Mon Apr 20 17:48:26 2020 From: Ken Dreyer To: sepia@ceph.io Subject: [sepia] Re: DreamCompute does not support nested virtualization Date: Mon, 20 Apr 2020 11:48:17 -0600 Message-ID: In-Reply-To: 7df3c1aa-c33e-9eeb-8293-7ec10eb3aedc@redhat.com MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============9057854524412884604==" --===============9057854524412884604== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Right! I asked if they have plans to support nested virt, and they did not have any public info to share about this. - Ken On Mon, Apr 20, 2020 at 11:39 AM David Galloway wrote: > > That explains why those Jenkins slaves locked up and went offline almost > immediately after I added them to Jenkins and they ran their first job. > > On 4/20/20 12:52 PM, Ken Dreyer wrote: > > DreamHost has given us a "ceph" tenant in their OpenStack environment > > (DreamCompute). After some stability problems, we moved almost all the > > workloads to OVH's OpenStack instead a few years ago. > > > > I have been using this tenant for some testing recently, and I found > > that some of our old compute nodes were stuck in "Error" state. I > > asked DreamHost support to delete these, and they deleted them today. > > > > It sounds like we will hit this problem again if we try to use nested > > virtualization on DreamCompute. This might have been a result of > > running the ceph-ansible Vagrant tests there a long time ago. > > > > On Mon, Apr 20, 2020 at 2:43 AM DreamHost Customer Support Team wrote: > >> Okay! After much hard work by our DreamCompute engineers (they had to > >> completely evacuate each of the hypervisors those instances were on of > >> all other instances so that they could be rebooted), those instances > >> have been successfully deleted! 
They also asked me to pass along a > >> request that you and your team not try to do nested virtualization on > >> DreamCompute at this time as it currently leads to problems like this > >> (where a complete hypervisor reboot is required). If you have any > >> additional questions, please let us know. > > > > - Ken > _______________________________________________ > Sepia mailing list -- sepia(a)ceph.io > To unsubscribe send an email to sepia-leave(a)ceph.io > --===============9057854524412884604==-- From dgallowa@redhat.com Wed Apr 22 19:24:21 2020 From: David Galloway To: sepia@ceph.io Subject: [sepia] teuthology VM upgraded to Bionic Date: Wed, 22 Apr 2020 15:24:25 -0400 Message-ID: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1546099332618974071==" --===============1546099332618974071== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Hello all, The teuthology VM has been upgraded to Ubuntu Bionic. Additionally, the teuthology worker processes are using the latest master which means python3. If you run teuthology commands from the teuthology VM, you will likely need to rebootstrap. `cd` to wherever your teuthology checkout is, then:

git checkout master
git pull
rm -rf virtualenv
./bootstrap

And you should be good. Please feel free to ping or e-mail me if you need help with anything and thanks for your patience!
-- David Galloway Systems Administrator, RDU Ceph Engineering IRC: dgalloway --===============1546099332618974071==-- From bhubbard@redhat.com Fri Apr 24 04:59:18 2020 From: Brad Hubbard To: sepia@ceph.io Subject: [sepia] Re: teuthology is now python3 Date: Fri, 24 Apr 2020 14:59:12 +1000 Message-ID: In-Reply-To: CAJE9aOMKiThK3bAzroOEK42ECT5hai1E1j-Ucm7eaS+AbsE-WQ@mail.gmail.com MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============2010713918726578870==" --===============2010713918726578870== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit So we are clear about this: any tests using a branch based on luminous, mimic, nautilus, or octopus should use "--teuthology-branch py2". Anything newer than octopus should be tested against the teuthology master branch (IOW do not use "--teuthology-branch py2"). Hope this is clear. On Tue, Apr 14, 2020 at 5:40 PM kefu chai wrote: > > hi folks, > > we just migrated ceph:teuthology and all tests under qa/ in ceph:ceph > to python3. and from now on, the teuthology-worker runs in a python3 > environment by default unless specified otherwise using > "--teuthology-branch py2". > > which means: > > - we need to write tests in python3 in master now > - teuthology should be python3 compatible. > - teuthology bug fixes should be backported to "py2" branch. > > if you run into any issues related to python3 due to the above > changes, please let me know. and i will try to fix it ASAP. > > currently, the tests under qa/ directories in ceph:ceph master branch > are python2 and python3 compatible. but since we've moved to python3, > there is no need to be python2 compatible anymore. since the sepia lab > is still using ubuntu xenial, we cannot use features offered by > python3.6 at this moment yet. but we do plan to upgrade the OS to > bionic soon. before that happens, the tests need to be compatible with > Python3.5.
> > the next step is to > > - drop python2 support in ceph:ceph master branch, and > - drop python2 support in ceph:teuthology master. > - backport python3 compatible changes to octopus and nautilus to ease > the pain of backport > > -- > Regards > Kefu Chai > _______________________________________________ > Dev mailing list -- dev(a)ceph.io > To unsubscribe send an email to dev-leave(a)ceph.io > -- Cheers, Brad --===============2010713918726578870==-- From gfarnum@redhat.com Fri Apr 24 19:20:41 2020 From: Gregory Farnum To: sepia@ceph.io Subject: [sepia] Re: teuthology is now python3 Date: Fri, 24 Apr 2020 12:20:36 -0700 Message-ID: In-Reply-To: CAF-wwdEma-BLOaw6dRG4MyXruyRNY4A3o27uyWqOTrvD_NYjSQ@mail.gmail.com MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1046317944098203315==" --===============1046317944098203315== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit On Thu, Apr 23, 2020 at 9:59 PM Brad Hubbard wrote: > > So we are clear about this: any tests using a branch based on luminous, > mimic, nautilus, or octopus should use "--teuthology-branch py2". > > Anything newer than octopus should be tested against the teuthology > master branch (IOW do not use "--teuthology-branch py2"). Is there a way we can automate this? I guess the main users of older branches will probably be the nightlies or people using backport scripts, so maybe we're okay just leaving it, but something that checks the included tags and tries to guess or at least warn if you use the wrong one would prevent some errors. (I'm assuming this applies to anything we run with teuthology-schedule, not just direct invocations.) -Greg > > Hope this is clear. > > On Tue, Apr 14, 2020 at 5:40 PM kefu chai wrote: > > > > hi folks, > > > > we just migrated ceph:teuthology and all tests under qa/ in ceph:ceph > > to python3.
and from now on, the teuthology-worker runs in a python3 > > environment by default unless specified otherwise using > > "--teuthology-branch py2". > > > > which means: > > > > - we need to write tests in python3 in master now > > - teuthology should be python3 compatible. > > - teuthology bug fixes should be backported to "py2" branch. > > > > if you run into any issues related to python3 due to the above > > changes, please let me know. and i will try to fix it ASAP. > > > > currently, the tests under qa/ directories in ceph:ceph master branch > > are python2 and python3 compatible. but since we've moved to python3, > > there is no need to be python2 compatible anymore. since the sepia lab > > is still using ubuntu xenial, we cannot use features offered by > > python3.6 at this moment yet. but we do plan to upgrade the OS to > > bionic soon. before that happens, the tests need to be compatible with > > Python3.5. > > > > the next step is to > > > > - drop python2 support in ceph:ceph master branch, and > > - drop python2 support in ceph:teuthology master. 
> > - backport python3 compatible changes to octopus and nautilus to ease > > the pain of backport > > > > -- > > Regards > > Kefu Chai > > _______________________________________________ > > Dev mailing list -- dev(a)ceph.io > > To unsubscribe send an email to dev-leave(a)ceph.io > > > > > -- > Cheers, > Brad > _______________________________________________ > Dev mailing list -- dev(a)ceph.io > To unsubscribe send an email to dev-leave(a)ceph.io > --===============1046317944098203315==-- From dgallowa@redhat.com Tue Apr 28 14:19:25 2020 From: David Galloway To: sepia@ceph.io Subject: [sepia] Ubuntu Focal Fossa Date: Tue, 28 Apr 2020 10:19:13 -0400 Message-ID: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4705205526927715042==" --===============4705205526927715042== Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Hey all, I'm happy to report we are ready for the latest Ubuntu LTS release, 20.04 aka "Focal Fossa". We are already building octopus and master branches on it and all our CI/lab infra is ready. This includes Jenkins, shaman, chacra, teuthology, and FOG. https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F20.04 `teuthology-lock --lock-many 1 --machine-type smithi --os-type ubuntu --os-version 20.04` will reimage a smithi with Focal Fossa. -- David Galloway Systems Administrator, RDU Ceph Engineering IRC: dgalloway --===============4705205526927715042==--
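(Picking up Greg's question earlier in the thread about automating the py2-vs-master teuthology branch choice: Brad's rule of thumb is mechanical enough to script. The helper below is a hypothetical sketch, not part of teuthology's actual CLI; a wrapper around teuthology-schedule could call something like it to warn before scheduling.)

```shell
# Hypothetical helper implementing Brad's rule: luminous/mimic/nautilus/
# octopus-based suites need the "py2" teuthology branch, anything newer
# uses master.
teuthology_branch_for() {
    case "$1" in
        luminous*|mimic*|nautilus*|octopus*) echo "py2" ;;
        *) echo "master" ;;
    esac
}

teuthology_branch_for nautilus        # -> py2
teuthology_branch_for octopus-fixup   # -> py2
teuthology_branch_for master          # -> master
```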