hi folks,
we just migrated ceph:teuthology and all tests under qa/ in ceph:ceph
to python3. from now on, teuthology-worker runs in a python3
environment by default, unless otherwise specified with
"--teuthology-branch py2".
which means:
- we need to write tests in python3 in master now,
- teuthology should stay python3 compatible, and
- teuthology bug fixes should be backported to the "py2" branch.
if you run into any issues related to python3 due to the above
changes, please let me know, and i will try to fix them ASAP.
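for anyone updating tests, the changes involved are mostly mechanical. a
hypothetical sketch (the role/host names here are made up, not from any
real qa task) of the most common py2-to-py3 edits:

```python
# Hypothetical sketch of typical py2 -> py3 changes in a qa-style test;
# the role/host names are invented for illustration.
roles = {"mon.a": "smithi001", "osd.0": "smithi002"}

# python2 style, no longer valid in python3:
#   for role, host in roles.iteritems():
#       print role, host

# python3 style: dict.items() and print() as a function
for role, host in sorted(roles.items()):
    print("{} -> {}".format(role, host))
```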
currently, the tests under the qa/ directory in the ceph:ceph master
branch are both python2 and python3 compatible. but since we've moved
to python3, there is no need to keep python2 compatibility anymore.
since the sepia lab is still using ubuntu xenial, we cannot use
features offered by python3.6 yet. we do plan to upgrade the OS to
bionic soon; until that happens, the tests need to be compatible with
python3.5.
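concretely, the python3.5 constraint mainly means avoiding syntax added
in python3.6, f-strings being the most common example. a hypothetical
sketch (host/port values are made up):

```python
# Hypothetical sketch: qa tests must stay python3.5 compatible until the
# lab moves off xenial, so python3.6-only syntax like f-strings is out.
host = "smithi001"
port = 22

# python3.6+ only (avoid for now):
#   msg = f"connecting to {host}:{port}"

# python3.5-compatible alternative:
msg = "connecting to {}:{}".format(host, port)
print(msg)
```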
the next steps are to:
- drop python2 support in the ceph:ceph master branch,
- drop python2 support in ceph:teuthology master, and
- backport python3-compatible changes to octopus and nautilus to ease
the pain of future backports.
--
Regards
Kefu Chai
Hey all,
I'm happy to report we are ready for the latest Ubuntu LTS release,
20.04, aka "Focal Fossa".
We are already building the octopus and master branches on it, and all
our CI/lab infrastructure is ready. This includes Jenkins, shaman,
chacra, teuthology, and FOG.
https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=defaul…
`teuthology-lock --lock-many 1 --machine-type smithi --os-type ubuntu
--os-version 20.04` will reimage a smithi with Focal Fossa.
--
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway
Hello all,
The teuthology VM has been upgraded to Ubuntu Bionic. Additionally, the
teuthology worker processes are now running the latest master, which
means python3.
If you run teuthology commands from the teuthology VM, you will likely
need to re-bootstrap. `cd` to wherever your teuthology checkout is,
then:
  git checkout master
  git pull
  rm -rf virtualenv
  ./bootstrap
And you should be good.
Please feel free to ping or e-mail me if you need help with anything and
thanks for your patience!
--
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway
DreamHost has given us a "ceph" tenant in their OpenStack environment
(DreamCompute). After some stability problems, we moved almost all the
workloads to OVH's OpenStack instead a few years ago.
I have been using this tenant for some testing recently, and I found
that some of our old compute nodes were stuck in an "Error" state; this
might have been a result of running the ceph-ansible Vagrant tests
there a long time ago. I asked DreamHost support to delete these nodes,
and they deleted them today. It sounds like we will hit this problem
again if we try to use nested virtualization on DreamCompute.
On Mon, Apr 20, 2020 at 2:43 AM DreamHost Customer Support Team wrote:
> Okay! After much hard work by our DreamCompute engineers (they had to
> completely evacuate each of the hypervisors those instances were on of
> all other instances so that they could be rebooted), those instances
> have been successfully deleted! They also asked me to pass along a
> request that you and your team not try to do nested virtualization on
> DreamCompute at this time as it currently leads to problems like this
> (where a complete hypervisor reboot is required). If you have any
> additional questions, please let us know.
- Ken
Hey all,
As the list of active Sepia users grows, the free space on the
teuthology VM continues to decrease. Instead of continuing to e-mail
the users with the largest home dirs, I'm going to increase the size of
the teuthology VM's disk.
This will require about a 12-hour outage. I'll need to instruct the
teuthology workers to die after their running jobs, shut the VM down,
grow the disk, and start everything back up.
If there are no objections, I can start this Thursday night at midnight
Eastern time and hopefully do the maintenance in the morning.
Are there any pressing releases or a need to keep things running, or
does this work for everyone?
--
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway
RHEL 7.8 went GA on March 31 and is available in the Sepia lab now. We
have FOG images for smithi and mira.
--
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway