The teuthology file system is where results are stored from QA runs.
You don't _have_ to log in to the teuthology VM to access these test
artifacts. In fact, you can mount this file system from your laptop
(with Sepia VPN access) or from a dev machine (like vossi [1] or senta [2]).
For example:
pdonnell@vossi04 $ grep teuthology < /etc/fstab
172.21.2.201,172.21.2.202,172.21.6.108:/teuthology-archive /teuthology ceph name=teuthology-ro,secret=<redacted>,mds_namespace=teuthology,_netdev 0 2
This client.teuthology-ro credential can only read the file system. To
get the secret, log in to vossi04.front.sepia.ceph.com to read the
unredacted /etc/fstab or email me directly.
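For a one-off mount without editing /etc/fstab, the same options work
on the command line. A minimal sketch, assuming the kernel CephFS
client (mount.ceph) and substituting the real key for <secret>:

$ sudo mkdir -p /teuthology
$ sudo mount -t ceph 172.21.2.201,172.21.2.202,172.21.6.108:/teuthology-archive \
      /teuthology -o name=teuthology-ro,secret=<secret>,mds_namespace=teuthology

If you'd rather keep the key out of your shell history, put it in a
file and pass secretfile=/path/to/teuthology-ro.secret instead of
secret=<secret>.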
You may ask: why? Because it's usually much faster to access the file
system from another machine. The teuthology VM is often under heavy
load and memory pressure, so the file system cache is cold. Looking at
multi-GB test artifacts also uses up significant memory that's
primarily earmarked for running the teuthology workers.
[1] https://wiki.sepia.ceph.com/doku.php?id=hardware:vossi
[2] https://wiki.sepia.ceph.com/doku.php?id=hardware:senta
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
https://pulpito.ceph.com/?status=queued
Have the nightlies been cut down too much? The weekend queue is de
facto empty (not that I'm complaining, for selfish reasons).
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
Something happened around 2 PM PST today, and a bunch of services
stopped because they lost access to their underlying storage, which is
provided by an iSCSI service on a Ceph cluster. I restarted the iSCSI
containers and things went back to normal. It was probably down for
about three hours.
Sorry for the interruption; tests will probably need to be rescheduled.
I believe we've finally captured a new workable centos8.stream image for
testing, with the proper setup for mirrorlists/repos. Please let me
know if you can or can't use it in your testing.
Working on making sure the automated capture of our other testing
releases is also fixed.
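For reference only (not necessarily the exact steps baked into the new
image): the documented commands for moving a CentOS Linux 8 host onto
the Stream repos, which replace the old mirrorlist entries, are:

$ sudo dnf swap centos-linux-repos centos-stream-repos
$ sudo dnf distro-sync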
GitHub changed their public SSH host key on Friday, and it's been quite
the hunt tracking down all the ways that has affected our build
machines. I'm continuing to monitor, but I hope the last tweak (about
half an hour ago) has resolved the issue. Let me know if you see
builds started after about noon PST that are still failing because of a
host key mismatch for github.com.
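If you hit the same mismatch on your own machine, the usual fix is to
drop the stale github.com entry and fetch the current keys, then check
the fingerprints against GitHub's published values before trusting them:

$ ssh-keygen -R github.com
$ ssh-keyscan -t rsa,ecdsa,ed25519 github.com >> ~/.ssh/known_hosts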
This Sunday (22/03) at 13:00 GMT+2 I will upgrade
https://2.jenkins.ceph.com/ to the latest Jenkins version. It should
take around 30 minutes to complete and bring the instance back online.
Thanks
--
Adam Kraitman
Systems Administrator
Ceph Engineering
IRC: akraitma
Hey, next Sunday I plan to migrate tracker.ceph.com to a stronger
instance. The entire process should take approximately 2 hours, during
which the tracker will be unavailable.
--
Adam Kraitman
Systems Administrator
Ceph Engineering
IRC: akraitma
Both instances are running in the OVH cloud, so we're checking whether
it's related to their network maintenance:
https://network.status-ovhcloud.com/
--
Adam Kraitman
Systems Administrator
Ceph Engineering
IRC: akraitma