Short summary: end your branch name with "-debug". See:
https://github.com/ceph/ceph-build/pull/2167
and the integration branch helper script change:
https://github.com/ceph/ceph/pull/53855
The benefit of doing this is that mutex debugging is enabled, many
compiler checks are turned on, and some optimizations are disabled
(potentially making some debugging easier). One known drawback is that
execution may be slower.
See also:
https://github.com/ceph/ceph-build/pull/2167#issuecomment-1751033910
There are build failures for CentOS 8 for which I will file tickets soon.
See also:
https://shaman.ceph.com/builds/ceph/wip-batrick-testing-20231006.014828-deb…
If this does not create a lot of fallout in QA suite testing, it may
be turned on by default, without needing the "-debug" suffix on branch
names. I encourage QA testers to give this a try so any issues can be
shaken out.
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
At the infrastructure meeting today, we decided on a course of action
for migrating the existing /home directory to CephFS. This is being
done for a few reasons:
- Alleviate load on the root file system device (which is also hosted
on the LRC via iSCSI)
- Avoid the disk-full scenarios we've regularly hit
- Make recovery easier in the event of teuthology corruption or catastrophe
- Provide generally much faster I/O
- Possibly serve as a home file system on other sepia resources
To effect this:
- The new "home" CephFS file system is mounted at /cephfs/home
- Each user's home /home/$USER has been or will be (again) rsync'd to
/cephfs/home/$USER
- Each user's account home directory (in /etc/passwd) is being updated
to /cephfs/home/$USER
- Each user's old home /home/$USER will be archived to /home/.archive/$USER
- A symlink will be placed in /home/$USER pointing to
/cephfs/home/$USER for compatibility with existing
(mis-)configurations.
The main reason for not simply mounting the new file system at /home
is to allow administrators continued access to teuthology in the event
of a Ceph(FS) outage.
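For the curious, the per-user procedure above amounts to roughly the
following. This is a simplified sketch, not the actual migration
script; the error handling and the exact rsync/usermod flags are my
own guesses:

    # Simplified sketch of the per-user migration steps; not the
    # actual script used in the sepia lab.
    import os
    import subprocess

    def migrate_home(user: str) -> None:
        old = f"/home/{user}"
        new = f"/cephfs/home/{user}"

        # Final rsync of the (already mostly synced) home directory.
        subprocess.run(["rsync", "-aHAX", old + "/", new + "/"], check=True)

        # Point the account's home at CephFS (updates /etc/passwd).
        subprocess.run(["usermod", "-d", new, user], check=True)

        # Archive the old copy rather than deleting it.
        os.makedirs("/home/.archive", exist_ok=True)
        os.rename(old, f"/home/.archive/{user}")

        # Compatibility symlink for anything hard-coding /home/$USER.
        os.symlink(new, old)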
Most home directories were already rsync'd as of two weeks ago. A
final rsync will be performed prior to each user's final migration.
In order to update a user's home directory, the user must be logged
out. Generally no action needs to be taken, but I may kindly ask you
to log out of teuthology if necessary.
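(The "logged out" requirement is just a process check; something like
the following, again illustrative only:

    import subprocess

    def is_logged_out(user: str) -> bool:
        # pgrep exits non-zero when the user owns no processes.
        return subprocess.run(["pgrep", "-u", user],
                              stdout=subprocess.DEVNULL).returncode != 0

This avoids racing with processes that still have files open in the
old home directory.)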
Thanks to Laura Flores, Venky Shankar, Yuri Weinstein, and Leonid Usov
for volunteering as guinea pigs for my early testing. They have
already been migrated. The rest of the users will be migrated
incrementally over the next few days.
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
I don't know who uses it, who owns it, or anything else about it. The
wiki page is not always up to date:
https://wiki.sepia.ceph.com/doku.php?id=hardware:senta
e.g.
https://wiki.sepia.ceph.com/doku.php?id=hardware:vossi
indicates vossi01 is owned by Greg Farnum but my understanding is that
he relinquished it for sepia core services.
Since no one protested, or is even aware the system is offline, I
think we should reimage it with CentOS 9 Stream and announce that it's
available again.
On Wed, Aug 30, 2023 at 9:16 AM Adam Kraitman <akraitma(a)redhat.com> wrote:
>
> Hey Patrick, can I reimage senta03, or is there anything we need to back up first?
>
> On Thu, Aug 24, 2023 at 8:15 PM Patrick Donnelly <pdonnell(a)redhat.com> wrote:
>>
>> Is it dead? I can't ssh to it.
>>
>> --
>> Patrick Donnelly, Ph.D.
>> He / Him / His
>> Red Hat Partner Engineer
>> IBM, Inc.
>> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
We were forced to reboot teuthology today due to an odd error with
CephFS. All smithis are allocated, so runs will have to be killed and
nodes nuked to let more tests run.
I'm planning to shut down teuthology briefly this evening to upgrade its
OS (and its Python interpreter as a consequence). I'll start around
1800 GMT-8. I expect no problems and that it will take around an hour.
Let me know if there's something you can't tolerate losing during that
period of time.
https://wiki.sepia.ceph.com/doku.php?id=hardware:vossi
The assignees of these systems seem out of date. I thought vossi01 was
repurposed for some upstream service?
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
I'm going to set Jenkins to stop new builds, and when it quiesces,
restart it after upgrading Jenkins and some plugins. It's pretty quiet
right now. I'll send more email when it's back up.
If you are a Ceph developer, you should familiarize yourself with this
core infrastructure service [1]. Included are instructions for
mounting these CephFS file systems on your development machines.
[1] https://wiki.sepia.ceph.com/doku.php?id=services:cephfs
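As a taste of what the wiki covers, a kernel-client mount boils down
to something like the sketch below. The monitor address, client name,
and secret path here are placeholders; see the wiki page for the real
sepia-specific values:

    # Illustrative only -- consult the wiki page above for the actual
    # sepia monitor addresses, client names, and keyrings.
    import subprocess

    def mount_cephfs(mon: str, client: str, secretfile: str,
                     mountpoint: str = "/mnt/cephfs") -> None:
        # Kernel CephFS mount; needs root and the ceph kernel module.
        subprocess.run(
            ["mount", "-t", "ceph", f"{mon}:/", mountpoint,
             "-o", f"name={client},secretfile={secretfile}"],
            check=True,
        )

    # e.g. mount_cephfs("mon.example:6789", "sepian", "/etc/ceph/secret")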
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D