I don't know who uses or owns it. The wiki page is not
always up to date:
https://wiki.sepia.ceph.com/doku.php?id=hardware:senta
e.g.
https://wiki.sepia.ceph.com/doku.php?id=hardware:vossi
indicates vossi01 is owned by Greg Farnum but my understanding is that
he relinquished it for sepia core services.
Since no one protested, and no one even seems aware the system is
offline, I think we should reimage it with CentOS Stream 9 and
announce it's available again.
On Wed, Aug 30, 2023 at 9:16 AM Adam Kraitman <akraitma(a)redhat.com> wrote:
>
> Hey Patrick, can I reimage senta03, or is there anything we need to back up first?
>
> On Thu, Aug 24, 2023 at 8:15 PM Patrick Donnelly <pdonnell(a)redhat.com> wrote:
>>
>> Is it dead? I can't ssh to it.
>>
>> --
>> Patrick Donnelly, Ph.D.
>> He / Him / His
>> Red Hat Partner Engineer
>> IBM, Inc.
>> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>> _______________________________________________
>> Sepia mailing list -- sepia(a)ceph.io
>> To unsubscribe send an email to sepia-leave(a)ceph.io
>>
We were forced to reboot teuthology today due to an odd error with
cephfs. All smithi nodes are allocated, so existing runs will have to
be killed and their nodes nuked before more tests can run.
I'm planning to shut down teuthology briefly this evening to upgrade its
OS (and, as a consequence, its Python interpreter). I'll start around
1800 GMT-8. I expect no problems, and it should take around an hour.
Let me know if there's anything you can't tolerate losing during that
window.
https://wiki.sepia.ceph.com/doku.php?id=hardware:vossi
The assignees of these systems seem out of date. I thought vossi01 was
repurposed for some upstream service?
Hi folks,
As many of you are aware, RAM and CPU are particularly scarce on
teuthology.front.sepia.ceph.com, which means that viewing QA logs
there can cause swapping and general slowness. At today's Ceph
Infrastructure Weekly call [1], I proposed allowing only root,
www-data, and teuthworker to access /teuthology, so that everyone else
is pushed to view logs on beefier development machines such as
senta [2], vossi [3], your personal workstation [4], or even a
temporarily locked test node [5].
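In Unix permission terms, the proposal amounts to something like the
sketch below. This is purely illustrative and is not the actual to-do
list from the call: the group name "teuthworker", the use of an ACL to
grant www-data access, and the mode bits are all assumptions, and the
commands are demonstrated on a scratch directory standing in for
/teuthology.

```shell
# Hypothetical sketch of restricting a directory to root, one group, and
# one extra user. A temp directory stands in for /teuthology so this can
# be run safely anywhere.
dir=$(mktemp -d)

# Stand-in for: chgrp teuthworker /teuthology
chgrp "$(id -gn)" "$dir"

# Owner and group retain access; everyone else is denied.
chmod 750 "$dir"

# On the real box, www-data could be granted read access via an ACL:
#   setfacl -m u:www-data:rx /teuthology

stat -c '%a' "$dir"   # prints 750

rmdir "$dir"
```

The same effect could instead be achieved by adding www-data to the
teuthworker group; which approach the admins take would be decided by
the to-do items in the minutes.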
The current plan is to proceed barring some reasonable justification
not to. There are a few to-do items to make it happen laid out in the
minutes.
Until then, the teuthology admins would appreciate it if you start
moving your log viewing to other machines now, without waiting for a
technical barrier to be put in place.
[1] https://pad.ceph.com/p/ceph-infra-weekly
[2] https://wiki.sepia.ceph.com/doku.php?id=hardware:senta
[3] https://wiki.sepia.ceph.com/doku.php?id=hardware:vossi
[4] https://wiki.sepia.ceph.com/doku.php?id=services:cephfs
[5] https://wiki.sepia.ceph.com/doku.php?id=testnodeaccess