Hi Christoph,
Can you send me the exact ceph config set ... command you used and/or the
output of ceph config dump?
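
For reference, a minimal sketch of the commands I have in mind, assuming you
set the target cluster-wide for the whole osd class (osd.0 below is just a
placeholder for one of your daemons):

  # set a 1 GiB memory target for all OSDs in the central config database
  ceph config set osd osd_memory_target 1073741824

  # dump everything stored in the central config database
  ceph config dump

  # on the host running osd.0: ask the daemon which value it is actually using
  ceph daemon osd.0 config get osd_memory_target

If the daemon reports a different value than the dump shows, the setting may
have been overridden locally, e.g. in that host's ceph.conf.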
Regards, Joachim
Clyso GmbH
Homepage: https://www.clyso.com
On 05.05.2021 at 16:30, Christoph Adomeit wrote:
> I manage a historical cluster of several Ceph nodes, each with 128 GB of RAM
> and 36 OSDs of 8 TB each.
>
> The cluster is just for archival purposes, and performance is not so important.
>
> The cluster was running fine for a long time on Ceph Luminous.
>
> Last week I updated it to Debian 10 and Ceph Nautilus.
>
> Now I can see that the memory usage of each OSD slowly grows to 4 GB, and once
> the system has no memory left, it oom-kills processes.
>
> I have already configured osd_memory_target = 1073741824.
> This helps for some hours, but then memory usage grows from 1 GB back to 4 GB per OSD.
>
> Any ideas what I can do to further limit OSD memory usage?
>
> It would be good to keep this hardware running for some more time without
> upgrading the RAM on all the OSD machines.
>
> Any ideas?
>
> Thanks
> Christoph