Hi there,
I think we have our OSD nodes set up with vm.swappiness = 0.
If I remember correctly, the behavior of vm.swappiness = 0 changed a few years ago: it no longer prevents swapping entirely, it just reduces the chance of memory being sent to swap.
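For reference, a quick sketch of how to inspect and tune the knob (the value 10 mirrors what the quoted mail below reports; the drop-in filename is just a conventional choice):

```shell
# Show the currently effective value
cat /proc/sys/vm/swappiness

# Change it at runtime (does not survive a reboot)
sysctl vm.swappiness=10

# Persist the setting across reboots via a sysctl drop-in
echo 'vm.swappiness = 10' > /etc/sysctl.d/90-swappiness.conf
sysctl --system
```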
Cheers,
Xavier.
-----Original Message-----
From: Götz Reinicke <goetz.reinicke@filmakademie.de>
Sent: Friday, December 6, 2019 8:14
To: ceph-users <ceph-users@ceph.com>
Subject: [ceph-users] High swap usage on one replication node
Hi,
our Ceph 14.2.3 cluster has so far run smoothly with replicated and EC pools, but for a couple of days one of the dedicated replication nodes has been consuming up to 99% of its swap and staying at that level. The other two replicated nodes use roughly 50-60% of swap.
All the 24 NVMe OSDs per node are BlueStore with default settings, 128GB RAM. The vm.swappiness is set to 10.
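With default BlueStore settings on Nautilus, each OSD targets about 4 GiB of memory (osd_memory_target), so 24 OSDs alone approach the 128 GB of RAM before the page cache and other daemons are counted; that can push cold pages into swap. A hedged sketch of how to check what the OSDs are actually configured with (osd.0 is a placeholder id):

```shell
# Value stored in the cluster configuration database for this OSD
ceph config get osd.0 osd_memory_target

# Value the running daemon actually has in effect
ceph config show osd.0 osd_memory_target
```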
Do you have any suggestions how to handle/reduce the swap usage?
Thanks for feedback and regards, Götz
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io