Hello,

I would suggest the following:

~# swapoff -a 
~# vi /etc/fstab
... remove or comment out the swap line ...
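
If you prefer a non-interactive edit, here is a minimal sketch (assuming a typical /etc/fstab entry with "swap" as the filesystem type field; check your file first, the pattern is just an illustration):

~# sed -i.bak '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab    # comment out active swap lines, keep a .bak backup
~# swapon -s                                           # verify no swap devices remain active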

Also, buy additional RAM if required. Without knowing your exact use case, 128 GB would be our minimum RAM for simple setups, and most likely not enough for EC and complex configurations.

Swap is nothing you want on a server, as it is very slow and can cause long downtimes.

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Sat, Dec 7, 2019 at 12:34, Xavier Trilla <xavier.trilla@clouding.io> wrote:
Hi there,

I think we have our OSD nodes set up with vm.swappiness = 0.

If I remember correctly, the behavior of vm.swappiness = 0 was changed a few years ago: it no longer prevents swapping, it just reduces the chances of memory being sent to swap.
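
For reference, the standard sysctl workflow to inspect and set it (a minimal sketch; the sysctl.d file name below is arbitrary):

~# sysctl vm.swappiness                                          # show the current value
~# sysctl -w vm.swappiness=0                                     # apply until the next reboot
~# echo 'vm.swappiness = 0' > /etc/sysctl.d/99-swappiness.conf   # persist across reboots
~# sysctl --system                                               # reload all sysctl configuration files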

Cheers,
Xavier.
-----Original Message-----
From: Götz Reinicke <goetz.reinicke@filmakademie.de>
Sent: Friday, December 6, 2019 8:14
To: ceph-users <ceph-users@ceph.com>
Subject: [ceph-users] High swap usage on one replication node

Hi,

our Ceph 14.2.3 cluster has so far been running smoothly with replicated and EC pools, but for a couple of days now one of the dedicated replication nodes has been consuming up to 99% swap and staying at that level. The other two replication nodes use roughly 50-60% of swap.

All 24 NVMe OSDs per node are BlueStore with default settings, and each node has 128 GB RAM. vm.swappiness is set to 10.
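
For reference, a quick way to check the relevant values on a node (a minimal sketch; osd.0 is just an example id, and the ceph daemon command has to run on the host with that OSD's admin socket):

~# ceph daemon osd.0 config get osd_memory_target   # BlueStore's per-OSD memory target, default ~4 GiB on 14.2.x
~# sysctl vm.swappiness                             # confirm the current swappiness
~# free -h                                          # overall RAM and swap usage

Note that osd_memory_target is a target rather than a hard limit, and with the 4 GiB default, 24 OSDs can already claim around 96 GiB of the 128 GB.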

Do you have any suggestions on how to handle/reduce the swap usage?

        Thanks for feedback and regards, Götz
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io