Hi,
our Ceph 14.2.3 cluster has been running smoothly with replicated and EC pools, but for
the past few days one of the dedicated replication nodes has been consuming up to 99% of
its swap and staying at that level. The other two replication nodes use roughly 50-60% of swap.
Each node has 24 NVMe OSDs, all BlueStore with default settings, and 128 GB RAM.
vm.swappiness is set to 10.
Do you have any suggestions on how to handle or reduce the swap usage?
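In case it helps narrow things down, this is a sketch of how per-process swap usage can be
inspected on the affected node (it assumes a standard Linux /proc; the VmSwap field in
/proc/<pid>/status is reported in KiB):

```shell
#!/bin/sh
# List the top swap consumers by reading VmSwap from /proc/<pid>/status.
# Kernel threads have no VmSwap line and are skipped automatically.
for status in /proc/[0-9]*/status; do
    awk '/^VmSwap:/ {swap = $2}
         /^Name:/  {name = $2}
         END {if (swap > 0) print swap, "KiB", name}' "$status" 2>/dev/null
done | sort -rn | head -n 10
```

If the ceph-osd daemons dominate that list, it may be worth looking at their memory
targets rather than at swap settings alone.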
Thanks for any feedback, and regards,
Götz