Question about HA here: I understood the documentation of the FUSE NFS client such that the connection state of all NFS clients is stored in RADOS objects on Ceph and that, when a floating IP is used, the NFS clients should simply recover from a short network timeout.
I'm not sure whether this is what should happen with the specific HA set-up in the original request, but a fail-over of the NFS server ought to be handled gracefully by starting a new one with the IP of the failed one. Or not?
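For reference, the client recovery state mentioned above is what Ganesha's RADOS recovery backend keeps in RADOS objects. A minimal sketch of the relevant ganesha.conf fragment; the pool, namespace and nodeid values are placeholders, cephadm generates its own when it deploys an NFS service:

```
# Sketch: enable RADOS-backed client recovery state in NFS-Ganesha.
# Pool/namespace/nodeid values here are assumptions for illustration.
NFSv4 {
    RecoveryBackend = rados_cluster;
}

RADOS_KV {
    # Pool and namespace holding the recovery/grace objects
    pool = ".nfs";
    namespace = "mycluster";
    nodeid = "nfs.mycluster.0";
}
```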
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Eugen Block <eblock(a)nde.ag>
Sent: Tuesday, April 16, 2024 11:24 AM
To: ceph-users(a)ceph.io
Subject: [ceph-users] Re: Have a problem with haproxy/keepalived/ganesha/docker
Ah, okay, thanks for the hint. In that case what I see is expected.
Quoting Robert Sander <r.sander(a)heinlein-support.de>:
Hi,
On 16.04.24 10:49, Eugen Block wrote:
I believe I can confirm your suspicion. I have a test cluster on
Reef 18.2.1 and deployed NFS without HAProxy but with keepalived [1].
Stopping the active NFS daemon doesn't trigger anything; the MGR
eventually notices that it has stopped, but nothing else seems to
happen.
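For context, a keepalived-only NFS deployment like the one referenced in [1] uses cephadm service specs roughly like the following; service ids, placement and the virtual IP are placeholders, not taken from the thread:

```yaml
# Sketch: cephadm specs for an NFS cluster with keepalived-only
# ingress (keepalive_only: true skips haproxy). All ids/IPs are
# placeholders for illustration.
service_type: nfs
service_id: mynfs
placement:
  count: 1
spec:
  virtual_ip: 192.168.122.100
---
service_type: ingress
service_id: nfs.mynfs
placement:
  count: 1
spec:
  backend_service: nfs.mynfs
  virtual_ip: 192.168.122.100/24
  keepalive_only: true
```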
There is currently no failover for NFS.
The ingress service (haproxy + keepalived) that cephadm deploys for
an NFS cluster does not have a health check configured. Haproxy does
not notice if a backend NFS server dies. This does not matter, as
there is no failover and the NFS client cannot be "load balanced" to
another backend NFS server.
Currently there is no point in configuring an ingress service, since there is no failover.
The NFS clients have to remount the NFS share anyway if their
current NFS server dies.
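The remount described above would look roughly like this on a client; the virtual IP, export path and mount point are placeholders, not from the thread:

```shell
# Sketch of a manual client-side recovery after the NFS server dies.
# VIP, export path and mount point are assumptions for illustration.

# Force-unmount the stale share, falling back to a lazy unmount:
umount -f /mnt/nfs || umount -l /mnt/nfs

# Mount again via the (re-assigned) virtual IP:
mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.122.100:/mynfs /mnt/nfs
```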
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io