Hi,
> As I read the documentation[1] the "count: 1" handles that so what I
> have is a placement pool from which only one is selected for
> deployment?
You're probably right: when using your example command with multiple
hosts, it automatically sets "count:1" (don't mind the hostnames, it's
an upgraded cluster currently running 18.2.1):
# ceph orch ls nfs
NAME          PORTS   RUNNING  REFRESHED  AGE  PLACEMENT
nfs.nfs-reef  ?:2049  1/1      114s ago   3m   nautilus;nautilus2;nautilus3;count:1

# ceph orch ls ingress
NAME                  PORTS                 RUNNING  REFRESHED  AGE  PLACEMENT
ingress.nfs.nfs-reef  192.168.168.114:9049  1/1      2m ago     4m   nautilus;nautilus2;nautilus3;count:1
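
By the way, you can double-check what specs the orchestrator actually
generated by exporting them (same commands with --export, substitute
your own service names):

# ceph orch ls nfs --export
# ceph orch ls ingress --export

With "count: 1" plus a host list the placement really is a pool, as you
describe: cephadm picks one of the listed hosts to run the single daemon.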
So it's not really clear what happened to your ingress service. :-)
But at least it works now, so that's good.
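
If the virtual IP ever goes missing again, it should be possible to
restore it by applying an ingress spec by hand. Here's a rough sketch
based on the keepalive-only example in the docs, adjusted to your
hostnames and VIP (I haven't tested this exact file, and monitor_port
is just an example value):

service_type: ingress
service_id: nfs.jumbo          # must match the backend nfs service id
placement:
  count: 1                     # keepalive-only allows only one instance
  hosts:
  - ceph-flash1
  - ceph-flash2
  - ceph-flash3
spec:
  backend_service: nfs.jumbo
  monitor_port: 9049           # example value, adjust as needed
  virtual_ip: 172.21.15.74/22
  keepalive_only: true         # no haproxy, nfs binds to the VIP directly

Saved to a file and applied with 'ceph orch apply -i ingress.yaml',
that should recreate the ingress.nfs.jumbo service and bring the VIP back.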
Quoting Torkil Svensgaard <torkil(a)drcmr.dk>:
> On 31/01/2024 09:36, Eugen Block wrote:
>> Hi,
>>
>> if I understand this correctly, with the "keepalive-only" option
>> only one ganesha instance is supposed to be deployed:
>>
>>> If a user additionally supplies --ingress-mode keepalive-only a
>>> partial ingress service will be deployed that still provides a
>>> virtual IP, but has nfs directly binding to that virtual IP and
>>> leaves out any sort of load balancing or traffic redirection. This
>>> setup will restrict users to deploying only 1 nfs daemon as
>>> multiple cannot bind to the same port on the virtual IP.
>>
>> Maybe that's why it disappeared as you have 3 hosts in the
>> placement parameter? Is the ingress service still present in 'ceph
>> orch ls'?
>
> As I read the documentation[1] the "count: 1" handles that so what I
> have is a placement pool from which only one is selected for
> deployment?
>
> The absence of the ingress service is puzzling me, as it worked just
> fine prior to the upgrade and the upgrade shouldn't have touched the
> service spec in any way?
>
> Mvh.
>
> Torkil
>
> [1]
>
https://docs.ceph.com/en/latest/cephadm/services/nfs/#nfs-with-virtual-ip-b…
>
>
>> Regards,
>> Eugen
>>
>> Quoting Torkil Svensgaard <torkil(a)drcmr.dk>:
>>
>>> Hi
>>>
>>> Last week we created an NFS service like this:
>>>
>>> "
>>> ceph nfs cluster create jumbo
>>> "ceph-flash1,ceph-flash2,ceph-flash3" --ingress --virtual_ip
>>> 172.21.15.74/22 --ingress-mode keepalive-only
>>> "
>>>
>>> Worked like a charm. Yesterday we upgraded from 17.2.7 to 18.2.0
>>> and the NFS virtual IP seems to have gone missing in the process:
>>>
>>> "
>>> # ceph nfs cluster info jumbo
>>> {
>>>   "jumbo": {
>>>     "backend": [
>>>       {
>>>         "hostname": "ceph-flash1",
>>>         "ip": "172.21.15.148",
>>>         "port": 2049
>>>       }
>>>     ],
>>>     "virtual_ip": null
>>>   }
>>> }
>>> "
>>>
>>> Service spec:
>>>
>>> "
>>> service_type: nfs
>>> service_id: jumbo
>>> service_name: nfs.jumbo
>>> placement:
>>>   count: 1
>>>   hosts:
>>>   - ceph-flash1
>>>   - ceph-flash2
>>>   - ceph-flash3
>>> spec:
>>>   port: 2049
>>>   virtual_ip: 172.21.15.74
>>> "
>>>
>>> I've tried restarting the nfs.jumbo service, which didn't help.
>>> Suggestions?
>>>
>>> Mvh.
>>>
>>> Torkil
>>>
>>> --
>>> Torkil Svensgaard
>>> Sysadmin
>>> MR-Forskningssektionen, afs. 714
>>> DRCMR, Danish Research Centre for Magnetic Resonance
>>> Hvidovre Hospital
>>> Kettegård Allé 30
>>> DK-2650 Hvidovre
>>> Denmark
>>> Tel: +45 386 22828
>>> E-mail: torkil(a)drcmr.dk
>>> _______________________________________________
>>> ceph-users mailing list -- ceph-users(a)ceph.io
>>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>>
>>
>> _______________________________________________
>> ceph-users mailing list -- ceph-users(a)ceph.io
>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
> --
> Torkil Svensgaard
> Sysadmin
> MR-Forskningssektionen, afs. 714
> DRCMR, Danish Research Centre for Magnetic Resonance
> Hvidovre Hospital
> Kettegård Allé 30
> DK-2650 Hvidovre
> Denmark
> Tel: +45 386 22828
> E-mail: torkil(a)drcmr.dk
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io