Hi,
We have a 3-site Ceph cluster and would like to create a 4+2 EC pool
with 2 chunks per datacenter, to maximise resilience in case one
datacenter goes down. I have not found a way to create an EC profile
with this 2-level allocation strategy. I created an EC profile with a
failure domain = datacenter, but it doesn't work: I guess it wants to
ensure there are always 5 OSDs up (so that the pool remains R/W),
whereas with a failure domain = datacenter the guarantee is only 4.
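For reference, I created the profile with something like this (the
profile name is just an example):

  ceph osd erasure-code-profile set ec42dc k=4 m=2 \
      crush-failure-domain=datacenter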
My idea was to create a 2-step allocation with failure domain = host
to achieve the desired configuration, with something like the
following in the crushmap rule:
step choose indep 3 type datacenter
step chooseleaf indep x type host
step emit
Is it the right approach? If so, what should 'x' be? Would 0 work?
From what I have seen, there is no way to create such a rule with the
'ceph osd crush' commands: I have to download the current CRUSH map,
edit it and upload the modified version. Am I right?
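For the record, the sequence I have in mind is something like:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt to add the rule, then recompile and inject it
  crushtool -c crushmap.txt -o crushmap-new.bin
  ceph osd setcrushmap -i crushmap-new.bin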
Thanks in advance for your help or suggestions. Best regards,
Michel
Hello Michel,
What you need is:
step choose indep 0 type datacenter
step chooseleaf indep 2 type host
step emit
I think you're right that you need to tweak the CRUSH rule by editing
the crushmap directly.
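With 'choose indep 0', CRUSH selects as many datacenters as it can,
up to the number of chunks; with only 3 datacenters available it picks
all 3, and 'chooseleaf indep 2' then picks 2 hosts (one OSD each) in
each of them, giving the 6 chunks. For completeness, the full rule in
the decompiled crushmap would look something like this (rule name and
id are arbitrary):

rule ec42_multi_dc {
    id 2
    type erasure
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take default
    step choose indep 0 type datacenter
    step chooseleaf indep 2 type host
    step emit
}

You can check the placement before injecting the new map with
something like:

  crushtool -i crushmap-new.bin --test --rule 2 --num-rep 6 --show-mappings

Each reported mapping should contain 6 OSDs, 2 per datacenter.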
Regards
Frédéric.