If you have a small cluster without host-level redundancy, you can still
configure Ceph to handle this correctly by adding a drive failure domain
between the host and OSD levels. So yes, you need to change more than
just failure-domain=osd, as that alone would be a problem. However, it
is essentially the same as running multiple OSDs per NVMe, as some
people do.
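
One way to express such an intermediate failure domain is a CRUSH rule that
first spreads across hosts and then picks distinct OSDs within each host.
A rough sketch of what this could look like in a decompiled CRUSH map (the
rule name, id, and shard counts are placeholders, not from the original
message):

```
rule ec_split_per_host {
    id 2
    type erasure
    step take default
    # pick as many hosts as needed (0 = as many as the pool size requires)
    step choose indep 0 type host
    # then pick 2 distinct OSDs inside each chosen host
    step chooseleaf indep 2 type osd
    step emit
}
```

With a rule like this, losing a whole host costs you at most two shards per
PG instead of an arbitrary number, which is the point of placing a failure
domain between host and OSD.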
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges(a)croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
On Sat, 13 Mar 2021 at 13:11, Marc <Marc(a)f1-outsourcing.eu> wrote:
>
> > Well, if you run with failure-domain=host, then if it says "I have 8
> > 14TB drives and one failed" or "I have 16 7TB drives and two failed"
> > isn't going to matter much in terms of recovery, is it?
> > It would mostly matter for failure-domain=OSD, otherwise it seems
> > about equal.
>
> Yes, but especially in small clusters, people are changing the failure
> domain to osd to be able to use EC (like I have ;))
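
[For context: the failure-domain=osd change Marc refers to is typically made
when creating the erasure-code profile. A hedged sketch with placeholder
profile and pool names, k/m values chosen only for illustration:

```
ceph osd erasure-code-profile set ec42-osd k=4 m=2 crush-failure-domain=osd
ceph osd pool create ecpool 32 32 erasure ec42-osd
```

With crush-failure-domain=osd, a 4+2 pool fits on fewer than six hosts, but
one host failure can then take out several shards of the same PG, which is
the risk being discussed above.]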