Thanks for the answer.
You say that with the default redundancy rule and pool size 3 I need
three separate hosts.
I have 24 separate server nodes, each with 32 OSDs (768 OSDs in total).
My question is why the MDS suffers when only 4% of the OSDs go down (all
in the same node). Do I need to modify the CRUSH map?
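If the pool's CRUSH rule really is choosing OSDs rather than hosts as the failure domain, a sketch of how to check and switch it might look like the following (the pool name `cephfs_data` and rule name `replicated_host` are placeholders, not taken from your cluster; data will rebalance after the rule change):

```shell
# Which rule does the affected pool use, and what does it select on?
ceph osd pool get cephfs_data crush_rule
ceph osd crush rule dump

# Create a replicated rule whose failure domain is the host, not the OSD
ceph osd crush rule create-replicated replicated_host default host

# Point the pool at the new rule (triggers rebalancing)
ceph osd pool set cephfs_data crush_rule replicated_host
```

In a decompiled CRUSH map, the key line of such a rule is `step chooseleaf firstn 0 type host`; if yours says `type osd`, all three replicas of a PG can land on one node, which matches the behaviour you describe.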
On 5/5/21 at 11:55, Robert Sander wrote:
> Hi,
>
> Am 05.05.21 um 11:44 schrieb Andres Rojas Guerrero:
>> I have in the cluster 768 OSD, it is enough that 32 (~ 4%) of them (in
>> the same node) fall and the information becomes inaccessible. Is it
>> possible to improve this behavior?
>
> You need to spread your failure zone in the crush map. It looks like the
> OSD is the failure zone, and not the host. If it were the host, the
> failure of any number of OSDs in a single host would not bring PGs down.
>
> For the default redundancy rule and pool size 3 you
> need three separate
> hosts.
>
> Regards
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
--
*******************************************************
Andrés Rojas Guerrero
Unidad Sistemas Linux
Area Arquitectura Tecnológica
Secretaría General Adjunta de Informática
Consejo Superior de Investigaciones Científicas (CSIC)
Pinar 19
28006 - Madrid
Tel: +34 915680059 -- Ext. 990059
email: a.rojas(a)csic.es
ID comunicate.csic.es: @50852720l:matrix.csic.es
*******************************************************