I have a Ceph Luminous (12.2.12) cluster with 6 nodes. I’m attempting to create an EC 3+2 pool with the following commands:

  1. Create the EC profile:

     ceph osd erasure-code-profile set es32 k=3 m=2 plugin=jerasure w=8 technique=reed_sol_van crush-failure-domain=host crush-root=sgshared

  2. Verify profile creation:

     [root@mon-1 ~]# ceph osd erasure-code-profile get es32
     crush-device-class=
     crush-failure-domain=host
     crush-root=sgshared
     jerasure-per-chunk-alignment=false
     k=3
     m=2
     plugin=jerasure
     technique=reed_sol_van
     w=8
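
As a side note, with crush-failure-domain=host an EC 3+2 pool needs k+m = 5 distinct host buckets under the chosen root, which the 6 nodes should satisfy. A quick way to double-check (just a sketch, using the names from the profile above):

     ceph osd tree                      # the sgshared root should contain at least 5 host buckets
     ceph osd erasure-code-profile ls   # confirm the es32 profile is listed
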

  3. Create a pool using this profile:

     ceph osd pool create ec32pool 1024 1024 erasure es32

  4. List the pool detail:

     pool 31 'es32' erasure size 5 min_size 4 crush_rule 11 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 1568 flags hashpspool stripe_width 12288 application ES

  5. Here’s the CRUSH rule that’s created:
     {
         "rule_id": 11,
         "rule_name": "es32",
         "ruleset": 11,
         "type": 3,
         "min_size": 3,
         "max_size": 5,
         "steps": [
             {
                 "op": "set_chooseleaf_tries",
                 "num": 5
             },
             {
                 "op": "set_choose_tries",
                 "num": 100
             },
             {
                 "op": "take",
                 "item": -2,
                 "item_name": "sgshared"
             },
             {
                 "op": "chooseleaf_indep",
                 "num": 0,
                 "type": "host"
             },
             {
                 "op": "emit"
             }
         ]
     }

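For what it’s worth, here is how I read those numbers (just a sketch; the 4 KiB stripe unit is my assumption of the default, everything else comes straight from the output above):

     # size         = k + m    = 3 + 2    = 5 chunks, one per host
     # stripe_width = k * 4096 = 3 * 4096 = 12288 bytes
     # min_size is 4 at the pool level, but 3/5 (min_size/max_size) in the CRUSH rule
     # "chooseleaf_indep", "num": 0, "type": "host" selects as many hosts as the pool size, i.e. 5
     ceph osd pool ls detail | grep erasure   # the pool line quoted in step 4
     ceph osd crush rule dump es32            # the rule quoted in step 5
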
From the output of “ceph osd pool ls detail” you can see that the pool has min_size=4, while the CRUSH rule says min_size=3; however, the pool does NOT survive two hosts failing.

Am I missing something?
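
For reference, this is roughly what I run to check the pool state while the OSDs on two of the hosts are stopped (just a sketch; I’m using the pool name from the create command above, substitute whatever “ceph osd pool ls” reports):

     ceph osd pool get ec32pool size       # should report k + m = 5
     ceph osd pool get ec32pool min_size   # reported as 4 in the detail output above
     ceph health detail                    # overall health and any stuck/inactive PG summary
     ceph pg ls-by-pool ec32pool           # per-PG state for this pool while the hosts are down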