artemis@icitsrv5:~$ ceph -s
  cluster:
    id:     815ea021-7839-4a63-9dc1-14f8c5feecc6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum iccluster003,iccluster005,iccluster007 (age 6d)
    mgr: iccluster021(active, since 5d), standbys: iccluster023
    mds: cephfs:1 {0=iccluster013=up:active} 2 up:standby
    osd: 80 osds: 80 up (since 6d), 80 in (since 6d); 68 remapped pgs
    rgw: 8 daemons active (iccluster003.rgw0, iccluster005.rgw0, iccluster007.rgw0, iccluster013.rgw0, iccluster015.rgw0, iccluster019.rgw0, iccluster021.rgw0, iccluster023.rgw0)

  data:
    pools:   9 pools, 1592 pgs
    objects: 41.82M objects, 103 TiB
    usage:   149 TiB used, 292 TiB / 442 TiB avail
    pgs:     22951249/457247397 objects misplaced (5.019%)
             1524 active+clean
             55   active+remapped+backfill_wait
             13   active+remapped+backfilling

  io:
    client:   0 B/s rd, 7.6 MiB/s wr, 0 op/s rd, 201 op/s wr
    recovery: 340 MiB/s, 121 objects/s

artemis@icitsrv5:~$ ceph fs status
cephfs - 4 clients
======
+------+--------+--------------+---------------+-------+-------+
| Rank | State  |     MDS      |    Activity   |  dns  |  inos |
+------+--------+--------------+---------------+-------+-------+
|  0   | active | iccluster013 | Reqs:   10 /s |  346k |  337k |
+------+--------+--------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata |  751M | 81.1T |
|   cephfs_data   |   data   | 14.2T |  176T |
+-----------------+----------+-------+-------+
+--------------+
| Standby MDS  |
+--------------+
| iccluster019 |
| iccluster015 |
+--------------+
MDS version: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)

artemis@icitsrv5:~$ ceph osd pool ls detail
pool 3 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 125 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 128 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 130 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 131 flags hashpspool stripe_width 0 application rgw
pool 7 'cephfs_data' erasure size 11 min_size 9 crush_rule 1 object_hash rjenkins pg_num 512 pgp_num 512 autoscale_mode warn last_change 204 lfor 0/0/199 flags hashpspool,ec_overwrites stripe_width 32768 application cephfs
pool 8 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 144 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 9 'default.rgw.buckets.data' erasure size 11 min_size 9 crush_rule 2 object_hash rjenkins pg_num 1024 pgp_num 808 pgp_num_target 1024 autoscale_mode warn last_change 2982 lfor 0/0/180 flags hashpspool stripe_width 32768 application rgw
pool 10 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 171 flags hashpspool stripe_width 0 application rgw
pool 11 'default.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 176 flags hashpspool stripe_width 0 application rgw
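# Not part of the session capture above, just a sketch for reference: the two
# erasure pools report size 11 / min_size 9, which matches k=8 data + m=3 coding
# chunks. Assuming the ecpool-8-3 profile created further down, the profile
# behind a pool can be inspected like this:
ceph --cluster artemis osd pool get cephfs_data erasure_code_profile
ceph --cluster artemis osd erasure-code-profile get ecpool-8-3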
artemis@icitsrv5:~$ ceph fs authorize cephfs client.test /test rw
[client.test]
        key = XXX

artemis@icitsrv5:~$ ceph auth get client.test
exported keyring for client.test
[client.test]
        key = XXX
        caps mds = "allow rw path=/test"
        caps mon = "allow r"
        caps osd = "allow rw tag cephfs data=cephfs"

root@icitsrv5:~# ceph --cluster artemis auth get-key client.test > /etc/ceph/artemis.client.test.secret
root@icitsrv5:~# mkdir -p /mnt/artemis/test/ ; mount -t ceph -o rw,relatime,name=test,secretfile=/etc/ceph/artemis.client.test.secret iccluster003.iccluster.epfl.ch,iccluster005.iccluster.epfl.ch,iccluster007.iccluster.epfl.ch:/test /mnt/artemis/test/
root@icitsrv5:~# ls -la /mnt/artemis/test/
total 5
drwxr-xr-x 1 root root    1 Jan 23 07:21 .
drwxr-xr-x 3 root root 4096 Jan 23 07:15 ..
root@icitsrv5:~# echo "test" > /mnt/artemis/test/foo
root@icitsrv5:~# ls -la /mnt/artemis/test/
total 5
drwxr-xr-x 1 root root    1 Jan 23 07:21 .
drwxr-xr-x 3 root root 4096 Jan 23 07:15 ..
-rw-r--r-- 1 root root    5 Jan 23 07:21 foo


# What I did to put cephfs_data on an erasure-coded (EC) pool:

# must stop the MDS servers
ansible -i ~/iccluster/ceph-config/cluster-artemis/inventory mdss -m shell -a "systemctl stop ceph-mds.target"

# must allow pool deletion
ceph --cluster artemis tell mon.\* injectargs '--mon-allow-pool-delete=true'

# remove the cephfs filesystem
ceph --cluster artemis fs rm cephfs --yes-i-really-mean-it

# delete the cephfs pools
ceph --cluster artemis osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph --cluster artemis osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it

# disallow pool deletion again
ceph --cluster artemis tell mon.\* injectargs '--mon-allow-pool-delete=false'

# create the erasure-coding profile
# ecpool-8-3 (as on the apollo cluster)
ceph --cluster artemis osd erasure-code-profile set ecpool-8-3 k=8 m=3 crush-failure-domain=host

# re-create the pools for cephfs
# cephfs_data in erasure coding
ceph --cluster artemis osd pool create cephfs_data 64 64 erasure ecpool-8-3
# cephfs_metadata must stay replicated!
ceph --cluster artemis osd pool create cephfs_metadata 8 8

# must set allow_ec_overwrites to be able to create a cephfs on top of an EC pool
ceph --cluster artemis osd pool set cephfs_data allow_ec_overwrites true

# create the cephfs filesystem named "cephfs"
ceph --cluster artemis fs new cephfs cephfs_metadata cephfs_data
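
# Not captured in the session above, a sketch of the follow-up steps (assuming
# the same ansible inventory as the stop command at the top): the MDS daemons
# were stopped for the rebuild, so they have to be started again before the new
# filesystem can go active, and the EC settings can then be verified.
ansible -i ~/iccluster/ceph-config/cluster-artemis/inventory mdss -m shell -a "systemctl start ceph-mds.target"

# check that overwrites are enabled on the EC data pool and that the new
# filesystem uses the expected pools
ceph --cluster artemis osd pool get cephfs_data allow_ec_overwrites
ceph --cluster artemis fs ls
ceph --cluster artemis fs status cephfs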