On Tue, Mar 23, 2021 at 6:13 AM duluxoz <duluxoz@gmail.com> wrote:
Hi All,
I've got a new issue (hopefully this one will be the last).
I have a working Ceph (Octopus) cluster with a replicated pool
(my-pool), an erasure-coded pool (my-pool-data), and an image (my-image)
- all of which *seem* to be working correctly. I also have the correct
Keyring specified (ceph.client.my-id.keyring).
ceph -s reports everything as healthy.
The ec profile (my-ec-profile) was created with: ceph osd
erasure-code-profile set my-ec-profile k=4 m=2 crush-failure-domain=host
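(For reference, the resulting profile can be double-checked with:

    ceph osd erasure-code-profile get my-ec-profile

which should report k=4, m=2 and crush-failure-domain=host.)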
The replicated pool was created with: ceph osd pool create my-pool 100
100 replicated
Followed by: rbd pool init my-pool
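(rbd pool init should also tag the pool with the rbd application; if
needed, that can be confirmed with:

    ceph osd pool application get my-pool

)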
The ec pool was created with: ceph osd pool create my-pool-data 100 100
erasure my-ec-profile --autoscale-mode=on
Followed by: rbd pool init my-pool-data
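(And, since the data pool was created with --autoscale-mode=on, the
autoscaler state can be double-checked with:

    ceph osd pool autoscale-status

if that turns out to matter.)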
The image was created with: rbd create -s 1T --data-pool my-pool-data
my-pool/my-image
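(For what it's worth, the image layout can be confirmed with:

    rbd info my-pool/my-image

which should show "data_pool: my-pool-data" if the separate data pool
took effect.)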
The Keyring was created with: ceph auth get-or-create client.my-id mon
'profile rbd' osd 'profile rbd pool=my-pool' mgr 'profile rbd
pool=my-pool' -o /etc/ceph/ceph.client.my-id.keyring
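(The caps that ended up on the client can be inspected with:

    ceph auth get client.my-id

)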
Hi Matthew,
If you are using a separate data pool, you need to give "my-id" access
to it:
osd 'profile rbd pool=my-pool, profile rbd pool=my-pool-data'
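Assuming the client already exists, you can fix it in place with
something like the following (note that "ceph auth caps" replaces the
full set of caps, so the mon and mgr profiles have to be restated):

    ceph auth caps client.my-id \
        mon 'profile rbd' \
        osd 'profile rbd pool=my-pool, profile rbd pool=my-pool-data' \
        mgr 'profile rbd pool=my-pool'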
On a CentOS 8 client machine I have installed ceph-common, placed the
Keyring file into /etc/ceph/, and run the command: rbd device map
my-pool/my-image --id my-id
Does "rbd device map" actually succeed? Can you attach dmesg from that
client machine from when you (attempted to) map, ran fdisk, etc?
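Something along the lines of

    rbd device map my-pool/my-image --id my-id
    dmesg | tail -n 100

run on the client right after the map attempt should capture the
relevant kernel messages.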
Thanks,
Ilya