On Sun, Feb 21, 2021 at 1:04 PM Gaël THEROND <gael.therond(a)bitswalk.com> wrote:
Hi Ilya,
Sorry for the late reply, I've been sick all week long :-/ and then really busy at
work once I got back.
I've tried to wipe out the image by zeroing it (I even tried to fully wipe it), and I
still see the same error message.
The thing is, isn't the newly created image supposed to be empty?
The image is empty, but mkfs.xfs doesn't know that and attempts to
discard it anyway.
Regarding the pool creation: both. I created a new metadata pool (archives) and a new
data pool (archives-data), as the pool is used for EC-based RBD images.
I've also tried deleting and re-creating the pools, both with a different name and with
the same name; we always hit the issue.
Here are the commands I used to create those pools and volumes:
POOL CREATION:
ceph osd pool create archives 1024 1024 replicated
ceph osd pool create archives-data 1024 1024 erasure standard-ec
ceph osd pool set archives-data allow_ec_overwrites true
VOLUME CREATION:
rbd create --size 80T --data-pool archives-data archives/mirror
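In case it helps, here is a quick way to double-check that the image actually picked up
the EC data pool (just a verification sketch, output abbreviated):

rbd info archives/mirror
# the output should include a line like:
#   data_pool: archives-data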
Just for additional information, we use the following EC profile (a sketch of how it
might have been created follows the listing):
k=3
m=2
plugin=jerasure
crush-failure-domain=host
crush-device-class=ssd
technique=reed_sol_van
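For completeness, a profile with those parameters would typically have been created with
something along these lines (a sketch only, reconstructed from the values above):

ceph osd erasure-code-profile set standard-ec \
    k=3 m=2 \
    plugin=jerasure technique=reed_sol_van \
    crush-failure-domain=host crush-device-class=ssd

# and can be inspected with:
ceph osd erasure-code-profile get standard-ec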
This cluster is composed of 10 OSD nodes, each filled with 24 x 8TB SSDs, so if my maths
are right (k+m = 5 chunks, which only needs 5 hosts out of the 10 available), our profile
is OK and it shouldn't be a profile/crushmap issue.
I didn't try to map the volume using the admin user, though. You're right that I should,
in order to eliminate any auth issue, but I doubt it's related, as a smaller image works
just fine with this client key using the same pool name.
Please do; it really looks like an authentication issue to me.
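If it helps, a quick way to rule caps in or out (the client name below is a placeholder,
use whichever key you map with):

# dump the caps for the client key used for the mapping
ceph auth get client.<your-client>

# the osd caps should cover both the metadata pool and the EC data pool,
# for example something like:
#   caps osd = "profile rbd pool=archives, profile rbd pool=archives-data"

In particular, the caps need to include the data pool (archives-data), not just the
metadata pool the image lives in.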
Thanks,
Ilya