I think I can replicate your issue on a Luminous cluster. It works fine with an 8+3 pool,
but 10+2 fails with the same error after creating the three chunks.
What does your erasure code profile look like? I don't think I actually tested
anything other than powers of two (8,16) for k before settling on 8.
Have you checked, or could you check, whether 8+3 shows the same behaviour? I may be missing
something really obvious here, but I haven't thought about this for a while.
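In case it's useful, this is roughly how I'd dump the profile in use and spin up an 8+3 pool
for comparison (the pool name, profile name, and PG counts below are placeholders, not your
actual values):

# which profile does the pool use, and what does it contain?
ceph osd pool get ecpool erasure_code_profile
ceph osd erasure-code-profile get <profile-name>   # shows k, m, plugin, stripe_unit, etc.

# throwaway 8+3 profile and pool for comparison (names and PG counts are examples only)
ceph osd erasure-code-profile set test-k8m3 k=8 m=3 crush-failure-domain=host
ceph osd pool create ecpool-k8m3 64 64 erasure test-k8m3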
Cheers,
Tom
-----Original Message-----
From: aoanla(a)gmail.com <aoanla(a)gmail.com>
Sent: 04 September 2019 11:36
To: ceph-users(a)ceph.io
Subject: [ceph-users] rados + radosstriper puts fail with "large" input objects
(mimic/nautilus, ec pool)
Hi everyone:
I asked about this on the #ceph IRC channel, but didn't get much traction (and, as an
aside, the advertised host for the channel's logs is inaccessible to me...).
I have a new Ceph cluster presenting an erasure-coded pool.
The current configuration is 8 nodes, each hosting a mon and one OSD per HDD (20 x 10 TB HGST
spinning disks per node), for a total of 160 OSDs in the cluster.
The cluster configures fine with ceph-ansible (as Nautilus or Mimic), and ceph health is
consistently good (other than the warning for not having associated an application with the
current test pool).
rados bench maxes out the bandwidth of the network interface when I try it with 4MB
objects.
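(For concreteness, a bench run along these lines is what I mean; the exact duration and
thread count are illustrative:)

# write phase with 4 MiB objects, then a sequential read phase, then cleanup
rados -p ecpool bench 60 write -b 4194304 -t 16 --no-cleanup
rados -p ecpool bench 60 seq -t 16
rados -p ecpool cleanup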
However, attempting a more "real-world" test of
rados -p ecpool --striper put obj 120MBfile
causes the transfer to fail with "operation not permitted (95)".
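(Roughly what I ran to hit this, with the test file generated on the fly:)

# generate a ~120 MB file and write it through the striper; the put fails part-way through
dd if=/dev/urandom of=120MBfile bs=1M count=120
rados -p ecpool --striper put obj 120MBfile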
Inspection reveals that three stripe chunks get created - the first being the expected size,
and the second and third being only a few KB each.
The object metadata reported by rados -p ecpool --striper ls for obj is inconsistent with the
sum of the on-disk sizes of the chunks.
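(The commands below are roughly how I inspected things; the per-chunk object naming and the
striper xattr names are my understanding of the libradosstriper convention rather than
anything authoritative:)

# libradosstriper writes plain RADOS objects named <name>.<16-hex-digit index>
rados -p ecpool ls | grep '^obj\.'             # list the stripe chunks
rados -p ecpool stat obj.0000000000000000      # on-disk size of the first chunk
# the striper keeps the logical size/layout as xattrs on the first chunk
rados -p ecpool listxattr obj.0000000000000000
rados -p ecpool getxattr obj.0000000000000000 striper.size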
Can you advise how to diagnose what's breaking here?
Thanks
Sam Skipsey
University of Glasgow
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io