Hi Dan,
Possibly you're reproducing
https://tracker.ceph.com/issues/46456.
That ticket explains how the underlying issue arose; I don't remember how a
bucket exhibiting it is repaired.
Eric?
Matt
On Thu, Oct 1, 2020 at 8:41 AM Dan van der Ster <dan(a)vanderster.com> wrote:
Dear friends,
Running 14.2.11, we have one particularly large bucket with a very
strange distribution of objects among the shards. The bucket has 512
shards, and most shards have ~75k entries, but shard 0 has 1.75M
entries:
# rados -p default.rgw.buckets.index listomapkeys
.dir.61c59385-085d-4caa-9070-63a3868dccb6.272652427.1.0 | wc -l
1752085
# rados -p default.rgw.buckets.index listomapkeys
.dir.61c59385-085d-4caa-9070-63a3868dccb6.272652427.1.1 | wc -l
78388
# rados -p default.rgw.buckets.index listomapkeys
.dir.61c59385-085d-4caa-9070-63a3868dccb6.272652427.1.2 | wc -l
78764
We had resharded this bucket (manually) from 32 up to 512 shards just
before upgrading from 12.2.12 to 14.2.11 a couple weeks ago.
Any idea why shard .0 is getting such an imbalance of entries?
Should we manually reshard this bucket again?
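[For reference, the per-shard counts shown above can be gathered for all 512
shards with a small loop; this is a sketch using the pool name and bucket
instance marker from the commands above, run against a live cluster:]

```shell
# Count omap entries in every index shard of the bucket.
# Pool and marker are taken from the listomapkeys commands above.
POOL=default.rgw.buckets.index
MARKER=.dir.61c59385-085d-4caa-9070-63a3868dccb6.272652427.1
for shard in $(seq 0 511); do
    printf '%s.%s: ' "$MARKER" "$shard"
    rados -p "$POOL" listomapkeys "$MARKER.$shard" | wc -l
done
```

[A manual reshard, if that turns out to be the fix, would be along the lines
of `radosgw-admin bucket reshard --bucket=<name> --num-shards=<n>`; the exact
invocation and safety caveats depend on the release.]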
Thanks!
Dan
--
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103
http://www.redhat.com/en/technologies/storage
tel. 734-821-5101
fax. 734-769-8938
cel. 734-216-5309