Are you running a multi-site setup?
If so, it's best to set the default shard count to a large enough value
*before* enabling multi-site.
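IIRC the relevant knob is rgw_override_bucket_index_max_shards in ceph.conf,
or bucket_index_max_shards in the zonegroup for multi-site. Roughly (from
memory, so double-check the exact option names against the docs for your
release; 16 is just an example value):

    # in ceph.conf on the RGW nodes; only affects newly created buckets
    rgw_override_bucket_index_max_shards = 16

    # for multi-site, set it in the zonegroup instead:
    radosgw-admin zonegroup get > zonegroup.json
    # edit "bucket_index_max_shards" for the zone entries in zonegroup.json
    radosgw-admin zonegroup set < zonegroup.json
    radosgw-admin period update --commit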
If you didn't do this: well... I think the only way is still to
completely re-sync the second site...
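(For completeness, the manual reshard itself is just something along these
lines, from memory, so verify against the Nautilus docs; bucket name and shard
count are placeholders:

    # show per-bucket object counts and current shard status
    radosgw-admin bucket limit check

    # manually reshard one bucket's index
    radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<N>
    radosgw-admin reshard status --bucket=<bucket>

but on multi-site the secondary zone will still need that full re-sync
afterwards.)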
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at
https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Tue, Feb 4, 2020 at 5:23 PM <DHilsbos(a)performair.com> wrote:
>
> All;
>
> We're back to having large OMAP object warnings on our RGW index pool.
>
> This cluster is now in production, so I can't simply dump the buckets / pools and hope
> everything works out.
>
> I did some additional research on this issue, and it looks like I need to (re)shard
> the bucket index. I found information suggesting that, for older versions of Ceph,
> buckets couldn't be resharded after creation[1]. Other information suggests that
> Nautilus (which we are running) can reshard dynamically, but not when multi-site
> replication is configured[2].
>
> This suggests that a "manual" reshard on a Nautilus cluster should be
> possible, but I can't find the commands to do it. Has anyone done this? Does anyone
> have the commands? I can schedule downtime for the cluster and take the
> RADOSGW instance(s) and dependent user services offline.
>
> [1]: https://ceph.io/geen-categorie/radosgw-big-index/
> [2]: https://docs.ceph.com/docs/master/radosgw/dynamicresharding/
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International Inc.
> DHilsbos(a)PerformAir.com
>
> www.PerformAir.com
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io