I agree with you: this caused me five hours of downtime, and I don't
think quota checking at this level is needed anymore in high-scale
clusters.
Thanks for your help.
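For anyone following along, the soft-threshold behavior discussed in this thread can be sketched roughly as below. This is a minimal illustrative sketch in Python, not the actual rgw code; the `can_use_cached_stats` helper and its names are made up for clarity. The only facts it encodes are from the thread: the 0.95 default of rgw_bucket_quota_soft_threshold and the numbers from the log line.

```python
SOFT_THRESHOLD = 0.95  # default of rgw_bucket_quota_soft_threshold

def can_use_cached_stats(size_bytes, quota_bytes,
                         soft_threshold=SOFT_THRESHOLD):
    """Illustrative check: cached bucket stats are trusted only while
    the bucket size is below soft_threshold * quota. Past that point,
    rgw falls back to an exact stats read, which is expensive for
    sharded buckets (it has to consult every index shard)."""
    return size_bytes < quota_bytes * soft_threshold

# Numbers from the log line in this thread. The 489626271744 cutoff is
# exactly 0.95 x 515396075520 (480 GiB), so the quota here appears to
# be 480 GiB and the bucket has reached it.
quota = 515396075520               # 480 GiB
assert quota * SOFT_THRESHOLD == 489626271744
assert not can_use_cached_stats(515396075520, quota)  # log: ">=" hit
assert can_use_cached_stats(400 * 2**30, quota)       # well under cutoff
```

Note this also shows why the message fires on every write once a bucket nears its quota: each check re-reads exact stats instead of the cache, which would explain the high buckets.index IOPS.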
On Mon, Jul 6, 2020 at 6:56 PM Casey Bodley <cbodley(a)redhat.com> wrote:
>
> It looks like these messages are related to the config variable
> rgw_bucket_quota_soft_threshold, which defaults to 0.95. I dug through
> the git history and found this was added in a 2013 commit
>
> https://github.com/ceph/ceph/commit/14eabd4aa7b8a2e2c0c43fe7f877ed2171277526.
>
> I guess the reasoning there is that, once a bucket is close to hitting
> its quota, we want our quota checks to be 'exact' instead of using the
> cache. But these quota checks can be extremely expensive for sharded
> buckets, and the checks aren't atomic with the writes anyway. The
> change long predates dynamic resharding, and I don't think it's
> reasonable anymore. I'd support reverting that commit entirely. What
> does everyone else think?
>
> On Mon, Jul 6, 2020 at 6:44 AM Seena Fallah <seenafallah(a)gmail.com> wrote:
> >
> > Hi all.
> >
> > I'm seeing this log message on my rgw instances, and it seems to be
> > the reason for the very high IOPS on my buckets.index pool.
> >
> > 2020-07-04 18:15:08.472 7f15b37fa700 20 quota: can't use cached stats,
> > exceeded soft threshold (size): 515396075520 >= 489626271744
> >
> > Can someone help me with this?
> >
> > Thanks.
> > _______________________________________________
> > Dev mailing list -- dev(a)ceph.io
> > To unsubscribe send an email to dev-leave(a)ceph.io
> >
>