I would suggest enabling the upmap balancer if you haven't already; it
should help even the data out. Even if it doesn't do better than some
manual rebalancing scheme, it will at least do the moves nicely in the
background, some 8 PGs at a time, so it doesn't impact client traffic.
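Roughly, the steps on Nautilus would be something like the below (a
sketch from memory, so double-check against your cluster; it assumes
all connected clients are luminous or newer -- verify with
"ceph features" before setting the compat flag):

  # allow pg-upmap entries to be used at all
  ceph osd set-require-min-compat-client luminous

  # switch the balancer to upmap mode and turn it on
  ceph balancer mode upmap
  ceph balancer on

  # watch what it is doing
  ceph balancer status

  # optionally cap how much data may be misplaced at once (default is
  # around 5%), if I remember the option name right:
  ceph config set mgr target_max_misplaced_ratio 0.05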
It looks very weird to have such an uneven distribution even while
having lots of PGs (which was my first guess =)
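If you want to check whether it is the PG count or the PG size per OSD
that is skewed, the PGS and VAR columns plus the MIN/MAX VAR / STDDEV
summary that "ceph osd df" prints at the bottom are usually enough,
e.g. roughly:

  ceph osd df tree        # per-OSD %USE, VAR and PGS columns
  ceph osd df | tail -3   # summary lines with MIN/MAX VAR and STDDEV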
On Tue, 25 May 2021 at 03:47, Sergei Genchev <sgenchev(a)gmail.com> wrote:
Hello,
I am running a Nautilus cluster with 5 OSD nodes/90 disks that is
exclusively used for S3. My disks are identical, but utilization
ranges from 9% to 82%, and I am starting to get backfill_toofull
errors even though I have only used 150TB out of 650TB.
- Other than manually CRUSH-reweighting OSDs, is there any other
option for me?
- What would cause this uneven distribution? Is there some
documentation on how to track down what's going on?
Output of 'ceph osd df' is at
https://pastebin.com/17HWFR12
Thank you!
--
May the most significant bit of your life be positive.