Hi all,
We've recently run into an issue where our single Ceph RBD pool is throwing nearfull
warnings for some OSDs. The OSDs themselves vary in PG count and fullness from a low of
64 PGs / 78% full to a high of 73 PGs / 86% full. Are there any suggestions on how to get
this to balance more evenly? Currently we have 360 drives in a single pool with 8192 PGs.
I think we could double pg_num and that would balance things a bit better, but I wanted
to see whether the community suggests anything other than that. Let me know if there's
any further info I can provide that would help sort this out.
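For context, here's a quick back-of-the-envelope on the spread (replica size 3 is an assumption on our part, inferred from the roughly 3x stored-to-used ratio in the df output below):

```python
# Rough check of how far the observed per-OSD PG counts sit from the ideal.
pg_num = 8192        # PGs in the pool
replica_size = 3     # assumed replicated pool, size 3 (matches ~3x stored->used)
num_osds = 360

pg_copies = pg_num * replica_size        # total PG copies to place: 24576
ideal_per_osd = pg_copies / num_osds     # ~68.3 PG copies per OSD if perfectly even

low, high = 64, 73                       # observed per-OSD PG counts
spread_pct = (high - low) / ideal_per_osd * 100
print(f"ideal PGs/OSD: {ideal_per_osd:.1f}, observed spread: {spread_pct:.0f}%")
```

So we're seeing roughly a 13% spread around an ideal of about 68 PG copies per OSD, which lines up with the ~8-point gap in %full.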
Thanks,
RAW STORAGE:
    CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
    ssd    741 TiB  135 TiB  606 TiB  607 TiB   81.85
    TOTAL  741 TiB  135 TiB  606 TiB  607 TiB   81.85

POOLS:
    POOL  ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
    pool  1   162 TiB  46.81M   494 TiB  89.02  20 TiB
cluster:
    health: HEALTH_WARN
            85 nearfull osd(s)
            1 pool(s) nearfull

services:
    osd: 360 osds: 360 up (since 7d), 360 in (since 7d)

data:
    pools:   1 pools, 8192 pgs
    objects: 46.81M objects, 169 TiB
    usage:   607 TiB used, 135 TiB / 741 TiB avail
    pgs:     8192 active+clean
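For reference, the other option we were weighing besides bumping pg_num is the mgr balancer in upmap mode. A sketch of what we'd run (stock ceph CLI commands; whether our clients are all luminous+ is an assumption we'd verify first):

```shell
# See whether the balancer is already active and in which mode.
ceph balancer status

# upmap mode requires all clients to be luminous or newer; check first.
ceph features
ceph osd set-require-min-compat-client luminous

# Switch to upmap mode and let the mgr remap PGs toward an even distribution.
ceph balancer mode upmap
ceph balancer on

# Score the current distribution (lower is better) to track progress.
ceph balancer eval
```

These are cluster-mutating commands, so we'd obviously stage them outside peak hours and watch `ceph -s` while the remaps backfill.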