Hi,
I already use the CRUSH map weight to manually control the OSD utilization.
However, this results in a situation where 5-10% of my 336 OSDs have a
weight < 1.00000, and this hinders the ceph balancer from working.
This means I would first need to reset every OSD with weight < 1.00000
before the ceph balancer can start, and doing so would push a pool over
its threshold for storing new data.
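For completeness, the affected OSDs can be listed with something like the
following (a rough sketch, assuming the WEIGHT column is the 3rd field of
the ceph osd df output on this release; the field index may differ):

ceph osd df | awk '$1 ~ /^[0-9]+$/ && $3+0 > 0 && $3+0 < 1 {print "osd." $1, $3}'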
Therefore I would prefer to "move PGs manually to empty OSDs".
THX
On 04.03.2020 at 11:34, Scheurer François wrote:
Hi Thomas
To get the usage:
ceph osd df | sort -nk8
#VAR is the ratio to the average utilization
#WEIGHT is the CRUSH weight, typically the disk capacity in TiB
#REWEIGHT is a temporary (until OSD restart or ceph osd set noout) WEIGHT correction for manual rebalancing
For a temporary reweight you can use:
ceph osd reweight osd.<ID> <REWEIGHT>
or:
ceph osd test-reweight-by-utilization <VAR>
ceph osd reweight-by-utilization <VAR>
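For example (hypothetical OSD id and values, only to illustrate the syntax):

ceph osd reweight osd.42 0.90
ceph osd test-reweight-by-utilization 120

If I remember the syntax correctly, the argument to
(test-)reweight-by-utilization is a percentage of the average utilization
(e.g. 120 for 1.2x the average), so roughly VAR*100 rather than VAR itself.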
For a permanent reweight you can use:
ceph osd crush reweight osd.<ID> <WEIGHT>
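For example, to set the CRUSH weight of a (hypothetical) 8 TiB OSD back to
its capacity:

ceph osd crush reweight osd.42 8.0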
To speed up the backfill I use this (warning: it decreases client performance):
ceph tell 'osd.*' injectargs '--osd_max_backfills 30
--osd_recovery_max_active 45 --osd_recovery_op_priority 10'
Then to set back to the defaults:
ceph tell 'osd.*' injectargs '--osd_max_backfills 1
--osd_recovery_max_active 3 --osd_recovery_op_priority 3'
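On a recent release (Nautilus or later, if I am not mistaken) the same
options can also be set and later cleared through the central config store
instead of injectargs, e.g.:

ceph config set osd osd_max_backfills 30
ceph config rm osd osd_max_backfills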
Cheers
Francois Scheurer
________________________________________
From: Thomas Schneider <74cmonty(a)gmail.com>
Sent: Wednesday, March 4, 2020 11:15 AM
To: ceph-users(a)ceph.io
Subject: [ceph-users] Forcibly move PGs from full to empty OSD
Hi,
Ceph balancer is not working correctly; there's an open bug report
<https://tracker.ceph.com/issues/43752>, too.
As long as this issue is unsolved, I need a workaround because I get more
and more warnings about "nearfull osd(s)".
Therefore my question is:
How can I forcibly move PGs from full OSD to empty OSD?
THX
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io