Hi,
In Ceph, when you create an object, it does not simply go to whichever OSD has room. An object is mapped
to a placement group using a hash algorithm, and placement groups are in turn mapped to OSDs. See
[1] for details. So if any of your OSDs goes full, write operations are no longer
guaranteed to succeed. Once you correct the imbalance, you should see more available space.
Also, you only have 289 placement groups, which I think is too few for your 48 OSDs [2].
With more placement groups, the imbalance would be far less severe.
[1]:
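The mapping described above can be illustrated with a toy model. This is my own sketch, not Ceph's actual code: real Ceph uses the rjenkins hash plus ceph_stable_mod for object-to-PG mapping and CRUSH for PG-to-OSD placement, whereas this uses md5 and a uniform random scatter purely to show the shape of the behaviour:

```python
import hashlib
import random
from statistics import mean

def object_to_pg(name: str, pg_num: int) -> int:
    # A deterministic hash of the object name picks the PG; free space
    # on any OSD plays no role in placement. (md5 is an illustrative
    # stand-in for Ceph's rjenkins hash.)
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % pg_num

def pgs_per_osd(pg_num: int, n_osd: int, seed: int = 1) -> list[int]:
    # Crude stand-in for CRUSH: scatter PGs uniformly over OSDs and
    # count how many land on each one.
    rng = random.Random(seed)
    counts = [0] * n_osd
    for _ in range(pg_num):
        counts[rng.randrange(n_osd)] += 1
    return counts

few = pgs_per_osd(289, 48)    # roughly the cluster in this thread
many = pgs_per_osd(4096, 48)  # same OSD count with many more PGs
# With only ~6 PGs per OSD, the busiest OSD carries far more than the
# average; with more PGs, the max/mean ratio shrinks toward 1.
print(max(few) / mean(few), max(many) / mean(many))
```

This is why too few PGs makes the imbalance so severe: with ~6 PGs per OSD, one extra PG is a ~17% jump in load for that OSD.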
On 25 Oct 2020, at 18:24, Amudhan P
<amudhan83(a)gmail.com> wrote:
Hi Stefan,
I have started the balancer, but what I don't understand is that there is enough
free space on the other disks.
Why isn't that shown as available space?
How do I reclaim the free space?
On Sun 25 Oct, 2020, 2:27 PM Stefan Kooman,
<stefan(a)bit.nl> wrote:
On 2020-10-25 05:33, Amudhan P wrote:
Yes, there is an imbalance in the PGs assigned to OSDs.
`ceph osd df` output snip
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META    AVAIL   %USE  VAR  PGS STATUS
 0  hdd  5.45799 1.00000  5.5 TiB 3.6 TiB 3.6 TiB 9.7 MiB 4.6 GiB 1.9 TiB 65.94 1.31  13 up
 1  hdd  5.45799 1.00000  5.5 TiB 1.0 TiB 1.0 TiB 4.4 MiB 1.3 GiB 4.4 TiB 18.87 0.38   9 up
 2  hdd  5.45799 1.00000  5.5 TiB 1.5 TiB 1.5 TiB 4.0 MiB 1.9 GiB 3.9 TiB 28.30 0.56  10 up
 3  hdd  5.45799 1.00000  5.5 TiB 2.1 TiB 2.1 TiB 7.7 MiB 2.7 GiB 3.4 TiB 37.70 0.75  12 up
 4  hdd  5.45799 1.00000  5.5 TiB 4.1 TiB 4.1 TiB 5.8 MiB 5.2 GiB 1.3 TiB 75.27 1.50  20 up
 5  hdd  5.45799 1.00000  5.5 TiB 5.1 TiB 5.1 TiB 5.9 MiB 6.7 GiB 317 GiB 94.32 1.88  18 up
 6  hdd  5.45799 1.00000  5.5 TiB 1.5 TiB 1.5 TiB 5.2 MiB 2.0 GiB 3.9 TiB 28.32 0.56   9 up
MIN/MAX VAR: 0.19/1.88 STDDEV: 22.13
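For reference, the VAR column is each OSD's %USE divided by the cluster-average %USE. Recomputing it from just the seven rows above (the full cluster has 48 OSDs, so these values differ slightly from the VAR and STDDEV that ceph computed cluster-wide):

```python
from statistics import mean, pstdev

# %USE values from the seven OSDs shown in the snip above. The full
# cluster has 48 OSDs, so the cluster-wide average (and hence the VAR
# column printed by `ceph osd df`) is slightly different.
use = [65.94, 18.87, 28.30, 37.70, 75.27, 94.32, 28.32]

avg = mean(use)
var = [round(u / avg, 2) for u in use]  # VAR = %USE / average %USE
print(avg, var, pstdev(use))
```

A VAR of 1.88 on one OSD while another sits at 0.38 is exactly the kind of spread the balancer is meant to flatten.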
ceph balancer mode upmap
ceph balancer on
The balancer should start balancing, and that should result in considerably more
space reported as available. Good to know: the available space that ceph df
reports is based on the disk that is most full.
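A rough way to see why the most-full disk bounds the reported space, under simplifying assumptions of my own (equal weights, perfectly even placement; not how ceph df actually computes MAX AVAIL):

```python
# If new data spreads evenly across equal-weight OSDs, the cluster is
# effectively "full" as soon as its fullest OSD is. Usable space is
# therefore bounded by the fullest OSD's remaining space times the OSD
# count, not by the sum of all free space.
# Numbers are the AVAIL column from the `ceph osd df` snip, in TiB.
avail = [1.9, 4.4, 3.9, 3.4, 1.3, 0.317, 3.9]

total_free = sum(avail)           # naive expectation: ~19 TiB free
usable = min(avail) * len(avail)  # even-placement bound: ~2.2 TiB
print(total_free, usable)
```

With OSD 5 at 94% full, these seven disks alone would stop accepting evenly spread writes after roughly 2 TiB, despite ~19 TiB of nominal free space, which matches what Amudhan is seeing.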
There is all sorts of tuning available for the balancer, although I
can't find it in the documentation (the Ceph DocUBetter project is working
on that; see [1] for information). You can look in the Python code to see
which variables you can tune: /usr/share/ceph/mgr/balancer/module.py
ceph config set mgr mgr/balancer/begin_weekday 1
ceph config set mgr mgr/balancer/end_weekday 5
ceph config set mgr mgr/balancer/begin_time 1000
ceph config set mgr mgr/balancer/end_time 1700
^^ restricts the balancer to running only on weekdays (Monday to Friday)
from 10:00 to 17:00.
Gr. Stefan
[1]:
https://docs.ceph…
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io