Unless you have enabled some balancing, this is quite normal (actually a pretty
good kind of normal).
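If you want Ceph to even this out on its own, the ceph-mgr balancer module is
the usual route on Luminous. A rough sketch (the mode is your choice; the
values here are only illustrative):

# ceph mgr module enable balancer
# ceph balancer mode crush-compat
# ceph balancer on
# ceph balancer status

(upmap mode usually balances better, but it requires
"ceph osd set-require-min-compat-client luminous" and therefore all clients on
Luminous or newer.)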
Jesper
Thursday, 14 May 2020, 09.35 +0200 from Florent B. <florent(a)coppint.com>:
Hi,
I have something strange on a Ceph Luminous cluster.
All OSDs have the same size and the same weight, yet one of them (osd.3) is
used at 88% while the others sit around 40 to 50% usage:
# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE     DATA    OMAP    META    AVAIL   %USE  VAR  PGS
 2   hdd 0.49179  1.00000 504GiB  264GiB  263GiB  63.7MiB  960MiB 240GiB  52.34 1.14  81
13   hdd 0.49179  1.00000 504GiB  267GiB  266GiB  55.7MiB 1.37GiB 236GiB  53.09 1.16  94
20   hdd 0.49179  1.00000 504GiB  235GiB  234GiB  62.5MiB  962MiB 268GiB  46.70 1.02  99
21   hdd 0.49179  1.00000 504GiB  306GiB  305GiB  65.2MiB  991MiB 198GiB  60.75 1.32  87
22   hdd 0.49179  1.00000 504GiB  185GiB  184GiB  51.9MiB  972MiB 318GiB  36.83 0.80  73
23   hdd 0.49179  1.00000 504GiB  167GiB  166GiB  60.9MiB  963MiB 337GiB  33.07 0.72  80
24   hdd 0.49179  1.00000 504GiB  235GiB  234GiB  67.5MiB  956MiB 268GiB  46.74 1.02  90
25   hdd 0.49179  1.00000 504GiB  183GiB  182GiB  68.8MiB  955MiB 321GiB  36.32 0.79 100
 3   hdd 0.49179  1.00000 504GiB  442GiB  440GiB  77.5MiB 1.15GiB 61.9GiB 87.70 1.91 103
26   hdd 0.49179  1.00000 504GiB  220GiB  219GiB  61.2MiB  963MiB 283GiB  43.78 0.95  80
29   hdd 0.49179  1.00000 504GiB  298GiB  296GiB  77.4MiB 1013MiB 206GiB  59.09 1.29 106
30   hdd 0.49179  1.00000 504GiB  183GiB  182GiB  60.2MiB  964MiB 321GiB  36.32 0.79  88
10   hdd 0.49179  1.00000 504GiB  176GiB  175GiB  56.5MiB  968MiB 327GiB  35.02 0.76  85
11   hdd 0.49179  1.00000 504GiB  209GiB  208GiB  62.5MiB  961MiB 295GiB  41.42 0.90  89
 0   hdd 0.49179  1.00000 504GiB  253GiB  252GiB  55.7MiB  968MiB 251GiB  50.18 1.09  76
 1   hdd 0.49179  1.00000 504GiB  199GiB  198GiB  60.4MiB  964MiB 305GiB  39.51 0.86  92
16   hdd 0.49179  1.00000 504GiB  219GiB  218GiB  58.2MiB  966MiB 284GiB  43.51 0.95  85
17   hdd 0.49179  1.00000 504GiB  231GiB  230GiB  69.0MiB  955MiB 272GiB  45.97 1.00  97
14   hdd 0.49179  1.00000 504GiB  210GiB  209GiB  61.0MiB  963MiB 293GiB  41.72 0.91  74
15   hdd 0.49179  1.00000 504GiB  182GiB  181GiB  50.7MiB  973MiB 322GiB  36.10 0.79  72
18   hdd 0.49179  1.00000 504GiB  297GiB  296GiB  53.7MiB  978MiB 206GiB  59.03 1.29  87
19   hdd 0.49179  1.00000 504GiB  125GiB  124GiB  61.9MiB  962MiB 379GiB  24.81 0.54  82
             TOTAL        10.8TiB 4.97TiB 4.94TiB 1.33GiB 21.4GiB 5.85TiB 45.91
MIN/MAX VAR: 0.54/1.91  STDDEV: 12.80
Is this a normal situation? Is there any way to let Ceph handle this on its
own, or do I have to reweight the OSD manually?
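(By reweighting manually I mean something along these lines; the threshold and
target weight below are only examples:

# ceph osd test-reweight-by-utilization 110
# ceph osd reweight-by-utilization 110
# ceph osd reweight 3 0.85

The first command is a dry run, the second reweights every OSD more than 10%
above the mean, and the last one pushes down a single OSD by hand.)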
Thank you.
Florent
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io