Hi,
I have set out to resize the OSDs in our Ceph cluster to extend the overall cluster
capacity by adding 40 GB to each disk. I noticed that after a disk resize and OSD restart,
RAW USE grows roughly in proportion to the new size (e.g. by 20 GB) while DATA remains the
same, which makes the new space not readily available. Here is the OSD output for the cluster:
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META      AVAIL    %USE   VAR   PGS  STATUS
 1  hdd    0.09769  1.00000   100 GiB   83 GiB   82 GiB  164 MiB   891 MiB   17 GiB  82.79  1.00   77  up
 3  hdd    0.09769  1.00000   100 GiB   83 GiB   82 GiB  355 MiB   772 MiB   17 GiB  82.74  1.00   84  up
 2  hdd    0.09769  1.00000   100 GiB   84 GiB   82 GiB  337 MiB   1.3 GiB   16 GiB  83.88  1.01   82  up
 4  hdd    0.09769  1.00000   140 GiB  125 GiB   84 GiB  148 MiB   919 MiB   15 GiB  89.24  1.07   80  up
 6  hdd    0.09769  1.00000   140 GiB  106 GiB  104 GiB  333 MiB  1015 MiB   34 GiB  75.47  0.91  107  up
 7  hdd    0.09769  1.00000   140 GiB  118 GiB   97 GiB  351 MiB   1.2 GiB   22 GiB  84.48  1.02  101  up
                    TOTAL     720 GiB  598 GiB  531 GiB  1.6 GiB   6.1 GiB  122 GiB  83.10
MIN/MAX VAR: 0.91/1.07  STDDEV: 4.06
The OSDs I have extended so far are 7, 6 and 4. Only OSD 6 detected the new size without
inflating RAW USE; OSDs 7 and 4 show a gap between RAW USE and DATA.
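In case it is useful, this is how I have been checking which OSDs actually picked up the
new device size (same rook-ceph-tools deployment as above; as far as I know
`bluestore_bdev_size` in the `ceph osd metadata` output is the size in bytes that
BlueStore believes the device has):

```shell
# For each resized OSD, print the device size BlueStore thinks it has (bytes).
# An OSD that did not detect the resize should still report the old value here.
for id in 4 6 7; do
  kubectl -n rook-ceph exec deploy/rook-ceph-tools -- \
    ceph osd metadata "$id" | grep '"bluestore_bdev_size"'
done
```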
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 720 GiB 122 GiB 598 GiB 598 GiB 83.10
TOTAL 720 GiB 122 GiB 598 GiB 598 GiB 83.10
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 1 1 449 KiB 2 1.3 MiB 0 16 GiB
cephfs-metadata 2 16 832 MiB 245.62k 2.4 GiB 4.80 16 GiB
cephfs-replicated 3 128 176 GiB 545.23k 530 GiB 91.63 16 GiB
replicapool 4 32 19 B 2 12 KiB 0 16 GiB
This reports nearly 600 GB used, while it should be closer to 530 GB, which is what the
cephfs-replicated pool reports as its data usage.
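To put a number on the discrepancy, here is a quick back-of-envelope calculation using the
GiB figures from the `ceph df` output above (pool USED already includes replication
overhead):

```python
# USED per pool from `ceph df`, converted to GiB:
# .mgr 1.3 MiB, cephfs-metadata 2.4 GiB, cephfs-replicated 530 GiB, replicapool 12 KiB
pool_used = 1.3 / 1024 + 2.4 + 530.0 + 12 / (1024 * 1024)
raw_used = 598.0  # RAW USED from `ceph df`

gap = raw_used - pool_used
print(f"pools account for {pool_used:.1f} GiB; unexplained raw gap is {gap:.1f} GiB")
```

So roughly 65 GiB of RAW USED is not attributable to any pool.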
Any ideas why this is happening? Should I continue extending all OSDs to 140 GB to see if
that makes a difference?
Br,
merp.