lsblk shows:
sdb 8:16 0 5.5T 0 disk /var/lib/ceph/osd/ceph-56
and when I run "ll" on that directory it shows:
[root@ctplosd8 ~]# ll /var/lib/ceph/osd/ceph-56
total 552
-rw------- 1 root root 9 May 21 10:40 bfm_blocks
-rw------- 1 root root 4 May 21 10:40 bfm_blocks_per_key
-rw------- 1 root root 5 May 21 10:40 bfm_bytes_per_block
-rw------- 1 root root 13 May 21 10:40 bfm_size
-rw-r--r-- 1 root root 107374182400 May 21 10:41 block
-rw------- 1 root root 2 May 21 10:40 bluefs
-rw------- 1 root root 37 May 21 10:41 ceph_fsid
-rw-r--r-- 1 root root 37 May 21 10:40 fsid
-rw------- 1 root root 8 May 21 10:40 kv_backend
-rw------- 1 root root 21 May 21 10:41 magic
-rw------- 1 root root 4 May 21 10:41 mkfs_done
-rw------- 1 root root 6 May 21 10:41 ready
-rw------- 1 root root 10 May 21 10:40 type
-rw------- 1 root root 3 May 21 10:41 whoami
...
I created the OSDs manually with the following script:
UUID=$(uuidgen)
ID=$(ceph osd new $UUID)
mkdir /var/lib/ceph/osd/ceph-$ID
mkfs.xfs /dev/$disk
mount /dev/$disk /var/lib/ceph/osd/ceph-$ID/
ceph-osd -i $ID --mkfs --osd-uuid $UUID --data /dev/sdb
chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID/
---
and that is where the 100 G block file resides.
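That block file size lines up with what "ceph osd df" reports: the 107374182400 bytes shown in the "ll" listing above is exactly 100 GiB, so BlueStore is sized by the regular "block" file, not by the 5.5 T disk underneath it. A quick arithmetic sanity check (the byte count is copied from the listing; nothing here touches the cluster):

```shell
#!/bin/sh
# Byte count of the "block" file, taken from the "ll" output above.
BLOCK_BYTES=107374182400

# 1 GiB = 1073741824 bytes; the division is exact here.
echo "$((BLOCK_BYTES / 1073741824)) GiB"   # prints: 100 GiB
```

If "block" were backed by the whole device instead of a regular file, the same check against the device size would come out near 5.5 TiB.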
On Fri, May 21, 2021 at 9:59 AM Janne Johansson <icepic.dz(a)gmail.com> wrote:
On Fri, May 21, 2021 at 09:41, Rok Jaklič
<rjaklic(a)gmail.com> wrote:
why would ceph osd df show a smaller number in the SIZE field
than there actually is:
85 hdd 0.89999 1.00000 100 GiB 96 GiB 95 GiB 289 KiB 952 MiB 4.3 GiB 95.68 3.37 10 up
instead of 100 GiB there should be 5.5 TiB.
What does "lsblk" say about the size of the disk/partition where osd 85
runs?
--
May the most significant bit of your life be positive.