rados ls -p ssdshop
The output is ~20 MB of lines, none with a bench prefix:
...
rbd_data.d4993cc3c89825.00000000000074ec
rbd_data.d4993cc3c89825.0000000000001634
journal_data.83.d4993cc3c89825.333485
journal_data.83.d4993cc3c89825.380648
journal_data.83.d4993cc3c89825.503838
...
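To see at a glance which object class dominates a listing like the one above, the saved output can be grouped by name prefix. A minimal sketch, assuming the full listing was first saved with `rados -p ssdshop ls > /tmp/out` (the path is illustrative):

```shell
# Count objects per name prefix (rbd_data, journal_data, benchmark_data, ...)
# and show the largest groups first.
cut -d. -f1 /tmp/out | sort | uniq -c | sort -rn
```

A large `journal_data` or `benchmark_data` count here would point at the space consumer.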
On 13 Dec 2020, at 11:05, Anthony D'Atri
<anthony.datri@gmail.com> wrote:
Any chance you might have orphaned `rados bench` objects? This happens more often than one
might think.
`rados ls > /tmp/out`
Inspect the result. You should see a few administrative objects, plus header and data
objects for the RBD volume. If you see a zillion with names like `bench*`, there’s your
culprit. Those can be cleaned up.
On 12 Dec 2020, at 11:42 PM, mk
<mk@pop.de> wrote:
Hi folks,
my cluster shows strange behavior: the only SSD pool on the cluster, with replica size 3
and pg/pgp_num 512, contains a 300 GB RBD image with a single snapshot, yet occupies
11 TB of space! I have tried objectmap check/rebuild, fstrim, etc., which didn’t solve
the problem; any help would be appreciated.
ceph version 14.2.7 nautilus (stable)
ceph df
-------
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 107 TiB 68 TiB 39 TiB 39 TiB 36.45
ssd 21 TiB 11 TiB 11 TiB 11 TiB 50.78
TOTAL 128 TiB 78 TiB 50 TiB 50 TiB 38.84
POOLS:
    POOL       ID    STORED     OBJECTS    USED      %USED    MAX AVAIL
    ssdshop    83    3.5 TiB    517.72k    11 TiB    96.70    124 GiB
rados df
--------
POOL_NAME  USED    OBJECTS  CLONES  COPIES   MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS    RD      WR_OPS    WR       USED COMPR  UNDER COMPR
ssdshop    11 TiB  537040   28316   1611120  0                   0        0         11482773  15 GiB  44189589  854 GiB  0 B         0 B
rbd du -p ssdshop
-----------------
NAME                                        PROVISIONED  USED
shp-de-300gb.rbd@snap_2020-12-12_20:30:00   300 GiB      289 GiB
shp-de-300gb.rbd                            300 GiB      109 GiB
<TOTAL>                                     300 GiB      398 GiB
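One sanity check worth noting (my own arithmetic, not from the thread): the pool's USED figure is consistent with its STORED figure under replica size 3; the real question is why STORED is 3.5 TiB when `rbd du` reports under 400 GiB for the image plus snapshot. Roughly:

```shell
# STORED x replica count should approximate raw USED:
# 3.5 TiB x 3 = 10.5 TiB, close to the reported ~11 TiB.
awk 'BEGIN { stored = 3.5; rep = 3; printf "expected raw used: %.1f TiB\n", stored * rep }'
```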
crush_rule
-----------
rule ssd {
    id 3
    type replicated
    min_size 1
    max_size 10
    step take dc1 class ssd
    step chooseleaf firstn 2 type rack
    step emit
    step take dc2 class ssd
    step chooseleaf firstn -1 type rack
    step emit
}
BR
Max
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io