I have removed_snaps listed on pools that I am not using. They exist
mostly for performance testing, so I cannot imagine ever having
created snapshots in them.
pool 33 'fs_data.ssd' replicated size 3 min_size 1 crush_rule 5 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode warn last_change 68413 lfor 0/0/65853 flags hashpspool,selfmanaged_snaps stripe_width 0 application cephfs
removed_snaps [567~1,56b~1,56f~1,57f~1,583~1,587~1,58b~1,58f~1,591~1,593~1,595~1,597~1,599~1,59b~1,59d~1,59f~1,5a1~1,5a3~1,5a5~1,5a7~1,
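For what it's worth, the removed_snaps value is an interval set: each hex "start~length" pair covers `length` consecutive snapshot IDs beginning at `start`. A minimal Python sketch (the helper name `decode_removed_snaps` is mine, not a Ceph API) that expands such a string into individual snap IDs:

```python
# Sketch: decode a Ceph removed_snaps interval set ("start~length" pairs in hex).
def decode_removed_snaps(interval_set: str) -> list:
    """Expand "start~length" hex intervals into a list of snap IDs (as ints)."""
    snap_ids = []
    for interval in interval_set.strip("[]").split(","):
        if "~" not in interval:   # tolerate the trailing comma of truncated output
            continue
        start_hex, length_hex = interval.split("~")
        start, length = int(start_hex, 16), int(length_hex, 16)
        snap_ids.extend(range(start, start + length))
    return snap_ids

# First three intervals from the output above: snap IDs 0x567, 0x56b, 0x56f.
print(decode_removed_snaps("[567~1,56b~1,56f~1]"))
```

Each "~1" interval here covers a single snap ID, which is why the list is so long: every deleted self-managed snapshot gets its own entry until neighbouring intervals can be merged.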
[@ ]# rados df | egrep '^POOL|fs_data.ssd'
POOL_NAME    USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS   RD  WR_OPS   WR  USED COMPR  UNDER COMPR
fs_data.ssd   0 B        0       0       0                   0        0         0       0  0 B       0  0 B         0 B          0 B
Hi Gregory,
I saw 'ceph -s' showing 'snaptrim' at some point, but I still have
these 'removed_snaps' listed on this pool (and on other pools as well;
I don't remember creating or deleting snapshots there). A 'ceph tell
mds.c scrub start /test/ recursive repair' did not remove them.
Can/should I remove these, and if so, how?
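To see at a glance which pools still carry removed_snaps and how many snap IDs each entry covers, the 'ceph osd pool ls detail' text can be scanned. A small sketch under the assumption of the one-entry-per-pool format shown above (`removed_snap_counts` is a hypothetical helper, not a Ceph command):

```python
import re

def removed_snap_counts(detail: str) -> dict:
    """Map pool name -> number of removed snap IDs, from
    `ceph osd pool ls detail` output text.

    Assumes each entry looks like:
      pool <id> '<name>' ... removed_snaps [<hex start~length intervals>]
    DOTALL lets the intervals wrap across lines, as in mail-mangled output.
    """
    counts = {}
    for name, intervals in re.findall(
            r"pool \d+ '([^']+)'.*?removed_snaps \[([0-9a-f~,\s]+)",
            detail, flags=re.DOTALL):
        counts[name] = sum(int(i.split("~")[1], 16)
                           for i in intervals.replace("\n", "").split(",")
                           if "~" in i)
    return counts

sample = "pool 33 'fs_data.ssd' replicated size 3 removed_snaps [567~1,56b~2]"
print(removed_snap_counts(sample))  # one pool, three removed snap IDs
```

This only reports what the OSDMap still records; it says nothing about whether the IDs can be purged, which is the actual question here.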
-----Original Message-----
To: ceph-users
Subject: [ceph-users] ceph osd pool ls detail 'removed_snaps' on empty pool?
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io