On Mon, Dec 16, 2019 at 11:34 AM Marc Roos <M.Roos(a)f1-outsourcing.eu> wrote:
Hi Gregory,
I saw ceph -s showing 'snaptrim'(?), but I still have these 'removed_snaps' listed on this pool (and on other pools; I don't remember creating or deleting snapshots there). A 'ceph tell mds.c scrub start /test/ recursive repair' did not remove them. Can/should I remove these, and if so, how?
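For checking whether trimming is still in progress, something along these lines should work on a Nautilus-era cluster (filtering 'ceph pg ls' by state is an assumption about the running version; adjust as needed):

  # any PGs still trimming, or queued to trim?
  ceph -s | grep -i snaptrim
  ceph pg ls snaptrim snaptrim_wait

  # does the pool still list removed_snaps afterwards?
  ceph osd pool ls detail | grep removed_snaps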
-----Original Message-----
To: ceph-users
Subject: [ceph-users] ceph osd pool ls detail 'removed_snaps' on empty pool?
I have removed_snaps listed on pools that I am not using. They exist mostly for some performance testing, so I cannot imagine ever having created snapshots in them.
pool 33 'fs_data.ssd' replicated size 3 min_size 1 crush_rule 5 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode warn last_change 68413 lfor 0/0/65853 flags hashpspool,selfmanaged_snaps stripe_width 0 application cephfs
        removed_snaps [567~1,56b~1,56f~1,57f~1,583~1,587~1,58b~1,58f~1,591~1,593~1,595~1,597~1,599~1,59b~1,59d~1,59f~1,5a1~1,5a3~1,5a5~1,5a7~1,
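For reference, the removed_snaps field is, as far as I know, an interval set of snapid~count pairs with the snap ids printed in hex, so '567~1' denotes a single removed snapshot with id 0x567:

  # convert the hex snap id from the interval '567~1' to decimal
  printf '%d\n' 0x567    # prints 1383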
These are RADOS snapshots, so CephFS may have removed them, but the OSDs still need to trim all the data; CephFS commands won't do anything to or with them. Once the OSDs have removed the snapshot data (which can take a while, depending on settings and other cluster activity), they'll report that back to the monitor, and it will remove them from the list of removed snaps. (It may not bother removing them if there aren't enough other live snapshots; I forget.)
Either way, this list is basically harmless, so you shouldn't worry about it.
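If you want to see what governs how quickly the OSDs trim, a quick read-only check on a Nautilus-era cluster might look like this (osd_snap_trim_sleep and osd_pg_max_concurrent_snap_trims are the usual knobs; exact names and defaults can vary by release):

  # per-OSD sleep between individual snap trim operations
  ceph config get osd osd_snap_trim_sleep
  # how many snap trims a single PG runs concurrently
  ceph config get osd osd_pg_max_concurrent_snap_trims

  # re-check the removed_snaps list as trimming completes
  watch -n 10 "ceph osd pool ls detail | grep removed_snaps"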
[@ ]# rados df | egrep '^POOL|fs_data.ssd'
POOL_NAME    USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS   RD  WR_OPS   WR  USED COMPR  UNDER COMPR
fs_data.ssd   0 B        0       0       0                   0        0         0       0  0 B       0  0 B         0 B          0 B