I had a 1x replicated CephFS data test pool. When an OSD died I had '1 pg
stale+active+clean'[1]; after a cluster reboot this turned into
'1 pg unknown'.
ceph pg repair did not fix anything (in either the stale or the unknown state).
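For reference, this is roughly how I was inspecting the problem PG before
recreating it (the PG id 2.1f below is just a placeholder, not my actual PG):

    # show which PGs are unhealthy and why
    ceph health detail
    # list PGs stuck in the stale state
    ceph pg ls stale
    # dump detailed state of one PG (placeholder id)
    ceph pg 2.1f query

Note that 'ceph pg query' hangs or errors for a PG in the unknown state,
since no OSD currently claims it.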
I recreated the PG with:
ceph osd force-create-pg pg.id --yes-i-really-mean-it
The question now is: say I had one or two files in this pool/PG, is their
metadata still tracked by the MDS? Do I need to fix something on the MDS
side?
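In case it helps others: my understanding is that the file metadata lives
in the CephFS metadata pool, so after force-creating a data PG the MDS may
still reference objects that no longer exist. A hedged sketch of how one
might check (assuming a Nautilus-or-later cluster; mds.0 is a placeholder
for the actual MDS rank/name):

    # recursively scrub the tree from the root and repair what it can
    ceph tell mds.0 scrub start / recursive,repair
    # list any metadata damage the MDS has recorded
    ceph tell mds.0 damage ls

Files whose backing data objects are gone would then presumably need to be
deleted through the filesystem to clean up the stranded metadata.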
PS. This was just a performance-testing pool, so at most there could be a
few test images on it, nothing important.
[1]
https://www.mail-archive.com/ceph-users@ceph.io/msg03147.html