Please try flushing the journal:
ceph daemon mds.foo flush journal
The problem may be caused by this bug:
As for what to do next, you would likely need to recover the deleted
inodes from the data pool so you can retry deleting the files:
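A rough sketch of what that recovery could look like using the cephfs-data-scan disaster-recovery tooling (this is an assumption about the approach, not a verified procedure for this cluster; the filesystem needs to be taken offline first, and the CephFS disaster-recovery documentation should be checked before running any of it):

  # take the filesystem offline so an active MDS does not race with the scan
  ceph fs fail <fs_name>

  # rebuild metadata for orphaned/deleted inodes from the objects in the data pool
  cephfs-data-scan init
  cephfs-data-scan scan_extents <data_pool>
  cephfs-data-scan scan_inodes <data_pool>
  cephfs-data-scan scan_links

Once the inodes are visible in the tree again, the files can be unlinked a second time and the purge queue should then reclaim the space.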
On Tue, Jan 14, 2020 at 9:30 AM Oskar Malnowicz
<oskar.malnowicz(a)rise-world.com> wrote:
Hello Patrick,
"purge_queue": {
"pq_executing_ops": 0,
"pq_executing": 0,
"pq_executed": 5097138
},
We already restarted the MDS daemons, but nothing changed.
There are no health warnings other than the one Florian already
mentioned.
cheers Oskar
On 14.01.20 at 17:32, Patrick Donnelly wrote:
On Tue, Jan 14, 2020 at 5:15 AM Florian Pritz
<florian.pritz(a)rise-world.com> wrote:
`ceph daemon mds.$hostname perf dump | grep
stray` shows:
> "num_strays": 0,
> "num_strays_delayed": 0,
> "num_strays_enqueuing": 0,
> "strays_created": 5097138,
> "strays_enqueued": 5097138,
> "strays_reintegrated": 0,
> "strays_migrated": 0,
Can you also paste the purge queue
("pq") perf dump?
It's possible the MDS has hit an ENOSPC condition that caused the MDS
to go read-only. This would prevent the MDS PurgeQueue from cleaning
up. Do you see a health warning that the MDS is in this state? If so,
please try restarting the MDS.
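For example, something like this should show whether the MDS is flagged
read-only and, if so, let you bounce it (the systemd unit name is
deployment-specific):

  ceph health detail | grep -i 'read only'
  systemctl restart ceph-mds@$hostname   # or however the MDS is managed here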
--
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D