I ran these commands, but I still have the same problem:
$ cephfs-data-scan scan_extents cephfs_data
$ cephfs-data-scan scan_inodes cephfs_data
$ cephfs-data-scan scan_links
2020-01-14 20:36:45.110 7ff24200ef80 -1 mds.0.snap updating last_snap 1 -> 27
$ cephfs-data-scan cleanup cephfs_data
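(For reference: on a large pool these scans can be parallelized. The Ceph
disaster-recovery docs describe running several workers, each with its own
--worker_n index out of --worker_m total. A sketch with four workers, assuming
the same cephfs_data pool; run each in its own shell, and let every
scan_extents worker finish before starting scan_inodes:)
$ cephfs-data-scan scan_extents --worker_n 0 --worker_m 4 cephfs_data
$ cephfs-data-scan scan_extents --worker_n 1 --worker_m 4 cephfs_data
$ cephfs-data-scan scan_extents --worker_n 2 --worker_m 4 cephfs_data
$ cephfs-data-scan scan_extents --worker_n 3 --worker_m 4 cephfs_data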
Do you have any other ideas?
On 14.01.20 at 20:32, Patrick Donnelly wrote:
On Tue, Jan 14, 2020 at 11:24 AM Oskar Malnowicz
<oskar.malnowicz(a)rise-world.com> wrote:
$ ceph daemon mds.who flush journal
{
"message": "",
"return_code": 0
}
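(Aside: before any of the destructive resets below, the journal's integrity
can be checked non-destructively; a sketch, reusing the cephfs_test1:0 rank
that appears further down:)
$ # report overall journal integrity without modifying anything
$ cephfs-journal-tool --rank=cephfs_test1:0 journal inspect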
$ cephfs-table-tool 0 reset session
{
"0": {
"data": {},
"result": 0
}
}
$ cephfs-table-tool 0 reset snap
{
"result": 0
}
$ cephfs-table-tool 0 reset inode
{
"0": {
"data": {},
"result": 0
}
}
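(Note: cephfs-table-tool also accepts "all" in place of a rank number, which
the disaster-recovery docs use to reset a table on every MDS rank at once;
a sketch:)
$ # same reset as above, applied to all ranks rather than just rank 0
$ cephfs-table-tool all reset session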
$ cephfs-journal-tool --rank=cephfs_test1:0 journal reset
old journal was 98282151365~92872
new journal start will be 98285125632 (2881395 bytes past old end)
writing journal head
writing EResetJournal entry
done
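(For reference, the documented procedure salvages whatever the journal still
holds *before* resetting it, via event recover_dentries; a sketch with the
same rank as above:)
$ # write surviving journal entries back into the metadata store;
$ # the journal reset afterwards discards whatever remains
$ cephfs-journal-tool --rank=cephfs_test1:0 event recover_dentries summary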
$ cephfs-data-scan init
Inode 0x0x1 already exists, skipping create. Use --force-init to
overwrite the existing object.
Inode 0x0x100 already exists, skipping create. Use --force-init to
overwrite the existing object.
Should I run it with the --force-init flag?
No, that shouldn't be necessary.
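(One thing worth trying once the scans have finished: mark the rank repaired
and run a forward scrub with repair, as the recovery docs suggest. A sketch,
where mds.a is a placeholder for your MDS daemon name and the scrub_path
syntax is the Nautilus-era admin-socket form:)
$ # clear the damaged flag on rank 0, if it is set
$ ceph mds repaired 0
$ # online forward scrub of the whole tree, repairing what it can
$ ceph daemon mds.a scrub_path / recursive repair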