Thank you, Yan Zheng, for the help getting my cephfs back in working order by providing
a source version that had the fix for the root inode in it.
(https://tracker.ceph.com/issues/42675)
- Karsten
-----Original message-----
From: Yan, Zheng <ukernel(a)gmail.com>
Sent: Tue 12-11-2019 11:55
Subject: Re: [ceph-users] Re: mds crash loop
To: Karsten Nielsen <karsten(a)foo-bar.dk>;
CC: ceph-users(a)ceph.io;
> On Tue, Nov 12, 2019 at 6:18 PM Karsten Nielsen <karsten(a)foo-bar.dk> wrote:
> >
> > -----Original message-----
> > From: Karsten Nielsen <karsten(a)foo-bar.dk>
> > Sent: Tue 12-11-2019 10:30
> > Subject: [ceph-users] Re: mds crash loop
> > To: Yan, Zheng <ukernel(a)gmail.com>;
> > CC: ceph-users(a)ceph.io;
> > > -----Original message-----
> > > From: Yan, Zheng <ukernel(a)gmail.com>
> > > Sent: Mon 11-11-2019 15:09
> > > Subject: Re: [ceph-users] Re: mds crash loop
> > > To: Karsten Nielsen <karsten(a)foo-bar.dk>;
> > > CC: ceph-users(a)ceph.io;
> > > > On Mon, Nov 11, 2019 at 5:09 PM Karsten Nielsen <karsten(a)foo-bar.dk> wrote:
> > > > >
> > > > > I started a job that moved some files around in the cephfs cluster, which
> > > > resulted in the mds going back into the crash loop.
> > > > > Logs are here:
> > > > >
> > > > > http://s3.foo-bar.dk/mds-dumps/mds.log-20191111
> > > > >
> > > > > Any help would be appreciated.
> > > > >
> > > >
> > > > looks like snaptable is corrupted.
> > > >
> > > > nautilus version (14.2.2) of ‘cephfs-data-scan scan_links’ can fix
> > > > snaptable. hopefully it will fix your issue.
> > > >
> > > > you don't need to upgrade the whole cluster. Just install nautilus in a
> > > > temp machine or compile ceph from source.
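
A rough sketch of the "temp machine" route: run the nautilus cephfs-data-scan from a
throwaway container against the existing cluster. This assumes a nautilus ceph/daemon
image and that the cluster's /etc/ceph (conf plus a keyring with access to the cephfs
pools) is mounted in; the image tag below is illustrative, not necessarily the one
used in this thread.

  docker run --rm --net=host -v /etc/ceph:/etc/ceph \
    --entrypoint cephfs-data-scan \
    ceph/daemon:latest-nautilus \
    scan_links

Any host that can reach the mons will do; nothing on the cluster itself needs to be
upgraded.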
> > >
> > > I did run the command that you suggested; unfortunately it did not fix the
> > > problem.
> > >
> > >
> > > http://s3.foo-bar.dk/mds-dumps/mds.log-20191112
> > >
> >
> >
> > The output from the command is this:
> >
> > sudo docker exec -it rgw2 cephfs-data-scan scan_links
> > 2019-11-12 08:46:27.025 7fe775dd7d80 -1 datascan.scan_links: Remove duplicated ino 0x0x100026d17d4 from 0x100013b0d3d/latest.log
> > 2019-11-12 08:46:28.665 7fe775dd7d80 -1 datascan.load_table: unable to read mds table 'mds1_inotable': (2) No such file or directory
> > 2019-11-12 08:46:28.665 7fe775dd7d80 -1 mds.1.inotable: erasing 0x20000000000 to 0x2000000d665
> > 2019-11-12 08:46:28.793 7fe775dd7d80 -1 datascan.load_table: unable to read mds table 'mds2_inotable': (2) No such file or directory
> > 2019-11-12 08:46:28.793 7fe775dd7d80 -1 mds.2.inotable: erasing 0x30000000000 to 0x300000228f5
> > 2019-11-12 08:46:29.345 7fe775dd7d80 -1 mds.0.snap updating last_snap 1 -> 3
> >
> >
>
> please run ceph-mds with debug_mds=20, and send the crash log to me.
>
> Thanks
> Yan, Zheng
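
A sketch of how that log can be captured, with placeholder names (mds.x stands for the
actual daemon name): since the daemon is crash looping, the debug level has to be in
place before it comes up again, e.g.

  ceph config set mds debug_mds 20              # mimic+ centralized config
  ceph tell mds.x injectargs '--debug_mds 20'   # or bump a daemon that is still running

The crash then lands in the usual /var/log/ceph/ceph-mds.<name>.log of the MDS
container.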
>
> > >
> > >
> > > >
> > > >
> > > >
> > > > > - Karsten
> > > > >
> > > > > -----Original message-----
> > > > > From: Yan, Zheng <ukernel(a)gmail.com>
> > > > > Sent: Thu 07-11-2019 14:20
> > > > > Subject: Re: [ceph-users] Re: mds crash loop
> > > > > To: Karsten Nielsen <karsten(a)foo-bar.dk>;
> > > > > CC: ceph-users(a)ceph.io;
> > > > > > On Thu, Nov 7, 2019 at 6:40 PM Karsten Nielsen <karsten(a)foo-bar.dk> wrote:
> > > > > > >
> > > > > > > That is awesome.
> > > > > > >
> > > > > > > Now I just need to figure out where the lost+found files need to go.
> > > > > > > And what happened to the missing objects for the dirs.
> > > > > > >
> > > > > >
> > > > > > lost+found files are likely files that were deleted. you can keep the
> > > > > > lost+found dir for a while, then delete the 'lost+found' directory.
> > > > > >
> > > > > > for 'missing object' dirs, mv all of them to a temp directory, such as
> > > > > > /mnt/cephfs/missing_obj_dirs. Then run command 'ceph daemon mds.x scrub_path
> > > > > > /missing_obj_dirs force recursive repair'. wait a minute, then rm -rf
> > > > > > /mnt/cephfs/missing_obj_dirs
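
Spelled out as a sketch, assuming the filesystem is mounted at /mnt/cephfs, the active
daemon is mds.x, and using the nextcloud/custom_apps directories from the log as the
example:

  mkdir /mnt/cephfs/missing_obj_dirs
  mv /mnt/cephfs/nextcloud/custom_apps/carnet /mnt/cephfs/missing_obj_dirs/
  # ...repeat for mail, passwords, phonetrack, richdocuments...
  ceph daemon mds.x scrub_path /missing_obj_dirs force recursive repair
  # wait for the scrub to finish, then
  rm -rf /mnt/cephfs/missing_obj_dirs

Note that the scrub_path argument is the path inside cephfs, not under the local mount
point.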
> > > > > >
> > > > > > > Any tool that is able to do that ?
> > > > > > >
> > > > > > > Thanks
> > > > > > > - Karsten
> > > > > > >
> > > > > > > -----Original message-----
> > > > > > > From: Yan, Zheng <ukernel(a)gmail.com>
> > > > > > > Sent: Thu 07-11-2019 09:22
> > > > > > > Subject: Re: [ceph-users] Re: mds crash loop
> > > > > > > To: Karsten Nielsen <karsten(a)foo-bar.dk>;
> > > > > > > CC: ceph-users(a)ceph.io;
> > > > > > > > I have tracked down the root cause. See https://tracker.ceph.com/issues/42675
> > > > > > > >
> > > > > > > > Regards
> > > > > > > > Yan, Zheng
> > > > > > > >
> > > > > > > > On Thu, Nov 7, 2019 at 4:01 PM Karsten Nielsen <karsten(a)foo-bar.dk> wrote:
> > > > > > > > >
> > > > > > > > > -----Original message-----
> > > > > > > > > From: Yan, Zheng <ukernel(a)gmail.com>
> > > > > > > > > Sent: Thu 07-11-2019 07:21
> > > > > > > > > Subject: Re: [ceph-users] Re: mds crash loop
> > > > > > > > > To: Karsten Nielsen <karsten(a)foo-bar.dk>;
> > > > > > > > > CC: ceph-users(a)ceph.io;
> > > > > > > > > > On Thu, Nov 7, 2019 at 5:50 AM Karsten Nielsen <karsten(a)foo-bar.dk> wrote:
> > > > > > > > > > >
> > > > > > > > > > > -----Original message-----
> > > > > > > > > > > From: Yan, Zheng <ukernel(a)gmail.com>
> > > > > > > > > > > Sent: Wed 06-11-2019 14:16
> > > > > > > > > > > Subject: Re: [ceph-users] mds crash loop
> > > > > > > > > > > To: Karsten Nielsen <karsten(a)foo-bar.dk>;
> > > > > > > > > > > CC: ceph-users(a)ceph.io;
> > > > > > > > > > > > On Wed, Nov 6, 2019 at 4:42 PM Karsten Nielsen <karsten(a)foo-bar.dk> wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > -----Original message-----
> > > > > > > > > > > > > From: Yan, Zheng <ukernel(a)gmail.com>
> > > > > > > > > > > > > Sent: Wed 06-11-2019 08:15
> > > > > > > > > > > > > Subject: Re: [ceph-users] mds crash loop
> > > > > > > > > > > > > To: Karsten Nielsen <karsten(a)foo-bar.dk>;
> > > > > > > > > > > > > CC: ceph-users(a)ceph.io;
> > > > > > > > > > > > > > On Tue, Nov 5, 2019 at 5:29 PM Karsten Nielsen <karsten(a)foo-bar.dk> wrote:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Hi,
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Last week I upgraded my ceph cluster from luminous to mimic 13.2.6
> > > > > > > > > > > > > > > It was running fine for a while but yesterday my mds went into a
> > > > > > > > > > > > > > > crash loop.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I have 1 active and 1 standby mds for my cephfs, both of which are
> > > > > > > > > > > > > > > running in the same crash loop.
> > > > > > > > > > > > > > > I am running ceph based on https://hub.docker.com/r/ceph/daemon
> > > > > > > > > > > > > > > version v3.2.7-stable-3.2-mimic-centos-7-x86_64 with an etcd kv store.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Log details are: https://paste.debian.net/1113943/
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > please try again with debug_mds=20. Thanks
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Yan, Zheng
> > > > > > > > > > > > >
> > > > > > > > > > > > > Yes I have set that and had to move to pastebin.com as debian
> > > > > > > > > > > > > apparently only supports 150k
> > > > > > > > > > > > >
> > > > > > > > > > > > > https://pastebin.com/Gv7c5h54
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Looks like the on-disk root inode is corrupted. have you encountered
> > > > > > > > > > > > any unusual things during the upgrade?
> > > > > > > > > > > >
> > > > > > > > > > > > please run 'rados -p <cephfs metadata pool> stat 1.00000000.inode',
> > > > > > > > > > > > check if the object was modified before or after the 'luminous ->
> > > > > > > > > > > > 13.2.6' upgrade.
> > > > > > > > > > > > To fix the corrupted object, run 'cephfs-data-scan init --force-init'.
> > > > > > > > > > > > Then restart mds. After mds becomes active, run 'ceph daemon mds.x
> > > > > > > > > > > > scrub_path / force repair'
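
As a sketch with placeholder names (the metadata pool is whatever 'ceph fs ls' reports,
written here as cephfs_metadata; mds.x stands for the active daemon):

  # mtime of the root inode object shows when it was last rewritten
  rados -p cephfs_metadata stat 1.00000000.inode

  # rewrite the default root and mds-dir inodes, then restart the MDS
  cephfs-data-scan init --force-init

  # once an MDS is active again
  ceph daemon mds.x scrub_path / force repair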
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > I followed the steps and got the mds started, but now a lot of files
> > > > > > > > > > > (24283) are in lost+found and I have these errors in the mds log
> > > > > > > > > > >
> > > > > > > > > > 'cephfs-data-scan init --force-init' does not move files into
> > > > > > > > > > lost+found. have you ever run other 'cephfs-data-scan foo' command or
> > > > > > > > > > 'cephfs-journal-tool foo' command?
> > > > > > > > >
> > > > > > > > > I have had a similar problem with the cluster before where I went through
> > > > > > > > > the cycle of: https://docs.ceph.com/docs/mimic/cephfs/disaster-recovery/ ->
> > > > > > > > > Using an alternate metadata pool for recovery
> > > > > > > > >
> > > > > > > > > I did run the cephfs-journal-tool journal reset command, mostly because
> > > > > > > > > cephfs is not that utilized so I thought it was safe to do as after the
> > > > > > > > > upgrade the cluster has not been used much, so data loss would be minimal -
> > > > > > > > > apparently I was wrong.
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > 2019-11-06 20:20:18.215 7f0bd9090700 1 mds.0.32011 cluster recovered.
> > > > > > > > > > > 2019-11-06 20:20:19.019 7f0bd2dfa700 0 mds.0.cache.dir(0x100013acfcb) _fetched missing object for [dir 0x100013acfcb /nextcloud/custom_apps/carnet/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dc4f5100]
> > > > > > > > > > > 2019-11-06 20:20:19.019 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013acfcb object missing on disk; some files may be lost (/nextcloud/custom_apps/carnet)
> > > > > > > > > > > 2019-11-06 20:20:19.275 7f0bd2dfa700 0 mds.0.cache.dir(0x100013a3156) _fetched missing object for [dir 0x100013a3156 /nextcloud/custom_apps/mail/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dcc40000]
> > > > > > > > > > > 2019-11-06 20:20:19.275 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013a3156 object missing on disk; some files may be lost (/nextcloud/custom_apps/mail)
> > > > > > > > > > > 2019-11-06 20:20:19.371 7f0bd2dfa700 0 mds.0.cache.dir(0x100013abb3c) _fetched missing object for [dir 0x100013abb3c /nextcloud/custom_apps/passwords/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dcc40700]
> > > > > > > > > > > 2019-11-06 20:20:19.371 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013abb3c object missing on disk; some files may be lost (/nextcloud/custom_apps/passwords)
> > > > > > > > > > > 2019-11-06 20:20:19.383 7f0bd2dfa700 0 mds.0.cache.dir(0x100013a9b9b) _fetched missing object for [dir 0x100013a9b9b /nextcloud/custom_apps/phonetrack/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dcc40e00]
> > > > > > > > > > > 2019-11-06 20:20:19.383 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013a9b9b object missing on disk; some files may be lost (/nextcloud/custom_apps/phonetrack)
> > > > > > > > > > > 2019-11-06 20:20:19.431 7f0bd2dfa700 0 mds.0.cache.dir(0x100013a2659) _fetched missing object for [dir 0x100013a2659 /nextcloud/custom_apps/richdocuments/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dcc41500]
> > > > > > > > > > > 2019-11-06 20:20:19.431 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013a2659 object missing on disk; some files may be lost (/nextcloud/custom_apps/richdocuments)
> > > > > > > > > > > 2019-11-06 20:20:22.360 7f0bd9090700 1 mds.k8s-node-01 Updating MDS map to version 32015 from mon.1
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > > - Karsten
> > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Thanks for any hints
> > > > > > > > > > > > > > > - Karsten
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > >
> > > > > >
> > > >
> > > >
> > >
>
>