-----Original message-----
From: Yan, Zheng <ukernel(a)gmail.com>
Sent: Wed 06-11-2019 14:16
Subject: Re: [ceph-users] mds crash loop
To: Karsten Nielsen <karsten(a)foo-bar.dk>;
CC: ceph-users(a)ceph.io;
On Wed, Nov 6, 2019 at 4:42 PM Karsten Nielsen
<karsten(a)foo-bar.dk> wrote:
-----Original message-----
From: Yan, Zheng <ukernel(a)gmail.com>
Sent: Wed 06-11-2019 08:15
Subject: Re: [ceph-users] mds crash loop
To: Karsten Nielsen <karsten(a)foo-bar.dk>;
CC: ceph-users(a)ceph.io;
> On Tue, Nov 5, 2019 at 5:29 PM Karsten Nielsen <karsten(a)foo-bar.dk> wrote:
> >
> > Hi,
> >
> > Last week I upgraded my ceph cluster from luminous to mimic 13.2.6.
> > It was running fine for a while, but yesterday my mds went into a crash loop.
> >
> > I have 1 active and 1 standby mds for my cephfs, both of which are running the same crash loop.
> > v3.2.7-stable-3.2-mimic-centos-7-x86_64 with an etcd kv store.
Please try again with debug_mds=20. Thanks
Yan, Zheng
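
For reference, one way to raise the mds log level to 20 (the daemon name 'mds.a' below is a placeholder; adjust to your own deployment, or set it in ceph.conf instead):

  # raise debug logging on a running mds via its admin socket (placeholder daemon name)
  ceph daemon mds.a config set debug_mds 20

  # or make it persistent in ceph.conf on the mds host:
  #   [mds]
  #       debug mds = 20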
Yes, I have set that and had to move to pastebin.com, as the Debian paste service apparently only supports 150k.
Looks like the on-disk root inode is corrupted. Have you encountered anything unusual during the upgrade?
Please run 'rados -p <cephfs metadata pool> stat 1.00000000.inode' and check whether the object was modified before or after the 'luminous -> 13.2.6' upgrade.
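
For example, assuming the metadata pool is named 'cephfs_metadata' (check the actual name with 'ceph fs ls'):

  # print the size and mtime of the root inode object; compare the mtime
  # with the time of the luminous -> mimic upgrade
  rados -p cephfs_metadata stat 1.00000000.inode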
To fix the corrupted object, run 'cephfs-data-scan init --force-init', then restart the mds. After the mds becomes active, run 'ceph daemon mds.x scrub_path / force repair'.
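
Putting those steps together as a rough sketch (cephfs-data-scan is a disaster-recovery tool; the mds name 'mds.x' and the systemd unit below are placeholders for your own deployment):

  # reinitialise the root inode object with default metadata
  cephfs-data-scan init --force-init

  # restart the mds daemon; how depends on the deployment, e.g. with systemd:
  systemctl restart ceph-mds@x

  # once the mds is active again, scrub and repair starting from the root
  ceph daemon mds.x scrub_path / force repair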
I followed the steps and got the mds started, but now a lot of files (24283) are in lost+found, and I have these errors in the mds log:
2019-11-06 20:20:18.215 7f0bd9090700 1 mds.0.32011 cluster recovered.
2019-11-06 20:20:19.019 7f0bd2dfa700 0 mds.0.cache.dir(0x100013acfcb) _fetched missing object for [dir 0x100013acfcb /nextcloud/custom_apps/carnet/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dc4f5100]
2019-11-06 20:20:19.019 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013acfcb object missing on disk; some files may be lost (/nextcloud/custom_apps/carnet)
2019-11-06 20:20:19.275 7f0bd2dfa700 0 mds.0.cache.dir(0x100013a3156) _fetched missing object for [dir 0x100013a3156 /nextcloud/custom_apps/mail/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dcc40000]
2019-11-06 20:20:19.275 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013a3156 object missing on disk; some files may be lost (/nextcloud/custom_apps/mail)
2019-11-06 20:20:19.371 7f0bd2dfa700 0 mds.0.cache.dir(0x100013abb3c) _fetched missing object for [dir 0x100013abb3c /nextcloud/custom_apps/passwords/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dcc40700]
2019-11-06 20:20:19.371 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013abb3c object missing on disk; some files may be lost (/nextcloud/custom_apps/passwords)
2019-11-06 20:20:19.383 7f0bd2dfa700 0 mds.0.cache.dir(0x100013a9b9b) _fetched missing object for [dir 0x100013a9b9b /nextcloud/custom_apps/phonetrack/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dcc40e00]
2019-11-06 20:20:19.383 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013a9b9b object missing on disk; some files may be lost (/nextcloud/custom_apps/phonetrack)
2019-11-06 20:20:19.431 7f0bd2dfa700 0 mds.0.cache.dir(0x100013a2659) _fetched missing object for [dir 0x100013a2659 /nextcloud/custom_apps/richdocuments/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dcc41500]
2019-11-06 20:20:19.431 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013a2659 object missing on disk; some files may be lost (/nextcloud/custom_apps/richdocuments)
2019-11-06 20:20:22.360 7f0bd9090700 1 mds.k8s-node-01 Updating MDS map to version 32015 from mon.1
> > - Karsten
> > > > Thanks for any hints
> > > > - Karsten