-----Original message-----
From: Yan, Zheng <ukernel(a)gmail.com>
Sent: Wed 06-11-2019 14:16
Subject: Re: [ceph-users] mds crash loop
To: Karsten Nielsen <karsten(a)foo-bar.dk>;
CC: ceph-users(a)ceph.io;
On Wed, Nov 6, 2019 at 4:42 PM Karsten Nielsen
<karsten(a)foo-bar.dk> wrote:
-----Original message-----
From: Yan, Zheng <ukernel(a)gmail.com>
Sent: Wed 06-11-2019 08:15
Subject: Re: [ceph-users] mds crash loop
To: Karsten Nielsen <karsten(a)foo-bar.dk>;
CC: ceph-users(a)ceph.io;
> On Tue, Nov 5, 2019 at 5:29 PM Karsten Nielsen <karsten(a)foo-bar.dk> wrote:
> >
> > Hi,
> >
> > Last week I upgraded my ceph cluster from luminous to mimic 13.2.6.
> > It was running fine for a while but yesterday my mds went into a
> > crash loop.
> >
> > I have 1 active and 1 standby mds for my cephfs, both of which are
> > running the same crash loop.
> > The cluster is running v3.2.7-stable-3.2-mimic-centos-7-x86_64 with
> > an etcd kv store.
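(For reference: after an upgrade like this, the versions the daemons are
actually running and the mds state can be checked with

    ceph versions
    ceph mds stat

neither output was posted in the thread.)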
please try again with debug_mds=20. Thanks
Yan, Zheng
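(For reference, raising the mds debug level as suggested is typically done
at runtime with something like

    ceph tell mds.a injectargs '--debug_mds 20'

or, via the admin socket on the mds host,

    ceph daemon mds.a config set debug_mds 20

where mds.a is only a placeholder for the actual daemon name.)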
Yes I have set that and had to move to pastebin.com as debian
apparently only supports 150k
Looks like the on-disk root inode is corrupted. Have you encountered
any unusual things during the upgrade?
please run 'rados -p <cephfs metadata pool> stat 1.00000000.inode',
check if the object was modified before or after the 'luminous ->
13.2.6' upgrade.
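(Assuming the metadata pool is named cephfs_metadata - the real name varies
per cluster - that check looks like

    rados -p cephfs_metadata stat 1.00000000.inode

and the mtime printed by 'rados stat' is what gets compared against the
upgrade date.)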
The file was modified before the upgrade.
> To fix the corrupted object, run 'cephfs-data-scan init
> --force-init'. Then restart the mds. After the mds becomes active, run
> 'ceph daemon mds.x scrub_path / force repair'.
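(A sketch of that sequence, assuming the active mds is mds.a and runs under
systemd; in a containerised deployment the restart step is a container/pod
restart instead:

    cephfs-data-scan init --force-init
    systemctl restart ceph-mds@a
    ceph daemon mds.a scrub_path / force repair

Note that 'ceph daemon' talks to the local admin socket, so the last command
has to be run on the host where the active mds is running.)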
>
>
> > - Karsten
> >
> > >
> > > > Thanks for any hints
> > > > - Karsten
> > > > _______________________________________________
> > > > ceph-users mailing list -- ceph-users(a)ceph.io
> > > > To unsubscribe send an email to ceph-users-leave(a)ceph.io
> > >
> > >
>
>