-----Original message-----
From: Yan, Zheng <ukernel(a)gmail.com>
Sent: Wed 06-11-2019 08:15
Subject: Re: [ceph-users] mds crash loop
To: Karsten Nielsen <karsten(a)foo-bar.dk>;
CC: ceph-users(a)ceph.io;
On Tue, Nov 5, 2019 at 5:29 PM Karsten Nielsen
<karsten(a)foo-bar.dk> wrote:
Hi,
Last week I upgraded my Ceph cluster from Luminous to Mimic 13.2.6.
It was running fine for a while, but yesterday my MDS went into a crash loop.
I have 1 active and 1 standby MDS for my CephFS, both of which are stuck in the
same crash loop.
v3.2.7-stable-3.2-minic-centos-7-x86_64 with an etcd kv store.
Please try again with debug_mds=20. Thanks.
Yan, Zheng
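For reference, one way to raise the MDS debug level as suggested above is via `ceph tell ... injectargs`. This is a minimal sketch, not taken from the original thread; whether you use `injectargs` or edit ceph.conf depends on your deployment:

```shell
# Raise the log verbosity on all running MDS daemons at once
# (takes effect immediately, does not survive a daemon restart):
ceph tell mds.* injectargs '--debug_mds 20'

# To keep the setting across the crash-loop restarts, it can instead
# be added to ceph.conf under the [mds] section:
#   [mds]
#   debug mds = 20

# Reset to the default once the crash has been captured in the logs:
ceph tell mds.* injectargs '--debug_mds 1/5'
```

Since the daemons here are crash-looping, the ceph.conf route is the more reliable one: an `injectargs` setting is lost each time the MDS restarts.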
Yes, I have set that and had to move to pastebin.com, as the Debian pastebin
apparently only supports 150 kB:
https://pastebin.com/Gv7c5h54
- Karsten
> > Thanks for any hints
> > - Karsten
> > _______________________________________________
> > ceph-users mailing list -- ceph-users(a)ceph.io
> > To unsubscribe send an email to ceph-users-leave(a)ceph.io