We're now hitting this on CentOS 8.4.
The "setmaxosd" workaround restored access to one of our clusters, but
isn't working for another, where we have gaps in the OSD ids, e.g.:
# ceph osd getmaxosd
max_osd = 553 in epoch 691642
# ceph osd tree | sort -n -k1 | tail
541 ssd 0.87299 osd.541 up 1.00000 1.00000
543 ssd 0.87299 osd.543 up 1.00000 1.00000
548 ssd 0.87299 osd.548 up 1.00000 1.00000
552 ssd 0.87299 osd.552 up 1.00000 1.00000
Is there another workaround for this?
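For context, max_osd only needs to be one greater than the highest OSD id in use, so with gaps in the id space it cannot be shrunk below (highest id + 1). A minimal sketch of computing that lower bound from the tree output above (assuming the first column of `ceph osd tree` is the OSD id, and using a captured sample instead of a live cluster):

```shell
# Sample rows in the format shown above (first column is the OSD id).
tree_output='541 ssd 0.87299 osd.541 up 1.00000 1.00000
543 ssd 0.87299 osd.543 up 1.00000 1.00000
548 ssd 0.87299 osd.548 up 1.00000 1.00000
552 ssd 0.87299 osd.552 up 1.00000 1.00000'

# Smallest safe max_osd is (highest OSD id + 1).
max_osd=$(printf '%s\n' "$tree_output" \
  | awk '{ if ($1 + 0 > m) m = $1 } END { print m + 1 }')
echo "$max_osd"   # 553
```

On a live cluster you would pipe `ceph osd tree` in instead of the sample string. Here the result is 553 = 552 + 1, which matches the current `getmaxosd` output exactly, so there is no headroom for the setmaxosd workaround on this cluster.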
On Mon, May 3, 2021 at 12:32 PM Ilya Dryomov <idryomov(a)gmail.com> wrote:
> On Mon, May 3, 2021 at 12:27 PM Magnus Harlander <magnus(a)harlan.de> wrote:
> > Am 03.05.21 um 12:25 schrieb Ilya Dryomov:
> > ceph osd setmaxosd 10
> > Bingo! Mount works again.
> > Veeeery strange things are going on here (-:
> > Thanx a lot for now!! If I can help to track it down, please let me know.
> Good to know it helped! I'll think about this some more and probably
> plan to patch the kernel client to be less stringent and not choke on
> this sort of misconfiguration.