Thank you. The image solved our problem.
From: David Orman <ormandj(a)corenode.com>
Sent: Tuesday, 22 June 2021 17:27:12
To: Jansen, Jan
Subject: Re: [ceph-users] Having issues to start more than 24 OSDs per host
If you're brave (YMMV, test in non-prod first), we pushed an image with
the issue we encountered fixed as per above here:
which you can use to install with.
I'm not sure when the next release is due out (I'm a little confused
why a breaking install/upgrade issue like this has been allowed to
sit), but it should include this fix, as well as others.
On Tue, Jun 22, 2021 at 1:16 AM <Jan.Jansen(a)gdata.de> wrote:
We tried to use cephadm with Podman to start 44 OSDs per host, but it consistently stops
after adding 24 OSDs per host.
We looked into cephadm.log on the problematic host and saw that the command
`cephadm ceph-volume lvm list --format json` got stuck.
We also saw that the output of the command wasn't complete. Therefore, we tried using
compacted JSON, and that let us increase the number to 36 OSDs per host.
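For context, a common way this kind of hang arises (this is an illustrative sketch, not cephadm's actual code): a pipe buffer is finite (~64 KiB on Linux), so a child process emitting a large JSON document, like `ceph-volume lvm list --format json` on a host with many OSDs, blocks on write() if the parent waits on the process before draining stdout. Compacting the JSON shrinks the output, which is why it only raised the limit rather than removing it. Draining stdout concurrently (e.g. `subprocess.run` with `capture_output=True`, or `communicate()`) avoids the stall entirely:

```python
import subprocess
import sys

# Hypothetical child process that emits ~200 KiB, more than a typical
# pipe buffer, standing in for a verbose `ceph-volume lvm list` call.
child = [sys.executable, "-c", "print('x' * 200_000)"]

# Deadlock-prone pattern (do NOT do this): Popen + wait() before
# reading stdout leaves the child blocked writing into a full pipe.
#   p = subprocess.Popen(child, stdout=subprocess.PIPE)
#   p.wait()  # may hang forever on large output

# Safe pattern: run() drains stdout while the child is still writing,
# so output size no longer matters.
result = subprocess.run(child, capture_output=True, text=True, check=True)
print(len(result.stdout))  # the full output arrives intact
```

The same principle applies regardless of language: whoever spawns the subprocess must keep reading its pipes until EOF, not merely wait for exit.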
If you need more information just ask.
Podman version: 3.2.1
Ceph version: 16.2.4
OS version: Suse Leap 15.3
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io