Apparently this release has fixed a race condition where an OSD daemon could be
started before its corresponding OSD directory had been created under /var/lib/ceph/osd/.
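For anyone hitting the same symptom, a rough sanity check might look like the
following. This assumes a package-based (non-cephadm) deployment where each OSD
keeps its data directory under /var/lib/ceph/osd/ceph-<id> and runs as a
ceph-osd@<id> systemd unit; OSD id 0 is just an example:

    # list the OSD data directories on the host; one ceph-<id> entry per OSD is expected
    ls /var/lib/ceph/osd/

    # confirm a given OSD directory actually got populated
    ls /var/lib/ceph/osd/ceph-0/

    # check whether the corresponding daemon came up
    systemctl status ceph-osd@0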
At least my experimental setup of 1 host with 8 SATA HDDs and 1 NVMe is showing
signs of life again.
With Pacific 16.2.0 through 16.2.3, "ceph health detail" showed 5-6 out of 8 OSDs as down,
and "ceph osd df" showed 0 capacity even for OSDs that were "up".
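In case it helps anyone compare notes, the standard status views I relied on
were roughly these (nothing exotic, just the usual ceph CLI):

    ceph health detail   # which OSDs are reported down, and related health warnings
    ceph osd df          # per-OSD capacity and utilization (this is where I saw 0 capacity)
    ceph osd tree        # up/down and in/out state of each OSD in the CRUSH tree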
So now it looks at least as good as it did with Ceph Octopus.
Thanks a lot for your help!
David Galloway wrote:
This is a hotfix release addressing a number of security issues and
regressions. We recommend all users update to this release. For
detailed release notes with links and changelog, please refer to the
official blog entry at
https://ceph.io/releases/v16-2-4-pacific-released
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-16.2.4.tar.gz
* For packages, see https://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 3cbe25cde3cfa028984618ad32de9edc4c1eaed0