Indeed, I think this is yet another incarnation of the "origin of misplaced data is
no longer found" bug:
https://tracker.ceph.com/issues/37439
https://tracker.ceph.com/issues/46847
We also experience it regularly, but I haven't found the cause yet.
Another bug that occurs when adding new OSDs is a cluster hang: the max-PG limit is
hit right at initial startup, even though the number of PGs per OSD then decreases
again as the data rebalances.
https://tracker.ceph.com/issues/48298
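If the limit being hit here is mon_max_pg_per_osd (an assumption on my part -- the
tracker issue above has the details), one workaround sketch is to check the current
value and raise it temporarily while the new OSDs backfill. Commands below are a
hedged example against a live cluster, not a recommendation for specific values:

```shell
# Show the current per-OSD PG limit enforced by the monitors
ceph config get mon mon_max_pg_per_osd

# Temporarily raise it so PG creation isn't blocked while new OSDs fill
# (400 is an illustrative value; pick one appropriate for your hardware)
ceph config set mon mon_max_pg_per_osd 400

# Watch PG-per-OSD counts fall back as backfill proceeds
ceph osd df

# Once the cluster is HEALTH_OK again, revert to the previous value
ceph config rm mon mon_max_pg_per_osd
```

After backfill completes, the per-OSD PG count should drop back under the default
limit, at which point the override can be removed.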
-- JJ
On 01/12/2020 15.54, mj wrote:
> Hi,
>
> We are wondering why adding an OSD to a healthy cluster results in a (very small
> percentage of) "Degraded data redundancy" (0.020%).
>
> We can understand a large percentage of misplaced objects (7.622%).
>
> But since we're adding an OSD to a HEALTH_OK cluster, there should really not be
> any degraded data redundancy, right..?
>
> MJ
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io