If you just destroy the OSD, it won't change the CRUSH weight. Once the
drive is replaced you can recreate the OSD with the same OSD ID.
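A minimal sketch of that destroy-and-replace flow, assuming OSD ID 208 and a placeholder device path (substitute the real device for the new drive):

```shell
# Mark the OSD as destroyed: its auth key and lockbox data are removed,
# but its ID and CRUSH entry are kept so the weight is unchanged.
ceph osd destroy 208 --yes-i-really-mean-it

# After the physical drive is swapped, recreate the OSD reusing the same ID.
# /dev/sdX is a placeholder for the replacement device.
ceph-volume lvm create --osd-id 208 --data /dev/sdX
```

Keeping the same ID avoids a second rebalance from a CRUSH map change, since the map still accounts for the slot.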
On Tue, Aug 27, 2019, 8:53 PM Cory Hawkless <Cory(a)hawkless.id.au> wrote:
I have an OSD that is throwing sense errors – it's at the end of its life
and needs to be replaced.
The server is in the datacentre and I won’t get there for a few weeks so
I’ve stopped the service (systemctl stop ceph-osd@208) and let the
cluster rebalance, all is well.
My thinking is that if, for some reason, the host that OSD 208 resides on
were to reboot, that OSD would start and become part of the cluster again.
So I'd like to prevent this OSD from ever starting again, since I can't
physically remove it from the server yet.
I was thinking that deleting its key from the auth list might work, so a
ceph osd purge 208
Then when the service tries to start, it'll fail with an auth error.
Any other suggestions?
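One way to sketch this, assuming OSD ID 208: disabling (or masking) the systemd unit stops it from starting on reboot without touching the cluster, and purge removes the auth key, CRUSH entry, and OSD entirely:

```shell
# Prevent the unit from starting at boot; mask also blocks manual starts.
systemctl disable ceph-osd@208
systemctl mask ceph-osd@208

# Purge removes the OSD from the CRUSH map, deletes its auth key,
# and removes the OSD entry — any start attempt then fails auth.
# Note: this does trigger a CRUSH weight change and rebalance.
ceph osd purge 208 --yes-i-really-mean-it
```

Disabling the unit is the lighter-touch option if you plan to reuse the OSD ID after the drive swap, since purge removes it from the CRUSH map entirely.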
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io