# ceph orch osd rm status
No OSD remove/replace operations reported
# ceph orch osd rm 232 --replace
Unable to find OSDs: ['232']
It no longer finds OSD 232. The OSD is still shown as down and out in the
Ceph Dashboard.
pgs: 3236 active+clean
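For completeness, these are the commands I would use to double-check what
the cluster still knows about osd.232, assuming the ID has not been purged
from the CRUSH map:
# ceph osd tree | grep -w 232
# ceph osd find 232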
This is the new disk, shown as locked (because it has not been zapped yet):
# ceph orch device ls
ceph-a1-06  /dev/sdm  hdd  TOSHIBA_X_X  16.0T  9m ago  locked
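Once the removal has actually completed and it is safe to wipe this disk,
I assume it can be released for re-use with something like
# ceph orch device zap ceph-a1-06 /dev/sdm --force
but I have left it untouched for now.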
Best
Ken
On 29.01.23 18:19, David Orman wrote:
> What does "ceph orch osd rm status" show before you try the zap? Is
> your cluster still backfilling to the other OSDs for the PGs that were
> on the failed disk?
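> (If it were still backfilling, "ceph status" or "ceph pg stat" should
> show degraded/backfilling PGs rather than everything active+clean.)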
>
> David
>
> On Fri, Jan 27, 2023, at 03:25, mailing-lists wrote:
>> Dear Ceph-Users,
>>
>> I am struggling to replace a disk. My Ceph cluster is not replacing the
>> old OSD even though I ran:
>>
>> ceph orch osd rm 232 --replace
>>
>> OSD 232 is still shown in the OSD list, but the new HDD gets deployed as
>> a brand-new OSD. That wouldn't bother me much if the new OSD also got its
>> BlueStore DB placed on the NVMe, but it doesn't.
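>>
>> For illustration, a drive-group style OSD service spec that places DBs on
>> NVMe would look roughly like this (service id, host pattern and filters
>> below are made-up placeholders), applied with something like
>> "ceph orch apply -i osd-spec.yml":
>>
>> service_type: osd
>> service_id: osd_hdd_with_nvme_db
>> placement:
>>   host_pattern: 'ceph-a1-*'
>> spec:
>>   data_devices:
>>     rotational: 1
>>   db_devices:
>>     rotational: 0
>>
>> With a spec of this shape I would expect a replacement HDD to get its DB
>> on the NVMe again.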
>>
>>
>> My steps:
>>
>> "ceph orch osd rm 232 --replace"
>>
>> remove the failed hdd.
>>
>> add the new one.
>>
>> Convert the disk within the server's BIOS, so that the node has direct
>> access to it.
>>
>> It shows up as /dev/sdt.
>>
>> enter maintenance mode
>>
>> reboot server
>>
>> drive is now /dev/sdm (which the old drive had)
>>
>> "ceph orch device zap node-x /dev/sdm"
>>
>> A new OSD is placed on the cluster.
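>>
>> What I expected instead, roughly: after "ceph orch osd rm 232 --replace"
>> has finished draining, osd.232 should stay in the CRUSH map flagged as
>> "destroyed" (visible via "ceph osd tree"), and zapping the replacement
>> disk should then let the orchestrator recreate it under the same ID.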
>>
>>
>> Can you give me a hint as to where I took a wrong turn? Why is the disk
>> not being used as OSD 232?
>>
>>
>> Best
>>
>> Ken