We've found that after doing the osd rm, you can run
"ceph-volume lvm zap --osd-id 178 --destroy" on the server that holds that
OSD, as per
https://docs.ceph.com/en/latest/ceph-volume/lvm/zap/#removing-devices
and it will clean things up so the replacement works as expected.
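
For reference, a minimal sketch of the whole sequence (the zap is run on the
host that carries osd.178; hdd.yml is the spec file from the message below):

# ceph orch osd rm 178 --replace
# ceph-volume lvm zap --osd-id 178 --destroy
# ceph orch apply osd -i hdd.yml --dry-run

The rm marks the OSD as destroyed and keeps its ID, the zap wipes the data
and DB LVs so ceph-volume sees that space as free again, and the dry-run
should then show the device in the OSDSPEC preview before applying for real.
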
On Tue, May 25, 2021 at 6:51 AM Kai Stian Olstad <ceph+list(a)olstad.com> wrote:
>
> Hi
>
> The server runs 15.2.9 and has 15 HDDs and 3 SSDs.
> The OSDs were created with this YAML file:
>
> hdd.yml
> --------
> service_type: osd
> service_id: hdd
> placement:
>   host_pattern: 'pech-hd-*'
> data_devices:
>   rotational: 1
> db_devices:
>   rotational: 0
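>
> For completeness, a spec like this is applied with the same command as the
> dry-run shown further down, just without the --dry-run flag:
>
> # ceph orch apply osd -i hdd.yml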
>
>
> The result was that the 3 SSDs were put into 1 VG with 15 LVs on it.
>
> # vgs | egrep "VG|dbs"
>   VG                                                   #PV #LV #SN Attr   VSize  VFree
>   ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b    3  15   0 wz--n- <5.24t 48.00m
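>
> To see which of the 15 DB LVs belongs to which OSD, ceph-volume can be run
> on the host (a sketch):
>
> # ceph-volume lvm list
>
> It prints, per OSD, the block and block.db LVs with their ceph.osd_id tags.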
>
>
> One of the OSDs failed and I ran rm with replace
>
> # ceph orch osd rm 178 --replace
>
> and the result is
>
> # ceph osd tree | egrep "ID|destroyed"
> ID   CLASS  WEIGHT    TYPE NAME  STATUS     REWEIGHT  PRI-AFF
> 178  hdd    12.82390  osd.178    destroyed         0  1.00000
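>
> The removal itself can be followed with (a sketch):
>
> # ceph orch osd rm status
>
> which lists the OSDs queued for removal or replacement and the PGs still
> draining.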
>
>
> But I'm not able to replace the disk with the same YAML file as shown
> above.
>
>
> # ceph orch apply osd -i hdd.yml --dry-run
> ################
> OSDSPEC PREVIEWS
> ################
> +---------+------+------+------+----+-----+
> |SERVICE |NAME |HOST |DATA |DB |WAL |
> +---------+------+------+------+----+-----+
> +---------+------+------+------+----+-----+
>
> I guess this is the wrong way to do it, but I can't find the answer in
> the documentation.
> So how can I replace this failed disk in Cephadm?
>
>
> --
> Kai Stian Olstad
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io