Hello team,
I have deployed a Ceph cluster in production. The cluster is composed of two
types of disks, HDD and SSD, and it was deployed using ceph-ansible.
Unfortunately, after deployment only the HDD disks appear, without the SSDs.
I would like to restart the deployment from scratch, but I do not know how
to erase the disks back to their initial state. I tried to format the disks,
but the LVM volumes come back on them:
sda                                                                                                    8:0    0  7.3T  0 disk
└─ceph--da4a5d58--73ef--473b--9960--371f837cb5ed-osd--block--6e800937--c4d2--4fc9--84ca--083c39d057a8  253:1  0  7.3T  0 lvm
sdb                                                                                                    8:16   0  7.3T  0 disk
└─ceph--773f50a1--79ed--4908--8f81--74f85efeb473-osd--block--9737a046--ba8b--4494--91f7--b80dd894df0b  253:7  0  7.3T  0 lvm
sdc                                                                                                    8:32   0  7.3T  0 disk
└─ceph--02000cec--fdbc--4def--967e--a7c32c851964-osd--block--c54d8182--b5e7--4c73--8d7b--7d24c7a3ce15  253:6  0  7.3T  0 lvm
Kindly help me to sort this out.
Best regards
Michel
You need to stop all daemons, remove the mon stores, and wipe the OSDs with
ceph-volume. Find out which OSDs were running on which host
(ceph-volume inventory DEVICE) and run

ceph-volume lvm zap --destroy --osd-id ID

on those hosts.
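The steps above could look roughly like this. This is a hedged sketch, assuming the cluster's data can be discarded and that ceph-ansible laid out the OSDs with ceph-volume lvm; the device name /dev/sda and OSD id 0 are placeholders for your actual devices and ids.

```shell
# 1. On every cluster node: stop all Ceph daemons.
systemctl stop ceph.target

# 2. On each OSD host: see which OSD (if any) lives on a device.
ceph-volume inventory /dev/sda

# 3. Zap the OSD. --destroy also removes the LVM VG/LV and wipes the
#    labels, so the ceph--* LVs no longer reappear in lsblk.
ceph-volume lvm zap --destroy --osd-id 0

# Alternatively, zap by device instead of OSD id:
ceph-volume lvm zap --destroy /dev/sda

# 4. On each monitor node: remove the mon store before redeploying.
#    (Path assumes the default /var/lib/ceph layout.)
rm -rf /var/lib/ceph/mon/*
```

After this, a fresh ceph-ansible run should see clean devices. If zap still leaves LVM metadata behind, wipefs -a on the device is a more blunt fallback.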
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Michel Niyoyita <micou12(a)gmail.com>
Sent: 06 January 2023 16:13:32
To: ceph-users
Subject: [ceph-users] Erasing Disk to the initial state
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io