Ohhhh....
sigh.
Thank you very much.
That actually makes sense, and isn't so bad after all.
Makes me wonder why I got no answers to my related question a couple of weeks ago,
about the proper way to replace an HDD in a failed hybrid OSD.
At least I know now.
You guys might consider a feature request: have the tool check long device paths
getting passed in and, when appropriate, complain to the user, "hey, use the
other syntax".
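A minimal sketch of what such a check might look like, purely as an illustration (the function name and warning text are hypothetical, not part of ceph-volume):

```python
import re

def normalize_lv_spec(spec: str) -> str:
    """Hypothetical check: ceph-volume expects an LV as 'vg/lv' for
    --block.db, but users often pass the '/dev/vg/lv' device path.
    Detect that case, warn, and return the suggested 'vg/lv' form."""
    m = re.match(r"^/dev/(?P<vg>[^/]+)/(?P<lv>[^/]+)$", spec)
    if m:
        suggestion = f"{m.group('vg')}/{m.group('lv')}"
        # A raw disk like /dev/sdg has only one path component after
        # /dev, so it will not match and passes through untouched.
        print(f"warning: {spec!r} looks like a device path; "
              f"did you mean the LV syntax {suggestion!r}?")
        return suggestion
    return spec
```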
----- Original Message -----
From: "Jeff Bailey" <bailey(a)cs.kent.edu>
To: "ceph-users" <ceph-users(a)ceph.io>
Sent: Monday, April 5, 2021 1:00:18 PM
Subject: [ceph-users] Re: bug in ceph-volume create
On 4/5/2021 3:49 PM, Philip Brown wrote:
As soon as you have an HDD fail... you will need to recreate the OSD... and you are then
stuck. Because you can't use batch mode for it...
and you can't do it more granularly, with
ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdg --block.db
/dev/ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd here
This isn't a bug. You're specifying the LV incorrectly. Just use
--block.db ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd
without the /dev at the front.
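Putting that together, the corrected invocation would look something like the following (the VG/LV names are placeholders carried over from the example above, and this can only run against an actual Ceph cluster):

```shell
# Specify the DB logical volume as vg/lv, not as a /dev path:
ceph-volume --cluster ceph lvm create --bluestore \
    --data /dev/sdg \
    --block.db ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd
```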