If such a 'simple' tool as ceph-volume is not working properly, how can I
trust cephadm to be any good? Maybe Ceph development should rethink
pumping out new releases so quickly and take a bit more time for testing.
I am already sticking to the oldest supported version just because of
this.
-----Original Message-----
Cc: ceph-users
Subject: Re: [ceph-users] Re: ceph-volume quite buggy compared to
ceph-disk
Hi Matt, Marc,
I'm using Ceph Octopus with cephadm as the orchestration tool. I've tried
adding OSDs with ceph orch daemon add ... but it's pretty limited. For
one, you can't create a dmcrypt OSD with it, nor can you have a separate
db device. I found that the most reliable way to create OSDs with the
cephadm orchestration tool is via a spec file (i.e. ceph orch apply osd
-i osd.spec). For example, you can ask it to find all the HDDs of a
certain model, size, etc. on a particular host (or hosts) and make them
into OSDs.
Here is a simple spec file:
service_type: osd
service_id: furry-osd
placement:
  host_pattern: 'furry'
data_devices:
  size: '5900G:6000G'
encrypted: true
You can find more info here:
https://docs.ceph.com/en/latest/cephadm/drivegroups/
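As a sketch of what the spec format can express beyond the example above
(the service_id, host pattern, and device filters here are made up, and
'rotational' is one of the filters described in the drivegroups docs), a
spec that puts data on HDDs and the db on SSDs might look like:

service_type: osd
service_id: hdd-with-ssd-db
placement:
  host_pattern: 'storage*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
encrypted: true

This covers both things 'ceph orch daemon add' can't do: dmcrypt and a
separate db device.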
However, this method only works with full disks, not partitions or LVs.
You can use 'ceph orch device ls <host> --refresh' to list all available
disks on a particular host and see why certain disks aren't available.
My understanding of ceph-volume lvm is that it uses LV tags exclusively
to find the block/db/wal devices via LVM. During startup, it uses LVM to
find the OSD block devices, sets up the dmcrypt volume (if required),
creates the proper links, and executes the ceph-osd command. The
existing links in /var/lib/ceph/osd/ would be overridden by the info
from the LV tags.
You can use lvs -o lv_tags on an LV to see all the tags ceph-volume
created for an OSD.
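For example (the VG/LV names below are placeholders, not a real path on
your system), something like:

  lvs -o lv_tags /dev/ceph-<vg>/osd-block-<fsid>

should show tags such as ceph.osd_id, ceph.osd_fsid, ceph.type and
ceph.block_device, and for encrypted OSDs a ceph.encrypted tag, which is
how ceph-volume reassembles the OSD at startup without relying on the
old ceph-disk partition labels.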
Hope it helps.
--Tri Hoang