Thanks. You mean running ‘ceph-volume lvm create’ directly on the target host (not inside a container, the way ‘ceph orch’ does it), right?
And I finally found a hacky way to run my OSD in a container (the full command sequence is sketched after these steps):
1. ceph orch daemon add osd host:/dev/sdX
2. On the target host, stop the just-created OSD service.
3. ‘ceph osd destroy’ the just-created OSD.
4. On the target host, run ‘cephadm shell’, then inside it:
* ceph-volume lvm zap --destroy /dev/sdX
* ceph-volume lvm prepare --data /dev/sdX --block.db vg/lv --osd-id x --osd-fsid xxxx --no-systemd
This replaces the auto-created OSD with my desired config and reuses the previous ID and fsid.
5. On the target host, restart the OSD service.
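For reference, a condensed sketch of the whole sequence (everything in angle brackets is a placeholder; the systemd unit name assumes cephadm’s usual ceph-<cluster-fsid>@osd.<id> naming, and the OSD’s id and fsid can be read from ‘ceph osd dump’ after step 1):

    # step 1: let cephadm create a plain OSD on the device
    ceph orch daemon add osd myhost:/dev/sdX

    # steps 2+3, on the target host: stop the daemon, then mark the OSD
    # destroyed so its id and fsid can be reused
    systemctl stop ceph-<cluster-fsid>@osd.<id>.service
    ceph osd destroy <id> --yes-i-really-mean-it

    # step 4, inside 'cephadm shell' on the target host:
    ceph-volume lvm zap --destroy /dev/sdX
    ceph-volume lvm prepare --data /dev/sdX --block.db vg/lv \
        --osd-id <id> --osd-fsid <fsid> --no-systemd

    # step 5, back on the target host:
    systemctl start ceph-<cluster-fsid>@osd.<id>.service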
I think the OSD created in this way fits better into other ‘ceph orch’ operations. Any
advice on this?
On Oct 2, 2020, at 20:59, Eugen Block <eblock(a)nde.ag> wrote:
Hi,
at the moment single OSDs can only be deployed manually, not with cephadm.
There have been a couple of threads about this on the list; I don't have a link, though.
You'll have to run something like
ceph-volume lvm create --data /dev/sdX --block.db {VG/LV}
Note that for block.db you'll need to provide the volume-group/logical volume, not the device path.
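For example, creating the DB logical volume first and then the OSD (the device paths, VG/LV names and the 60G size below are just placeholders):

    pvcreate /dev/nvme0n1                # SSD that will hold the DB
    vgcreate ceph-db /dev/nvme0n1        # volume group on the SSD
    lvcreate -n db-sdX -L 60G ceph-db    # one LV per OSD block.db
    ceph-volume lvm create --data /dev/sdX --block.db ceph-db/db-sdX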
Regards,
Eugen
Quoting 胡 玮文 <huww98(a)outlook.com>:
Hi all,
I’m new to ceph. I recently deployed a ceph cluster with cephadm. Now I want to add a
single new OSD daemon with a db device on SSD. But I can’t find any documentation about
this.
I have tried:
1. Using the web dashboard. This requires at least one filter to proceed (type, vendor, model or size). But I just want to select the block device manually.
2. Using ‘ceph orch apply osd -i spec.yml’. This is also filter based (an example spec is sketched after this list).
3. Using ‘ceph orch daemon add osd host:device’. It seems I cannot specify my SSD DB device this way.
4. On the target host, running ‘cephadm shell’ and then ceph-volume prepare and activate. But ceph-volume apparently can't create the systemd service outside the container the way ‘ceph orch’ does.
5. On the target host, running ‘cephadm ceph-volume’, but it requires a JSON config file, and I can't figure out what that file should contain.
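For reference, the filter-based spec in point 2 looks something like this, as far as I understand the OSD service spec format (the host name and filter values are made up for illustration), but it gives no way to pick a specific block device by hand:

    cat > spec.yml <<'EOF'
    service_type: osd
    service_id: osd_with_ssd_db
    placement:
      hosts:
        - myhost
    data_devices:
      rotational: 1    # filter: spinning disks become data devices
    db_devices:
      rotational: 0    # filter: SSDs hold the block.db volumes
    EOF
    ceph orch apply osd -i spec.yml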
Any help is appreciated. Thanks.
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io