Hi, trying to migrate a second Ceph cluster to Cephadm. All the hosts migrated
successfully from "legacy" except one of the OSD hosts (cephadm kept duplicating
OSD ids, e.g. two "osd.5"; still not sure why). To make things easier, we
re-provisioned the node (reinstalled from netinstall, applied the same SaltStack
traits as the other nodes, wiped the disks) and tried to use cephadm to set up
the OSDs.
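(For completeness, "wiped" means clearing partition tables and filesystem
signatures; the commands were roughly along these lines, device names
illustrative:
# sgdisk --zap-all /dev/sdb
# sgdisk --zap-all /dev/sdc
)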
So, orch correctly starts the provisioning process (a docker container running
ceph-volume is created), but the provisioning never completes. From a docker
exec shell inside the container:
# ps axu
root 1 0.1 0.2 99272 22488 ? Ss 15:26 0:01
/usr/libexec/platform-python -s /usr/sbin/ceph-volume lvm batch --no-auto /dev/sdb
/dev/sdc --dmcrypt --yes --no-systemd
root 807 0.9 0.5 154560 44120 ? S<L 15:26 0:06 /usr/sbin/cryptsetup
--key-file - --allow-discards luksOpen
/dev/ceph-851cae40-3270-45ea-b788-be6e05465e92/osd-data-e3157b54-f6b9-4ec9-ab12-e289f52c00a4
Afr6Ct-ok4h-pBEy-GfFF-xxYl-EKwi-cHhjZc
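Note that with "--key-file -" cryptsetup reads the passphrase from stdin, so
ceph-volume has to write the key into the pipe; if it never does, luksOpen
blocks forever. Inside the container, something like this (807 being the
cryptsetup PID from the ps output above) should show whether it is stuck
reading stdin:
# ls -l /proc/807/fd/0
# cat /proc/807/wchan; echo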
# cat /var/log/ceph/ceph-volume.log
Running command: /usr/sbin/cryptsetup --batch-mode --key-file - luksFormat
/dev/ceph-851cae40-3270-45ea-b788-be6e05465e92/osd-data-e3157b54-f6b9-4ec9-ab12-e289f52c00a4
Running command: /usr/sbin/cryptsetup --key-file - --allow-discards luksOpen
/dev/ceph-851cae40-3270-45ea-b788-be6e05465e92/osd-data-e3157b54-f6b9-4ec9-ab12-e289f52c00a4
Afr6Ct-ok4h-pBEy-GfFF-xxYl-EKwi-cHhjZc
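One detail: the luksFormat right before it also reads the key from stdin and
apparently completed, so the key pipe worked at least once. To rule out the
container environment, a standalone repro of the same two calls might help
(scratch file, loop device, and key are all made up here):
# dd if=/dev/zero of=/tmp/luks-test.img bs=1M count=64
# losetup /dev/loop9 /tmp/luks-test.img
# echo -n secret | cryptsetup --batch-mode --key-file - luksFormat /dev/loop9
# echo -n secret | cryptsetup --key-file - --allow-discards luksOpen /dev/loop9 luks-test
If that luksOpen also hangs inside the container, the problem would be the
runtime environment rather than ceph-volume itself.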
# docker ps
2956dec0450d ceph/ceph:v15 "/usr/sbin/ceph-volu…" 14
minutes ago Up 14 minutes condescending_nightingale
# cat osd_spec_default.yaml
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  all: true
encrypted: true
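(The spec above is applied with the usual command, something like:
# ceph orch apply osd -i osd_spec_default.yaml
)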
It looks like cephadm hangs on luksOpen.
Is this expected? Encryption is mentioned as supported, but there is next to no
documentation for it.