Hi Igor,
mhm, I updated the missing LV tags:
# lvs -o lv_tags /dev/ceph-3a295647-d5a1-423c-81dd-1d2b32d7c4c5/osd-block-c2676c5f-111c-4603-b411-473f7a7638c2 | tr ',' '\n' | sort
LV Tags
ceph.block_device=/dev/ceph-3a295647-d5a1-423c-81dd-1d2b32d7c4c5/osd-block-c2676c5f-111c-4603-b411-473f7a7638c2
ceph.block_uuid=0wBREi-I5t1-UeUa-EvbA-sET0-S9O0-VaxOgg
ceph.cephx_lockbox_secret=
ceph.cluster_fsid=7e242332-55c3-4926-9646-149b2f5c8081
ceph.cluster_name=ceph
ceph.crush_device_class=None
ceph.db_device=/dev/bluefs_db1/db-osd0
ceph.db_uuid=UUw35K-YnNT-HZZE-IfWd-Rtxn-0eVW-kTuQmj
ceph.encrypted=0
ceph.osd_fsid=c2676c5f-111c-4603-b411-473f7a7638c2
ceph.osd_id=0
ceph.type=block
ceph.vdo=0
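(For the record, this is roughly how I re-added them, using plain LVM tools; a sketch only, with the device path and tag values taken from the listing above. The assumption that these tags match what ceph-volume writes at provisioning time is mine. The echo makes it a dry run.)

```shell
# Re-add the missing db tags on the block LV with plain lvchange.
# Assumption: these tags are equivalent to what ceph-volume sets itself.
# Dry run: drop the "echo" to actually apply the tags.
block_lv=/dev/ceph-3a295647-d5a1-423c-81dd-1d2b32d7c4c5/osd-block-c2676c5f-111c-4603-b411-473f7a7638c2
for tag in \
    ceph.db_device=/dev/bluefs_db1/db-osd0 \
    ceph.db_uuid=UUw35K-YnNT-HZZE-IfWd-Rtxn-0eVW-kTuQmj
do
  echo lvchange --addtag "$tag" "$block_lv"
done
```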
# lvdisplay /dev/bluefs_db1/db-osd0
--- Logical volume ---
LV Path /dev/bluefs_db1/db-osd0
LV Name db-osd0
VG Name bluefs_db1
LV UUID UUw35K-YnNT-HZZE-IfWd-Rtxn-0eVW-kTuQmj
LV Write Access read/write
LV Creation host, time cloud10-1517, 2020-02-28 21:32:48 +0100
LV Status available
# open 0
LV Size 185,00 GiB
Current LE 47360
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
but ceph-volume lvm trigger still says:
# /usr/sbin/ceph-volume lvm trigger 0-c2676c5f-111c-4603-b411-473f7a7638c2
--> RuntimeError: could not find db with uuid UUw35K-YnNT-HZZE-IfWd-Rtxn-0eVW-kTuQmj
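If anyone wants to cross-check a node, here is a quick sketch that pulls the db uuid out of the tag string so it can be compared with the real LV UUID (parse_db_uuid is just a helper name I made up; the live lvs calls in the comment are the usual ones):

```shell
# Helper: extract ceph.db_uuid from a comma-separated lv_tags string.
parse_db_uuid() {
  printf '%s\n' "$1" | tr ',' '\n' | sed -n 's/^ceph\.db_uuid=//p'
}

# On a live node you would feed it real output, e.g.:
#   tags=$(lvs --noheadings -o lv_tags "$block_lv")
#   parse_db_uuid "$tags"   # must equal: lvs --noheadings -o lv_uuid /dev/bluefs_db1/db-osd0
parse_db_uuid "ceph.db_device=/dev/bluefs_db1/db-osd0,ceph.db_uuid=UUw35K-YnNT-HZZE-IfWd-Rtxn-0eVW-kTuQmj"
# prints: UUw35K-YnNT-HZZE-IfWd-Rtxn-0eVW-kTuQmj
```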
Best regards,
Stefan Priebe
Bachelor of Science in Computer Science (BSCS)
Member of the Board (CTO)
-------------------------------
Profihost AG
Expo Plaza 1
30539 Hannover
Germany
Tel.: +49 (511) 5151 8181 | Fax.: +49 (511) 5151 8282
URL: http://www.profihost.com | E-Mail: info(a)profihost.com
Registered office: Hannover, VAT ID: DE813460827
Commercial register: Amtsgericht Hannover, registration no.: HRB 202350
Management board: Cristoph Bluhm, Stefan Priebe
Supervisory board: Prof. Dr. iur. Winfried Huck (chairman)
On 21.04.20 at 16:07, Igor Fedotov wrote:
> On 4/21/2020 4:59 PM, Stefan Priebe - Profihost AG wrote:
>> Hi Igor,
>>
>> On 21.04.20 at 15:52, Igor Fedotov wrote:
>>> Hi Stefan,
>>>
>>> I think that's the cause:
>>>
>>>
>>> https://tracker.ceph.com/issues/42928
>> Thanks, yes, that matches. Is there any way to fix this manually?
>
> I think so - AFAIK missed tags are pure LVM stuff and hence can be set
> by regular LVM tools.
>
> ceph-volume does that during OSD provisioning as well. But unfortunately
> I haven't dived into this topic deeper yet, so I can't provide you with
> step-by-step details on how to fix this.
>
>>
>> And is this also related to:
>>
>> https://tracker.ceph.com/issues/44509
>
> Probably unrelated. That's either a different bug or rather some
> artifact from RocksDB/BlueFS interaction.
>
> Leaving a request for more info in the ticket...
>
>>
>> Greets,
>> Stefan
>>
>>> On 4/21/2020 4:02 PM, Stefan Priebe - Profihost AG wrote:
>>>> Hi there,
>>>>
>>>> I've a bunch of hosts where I migrated HDD-only OSDs to hybrid ones
>>>> using:
>>>> sudo -E -u ceph -- bash -c 'ceph-bluestore-tool --path
>>>> /var/lib/ceph/osd/ceph-${OSD} bluefs-bdev-new-db --dev-target
>>>> /dev/bluefs_db1/db-osd${OSD}'
>>>>
>>>> While this worked fine and each OSD was running fine, the OSD loses its
>>>> block.db symlink after a reboot.
>>>>
>>>> If I manually recreate the block.db symlink inside:
>>>> /var/lib/ceph/osd/ceph-*
>>>>
>>>> all OSDs start fine. Can anybody tell me what creates those symlinks
>>>> and why they're not created automatically in the case of a migrated db?
>>>>
>>>> Greets,
>>>> Stefan
>>>> _______________________________________________
>>>> ceph-users mailing list -- ceph-users(a)ceph.io
>>>> To unsubscribe send an email to ceph-users-leave(a)ceph.io