OK, attachments won't work.
See this:
https://filebin.net/t0p7f1agx5h6bdje
Best
Ken
On 01.02.23 17:22, mailing-lists wrote:
> I've pulled a few lines from the log and attached them to this
> mail. (I hope this works for this mailing list?)
>
>
> I found line 135:
>
> [2023-01-26 16:25:00,785][ceph_volume.process][INFO ] stdout
>
ceph.block_device=/dev/ceph-808efc2a-54fd-47cc-90e2-c5cc96bdd825/osd-block-2a1d1bf0-300e-4160-ac55-047837a5af0b,ceph.block_uuid=b4WDQQ-eMTb-AN1U-D7dk-yD2q-4dPZ-KyFrHi,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8038f09a-27a0-11ed-8de8-55262cdd5a37,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2a1d1bf0-300e-4160-ac55-047837a5af0b,ceph.osd_id=232,ceph.osdspec_affinity=dashboard-admin-1661788934732,ceph.type=wal,ceph.vdo=0,ceph.wal_device=/dev/ceph-3a336b8e-ed39-4532-a199-ac6a3730840b/osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c,ceph.wal_uuid=dquBMJ-s8ou-Wp6M-NY8Z-QoFh-6L4b-9Lwqm0";"/dev/ceph-3a336b8e-ed39-4532-a199-ac6a3730840b/osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c";"osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c";"ceph-3a336b8e-ed39-4532-a199-ac6a3730840b";"dquBMJ-s8ou-Wp6M-NY8Z-QoFh-6L4b-9Lwqm0";"355622453248
>
>
> This indicates that this OSD is in fact using a WAL. Since WAL and DB
> should both be on the NVMe, I would guess it is just a visual bug in
> the dashboard?
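>
> For what it's worth, those ceph.* entries are LVM tags that
> ceph-volume sets on the OSD's logical volumes; a rough way to read
> them directly on the OSD host, grepping for this OSD's id, would be:
>
> lvs -o lv_name,vg_name,lv_tags | grep ceph.osd_id=232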
>
>
> From line 135:
>
>
> ceph.wal_device=/dev/ceph-3a336b8e-ed39-4532-a199-ac6a3730840b/osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c
>
>
> From lsblk:
>
>
> ├─ceph--3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--5d845dba--8b55--4984--890b--547fbdaff10c  253:12  0 331.2G  0 lvm
>
>
> So it looks like it is using that LVM volume right there. Yet, the
> dashboard doesn't show an NVMe. (Please compare the screenshots
> osd_232.png and osd_218.png.)
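>
> A quick way to confirm which physical device actually backs that WAL
> LV, just a sketch using the VG name from the lsblk output above:
>
> lvs -o lv_name,vg_name,devices ceph-3a336b8e-ed39-4532-a199-ac6a3730840b
>
> The devices column should point at the underlying NVMe device if the
> WAL really sits on it.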
>
>
> Can I somehow confirm that my OSD 232 is really using the NVMe as
> WAL/DB?
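>
> Would something like this be the right way to check it, the first from
> anywhere in the cluster and the second on the OSD host? I am not sure
> whether the metadata field names are the same on every release.
>
> ceph osd metadata 232 | grep -E 'bluefs|devices'
> cephadm ceph-volume lvm list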
>
>
> Thanks and best regards
>
> Ken
>
>
>
> On 01.02.23 10:35, Guillaume Abrioux wrote:
>> Any chance you can share the ceph-volume.log (from the corresponding
>> host)?
>> It should be in /var/log/ceph/<cluster fsid>/ceph-volume.log. Note
>> that there might be several log files (log rotation). Ideally, the
>> one that includes the recreation steps.
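>>
>> If the file has already been rotated, a grep across all copies for
>> the OSD in question should find the relevant lines, something like:
>>
>> grep ceph.osd_id=232 /var/log/ceph/<cluster fsid>/ceph-volume.log*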
>>
>> Thanks,
>>
>> On Wed, 1 Feb 2023 at 10:13, mailing-lists <mailing-lists(a)indane.de>
>> wrote:
>>
>> Ah, nice.
>>
>> service_type: osd
>> service_id: dashboard-admin-1661788934732
>> service_name: osd.dashboard-admin-1661788934732
>> placement:
>>   host_pattern: '*'
>> spec:
>>   data_devices:
>>     model: MG08SCA16TEY
>>   db_devices:
>>     model: Dell Ent NVMe AGN MU AIC 6.4TB
>>   filter_logic: AND
>>   objectstore: bluestore
>>   wal_devices:
>>     model: Dell Ent NVMe AGN MU AIC 6.4TB
>> status:
>>   created: '2022-08-29T16:02:22.822027Z'
>>   last_refresh: '2023-02-01T09:03:22.853860Z'
>>   running: 306
>>   size: 306
>>
>> Best
>>
>> Ken
>>
>> On 31.01.23 23:51, Guillaume Abrioux wrote:
>>> On Tue, 31 Jan 2023 at 22:31, mailing-lists
>>> <mailing-lists(a)indane.de> wrote:
>>>
>>> I am not sure. I didn't find it... It should be somewhere, right?
>>> I used the dashboard to create the OSD service.
>>>
>>>
>>> What does a `cephadm shell -- ceph orch ls osd --format yaml` say?
>>>
>>> --
>>> Guillaume Abrioux
>>> Senior Software Engineer
>>
>>
>>
>> --
>> Guillaume Abrioux
>> Senior Software Engineer
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io