OK, thanks, so that's how it works.
I didn't expect it to fail just because the service was still running.
11.11.2019, 15:54, "Igor Fedotov" <ifedotov@suse.de>:
> On 11/11/2019 3:51 PM, Andrey Groshev wrote:
>> Hi, Igor!
>> Service is UP.
>
> The running OSD daemon is what prevents ceph-bluestore-tool from starting;
> you should shut it down first.
>
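> As a minimal sketch (assuming osd.8 is the systemd-managed unit, as your
> status output below suggests):
>
> # systemctl stop ceph-osd@8
> # ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-8
> # systemctl start ceph-osd@8
>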
>> I did not make separate devices.
>> block.db and block.wal are created only if they are on separate devices?
>
> In general, yes. They only make sense when using separate devices...
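>
> For reference, deploying with a standalone DB device would look roughly
> like this (device names here are placeholders):
>
> # ceph-volume lvm create --bluestore --data /dev/sde --block.db /dev/nvme0n1
>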
>> # systemctl status ceph-osd@8.service
>> ● ceph-osd@8.service - Ceph object storage daemon osd.8
>>    Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
>>    Active: active (running) since Sat 2019-11-09 15:56:28 MSK; 1 day 23h ago
>>   Process: 2676 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
>>  Main PID: 2707 (ceph-osd)
>>    CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@8.service
>>            └─2707 /usr/bin/ceph-osd -f --cluster ceph --id 8 --setuser ceph --setgroup ceph
>>
>> Nov 09 15:56:28 test-host5 systemd[1]: Starting Ceph object storage daemon osd.8...
>> Nov 09 15:56:28 test-host5 systemd[1]: Started Ceph object storage daemon osd.8.
>>
>> 11.11.2019, 15:18, "Igor Fedotov" <ifedotov@suse.de>:
>>> Hi Andrey,
>>>
>>> This log output looks like some other process is using
>>> /var/lib/ceph/osd/ceph-8.
>>>
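>>> As a quick check (fuser is from psmisc; lsof would work too), something
>>> like this should show which process holds the lock:
>>>
>>> # fuser -v /var/lib/ceph/osd/ceph-8/block
>>>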
>>> Have you stopped the osd.8 daemon?
>>>
>>> And are you sure you deployed standalone DB/WAL devices for this OSD?
>>>
>>> Thanks,
>>>
>>> Igor
>>>
>>> On 11/11/2019 3:10 PM, Andrey Groshev wrote:
>>>> Hello,
>>>>
>>>> Some time ago I deployed a ceph cluster.
>>>> It works great.
>>>> Today I was collecting some statistics and found that the BlueFS
>>>> tooling in ceph-bluestore-tool is not working.
>>>>
>>>> # ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-8
>>>> inferring bluefs devices from bluestore path
>>>> slot 1 /var/lib/ceph/osd/ceph-8/block -> /dev/dm-5
>>>> unable to open /var/lib/ceph/osd/ceph-8/block: (11) Resource temporarily unavailable
>>>> 2019-11-11 15:03:30.665 7f4b9a427f00 -1 bdev(0x55d5b0310a80 /var/lib/ceph/osd/ceph-8/block) _lock flock failed on /var/lib/ceph/osd/ceph-8/block
>>>> 2019-11-11 15:03:30.665 7f4b9a427f00 -1 bdev(0x55d5b0310a80 /var/lib/ceph/osd/ceph-8/block) open failed to lock /var/lib/ceph/osd/ceph-8/block: (11) Resource temporarily unavailable
>>>>
>>>> As far as I understand, block.db and block.wal are missing. I don't
>>>> know how that happened.
>>>>
>>>> # ls -l /var/lib/ceph/osd/ceph-8
>>>> total 28
>>>> lrwxrwxrwx 1 ceph ceph 93 Nov 9 15:56 block -> /dev/ceph-55b8a53d-1740-402a-b6f4-09d4befdd564/osd-block-c5488db7-621a-490a-88a0-904c12e8b8ed
>>>> -rw------- 1 ceph ceph 37 Nov 9 15:56 ceph_fsid
>>>> -rw------- 1 ceph ceph 37 Nov 9 15:56 fsid
>>>> -rw------- 1 ceph ceph 55 Nov 9 15:56 keyring
>>>> -rw------- 1 ceph ceph 6 Nov 9 15:56 ready
>>>> -rw-r--r-- 1 ceph ceph 3 Nov 9 15:56 require_osd_release
>>>> -rw------- 1 ceph ceph 10 Nov 9 15:56 type
>>>> -rw------- 1 ceph ceph 2 Nov 9 15:56 whoami
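>>>>
>>>> If I understand correctly, "ceph osd metadata 8" should also show
>>>> whether DB/WAL share the main device (the bluefs_* keys):
>>>>
>>>> # ceph osd metadata 8 | grep bluefs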
>>>>
>>>> I deployed the OSDs in the standard way:
>>>> ....
>>>> ceph-deploy osd create test-host1:/dev/sde
>>>> ceph-deploy osd create test-host2:/dev/sde
>>>> ....
>>>>
>>>> What should I do now, and does it need to be repaired at all?
>>>> The cluster was built on Luminous and updated to Nautilus yesterday.
>>>>