Hi Andrey,
this log output rather looks like some other process is using
/var/lib/ceph/osd/ceph-8.
Have you stopped the OSD.8 daemon?
And are you sure you deployed standalone DB/WAL devices for this OSD?
Thanks,
Igor
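
P.S. The "(11) Resource temporarily unavailable" error is what flock
returns when another process (most likely the still-running OSD daemon)
already holds the lock on the block device. A minimal sketch of the same
conflict on a scratch file (the paths and the systemctl unit name below
are illustrative, not taken from your cluster):

```shell
# Reproduce the "(11) Resource temporarily unavailable" flock conflict
# on a throwaway file -- NOT the actual OSD block device.
f=$(mktemp)

# Hold an exclusive lock in the background for a few seconds,
# standing in for the running ceph-osd daemon.
flock "$f" sleep 3 &
sleep 0.5

# A second, non-blocking lock attempt now fails, just as
# ceph-bluestore-tool's open of .../block fails in your log.
if ! flock -n "$f" true; then
    echo "lock held by another process"
fi

wait
rm -f "$f"
# Against a real OSD you would first stop the daemon, e.g.:
#   systemctl stop ceph-osd@8     (unit name assumed, check your host)
# and then re-run ceph-bluestore-tool.
```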
On 11/11/2019 3:10 PM, Andrey Groshev wrote:
> Hello,
>
> Some time ago I deployed a ceph cluster.
> It works great.
> Today I collected some statistics and found that the BlueFS tool is not working.
>
> # ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-8
> inferring bluefs devices from bluestore path
> slot 1 /var/lib/ceph/osd/ceph-8/block -> /dev/dm-5
> unable to open /var/lib/ceph/osd/ceph-8/block: (11) Resource temporarily unavailable
> 2019-11-11 15:03:30.665 7f4b9a427f00 -1 bdev(0x55d5b0310a80 /var/lib/ceph/osd/ceph-8/block) _lock flock failed on /var/lib/ceph/osd/ceph-8/block
> 2019-11-11 15:03:30.665 7f4b9a427f00 -1 bdev(0x55d5b0310a80 /var/lib/ceph/osd/ceph-8/block) open failed to lock /var/lib/ceph/osd/ceph-8/block: (11) Resource temporarily unavailable
>
> As far as I understand, block.db and block.wal are missing. I don't know how that happened.
>
> # ls -l /var/lib/ceph/osd/ceph-8
> total 28
> lrwxrwxrwx 1 ceph ceph 93 Nov 9 15:56 block -> /dev/ceph-55b8a53d-1740-402a-b6f4-09d4befdd564/osd-block-c5488db7-621a-490a-88a0-904c12e8b8ed
> -rw------- 1 ceph ceph 37 Nov 9 15:56 ceph_fsid
> -rw------- 1 ceph ceph 37 Nov 9 15:56 fsid
> -rw------- 1 ceph ceph 55 Nov 9 15:56 keyring
> -rw------- 1 ceph ceph 6 Nov 9 15:56 ready
> -rw-r--r-- 1 ceph ceph 3 Nov 9 15:56 require_osd_release
> -rw------- 1 ceph ceph 10 Nov 9 15:56 type
> -rw------- 1 ceph ceph 2 Nov 9 15:56 whoami
>
> I deployed it in the standard way:
> ....
> ceph-deploy osd create test-host1:/dev/sde
> ceph-deploy osd create test-host2:/dev/sde
> ....
>
> What should I do now, and does this need to be repaired at all?
> The cluster was built on Luminous and updated to Nautilus yesterday.
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io