If the OSD daemon dies, it will have closed all of its fds, and the
flock will have been released along with them. Therefore you almost
certainly have some other process running that is holding the lock.
You may have to do a bit of digging in /proc/locks. Determine the
dev+inode number of the file on which the lock is being set and find it
in /proc/locks. Then you can track down the PID that's holding that
lock.
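The dev+inode lookup above can be sketched as a small standalone helper (hypothetical, not part of Ceph). It builds the maj:min:inode key in the same "%02x:%02x:%lu" format the kernel uses in /proc/locks (major/minor in hex, inode in decimal) and scans the lock table for matching holders:

```cpp
#include <sys/stat.h>
#include <sys/sysmacros.h>
#include <cstdio>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Build the "maj:min:inode" key that /proc/locks uses for a file,
// matching the kernel's "%02x:%02x:%lu" formatting.
std::string devino_key(dev_t dev, ino_t ino)
{
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%02x:%02x:%lu",
                  major(dev), minor(dev), (unsigned long)ino);
    return buf;
}

// Scan a /proc/locks-style stream and collect the PIDs holding a lock
// on the file identified by key. A typical line looks like:
//   2: FLOCK  ADVISORY  WRITE 2001 08:01:7864654 0 EOF
// i.e. id, type, ADVISORY/MANDATORY, access, pid, maj:min:inode, range.
std::vector<long> pids_holding(const std::string &key, std::istream &locks)
{
    std::vector<long> pids;
    std::string line;
    while (std::getline(locks, line)) {
        std::istringstream iss(line);
        std::string id, type, mode, access, pid, devino;
        if (iss >> id >> type >> mode >> access >> pid >> devino &&
            devino == key)
            pids.push_back(std::stol(pid));
    }
    return pids;
}
```

To use it against the stuck device: stat(2) the path, then open a named `std::ifstream locks("/proc/locks")` and call `pids_holding(devino_key(st.st_dev, st.st_ino), locks)` to get the PIDs to chase down.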
Cheers,
Jeff
On Wed, 2020-02-12 at 09:03 -0800, Yiming Zhang wrote:
The weird thing is I don’t have systemd-udev installed
on my server.
Are there any other possible solutions?
The error only happens when I redirect osd data to a raw device.
Thanks,
Yiming
On Feb 12, 2020, at 8:36 AM, Sage Weil
<sage(a)newdream.net> wrote:
Talib was chasing down a similar issue a while back and found that the
root cause was systemd-udev, which spawns a process that opens the device
after it is closed. You might try removing or disabling that package and
seeing whether the problem goes away.
On Wed, 12 Feb 2020, Yiming Zhang wrote:
> Hi All,
>
> I noticed a locking issue in kernel device.
> When I stopped the ceph cluster and all daemons, the kernel device _lock is
> somehow still held, and the line below returns r < 0:
>
> int KernelDevice::_lock()
> {
>   int r = ::flock(fd_directs[WRITE_LIFE_NOT_SET], LOCK_EX | LOCK_NB);
>   …
> }
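The failure mode in the quoted _lock() is easy to reproduce outside Ceph: flock(2) treats separately opened file descriptions as independent, so a non-blocking exclusive lock attempt fails with EWOULDBLOCK (errno 11 on Linux, reported as "Resource temporarily unavailable") while another open file description holds the lock. A minimal sketch, with a hypothetical function name and scratch path standing in for the OSD block device:

```cpp
#include <sys/file.h>
#include <fcntl.h>
#include <unistd.h>
#include <cerrno>

// Returns the errno from the failed non-blocking flock (expected
// EWOULDBLOCK), or 0 if the second lock unexpectedly succeeded.
int flock_conflict_demo(const char *path)
{
    int fd1 = open(path, O_CREAT | O_RDWR, 0600);
    int fd2 = open(path, O_RDWR);   // second, independent file description
    if (fd1 < 0 || fd2 < 0)
        return -1;

    flock(fd1, LOCK_EX);            // fd1's description now holds the lock
    int err = 0;
    if (flock(fd2, LOCK_EX | LOCK_NB) < 0)
        err = errno;                // the r < 0 path in KernelDevice::_lock()

    flock(fd1, LOCK_UN);            // releasing the holder lets a retry succeed
    close(fd1);
    close(fd2);
    unlink(path);
    return err;
}
```

Because the lock travels with the open file description, it disappears when the holding process closes its fd or exits; if the error persists, some process still has the device open.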
>
> The way I stop the cluster and daemons:
>
> sudo ../src/stop.sh
> sudo bin/init-ceph --verbose forcestop
>
> This error happens even after the reboot when I try to use vstart:
>
> bdev _lock flock failed on ceph/build/dev/osd0/block
> bdev open failed to lock /home/yzhan298/ceph/build/dev/osd0/block: (11) Resource temporarily unavailable
> OSD::mkfs: couldn't mount ObjectStore: error (11) Resource temporarily unavailable
> ** ERROR: error creating empty object store in ceph/build/dev/osd0: (11) Resource temporarily unavailable
>
>
> Please advise. (This is on the master branch.)
>
> Thanks,
> Yiming
> _______________________________________________
> Dev mailing list -- dev(a)ceph.io
> To unsubscribe send an email to dev-leave(a)ceph.io