On Fri, 2019-10-11 at 11:02 -0700, Yiming Zhang wrote:
Hi Sage and Sam,
Rebooting worked, but only the first time. After the reboot, the first vstart run was
fine, but the second run failed after the stop command was called. I then rebooted
again, and this time the error persisted.
I checked for osd and other ceph processes and didn't see any still active on the system.
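Roughly, the checks I ran looked like this (from memory; exact command forms are my
paraphrase, and they assume pgrep and lsof are installed):

    pgrep -a -f 'ceph-(osd|mon|mds|mgr)'   # any leftover ceph daemons?
    sudo lsof /dev/sda                     # anything holding the raw device?
    sudo lsof /users/yzhan298/ceph/build/dev/osd0/block

All of them came back empty.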
Any thoughts?
Thanks,
Yiming
> On Oct 10, 2019, at 5:02 AM, Sage Weil <sweil(a)redhat.com> wrote:
>
> On Wed, 9 Oct 2019, Yiming Zhang wrote:
> > Hi Sage,
> >
> > I ran into an error when trying to use vstart to create a cluster running on a
> > raw device. Here is my vstart command:
> >
> > sudo MON=1 OSD=1 MDS=0 ../src/vstart.sh -b -d -n -x -l \
> >     -o 'bluestore block path = /dev/sda' \
> >     -o 'bluestore fsck on mkfs = false' \
> >     -o 'bluestore fsck on mount = false' \
> >     -o 'bluestore fsck on umount = false' \
> >     -o 'bluestore block db path = ' \
> >     -o 'bluestore block wal path = ' \
> >     -o 'bluestore block wal create = false' \
> >     -o 'bluestore block db create = false' \
> >     -o 'bluefs preextend wal files = true'
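> >
> > (Assuming vstart writes these overrides into the generated ceph.conf that
> > ceph-osd is pointed at below, a quick way to confirm they took effect:
> >
> >     grep -iE 'bluestore|bluefs' /users/yzhan298/ceph/build/ceph.conf
> >
> > which should list each of the options above.)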
> >
> > And here is the error output:
> > /users/ceph/build/bin/ceph-osd -i 0 -c /users/yzhan298/ceph/build/ceph.conf
> > 7f34590d1d80 -1 Falling back to public interface
> > 7f34590d1d80 -1 bdev(0x562ce4c72000 /users/yzhan298/ceph/build/dev/osd0/block) _lock flock failed on /users/yzhan298/ceph/build/dev/osd0/block
> > 7f34590d1d80 -1 bdev(0x562ce4c72000 /users/yzhan298/ceph/build/dev/osd0/block) open failed to lock /users/yzhan298/ceph/build/dev/osd0/block: (11) Resource temporarily unavailable
> > 7f34590d1d80 -1 osd.0 0 OSD:init: unable to mount object store
> > 7f34590d1d80 -1 ** ERROR: osd init failed: (11) Resource temporarily unavailable
>
These locks should go away when the file descriptor on which they were
acquired is closed. When you see this, what does this command show (run
as root)?
# lsof /users/yzhan298/ceph/build/dev/osd0/block
That should tell you if anyone else has it open. Also, if you know the dev/ino
combination for the block device, you can try to track the lock down in
/proc/locks.
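Something along these lines should find it (untested sketch; it assumes coreutils
stat and that /proc/locks prints each lock's device and inode as MAJ:MIN:INODE,
with the inode in decimal):

    # inode of the osd block file
    ino=$(stat -c %i /users/yzhan298/ceph/build/dev/osd0/block)
    # match the INODE component of the MAJ:MIN:INODE field
    grep ":${ino} " /proc/locks

A matching line also carries the PID that acquired the lock, which should point
at whatever is still holding the device.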
--
Jeff Layton <jlayton(a)redhat.com>