Hi,
we have also seen such cases. It seems that sometimes (when the controller or device is
broken in particular ways), device mapper keeps the volume locked.
You can check as follows:
1) Check if lvs / pvs / vgs show an undefined device. In that case, you may have to
flush the lvmetad cache:
pvscan --cache
Check again if zapping works after this, if not, continue.
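For reference, a stale PV typically shows up roughly like this in pvs (the PV UUID,
VG name, and sizes below are made up for illustration; the exact warning wording
depends on your LVM version):
$ pvs
  WARNING: Device for PV AbCdEf-1234-... not found or rejected by a filter.
  PV         VG                                          Fmt  Attr PSize  PFree
  [unknown]  ceph-1f6780e6-b120-4876-b674-aa3337847114   lvm2 a-m  <1.82t    0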
2) Check the major and minor numbers of the device:
$ ls -la /dev/sdi
brw-rw----. 1 root disk 66, 96 13. Jan 18:14 /dev/sdi
=> In this case, it would be 66:96
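As a small shortcut (not part of the original steps, but lsblk supports this column),
you can also print the numbers directly:
$ lsblk -o NAME,MAJ:MIN /dev/sdi
NAME MAJ:MIN
sdi   66:96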
3) Check if device mapper still has it locked:
$ dmsetup ls --tree
...
ceph--1f6780e6--b120--4876--b674--aa3337847114-osd--block--1325f49b--fead--40ba--957e--ec6b2968d456 (253:1)
 └─ (66:96)
...
=> In this case, it is still mapped!
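If the full tree is noisy on a host with many OSDs, a sketch of an alternative via
sysfs gives the same answer for a single disk (the dm-1 entry here is illustrative;
an empty holders directory means nothing has the disk mapped):
$ ls /sys/block/sdi/holders/
dm-1
$ cat /sys/block/sdi/holders/dm-1/dm/name
ceph--1f6780e6--b120--4876--b674--aa3337847114-osd--block--1325f49b--fead--40ba--957e--ec6b2968d456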
4) Attempt to remove the mapping:
$ dmsetup remove ceph--1f6780e6--b120--4876--b674--aa3337847114-osd--block--1325f49b--fead--40ba--957e--ec6b2968d456
=> If this fails, you may have to use --force (this can happen when the disk no longer
accepts any reads or writes).
Please make absolutely sure this is the correct disk before using --force.
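For completeness, the forced variant would be (only after triple-checking the name):
$ dmsetup remove --force ceph--1f6780e6--b120--4876--b674--aa3337847114-osd--block--1325f49b--fead--40ba--957e--ec6b2968d456
As far as I know, --force first replaces the live table with one that errors all I/O
before tearing the device down, so anything still writing to it will see I/O errors.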
5) If lvs / pvs / vgs still show an undefined device after this, you may have to flush
the lvmetad cache again:
pvscan --cache
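To double-check before retrying the zap, lsblk should now show the bare disk with no
device-mapper children hanging off it (the size below is illustrative):
$ lsblk /dev/sdi
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdi   66:96   0  1.8T  0 disk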
At the latest after these steps, zapping should work; at least, we never encountered
anything worse (apart from hangs with broken RAID controller firmware, where we needed
to eject and reinsert the disk and rescan devices to make the controller lock go away).
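In case someone hits that controller variant: after physically reseating the disk, the
rescan part can be done from software, assuming a SCSI-attached disk (host3 below is
just an example, check lsscsi for the real host number):
$ echo 1 > /sys/block/sdi/device/delete
$ echo "- - -" > /sys/class/scsi_host/host3/scan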
Cheers,
Oliver
On 19.09.20 at 14:49, Marc Roos wrote:
[@~]# ceph-volume lvm zap /dev/sdi
--> Zapping: /dev/sdi
--> --destroy was not specified, but zapping a whole device will remove the partition table
 stderr: wipefs: error: /dev/sdi: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/sdi: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
I can't see where it is busy; at least it does not show up in lsof.