Hi all,

I had set up a Ceph cluster and mapped and mounted an RBD image on one machine. I then deleted the cluster and reinstalled it following the manual, but the old RBD devices are still mapped on that machine, and I cannot access the mount point. My details are below. I want to remove all of the old RBD devices; what should I do?
node1 $> rbd device list
id  pool  namespace  image                                                        snap  device
0   rbd              foo                                                          -     /dev/rbd0
1   kube             kubernetes-dynamic-pvc-1cc43c5b-ade1-11e9-9a92-863e3c12afd1  -     /dev/rbd1
node1 $> df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/rootvg-lv_root 10995712 2528388 8467324 23% /
devtmpfs 2012656 0 2012656 0% /dev
tmpfs 2023588 0 2023588 0% /dev/shm
tmpfs 2023588 207340 1816248 11% /run
tmpfs 2023588 0 2023588 0% /sys/fs/cgroup
/dev/sda1 520868 116936 403932 23% /boot
/dev/mapper/rootvg-lv_var 5232640 3226816 2005824 62% /var
/dev/mapper/rootvg-lv_tmp 5232640 33060 5199580 1% /tmp
/dev/rbd0 3997376 16392 3754888 1% /mnt
tmpfs 404720 0 404720 0% /run/user/1001
node1 $> rbd trash list
rbd: error opening default pool 'rbd'
Ensure that the default pool has been created or specify an alternate pool name.
node1 $> rbd info rbd/foo
rbd: error opening default pool 'rbd'
Ensure that the default pool has been created or specify an alternate pool name.
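For reference, here is what I am considering trying: since `rbd unmap` acts on the local kernel mapping rather than the cluster, the stale devices can probably be removed even though the old cluster is gone. This is only a sketch; the lazy-unmount flag and the sysfs fallback are my assumptions about how to deal with a hung filesystem whose backing cluster no longer exists:

```shell
# Lazily detach the hung filesystem first; a plain umount may block
# because the backing cluster is gone (-l detaches it immediately).
umount -l /mnt

# Unmap each stale device; this talks to the local kernel, not the cluster.
rbd unmap /dev/rbd0
rbd unmap /dev/rbd1

# If unmap still refuses, the kernel's raw sysfs interface can remove a
# mapping by its id (ids 0 and 1 come from `rbd device list` above).
# echo 0 > /sys/bus/rbd/remove
# echo 1 > /sys/bus/rbd/remove
```

Would this be the right approach, or is there a cleaner way?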