Hi Conrad,
On Wed, May 17, 2023 at 2:41 PM Conrad Hoffmann <ch(a)bitfehler.net> wrote:
> On 5/17/23 18:07, Stefan Kooman wrote:
>> On 5/17/23 17:29, Conrad Hoffmann wrote:
>>> Hi all,
>>>
>>> I'm having difficulties removing a CephFS volume that I set up for
>>> testing. I've been through this with RBDs, so I do know about
>>> `mon_allow_pool_delete`. However, it doesn't help in this case.
>>>
>>> It is a cluster with 3 monitors. You can find a console log of me
>>> verifying that `mon_allow_pool_delete` is indeed true on all monitors
>>> but still failing to remove the volume here:
>>
>> That's not just a volume, that's the whole filesystem. If that's what
>> you want to do ... I see the MDS daemon is still up. IIRC there should
>> be no MDS running if you want to delete the fs. Can you stop the MDS
>> daemon and try again?
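(For anyone following along, stopping the MDS might look like the sketch below; the daemon instance name is a placeholder for your deployment, and `ceph fs fail` is the cluster-wide alternative.)

```shell
# Hypothetical MDS instance name; adjust to your deployment.
systemctl stop ceph-mds@myhost

# Alternatively, mark the filesystem down so all of its MDS ranks stop:
# ceph fs fail <fs_name>
```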
> That sort of got me in the right direction, but I am still confused. I
> don't think I understand the difference between a volume and a
> filesystem. I think I followed [1] when I set this up. It says to use
> `ceph fs volume create`. I went ahead and ran it again, and it certainly
> creates something that shows up in both `ceph fs ls` and `ceph fs volume
> ls`. Also, [2] says "FS volumes, an abstraction for CephFS file
> systems", so I guess they are the same thing?

Yes.

> At any rate, shutting down the MDS did _not_ help with `ceph fs volume
> rm` (it failed with the same error message), but it _did_ help with
> `ceph fs rm`, which then worked. Hard to make sense of, but I am pretty
> sure the error message I was seeing is nonsensical in that context.
> Under what circumstances will `ceph fs volume rm` even work, if it
> fails to delete a volume I just created?
`fs rm` just removes the file system from the monitor maps. You still
have the data pools lying around, which is what the `volume rm` command
is complaining about.
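As a sketch of what's left behind: assuming the volume was created with `ceph fs volume create`, its pools typically follow the default `cephfs.<volname>.meta` / `cephfs.<volname>.data` naming (the volume name below is hypothetical), so you can spot them like this:

```shell
# List pools and pick out the ones created for CephFS volumes.
# A volume created as `ceph fs volume create test` typically leaves
# cephfs.test.meta and cephfs.test.data behind (names assumed from
# the default naming scheme).
ceph osd pool ls | grep '^cephfs\.'
```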
Try:
ceph config set global mon_allow_pool_delete true
ceph fs volume rm ...
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D