On 2020-09-25 04:40, Peter Sarossy wrote:
hey folks,
I have managed to fat finger a config apply command and accidentally
deleted the CRD for one of my pools. The operator went ahead and tried to
purge it, but fortunately since it's used by CephFS it was unable to.
Redeploying the exact same CRD does not make the operator stop trying to
delete it though.
Any hints on how to make the operator forget about the deletion request and
leave it be?
No, sorry, I'm not using Rook / k8s for Ceph myself, so I can't help
with the operator. You might want to set the following though, just to
make sure deleting things is hard (those knobs were invented for
situations like this):
# Do not accidentally delete the whole thing
osd_pool_default_flag_nodelete = true
mon_allow_pool_delete = false
This way you have to manually change those first before Ceph is even
able to delete anything.
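If the cluster is already running, you can apply the same protections at
runtime instead of editing ceph.conf; something like the following
(pool name "cephfs_data" is just an example, substitute your own):

# Set the nodelete flag on an existing pool
ceph osd pool set cephfs_data nodelete true
# Refuse pool deletion at the monitor level
ceph config set mon mon_allow_pool_delete false

With both in place, a "ceph osd pool delete" attempt is rejected until
you deliberately unset the flag and re-enable mon_allow_pool_delete.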
Gr. Stefan