Hello,
we observed a massive and sudden growth of the mon DB size on disk, from
50 MB to 20+ GB (GB!), which drove disk usage on the mountpoint to 100%.
As far as we can see, it happens when we set "noout" for a node reboot:
after the node and its OSDs come back, the mon DB size has increased
drastically.
We run 14.2.11 with 10 OSDs @ 2 TB and CephFS in use.
Is this a known issue? Should we avoid noout?
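For reference, this is how we measured the store size and what we are considering to reclaim space once the cluster is healthy again (default mon data path assumed; not yet tried in production):

# size of the mon's on-disk store (default data path)
du -sh /var/lib/ceph/mon/*/store.db
# trigger a manual compaction of the mon store
ceph tell mon.$(hostname -s) compact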
TIA,
derjohn
--
Andreas John
net-lab GmbH | Frankfurter Str. 99 | 63067 Offenbach
Geschaeftsfuehrer: Andreas John | AG Offenbach, HRB40832
Tel: +49 69 8570033-1 | Fax: -2 | http://www.net-lab.net
Facebook: https://www.facebook.com/netlabdotnet
Twitter: https://twitter.com/netlabdotnet
Hi,
we are evaluating the RADOS Gateway to provide S3 storage.
There is one question related to backups/snapshots:
Is there any way to create snapshots of buckets and/or back up a bucket?
And how can we access the data of a snapshot?
I found only some very old information which indicates that this is not
possible. Maybe there are new features which I didn't find.
Regards
Manuel
Hi All,
I have a cluster that I tried upgrading from 15.2.4 to 15.2.5 using the command 'ceph orch upgrade start --ceph-version 15.2.5'. After upgrading two of the three mgrs, the third mgr failed and the upgrade stopped. I was able to get the third mgr upgraded by changing the systemd file and manually removing the podman image. I have upgraded ceph-common outside the containers on all the mgr/mon nodes. The cluster reports HEALTH_OK. However, any 'ceph orch' command I issue now always returns:
Error ENOENT: Module not found
If I run 'ceph orch ps --verbose', I get:
parsed_args: Namespace(admin_socket=None, block=False, cephconf=None, client_id=None, client_name=None, cluster=None, cluster_timeout=None, completion=False, help=False, input_file=None, output_file=None, output_format=None, period=1, setgroup=None, setuser=None, status=False, verbose=True, version=False, watch=False, watch_channel=None, watch_debug=False, watch_error=False, watch_info=False, watch_sec=False, watch_warn=False), childargs: ['orch', 'ps']
cmd000: pg stat
...
better match: 2.5 > 1.5: orch ps [<hostname>] [<service_name>] [<daemon_type>] [<daemon_id>] [plain|json|json-pretty|yaml] [--refresh]
bestcmds_sorted:
[{'flags': 8,
'help': 'List daemons known to orchestrator',
'module': 'mgr',
'perm': 'r',
'sig': [argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=orch),
argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=ps),
argdesc(<class 'ceph_argparse.CephString'>, req=False, name=hostname, n=1, numseen=0),
argdesc(<class 'ceph_argparse.CephString'>, req=False, name=service_name, n=1, numseen=0),
argdesc(<class 'ceph_argparse.CephString'>, req=False, name=daemon_type, n=1, numseen=0),
argdesc(<class 'ceph_argparse.CephString'>, req=False, name=daemon_id, n=1, numseen=0),
argdesc(<class 'ceph_argparse.CephChoices'>, req=False, name=format, n=1, numseen=0, strings=plain|json|json-pretty|yaml),
argdesc(<class 'ceph_argparse.CephBool'>, req=False, name=refresh, n=1, numseen=0)]}]
Submitting command: {'prefix': 'orch ps', 'target': ('mon-mgr', '')}
submit ['{"prefix": "orch ps", "target": ["mon-mgr", ""]}'] to mon-mgr
Error ENOENT: Module not found
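I assume the next step is to check whether the cephadm/orchestrator mgr modules are still enabled and which backend is configured; something like the following (I have not yet confirmed these are the right knobs):

# list enabled and available mgr modules
ceph mgr module ls
# re-enable the cephadm module and point the orchestrator at it, if missing
ceph mgr module enable cephadm
ceph orch set backend cephadm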
Any help would be appreciated.
Thanks,
-TJ Ragan
Hi Marc,
Did you have any success with `ceph-volume` for activating your OSD?
I am having a similar problem where `ceph-bluestore-tool` fails to read
the label of a previously created OSD on an LVM partition. I had been
using the OSD without issues, but after a reboot it fails to come up.
1. I had initially created my OSD using Ceph Octopus 15.x with `ceph
orch daemon add osd <my hostname>:boot/cephfs_meta`, which created an
OSD on the LVM partition and brought it up.
2. After a reboot, the OSD fails to come up, with the error from
`ceph-bluestore-tool` inside the container specifically being that it is
unable to read the label of the device.
3. When I query the symlinked device /dev/boot/cephfs_meta ->
/dev/dm-3 with `dmsetup info /dev/dm-3`, I can see the state is active
and that it has a UUID, etc.
4. I installed the `ceph-osd` CentOS package, which provides
ceph-bluestore-tool, to test manually: `sudo ceph-bluestore-tool
show-label --dev /dev/dm-3` fails to read the label. When I try it
against other OSDs that were created on entire disks, the command reads
the label and prints the information (see the commands sketched below).
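For reference, these are the kinds of commands that can cross-check what ceph-volume recorded for the LV (assuming it stored the usual LVM metadata tags; the device name is from my setup):

# OSDs ceph-volume knows about on this host, including their LV tags
ceph-volume lvm list
# LVM tags that ceph-volume normally stores on the logical volume
lvs -o lv_name,vg_name,lv_tags
# read the bluestore label directly from the mapped device
ceph-bluestore-tool show-label --dev /dev/dm-3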
I am considering submitting a ticket to the Ceph issue tracker, as I
am unable to figure out why ceph-bluestore-tool cannot read the label;
it seems either the OSD was initially created incorrectly or there is a
bug in ceph-bluestore-tool.
One possibility is that I did not have the LVM2 package installed on
this host prior to the `ceph orch daemon add ..` command, and that this
caused a particular issue with the LVM-partition OSD.
-Matt
On Sat, Sep 19, 2020 at 9:11 AM Marc Roos <M.Roos(a)f1-outsourcing.eu> wrote:
>
>
>
>
> [@]# ceph-volume lvm activate 36 82b94115-4dfb-4ed0-8801-def59a432b0a
> Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-36
> Running command: /usr/bin/ceph-authtool
> /var/lib/ceph/osd/ceph-36/lockbox.keyring --create-keyring --name
> client.osd-lockbox.82b94115-4dfb-4ed0-8801-def59a432b0a --add-key
> AQBxA2Zfj6avOBAAIIHqNNY2J22EnOZV+dNzFQ==
> stdout: creating /var/lib/ceph/osd/ceph-36/lockbox.keyring
> added entity client.osd-lockbox.82b94115-4dfb-4ed0-8801-def59a432b0a
> auth(key=AQBxA2Zfj6avOBAAIIHqNNY2J22EnOZV+dNzFQ==)
> Running command: /usr/bin/chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-36/lockbox.keyring
> Running command: /usr/bin/ceph --cluster ceph --name
> client.osd-lockbox.82b94115-4dfb-4ed0-8801-def59a432b0a --keyring
> /var/lib/ceph/osd/ceph-36/lockbox.keyring config-key get
> dm-crypt/osd/82b94115-4dfb-4ed0-8801-def59a432b0a/luks
> Running command: /usr/sbin/cryptsetup --key-file - --allow-discards
> luksOpen
> /dev/ceph-9263e83b-7660-4f5b-843a-2111e882a17e/osd-block-82b94115-4dfb-4
> ed0-8801-def59a432b0a I8MyTZ-RQjx-gGmd-XSRw-kfa1-L60n-fgQpCb
> stderr: Device I8MyTZ-RQjx-gGmd-XSRw-kfa1-L60n-fgQpCb already exists.
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-36
> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
> prime-osd-dir --dev /dev/mapper/I8MyTZ-RQjx-gGmd-XSRw-kfa1-L60n-fgQpCb
> --path /var/lib/ceph/osd/ceph-36 --no-mon-config
> stderr: failed to read label for
> /dev/mapper/I8MyTZ-RQjx-gGmd-XSRw-kfa1-L60n-fgQpCb: (2) No such file or
> directory
> --> RuntimeError: command returned non-zero exit status: 1
>
> dmsetup ls lists this????
>
> Where is an option to set the weight? As far as I can see you can only
> set this after peering started?
>
> How can I mount this tmpfs manually to inspect this? Maybe put in the
> manual[1]?
>
>
> [1]
> https://docs.ceph.com/en/latest/ceph-volume/lvm/activate/
>
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
Good evening,
since 2018 we have been using a custom script to create disks /
partitions, because at the time both ceph-disk and ceph-volume exhibited
bugs that made them unreliable for us.
We recently re-tested ceph-volume and, while it generally seems to
work [0], using LVM introduces an additional layer that we do not need.
The script we created is a little less than 100 lines long and works by
specifying the device and its device class:
./ceph-osd-create-start /dev/sdd ssd
Everything else is determined by the script. As the script is very
simple and works independently of any init system, we wanted to discuss
whether it would make sense to re-integrate something like it back into
Ceph upstream.
We are aware that ceph-disk has been deprecated; however,
ceph-volume raw had some issues in our setup [2].
The script itself can be found at [1]. At the end it has some ungleich
specifics, but we would be very open to removing those.
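To give a rough idea of the workflow, here is a heavily simplified sketch of the kind of steps involved (illustrative only, with made-up names and simplified caps; the real script is at [1]):

#!/bin/sh
# Illustrative sketch: create and start a bluestore OSD directly on a device.
DEV=$1      # e.g. /dev/sdd
CLASS=$2    # e.g. ssd

UUID=$(uuidgen)
ID=$(ceph osd new "$UUID")                        # allocate an OSD id
mkdir -p /var/lib/ceph/osd/ceph-"$ID"
ln -s "$DEV" /var/lib/ceph/osd/ceph-"$ID"/block
ceph auth get-or-create osd."$ID" mon 'allow profile osd' osd 'allow *' \
    > /var/lib/ceph/osd/ceph-"$ID"/keyring
ceph-osd -i "$ID" --mkfs --osd-uuid "$UUID"       # format the bluestore OSD
chown -R ceph:ceph /var/lib/ceph/osd/ceph-"$ID" "$DEV"
ceph-osd -i "$ID" --setuser ceph --setgroup ceph  # start the daemon directly
ceph osd crush rm-device-class osd."$ID"          # clear any auto-detected class
ceph osd crush set-device-class "$CLASS" osd."$ID"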
Best regards,
Nico
[0] https://tracker.ceph.com/issues/47724
[1] https://code.ungleich.ch/ungleich-public/ungleich-tools/-/blob/master/ceph-…
[2]
[00:19:45] server8.place6:/var/lib/ceph/osd# ceph-volume raw prepare --bluestore --data /dev/sdn
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d0fa7074-6cdf-4947-ac1e-e73dd0ec1fe8
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
--> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Running command: /bin/chown -R ceph:ceph /dev/sdn
Running command: /bin/ln -s /dev/sdn /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
stderr: 2020-09-28 00:20:48.276 7f8c90f27700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2020-09-28 00:20:48.276 7f8c90f27700 -1 AuthRegistry(0x7f8c8c081d08) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
stderr: got monmap epoch 12
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring --create-keyring --name osd.3 --add-key AQA/EHFfINr/HhAAVFAP9NF2LLGFrJEGXbbMSw==
stdout: creating /var/lib/ceph/osd/ceph-3/keyring
added entity osd.3 auth(key=AQA/EHFfINr/HhAAVFAP9NF2LLGFrJEGXbbMSw==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid d0fa7074-6cdf-4947-ac1e-e73dd0ec1fe8 --setuser ceph --setgroup ceph
stderr: 2020-09-28 00:20:48.760 7f6e34ebfd80 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
stderr: 2020-09-28 00:20:55.912 7f6e34ebfd80 -1 bdev(0x561a9a0f6700 /var/lib/ceph/osd/ceph-3//block) _lock flock failed on /var/lib/ceph/osd/ceph-3//block
stderr: 2020-09-28 00:20:55.912 7f6e34ebfd80 -1 bdev(0x561a9a0f6700 /var/lib/ceph/osd/ceph-3//block) open failed to lock /var/lib/ceph/osd/ceph-3//block: (11) Resource temporarily unavailable
stderr: 2020-09-28 00:20:55.912 7f6e34ebfd80 -1 OSD::mkfs: couldn't mount ObjectStore: error (11) Resource temporarily unavailable
stderr: 2020-09-28 00:20:55.912 7f6e34ebfd80 -1 ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-3/: (11) Resource temporarily unavailable
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.3 --yes-i-really-mean-it
stderr: 2020-09-28 00:20:56.072 7f3e446a9700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2020-09-28 00:20:56.072 7f3e446a9700 -1 AuthRegistry(0x7f3e3c081d08) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
stderr: purged osd.3
--> RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid d0fa7074-6cdf-4947-ac1e-e73dd0ec1fe8 --setuser ceph --setgroup ceph
[00:20:56] server8.place6:/var/lib/ceph/osd#
--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch
Hi,
on octopus 15.2.4 I have an issue with cephfs tag auth. The following
works fine:
client.f9desktop
key: ....
caps: [mds] allow rw
caps: [mon] allow r
caps: [osd] allow rw pool=cephfs_data, allow rw pool=ssd_data,
allow rw pool=fast_data, allow rw pool=arich_data, allow rw
pool=ecfast_data
but this one does not.
client.f9desktopnew
key: ....
caps: [mds] allow rw
caps: [mon] allow r
caps: [osd] allow rw tag cephfs data=cephfs
For the second one, the MDS part works: files can be created or removed,
but client read/write (native kernel client, version 5.7.4) fails with an
I/O error, so the OSD part does not seem to be working properly.
Any clues as to what could be wrong? The cephfs was created back in jewel...
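For comparison, my understanding is that `ceph fs authorize` generates tag-based caps of exactly this shape (filesystem name cephfs assumed):

ceph fs authorize cephfs client.f9desktopnew / rw
# should yield osd caps of the form:
#   osd 'allow rw tag cephfs data=cephfs'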
Another issue: if the osd caps are updated (e.g. adding a data pool),
some clients refresh the caps, but most do not, and the only way to
refresh them is to remount the filesystem. A working tag-based cap would
solve this.
Best regards,
Andrej
--
_____________________________________________________________
prof. dr. Andrej Filipcic, E-mail: Andrej.Filipcic(a)ijs.si
Department of Experimental High Energy Physics - F9
Jozef Stefan Institute, Jamova 39, P.o.Box 3000
SI-1001 Ljubljana, Slovenia
Tel.: +386-1-477-3674 Fax: +386-1-477-3166
-------------------------------------------------------------
Dear All,
a week ago we had to reboot our ESXi nodes since our Ceph cluster suddenly stopped serving all I/O. We have identified a VM (the vCenter appliance) which was swapping heavily and causing heavy load. However, since then we have been experiencing strange issues, as if the cluster cannot handle any spike in I/O load such as a migration or VM reboot.
The main problem is that the iSCSI commands issued by ESXi sometimes time out and ESXi reports an inaccessible datastore. This disrupts the I/O heavily; we had to reboot the VMware cluster entirely several times. It started suddenly after approx. 10 months of operation without problems.
I can see a steadily increasing number of dropped Rx packets on the iSCSI network interfaces in the OSDs.
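For reference, this is roughly how we are watching the drops (the interface name is just an example; we have not yet tried changing the ring sizes):

# per-interface statistics, including rx drop counters
ethtool -S enp1s0f0 | grep -i drop
# current vs. maximum ring buffer sizes
ethtool -g enp1s0f0
# possibly: increase the rx ring, e.g. ethtool -G enp1s0f0 rx 4096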
Our Ceph setup is as follows: 4 OSD servers, each with 3 × 10 TB 7.2k rpm HDDs. The OSD servers are connected to the other nodes by 25 Gbps Ethernet. For the RBD pools I have 64 PGs. The OSD servers have 32 GB RAM; free memory is around 1 GB on each, and I have seen even lower. The OS is CentOS 7, the Ceph release is Nautilus 14.2.11, deployed by ceph-ansible. The MONs are virtualized on the ESXi nodes, on their local SSD drives.
The iSCSI NICs are on a separate VLAN; other traffic is served via a balance-xor bond in a different VLAN (LACP is unusable due to a VMware limitation when using the SW iSCSI HBA). Our network is Mellanox based: SN2100 switches and ConnectX-5 NICs.
The iSCSI target serves 2 LUNs in an erasure-coded RBD pool. Yesterday I increased the number of PGs for that pool from 64 to 128, without much effect once the cluster finished rebalancing.
In OSD servers kernel log we see the following:
[299560.618893] iSCSI Login negotiation failed.
[303088.450088] Did not receive response to NOPIN on CID: 0, failing connection for I_T Nexus iqn.1994-05.com.redhat:esxi1,i,0x00023d000002,iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw,t,0x01
[324926.694077] Did not receive response to NOPIN on CID: 0, failing connection for I_T Nexus iqn.1994-05.com.redhat:esxi2,i,0x00023d000001,iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw,t,0x01
[407067.404538] ABORT_TASK: Found referenced iSCSI task_tag: 5891
[407076.077175] ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for ref_tag: 5891
[411677.887690] ABORT_TASK: Found referenced iSCSI task_tag: 6722
[411683.297425] ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for ref_tag: 6722
The error in ESXi looks like this:
naa.60014053b46fc760ff0470dbd7980263" on path "vmhba64:C1:T0:L0" Failed:
2020-10-01T05:38:51.291Z cpu49:2144076)NMP: nmp_ThrottleLogForDevice:3856: Cmd 0x89 (0x459a5b1b9480, 2097241) to dev "naa.6001405a527d78935724451aa5f53513" on path "vmhba64:C2:T0:L1" Failed:
2020-10-01T05:38:57.098Z cpu44:2099346)NMP: nmp_ThrottleLogForDevice:3856: Cmd 0x8a (0x45ba96710ec0, 2107403) to dev "naa.60014053b46fc760ff0470dbd7980263" on path "vmhba64:C1:T0:L0" Failed:
2020-10-01T05:38:57.122Z cpu71:2098965)NMP: nmp_ThrottleLogForDevice:3856: Cmd 0x89 (0x45ba9676aec0, 2146212) to dev "naa.60014053b46fc760ff0470dbd7980263" on path "vmhba64:C1:T0:L0" Failed:
2020-10-01T05:38:57.256Z cpu65:2098959)NMP: nmp_ThrottleLogForDevice:3856: Cmd 0x89 (0x459a4179d8c0, 2146269) to dev "naa.6001405a527d78935724451aa5f53513" on path "vmhba64:C2:T0:L1" Failed:
We would appreciate any help you can give us.
Thank you very much.
Regards,
Martin Golasowski
I have been creating LVM OSDs with:
ceph-volume lvm zap --destroy /dev/sdf && ceph-volume lvm create --data
/dev/sdf --dmcrypt
Because this procedure failed:
ceph-volume lvm zap --destroy /dev/sdf
(waiting on slow human typing)
ceph-volume lvm create --data /dev/sdf --dmcrypt
However, when I looked at /var/lib/ceph/osd, I expected these LVM
mounts to be listed as world writable [1].
So I decided to compare:
[@osd]# cp -a ceph-33 ceph-33.bak2
[@osd]# service ceph-osd@33 stop
[@osd]# service ceph-volume@lvm-33-9a7a9a7c-8fc8-441c-8380-acf7f8b1a670
start
[BUG1]
With the zap && create, I seem to be running the osd without the tmpfs
mounted?! That means that if I reboot this node and the content differs
from what is in the tmpfs, I have a serious problem.
So I am trying to unmount ceph-33
[BUG2]
Wtf, ceph-osd@33 is running! Something (ceph-volume?) started the osd
before I even had a chance to inspect the differences between these folders.
If ceph-volume starts the osd 'by design', then that design is bad: nobody
expects this behaviour, and I have no idea what can go wrong if the
startup data in the tmpfs ceph-33 mount differs from the files written by
lvm create on the OS disk.
Stopping again osd-33
[@osd]# service ceph-osd@33 stop
Trying to unmount again ceph-33
[@osd]# service ceph-volume@lvm-33-9a7a9a7c-8fc8-441c-8380-acf7f8b1a670
stop
[BUG3]
service ceph-volume just does not unmount the tmpfs. I have to unmount it manually with
umount /var/lib/ceph/osd/ceph-33
Inspecting the differences between the two:
[@osd]# ls -l ceph-33.bak2 ceph-33.new
ceph-33.new:
total 28
lrwxrwxrwx 1 ceph ceph 50 Oct 1 10:06 block ->
/dev/mapper/1K8AX3-D3Gv-VKdY-0wTW-qjgd-txAu-JbNJHo
-rw------- 1 ceph ceph 37 Oct 1 10:06 ceph_fsid
-rw------- 1 ceph ceph 37 Oct 1 10:06 fsid
-rw------- 1 ceph ceph 56 Oct 1 10:06 keyring
-rw------- 1 ceph ceph 106 Oct 1 10:06 lockbox.keyring
-rw------- 1 ceph ceph 6 Oct 1 10:06 ready
-rw------- 1 ceph ceph 10 Oct 1 10:06 type
-rw------- 1 ceph ceph 3 Oct 1 10:06 whoami
ceph-33.bak2:
total 56
-rw-r----- 1 ceph ceph 373 Sep 30 21:23 activate.monmap
lrwxrwxrwx 1 ceph ceph 50 Sep 30 21:23 block ->
/dev/mapper/1K8AX3-D3Gv-VKdY-0wTW-qjgd-txAu-JbNJHo
-rw------- 1 ceph ceph 2 Sep 30 21:23 bluefs
-rw------- 1 ceph ceph 37 Sep 30 21:23 ceph_fsid
-rw-r----- 1 ceph ceph 37 Sep 30 21:23 fsid
-rw------- 1 ceph ceph 56 Sep 30 21:23 keyring
-rw------- 1 ceph ceph 8 Sep 30 21:23 kv_backend
-rw------- 1 ceph ceph 106 Sep 30 21:23 lockbox.keyring
-rw------- 1 ceph ceph 21 Sep 30 21:23 magic
-rw------- 1 ceph ceph 4 Sep 30 21:23 mkfs_done
-rw------- 1 ceph ceph 41 Sep 30 21:23 osd_key
-rw------- 1 ceph ceph 6 Sep 30 21:23 ready
-rw------- 1 ceph ceph 3 Sep 30 21:23 require_osd_release
-rw------- 1 ceph ceph 10 Sep 30 21:23 type
-rw------- 1 ceph ceph 3 Sep 30 21:23 whoami
The contents of the files in new (the tmpfs) are luckily the same as in
bak2 (from ceph-volume create?). However, as you can see, quite a few
files are missing from the tmpfs.
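For what it's worth, my understanding is that activation is supposed to repopulate this tmpfs from the bluestore label via prime-osd-dir, which would explain the smaller set of files; roughly like this (mapper path taken from the listing above; I did not run this by hand):

mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-33
ceph-bluestore-tool prime-osd-dir \
  --dev /dev/mapper/1K8AX3-D3Gv-VKdY-0wTW-qjgd-txAu-JbNJHo \
  --path /var/lib/ceph/osd/ceph-33 --no-mon-config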
So I am giving it a try and start this osd.33 with the tmpfs mounted.
[@osd]# service ceph-volume@lvm-33-9a7a9a7c-8fc8-441c-8380-acf7f8b1a670
start^C
[@osd]# ps -ef | grep ceph-osd | grep 33
[@osd]# service ceph-volume@lvm-33-9a7a9a7c-8fc8-441c-8380-acf7f8b1a670
start
Redirecting to /bin/systemctl start
ceph-volume(a)lvm-33-9a7a9a7c-8fc8-441c-8380-acf7f8b1a670.service
And indeed, again ceph-osd is started.
[@osd]# ps -ef | grep ceph-osd | grep 33
ceph 1651105 1 48 11:29 ? 00:00:00 /usr/bin/ceph-osd -f
--cluster ceph --id 33 --setuser ceph --setgroup ceph
[QUESTION1]
Should I just copy files like kv_backend and mkfs_done to the tmpfs
mount? I seem to have these files on other ceph-volume created OSDs.
[QUESTION2]
Is there a reasonable explanation for running into such issues, before I
start thinking this is shitty scripting?
[1]
https://tracker.ceph.com/issues/47549
Hello, we are using
Ceph 14.x for our S3 storage, and some of our customers want to create an object-locked bucket.
BUT:
While the creation of a locked bucket works, the objects are still deletable.
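For context, the test goes roughly like this with the AWS CLI (endpoint, bucket name and retention period are placeholders; we have not yet verified whether a default retention rule is required for deletes to actually be blocked):

# create a bucket with S3 object lock enabled
aws --endpoint-url http://rgw.example.com s3api create-bucket \
    --bucket locked-bucket --object-lock-enabled-for-bucket

# optionally set a default retention rule on the bucket
aws --endpoint-url http://rgw.example.com s3api put-object-lock-configuration \
    --bucket locked-bucket \
    --object-lock-configuration 'ObjectLockEnabled=Enabled,Rule={DefaultRetention={Mode=COMPLIANCE,Days=30}}'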
Any ideas or hints?
Best regards:
Torsten