Hi Ceph users
We are using Ceph Pacific (16) in this specific deployment.
In our use case we do not want our users to be able to generate signature v4 presigned URLs, because these bypass the policies that we set on buckets (e.g. IP restrictions).
Currently we have a sidecar reverse proxy running that filters out requests carrying signature-URL-specific query parameters.
This is obviously not very efficient and we are looking to replace this somehow in the future.
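For reference, this is the kind of request we are blocking today (a rough sketch; endpoint, bucket and object names are placeholders, and the URL is generated here with the AWS CLI rather than our actual client code):
# Generate a presigned (signature v4) URL against RGW; names are placeholders
aws --endpoint-url https://rgw.example.com s3 presign s3://my-bucket/object.bin --expires-in 3600
# The resulting URL carries the SigV4-specific query parameters the proxy matches on,
# e.g. X-Amz-Algorithm=AWS4-HMAC-SHA256, X-Amz-Credential, X-Amz-Date, X-Amz-Expires,
# X-Amz-SignedHeaders and X-Amz-Signature.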
1. Is there an option in RGW to disable these signed URLs (e.g. returning status 403)?
2. If not, is this planned, or would it make sense to add it as a configuration option?
3. Or is RGW's behaviour of not respecting bucket policies for signature v4 URLs a bug, and should they actually be applied?
Thank you for your help, and let me know if you have any questions.
Marc Singer
Hi,
following up on the previous thread (After hardware failure tried to
recover ceph and followed instructions for recovery using OSDS), we
were able to get ceph back into a healthy state (including the unfound
object). Now the CephFS needs to be recovered, and I'm having trouble
fully understanding from the docs [1] what the next steps should be. We
ran the following, which according to [1] sets the state of rank 0 to
"existing but failed":
ceph fs new <fs_name> <metadata_pool> <data_pool> --force --recover
But how do we continue from here? Should we expect an active MDS at this
point or not? The "ceph fs status" output still shows rank 0 as failed.
We then tried:
ceph fs set <fs_name> joinable true
But apparently it was already joinable; nothing changed. Before doing
anything (destructive) from the advanced options [2] I wanted to ask the
community how to proceed from here. I pasted the MDS logs at the bottom;
I'm not really sure if the current state is expected or not.
Apparently, the journal recovers but the purge_queue does not:
mds.0.41 Booting: 2: waiting for purge queue recovered
mds.0.journaler.pq(ro) _finish_probe_end write_pos = 14797504512
(header had 14789452521). recovered.
mds.0.purge_queue operator(): open complete
mds.0.purge_queue operator(): recovering write_pos
monclient: get_auth_request con 0x55c280bc5c00 auth_method 0
monclient: get_auth_request con 0x55c280ee0c00 auth_method 0
mds.0.journaler.pq(ro) _finish_read got error -2
mds.0.purge_queue _recover: Error -2 recovering write_pos
mds.0.purge_queue _go_readonly: going readonly because internal IO
failed: No such file or directory
mds.0.journaler.pq(ro) set_readonly
mds.0.41 unhandled write error (2) No such file or directory, force
readonly...
mds.0.cache force file system read-only
force file system read-only
Is this expected because the "--recover" flag prevents an active MDS,
or not? Before running "ceph mds rmfailed ..." and/or "ceph fs reset
<file system name>" with the --yes-i-really-mean-it flag, I'd like to
ask for your input. In which cases should we run those commands? The
docs are not really clear to me. Any input is highly appreciated!
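For completeness, this is what we are considering to run next before anything destructive, to look at the purge queue directly (a sketch, substituting our fs name; as far as I understand, "journal inspect" and "journal export" are read-only):
# Inspect the purge queue journal of rank 0 (should be read-only)
cephfs-journal-tool --rank=<fs_name>:0 --journal=purge_queue journal inspect
# Compare with the MDS log journal of the same rank
cephfs-journal-tool --rank=<fs_name>:0 --journal=mdlog journal inspect
# Take a backup of the purge queue before any destructive step
cephfs-journal-tool --rank=<fs_name>:0 --journal=purge_queue journal export purge_queue.bin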
Thanks!
Eugen
[1] https://docs.ceph.com/en/latest/cephfs/recover-fs-after-mon-store-loss/
[2]
https://docs.ceph.com/en/latest/cephfs/administration/#advanced-cephfs-admi…
---snip---
Dec 07 15:35:48 node02 bash[692598]: debug -90>
2023-12-07T13:35:47.730+0000 7f4cd855f700 1 mds.storage.node02.hemalk
Updating MDS map to version 41 from mon.0
Dec 07 15:35:48 node02 bash[692598]: debug -89>
2023-12-07T13:35:47.730+0000 7f4cd855f700 4 mds.0.purge_queue
operator(): data pool 3 not found in OSDMap
Dec 07 15:35:48 node02 bash[692598]: debug -88>
2023-12-07T13:35:47.730+0000 7f4cd855f700 5 asok(0x55c27fe86000)
register_command objecter_requests hook 0x55c27fe16310
Dec 07 15:35:48 node02 bash[692598]: debug -87>
2023-12-07T13:35:47.730+0000 7f4cd855f700 10 monclient: _renew_subs
Dec 07 15:35:48 node02 bash[692598]: debug -86>
2023-12-07T13:35:47.730+0000 7f4cd855f700 10 monclient:
_send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -85>
2023-12-07T13:35:47.730+0000 7f4cd855f700 10 log_channel(cluster)
update_config to_monitors: true to_syslog: false syslog_facility:
prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port:
12201)
Dec 07 15:35:48 node02 bash[692598]: debug -84>
2023-12-07T13:35:47.730+0000 7f4cd855f700 4 mds.0.purge_queue
operator(): data pool 3 not found in OSDMap
Dec 07 15:35:48 node02 bash[692598]: debug -83>
2023-12-07T13:35:47.730+0000 7f4cd855f700 4 mds.0.0 apply_blocklist:
killed 0, blocklisted sessions (0 blocklist entries, 0)
Dec 07 15:35:48 node02 bash[692598]: debug -82>
2023-12-07T13:35:47.730+0000 7f4cd855f700 1 mds.0.41 handle_mds_map i
am now mds.0.41
Dec 07 15:35:48 node02 bash[692598]: debug -81>
2023-12-07T13:35:47.734+0000 7f4cd855f700 1 mds.0.41 handle_mds_map
state change up:standby --> up:replay
Dec 07 15:35:48 node02 bash[692598]: debug -80>
2023-12-07T13:35:47.734+0000 7f4cd855f700 5
mds.beacon.storage.node02.hemalk set_want_state: up:standby -> up:replay
Dec 07 15:35:48 node02 bash[692598]: debug -79>
2023-12-07T13:35:47.734+0000 7f4cd855f700 1 mds.0.41 replay_start
Dec 07 15:35:48 node02 bash[692598]: debug -78>
2023-12-07T13:35:47.734+0000 7f4cd855f700 2 mds.0.41 Booting: 0:
opening inotable
Dec 07 15:35:48 node02 bash[692598]: debug -77>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient:
_send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -76>
2023-12-07T13:35:47.734+0000 7f4cd855f700 2 mds.0.41 Booting: 0:
opening sessionmap
Dec 07 15:35:48 node02 bash[692598]: debug -75>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient:
_send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -74>
2023-12-07T13:35:47.734+0000 7f4cd855f700 2 mds.0.41 Booting: 0:
opening mds log
Dec 07 15:35:48 node02 bash[692598]: debug -73>
2023-12-07T13:35:47.734+0000 7f4cd855f700 5 mds.0.log open
discovering log bounds
Dec 07 15:35:48 node02 bash[692598]: debug -72>
2023-12-07T13:35:47.734+0000 7f4cd855f700 2 mds.0.41 Booting: 0:
opening purge queue (async)
Dec 07 15:35:48 node02 bash[692598]: debug -71>
2023-12-07T13:35:47.734+0000 7f4cd855f700 4 mds.0.purge_queue open:
opening
Dec 07 15:35:48 node02 bash[692598]: debug -70>
2023-12-07T13:35:47.734+0000 7f4cd855f700 1 mds.0.journaler.pq(ro)
recover start
Dec 07 15:35:48 node02 bash[692598]: debug -69>
2023-12-07T13:35:47.734+0000 7f4cd855f700 1 mds.0.journaler.pq(ro)
read_head
Dec 07 15:35:48 node02 bash[692598]: debug -68>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient:
_send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -67>
2023-12-07T13:35:47.734+0000 7f4cd855f700 2 mds.0.41 Booting: 0:
loading open file table (async)
Dec 07 15:35:48 node02 bash[692598]: debug -66>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient:
_send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -65>
2023-12-07T13:35:47.734+0000 7f4cd855f700 2 mds.0.41 Booting: 0:
opening snap table
Dec 07 15:35:48 node02 bash[692598]: debug -64>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient:
_send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -63>
2023-12-07T13:35:47.734+0000 7f4cd1d52700 4 mds.0.journalpointer
Reading journal pointer '400.00000000'
Dec 07 15:35:48 node02 bash[692598]: debug -62>
2023-12-07T13:35:47.734+0000 7f4cd1d52700 10 monclient:
_send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -61>
2023-12-07T13:35:47.734+0000 7f4cd4557700 2 mds.0.cache Memory usage:
total 316452, rss 43088, heap 198940, baseline 198940, 0 / 0 inodes
have caps, 0 caps, 0 caps per inode
Dec 07 15:35:48 node02 bash[692598]: debug -60>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: _renew_subs
Dec 07 15:35:48 node02 bash[692598]: debug -59>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient:
_send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -58>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient:
handle_get_version_reply finishing 1 version 10835
Dec 07 15:35:48 node02 bash[692598]: debug -57>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient:
handle_get_version_reply finishing 2 version 10835
Dec 07 15:35:48 node02 bash[692598]: debug -56>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient:
handle_get_version_reply finishing 3 version 10835
Dec 07 15:35:48 node02 bash[692598]: debug -55>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient:
handle_get_version_reply finishing 4 version 10835
Dec 07 15:35:48 node02 bash[692598]: debug -54>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient:
handle_get_version_reply finishing 5 version 10835
Dec 07 15:35:48 node02 bash[692598]: debug -53>
2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient:
handle_get_version_reply finishing 6 version 10835
Dec 07 15:35:48 node02 bash[692598]: debug -52>
2023-12-07T13:35:47.734+0000 7f4cdb565700 10 monclient:
get_auth_request con 0x55c280bc5800 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -51>
2023-12-07T13:35:47.734+0000 7f4cdbd66700 10 monclient:
get_auth_request con 0x55c280dc6800 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -50>
2023-12-07T13:35:47.734+0000 7f4cdad64700 10 monclient:
get_auth_request con 0x55c280dc7800 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -49>
2023-12-07T13:35:47.734+0000 7f4cd3555700 1 mds.0.journaler.pq(ro)
_finish_read_head loghead(trim 14789115904, expire 14789452521, write
14789452521, stream_format 1). probing for end of log (from
14789452521)...
Dec 07 15:35:48 node02 bash[692598]: debug -48>
2023-12-07T13:35:47.734+0000 7f4cd3555700 1 mds.0.journaler.pq(ro)
probing for end of the log
Dec 07 15:35:48 node02 bash[692598]: debug -47>
2023-12-07T13:35:47.738+0000 7f4cd1d52700 1 mds.0.journaler.mdlog(ro)
recover start
Dec 07 15:35:48 node02 bash[692598]: debug -46>
2023-12-07T13:35:47.738+0000 7f4cd1d52700 1 mds.0.journaler.mdlog(ro)
read_head
Dec 07 15:35:48 node02 bash[692598]: debug -45>
2023-12-07T13:35:47.738+0000 7f4cd1d52700 4 mds.0.log Waiting for
journal 0x200 to recover...
Dec 07 15:35:48 node02 bash[692598]: debug -44>
2023-12-07T13:35:47.738+0000 7f4cdbd66700 10 monclient:
get_auth_request con 0x55c280dc7c00 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -43>
2023-12-07T13:35:47.738+0000 7f4cd2553700 1 mds.0.journaler.mdlog(ro)
_finish_read_head loghead(trim 1416940748800, expire 1416947000701,
write 1417125359769, stream_format 1). probing for end of log (from
1417125359769)...
Dec 07 15:35:48 node02 bash[692598]: debug -42>
2023-12-07T13:35:47.738+0000 7f4cd2553700 1 mds.0.journaler.mdlog(ro)
probing for end of the log
Dec 07 15:35:48 node02 bash[692598]: debug -41>
2023-12-07T13:35:47.738+0000 7f4cdb565700 10 monclient:
get_auth_request con 0x55c280e2fc00 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -40>
2023-12-07T13:35:47.738+0000 7f4cdad64700 10 monclient:
get_auth_request con 0x55c280ee0400 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -39>
2023-12-07T13:35:47.738+0000 7f4cd2553700 1 mds.0.journaler.mdlog(ro)
_finish_probe_end write_pos = 1417129492480 (header had
1417125359769). recovered.
Dec 07 15:35:48 node02 bash[692598]: debug -38>
2023-12-07T13:35:47.738+0000 7f4cd1d52700 4 mds.0.log Journal 0x200
recovered.
Dec 07 15:35:48 node02 bash[692598]: debug -37>
2023-12-07T13:35:47.738+0000 7f4cd1d52700 4 mds.0.log Recovered
journal 0x200 in format 1
Dec 07 15:35:48 node02 bash[692598]: debug -36>
2023-12-07T13:35:47.738+0000 7f4cd1d52700 2 mds.0.41 Booting: 1:
loading/discovering base inodes
Dec 07 15:35:48 node02 bash[692598]: debug -35>
2023-12-07T13:35:47.738+0000 7f4cd1d52700 0 mds.0.cache creating
system inode with ino:0x100
Dec 07 15:35:48 node02 bash[692598]: debug -34>
2023-12-07T13:35:47.738+0000 7f4cd1d52700 0 mds.0.cache creating
system inode with ino:0x1
Dec 07 15:35:48 node02 bash[692598]: debug -33>
2023-12-07T13:35:47.742+0000 7f4cdbd66700 10 monclient:
get_auth_request con 0x55c280dc7400 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -32>
2023-12-07T13:35:47.742+0000 7f4cd2553700 2 mds.0.41 Booting: 2:
replaying mds log
Dec 07 15:35:48 node02 bash[692598]: debug -31>
2023-12-07T13:35:47.742+0000 7f4cd2553700 2 mds.0.41 Booting: 2:
waiting for purge queue recovered
Dec 07 15:35:48 node02 bash[692598]: debug -30>
2023-12-07T13:35:47.742+0000 7f4cd3555700 1 mds.0.journaler.pq(ro)
_finish_probe_end write_pos = 14797504512 (header had 14789452521).
recovered.
Dec 07 15:35:48 node02 bash[692598]: debug -29>
2023-12-07T13:35:47.742+0000 7f4cd3555700 4 mds.0.purge_queue
operator(): open complete
Dec 07 15:35:48 node02 bash[692598]: debug -28>
2023-12-07T13:35:47.742+0000 7f4cd3555700 4 mds.0.purge_queue
operator(): recovering write_pos
Dec 07 15:35:48 node02 bash[692598]: debug -27>
2023-12-07T13:35:47.742+0000 7f4cdb565700 10 monclient:
get_auth_request con 0x55c280bc5c00 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -26>
2023-12-07T13:35:47.742+0000 7f4cdad64700 10 monclient:
get_auth_request con 0x55c280ee0c00 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -25>
2023-12-07T13:35:47.746+0000 7f4cd3555700 0 mds.0.journaler.pq(ro)
_finish_read got error -2
Dec 07 15:35:48 node02 bash[692598]: debug -24>
2023-12-07T13:35:47.746+0000 7f4cd3555700 -1 mds.0.purge_queue
_recover: Error -2 recovering write_pos
Dec 07 15:35:48 node02 bash[692598]: debug -23>
2023-12-07T13:35:47.746+0000 7f4cd3555700 1 mds.0.purge_queue
_go_readonly: going readonly because internal IO failed: No such file
or directory
Dec 07 15:35:48 node02 bash[692598]: debug -22>
2023-12-07T13:35:47.746+0000 7f4cd3555700 1 mds.0.journaler.pq(ro)
set_readonly
Dec 07 15:35:48 node02 bash[692598]: debug -21>
2023-12-07T13:35:47.746+0000 7f4cd3555700 -1 mds.0.41 unhandled write
error (2) No such file or directory, force readonly...
Dec 07 15:35:48 node02 bash[692598]: debug -20>
2023-12-07T13:35:47.746+0000 7f4cd3555700 1 mds.0.cache force file
system read-only
Dec 07 15:35:48 node02 bash[692598]: debug -19>
2023-12-07T13:35:47.746+0000 7f4cd3555700 0 log_channel(cluster) log
[WRN] : force file system read-only
---snip---
Dear fellow cephers,
today we observed a somewhat worrisome inconsistency on our ceph fs. A file created on one host showed up as 0 length on all other hosts:
[user1@host1 h2lib]$ ls -lh
total 37M
-rw-rw---- 1 user1 user1 12K Nov 1 11:59 dll_wrapper.py
[user2@host2 h2lib]# ls -l
total 34
-rw-rw----. 1 user1 user1 0 Nov 1 11:59 dll_wrapper.py
[user1@host1 h2lib]$ cp dll_wrapper.py dll_wrapper.py.test
[user1@host1 h2lib]$ ls -l
total 37199
-rw-rw---- 1 user1 user1 11641 Nov 1 11:59 dll_wrapper.py
-rw-rw---- 1 user1 user1 11641 Nov 1 13:10 dll_wrapper.py.test
[user2@host2 h2lib]# ls -l
total 45
-rw-rw----. 1 user1 user1 0 Nov 1 11:59 dll_wrapper.py
-rw-rw----. 1 user1 user1 11641 Nov 1 13:10 dll_wrapper.py.test
Executing a sync on all these hosts did not help. However, deleting the problematic file and replacing it with a copy seemed to work around the issue. We saw this with ceph kclients of different versions; it seems to be on the MDS side.
How can this happen and how dangerous is it?
ceph fs status (showing ceph version):
# ceph fs status
con-fs2 - 1662 clients
=======
RANK STATE MDS ACTIVITY DNS INOS
0 active ceph-15 Reqs: 14 /s 2307k 2278k
1 active ceph-11 Reqs: 159 /s 4208k 4203k
2 active ceph-17 Reqs: 3 /s 4533k 4501k
3 active ceph-24 Reqs: 3 /s 4593k 4300k
4 active ceph-14 Reqs: 1 /s 4228k 4226k
5 active ceph-13 Reqs: 5 /s 1994k 1782k
6 active ceph-16 Reqs: 8 /s 5022k 4841k
7 active ceph-23 Reqs: 9 /s 4140k 4116k
POOL TYPE USED AVAIL
con-fs2-meta1 metadata 2177G 7085G
con-fs2-meta2 data 0 7085G
con-fs2-data data 1242T 4233T
con-fs2-data-ec-ssd data 706G 22.1T
con-fs2-data2 data 3409T 3848T
STANDBY MDS
ceph-10
ceph-08
ceph-09
ceph-12
MDS version: ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4) octopus (stable)
There is no health issue:
# ceph status
cluster:
id: abc
health: HEALTH_WARN
3 pgs not deep-scrubbed in time
services:
mon: 5 daemons, quorum ceph-01,ceph-02,ceph-03,ceph-25,ceph-26 (age 9w)
mgr: ceph-25(active, since 7w), standbys: ceph-26, ceph-01, ceph-03, ceph-02
mds: con-fs2:8 4 up:standby 8 up:active
osd: 1284 osds: 1279 up (since 2d), 1279 in (since 5d)
task status:
data:
pools: 14 pools, 25065 pgs
objects: 2.20G objects, 3.9 PiB
usage: 4.9 PiB used, 8.2 PiB / 13 PiB avail
pgs: 25039 active+clean
26 active+clean+scrubbing+deep
io:
client: 799 MiB/s rd, 55 MiB/s wr, 3.12k op/s rd, 1.82k op/s wr
The inconsistency seems undiagnosed; I couldn't find anything interesting in the cluster log. What should I look for and where?
I moved the folder to another location for diagnosis. Unfortunately, I no longer have two clients showing different numbers; I now see 0 length everywhere for the moved folder. I'm pretty sure, though, that the file still has a non-zero length.
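In case it is useful, this is roughly how I intend to cross-check what the data pool and the MDS think about such a file (a sketch; the inode number, pool and MDS names are placeholders, and I'm not sure "dump inode" is available on all versions):
# On a client: inode number and the data pool the file layout points at
ls -i dll_wrapper.py
getfattr -n ceph.file.layout.pool dll_wrapper.py
# Convert the inode to hex and stat the first object directly in RADOS
printf '%x\n' <inode_number>
rados -p <layout_pool> stat <hex_inode>.00000000
# Ask the MDS what it thinks about the inode (size, caps), if "dump inode" is supported
ceph tell mds.<mds_name> dump inode <inode_number>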
Thanks for any pointers.
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
Hi guys!
Based on our observation of the impact of the balancer on the
performance of the entire cluster, we have drawn conclusions that we
would like to discuss with you.
- A newly created pool should be balanced before being handed over
to the user. This, I believe, is quite evident.
- When replacing a disk, it is advisable to exchange it directly
for a new one. As soon as the OSD replacement is done, the balancer
should be invoked to realign any PGs that were misplaced during the
disk outage and recovery.
Perhaps an even better method is to pause recovery and backfilling
before removing the disk, remove the disk itself, promptly add a new
one, and then resume recovery and backfilling (see the sketch below).
It's essential to perform all of this as quickly as possible (using a script).
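A rough sketch of the flag handling we have in mind (the OSD id is a placeholder, and the replacement command depends on the deployment tooling):
# Pause data movement before pulling the disk
ceph osd set norecover
ceph osd set nobackfill
ceph osd set norebalance
# ... swap the disk and redeploy the OSD under the same id,
#     e.g. "ceph orch osd rm <osd_id> --replace" when using cephadm ...
# Resume data movement once the new OSD is up and in
ceph osd unset norecover
ceph osd unset nobackfill
ceph osd unset norebalance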
Note: we are using a community balancer developed by Jonas Jelten because
the built-in one does not meet our requirements.
What are your thoughts on this?
Michal
Details of this release are summarized here:
https://tracker.ceph.com/issues/63443#note-1
Seeking approvals/reviews for:
smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures)
rados - Neha, Radek, Travis, Ernesto, Adam King
rgw - Casey
fs - Venky
orch - Adam King
rbd - Ilya
krbd - Ilya
upgrade/quincy-x (reef) - Laura PTL
powercycle - Brad
perf-basic - Laura, Prashant (POOL_APP_NOT_ENABLE failures)
Please reply to this email with approval and/or trackers of known
issues/PRs to address them.
TIA
YuriW
Hi,
I have an Openstack platform deployed with Yoga and ceph-ansible pacific on
Rocky 8.
Now I need to do an upgrade to Openstack zed with octopus on Rocky 9.
This is the upgrade path I have traced:
- upgrade my nodes to Rocky 9 keeping Openstack yoga with ceph-ansible
pacific.
- convert ceph pacific from ceph-ansible to cephadm.
- stop Openstack platform yoga
- upgrade ceph pacific to octopus
- upgrade Openstack yoga to zed.
Any thoughts or guidelines to keep in mind and follow regarding the Ceph
conversion and upgrade?
PS: on my Ceph cluster I have RBD, RGW and CephFS pools.
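For the ceph-ansible to cephadm conversion step, what I have in mind is the documented adoption flow, roughly like this (a sketch; hostnames and OSD ids are placeholders):
# On each node, with a cephadm binary matching the running Pacific release
cephadm adopt --style legacy --name mon.$(hostname -s)
cephadm adopt --style legacy --name mgr.$(hostname -s)
# Repeat for every OSD id hosted on the node
cephadm adopt --style legacy --name osd.<id>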
Regards.
Hi All,
Recently I upgraded my cluster from Quincy to Reef. Everything appeared to go smoothly and without any issues arising.
I was forced to power off the cluster, performing the usual procedures beforehand, and everything appears to have come back fine. Every service reports green across the board except...
If I try to copy any files from a CephFS mountpoint, whether kernel or FUSE, the actual copy will hang. ls/stat etc. all work, which indicates the metadata appears fine, but copying always hangs.
I can copy objects directly using the rados toolset, which indicates the underlying data exists.
The system itself reports no errors and thinks it's healthy.
The entire cluster and the CephFS clients are all Rocky 9.
Any advice would be much appreciated. I'd find this easier to deal with if the cluster actually gave me an error...
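If it helps, these are the kinds of checks I can run and share output from (a rough sketch; the MDS daemon name is a placeholder):
# Any warnings about clients failing to respond to capability release?
ceph health detail
# Operations stuck in the MDS
ceph tell mds.<mds_name> dump_ops_in_flight
# On a kernel client: in-flight requests towards the OSDs and the MDS
cat /sys/kernel/debug/ceph/*/osdc
cat /sys/kernel/debug/ceph/*/mdsc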
Hi community,
I have multiple buckets that were deleted, but the lifecycle configurations
of those buckets still exist. How can I delete them with radosgw-admin? The
user can't access the buckets to delete the lifecycle, and the user for
these buckets no longer exists.
root@ceph:~# radosgw-admin lc list
[
{
"bucket": ":r30203:f3fec4b6-a248-4f3f-be75-b8055e61233a.33081.8",
"started": "Wed, 06 Dec 2023 10:43:55 GMT",
"status": "COMPLETE"
},
{
"bucket": ":r30304:f3fec4b6-a248-4f3f-be75-b8055e61233a.33081.13",
"started": "Wed, 06 Dec 2023 10:43:54 GMT",
"status": "COMPLETE"
},
{
"bucket":
":ec3204cam04:f3fec4b6-a248-4f3f-be75-b8055e61233a.31736.1",
"started": "Wed, 06 Dec 2023 10:44:30 GMT",
"status": "COMPLETE"
},
{
"bucket": ":r30105:f3fec4b6-a248-4f3f-be75-b8055e61233a.33081.5",
"started": "Wed, 06 Dec 2023 10:44:40 GMT",
"status": "COMPLETE"
},
{
"bucket": ":r30303:f3fec4b6-a248-4f3f-be75-b8055e61233a.33081.14",
"started": "Wed, 06 Dec 2023 10:44:40 GMT",
"status": "COMPLETE"
},
{
"bucket":
":ec3201cam02:f3fec4b6-a248-4f3f-be75-b8055e61233a.56439.2",
"started": "Wed, 06 Dec 2023 10:43:56 GMT",
"status": "COMPLETE"
},
Thanks to the community.