Hi all,
I'm using RGW multisite with Ceph 17.2.5, deployed with Rook.
After some maintenance, I found a number of bucket.sync-status mdlog objects in the log
pool carrying the names of buckets that were deleted during that maintenance.
(test env)
bash-4.4$ rados -p master.rgw.log ls | grep bucket.sync-status | grep test1
bucket.sync-status.a788ebed-10a9-48da-8fd4-709323da68e7:test1:8da53b60-0940-46e1-a821-551347d82d2c.16016.2:5
bucket.sync-status.a788ebed-10a9-48da-8fd4-709323da68e7:test1:8da53b60-0940-46e1-a821-551347d82d2c.16016.2:4
bucket.sync-status.a788ebed-10a9-48da-8fd4-709323da68e7:test1:8da53b60-0940-46e1-a821-551347d82d2c.16016.2:1
bucket.sync-status.a788ebed-10a9-48da-8fd4-709323da68e7:test1:8da53b60-0940-46e1-a821-551347d82d2c.16016.2:7
bucket.sync-status.a788ebed-10a9-48da-8fd4-709323da68e7:test1:8da53b60-0940-46e1-a821-551347d82d2c.16016.2:2
bucket.sync-status.a788ebed-10a9-48da-8fd4-709323da68e7:test1:8da53b60-0940-46e1-a821-551347d82d2c.16016.2:10
bucket.sync-status.a788ebed-10a9-48da-8fd4-709323da68e7:test1:8da53b60-0940-46e1-a821-551347d82d2c.16016.2:6
bucket.sync-status.a788ebed-10a9-48da-8fd4-709323da68e7:test1:8da53b60-0940-46e1-a821-551347d82d2c.16016.2:3
bucket.sync-status.a788ebed-10a9-48da-8fd4-709323da68e7:test1:8da53b60-0940-46e1-a821-551347d82d2c.16016.2:0
bucket.sync-status.a788ebed-10a9-48da-8fd4-709323da68e7:test1:8da53b60-0940-46e1-a821-551347d82d2c.16016.2:9
bucket.sync-status.a788ebed-10a9-48da-8fd4-709323da68e7:test1:8da53b60-0940-46e1-a821-551347d82d2c.16016.2:8
bash-4.4$ radosgw-admin bucket list
[
"rook-ceph-bucket-checker-94e835a7-6356-46bf-a90c-591b23b15959",
"230314",
"cyyoon",
"test23031303"
]
How can I delete the bucket.sync-status mdlog objects left behind by a deleted bucket in
this situation? Should I run an mdlog trim?
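For reference, one thing I considered was removing the stale status objects directly with `rados rm`. This is only a sketch based on my test env above (pool name and bucket name from the listing), and it assumes removing these objects is safe once the bucket itself is gone; please correct me if this approach is wrong:

```shell
# Sketch: enumerate the stale bucket.sync-status objects for the deleted
# bucket "test1" in the log pool, then remove each one with rados rm.
# ASSUMPTION: deleting these objects is safe after the bucket is deleted.
POOL=master.rgw.log
BUCKET=test1

rados -p "$POOL" ls \
  | grep '^bucket.sync-status' \
  | grep ":${BUCKET}:" \
  | while read -r obj; do
      rados -p "$POOL" rm "$obj"
    done
```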
As these objects accumulated in our production environment, we hit a large-omap warning
on the log pool.
root@osd-001:~# radosgw-admin log list| grep [DELETED BUCKET NAME] | wc -l
25636
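To gauge the large-omap impact, I also looked at per-object omap key counts in the log pool. A rough sketch (this assumes the default mdlog shard object naming of meta.log.<period-id>.<shard>, which may differ in your setup):

```shell
# Sketch: print omap key counts for each mdlog shard object in the log
# pool, largest first. ASSUMPTION: shards are named meta.log.<period>.<n>.
POOL=master.rgw.log
for obj in $(rados -p "$POOL" ls | grep '^meta.log'); do
  printf '%s %s\n' "$obj" "$(rados -p "$POOL" listomapkeys "$obj" | wc -l)"
done | sort -k2 -rn | head
```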
Any other ideas about what might be causing this, or anything else we could try to help
diagnose or fix it? Thanks in advance!