Hi Amit,
Yes, in the non-ec pool there are about 600 .meta files, but I don't know if it is safe to move them to
the data pool.
Does anyone know of a way to generate a synthetic .meta object, so that the delete
command is able to delete the file?
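One possible approach, sketched below: recreate the missing .meta object as an empty placeholder with "rados create". It is an unverified assumption that a zero-length object with the right name is enough for radosgw-admin to proceed; the pool name is taken from this thread, and the object name is a placeholder for whatever the error message prints.

```shell
# Sketch only: build the name of the missing .meta object and recreate it empty.
# Unverified assumption: an empty object with the right name unblocks the delete.
POOL="default.rgw.buckets.non-ec"              # pool name as discussed in this thread
OBJ_FROM_ERROR="<marker>__multipart_<object>"  # placeholder: the name printed in the error
META_NAME="${OBJ_FROM_ERROR}.meta"
echo "$META_NAME"
# "rados create" writes a zero-length object:
#   rados -p "$POOL" create "$META_NAME"
```

Test this on a single object first before scripting it across all 600 .meta entries.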
Regards
Manuel
From: Amit Ghadge <amitg.b14(a)gmail.com>
Sent: Monday, 29 June 2020 6:14
To: EDH - Manuel Rios <mriosfer(a)easydatahost.com>
Asunto: Re: [ceph-users] rgw : unable to find part(s) of aborted multipart upload of
[object].meta
You can also check the default.rgw.buckets.non-ec pool for unmerged multipart entries or bucket
indexes that need to be fixed.
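A sketch of that check: list the non-ec pool and keep only entries that look like multipart metadata for one bucket. The marker id is a hypothetical placeholder, and the destructive-free rados listing is left as a comment so the filter can be demonstrated on sample input.

```shell
# Sketch: filter a pool listing down to multipart metadata for one bucket.
# MARKER is a hypothetical placeholder for the bucket's marker id.
MARKER="<bucket-marker-id>"

list_multipart_meta() {
    # Real input would come from: rados -p default.rgw.buckets.non-ec ls
    grep "^${MARKER}" | grep "multipart"
}

found=$(printf '%s\n' \
    "${MARKER}__multipart_photo.jpg.2~abc.meta" \
    "unrelated-object" | list_multipart_meta)
echo "$found"
```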
On Mon, Jun 29, 2020 at 5:56 AM EDH - Manuel Rios
<mriosfer@easydatahost.com> wrote:
Hi Devs,
Because of the bucket failures caused by sharding in previous versions, we have started
copying the buckets to new buckets in order to clean up our Ceph cluster.
After synchronizing each bucket with the AWS CLI, we are now in the phase of deleting the old
buckets.
We have tried, unsuccessfully: radosgw-admin bucket rm --bucket=XXXX --purge-objects
The result is a loop that, with debug enabled, shows: "NOTE: unable to find part(s) of aborted
multipart upload of [object].meta"
After seeing this failure, we tried to clean up using "rados".
To do this, we listed all the objects belonging to the bucket in the pool, using its
marker_id.
Once that was done, we ran a script to delete them in bulk, with "rados -p [rgw-pool.data]
rm [object]"
In every case the result is similar to the following:
rados -p default.rgw.buckets.data rm
48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3886182.18__multipart_MBS-3369403d-e0bf-45e3-89ba-614b6d390dc5/CBB_BIM-EURODG/CBB_DiskImage/Disk_00000000-0000-0000-0000-000000000000/Volume_NTFS_00000000-0000-0000-0000-000000000001$/20200104230152/131.cbrevision.5K5_leiUZoHQsjBvUxw2QbM1WQPQLlc
error removing
default.rgw.buckets.data>48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3886182.18__multipart_MBS-3369403d-e0bf-45e3-89ba-614b6d390dc5/CBB_BIM-EURODG/CBB_DiskImage/Disk_00000000-0000-0000-0000-000000000000/Volume_NTFS_00000000-0000-0000-0000-000000000001$/20200104230152/131.cbrevision.5K5_leiUZoHQsjBvUxw2QbM1WQPQLlc:
(2) No such file or directory
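The listing-and-delete procedure described above can be sketched as follows. The destructive rados calls are left as comments so the marker-prefix filter can be demonstrated on sample input; the marker id shown is the one from the error output above.

```shell
# Sketch of the bulk delete: keep only objects whose rados name starts with
# this bucket's marker id, then remove them. Marker id taken from the thread.
MARKER="48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3886182.18"

keep_bucket_objects() {
    # Keep only rados objects belonging to this bucket (marker-id prefix).
    grep "^${MARKER}__"
}

# Real usage would be:
#   rados -p default.rgw.buckets.data ls | keep_bucket_objects | \
#     while IFS= read -r obj; do rados -p default.rgw.buckets.data rm "$obj"; done
# Demonstration on a synthetic listing:
matched=$(printf '%s\n' \
    "${MARKER}__multipart_MBS-example/part.1" \
    "other-marker.1.2__shadow_foo" | keep_bucket_objects)
echo "$matched"
```

Note that "rados rm" returns "(2) No such file or directory" for names that no longer exist in the data pool, which matches the error shown above; filtering the live listing rather than a stale one avoids most of those.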
Any idea, or a better way, to clean these objects out of the cluster?
We estimate around 100 TB of stale objects.
Regards
Manuel
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io