Hello Ingo,

We had the same issue; the fix is to update the bucket instance metadata. Unfortunately, the source code contains a special check that prevents exactly this kind of change:
https://github.com/ceph/ceph/blob/master/src/rgw/rgw_bucket.cc#L3005

So to fix a bucket you have to comment out those two lines, recompile radosgw-admin, and update the bucket instance metadata with the proper placement (in my case the pools were wrong or empty).
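Roughly, the workflow is: dump the bucket instance metadata, rewrite the explicit_placement pools, and put the document back with `radosgw-admin metadata put` using the patched binary. A minimal Python sketch of the JSON rewrite step (the helper name and the trimmed example document are mine; the field names match the dump from `radosgw-admin metadata get`):

```python
import json

def fix_explicit_placement(metadata, data_pool, index_pool):
    """Rewrite explicit_placement in a `radosgw-admin metadata get bucket:<name>` dump."""
    placement = metadata["data"]["bucket"]["explicit_placement"]
    placement["data_pool"] = data_pool
    placement["index_pool"] = index_pool
    placement["data_extra_pool"] = ""  # pre-jewel buckets had no extra pool
    return metadata

# Trimmed example document with the same shape as a metadata dump,
# but with the broken (empty) placement we saw on our cluster:
meta = json.loads("""
{
    "key": "bucket:testtesttesty",
    "data": {
        "bucket": {
            "name": "testtesttesty",
            "explicit_placement": {
                "data_pool": "",
                "data_extra_pool": "",
                "index_pool": ""
            }
        }
    }
}
""")

fixed = fix_explicit_placement(meta, "rgw.buckets", "rgw.buckets")
print(json.dumps(fixed["data"]["bucket"]["explicit_placement"], indent=4))
```

The rewritten document then goes back with `radosgw-admin metadata put bucket:<name> < fixed.json`; without the source patch, the check linked above keeps the old placement, which is why the recompile is needed.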

Jacek

Mon, 2 Dec 2019 at 12:28, Ingo Reimann <ireimann@dunkel.de> wrote:
Hi,

Two years after my issue https://tracker.ceph.com/issues/22928 the next one fires back.

The Problem:
Old buckets have their index and data in rgw.buckets:
root@cephrgw01:~# radosgw-admin metadata get bucket:testtesttesty
{
    "key": "bucket:testtesttesty",
    "ver": {
        "tag": "_E_OHNhD28Zu1DeuvyGq8Q8b",
        "ver": 1
    },
    "mtime": "2013-11-11 09:25:56.000000Z",
    "data": {
        "bucket": {
            "name": "testtesttesty",
            "marker": "default.2542971.19",
            "bucket_id": "default.2542971.19",
            "tenant": "",
            "explicit_placement": {
                "data_pool": "rgw.buckets",
                "data_extra_pool": "",
                "index_pool": "rgw.buckets"
            }
        },
        "owner": "123",
        "creation_time": "2013-11-11 09:25:56.000000Z",
        "linked": "true",
        "has_bucket_info": "false"
    }
}

After upgrading from Luminous to Nautilus, I get 400 (InvalidArgument) and "NOTICE: invalid dest placement" in the radosgw log on every access to those buckets.

My zone defines:
root@cephrgw01:~# radosgw-admin zone get
{
    "id": "default",
    "name": "default",
    "domain_root": ".rgw",
    "control_pool": ".rgw.control",
    "gc_pool": ".rgw.gc",
    "lc_pool": ".log:lc",
    "log_pool": ".log",
    "intent_log_pool": ".intent-log",
    "usage_log_pool": ".usage",
    "reshard_pool": ".log:reshard",
    "user_keys_pool": ".users",
    "user_email_pool": ".users.email",
    "user_swift_pool": ".users.swift",
    "user_uid_pool": ".users.uid",
    "otp_pool": "default.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "rgw.buckets"
                    }
                },
                "data_extra_pool": "rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "metadata_heap": ".rgw.meta",
    "realm_id": "*********************c"
}

Now I am a little bit lost. I added a new placement to my zone and zonegroup:
radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id pre-jewel
radosgw-admin zone placement add --rgw-zonegroup default --placement-id pre-jewel --data-pool rgw.buckets --index-pool rgw.buckets --data-extra-pool ""
radosgw-admin period update --commit

root@cephrgw01:~# radosgw-admin zone get
{
    "id": "default",
    "name": "default",
    "domain_root": ".rgw",
    "control_pool": ".rgw.control",
    "gc_pool": ".rgw.gc",
    "lc_pool": ".log:lc",
    "log_pool": ".log",
    "intent_log_pool": ".intent-log",
    "usage_log_pool": ".usage",
    "reshard_pool": ".log:reshard",
    "user_keys_pool": ".users",
    "user_email_pool": ".users.email",
    "user_swift_pool": ".users.swift",
    "user_uid_pool": ".users.uid",
    "otp_pool": "default.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "rgw.buckets"
                    }
                },
                "data_extra_pool": "rgw.buckets.non-ec",
                "index_type": 0
            }
        },
        {
            "key": "pre-jewel",
            "val": {
                "index_pool": "rgw.buckets",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "rgw.buckets"
                    }
                },
                "data_extra_pool": "",
                "index_type": 0
            }
        }
    ],
    "metadata_heap": ".rgw.meta",
    "realm_id": "****************c"
}

Nevertheless, only the Luminous gateways can list my old buckets. As far as I can see, I can only change the placement_rule for new buckets. Is there any way to make radosgw find the old indices and complete the upgrade to Nautilus?

Many thanks,

Ingo

--
Ingo Reimann
Dunkel GmbH
Philipp-Reis-Straße 2
65795 Hattersheim
Fon: +49 6190 889-100
Fax: +49 6190 889-399
eMail: support@dunkel.de
http://www.Dunkel.de/
Amtsgericht Frankfurt/Main
HRB: 37971
Geschäftsführer: Axel Dunkel
Ust-ID: DE 811622001
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io


--
Jacek Suchenia
jacek.suchenia@gmail.com