Ingo

We had the luxury of being able to shut down the S3 interface for a couple of minutes while we fixed the buckets - the whole operation took ~3-5 minutes, so we didn't have to lock the buckets.
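
For the shutdown we only touched the gateway daemons, nothing else - roughly like this on every gateway host (a sketch; it assumes a systemd deployment with the stock ceph-radosgw units):

# stop every radosgw instance on this host
systemctl stop ceph-radosgw.target
# ... fix the bucket metadata ...
systemctl start ceph-radosgw.target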

Jacek

On Mon, 9 Dec 2019 at 15:31, Ingo Reimann <ireimann@dunkel.de> wrote:
Hi Jacek,

thanks! I wanted to follow exactly that plan with the metadata!

For moving the index objects to the proper pool - did you lock the buckets somehow? I wanted to avoid moving the indices in the first step, since the placement targets allow leaving everything as it is.

We have a lot of traffic on those buckets, so I thought about modifying the reshard command so that it stores the "resharded" index in the new place. Right now I am not sure whether I need that, and whether I can make the right modifications to the code.

Kind regards,
Ingo


Von: "Jacek Suchenia" <jacek.suchenia@gmail.com>
An: "Ingo Reimann" <ireimann@dunkel.de>, "ceph-users" <ceph-users@ceph.io>
Gesendet: Montag, 9. Dezember 2019 13:34:26
Betreff: Re: [ceph-users] Re: nautilus radosgw fails with pre jewel buckets - index objects not at right place

Ingo
Yes, we had to use a multicore machine to do this efficiently ;-)
The update procedure is very similar to the commands described here: https://docs.ceph.com/docs/master/radosgw/layout/

so radosgw-admin metadata get bucket.instance:<bucket>:<instance>
then fix the JSON
then radosgw-admin metadata put bucket.instance:<bucket>:<instance>
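
As a concrete round trip (a sketch - the bucket name and instance id are taken from your dump below, and on Nautilus the put is rejected until the check I mention in my mail below is commented out):

radosgw-admin metadata get bucket.instance:testtesttesty:default.2542971.19 > bucket.json
# edit bucket.json: point explicit_placement (index_pool, data_pool) at the
# pools where the objects actually live
radosgw-admin metadata put bucket.instance:testtesttesty:default.2542971.19 < bucket.json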

Good luck!
PS: Using this tweak we were also able to move some index objects from the storage pool to the index pool, but you have to migrate the objects along with their OMAP keys.
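
Roughly what that looked like - a sketch, not a drop-in script: it assumes the bucket is quiesced, that the OMAP keys are plain object names (versioned buckets use binary-prefixed keys, where a small librados program is safer), and that your rados binary reads the value from stdin when the value argument is omitted:

# index objects are named .dir.<marker>; the data part is empty, the
# index itself lives in OMAP (marker taken from the dump below as an example)
OBJ=.dir.default.2542971.19
SRC=rgw.buckets
DST=rgw.buckets.index
rados -p "$SRC" stat "$OBJ"          # sanity check: source object exists
rados -p "$DST" create "$OBJ"
rados -p "$SRC" listomapkeys "$OBJ" | while read -r key; do
    rados -p "$SRC" getomapval "$OBJ" "$key" /tmp/omap.val
    rados -p "$DST" setomapval "$OBJ" "$key" < /tmp/omap.val
done
# verify with listomapvals on both sides before removing the source:
# rados -p "$SRC" rm "$OBJ"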

Jacek

On Mon, 9 Dec 2019 at 08:39, Ingo Reimann <ireimann@dunkel.de> wrote:
Hi Jacek,

thanks for your help. I will try to inject my new placement target "pre-jewel" as the placement_rule for those buckets with the modified radosgw-admin. Right now I am struggling with the compile process.

kind regards,
Ingo

Von: "Jacek Suchenia" <jacek.suchenia@gmail.com>
An: "ceph-users" <ceph-users@ceph.io>
Gesendet: Sonntag, 8. Dezember 2019 16:47:16
Betreff: [ceph-users] Re: nautilus radosgw fails with pre jewel buckets - index objects not at right place

Hello Ingo,

We had the same issue. The fix is to update the bucket instance metadata. Unfortunately the source code contains a special check that prevents exactly this kind of change.

So to fix a bucket you have to comment out those two lines, compile radosgw-admin, and update the bucket instance metadata with the proper location (in my case the entries were wrong or empty).
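
If the compile is the sticking point: you only need the one binary. Roughly (a sketch - install-deps.sh and do_cmake.sh ship in the ceph source tree, and whether the build uses make or ninja depends on the release):

git clone --branch nautilus --recursive https://github.com/ceph/ceph.git
cd ceph
./install-deps.sh
./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo
cd build
make -j"$(nproc)" radosgw-admin    # or: ninja radosgw-admin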

Jacek

On Mon, 2 Dec 2019 at 12:28, Ingo Reimann <ireimann@dunkel.de> wrote:
Hi,

2 years after my issue https://tracker.ceph.com/issues/22928, the next one fires back.

The Problem:
Old buckets have their index and data in rgw.buckets:
root@cephrgw01:~# radosgw-admin metadata get bucket:testtesttesty
{
    "key": "bucket:testtesttesty",
    "ver": {
        "tag": "_E_OHNhD28Zu1DeuvyGq8Q8b",
        "ver": 1
    },
    "mtime": "2013-11-11 09:25:56.000000Z",
    "data": {
        "bucket": {
            "name": "testtesttesty",
            "marker": "default.2542971.19",
            "bucket_id": "default.2542971.19",
            "tenant": "",
            "explicit_placement": {
                "data_pool": "rgw.buckets",
                "data_extra_pool": "",
                "index_pool": "rgw.buckets"
            }
        },
        "owner": "123",
        "creation_time": "2013-11-11 09:25:56.000000Z",
        "linked": "true",
        "has_bucket_info": "false"
    }
}

After the upgrade from Luminous to Nautilus I get 400 (InvalidArgument) and "NOTICE: invalid dest placement" in the radosgw log when accessing those buckets.

My zone defines:
root@cephrgw01:~# radosgw-admin zone get
{
    "id": "default",
    "name": "default",
    "domain_root": ".rgw",
    "control_pool": ".rgw.control",
    "gc_pool": ".rgw.gc",
    "lc_pool": ".log:lc",
    "log_pool": ".log",
    "intent_log_pool": ".intent-log",
    "usage_log_pool": ".usage",
    "reshard_pool": ".log:reshard",
    "user_keys_pool": ".users",
    "user_email_pool": ".users.email",
    "user_swift_pool": ".users.swift",
    "user_uid_pool": ".users.uid",
    "otp_pool": "default.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "rgw.buckets"
                    }
                },
                "data_extra_pool": "rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "metadata_heap": ".rgw.meta",
    "realm_id": "*********************c"
}

Now I am a little bit lost. I added a new placement target to my zone and zonegroup:
radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id pre-jewel
radosgw-admin zone placement add --rgw-zone default --placement-id pre-jewel --data-pool rgw.buckets --index-pool rgw.buckets --data-extra-pool ""
radosgw-admin period update --commit

root@cephrgw01:~# radosgw-admin zone get
{
    "id": "default",
    "name": "default",
    "domain_root": ".rgw",
    "control_pool": ".rgw.control",
    "gc_pool": ".rgw.gc",
    "lc_pool": ".log:lc",
    "log_pool": ".log",
    "intent_log_pool": ".intent-log",
    "usage_log_pool": ".usage",
    "reshard_pool": ".log:reshard",
    "user_keys_pool": ".users",
    "user_email_pool": ".users.email",
    "user_swift_pool": ".users.swift",
    "user_uid_pool": ".users.uid",
    "otp_pool": "default.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "rgw.buckets"
                    }
                },
                "data_extra_pool": "rgw.buckets.non-ec",
                "index_type": 0
            }
        },
        {
            "key": "pre-jewel",
            "val": {
                "index_pool": "rgw.buckets",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "rgw.buckets"
                    }
                },
                "data_extra_pool": "",
                "index_type": 0
            }
        }
    ],
    "metadata_heap": ".rgw.meta",
    "realm_id": "****************c"
}

Nevertheless, only the Luminous gateways can list my old buckets. As far as I can see, I can only change the placement_rule for new buckets. Is there any chance to make radosgw find the old indices and complete the upgrade to Nautilus?

Many thanks,

Ingo

--
Ingo Reimann
https://www.dunkel.de/
Dunkel GmbH
Philipp-Reis-Straße 2
65795 Hattersheim
Fon: +49 6190 889-100
Fax: +49 6190 889-399
eMail: support@dunkel.de
http://www.Dunkel.de/
Amtsgericht Frankfurt/Main
HRB: 37971
Geschäftsführer: Axel Dunkel
Ust-ID: DE 811622001
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io

--
Jacek Suchenia
jacek.suchenia@gmail.com