Hello, Ceph users,
what is the best way to change the storage layout of all buckets
in radosgw?
I have the default.rgw.buckets.data pool as replicated, and I want to use
an erasure-coded layout instead. One way would be to use cache tiering,
as described here:
https://cephnotes.ksperis.com/blog/2015/04/15/ceph-pool-migration/
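If I read that post correctly, the sequence would be roughly the following
(the EC pool name is the one from my second approach below, and I have not
verified the exact flags on current releases):
# make the existing replicated pool a forwarding cache tier in front of the new EC pool
ceph osd tier add default.rgw.buckets.ecdata default.rgw.buckets.data --force-nonempty
ceph osd tier cache-mode default.rgw.buckets.data forward --yes-i-really-mean-it
# flush all objects down to the EC pool
rados -p default.rgw.buckets.data cache-flush-evict-all
# detach the tier and swap the pool names so radosgw keeps using the old name
ceph osd tier remove default.rgw.buckets.ecdata default.rgw.buckets.data
ceph osd pool rename default.rgw.buckets.data default.rgw.buckets.data.old
ceph osd pool rename default.rgw.buckets.ecdata default.rgw.buckets.data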
Could this be done under a running radosgw? If I read it correctly,
it should be possible, because radosgw is just another RADOS client.
Another possible approach would be to create a new erasure-coded pool and
a new zone placement target, and set it as the default. But how can I migrate
the existing data? If I understand it correctly, the default placement
applies only to newly created buckets.
Something like this:
ceph osd erasure-code-profile set k5m2 k=5 m=2
ceph osd pool create default.rgw.buckets.ecdata erasure k5m2
ceph osd pool application enable default.rgw.buckets.ecdata rgw
radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id ecdata-placement
radosgw-admin zone placement add --rgw-zone default --placement-id ecdata-placement --data-pool default.rgw.buckets.ecdata --index-pool default.rgw.buckets.index --data-extra-pool default.rgw.buckets.non-ec
radosgw-admin zonegroup placement default --rgw-zonegroup default --placement-id ecdata-placement
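(Plus, I assume, committing the period if a realm is configured, and
restarting the gateways afterwards:)
radosgw-admin period update --commit
systemctl restart ceph-radosgw.target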
How do I continue from this point?
And a secondary question: what purpose does the data-extra-pool serve?
Thanks!
-Yenya
--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| https://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
We all agree on the necessity of compromise. We just can't agree on
when it's necessary to compromise. --Larry Wall
Hi,
When trying to log in to RGW via the dashboard, an error appears in the
logs:
ValueError: invalid literal for int() with base 10: '443
ssl_certificate=config://rgw/cert/rgw.test'
This happens with RGW configured with SSL; if RGW is without SSL,
everything works fine.
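For reference, the frontend option is set along these lines (reconstructed
from the error message, so the exact string is an assumption on my part):
ceph config set client.rgw rgw_frontends "beast ssl_port=443 ssl_certificate=config://rgw/cert/rgw.test"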
ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
Please tell me how to solve this.
Hi All,
Just successfully(?) completed a "live" update of the first node of a
Ceph Quincy cluster from RL8 to RL9. Everything "seems" to be working -
EXCEPT the iSCSI Gateway on that box.
During the update the ceph-iscsi package was removed (ie
`ceph-iscsi-3.6-2.g97f5b02.el8.noarch.rpm` - this is the latest package
available from the Ceph Repos). So, obviously, I reinstalled the package.
However, `dnf` is throwing errors (unsurprisingly, as that package is an
el8 package and this box is now running el9): that package requires
python 3.6, and el9 runs with python 3.9 (I believe).
So my questions are: Can I simply "downgrade" python to 3.6, is
there an el9-compatible version of `ceph-iscsi` somewhere, and/or is
there some process I need to follow to get the iSCSI Gateway back up and
running?
Some further info: The next step in my
"happy-happy-fun-time-holiday-ICT-maintenance" was to upgrade the
current Ceph Cluster to use `cephadm` and to go from Ceph-Quincy to
Ceph-Reef - is this my ultimate upgrade path to get the iSCSI G/W back?
BTW the Ceph Cluster is used *only* to provide iSCSI LUNs to an oVirt
(KVM) Cluster front-end. Because it is the holidays I can take the
entire network down (ie shut down all the VMs) to facilitate this update
process, which also means that I can use some other way (ie a non-iSCSI
way - I think) to connect the Ceph SAN Cluster to the oVirt VM-Hosting
Cluster. If *this* is the solution (ie no iSCSI), does anyone have
experience running oVirt off of Ceph in a non-iSCSI way - and could
you be so kind as to provide some pointers/documentation/help?
And before anyone says it, let me: "I broke it, now I own it" :-)
Thanks in advance, and everyone have a Merry Christmas, Heavenly
Hanukkah, Quality Kwanzaa, Really-good (upcoming) Ramadan, and/or a
Happy Holidays.
Cheers
Dulux-Oz
Hello.
We are testing Ceph storage to see whether we can run a service that uploads and stores more than 40 billion files.
So I'd like to check the points below.
1) Maximum number of RADOS Gateway objects that can be stored in one cluster using the bucket index
2) Maximum number of RADOS Gateway objects that can be stored in one bucket
We have looked at the limits on the number of RADOS Gateway objects mentioned in the existing documents, but the number seems to be theoretically unlimited.
If you have operated at this number of files in actual services or products, we would appreciate it if you could share your experience.
Below are the related documents and configuration values.
> Related documents
- https://documentation.suse.com/ses/5.5/html/ses-all/cha-ceph-gw.html
- https://www.ibm.com/docs/en/storage-ceph/6?topic=resharding-limitations-buc…
- https://docs.ceph.com/en/latest/dev/radosgw/bucket_index/
> Related config
- rgw_dynamic_resharding: true
- rgw_max_objs_per_shard: 100000
- rgw_max_dynamic_shards: 65521
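For rough arithmetic, if dynamic resharding stops at rgw_max_dynamic_shards,
those values would imply a per-bucket ceiling of about:
100000 objects/shard * 65521 shards ~= 6.55 billion objects per bucket
so 40 billion objects would presumably have to be spread over several buckets.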