Hi Casey,
thank you for the update. Now it's clear why it happened, and we will adapt
our code for multipart.
Cheers,
Arvydas
On Tue, Oct 10, 2023, 18:36 Casey Bodley <cbodley(a)redhat.com> wrote:
hi Arvydas,
it looks like this change corresponds to
https://tracker.ceph.com/issues/48322 and
https://github.com/ceph/ceph/pull/38234. the intent was to enforce the
same limitation as AWS S3 and force clients to use multipart copy
instead. this limit is controlled by the config option
rgw_max_put_size which defaults to 5G. the same option controls other
operations like Put/PostObject, so i wouldn't recommend raising it as
a workaround for copy
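as a rough, untested sketch with the Ruby SDK (the endpoint, credentials,
bucket and key names below are just placeholders), copy_from with
multipart_copy: true should push large copies through the multipart path
instead of a single CopyObject:

    # rough sketch, untested; assumes the aws-sdk-s3 v3 gem and placeholder
    # endpoint/credentials - adjust for your RGW setup
    require 'aws-sdk-s3'

    s3 = Aws::S3::Resource.new(
      endpoint:          'https://rgw.example.com',  # placeholder RGW endpoint
      access_key_id:     ENV['S3_ACCESS_KEY'],
      secret_access_key: ENV['S3_SECRET_KEY'],
      region:            'default',
      force_path_style:  true
    )

    # multipart_copy: true makes the SDK issue UploadPartCopy requests
    # instead of one CopyObject, so each part stays under the 5G limit
    dest = s3.bucket('dest-bucket').object('dest-key')
    dest.copy_from('source-bucket/source-key', multipart_copy: true)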
this change really should have been mentioned in the release notes -
apologies for that omission
On Tue, Oct 10, 2023 at 10:58 AM Arvydas Opulskis <zebediejus(a)gmail.com>
wrote:
Hi all,
after upgrading our cluster from Nautilus -> Pacific -> Quincy, we noticed
we can no longer copy bigger objects via S3.
An error we get:
"Aws::S3::Errors::EntityTooLarge (Aws::S3::Errors::EntityTooLarge)"
After some tests we have the following findings:
* Problems start for objects bigger than 5 GB (the multipart limit)
* The issue starts after upgrading to Quincy (17.2.6). In the latest Pacific
(16.2.13) it works fine.
* For Quincy it works OK with the AWS S3 CLI "cp" command, but doesn't work
using the AWS Ruby3 SDK client with the copy_object command (see the snippet
after this list).
* For the Pacific setup both clients work OK.
* From the RGW logs it seems the AWS S3 CLI client handles multipart copying
"under the hood", so it is successful.
The AWS documentation states that for uploads (and copies) bigger than 5 GB
we should use the multipart API for AWS S3. For some reason it worked for
years in Ceph and stopped working after the Quincy release, and I couldn't
find anything in the release notes addressing this change.
So, is this change permanent and should it be considered a bug fix?
Both Pacific and Quincy clusters were running on Rocky 8.6 OS, using the
Beast frontend.
Arvydas
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io