We checked with s3cmd --debug and the endpoint is correct (working with
existing buckets succeeds with the same s3cmd config). From what I read,
"max_buckets": 0 means that there is no quota on the number of buckets.
There are also users who have "max_buckets": 1000, and those users hit the
same AccessDenied issue when creating a bucket.
We also tried other bucket names; the issue is the same.
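Since users with "max_buckets": 1000 hit the same error, one quick way to rule user-level settings in or out is to diff the `radosgw-admin user info` output of a failing user against that of a known-good one. A minimal stdlib sketch; the sample documents below are illustrative, not real dumps from our cluster:

```python
import json

def diff_user_info(good: dict, bad: dict) -> dict:
    """Return the top-level fields that differ between two
    `radosgw-admin user info` documents, as (good, bad) pairs."""
    keys = set(good) | set(bad)
    return {k: (good.get(k), bad.get(k))
            for k in sorted(keys) if good.get(k) != bad.get(k)}

# Illustrative samples; real input would come from
# `radosgw-admin user info --uid=<uid>` on each cluster.
good_user = json.loads('{"suspended": 0, "max_buckets": 1000, '
                       '"op_mask": "read, write, delete"}')
bad_user = json.loads('{"suspended": 0, "max_buckets": 0, '
                      '"op_mask": "read, write, delete"}')

print(diff_user_info(good_user, bad_user))
# Only max_buckets differs between these two samples.
```

If nothing differs between a working and a failing user, the cause is more likely outside the user metadata (zonegroup/period configuration, for example).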
On Thu, Mar 30, 2023 at 6:28 PM Boris Behrens <bb(a)kervyn.de> wrote:
Hi Kamil,
is this with all new buckets or only the 'test' bucket? Maybe the name is
already taken?
Can you check s3cmd --debug if you are connecting to the correct endpoint?
Also, I see that the user does not seem to be allowed to create buckets:
...
"max_buckets": 0,
...
Cheers
Boris
On Thu, Mar 30, 2023 at 5:43 PM Kamil Madac <kamil.madac(a)gmail.com> wrote:
Hi Eugen,
It is version 16.2.6. We checked quotas and we can't see any quotas
applied to users. As I wrote, every user is affected. Are there any
non-user or global quotas which could cause that no user can create a
bucket?
Here is example output for a newly created user which also cannot create
buckets:
{
    "user_id": "user123",
    "display_name": "user123",
    "email": "",
    "suspended": 0,
    "max_buckets": 0,
    "subusers": [],
    "keys": [
        {
            "user": "user123",
            "access_key": "ZIYY6XNSC06EU8YPL1AM",
            "secret_key": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        }
    ],
    "swift_keys": [],
    "caps": [
        {
            "type": "buckets",
            "perm": "*"
        }
    ],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
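Since the dump looks clean at first glance, a short stdlib sanity check over such a `radosgw-admin user info` document can make the suspect fields explicit (suspension, an op_mask without write, and the max_buckets value). This is only a sketch; the thread itself shows the meaning of "max_buckets": 0 is unclear, so the script only flags it for manual review rather than judging it:

```python
import json

def check_user(info: dict) -> list:
    """Flag fields in a `radosgw-admin user info` dump that are worth a
    second look when bucket creation returns 403."""
    findings = []
    if info.get("suspended"):
        findings.append("user is suspended")
    if "write" not in info.get("op_mask", ""):
        findings.append("op_mask lacks 'write'")
    mb = info.get("max_buckets")
    if mb is not None and mb <= 0:
        # The meaning of max_buckets == 0 is disputed in this thread,
        # so flag it for manual review instead of judging it here.
        findings.append("max_buckets is %d; verify its meaning on your exact release" % mb)
    return findings

# The dump above, trimmed to the fields this check looks at.
info = json.loads('{"user_id": "user123", "suspended": 0, '
                  '"max_buckets": 0, "op_mask": "read, write, delete"}')
print(check_user(info))
```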
On Thu, Mar 30, 2023 at 1:25 PM Eugen Block <eblock(a)nde.ag> wrote:
> Hi,
>
> what Ceph version is this? Could you have hit some quota?
>
> Quoting Kamil Madac <kamil.madac(a)gmail.com>:
>
> > Hi,
> >
> > One of my customers had a correctly working RGW cluster with two
zones
in
one
zonegroup and since a few days ago users are not able to create
buckets
> and are always getting Access denied. Working with existing buckets
works
> > (like listing/putting objects into existing bucket). The only
operation
> > which is not working is bucket
creation. We also tried to create a
new
> > user, but the behavior is the same, and
he is not able to create the
> > bucket. We tried s3cmd, python script with boto library and also
> Dashboard
> > as admin user. We are always getting Access Denied. Zones are
in-sync.
Has anyone experienced such behavior?
Thanks in advance, here are some outputs:
$ s3cmd -c .s3cfg_python_client mb s3://test
ERROR: Access to bucket 'test' was denied
ERROR: S3 error: 403 (AccessDenied)
Zones are in-sync:
Primary cluster:
# radosgw-admin sync status
  realm 5429b434-6d43-4a18-8f19-a5720a89c621 (solargis-prod)
  zonegroup 00e4b3ff-1da8-4a86-9f52-4300c6d0f149 (solargis-prod-ba)
  zone 6067eec6-a930-45c7-af7d-a7ef2785a2d7 (solargis-prod-ba-dc)
  metadata sync no sync (zone is master)
  data sync source: e84fd242-dbae-466c-b4d9-545990590995 (solargis-prod-ba-hq)
    syncing
    full sync: 0/128 shards
    incremental sync: 128/128 shards
    data is caught up with source
Secondary cluster:
# radosgw-admin sync status
  realm 5429b434-6d43-4a18-8f19-a5720a89c621 (solargis-prod)
  zonegroup 00e4b3ff-1da8-4a86-9f52-4300c6d0f149 (solargis-prod-ba)
  zone e84fd242-dbae-466c-b4d9-545990590995 (solargis-prod-ba-hq)
  metadata sync syncing
    full sync: 0/64 shards
    incremental sync: 64/64 shards
    metadata is caught up with master
  data sync source: 6067eec6-a930-45c7-af7d-a7ef2785a2d7 (solargis-prod-ba-dc)
    syncing
    full sync: 0/128 shards
    incremental sync: 128/128 shards
    data is caught up with source
--
Kamil Madac
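The 403 from s3cmd above arrives with an S3 <Error> XML body; extracting its Code and RequestId (visible with s3cmd --debug) makes it easier to correlate a denial with the matching line in the RGW log. A stdlib sketch; the RequestId/HostId values in the sample are made up:

```python
import xml.etree.ElementTree as ET

def parse_s3_error(body: str) -> dict:
    """Extract the child fields of an S3 <Error> response body."""
    # Encode first: ElementTree rejects str input that carries an
    # XML encoding declaration.
    root = ET.fromstring(body.encode("utf-8"))
    return {child.tag: (child.text or "").strip() for child in root}

# Sample body of the shape an S3 endpoint returns on a denied request;
# the RequestId and HostId values here are placeholders.
body = """<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <RequestId>tx00000000000000000001-default</RequestId>
  <HostId>default-placeholder</HostId>
</Error>"""

err = parse_s3_error(body)
print(err["Code"], err["RequestId"])
```

Grepping the RGW log for that RequestId then shows which permission check rejected the request.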
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io
--
Kamil Madac <https://kmadac.github.io/>
--
The "UTF-8 problems" self-help group will, as an exception, meet in the
large hall this time.