Hi,
I noticed that CompleteMultipartUploadResult returns an empty ETag
field when completing a multipart upload in v17.2.3.
I haven't had the chance to verify in which version this changed, and I
can't find anything in the changelog indicating that it is fixed in a
newer version.
The response looks like:
<?xml version="1.0" encoding="UTF-8"?>
<CompleteMultipartUploadResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Location>s3.myceph.com/test-bucket/test.file</Location>
<Bucket>test-bucket</Bucket>
<Key>test.file</Key>
<ETag></ETag>
</CompleteMultipartUploadResult>
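For reference, this is roughly how I reproduce it (a sketch; the
endpoint, part file, and parts manifest are placeholders):

  aws --endpoint-url https://s3.myceph.com s3api create-multipart-upload --bucket test-bucket --key test.file
  aws --endpoint-url https://s3.myceph.com s3api upload-part --bucket test-bucket --key test.file --part-number 1 --upload-id <id> --body part1.bin
  aws --endpoint-url https://s3.myceph.com s3api complete-multipart-upload --bucket test-bucket --key test.file --upload-id <id> --multipart-upload file://parts.json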
I found an old issue with the same symptom that was closed around 9
years ago, so I guess this has been fixed before.
https://tracker.ceph.com/issues/6830
It looks like my tracker account is still not activated, so I can't
create a new issue or comment on the existing one.
Best regards,
Lars Dunemark
Dear Ceph users,
my cephadm-managed cluster is currently based on 17.2.3. I see that
17.2.5 is available on quay.io, so I'd like to upgrade. I read the
upgrade guide (https://docs.ceph.com/en/quincy/cephadm/upgrade/) and the
"Potential problems" section is reassuringly short. Still I'm worried
about potential problems that might arise during this upgrade: my
cluster is in production and I fear about possible data losses or
extended downtimes due to upgrade problems. So what should I expect from
the upgrade process? Is it usually a smooth experience or are there
frequent/known/probable issues? Are there some good practices to
minimize the occurrency of problems and to eventually prepare for
recovery procedures?
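For reference, these are the commands I intend to run, taken from the
guide (the exact image tag is my assumption):

  ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.5
  ceph orch upgrade status   # check progress
  ceph -W cephadm            # watch cephadm events during the upgrade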
Thank you,
Nicola
I’m trying to upgrade from 16.2.7 to 16.2.11. Following the documentation, I copied and pasted the orchestrator command to begin the upgrade, but I mistakenly pasted it directly from the docs, which initiated an “upgrade” to 16.2.6. I stopped the upgrade per the docs and reissued the command specifying 16.2.11, but now I see no progress in ceph -s. The cluster is healthy, but it feels like the upgrade process is just paused for some reason.
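For reference, the sequence I ran was roughly the following (the exact
flags are from memory):

  ceph orch upgrade start --ceph-version 16.2.6    # pasted from the docs by mistake
  ceph orch upgrade stop
  ceph orch upgrade start --ceph-version 16.2.11
  ceph orch upgrade status                         # shows no progress now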
Thanks!
-jeremy
Hi,
on our large Ceph cluster with 60 servers and 1600 OSDs, we have
observed that the small system NVMes are wearing out rapidly. Our
monitoring shows the mons writing on average about 10 MB/s to store.db.
For small 250 GB system NVMes with a DWPD of ~1, this turns out to be
too much: 0.8 TB/day, or about 1.5 PB in 5 years, too much even for
3 DWPD drives of the same capacity.
Apart from replacing the drives with larger ones, more durable ones, or
preferably both, do you have any suggestions on whether these writes
can be reduced? Incidentally, the mon writes match a 0.15 Hz rate of
64 MB .sst file creation....
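For reference, this is roughly how I look at it (a sketch; the mon data
path assumes a default deployment, and <id> is a placeholder):

  du -sh /var/lib/ceph/mon/*/store.db   # current store.db size
  ceph tell mon.<id> compact            # trigger a manual RocksDB compaction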
Best regards,
Andrej
--
_____________________________________________________________
prof. dr. Andrej Filipcic, E-mail: Andrej.Filipcic(a)ijs.si
Department of Experimental High Energy Physics - F9
Jozef Stefan Institute, Jamova 39, P.o.Box 3000
SI-1001 Ljubljana, Slovenia
Tel.: +386-1-477-3674 Fax: +386-1-425-7074
-------------------------------------------------------------
Hello,
I'm trying to mount RBD using rbd map, but I get this error message:
# rbd map hdb_backup/VCT --id client --keyring
/etc/ceph/ceph.client.VCT.keyring
rbd: couldn't connect to the cluster!
Checking on the Ceph server, the required permissions for the relevant keyring exist:
# ceph-authtool -l /etc/ceph/ceph.client.VCT.keyring
[client.VCT]
key = AQBj3LZjNGn/BhAAG8IqMyH0WLKi4kTlbjiW7g==
# ceph auth get client.VCT
[client.VCT]
key = AQBj3LZjNGn/BhAAG8IqMyH0WLKi4kTlbjiW7g==
caps mon = "allow r"
caps osd = "allow rwx pool hdb_backup object_prefix
rbd_data.b768d4baac048b; allow rwx pool hdb_backup object_prefix
rbd_header.b768d4baac048b; allow rx pool hdb_backup object_prefix
rbd_id.VCT"
exported keyring for client.VCT
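For completeness, the client's /etc/ceph/ceph.conf is minimal, something
like this (fsid and mon addresses anonymized):

[global]
        fsid = <cluster-fsid>
        mon_host = 10.0.0.1,10.0.0.2,10.0.0.3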
Can you please advise how to fix this error?
THX
Hi,
When I was looking at a CompleteMultipartUpload request, I found that the response returns an empty ETag entry in the XML.
If I query the key's metadata after the complete is done, it returns the expected ETag, so it looks like the ETag is calculated correctly.
<?xml version="1.0" encoding="UTF-8"?>
<CompleteMultipartUploadResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Location>testsystem/test/upload-file</Location>
<Bucket>test</Bucket>
<Key>upload-file</Key>
<ETag></ETag>
</CompleteMultipartUploadResult>
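For reference, this is how I check the metadata afterwards (a sketch;
the endpoint is a placeholder):

  aws --endpoint-url https://s3.example.com s3api head-object --bucket test --key upload-file

which does return the expected non-empty ETag.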
I only have a cluster running v17.2.3, so I haven't verified whether this still exists in the latest version.
I found a really old issue with the same symptom that was closed ~9 years ago.
https://tracker.ceph.com/issues/6830
The problem is that my tracker account doesn't seem to work as it should, so I can't log in to comment on it or create a new ticket.
It also looks like there is an inconsistency: https://docs.ceph.com/en/latest/radosgw/s3/objectops/#complete-multipart-up… says that ETag is required in the request, but e.g. the AWS documentation doesn't list it as a possible argument for CompleteMultipartUpload, so it is not possible to send it using common third-party libraries.
Best regards,
Lars Dunemark
Hi Cephers,
We have two Octopus 15.2.17 clusters in a multisite configuration.
Every once in a while we have to perform a bucket reshard (most recently
to 613 shards), and this practically kills our replication for a few
days.
Does anyone know of any priority mechanics within sync to give priority
to other buckets and/or lower this one? Are there any improvements to
this in higher versions of Ceph that we could take advantage of if we
upgrade the cluster (I haven't found any)?
How do we safely increase rgw_data_log_num_shards, given that the
documentation only says: "The values of rgw_data_log_num_shards and
rgw_md_log_max_shards should not be changed after sync has started."?
Does this mean that I should block access to the cluster, wait until
sync has caught up with the source/master, change the value, restart
the RGWs, and unblock access?
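For reference, a sketch of the procedure I have in mind (the config-set
form and the shard count are my assumptions):

  radosgw-admin sync status                  # wait until data/metadata sync are caught up
  ceph config set client.rgw rgw_data_log_num_shards 256
  systemctl restart ceph-radosgw@*.service   # on every gateway host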
Kind regards,
Tom
Hi,
please forgive me if this has been asked before - I could not find any information on this topic.
I am using Ceph with librados via the phprados extension. Since upgrading to the current Ceph versions, where OpenSSL is used in librados, I observe that PHP's libcurl integration and other features relying on OpenSSL randomly fail when opening a TLS connection. I suspect that librados initializes or deinitializes OpenSSL in a way that interferes with the OpenSSL usage of libcurl / PHP's fsockopen.
Has anybody had a similar experience?
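For reference, a minimal way to trigger it (a sketch of what our
application does; the phprados call names and the target host are
assumptions on my part):

  php -r '
    $r = rados_create("admin");                              # phprados: create a cluster handle
    rados_conf_read_file($r, "/etc/ceph/ceph.conf");
    rados_connect($r);                                       # librados (and its OpenSSL setup) is now active
    $fp = fsockopen("ssl://example.com", 443, $en, $es, 5);  # TLS via PHP/OpenSSL; fails randomly for us
    var_dump($fp, $es);
  '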
Thanks,
Patrick
Hello everyone,
When I use this command to see bucket usage:
radosgw-admin bucket stats --bucket=<bucket>
it works only when the owner of the bucket is active.
How can I see the usage even when the owner is suspended?
Here are two examples, one with the owner active and the other with the
owner suspended:
radosgw-admin bucket stats --bucket=bonjour
{
    "bucket": "bonjour",
    "num_shards": 11,
    "tenant": "",
    "zonegroup": "46d4ba06-76ff-44b4-a441-54197517ded2",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "f8c2e3e2-da22-4c80-b330-466db13bbf6a.204114.85",
    "marker": "f8c2e3e2-da22-4c80-b330-466db13bbf6a.204114.85",
    "index_type": "Normal",
    "owner": "identifiant_leviia_GB6mSIAmTt48cY5O",
    "ver": "0#148,1#124,2#134,3#155,4#199,5#123,6#165,7#141,8#133,9#154,10#137",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0",
    "mtime": "0.000000",
    "creation_time": "2023-02-24T16:16:14.196314Z",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#",
    "usage": {
        "rgw.main": {
            "size": 532572233,
            "size_actual": 535318528,
            "size_utilized": 532572233,
            "size_kb": 520091,
            "size_kb_actual": 522772,
            "size_kb_utilized": 520091,
            "num_objects": 1486
        },
        "rgw.multimeta": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 0
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}
radosgw-admin bucket stats --bucket=locking4
{
    "bucket": "locking4",
    "num_shards": 11,
    "tenant": "",
    "zonegroup": "46d4ba06-76ff-44b4-a441-54197517ded2",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "f8c2e3e2-da22-4c80-b330-466db13bbf6a.204114.80",
    "marker": "f8c2e3e2-da22-4c80-b330-466db13bbf6a.204114.80",
    "index_type": "Normal",
    "owner": "identifiant_leviia_xf4q139fq1",
    "ver": "0#1,1#1,2#1,3#1,4#1,5#1,6#1,7#1,8#1,9#1,10#1",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0",
    "mtime": "0.000000",
    "creation_time": "2023-02-23T12:49:24.089538Z",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#",
    "usage": {},
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}
As you can see, for the bucket whose owner is suspended (locking4), the
"usage" section is empty: the "rgw.main" and "rgw.multimeta" parts shown
above for the first bucket are missing.
"usage": {
"rgw.main": {
"size": 532572233,
"size_actual": 535318528,
"size_utilized": 532572233,
"size_kb": 520091,
"size_kb_actual": 522772,
"size_kb_utilized": 520091,
"num_objects": 1486
},
"rgw.multimeta": {
"size": 0,
"size_actual": 0,
"size_utilized": 0,
"size_kb": 0,
"size_kb_actual": 0,
"size_kb_utilized": 0,
"num_objects": 0
}
},
How can I get this information even when the owner is suspended? Is it
possible via the API?
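For reference, this is the kind of API call I mean (a sketch based on
the RGW Admin Ops API; the endpoint is a placeholder, and the request
must be signed with the credentials of a user holding "buckets=read"
admin caps):

  GET http://rgw.example.com/admin/bucket?bucket=locking4&stats=True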
All the best
Heya,
ever since we had that one OSD causing the entire cluster to hang (it
has been removed since), we keep having hard-to-debug issues.
For example, sometimes on start QEMU just hangs forever. When I kill it
manually, the next start works fine. When I map the same volume using
krbd on another host, it also appears fine.
How would you debug an issue like this? Since it happens before the VM
even starts, I'm assuming it must be the mon that hangs?
'rbd lock ls' is empty, so that's not it.
Is this possibly caused by running inconsistent mon versions? How would
I know?
ceph tell mon.\* version
mon.mon-yca4ceph: {
"version": "16.2.11",
"release": "pacific",
"release_type": "stable"
}
mon.mon-yca5ceph: {
"version": "16.2.6",
"release": "pacific",
"release_type": "stable"
}
mon.mon-yca6ceph: {
"version": "16.2.11",
"release": "pacific",
"release_type": "stable"
}
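For reference, the next checks I have in mind (a sketch; the pool/image
name is a placeholder):

  ceph versions               # versions of all daemons, not just the mons
  rbd status <pool>/<image>   # lists active watchers on the image

plus enabling client-side logging (e.g. debug_rbd = 20 and debug_ms = 1
in the client section of ceph.conf) before the next hang.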
--
+4916093821054