Hi all:
ceph version: 15.2.7 (88e41c6c49beb18add4fdb6b4326ca466d931db8)
I have a strange question. I just created a multisite setup for a Ceph
cluster, but I notice that the old data in the source cluster is not
synced; only new data gets synced to the second-zone cluster.
Is there anything I need to do to enable a full sync for the bucket, or
is this a bug?
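In case it helps, this is roughly what I can run on the second zone to
gather more detail (the bucket name below is just a placeholder):

radosgw-admin sync status
radosgw-admin bucket sync status --bucket=my-bucket

I have also seen "radosgw-admin data sync init" mentioned as a way to
force a full resync, but I am not sure whether that is the intended
procedure here.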
Thanks
Hi,
Is there any way to log the x-amz-request-id along with the request in
the rgw logs? We're using beast and don't see an option in the
configuration documentation to add headers to the request lines. We
use centralized logging and would like to be able to search all layers
of the request path (edge, LBs, Ceph, etc.) by x-amz-request-id.
Right now, all we see is this:
debug 2021-04-01T15:55:31.105+0000 7f54e599b700 1 beast:
0x7f5604c806b0: x.x.x.x - - [2021-04-01T15:55:31.105455+0000] "PUT
/path/object HTTP/1.1" 200 556 - "aws-sdk-go/1.36.15 (go1.15.3; linux;
amd64)" -
We've also tried this:
ceph config set global rgw_enable_ops_log true
ceph config set global rgw_ops_log_socket_path /tmp/testlog
After doing this, inside the rgw container we can run socat -
UNIX-CONNECT:/tmp/testlog and see the log entries we want being
recorded, but there has to be a better way to do this, where the logs
are emitted like the beast request logs above, so that we can handle
them with journald. If there's an alternative that would accomplish
the same thing, we're very open to suggestions.
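For now, the best stopgap we can come up with is to forward the socket
into the journal ourselves, roughly like this (the tag name is just
something we made up, and it assumes the ops log socket stays at the
path configured above):

socat -u UNIX-CONNECT:/tmp/testlog STDOUT | systemd-cat -t rgw-ops-log

and then read it back with journalctl -t rgw-ops-log. But a native
option on the beast request line would obviously be much nicer.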
Thank you,
David
Hello Anthony,
It was introduced in Octopus 15.2.10.
See: https://docs.ceph.com/en/latest/releases/octopus/
Do you know how you would set it in Pacific? :)
I guess there shouldn't be much difference...
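Something like this is what I have in mind, i.e. passing all key=value
pairs as a single comma-separated, quoted string (the RocksDB option
names below are only examples, not a recommendation):

ceph config set osd bluestore_rocksdb_options_annex "compaction_readahead_size=2097152,max_background_compactions=4"
ceph config get osd bluestore_rocksdb_options_annex

I assume the OSDs need a restart before RocksDB picks the new options
up, but I would like that confirmed too.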
Thank you
Mehmet
On 28 April 2021 19:21:19 CEST, Anthony D'Atri <anthony.datri(a)gmail.com> wrote:
>I think that’s new with Pacific.
>
>> On Apr 28, 2021, at 1:26 AM, ceph(a)elchaka.de wrote:
>>
>> Hello,
>>
>> I have an Octopus cluster and want to change some values - but I
>> cannot find any documentation on how to set multiple values with
>>
>> bluestore_rocksdb_options_annex
>>
>> Could someone give me some examples?
>> I would like to do this via ceph config set ...
>>
>> Thanks in advance
>> Mehmet
>> _______________________________________________
>> ceph-users mailing list -- ceph-users(a)ceph.io
>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
Hello All,

I was running 15.2.8 via cephadm on Docker on Ubuntu 20.04. I just attempted to upgrade to 16.2.1 via the automated method. It successfully upgraded the mon/mgr/mds and some OSDs, but it then failed on an OSD and hasn't been able to get past it, even after stopping and restarting the upgrade.

It reported the following: "message": "Error: UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.35 on host sn-s01 failed."

If I run 'ceph health detail', I get lots of the following error throughout the detail report: "ValueError: not enough values to unpack (expected 2, got 1)"

Upon googling, it looks like I am hitting something along the lines of https://158.69.68.89/issues/48924 & https://tracker.ceph.com/issues/49522

What do I need to do to either get around this bug or manually upgrade the remaining OSDs to 16.2.1? Currently my cluster is working, but the last OSD it failed to upgrade is offline (I guess because it no longer has an image attached, as it failed to pull one), and I have a cluster with a mix of 15.2.8 and 16.2.1 OSDs.

Thanks
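P.S. In case it helps, this is roughly what I have been looking at so far (the image path below is my assumption of the right 16.2.1 image, please correct me if it is wrong):

ceph orch upgrade status
ceph orch upgrade stop
ceph orch upgrade start --image docker.io/ceph/ceph:v16.2.1

and, for just the one stuck daemon:

ceph orch daemon redeploy osd.35 docker.io/ceph/ceph:v16.2.1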
Hi all,
this is a follow-up on "reboot breaks OSDs converted from ceph-disk to ceph-volume simple".
I converted a number of ceph-disk OSDs to ceph-volume using "simple scan" and "simple activate". Somewhere along the way the OSD meta-data gets mangled, and the prominent symptom is that the "block" symlink changes from a by-partuuid target to an unstable device-name target, for example:
before conversion:
block -> /dev/disk/by-partuuid/9123be91-7620-495a-a9b7-cc85b1de24b7
after conversion:
block -> /dev/sdj2
This is a huge problem, as the "after conversion" device names are unstable. I now have a cluster whose servers I cannot reboot because of this. OSDs that get randomly re-assigned devices refuse to start with:
2021-03-02 15:56:21.709 7fb7c2549b80 -1 OSD id 241 != my id 248
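The only workaround I can think of is to re-point the symlink by hand on every affected OSD, something like the following (the OSD id and part-uuid are placeholders here, and I am not sure whether the json under /etc/ceph/osd/ needs to be edited as well):

ln -sfn /dev/disk/by-partuuid/<block-partuuid> /var/lib/ceph/osd/ceph-<id>/block
chown -h ceph:ceph /var/lib/ceph/osd/ceph-<id>/block

That obviously does not scale, and I suspect it would be undone by the next "simple activate", so it is not a real fix.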
Please help me with getting out of this mess.
Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
Hello,
Is it only me that's getting an "Internal error" when trying to create issues in the bug tracker, for the past day or two?
https://tracker.ceph.com/issues/new
Best regards
I'm trying to understand what and where radosgw listens.
There is a lot of contradictory or redundant information about this.
First, the contradictory information about the socket.
At https://docs.ceph.com/en/pacific/radosgw/config-ref/ it says rgw_socket_path, but at https://docs.ceph.com/en/pacific/man/8/radosgw/ it says 'rgw socket path'.
That problem is quite common in the Ceph documentation. Are both forms accepted?
Next, about the naming and the bind IP. Where are these defined, and how?
You have:
rgw_frontends = "beast ssl_endpoint=0.0.0.0:443 port=443 ..."
rgw_host =
rgw_port =
rgw_dns_name =
That's a lot of redundancy, or contradictory information. What is the purpose of each one? What is the difference between
rgw_frontends = ".. port = ..."
and
rgw_port =
?
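For what it's worth, my current working assumption (which may well be wrong) is that with the beast frontend the listening address and port come entirely from rgw_frontends, and that rgw_dns_name only matters for virtual-hosted-style bucket names, e.g.:

rgw_frontends = "beast endpoint=0.0.0.0:8080 ssl_endpoint=0.0.0.0:443 ssl_certificate=/etc/ceph/rgw.pem"
rgw_dns_name = s3.example.com

(the path and names above are placeholders). But I would like that confirmed.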
Or rgw_host and rgw_dns_name: what is the difference?
The documentation provides no help at all:
rgw_dns_name
Description: The DNS name of the served domain. See also the hostnames setting within regions.
The description says nothing new; it just repeats the field name.
Is one of them used by the manager for communication? I already had this problem with the entry in the certificate used by the frontend: it used an IP address coming from nowhere.
If FastCGI is used, how does the manager find the endpoint?
I'm trying to set up a new Ceph cluster, and I've hit a bit of a wall.
I started off with CentOS 7 and cephadm. That worked fine up to a point:
I had to upgrade podman, but it mostly worked with Octopus.
Since this is a fresh cluster, and hence there is no data at risk, I decided
to jump straight to Pacific when it came out and upgrade. Which is where my
trouble began, mostly because Pacific needs a newer version of lvm2 than
what ships with CentOS 7.
I can't upgrade to CentOS 8, as my boot drives are not supported there
due to the way Red Hat disabled lots of disk drivers. I think I'm looking at
Ubuntu or Debian.
Given that cephadm has a very limited set of dependencies, it would be good
to have a support matrix. It would also be good to have a check in cephadm
on upgrade that says "no, I won't upgrade" if the version of lvm2 is too low
on any host, and lets the admin fix the issue and try again; a rough sketch
of what I mean follows below.
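Today the closest I can get to such a check by hand is something like this
(the hostnames are placeholders, and I don't know the exact minimum lvm2
version Pacific requires):

for h in host1 host2 host3; do
    ssh "$h" rpm -q lvm2
done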
I was thinking of upgrading to CentOS 8 for this project anyway, until I
realised that CentOS 8 can't support the hardware I've inherited. But
currently I've got a broken cluster, unless I can work out some way to
upgrade lvm2 on CentOS 7.
Peter.