Regarding https://tracker.ceph.com/issues/45142 reported against nautilus,
> 2020-04-19 09:05:05.421 7fe2eeea9700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [1] but i only support [2,1]
> failed to fetch mon config (--no-mon-config to skip
I don't understand why this negotiation would fail. Are there any
known bugs here? Or can anyone explain this?
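If I'm reading the auth method IDs right (1 = AUTH_NONE, 2 = AUTH_CEPHX), the
server's allowed list [1] and the client's supported list [2,1] do overlap, so
I'd expect the client to fall back to "none" rather than bail out. A simplified
sketch of the selection I'd expect (not the actual MonClient code, just the
behavior implied by the log above):

#include <algorithm>
#include <cstdint>
#include <vector>

// Pick the first method the client supports that the server also allows.
// With client_supported = {2, 1} and server_allowed = {1}, this returns 1
// (AUTH_NONE), so I don't see why the hunt should fail.
int pick_auth_method(const std::vector<uint32_t>& client_supported,
                     const std::vector<uint32_t>& server_allowed) {
  for (uint32_t m : client_supported) {
    if (std::find(server_allowed.begin(), server_allowed.end(), m) !=
        server_allowed.end()) {
      return static_cast<int>(m);
    }
  }
  return -1;  // no common method -> handle_auth_bad_method territory
}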
Thanks,
Casey
Hi folks,
I was reading https://ceph.io/community/automatic-cephfs-recovery-after-blacklisting/
about the new recover_session=clean feature.
The end of that blog post says that this setting involves a trade-off:
"availability is more important than correctness"
Are there cases where the old behavior is really safer than simply
returning errors?
It seems like this feature would not make things worse for
applications. Can we make recover_session=clean the default?
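For context, I mean the kernel client mount option described in the post,
which today has to be opted into explicitly; something like the following,
where the mon host and credentials are just placeholders:

  mount -t ceph <mon-host>:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,recover_session=clean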
We're happy to announce the second bugfix release in the Ceph Octopus stable
release series, and we recommend that all Octopus users upgrade. This
release has a range of fixes across all components and a security fix.
Notable Changes
---------------
* CVE-2020-10736: Fixed an authorization bypass in mons & mgrs (Olle
SegerDahl, Josh Durgin)
For the complete changelog please refer to the full release blog at
https://ceph.io/releases/v15-2-2-octopus-released/
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-15.2.2.tar.gz
* For packages, see
http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 0c857e985a29d90501a285f242ea9c008df49eb8
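For cephadm-managed clusters, upgrading to this release can be done roughly as
follows (a sketch; package-based installs should upgrade via their distro
packages and restart daemons as usual):

  ceph orch upgrade start --ceph-version 15.2.2
  ceph orch upgrade status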
--
Abhishek Lekshmanan
SUSE Software Solutions Germany GmbH
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)
Hi all,
I'm trying to use AVX2 to accelerate base64 encode/decode.
"make check" could pass in my local host development.
There's unknown failure in Ceph community's verification environment.
I can't find why the failure is triggered and how to reproduce it.
Could anyone help check the failure log to give some clue about how
to reproduce it?
1) PR: https://github.com/ceph/ceph/pull/35080
2) Error log: https://jenkins.ceph.com/job/ceph-pull-requests/51749/
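One guess, which I can't confirm without knowing the CI hardware: if the
Jenkins builders lack AVX2 and the AVX2 path is taken unconditionally, the test
could fail there while passing locally. A tiny self-contained check of the kind
of runtime guard I mean (illustrative only, not code from the PR):

#include <cstdio>

int main() {
  // GCC/Clang builtin: checks the CPU we are running on, not the build flags.
  if (__builtin_cpu_supports("avx2")) {
    std::printf("AVX2 available at runtime\n");
  } else {
    std::printf("No AVX2 at runtime: an unconditional AVX2 path would break here\n");
  }
  return 0;
}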
B.R.
Changcheng
Hi all. For about 2 days my Ceph cluster has been doing 1 million read IO/s on
default.rgw.buckets.index. When this happens my PUT requests only reach 200
req/s, whereas I had 500 req/s before, with no high IO/s on that pool.
When this happens my rgw nodes and OSDs go up to 100% CPU usage.
Do you have any idea what's going on here that makes this pool get 1 million
IO/s?
I have also upgraded to 14.2.8, but the problem persists.
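If it helps, I can collect more data; for example (assuming these are the right
things to check, e.g. whether dynamic bucket index resharding is hitting the
index pool):

  ceph osd pool stats default.rgw.buckets.index
  ceph health detail
  radosgw-admin reshard list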
Thanks for your help :)
Hello everyone,
I'm implementing a Jaeger tracing system in RGW; some images are below.
[image: RGW_DELETE_OBJ.png] (deleting an object)
[image: RGW_LIST_BUCKETS.png] (listing the buckets in a cluster)
I have two tags in the Jaeger UI for filtering spans: one is the gateway (swift or
s3) and the other is the RGW operation type; for example, putting an object has
the name "RGW_OP_PUT_OBJ".
Is this much detail sufficient, or should I go deeper into each
function and try to trace those as well?
Also, what improvements or changes can I make to make this
more developer friendly?
Thank you