I have a doubt.
What I seem to understand:
When a client makes a request to rgw, the request is spawned across multiple
threads, and each of those threads has its own req_state instance, so if I
define any data structure inside req_state there can't be any race condition
on that data structure.
Am I right? If not, does this mean that req_state is global to multiple
threads and there is a chance of a race condition?
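For what it's worth, this is the pattern I am picturing; the names below
(req_state_like, handle_request, total_requests) are made up for illustration
and are not the actual RGW code:

#include <atomic>
#include <string>

// Hypothetical sketch: one state object per request vs. data shared by all
// worker threads.
struct req_state_like {
  std::string bucket_name;  // per-request member: each request has its own copy,
  int op_count = 0;         // so plain members need no locking
};

// Shared across every worker thread: this DOES need synchronization.
static std::atomic<int> total_requests{0};

void handle_request() {
  req_state_like s;   // fresh instance for this request only
  ++s.op_count;       // safe: no other thread can see `s`
  ++total_requests;   // must be atomic (or locked): visible to all threads
}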
Regarding https://tracker.ceph.com/issues/45142 reported against nautilus,
> 2020-04-19 09:05:05.421 7fe2eeea9700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods  but i only support [2,1]
> failed to fetch mon config (--no-mon-config to skip
I don't understand why this negotiation would fail. Are there any
known bugs here? Or can anyone explain this?
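My rough reading of that line, just as a guess: the server sent back an empty
allowed_methods list while the client only supports [2,1] (cephx and none), so
there is no common method and the connection is rejected. A toy sketch of that
kind of check, with made-up names rather than the real monclient code:

#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative only: pick the first client-supported method the server allows.
bool pick_auth_method(const std::vector<uint32_t>& server_allowed,
                      const std::vector<uint32_t>& client_supported,
                      uint32_t* chosen) {
  for (uint32_t m : client_supported) {
    if (std::find(server_allowed.begin(), server_allowed.end(), m) !=
        server_allowed.end()) {
      *chosen = m;
      return true;
    }
  }
  // In the quoted log server_allowed appears to be empty, so we end up here
  // and the client reports handle_auth_bad_method.
  return false;
}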
I was reading https://ceph.io/community/automatic-cephfs-recovery-after-blacklisting/
about the new recover_session=clean feature.
The end of that blog post says that this setting involves a trade-off:
"availability is more important than correctness"
Are there cases where the old behavior is really safer than simply recovering
the session? It seems like this feature would not make things worse for
applications. Can we make recover_session=clean the default?
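For context, the feature is a kernel client mount option, so trying it out
should just be something like the following (monitor address and client name
are placeholders, plus whatever auth options you normally use):

mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs -o name=admin,recover_session=clean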
I'm trying to use AVX2 to accelerate base64 encode/decode.
"make check" could pass in my local host development.
There's unknown failure in Ceph community's verification environment.
I can't find why the failure is triggered and how to reproduce it.
Could anyone help check the failure log to give some clue about how
to reproduce it?
1) PR: https://github.com/ceph/ceph/pull/35080
2) Error log: https://jenkins.ceph.com/job/ceph-pull-requests/51749/
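One guess, since I have not dug into the Jenkins log myself: a local-pass /
CI-fail pattern with SIMD code is often a host that does not actually support
AVX2, so the fast path has to be guarded by a runtime check. A tiny standalone
probe (not code from the PR) showing the kind of check I mean:

#include <cstdio>

int main() {
#if defined(__GNUC__) || defined(__clang__)
  // __builtin_cpu_supports is available in GCC and Clang.
  if (__builtin_cpu_supports("avx2")) {
    std::printf("AVX2 available: the accelerated path can be used\n");
  } else {
    std::printf("no AVX2: must fall back to the scalar path\n");
  }
#else
  std::printf("no runtime CPU detection with this compiler\n");
#endif
  return 0;
}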
Hi all. For about 2 days my Ceph cluster has been doing 1 million IO/s of
reads on default.rgw.buckets.index. When this happens my PUT requests fall to
200 req/s, whereas before I had 500 req/s and there was no high IO on that
pool. My rgw nodes and OSDs also go up to 100% CPU usage.
Do you have any idea what's going on here and why this pool gets 1 million
IO/s? I have also upgraded to 14.2.8 but the problem still persists.
Thanks for your help :)