---------- Forwarded message ---------
From: Abhinav Singh <singhabhinav0796(a)gmail.com>
Date: Tue, Jun 2, 2020 at 2:20 PM
Subject: Re: RGW JaegerTracing Doubt
To: Yuval Lifshitz <ylifshit(a)redhat.com>
Here are two commits for tracing object deletion with function
overloading:
https://github.com/ceph/ceph/commit/1ec9c76b4d3ff7f5fd5ac83150ad1d5e83655276
https://github.com/ceph/ceph/commit/3a331ffb60852f472c57697011b20879662130e0
I can change the code in rgw_op.cc and rgw_rest.cc because they have access
to req_state, but the functions of RGWRados would need to be rewritten as a
whole if we apply overloading, which I guess will be very messy.
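[Editor's sketch] The overloading approach in those commits can be roughed out as below. The type and function names here are illustrative stand-ins, not the real RGW declarations:

```cpp
#include <string>

// Stand-in for RGW's req_state; in Ceph it carries the full request context.
struct req_state {
  std::string trace_id;
};

// Existing entry point, unchanged for callers that have no request context.
int delete_obj(const std::string& obj_name) {
  // ... perform the actual deletion ...
  return 0;
}

// Overload that also receives the request state, so tracing spans can be
// attached without any global lookup. It delegates to the original function.
int delete_obj(const std::string& obj_name, req_state* s) {
  if (s && !s->trace_id.empty()) {
    // start_span(s->trace_id, "delete_obj");  // hypothetical tracing hook
  }
  return delete_obj(obj_name);  // reuse the untraced implementation
}
```

Existing call sites keep compiling against the old signature; only the paths that have a req_state opt in to the new overload.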
On Tue, 2 Jun 2020, 14:13 Abhinav Singh, <singhabhinav0796(a)gmail.com> wrote:
> I will share my commit once I build it successfully; it will take some
> time, though.
>
> On Tue, 2 Jun 2020, 14:08 Yuval Lifshitz, <ylifshit(a)redhat.com> wrote:
>
>> I think that adding anything "global" to hold info that belongs in a
>> specific call stack is not a good idea.
>> Even if your map is thread_local and would not require any locks (and
>> assuming all processing is done in one thread), it's not clear how you
>> would look up the right request from different inner function calls.
>>
>> Function overloading seems like the correct solution.
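[Editor's sketch] For context, the thread_local variant being argued against here looks something like the following (illustrative names, not Ceph code). It avoids locks, but it is only correct while one thread owns exactly one request from start to finish, which breaks down with any async or re-entrant processing:

```cpp
#include <string>

struct req_state { std::string id; };  // stand-in for RGW's req_state

// One slot per thread, so no lock is needed...
thread_local req_state* current_req = nullptr;

// ...but a helper deep in the call stack can only assume the slot holds
// *its* request if the whole request stays on this one thread.
std::string helper_reads_request() {
  return current_req ? current_req->id : "";
}

std::string handle_request(req_state* s) {
  current_req = s;                        // bind the request to this thread
  std::string seen = helper_reads_request();
  current_req = nullptr;                  // unbind on the way out
  return seen;
}
```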
>>
>> On Tue, Jun 2, 2020 at 11:22 AM Abhinav Singh <singhabhinav0796(a)gmail.com>
>> wrote:
>>
>>> Yes you are right, I realized the same thing just moments before.
>>>
>>> Could you suggest any tips on how to manage this without function
>>> overloading?
>>>
>>> On Tue, 2 Jun 2020, 13:24 Yuval Lifshitz, <ylifshit(a)redhat.com> wrote:
>>>
>>>> The problem with this solution is not the cost of searching the hash
>>>> map; it is making the map thread-safe.
>>>> Adding a lock would have a very bad impact on performance.
>>>>
>>>> On Tue, Jun 2, 2020 at 5:53 AM Abhinav Singh <
>>>> singhabhinav0796(a)gmail.com> wrote:
>>>>
>>>>> One way of doing this is to store req_state objects in an
>>>>> unordered_map<id, req_state>.
>>>>> But searching through it might cause some latency, so to counter this
>>>>> I will put a size limit of a thousand, so that when the map gets big
>>>>> it erases all its elements.
>>>>> This will ensure that the cost of the search operation is
>>>>> greatly reduced.
>>>>>
>>>>> Will this do?
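[Editor's sketch] This proposal, in illustrative code (stand-in names, not Ceph's). Note that the mutex below is exactly the lock whose performance cost is discussed elsewhere in this thread, and that unordered_map lookup is already average O(1) regardless of size, so the cap mainly bounds memory:

```cpp
#include <cstdint>
#include <mutex>
#include <string>
#include <unordered_map>

struct req_state { std::string trace_id; };  // stand-in for RGW's req_state

// Map from request id to its state, cleared once it passes a size cap.
class ReqRegistry {
  std::unordered_map<uint64_t, req_state> reqs_;
  std::mutex mtx_;  // needed once requests run in parallel
  static constexpr std::size_t kLimit = 1000;

 public:
  void put(uint64_t id, req_state s) {
    std::lock_guard<std::mutex> g(mtx_);
    if (reqs_.size() >= kLimit)
      reqs_.clear();  // the proposed "erase everything" size cap
    reqs_.emplace(id, std::move(s));
  }

  bool get(uint64_t id, req_state& out) {
    std::lock_guard<std::mutex> g(mtx_);
    auto it = reqs_.find(id);
    if (it == reqs_.end()) return false;
    out = it->second;
    return true;
  }
};
```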
>>>>>
>>>>> On Mon, 1 Jun 2020, 21:34 Abhinav Singh, <singhabhinav0796(a)gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hello everyone,
>>>>>>
>>>>>> My `req_state*` holds the spans used to trace a particular request,
>>>>>> but as we know req_state is not available everywhere. I tried to add
>>>>>> a req_state member to the CephContext class, because every portion of
>>>>>> RGW has access to it and so would also have access to req_state, but
>>>>>> this won't work because it is initialized only once, and when requests
>>>>>> run in parallel a race condition might occur and the traces will be
>>>>>> inaccurate.
>>>>>> The second method I tried was to include req_state in RGWRadosStore
>>>>>> and RGWUserCtl, because these are accessible to every function I want
>>>>>> to trace, but these also carry a race-condition risk.
>>>>>>
>>>>>> Can anyone give me a tip on how to make req_state available in all
>>>>>> functions (if not all, then the majority), particularly in classes
>>>>>> like RGWRadosStore and RGWUserCtl?
>>>>>>
>>>>>> Thank You.
>>>>>>
>>>>> _______________________________________________
>>>>> Dev mailing list -- dev(a)ceph.io
>>>>> To unsubscribe send an email to dev-leave(a)ceph.io
>>>>>
>>>>
Hi,
I am a research student from India working on QoS for distributed storage systems. I was studying the implementation of mClock in Ceph for my research, and I am stuck on some doubts. It would be really helpful if someone could clarify them.
If possible please help me understand these. Also please correct me if anything is wrong here.
* I was checking https://www.slideshare.net/ssusercee823/implementing-distributed-mclock-in-…, where it was mentioned that dmClock is implemented in Ceph. But when I checked the master branch of Ceph, rho and delta are not sent from the client, and the add_request() function is called with null_req_params. Is dmClock implemented in some other branch of Ceph?
* When an MOSDRepOp is received by an OSD node, PullPriorityQueue::add_request() is called with a client id that corresponds to the primary OSD which sent the MOSDRepOp. Why is the actual client that sent the MOSDOp not used as the client id? Is there a particular reason for this?
* When an MOSDOp reaches the primary OSD node, it goes into the mclock queue. When it is dequeued and MOSDRepOps are sent to the replica nodes, those requests again have to wait in the replica node's mclock queue (with a client id corresponding to the primary OSD). Does this cause inefficiency, since a request waits in queues twice?
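[Editor's sketch] As background for the rho/delta question: my reading of the mClock/dmClock paper (Gulati et al., OSDI 2010) is that the only difference between the two schedulers is the ρ/δ factors in the tag update. The toy sketch below illustrates that reading; it is not the ceph/dmclock code, and all names are invented. With rho = delta = 1 it reduces to plain single-server mClock, which is consistent with null_req_params still yielding working mClock behavior:

```cpp
#include <algorithm>

struct ClientQoS { double r, w, l; };        // reservation, weight, limit
struct Tags { double R = 0, P = 0, L = 0; };  // per-request scheduling tags

// dmClock tag update as described in the paper: rho scales the reservation
// tag, delta scales the proportional and limit tags; an idle client's tags
// are pulled forward to the current time.
Tags next_tags(const Tags& prev, const ClientQoS& q,
               double now, double rho, double delta) {
  Tags t;
  t.R = std::max(prev.R + rho / q.r, now);    // reservation tag uses rho
  t.P = std::max(prev.P + delta / q.w, now);  // proportional tag uses delta
  t.L = std::max(prev.L + delta / q.l, now);  // limit tag uses delta
  return t;
}
```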
Thanks,
Prathyush PV