Today I rebased my branch after 3 weeks and saw that req_info is newly
created. Is this req_info unique to every request, like req_state? If so,
I will have to change my code accordingly.
hello everyone,
I have a branch that I'm rebasing onto master, but during the rebase I
seem to lose data. I manually resolved all the conflicts, but afterwards
the build fails. The main problems I'm facing are in `rgw_sal.h` and
`rgw_common.h`, where the error is "req_state doesn't have a
bucket_info", and in some places I saw bucket_info being replaced by
s->bucket->get_info().
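If I read the refactor correctly, the change on the caller side looks
something like this (a minimal sketch; the old s->bucket_info access is my
reconstruction from the compile error, only s->bucket->get_info() is taken
from the tree):

// before the rebase (reconstructed from the compile error):
// RGWBucketInfo& info = s->bucket_info;
// after the rebase, the bucket info lives behind the sal::Bucket object:
RGWBucketInfo& info = s->bucket->get_info();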
This is what I don't understand: some of my changes were merged
successfully, but others seem to be ignored. Why is this happening?
My branch - https://github.com/suab321321/ceph/tree/wip-jaegerTracer
Hi all,
Recently I have done some tests and read some code about the spdk bdev
backed by rbd, and I was really confused about how it works with polling.
If I understand correctly, in spdk the thread that gets a completed IO
has to be the thread that sent that IO; a thread cannot get the completed
IOs of another thread. But in ceph rbd, one image has only one completion
queue, and when *rbd_poll_io_events* is called, it pops IOs until the
queue is empty or the max count is exceeded.
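For reference, this is the drain loop I have in mind (a minimal sketch
using the librbd C API; the image handle and the batch size of 32 are
illustrative):

#include <rbd/librbd.h>

void drain_completions(rbd_image_t image) {
  rbd_completion_t comps[32];
  // pops up to 32 completed IOs from the image's single event queue,
  // regardless of which thread originally issued them
  int n = rbd_poll_io_events(image, comps, 32);
  for (int i = 0; i < n; ++i) {
    rbd_aio_get_return_value(comps[i]); // per-IO result
    rbd_aio_release(comps[i]);
  }
}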
I think there is no way to distinguish which thread an IO belongs to.
Why does this work with spdk?
Thanks.
hello everyone,
How can I test bulk operations in RGW, like bulk upload and bulk delete?
Boto3 does not have any bulk operation, and swift has a kind of segmented
upload, but I have confirmed that is not a bulk upload. Is there any way
to test bulk upload?
Thank You
Hello All,
I keep seeing issues while testing Ceph multisite in our test
environment, so I am looking for any updated design documents that
could help me understand each component.
Thanks in advance.
-AmitG
Hi,
I have a question regarding a struct data type in the Ceph CRUSH source
code. The header file is here:
https://github.com/ceph/ceph/blob/master/src/crush/crush.h
As you can see below, struct crush_choose_arg has a field named
_weight_set_. From what I have understood going through different CRUSH
maps, this _weight_set_ array should be a 2D array. Am I right?
struct crush_choose_arg {
  __s32 *ids;                          /*!< values to use instead of items */
  __u32 ids_size;                      /*!< size of the __ids__ array */
  struct crush_weight_set *weight_set; /*!< weight replacements for a given position */
  __u32 weight_set_positions;
};
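For context, the element type from the same header looks like this
(reproduced from memory, so please check crush.h for the authoritative
definition):

struct crush_weight_set {
  __u32 *weights; /*!< 16.16 fixed point weights in the same order as items */
  __u32 size;     /*!< size of the __weights__ array */
};

So weight_set points to weight_set_positions entries, each of which holds
its own weights array, which is why I read it as a 2D array indexed by
(position, item).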
BR
Bobby
The s3select library uses a cool custom arena allocator for a lot of
its small allocations, so they can be mostly satisfied without locking
and freed all at once at the end of a query. This allocator is only
currently used for the AST nodes and functions, though, and doesn't
cover allocations from std library types like string/map/vector.
I was looking more closely at the std::pmr library from C++17 as a way
to use this custom allocator with the std containers. I found that it
also provides a
https://en.cppreference.com/w/cpp/memory/monotonic_buffer_resource
class with similar properties to this custom arena allocator:
"The class std::pmr::monotonic_buffer_resource is a special-purpose
memory resource class that releases the allocated memory only when the
resource is destroyed. It is intended for very fast memory allocations
in situations where memory is used to build up a few objects and then
is released all at once.
monotonic_buffer_resource can be constructed with an initial buffer.
If there is no initial buffer, or if the buffer is exhausted,
additional buffers are obtained from an upstream memory resource
supplied at construction. The size of buffers obtained follows a
geometric progression.
monotonic_buffer_resource is not thread-safe."
I think the ideal s3select interface would take a
std::pmr::memory_resource* as input, so the application could pass in
whatever allocation strategy it wanted. Radosgw could choose to use
std::pmr::monotonic_buffer_resource directly, a derived
memory_resource that wraps it, a custom memory_resource that uses the
existing logic in class s3select_allocator, or even nullptr to use the
default new/delete resource.
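A hypothetical shape for such an interface (s3select_query and its members
are illustrative names, not the real s3select API):

#include <memory_resource>

class s3select_query {
  std::pmr::memory_resource* mr;
 public:
  // a null resource maps to the default new/delete resource
  explicit s3select_query(std::pmr::memory_resource* r = nullptr)
      : mr(r ? r : std::pmr::get_default_resource()) {}

  std::pmr::memory_resource* resource() const noexcept { return mr; }
};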
Integration of std::pmr in s3select could come in two parts: one for
use with std library types, and one for the general allocation of
things like our AST nodes and functions.
For the std containers, we can replace types like std::vector with
their aliases in namespace std::pmr - i.e. std::pmr::vector<T> is just
a std::vector<T, std::pmr::polymorphic_allocator<T>>. When
constructing these container types, we'd just have to pass a pointer
to its memory_resource as the allocator argument. That would also
require passing the pointer to the constructors of any types that have
std containers as members.
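A minimal sketch of that, assuming a per-query arena (the buffer size and
names are illustrative):

#include <memory_resource>
#include <string>
#include <vector>

int main() {
  char buf[4096];
  std::pmr::monotonic_buffer_resource arena{buf, sizeof(buf)};

  // std::pmr::vector/std::pmr::string take the resource pointer as their
  // allocator argument and propagate it to nested elements
  std::pmr::vector<std::pmr::string> tokens{&arena};
  tokens.emplace_back("select");
  tokens.emplace_back("*");
  // everything is released at once when arena is destroyed
}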
For general allocations, we can use std::unique_ptr<> with a custom
Deleter that frees its memory back to the std::pmr::memory_resource it
came from, and a helper function like pmr_allocate_unique<T>() that
takes a std::pmr::memory_resource pointer, allocates/constructs a T
using the std::pmr::polymorphic_allocator<T>, and returns it as a
unique_ptr. Use of unique_ptr means that the object lifetimes are
managed automatically, instead of having to track all allocations in a
list with a cleanup step that calls their destructors.
Here's what the unique_ptr stuff looks like:
#include <memory>
#include <memory_resource>

// a unique_ptr Deleter that frees memory from a polymorphic memory resource
class pmr_deleter {
  std::pmr::memory_resource* r;
 public:
  pmr_deleter(std::pmr::memory_resource* r = nullptr) : r(r) {}

  template <typename T>
  void operator()(T* ptr) const {
    // a null resource falls back to the default new/delete resource
    std::pmr::polymorphic_allocator<T> alloc{
        r ? r : std::pmr::get_default_resource()};
    alloc.destroy(ptr);
    alloc.deallocate(ptr, 1);
  }
};

// a unique_ptr alias for pmr-allocated pointers
template <typename T>
using pmr_unique_ptr = std::unique_ptr<T, pmr_deleter>;

template <typename T, typename ...Args>
pmr_unique_ptr<T> pmr_allocate_unique(std::pmr::memory_resource* r,
                                      Args&& ...args)
{
  std::pmr::polymorphic_allocator<T> alloc{
      r ? r : std::pmr::get_default_resource()};
  auto p = alloc.allocate(1);
  try {
    alloc.construct(p, std::forward<Args>(args)...); // may throw
    return {p, pmr_deleter{r}};
  } catch (...) { // catch everything so the raw allocation never leaks
    alloc.deallocate(p, 1);
    throw;
  }
}
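And a hypothetical usage sketch (the node type and its value are
illustrative):

struct node {
  int id;
  explicit node(int i) : id(i) {}
};

void example() {
  std::pmr::monotonic_buffer_resource arena;
  // allocates and constructs a node from the arena; its pmr_deleter
  // destroys the node and returns the storage when n goes out of scope
  pmr_unique_ptr<node> n = pmr_allocate_unique<node>(&arena, 42);
}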
Hello everyone,
When I am tracing a request relating to a bucket, like creating one or a
tag operation on a bucket, down to what level should I trace for the
trace to be useful? For example, after a bucket is created, most of the
time is taken by `store_bucket_instance_info` and
`store_bucket_entrypoint_info` in the file `rgw_bucket.cc`. Is it
necessary to go deep inside these functions (they call
`RGWSI_Bucket_SObj` inside), or is tracing down to rgw_bucket.cc enough?
Thank you.
Hi all,
When a client writes to an rbd image, it holds the "exclusive lock".
If the client dies abruptly without releasing the "exclusive lock",
how do the other clients know that they can write to the rbd image?
I wrote the program below. If the first process/client dies abruptly
(built with -DKILL_DEAD and killed with a signal), the second
process/client (built without -DKILL_DEAD) can still write to the
rbd image. However, there is an obvious time delay before the second
client can write to the rbd image.
Is there a mechanism to watch/monitor that the "exclusive lock" is
released after the client which held it died abruptly?
Program:

#include <rbd/librbd.hpp>
#include <rados/librados.hpp>

#include <cstring>
#include <iostream>
#include <string>

void err_msg(int ret, const std::string &msg = "") {
  std::cerr << "[error] msg: " << msg << " strerror: "
            << strerror(-ret) << std::endl;
}

void err_exit(int ret, const std::string &msg = "") {
  err_msg(ret, msg);
  exit(EXIT_FAILURE);
}

int main(int argc, char* argv[]) {
  int ret = 0;
  librados::Rados rados;

  ret = rados.init("admin");
  if (ret < 0)
    err_exit(ret, "failed to initialize rados");
  ret = rados.conf_read_file("ceph.conf");
  if (ret < 0)
    err_exit(ret, "failed to parse ceph.conf");

  ret = rados.connect();
  if (ret < 0)
    err_exit(ret, "failed to connect to rados cluster");

  librados::IoCtx io_ctx;
  std::string pool_name = "rbd";
  ret = rados.ioctx_create(pool_name.c_str(), io_ctx);
  if (ret < 0) {
    rados.shutdown();
    err_exit(ret, "failed to create ioctx");
  }

  // rbd
  librbd::RBD rbd;
  librbd::Image image;
  std::string image_name = "fio_test";
  ret = rbd.open(io_ctx, image, image_name.c_str());
  if (ret < 0) {
    io_ctx.close();
    rados.shutdown();
    err_exit(ret, "failed to open rbd image");
  } else {
    std::cout << "open image succeeded" << std::endl;
  }

  // the first write implicitly acquires the exclusive lock
  ceph::bufferlist bw;
  bw.append(std::string("changcheng"));
  image.write(0, bw.length(), bw);

  ceph::bufferlist br;
  ret = image.read(0, bw.length(), br);
  if (ret < 0)
    err_msg(ret, "failed to read back data");
  br.append(std::string(4, '\0')); // null-terminate for c_str()
  std::cout << br.c_str() << std::endl;

#if defined(KILL_DEAD)
  // spin while still holding the exclusive lock, until killed from outside
  while (1);
#endif

  image.close();
  io_ctx.close();
  rados.shutdown();
  exit(EXIT_SUCCESS);
}
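One direction I found in librbd.hpp that might be related (a minimal
sketch, assuming Image::lock_get_owners and Image::lock_break do what
their names suggest; error handling omitted):

#include <rbd/librbd.hpp>
#include <list>
#include <string>

// a second client inspects the current exclusive-lock owner and breaks
// the lock instead of waiting for the dead client's watch to time out
int break_stale_lock(librbd::Image& image) {
  rbd_lock_mode_t mode;
  std::list<std::string> owners;
  int ret = image.lock_get_owners(&mode, &owners);
  if (ret < 0 || owners.empty())
    return ret; // error, or nobody holds the lock
  return image.lock_break(mode, owners.front());
}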
B.R.
Changcheng
Hi Folks,
I was planning to present cephfs io500 testing tomorrow but it turns out
that the io500 birds of a feather session is starting at the same time.
We'll reschedule for a future date. In the meantime, if anyone is
interested you can join the io500 session here:
https://www.vi4io.org/io500/bofs/isc20/start
Otherwise, have a great week everyone!
Thanks,
Mark