We just migrated ceph:teuthology and all tests under qa/ in ceph:ceph
to Python 3, and from now on teuthology-worker runs in a Python 3
environment by default unless specified otherwise. This means:
- we need to write tests in Python 3 in master now,
- teuthology should be Python 3 compatible, and
- teuthology bug fixes should be backported to the "py2" branch.
If you run into any issues related to Python 3 caused by the above
changes, please let me know and I will try to fix them ASAP.
Currently, the tests under the qa/ directory in the ceph:ceph master branch
are compatible with both Python 2 and Python 3, but since we've moved to
Python 3 there is no need to remain Python 2 compatible anymore. Since the
Sepia lab is still using Ubuntu Xenial, we cannot use features offered by
Python 3.6 at this moment yet. We do plan to upgrade the OS to Bionic soon,
but until that happens the tests need to be compatible with Python 3.5.
The next steps are to:
- drop Python 2 support in the ceph:ceph master branch,
- drop Python 2 support in ceph:teuthology master, and
- backport Python 3 compatible changes to octopus and nautilus to ease
the pain of backporting.
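As a concrete illustration of the Python 3.5 constraint mentioned above, the most common trap is the f-string, which only arrived in Python 3.6; a minimal sketch (the hostname and port are illustrative values, not from any real test):

```python
# f-strings such as f"{host}:{port}" are a SyntaxError on Python 3.5,
# so qa/ tests should stick to str.format() until the lab moves to Bionic.
host = "smithi001"  # illustrative hostname
port = 6789
addr = "{}:{}".format(host, port)  # 3.5-compatible formatting
print(addr)  # → smithi001:6789
```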
Hi all. For about 2 days my Ceph cluster has been seeing 1 million IO/s of reads
on default.rgw.buckets.index. When this happens my PUT requests drop to 200
req/s, whereas I had 500 req/s before and there was no high IO/s on that pool.
When this happens my RGW nodes and OSDs go up to 100% CPU usage.
Do you have any idea what's going on here that makes this pool get 1 million
IO/s? Also, I have upgraded to 14.2.8 but the problem still persists.
Thanks for your help :)
I am currently trying to implement an option to use Ceph's BlueStore as a backend for the storage framework JULEA, but I am stuck on an error when using ObjectStore::create to initialize the ObjectStore.
I initialize Ceph beforehand using global_init, but on calling ObjectStore::create (https://github.com/JCoym/julea/blob/3866a3cc2edfda6a09e…) I get a segfault from a mutex function inside Ceph.
I currently have no idea what causes this error, so I would be glad if someone has an idea what I'm missing.
*** Caught signal (Segmentation fault) ** in thread 7fcaf900e740 thread_name:bluestore_test
ceph version 15.1.0-1422-g3064f20220 (3064f2022029fb2a63802316d8c97dfdae3b2337) octopus (rc)
 1: (()+0x12e4f5e) [0x7fcb05902f5e]
 2: (()+0x14b20) [0x7fcaf9f52b20]
 …) const+0x2a) [0x7fcafb3ea302]
 7: (PerfCountersCollection::add(PerfCounters*)+0x37) [0x7fcafb6c8e93]
 8: (Throttle::Throttle(CephContext*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long)+0xc0) [0x7fcb0534e3de]
 …std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x2d) [0x7fcb0534e31b]
 …std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int)+0x100) [0x7fcb051b030a]
 13: (julea_bluestore_init()+0x138) [0x7fcb051a4cb4]
 14: ./bluestore_test() [0x40851e]
 15: (__libc_start_main()+0xf3) [0x7fcaf932e1a3]
 16: ./bluestore_test() [0x40843e]
In Mimic, how extensive is the S3 bucket policy support?
I'm trying to configure a bucket to require encryption using the following
policy, but it doesn't appear to have any effect: I can still upload
unencrypted objects. I tried different variations on the policy structure,
but nothing seems to have any effect, and I don't see anything in the logs
(debug_rgw = 5/5).
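For reference, the usual AWS-documented pattern for this is a Deny on s3:PutObject whenever the s3:x-amz-server-side-encryption header is absent (the bucket name below is hypothetical, and whether RGW in Mimic honors the Null condition operator is exactly what's in question):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyUnencryptedUploads",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::my-bucket/*",
    "Condition": {
      "Null": {"s3:x-amz-server-side-encryption": "true"}
    }
  }]
}
```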
We're happy to announce that a couple of weeks ago we submitted a few GitHub pull requests adding initial Windows support. A big thank you to the people who have already reviewed the patches.
To bring some context about the scope and current status of our work: we're mostly targeting the client side, allowing Windows hosts to consume rados, rbd and cephfs resources.
We have Windows binaries capable of writing to rados pools. We're using MinGW to build the Ceph components, mainly because it requires the fewest changes to cross-compile Ceph for Windows. However, we're soon going to switch to MSVC/Clang due to MinGW limitations and long-standing bugs. Porting the unit tests is also something we're currently working on.
The next step will be implementing a virtual miniport driver so that RBD volumes can be exposed to Windows hosts and Hyper-V guests. We're hoping to leverage librbd as much as possible as part of a daemon that will communicate with the driver. We're also aiming at cephfs and considering using Dokan, which is FUSE compatible.
Merging the open PRs would allow us to move forward, focusing on the drivers and avoiding rebase issues. Any help on that is greatly appreciated.
Last but not least, I'd like to thank SUSE, which is sponsoring this effort!
This is a newbie question; I would be really thankful if you could answer it.
I want to compile the Ceph source code because I want to profile the librados
and CRUSH function stacks, loops, execution time, etc. on the CPU.
Please verify that this is the right track I am following:
- I have cloned Ceph from the Ceph git repository.
- I have installed the build dependencies using the install-deps.sh script.
- Because I would like to use gdb to debug a client program later, and the
client program will depend on the librados library, I must compile Ceph
in debug mode. Therefore I would modify the parameters of the Ceph cmake
invocation in the do_cmake.sh script accordingly.
- Then I run do_cmake.sh.
- In the build directory I run make -j 32.
- To start developer mode, I run make vstart.
- In developer mode I can write READ and WRITE tests, compile
these tests, and then use a profiling tool on the compiled
executable to profile the function stacks.
Is this the correct way to do profiling? Please let me know if this is fine
or if there is something more I should do.
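The steps above can be sketched roughly as follows (the -j count, the client binary name, and the perf invocation are illustrative; note that do_cmake.sh already defaults to a Debug build, so an explicit CMAKE_BUILD_TYPE is mostly documentation):

```shell
git clone https://github.com/ceph/ceph.git
cd ceph
./install-deps.sh                        # install build dependencies
./do_cmake.sh -DCMAKE_BUILD_TYPE=Debug   # Debug is already the default here
cd build
make -j 32
../src/vstart.sh -d -n                   # start a local development cluster
# then profile a librados test client, e.g. with perf:
# perf record -g ./your_rados_client && perf report
```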