Hi folks,
We just migrated ceph:teuthology and all of the tests under qa/ in ceph:ceph
to Python 3. From now on, teuthology-worker runs in a Python 3
environment by default unless told otherwise with
"--teuthology-branch py2".
This means:
- tests in master now need to be written in Python 3,
- teuthology has to stay Python 3 compatible,
- teuthology bug fixes should be backported to the "py2" branch.
If you run into any issues related to Python 3 caused by the above
changes, please let me know and I will try to fix them ASAP.
Currently, the tests under the qa/ directories in the ceph:ceph master branch
are compatible with both Python 2 and Python 3, but since we've moved to
Python 3 there is no need to keep Python 2 compatibility anymore. Since the
Sepia lab is still running Ubuntu Xenial, we cannot yet use features offered
by Python 3.6. We do plan to upgrade the OS to Bionic soon; until that
happens, the tests need to stay compatible with Python 3.5.
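As a concrete illustration (a generic snippet, not taken from the qa/ suite):
f-strings were only added in Python 3.6, so on Xenial's Python 3.5 they are a
SyntaxError and str.format() has to be used instead.

    # Works on Python 3.5 (and later):
    daemon = 'osd.0'
    msg = 'restarting daemon {}'.format(daemon)

    # Python 3.6+ only -- fails to parse on Xenial's 3.5:
    # msg = f'restarting daemon {daemon}'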
The next steps are to:
- drop Python 2 support in the ceph:ceph master branch,
- drop Python 2 support in ceph:teuthology master, and
- backport Python 3 compatible changes to octopus and nautilus to ease
the pain of future backports.
--
Regards
Kefu Chai
Hi all. For about 2 days my Ceph cluster has been doing 1 million IO/s of
reads on default.rgw.buckets.index. When this happens my PUT requests only
reach 200 req/s, whereas I was getting 500 req/s before and there was no
high IO/s on that pool. At the same time my RGW nodes and OSDs go up to
100% CPU usage.
Do you have any idea what is going on here that makes this pool receive 1
million IO/s?
Also, I have upgraded to 14.2.8 but the problem still persists.
Thanks for your help :)
Hi,
I am currently trying to implement an option to use Ceph's BlueStore as a backend for the storage framework JULEA, but I am stuck on an error when using ObjectStore::create to initialize the ObjectStore.
I initialize Ceph beforehand using global_init, but when ObjectStore::create is called (https://github.com/JCoym/julea/blob/3866a3cc2edfda6a09e…) I get a segfault from one of Ceph's mutex functions.
I currently have no idea what causes this error, so I would be glad if someone has an idea of what I'm missing.
Thanks,
Johannes
*** Caught signal (Segmentation fault) **
in thread 7fcaf900e740 thread_name:bluestore_test
ceph version 15.1.0-1422-g3064f20220 (3064f2022029fb2a63802316d8c97dfdae3b2337) octopus (rc)
1: (()+0x12e4f5e) [0x7fcb05902f5e]
2: (()+0x14b20) [0x7fcaf9f52b20]
3: (ceph::mutex_debug_detail::mutex_debugging_base::_enable_lockdep() const+0xc) [0x7fcafb3e59a2]
4: (ceph::mutex_debug_detail::mutex_debug_impl<false>::enable_lockdep(bool) const+0x2a) [0x7fcafb3ea302]
5: (ceph::mutex_debug_detail::mutex_debug_impl<false>::lock(bool)+0x23) [0x7fcafb3e85e5]
6: (std::lock_guard<ceph::mutex_debug_detail::mutex_debug_impl<false> >::lock_guard(ceph::mutex_debug_detail::mutex_debug_impl<false>&)+0x2f) [0x7fcafb406173]
7: (PerfCountersCollection::add(PerfCounters*)+0x37) [0x7fcafb6c8e93]
8: (Throttle::Throttle(CephContext*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, long, bool)+0x4f6) [0x7fcafb4dfe56]
9: (BlueStore::BlueStoreThrottle::BlueStoreThrottle(CephContext*)+0xf3) [0x7fcb053dfcb1]
10: (BlueStore::BlueStore(CephContext*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long)+0xc0) [0x7fcb0534e3de]
11: (BlueStore::BlueStore(CephContext*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x2d) [0x7fcb0534e31b]
12: (ObjectStore::create(CephContext*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int)+0x100) [0x7fcb051b030a]
13: (julea_bluestore_init()+0x138) [0x7fcb051a4cb4]
14: ./bluestore_test() [0x40851e]
15: (__libc_start_main()+0xf3) [0x7fcaf932e1a3]
16: ./bluestore_test() [0x40843e]
In Mimic, how extensive is the S3 bucket policy support?
I'm trying to configure a bucket to require encryption using the following
policy, but it doesn't appear to have any effect: I can still upload
unencrypted objects. I tried different variations on the policy structure,
but nothing seems to make any difference, and I don't see anything in the
logs (debug_rgw = 5/5).
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::testing/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}
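For reference, this is roughly the kind of test being run, shown here as a
minimal boto3 sketch: it attaches the policy above to a bucket on an RGW
endpoint and then attempts uploads with and without the SSE header. The
endpoint URL, credentials and KMS key ID below are placeholders, not details
from this cluster.

    import boto3

    # Placeholders -- adjust for your RGW endpoint and credentials.
    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.com:8080',
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
    )

    # Attach the deny-unencrypted-PUT policy shown above (saved as policy.json).
    with open('policy.json') as f:
        s3.put_bucket_policy(Bucket='testing', Policy=f.read())

    # If the policy were enforced, this unencrypted upload should be denied.
    try:
        s3.put_object(Bucket='testing', Key='plain.txt', Body=b'unencrypted')
        print('unencrypted PUT succeeded (policy apparently not enforced)')
    except Exception as exc:
        print('unencrypted PUT rejected:', exc)

    # An upload that requests SSE-KMS should still be allowed.
    s3.put_object(Bucket='testing', Key='encrypted.txt', Body=b'encrypted',
                  ServerSideEncryption='aws:kms', SSEKMSKeyId='my-key-id')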
Thanks,
Wyllys Ingersoll
Hi,
We're happy to announce that a couple of weeks ago we submitted a few GitHub pull requests[1][2][3] adding initial Windows support. A big thank you to the people who have already reviewed the patches.
To bring some context about the scope and current status of our work: we're mostly targeting the client side, allowing Windows hosts to consume rados, rbd and cephfs resources.
We have Windows binaries capable of writing to rados pools[4]. We're using mingw to build the Ceph components, mostly because it requires the fewest changes to cross-compile Ceph for Windows. However, we're soon going to switch to MSVC/Clang due to mingw limitations and long-standing bugs[5][6]. Porting the unit tests is also something we're currently working on.
The next step will be implementing a virtual miniport driver so that RBD volumes can be exposed to Windows hosts and Hyper-V guests. We're hoping to leverage librbd as much as possible as part of a daemon that will communicate with the driver. We're also aiming at cephfs and considering using Dokan, which is FUSE compatible.
Merging the open PRs would allow us to move forward, focusing on the drivers and avoiding rebase issues. Any help on that is greatly appreciated.
Last but not least, I'd like to thank SUSE, who is sponsoring this effort!
Lucian Petrut
Cloudbase Solutions
[1] https://github.com/ceph/ceph/pull/31981
[2] https://github.com/ceph/ceph/pull/32027
[3] https://github.com/ceph/rocksdb/pull/42
[4] http://paste.openstack.org/raw/787534/
[5] https://sourceforge.net/p/mingw-w64/bugs/816/
[6] https://sourceforge.net/p/mingw-w64/bugs/527/
Hi,
This is a newbie question; I would be really thankful if you could answer it.
I want to compile the Ceph source code because I want to profile the
librados and CRUSH function stacks, loops, execution time, etc. on the CPU.
Please verify whether this is the right track I am following:
- I have cloned Ceph from the Ceph git repository.
- I have installed the build dependencies using the install-deps.sh script.
- Because I would like to use gdb to debug the client program later, and the
client program will depend on the librados library, I must compile Ceph in
debug mode. Therefore I would modify the parameters of the Ceph cmake in the
do_cmake.sh script accordingly.
- Then I run do_cmake.sh.
- In the build directory I run make -j 32.
- To start the developer mode, I run make vstart.
- In the developer mode I can write READ and WRITE tests, compile these
tests, and then use some profiling tool on the compiled executables to
profile the function stacks (see the load-generation sketch after this list).
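As a rough load-generation sketch, and only under stated assumptions: instead
of (or in addition to) a compiled client, the Python rados bindings, which
call straight into librados, can drive READ and WRITE traffic against a
vstart cluster while a profiler such as perf samples the librados/CRUSH code
paths of the process. The conffile path and the pool name below are
assumptions for a default vstart.sh environment, not verified settings.

    import rados

    # vstart.sh writes a ceph.conf into the build directory (path is an assumption).
    cluster = rados.Rados(conffile='ceph.conf')
    cluster.connect()

    # Assumes a pool named 'rbd' exists; create one first if it does not.
    ioctx = cluster.open_ioctx('rbd')
    try:
        payload = b'x' * 4096
        for i in range(10000):
            name = 'profile-obj-{}'.format(i)
            ioctx.write_full(name, payload)   # exercises the WRITE path
            ioctx.read(name)                  # exercises the READ path
    finally:
        ioctx.close()
        cluster.shutdown()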
Is this the correct way to do the profiling? Please let me know whether it is
fine or whether there is something more I should do.
Bobby!
Hi all,
a few weeks ago, a number of virtual Ceph Developer Summit meetings took
place as a replacement for the in-person summit that was planned as part
of Cephalocon in Seoul: https://pad.ceph.com/p/cds-pacific
The Ceph Dashboard team also participated in these and held three video
conference meetings to lay out our plans for the Pacific release.
For details, please take a look at our notes at this Etherpad:
https://pad.ceph.com/p/ceph-dashboard-pacific-priorities
We tried to identify a few "themes", outlining individual tasks which we
keep track of in the tracker.ceph.com bug tracker. The tracker issues
should be used for discussing and defining the tasks at hand.
A key theme for the upcoming Ceph Pacific release is the intention to
further deepen and enhance the integration with cephadm and the
orchestrator.
For Ceph Octopus we focused on the most common day-2 operation, OSD
management, but going forward we would like to also support the deployment
and management of all the other Ceph-related services that can be rolled
out via cephadm and the orchestrator.
In a hopefully not-so-distant future, we would like to be able to use the
dashboard as a kind of "graphical installer" that guides the user through
the entire installation and deployment process of a Ceph cluster from
scratch (well, almost: starting from an initial Mon+Mgr deployment).
Another key theme is closing feature gaps: the various services of a
Ceph cluster like RBD or RGW are constantly evolving and getting new
features, so we are always trying to catch up with the latest
developments there.
We're also looking into enhancing our monitoring/alerting support and
integration with Grafana and Prometheus.
Last but not least, we always try to enhance and improve existing
functionality and work on better usability and user experience. This
also includes bigger refactoring work or updating key components that
the dashboard depends on.
As always, we would like the dashboard to be an application that Ceph
administrators like and actually *want* to use to perform their jobs, so
we are very keen on getting your feedback here!
If there is anything you are missing or if you find any part of the
dashboard to be confusing or not helpful, we'd like to know about it!
Please get in touch with us to share your impressions and ideas. The
best way to do this is to join the #ceph-dashboard IRC channel on OFTC
or by filing a bug report via the tracker:
https://tracker.ceph.com/projects/mgr/issues/new
Thank you,
Lenz
--
SUSE Software Solutions Germany GmbH - Maxfeldstr. 5 - 90409 Nuernberg
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)