I'm happy to announce another release of the go-ceph API
bindings. This is a regular release following our every-two-months release
cycle. The bindings aim to play a similar role to the "pybind" Python
bindings in the Ceph tree, but for the Go language. These API bindings
require the use of cgo.
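For anyone curious what the bindings look like in practice, connecting to a
cluster goes roughly like this (a minimal sketch using the rados package;
error handling kept deliberately terse):

    package main

    import (
        "fmt"

        "github.com/ceph/go-ceph/rados"
    )

    func main() {
        // Create a connection handle and load the usual /etc/ceph/ceph.conf.
        conn, err := rados.NewConn()
        if err != nil {
            panic(err)
        }
        if err := conn.ReadDefaultConfigFile(); err != nil {
            panic(err)
        }
        // Connect() is where cgo hands control to librados.
        if err := conn.Connect(); err != nil {
            panic(err)
        }
        defer conn.Shutdown()

        fsid, err := conn.GetFSID()
        if err != nil {
            panic(err)
        }
        fmt.Println("connected to cluster", fsid)
    }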
There are already a few consumers of this library in the wild, and the
ceph-csi project is starting to make use of it.
Specific questions, comments, bugs, etc. are best directed at our GitHub
issues tracker.
We're currently investigating setting up a Teuthology cluster to run the
Ceph integration test suite on IBM Z, to improve test coverage on our
platform.
However, we're not sure what hardware resources are required to do so. The
target configuration should be large enough to comfortably support running
an instance of the full Ceph integration tests. Is there some data
available from your experience with such installations on how large this
cluster needs to be?
In particular, how many nodes, how many CPUs and how much memory per node,
and what number (and type/size) of disks should be attached?
Thanks for any data / estimates you can provide!
Details of this release are summarized here:
rados - PASSED
rgw - requires rerun
rbd - FAILED, Jason approved?
fs - FAILED, Greg approved?
kcephfs - FAILED, Greg approved?
multimds - FAILED, Greg approved?
krbd - FAILED, Ilya, Jason approved?
ceph-deploy - FAILED, Brad, can you take a look pls?
ceph-disk - FAILED, Jan, Andrew, can you take a look pls?
upgrade/client-upgrade-jewel - PASSED
upgrade/client-upgrade-luminous - PASSED
upgrade/luminous-x (mimic) - still in progress
upgrade/mimic-x (nautilus) - FAILED, need to discuss with Josh (we did
not run it for the previous point release)
upgrade/mimic-p2p - tests need fixing
powercycle - still in progress
ceph-ansible - FAILED, Brad is fixing
ceph-volume - FAILED, Jan, pls take a look
Please review the results and reply/approve/comment.
There is a general documentation meeting called the "DocuBetter Meeting",
and it is held every two weeks. The next DocuBetter Meeting will be on
April 08, 2020 at 0830 PST, and will run for thirty minutes. Everyone with
a documentation-related request or complaint is invited. The meeting will
be held here: https://bluejeans.com/908675367
Send documentation-related requests and complaints to me by replying to
this email and CCing me at zac.dover(a)gmail.com.
This message will be sent to dev(a)ceph.io every Monday morning (North
American time).
The next DocuBetter meeting is scheduled for:
08 Apr 2020 0830 PST
08 Apr 2020 1630 UTC
09 Apr 2020 0230 AEST
I'm one of the members of the team currently maintaining, and hopefully
improving, go-ceph (Ceph library bindings for Go) [1].
An issue I keep returning to is the nature of API calls such as
rados_mon_command and the related *_command functions.
The argument named cmd for these functions is of type `const char **cmd` and
is followed by a `size_t cmdlen`. I'd like to better understand the rationale
for array-of-char* strings and how this is intended to be used.
When I first saw the function I initially thought it was meant to support
issuing multiple "commands" at once, but I later started thinking that a
single command could be split across multiple strings, and experimentation
demonstrated that this does indeed work. I've attempted reading the sources,
but nothing jumps out at me to explain this approach. Most callers of these
functions seem to use only a single "command string".
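For context, the way we currently wrap these calls in go-ceph takes a single
JSON command buffer which, as far as I can tell, ends up passed to the C
side as a one-element array -- a rough sketch:

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/ceph/go-ceph/rados"
    )

    // monStatus issues a single "status" command to the monitors. The whole
    // command is one JSON document; we never split it across several strings,
    // so the underlying const char **cmd array effectively has length one.
    func monStatus(conn *rados.Conn) error {
        cmd, err := json.Marshal(map[string]string{
            "prefix": "status",
            "format": "json",
        })
        if err != nil {
            return err
        }
        buf, info, err := conn.MonCommand(cmd)
        if err != nil {
            return err
        }
        fmt.Println(info)        // free-form status text from the mon
        fmt.Println(string(buf)) // JSON payload of the reply
        return nil
    }

    func main() {
        conn, err := rados.NewConn()
        if err != nil {
            panic(err)
        }
        if err := conn.ReadDefaultConfigFile(); err != nil {
            panic(err)
        }
        if err := conn.Connect(); err != nil {
            panic(err)
        }
        defer conn.Shutdown()
        if err := monStatus(conn); err != nil {
            panic(err)
        }
    }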
I also tried looking at how the code makes use of the array (or vector,
after the transition to C++), but I will admit I'm not very familiar with
C++ and get a bit lost in some of the templates and overloading.
Could someone who is familiar with this explain how this argument and its
type are meant to be used?
I want to make sure we're making good use of the APIs that Ceph provides
in the wrapper library, or at the very least I want to make sure we're not
misusing them. :-)
Thank you for your time.
1 - https://github.com/ceph/go-ceph
I am a novice Ceph user and I am going through the Ceph CRUSH algorithm. I
would like to do some profiling of CRUSH computations on a CPU.
Are there any CRUSH benchmarks?
Any profiling tool for CRUSH?
I would like to examine which parts of CRUSH are the most compute-intensive
on a CPU. Maybe there is already some research paper on this?
I am really looking forward to any helpful tips.
I am trying to understand the **Straw2** bucket used in the **CRUSH
algorithm** of **Ceph**, and I have some specific questions about the
straw2 selection code in mapper.c:
- Why is there a need to take the **log** of the **hash value**?
- Is **x** the **placement ps** calculated by the **crush_hash32_2** function?
- What is the function **crush_ln()** in mapper.c actually computing? I am
  confused by the comment **2^44*log2(input+1)**.
- Why is there a need to create a negative number based on the **ln (natural
  log)** of the hash value?
Please help me understand these points.
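To show where I currently am, here is a small floating-point sketch of what
I believe straw2 is doing (plain math.Log instead of the fixed-point
crush_ln from mapper.c) -- please correct me if my understanding is wrong:

    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    // strawSelect picks one item index given weights, using what I believe is
    // the straw2 idea: each item draws u uniform in (0,1], its "straw" is
    // ln(u)/weight (a negative number), and the item with the largest straw
    // (closest to zero) wins. Heavier weights divide the negative log by a
    // bigger number, pulling the straw toward zero, so they win more often.
    func strawSelect(weights []float64, rng *rand.Rand) int {
        best := -1
        bestDraw := math.Inf(-1)
        for i, w := range weights {
            if w <= 0 {
                continue
            }
            u := rng.Float64() // stand-in for the 16-bit hash value scaled into (0,1]
            if u == 0 {
                u = math.SmallestNonzeroFloat64
            }
            draw := math.Log(u) / w
            if draw > bestDraw {
                bestDraw = draw
                best = i
            }
        }
        return best
    }

    func main() {
        weights := []float64{1, 2, 3} // expect roughly 1/6, 2/6, 3/6 of the picks
        counts := make([]int, len(weights))
        rng := rand.New(rand.NewSource(1))
        const trials = 600000
        for n := 0; n < trials; n++ {
            counts[strawSelect(weights, rng)]++
        }
        for i, c := range counts {
            fmt.Printf("item %d: %.3f\n", i, float64(c)/float64(trials))
        }
    }

If that picture is right, then I suppose the log is what turns a uniform
hash into a weighted straw, and the negative value is simply because ln(u)
is negative for u < 1 -- but I would appreciate confirmation.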
Thanks in advance