I am trying out Ceph's dmclock to see how it works for QoS control.
In particular, I want to set "osd_op_queue" to "mclock_client" so I can
configure different [r, w, l] (reservation, weight, limit) values for each
client. The Ceph version I use is Nautilus.
I noticed that the "OSD Config Reference" section of the Ceph documentation
states that "the mClock based ClientQueue (mclock_client) also incorporates
the client identifier in order to promote fairness between clients," so I
believe librados can support per-client configuration right now. How can I
set up the Ceph configuration to assign different (r, w, l) values to
different clients using this "client identifier"? Thanks.
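For reference, the Nautilus-era dmclock settings are exposed as ordinary OSD
config options. A minimal sketch of enabling the mclock_client queue looks
like the following; note these options set the (r, w, l) profile for the
client op *class* as a whole, not for individual clients, and the numeric
values here are purely illustrative assumptions:

```shell
# Switch the OSD op queue to the mclock client queue
# (changing osd_op_queue requires an OSD restart to take effect).
ceph config set osd osd_op_queue mclock_client

# Nautilus-era dmclock tunables for the "client" op class
# (values below are examples only, not recommendations):
ceph config set osd osd_op_queue_mclock_client_op_res 1000.0
ceph config set osd osd_op_queue_mclock_client_op_wgt 500.0
ceph config set osd osd_op_queue_mclock_client_op_lim 2000.0
```

With mclock_client, the client identifier is used internally to promote
fairness between clients sharing this profile; there is no documented
per-client (r, w, l) config option in Nautilus.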
Due to increases in the amount of testing and the length of logs, the (Long
Running) Ceph cluster in the Sepia lab has been reaching 95-98% capacity
over the past few days. Since almost everything else was already deleted
from the cluster a few months ago, I need to reduce the amount of test logs
we keep on hand.
- Keep 14 days of passed job logs
- Compress failed job logs older than 30 days
- Delete failed job logs older than 365 days
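The retention policy above could be scripted roughly as follows. This is a
hypothetical sketch: the real teuthology archive layout and tooling may
differ, and it assumes pass/fail status is recoverable from the path.

```shell
# prune_logs <log_root>: apply the retention policy to files under <log_root>.
# Assumes job directories contain "passed" or "failed" in their path.
prune_logs() {
    root="$1"
    # Delete failed job logs older than 365 days (before compressing,
    # so we never waste time gzipping files about to be removed).
    find "$root" -type f -path '*failed*' -mtime +365 -delete
    # Compress failed job logs older than 30 days (skip already-compressed).
    find "$root" -type f -path '*failed*' -mtime +30 ! -name '*.gz' \
        -exec gzip -f {} +
    # Keep only 14 days of passed job logs.
    find "$root" -type f -path '*passed*' -mtime +14 -delete
}
```

A real deployment would run this from cron and log what it removed.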
We will now be deleting failed job logs older than 300 days. We may be
able to increase the cluster's capacity by purchasing additional hardware,
which I will discuss with the appropriate stakeholders.
Systems Administrator, RDU
We have merged changes with the aim of helping developers run upgrade
tests on teuthology easily. One can now schedule an upgrade suite run
in teuthology just like any other suite, simply by passing "-s upgrade".
This was not possible earlier due to the dependency of client upgrades
on versions other than the one the suite was being run from. Now
upgrade-clients is a suite of its own, independent of other upgrades.
Moving forward, upgrade suites in master/Pacific and beyond will also
follow the same structure.
Let's try to make sure we care about backward compatibility and run
upgrade tests wherever required.
- https://github.com/ceph/ceph/pull/36435 (nautilus)
- https://github.com/ceph/ceph/pull/36436 (octopus)
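Scheduling an upgrade run then looks much like any other suite. A
hypothetical invocation (branch, machine type, and priority are examples,
not recommendations):

```shell
# Schedule the upgrade suite like any other teuthology suite;
# only "-s upgrade" is specific to this change.
teuthology-suite -s upgrade -c octopus -m smithi -p 100
```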
We're happy to announce the availability of the eleventh release in the
Nautilus series. This release brings a number of bugfixes across all
major components of Ceph. We recommend that all Nautilus users upgrade
to this release.
* RGW: The `radosgw-admin` sub-commands dealing with orphans --
`radosgw-admin orphans find`, `radosgw-admin orphans finish`, and
`radosgw-admin orphans list-jobs` -- have been deprecated. They
have not been actively maintained, and they store intermediate
results on the cluster, which could fill a nearly-full cluster.
They have been replaced by a tool, currently considered
experimental, `rgw-orphan-list`.
* Now when the noscrub and/or nodeep-scrub flags are set globally or per
pool, scheduled scrubs of the disabled type will be aborted. User-initiated
scrubs are NOT interrupted.
* Fixed a ceph-osd crash in _committed_osd_maps when there is a failure to
encode the first incremental map (issue#46443:
https://github.com/ceph/ceph/pull/46443).
For the detailed changelog please refer to the blog entry at
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.11.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: f7fdb2f52131f54b891a2ec99d8205561242cdaf
SUSE Software Solutions Germany GmbH
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)
I'm happy to announce another release of the go-ceph API bindings. This is
a regular release following our every-two-months release cadence.
The bindings aim to play a similar role to the "pybind" python bindings in the
ceph tree but for the Go language. These API bindings require the use of cgo.
There are already a few consumers of this library in the wild, including the
Specific questions, comments, bugs, etc. are best directed at our GitHub issues.