Hi all.
Is this repo still alive?
https://github.com/ceph/python-crush
It seems that it is not compatible with newer versions of Ceph. :(
If not, is there any alternative to this tool?
Thanks.
Hi,
I am trying out Ceph's dmclock to see how it works for QoS control.
In particular, I want to set "osd_op_queue" to "mclock_client" to
configure different [r, w, l] values for each client. The Ceph version
I use is Nautilus 14.2.9.
I noticed that the "OSD CONFIG REFERENCE" section of the Ceph
documentation states that "the mClock based ClientQueue (mclock_client)
also incorporates the client identifier in order to promote fairness
between clients.", so I believe librados can support per-client
configuration now. How can I set up the Ceph configuration to assign
different [r, w, l] values to different clients using this "client
identifier"? Thanks.
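For reference, here is the kind of setup I have in mind, as a minimal
ceph.conf sketch based on the dmclock option names in the Nautilus OSD
config reference (the numeric values are placeholders, and as far as I
can tell these options apply per operation class rather than per
individual client, which is exactly why I am asking):

    [osd]
    osd op queue = mclock_client
    # placeholder [r, w, l] values for client ops
    osd op queue mclock client op res = 1000.0
    osd op queue mclock client op wgt = 500.0
    osd op queue mclock client op lim = 0.0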
Best,
Zhenbo Qiao
According to this doc (https://docs.ceph.com/docs/master/dev/developer_guide/running-tests-usi…),
I have been granted Sepia lab access. However, when I push a branch to
ceph-ci, I get a 403.
I have searched the documentation but found nothing about this problem.
By the way, how to create a branch named like "wip-username-testing" in
the ceph repo also confuses me.
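For reference, this is roughly what I am doing (assuming the usual
remote setup; the branch name is just an example):

    git remote add ceph-ci git@github.com:ceph/ceph-ci.git
    git checkout -b wip-myname-testing
    git push ceph-ci wip-myname-testing    # fails with HTTP 403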
Hi all,
Due to increases in the amount of testing and the length of logs, the
(Long Running) Ceph cluster in the Sepia lab has been reaching 95-98%
capacity over the past few days. Since almost everything else was
already deleted from the cluster a few months ago, I need to reduce
the amount of test logs we keep on hand.
Currently we:
- Keep 14 days of passed job logs
- Compress failed job logs older than 30 days
- Delete failed job logs older than 365 days
We will now be deleting failed job logs older than 300 days. We may be
able to increase the cluster's capacity by purchasing additional
hardware, which I will discuss with the appropriate stakeholders.
--
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway
Hi Folks,
Sorry for the potential repeat email; I had last week's date in the
subject. The weekly performance meeting will be starting in about 20
minutes! Today, Avnum Hanukov would like to introduce and discuss a
USENIX paper on the drawbacks of the CRUSH algorithm and an extension
called "MapX" that potentially improves performance. You can
download the paper here:
https://www.usenix.org/system/files/fast20-wang_li.pdf
Hope to see you there!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Thanks,
Mark
Hi everyone,
We have merged [1] and [2] with the aim of helping developers run
upgrade tests in teuthology easily. One can now schedule an upgrade
suite run in teuthology just like any other suite, simply by passing
"-s upgrade". This was not possible earlier due to the dependency of
upgrade-clients on versions other than the one the suite was being
run from. Now upgrade-clients is a suite of its own, independent of
the other upgrade suites.
Moving forward, upgrade suites in master/Pacific and beyond will also
follow the same structure.
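As an example, scheduling such a run might look like the following
(the branch, machine type, and priority are only placeholders):

    teuthology-suite -s upgrade -c octopus -m smithi -p 100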
Let's try to make sure we care about backward compatibility and run
upgrade tests wherever required.
Thanks,
Neha
[1] https://github.com/ceph/ceph/pull/36435 - nautilus
[2] https://github.com/ceph/ceph/pull/36436 - octopus
We're happy to announce the availability of the eleventh release in the
Nautilus series. This release brings a number of bugfixes across all
major components of Ceph. We recommend that all Nautilus users upgrade
to this release.
Notable Changes
---------------
* RGW: The `radosgw-admin` sub-commands dealing with orphans --
`radosgw-admin orphans find`, `radosgw-admin orphans finish`,
`radosgw-admin orphans list-jobs` -- have been deprecated. They
have not been actively maintained and they store intermediate
results on the cluster, which could fill a nearly-full cluster.
They have been replaced by a tool, currently considered
experimental, `rgw-orphan-list` (see the example below this list).
* Now, when the noscrub and/or nodeep-scrub flags are set globally or
  per pool, scheduled scrubs of the disabled type will be aborted. All
  user-initiated scrubs are NOT interrupted.
* Fixed a ceph-osd crash in _committed_osd_maps when there is a failure to encode
the first incremental map. issue#46443: https://github.com/ceph/ceph/pull/46443
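To illustrate the orphans change above: the replacement workflow looks
roughly like this (the arguments are illustrative, and since the new
tool is experimental you should check its help output):

    # deprecated approach, stores intermediate results on the cluster:
    radosgw-admin orphans find --pool=<data-pool> --job-id=<job>
    radosgw-admin orphans finish --job-id=<job>
    # experimental replacement, keeps its results in local files:
    rgw-orphan-list <data-pool>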
For the detailed changelog please refer to the blog entry at
https://ceph.io/releases/v14-2-11-nautilus-released/
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.11.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: f7fdb2f52131f54b891a2ec99d8205561242cdaf
--
Abhishek Lekshmanan
SUSE Software Solutions Germany GmbH
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)
I'm happy to announce another release of the go-ceph API bindings.
This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.5.0
The bindings aim to play a similar role to the "pybind" python bindings in the
ceph tree but for the Go language. These API bindings require the use of cgo.
There are already a few consumers of this library in the wild, including the
ceph-csi project.
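To give a flavor of the API, here is a minimal, untested sketch that
connects to a cluster and lists its pools using the rados package:

    package main

    import (
        "fmt"

        "github.com/ceph/go-ceph/rados"
    )

    func main() {
        // Create a connection handle and load /etc/ceph/ceph.conf.
        conn, err := rados.NewConn()
        if err != nil {
            panic(err)
        }
        if err := conn.ReadDefaultConfigFile(); err != nil {
            panic(err)
        }
        // Connect; this needs a reachable cluster and a valid keyring.
        if err := conn.Connect(); err != nil {
            panic(err)
        }
        defer conn.Shutdown()

        // List the pools in the cluster.
        pools, err := conn.ListPools()
        if err != nil {
            panic(err)
        }
        fmt.Println("pools:", pools)
    }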
Specific questions, comments, bugs etc are best directed at our github issues
tracker.
---
John Mulligan
phlogistonjohn(a)asynchrono.us
jmulligan(a)redhat.com