We're happy to announce the first bug fix release of the Ceph Nautilus
stable release series, v14.2.1. We recommend all Nautilus users upgrade
to this release. If upgrading from an older release of Ceph, please
follow the general guidelines for upgrading to Nautilus.
* The default value for `mon_crush_min_required_version` has been
changed from `firefly` to `hammer`, which means the cluster will
issue a health warning if your CRUSH tunables are older than hammer.
There is generally a small (but non-zero) amount of data that will
move around when making the switch to hammer tunables; for more
information, see the documentation on CRUSH tunables.
If possible, we recommend that you set the oldest allowed client to
hammer or later. You can tell what the current oldest allowed client is
with::

  ceph osd dump | grep min_compat_client
If the current value is older than hammer, you can tell whether it
is safe to make this change by verifying that there are no clients
older than hammer currently connected to the cluster with::

  ceph features
The newer `straw2` CRUSH bucket type was introduced in hammer, and
ensuring that all clients are hammer or newer allows new features
only supported for `straw2` buckets to be used, including the
`crush-compat` mode for the :ref:`balancer`; a sketch of the
relevant commands is shown after this list.
* Ceph now packages python bindings for python3.6 instead of
python3.4, because EPEL7 recently switched from python3.4 to
python3.6 as the native python3. See the announcement
for more details on the background of this change.
* Nautilus-based librbd clients cannot open images stored on
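As a rough sketch of the CRUSH tunables change described in the first
item above (this assumes all connected clients already report hammer or
newer; adjust to your own cluster before running anything), the sequence
of commands might look like this::

  # check the current oldest allowed client
  ceph osd dump | grep min_compat_client

  # verify that no pre-hammer clients are connected
  ceph features

  # raise the oldest allowed client, then switch to hammer tunables
  ceph osd set-require-min-compat-client hammer
  ceph osd crush tunables hammer

  # optionally enable the balancer in crush-compat mode
  ceph balancer mode crush-compat
  ceph balancer on

Switching tunables will move some data around, so plan the change for a
low-traffic window.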
For a detailed changelog, please refer to the official release notes
entry on the Ceph blog.
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.1.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: d555a9489eb35f84f2e1ef49b77e19da9d113972
We are happy to announce the next bugfix release in the v12.2.x Luminous
stable release series, v12.2.12. We recommend all Luminous users upgrade
to this release. Many thanks to everyone who contributed backports, and a
special mention to Yuri for the QE effort put into this release.
* In 12.2.11 and earlier releases, keyring caps were not checked for validity,
so the caps string could be anything. As of 12.2.12, caps strings are
validated and providing a keyring with an invalid caps string to, e.g.,
`ceph auth add` will result in an error (see the example below).
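As a hedged illustration (the client name, pool, and caps below are made
up for this example and are not taken from the release notes), a keyring
entry with a well-formed caps string can still be added as before::

  # a well-formed caps string is accepted
  ceph auth add client.example mon 'allow r' osd 'allow rw pool=rbd'

With 12.2.12, passing a caps string that does not parse to the same
command returns an error instead of silently storing the broken caps.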
For the complete changelog, please refer to the release blog entry at
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.2.12.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 1436006594665279fe734b4c15d7e08c13ebd777
This is a reminder that Cephalocon Barcelona is coming up next month (May
19-20), and it's going to be great! We have two days of Ceph content over
four tracks, including:
- A Rook tutorial for deploying Ceph over SSD instances
- Several other Rook- and Kubernetes-related talks, including
self-service provisioning of object storage via Kubernetes claim-like APIs
- Two sessions on project Crimson, which is a reimplementation
of the Ceph OSD using Seastar, DPDK, and SPDK, targeting
high-performance hardware and storage devices
- Talks from Ceph power users with large-scale deployments from CERN,
China Mobile, OVH, MeerKAT (part of the Square Kilometre Array), UWisc,
and more
- How to build fast NVMe-based Ceph clusters on a reasonable budget
- Using Ceph to provide NFS, CIFS, and iSCSI services
- The latest status of the Ceph management dashboard
- Best practices for multi-cluster capabilities in RGW and RBD
- S3 API compatibility in RGW, now and over the long term
- and lots more!
You can see the full schedule here (minus a few keynote slots that haven't
been finalized yet):
There are going to be a lot of Ceph developers in Barcelona for the
conference, both for Cephalocon (Sunday and Monday) and for KubeCon
(Tuesday through Thursday).
For more information (including registration), see
For those interested in Kubernetes and the intersection of storage with
the leading container orchestration platform, KubeCon + CloudNativeCon
runs directly after Cephalocon for the rest of the week.
We're looking forward to seeing many of you there!