First, this is a reminder that there is a Tech Talk tomorrow from Guy
Margalit about NooBaa, a multi-cloud object data services platform:
Jan 17 at 19:00 UTC
Why, you might ask?
There is a lot of interest among Ceph developers and vendors in expanding
our multi-cloud, inter-cluster capabilities, especially in the
object space. That would include replicating object data across clusters
and public clouds, intelligently placing buckets, managing migration of
buckets between sites, and allowing all of that (and more) to be driven by
policy instead of human operators.
We'd like to understand the level of interest among Ceph users in these
capabilities, which would expand the scope of Ceph to include management
of data as well as the persistence side of storage we have traditionally
focused on, and ultimately determine whether and how we can integrate
NooBaa and RGW to provide the full set of capabilities.
If you're interested in the future direction of object storage, and how
Ceph can play well in a multi-cloud, hybrid cloud world, I encourage you
to join tomorrow or catch the recording!
v13.2.4 is the fourth bugfix release of the Mimic v13.2.x long term stable
release series. This release includes two security fixes on top of v13.2.3.
We recommend all users upgrade to this version. If you've already
upgraded to v13.2.3, the same restrictions that applied to the
v13.2.2 -> v13.2.3 upgrade apply here as well.
* CVE-2018-16846: rgw: enforce bounds on max-keys/max-uploads/max-parts (`issue#35994 <http://tracker.ceph.com/issues/35994>`_)
* CVE-2018-14662: mon: limit caps allowed to access the config store
Notable Changes in v13.2.3
* The default memory utilization for the mons has been increased.
  RocksDB now uses 512 MB of RAM by default, which should
  be sufficient for small to medium-sized clusters; large clusters
  should tune this up. Also, the `mon_osd_cache_size` has been
  increased from 10 OSDMaps to 500, which will translate to an
  additional 500 MB to 1 GB of RAM for large clusters, and much less
  for small clusters.
* Ceph v13.2.2 includes an incorrect backport, which may cause the MDS to
  enter a 'damaged' state when upgrading a Ceph cluster from a previous
  version. The bug is fixed in v13.2.3. If you are already running v13.2.2,
  upgrading to v13.2.3 does not require special action.
* The bluestore_cache_* options are no longer needed. They are replaced
  by osd_memory_target, which defaults to 4 GB. BlueStore will expand
  and contract its cache to attempt to stay within this limit. Users
  upgrading should note this is a higher default than the previous
  bluestore_cache_size of 1 GB, so OSDs using BlueStore will use more
  memory by default (a configuration sketch follows this list).
  For more details, see the `BlueStore docs <http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/#a…>`_.
* This version contains an upgrade bug, http://tracker.ceph.com/issues/36686,
  due to which upgrading during recovery/backfill can cause OSDs to fail. This
  bug can be worked around either by restarting all the OSDs after the upgrade,
  or by upgrading when all PGs are in the "active+clean" state (a sketch of
  both workarounds follows this list). If you have already successfully
  upgraded to 13.2.2, this issue should not impact you. Going forward, we are
  working on a clean upgrade path for this feature.
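As a rough illustration of the two workarounds for the upgrade bug above,
here is a minimal sketch; the systemd unit name assumes a standard
package-based deployment and the restart should be repeated on each OSD host:

    # Before upgrading: confirm that every PG is active+clean
    ceph pg stat
    ceph -s

    # Alternatively, after upgrading: restart all OSD daemons on each host
    systemctl restart ceph-osd.target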
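Similarly, for the osd_memory_target change above, here is a minimal sketch
of capping BlueStore OSD memory on memory-constrained hosts; the 2 GB value
is only an example, not a recommendation, and should be sized for your
hardware:

    # ceph.conf, [osd] section: cap each BlueStore OSD at ~2 GB
    [osd]
    osd_memory_target = 2147483648

    # or via the centralized config store introduced in Mimic:
    ceph config set osd osd_memory_target 2147483648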
For more details please refer to the release blog at
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-13.2.4.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: b10be4d44915a4d78a8e06aa31919e74927b142e