This is the 7th backport release in the Octopus series. This release fixes
a serious bug in RGW that has been shown to cause data loss when a read of
a large RGW object (i.e., one with at least one tail segment) takes longer
than half the time specified in the configuration option
`rgw_gc_obj_min_wait`. The bug causes the tail segments of the object being
read to be added to the RGW garbage collection queue, which in turn causes
them to be deleted after a period of time.
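As a quick sanity check before or after upgrading, the current value of the
option mentioned above can be inspected at runtime (the `client.rgw` section
name here is an assumption about a typical RGW deployment)::

    ceph config get client.rgw rgw_gc_obj_min_wait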
Changelog
---------
* rgw: during GC defer, prevent new GC enqueue
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-15.2.7.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 88e41c6c49beb18add4fdb6b4326ca466d931db8
This is the 6th backport release in the Octopus series. This release
fixes a security flaw affecting Messenger V2 for Octopus & Nautilus. We
recommend that all users update to this release.
Notable Changes
---------------
* CVE-2020-25660: Fix a regression in Messenger V2 replay attacks
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-15.2.6.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: cb8c61a60551b72614257d632a574d420064c17a
This is the 13th backport release in the Nautilus series. This release fixes a
regression introduced in v14.2.12 and includes a few ceph-volume and RGW
fixes. We recommend that users update to this release.
Notable Changes
---------------
* Fixed a regression that caused breakage in clusters that referred to ceph-mon
  hosts using DNS names instead of IP addresses in the `mon_host` parameter in
  `ceph.conf` (issue#47951)
* ceph-volume: the ``lvm batch`` subcommand received a major rewrite
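For illustration, a `mon_host` setting of the kind affected by the regression
above might look like the following in `ceph.conf` (the hostnames are
hypothetical)::

    [global]
    mon_host = mon1.example.com, mon2.example.com, mon3.example.com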
Changelog
---------
* ceph-volume: major batch refactor (pr#37522, Jan Fajerski)
* mgr/dashboard: Proper format iSCSI target portals (pr#37060, Volker Theile)
* rpm: move python-enum34 into rhel 7 conditional (pr#37747, Nathan Cutler)
* mon/MonMap: fix unconditional failure for init_with_hosts (pr#37816, Nathan Cutler, Patrick Donnelly)
* rgw: allow rgw-orphan-list to note when rados objects are in namespace (pr#37799, J. Eric Ivancich)
* rgw: fix setting of namespace in ordered and unordered bucket listing (pr#37798, J. Eric Ivancich)
--
Abhishek
This is the fifth backport release of the Ceph Octopus stable release
series. This release brings a range of fixes across all components. We
recommend that all Octopus users upgrade to this release.
Notable Changes
---------------
* CephFS: Automatic static subtree partitioning policies may now be configured
using the new distributed and random ephemeral pinning extended attributes on
directories. See the documentation for more information:
https://docs.ceph.com/docs/master/cephfs/multimds/
* Monitors now have a config option `mon_osd_warn_num_repaired`, 10 by default.
  If any OSD has repaired more than this many I/O errors in stored data, an
  `OSD_TOO_MANY_REPAIRS` health warning is generated.
* Now when the noscrub and/or nodeep-scrub flags are set globally or per pool,
  scheduled scrubs of the disabled type will be aborted. All user-initiated
  scrubs are NOT interrupted.
* Fix an issue with osdmaps not being trimmed in a healthy cluster (
issue#47297, pr#36981)
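As a sketch of the ephemeral pinning policies mentioned above, they are
enabled by setting extended attributes on directories of a mounted CephFS
file system (the mount point and directory names here are hypothetical)::

    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/shared
    setfattr -n ceph.dir.pin.random -v 0.01 /mnt/cephfs/scratch

See the linked multi-MDS documentation for the semantics of each attribute.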
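The new `mon_osd_warn_num_repaired` threshold can be adjusted like any other
config option; for example, to raise it to 20 (the value is illustrative)::

    ceph config set mon mon_osd_warn_num_repaired 20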
For the detailed changelog please refer to the blog entry at
https://ceph.io/releases/v15-2-5-octopus-released/
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-15.2.5.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 2c93eff00150f0cc5f106a559557a58d3d7b6f1f
--
Abhishek Lekshmanan
SUSE Software Solutions Germany GmbH
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)
We're happy to announce the availability of the eleventh release in the
Nautilus series. This release brings a number of bugfixes across all
major components of Ceph. We recommend that all Nautilus users upgrade
to this release.
Notable Changes
---------------
* RGW: The `radosgw-admin` sub-commands dealing with orphans --
`radosgw-admin orphans find`, `radosgw-admin orphans finish`,
`radosgw-admin orphans list-jobs` -- have been deprecated. They
have not been actively maintained and they store intermediate
results on the cluster, which could fill a nearly-full cluster.
They have been replaced by a tool, currently considered
experimental, `rgw-orphan-list`.
* Now when the noscrub and/or nodeep-scrub flags are set globally or per pool,
  scheduled scrubs of the disabled type will be aborted. All user-initiated
  scrubs are NOT interrupted.
* Fixed a ceph-osd crash in _committed_osd_maps when there is a failure to encode
  the first incremental map (issue#46443: https://github.com/ceph/ceph/pull/46443)
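As a sketch of the experimental replacement tool mentioned above, it is run
against the RGW data pool (the pool name below is the common default and may
differ on your cluster)::

    rgw-orphan-list default.rgw.buckets.data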
For the detailed changelog please refer to the blog entry at
https://ceph.io/releases/v14-2-11-nautilus-released/
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.11.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: f7fdb2f52131f54b891a2ec99d8205561242cdaf
This is to announce the retirement of the v13.2.x Mimic stable release
series; there will be no further backport releases in the Mimic series.
Any further patches to the mimic branch will have to be tested by the
developer submitting the patches and approved by the tech lead of the
respective component before merging, to keep the branch stable. The last
release of Mimic was v13.2.10, released in April 2020. This is in keeping
with the policy of two active stable releases and a 24-month support
cycle, which is documented at
https://docs.ceph.com/docs/master/releases/general/#lifetime-of-stable-rele…
Users are requested to upgrade to Nautilus or Octopus.
For the official blog post link please refer to
https://ceph.io/releases/mimic-is-retired/
We're happy to announce the tenth release in the Nautilus series. In
addition to fixing a security-related bug in RGW, this release brings a
number of bugfixes across all major components of Ceph. We recommend
that all Nautilus users upgrade to this release. For a detailed
changelog please refer to the ceph release blog at:
https://ceph.io/releases/v14-2-10-nautilus-released
Notable Changes
---------------
* CVE-2020-10753: rgw: sanitize newlines in s3 CORSConfiguration's ExposeHeader
(William Bowling, Adam Mohammed, Casey Bodley)
* RGW: Bucket notifications now support Kafka endpoints. This requires librdkafka
  version 0.9.2 or newer. Note that Ubuntu 16.04.6 LTS (Xenial Xerus) ships an
  older version of librdkafka and would require an update to the library.
* The pool parameter `target_size_ratio`, used by the pg autoscaler,
has changed meaning. It is now normalized across pools, rather than
specifying an absolute ratio. For details, see :ref:`pg-autoscaler`.
If you have set target size ratios on any pools, you may want to set
these pools to autoscale `warn` mode to avoid data movement during
the upgrade::
ceph osd pool set <pool-name> pg_autoscale_mode warn
* The behaviour of the `-o` argument to the rados tool has been reverted to
  its original behaviour of indicating an output file. This makes it more
  consistent with other tools. Specifying object size is now accomplished
  with the upper-case `-O`.
* The format of MDSs in `ceph fs dump` has changed.
* Ceph will issue a health warning if a RADOS pool's `size` is set to 1,
  that is, if the pool is configured with no redundancy. This can
be fixed by setting the pool size to the minimum recommended value
with::
ceph osd pool set <pool-name> size <num-replicas>
The warning can be silenced with::
ceph config set global mon_warn_on_pool_no_redundancy false
* RGW: bucket listing performance on sharded bucket indexes has been
  notably improved by heuristically reducing, often significantly, the
  number of entries requested from each bucket index shard.
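To illustrate the reverted `-o` behaviour described above (the pool, object,
and file names are hypothetical)::

    rados -p mypool get myobject -o /tmp/myobject.out
    rados -p mypool bench 10 write -O 8192

The first command writes the retrieved object to the named output file; the
second uses the upper-case `-O` to specify an object size for the benchmark.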
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.10.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: b340acf629a010a74d90da5782a2c5fe0b54ac20
We're happy to announce the third release in the Octopus stable series.
This release is mainly a workaround for a potential OSD corruption in
v15.2.2. We advise users to upgrade to v15.2.3 directly. Users running
v15.2.2 should execute the following::
ceph config set osd bluefs_preextend_wal_files false
Changelog
---------
* bluestore: common/options.cc: disable bluefs_preextend_wal_files
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-15.2.3.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: d289bbdec69ed7c1f516e0a093594580a76b78d0