Hi everyone,
We are sorry to announce that, due to the recent coronavirus outbreak, we
are canceling Cephalocon, which was scheduled for March 3-5 in Seoul.
More details will follow about how best to handle cancellation of hotel
reservations and so forth. Registrations will of course be
refunded--expect an email with details in the next day or two.
We are still looking into whether it makes sense to reschedule the event
for later in the year.
Thank you to everyone who has helped to plan this event, submitted talks,
and agreed to sponsor. It makes us sad to cancel, but the safety of
our community is of the utmost importance, and it was looking increasingly
unlikely that we could make this event a success.
Stay tuned...
This is the seventh update to the Ceph Nautilus release series. This is
a hotfix release primarily fixing two security issues. We
recommend that all users upgrade to this release.
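As a minimal sketch of the upgrade itself (assuming a Debian/Ubuntu host
with the Ceph package repositories already configured; package and unit
names may differ on your distribution)::

    # Hypothetical example: pull in the updated packages
    sudo apt update
    sudo apt install --only-upgrade ceph ceph-common
    # Restart daemons in the usual order: monitors first, then OSDs
    sudo systemctl restart ceph-mon.target
    sudo systemctl restart ceph-osd.target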
Notable Changes
---------------
* CVE-2020-1699: Fixed a path traversal flaw in Ceph dashboard that
could allow for potential information disclosure (Ernesto Puerta)
* CVE-2020-1700: Fixed a flaw in RGW beast frontend that could lead to
denial of service from an unauthenticated client (Or Friedmann)
Blog Link: https://ceph.io/releases/v14-2-7-nautilus-released/
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.7.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8
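For example, to check out the exact release commit from the git repository
above::

    git clone git://github.com/ceph/ceph.git
    cd ceph
    git checkout 3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8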
--
Abhishek Lekshmanan
SUSE Software Solutions Germany GmbH
We're happy to announce the 13th bug fix release of the Luminous v12.2.x
long term stable release series. We recommend that all users upgrade to
this release. Many thanks to all the contributors, in particular Yuri &
Nathan, for getting this release out the door. This shall be the last
release of the Luminous series.
For detailed release notes, please check out the official blog entry
at https://ceph.io/releases/v12-2-13-luminous-released/
Notable Changes
---------------
* Ceph now packages python bindings for python3.6 instead of python3.4,
because EPEL7 recently switched from python3.4 to python3.6 as the
native python3. See the announcement[1] for more details on the
background of this change.
* We now have telemetry support via a ceph-mgr module. The telemetry
module is strictly opt-in, and is meant to collect generic cluster
information and push it to a central endpoint. By default, we push it
to a project endpoint at https://telemetry.ceph.com/report, but this is
customizable by setting the 'url' config option with::
ceph telemetry config-set url '<your url>'
You will have to opt in to sharing your information with::
ceph telemetry on
Before opting in, you can view exactly what information will be reported with::
ceph telemetry show
Should you opt-in, your information will be licensed under the
Community Data License Agreement - Sharing - Version 1.0, which you can
read at https://cdla.io/sharing-1-0/
The telemetry module reports information about CephFS file systems,
including:
- how many MDS daemons (in total and per file system)
- which features are (or have been) enabled
- how many data pools
- approximate file system age (year + month of creation)
- how much metadata is being cached per file system
As well as:
- whether IPv4 or IPv6 addresses are used for the monitors
- whether RADOS cache tiering is enabled (and which mode)
- whether pools are replicated or erasure coded, and
which erasure code profile plugin and parameters are in use
- how many RGW daemons, zones, and zonegroups are present; which RGW frontends are in use
- aggregate stats about the CRUSH map, like which algorithms are used, how
big buckets are, how many rules are defined, and what tunables are in use
* A health warning is now generated if the average OSD heartbeat ping
time exceeds a configurable threshold for any of the intervals
computed. The OSD computes 1 minute, 5 minute and 15 minute intervals
with average, minimum and maximum values. The new configuration option
`mon_warn_on_slow_ping_ratio` specifies a percentage of
`osd_heartbeat_grace` to determine the threshold; a value of zero
disables the warning. The new configuration option
`mon_warn_on_slow_ping_time`, specified in milliseconds, overrides the
computed value and causes a warning when OSD heartbeat pings take
longer than the specified amount. The new admin command `ceph daemon
mgr.# dump_osd_network [threshold]` lists all connections whose
average ping time, for any of the 3 intervals, exceeds the specified
threshold or the value determined by the config options. The new admin
command `ceph daemon osd.# dump_osd_network [threshold]` does the
same, but includes only heartbeats initiated by the specified OSD.
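As an illustrative sketch (the thresholds and daemon ids below are
placeholders, and the `ceph config set` form assumes the centralized
config is available alongside these options)::

    # Override the computed threshold: warn when an average heartbeat
    # ping exceeds 100 ms (example value)
    ceph config set global mon_warn_on_slow_ping_time 100

    # List all connections whose average ping, over any of the three
    # intervals, exceeds 150 ms, cluster-wide (run against the mgr's
    # admin socket; 'mgr.x' is a placeholder id)
    ceph daemon mgr.x dump_osd_network 150

    # The same, but only for heartbeats initiated by osd.0
    ceph daemon osd.0 dump_osd_network 150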
* The configuration value `osd_calc_pg_upmaps_max_stddev` used for upmap
balancing has been removed. Instead, use the mgr balancer config
`upmap_max_deviation`, which is now an integer number of PGs of
deviation from the target PGs per OSD. This can be set with a command
like `ceph config set mgr mgr/balancer/upmap_max_deviation 2`. The
default `upmap_max_deviation` is 1. There are situations where CRUSH
rules would never allow a pool to have completely balanced PGs; for
example, if CRUSH requires 1 replica on each of 3 racks, but there are
fewer OSDs in one of the racks. In those cases, the configuration value
can be increased.
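For instance, on a cluster where CRUSH constraints such as uneven racks
prevent perfect balance, the deviation target could be relaxed and the
balancer module re-checked (a sketch; `ceph balancer status` is the
balancer module's standard status command)::

    # Allow up to 2 PGs of deviation from the target PGs per OSD
    ceph config set mgr mgr/balancer/upmap_max_deviation 2
    # Confirm the balancer module's current state
    ceph balancer status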
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.2.13.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 584a20eb0237c657dc0567da126be145106aa47e
[1]: https://lists.fedoraproject.org/archives/list/epel-announce@lists.fedorapro…
--
Abhishek Lekshmanan
SUSE Software Solutions Germany GmbH
GF: Felix Imendörffer HRB 21284 (AG Nürnberg)