This is the second (and possibly final) point release for Giant.
We recommend all v0.87.x Giant users upgrade to this release.
Notable Changes
---------------
* ceph-objectstore-tool: only output unsupported features when
incompatible (#11176 David Zafman)
* common: do not implicitly unlock rwlock on destruction (Federico
Simoncelli)
* common: make wait timeout on empty queue configurable (#10818 Samuel
Just)
* crush: pick a ruleset id that matches the rule id (Xiaoxi Chen)
* crush: set_choose_tries = 100 for new erasure code rulesets (#10353 Loic
Dachary)
* librados: check initialized atomic safely (#9617 Josh Durgin)
* librados: fix failed tick_event assert (#11183 Zhiqiang Wang)
* librados: fix looping on skipped maps (#9986 Ding Dinghua)
* librados: fix op submit with timeout (#10340 Samuel Just)
* librados: pybind: fix memory leak (#10723 Billy Olsen)
* librados: pybind: keep reference to callbacks (#10775 Josh Durgin)
* librados: translate operation flags from C APIs (Matthew Richards)
* libradosstriper: fix write_full on ENOENT (#10758 Sebastien Ponce)
* libradosstriper: use strtoll instead of strtol (Dongmao Zhang)
* mds: fix assertion caused by system time moving backwards (#11053 Yan,
Zheng)
* mon: allow injection of random delays on writes (Joao Eduardo Luis)
* mon: do not trust small osd epoch cache values (#10787 Sage Weil)
* mon: fail non-blocking flush if object is being scrubbed (#8011 Samuel
Just)
* mon: fix division by zero in stats dump (Joao Eduardo Luis)
* mon: fix get_rule_avail when no osds (#10257 Joao Eduardo Luis)
* mon: fix timeout rounds period (#10546 Joao Eduardo Luis)
* mon: ignore osd failures before up_from (#10762 Dan van der Ster, Sage
Weil)
* mon: paxos: reset accept timeout before writing to store (#10220 Joao
Eduardo Luis)
* mon: return if fs exists on 'fs new' (Joao Eduardo Luis)
* mon: use EntityName when expanding profiles (#10844 Joao Eduardo Luis)
* mon: verify cross-service proposal preconditions (#10643 Joao Eduardo
Luis)
* mon: wait for osdmon to be writeable when requesting proposal (#9794
Joao Eduardo Luis)
* mount.ceph: avoid spurious error message about /etc/mtab (#10351 Yan,
Zheng)
* msg/simple: allow RESETSESSION when we forget an endpoint (#10080 Greg
Farnum)
* msg/simple: discard delay queue before incoming queue (#9910 Sage Weil)
* osd: clear_primary_state when leaving Primary (#10059 Samuel Just)
* osd: do not ignore deleted pgs on startup (#10617 Sage Weil)
* osd: fix FileJournal wrap to get header out first (#10883 David Zafman)
* osd: fix PG leak in SnapTrimWQ (#10421 Kefu Chai)
* osd: fix journalq population in do_read_entry (#6003 Samuel Just)
* osd: fix operator== for op_queue_age_hit and fs_perf_stat (#10259 Samuel
Just)
* osd: fix rare assert after split (#10430 David Zafman)
* osd: get pgid ancestor from last_map when building past intervals
(#10430 David Zafman)
* osd: include rollback_info_trimmed_to in {read,write}_log (#10157 Samuel
Just)
* osd: lock header_lock in DBObjectMap::sync (#9891 Samuel Just)
* osd: requeue blocked op before the flush it was blocked on (#10512 Sage
Weil)
* osd: tolerate missing object between list and attr get on backfill
(#10150 Samuel Just)
* osd: use correct atime for eviction decision (Xinze Chi)
* rgw: flush XML header on get ACL request (#10106 Yehuda Sadeh)
* rgw: index swift keys appropriately (#10471 Hemant Burman, Yehuda Sadeh)
* rgw: send cancel for bucket index pending ops (#10770 Baijiaruo, Yehuda
Sadeh)
* rgw: swift: support X_Remove_Container-Meta-{key} (#10475 Dmytro
Iurchenko)
For more detailed information, see
http://ceph.com/docs/master/_downloads/v0.87.2.txt
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://ceph.com/download/ceph-0.87.2.tar.gz
* For packages, see http://ceph.com/docs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy
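To build this exact release from source, one option is to check out the
release tag after cloning (a sketch, assuming the repository's usual
v-prefixed tag naming):

    git clone git://github.com/ceph/ceph.git
    cd ceph
    git checkout v0.87.2    # tag for this point release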
This bug fix release addresses a few critical issues with CRUSH. The most
important addresses a bug in feature bit enforcement that may prevent
pre-hammer clients from communicating with the cluster during an upgrade.
This only manifests in some cases (for example, when the 'rack' type is
in use in the CRUSH map; there may be other cases), but for safety we
strongly recommend that all users use 0.94.1 instead of 0.94 when
upgrading.
There is also a fix in the new straw2 buckets when OSD weights are 0.
We recommend that all v0.94 users upgrade.
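After upgrading, one quick sanity check is to ask the daemons what
version they are running (a sketch; the osd.* wildcard assumes all OSDs
are up and reachable):

    ceph --version            # locally installed package version
    ceph tell osd.* version   # version reported by each running OSD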
Notable Changes
---------------
* crush: fix divide-by-0 in straw2 (#11357 Sage Weil)
* crush: fix has_v4_buckets (#11364 Sage Weil)
* osd: fix negative degraded objects during backfilling (#7737 Guang Yang)
For more detailed information, see the complete changelog at
http://docs.ceph.com/docs/master/_downloads/v0.94.1.txt
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://ceph.com/download/ceph-0.94.1.tar.gz
* For packages, see http://ceph.com/docs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy
Hi All,
This is a new release of ceph-deploy that includes a new feature for
Hammer and bugfixes. ceph-deploy can be installed from the ceph.com
hosted repos for Firefly, Giant, Hammer, or testing, and is also
available on PyPI.
ceph-deploy now defaults to installing the Hammer release. If you need
to install a different release, use the --release flag.
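For example, to install Giant on a node instead of the new Hammer
default (HOST is a placeholder hostname):

    ceph-deploy install --release giant HOST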
To go along with the Hammer release, ceph-deploy now includes support
for a drastically simplified deployment for RGW. See further details
at [1] and [2].
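In its simplest form, the new RGW workflow reduces to a single command
(gwnode1 is a placeholder hostname):

    ceph-deploy rgw create gwnode1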
This release also fixes an issue where keyrings pushed to remote nodes
ended up with world-readable permissions.
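If keyrings were pushed with an earlier ceph-deploy release, it may be
worth checking and tightening their permissions on the remote nodes; a
minimal sketch, assuming the default admin keyring path:

    ls -l /etc/ceph/ceph.client.admin.keyring      # check current mode
    chmod 600 /etc/ceph/ceph.client.admin.keyring  # owner read/write only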
The full changelog can be seen at [3].
Please update!
Cheers,
- Travis
[1] http://ceph.com/docs/master/start/quick-ceph-deploy/#add-an-rgw-instance
[2] http://ceph.com/ceph-deploy/docs/rgw.html
[3] http://ceph.com/ceph-deploy/docs/changelog.html#id2
http://ceph.com/rpm-hammer/
Or,
ceph-deploy install --stable=hammer HOST
sage
On Tue, 7 Apr 2015, O'Reilly, Dan wrote:
> Where are the RPM repos for HAMMER?
This major release is expected to form the basis of the next long-term
stable series. It is intended to supersede v0.80.x Firefly.
Highlights since Giant include:
* RADOS Performance: a range of improvements have been made in the
OSD and client-side librados code that improve the throughput on
flash backends and improve parallelism and scaling on fast machines.
* Simplified RGW deployment: the ceph-deploy tool now has a new
'ceph-deploy rgw create HOST' command that quickly deploys an
instance of the S3/Swift gateway using the embedded Civetweb server.
This is vastly simpler than the previous Apache-based deployment.
There are a few rough edges (e.g., around SSL support) but we
encourage users to try the new method:
http://ceph.com/docs/master/start/quick-ceph-deploy/#add-an-rgw-instance
* RGW object versioning: RGW now supports the S3 object versioning
API, which preserves old version of objects instead of overwriting
them.
* RGW bucket sharding: RGW can now shard the bucket index for large
buckets across multiple objects, improving performance for very large
buckets.
* RBD object maps: RBD now has an object map function that tracks
which parts of the image are allocated, improving performance for
clones and for commands like export and delete.
* RBD mandatory locking: RBD has a new mandatory locking framework
(still disabled by default) that adds additional safeguards to
prevent multiple clients from using the same image at the same time.
* RBD copy-on-read: RBD now supports copy-on-read for image clones,
improving performance for some workloads.
* CephFS snapshot improvements: Many bugs have been fixed in
CephFS snapshots. Although they are still disabled by default,
stability has improved significantly.
* CephFS Recovery tools: We have built some journal recovery and
diagnostic tools. Stability and performance of single-MDS systems are
vastly improved in Giant, and more improvements have been made now
in Hammer. Although we still recommend caution when storing
important data in CephFS, we do encourage testing for non-critical
workloads so that we can better gauge the feature, usability,
performance, and stability gaps.
* CRUSH improvements: We have added a new straw2 bucket algorithm
that reduces the amount of data migration required when changes are
made to the cluster.
* RADOS cache tiering: A series of changes have been made in the
cache tiering code that improve performance and reduce latency.
* Experimental RDMA support: There is now experimental support for RDMA
via the Accelio (libxio) library.
* New administrator commands: The 'ceph osd df' command shows
pertinent details on OSD disk utilizations. The 'ceph pg ls ...'
command makes it much simpler to query PG states while diagnosing
cluster issues.
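For example (both commands are read-only and safe to run against a live
cluster; output omitted here):

    ceph osd df   # per-OSD utilization and weight details
    ceph pg ls    # list PGs and their current states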
Other highlights since Firefly include:
* CephFS: we have fixed a raft of bugs in CephFS and built some
basic journal recovery and diagnostic tools. Stability and
performance of single-MDS systems are vastly improved in Giant.
Although we do not yet recommend CephFS for production deployments,
we do encourage testing for non-critical workloads so that we can
better gauge the feature, usability, performance, and stability
gaps.
* Local Recovery Codes: the OSDs now support an erasure-coding scheme
that stores some additional data blocks to reduce the IO required to
recover from single OSD failures.
* Degraded vs misplaced: the Ceph health reports from 'ceph -s' and
related commands now make a distinction between data that is
degraded (there are fewer than the desired number of copies) and
data that is misplaced (stored in the wrong location in the
cluster). The distinction is important because the latter does not
compromise data safety.
* Tiering improvements: we have made several improvements to the
cache tiering implementation that improve performance. Most
notably, objects are not promoted into the cache tier by a single
read; they must be found to be sufficiently hot before that happens.
* Monitor performance: the monitors now perform writes to the local
data store asynchronously, improving overall responsiveness.
* Recovery tools: the ceph-objectstore-tool is greatly expanded to
allow manipulation of an individual OSD's data store for debugging
and repair purposes. This is most heavily used by our QA
infrastructure to exercise recovery code.
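A minimal sketch of the kind of offline inspection this enables,
assuming the OSD daemon is stopped and uses the default data and
journal paths:

    # run only while the OSD daemon is stopped
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
        --journal-path /var/lib/ceph/osd/ceph-0/journal \
        --op list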
I would like to take this opportunity to call out the amazing growth
in contributors to Ceph beyond the core development team from Inktank.
Hammer includes major new features and improvements from Intel,
UnitedStack, Yahoo, UbuntuKylin, CohortFS, Mellanox, CERN, Deutsche
Telekom, Mirantis, and SanDisk.
Dedication
----------
This release is dedicated in memoriam to Sandon Van Ness, aka
Houkouonchi, who unexpectedly passed away a few weeks ago. Sandon was
responsible for maintaining the large and complex Sepia lab that
houses the Ceph project's build and test infrastructure. His efforts
have made an important impact on our ability to reliably test Ceph
with a relatively small group of people. He was a valued member of
the team and we will miss him. H is also for Houkouonchi.
Upgrading
---------
* If your existing cluster is running a version older than v0.80.x
Firefly, please first upgrade to the latest Firefly release before
moving on to Hammer. We have not tested upgrades directly from
Emperor, Dumpling, or older releases.
We *have* tested:
* Firefly to Hammer
* Firefly to Giant to Hammer
* Dumpling to Firefly to Hammer
* Please upgrade daemons in the following order:
1. Monitors
2. OSDs
3. MDSs and/or radosgw
Note that the relative ordering of OSDs and monitors should not matter, but
we primarily tested upgrading monitors first.
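On a sysvinit-based node this might look like the following (a sketch
only; adjust for your init system, and note that radosgw is typically
managed by its own init script):

    service ceph restart mon   # monitor hosts first
    service ceph restart osd   # then OSD hosts
    service ceph restart mds   # MDS hosts last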
* The ceph-osd daemons will perform a disk-format upgrade to improve the
PG metadata layout and to repair a minor bug in the on-disk format.
It may take a minute or two for this to complete, depending on how
many objects are stored on the node; do not be alarmed if they are
not marked "up" by the cluster immediately after starting.
* If upgrading from v0.93, set
osd enable degraded writes = false
on all OSDs prior to upgrading. The degraded writes feature has
been reverted due to issue #11155.
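That is, add the option to the [osd] section of ceph.conf on each OSD
node and restart the OSDs before proceeding with the upgrade (a sketch;
apply it through whatever configuration mechanism you normally use):

    [osd]
        osd enable degraded writes = false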
* The LTTNG tracing in librbd and librados is disabled in the release packages
until we find a way to avoid violating distro security policies when linking
libust.
For more information
--------------------
http://ceph.com/docs/master/release-notes/#v0-94-hammer
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://ceph.com/download/ceph-0.94.tar.gz
* For packages, see http://ceph.com/docs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy