This bug fix release addresses a few critical issues with CRUSH. The most
important fixes a bug in feature bit enforcement that could prevent
pre-hammer clients from communicating with the cluster during an upgrade.
This only manifests in some cases (for example, when the 'rack' type is in
use in the CRUSH map, and possibly other cases), but for safety we
strongly recommend that all users use 0.94.1 instead of 0.94 when
upgrading.
There is also a fix in the new straw2 buckets when OSD weights are 0.
We recommend that all v0.94 users upgrade.
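To confirm which version your daemons are actually running, you can
query them directly, for example::

    ceph --version            # version of the local ceph binaries
    ceph tell mon.* version   # version each running monitor reports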
Notable changes
---------------
* crush: fix divide-by-0 in straw2 (#11357 Sage Weil)
* crush: fix has_v4_buckets (#11364 Sage Weil)
* osd: fix negative degraded objects during backfilling (#7737 Guang Yang)
For more detailed information, see the complete changelog at
http://docs.ceph.com/docs/master/_downloads/v0.94.1.txt
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://ceph.com/download/ceph-0.94.1.tar.gz
* For packages, see http://ceph.com/docs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy
Hi All,
This is a new release of ceph-deploy that includes a new feature for
Hammer and bugfixes. ceph-deploy can be installed from the ceph.com
hosted repos for Firefly, Giant, Hammer, or testing, and is also
available on PyPI.
ceph-deploy now defaults to installing the Hammer release. If you need
to install a different release, use the --release flag.
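For example, to install Firefly instead of the default (hostnames are
placeholders)::

    ceph-deploy install --release firefly node1 node2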
To go along with the Hammer release, ceph-deploy now includes support
for a drastically simplified deployment for RGW. See further details
at [1] and [2].
This release also fixes an issue where keyrings pushed to remote nodes
ended up with world-readable permissions.
The full changelog can be seen at [3].
Please update!
Cheers,
- Travis
[1] http://ceph.com/docs/master/start/quick-ceph-deploy/#add-an-rgw-instance
[2] http://ceph.com/ceph-deploy/docs/rgw.html
[3] http://ceph.com/ceph-deploy/docs/changelog.html#id2
http://ceph.com/rpm-hammer/
Or,
ceph-deploy install --stable=hammer HOST
sage
On Tue, 7 Apr 2015, O'Reilly, Dan wrote:
> Where are the RPM repos for HAMMER?
This major release is expected to form the basis of the next long-term
stable series. It is intended to supersede v0.80.x Firefly.
Highlights since Giant include:
* RADOS Performance: a range of improvements have been made in the
OSD and client-side librados code that improve the throughput on
flash backends and improve parallelism and scaling on fast machines.
* Simplified RGW deployment: the ceph-deploy tool now has a new
'ceph-deploy rgw create HOST' command that quickly deploys an
instance of the S3/Swift gateway using the embedded Civetweb server.
This is vastly simpler than the previous Apache-based deployment.
There are a few rough edges (e.g., around SSL support) but we
encourage users to try the new method:
http://ceph.com/docs/master/start/quick-ceph-deploy/#add-an-rgw-instance
* RGW object versioning: RGW now supports the S3 object versioning
API, which preserves old versions of objects instead of overwriting
them.
* RGW bucket sharding: RGW can now shard the bucket index for large
buckets across multiple objects, improving performance for very
large buckets.
* RBD object maps: RBD now has an object map function that tracks
which parts of the image are allocated, improving performance for
clones and for commands like export and delete.
* RBD mandatory locking: RBD has a new mandatory locking framework
(still disabled by default) that adds additional safeguards to
prevent multiple clients from using the same image at the same time.
* RBD copy-on-read: RBD now supports copy-on-read for image clones,
improving performance for some workloads.
* CephFS snapshot improvements: Many many bugs have been fixed with
CephFS snapshots. Although they are still disabled by default,
stability has improved significantly.
* CephFS Recovery tools: We have built some journal recovery and
diagnostic tools. Stability and performance of single-MDS systems is
vastly improved in Giant, and more improvements have been made now
in Hammer. Although we still recommend caution when storing
important data in CephFS, we do encourage testing for non-critical
workloads so that we can better gauge the feature, usability,
performance, and stability gaps.
* CRUSH improvements: We have added a new straw2 bucket algorithm
that reduces the amount of data migration required when changes are
made to the cluster.
* RADOS cache tiering: A series of changes have been made in the
cache tiering code that improve performance and reduce latency.
* Experimental RDMA support: There is now experimental support for RDMA
via the Accelio (libxio) library.
* New administrator commands: The 'ceph osd df' command shows
pertinent details on OSD disk utilization. The 'ceph pg ls ...'
command makes it much simpler to query PG states while diagnosing
cluster issues. (Examples of this and several other items above
follow this list.)
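Several of the items above are easiest to see by example. The sketches
below assume a prepared node named node1 and use option names as
documented for Hammer; verify both against your environment before
use::

    # simplified RGW deployment; civetweb listens on port 7480 by default
    ceph-deploy rgw create node1
    curl http://node1:7480

    # S3 object versioning, enabled here with the stock AWS CLI
    aws --endpoint-url http://node1:7480 s3api put-bucket-versioning \
        --bucket mybucket --versioning-configuration Status=Enabled

    # new administrator commands
    ceph osd df            # per-OSD utilization, weight, and variance
    ceph pg ls degraded    # list PGs in a given state

The bucket index sharding and RBD copy-on-read features are controlled
by ceph.conf options along these lines::

    [client.radosgw.gateway]
        rgw override bucket index max shards = 8

    [client]
        rbd clone copy on read = true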
Other highlights since Firefly include:
* CephFS: we have fixed a raft of bugs in CephFS and built some
basic journal recovery and diagnostic tools. Stability and
performance of single-MDS systems is vastly improved in Giant.
Although we do not yet recommend CephFS for production deployments,
we do encourage testing for non-critical workloads so that we can
better gauge the feature, usability, performance, and stability
gaps.
* Local Recovery Codes: the OSDs now support an erasure-coding scheme
that stores some additional data blocks to reduce the IO required to
recover from single OSD failures.
* Degraded vs misplaced: the Ceph health reports from 'ceph -s' and
related commands now make a distinction between data that is
degraded (there are fewer than the desired number of copies) and
data that is misplaced (stored in the wrong location in the
cluster). The distinction is important because the latter does not
compromise data safety.
* Tiering improvements: we have made several improvements to the
cache tiering implementation that improve performance. Most
notably, objects are not promoted into the cache tier by a single
read; they must be found to be sufficiently hot before that happens.
* Monitor performance: the monitors now perform writes to the local
data store asynchronously, improving overall responsiveness.
* Recovery tools: the ceph-objectstore-tool is greatly expanded to
allow manipulation of an individual OSD's data store for debugging
and repair purposes. This is most heavily used by our QA
infrastructure to exercise recovery code.
I would like to take this opportunity to call out the amazing growth
in contributors to Ceph beyond the core development team from Inktank.
Hammer includes major new features and improvements from Intel,
UnitedStack, Yahoo, UbuntuKylin, CohortFS, Mellanox, CERN, Deutsche
Telekom, Mirantis, and SanDisk.
Dedication
----------
This release is dedicated in memoriam to Sandon Van Ness, aka
Houkouonchi, who unexpectedly passed away a few weeks ago. Sandon was
responsible for maintaining the large and complex Sepia lab that
houses the Ceph project's build and test infrastructure. His efforts
have made an important impact on our ability to reliably test Ceph
with a relatively small group of people. He was a valued member of
the team and we will miss him. H is also for Houkouonchi.
Upgrading
---------
* If your existing cluster is running a version older than v0.80.x
Firefly, please first upgrade to the latest Firefly release before
moving on to Hammer. We have not tested upgrades directly from
Emperor, Dumpling, or older releases.
We *have* tested:
* Firefly to Hammer
* Firefly to Giant to Hammer
* Dumpling to Firefly to Hammer
* Please upgrade daemons in the following order:
#. Monitors
#. OSDs
#. MDSs and/or radosgw
Note that the relative ordering of OSDs and monitors should not matter, but
we primarily tested upgrading monitors first; a restart sketch appears at
the end of this section.
* The ceph-osd daemons will perform a disk-format upgrade to improve the
PG metadata layout and to repair a minor bug in the on-disk format.
It may take a minute or two for this to complete, depending on how
many objects are stored on the node; do not be alarmed if they are
not marked "up" by the cluster immediately after starting.
* If upgrading from v0.93, set::

    osd enable degraded writes = false

on all OSDs prior to upgrading. The degraded writes feature has
been reverted due to issue #11155.
* The LTTNG tracing in librbd and librados is disabled in the release packages
until we find a way to avoid violating distro security policies when linking
libust.
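As a sketch of the steps above (sysvinit-style service commands and
placeholder settings; adapt to your init system)::

    # 1: restart monitors first, then OSDs, then MDS/radosgw, node by node
    service ceph restart mon
    service ceph restart osd
    service ceph restart mds

    # watch OSDs come back up while the on-disk format upgrade completes
    ceph osd stat
    ceph -w

If coming from v0.93, the setting mentioned above would live in the
[osd] section of ceph.conf::

    [osd]
        osd enable degraded writes = false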
For more information
--------------------
http://ceph.com/docs/master/release-notes/#v0-94-hammer
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://ceph.com/download/ceph-0.94.tar.gz
* For packages, see http://ceph.com/docs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy
This is a bugfix release for firefly. It fixes a performance regression
in librbd, an important CRUSH misbehavior (see below), and several RGW
bugs. We have also backported support for flock/fcntl locks to ceph-fuse
and libcephfs.
We recommend that all Firefly users upgrade.
For more detailed information, see
http://docs.ceph.com/docs/master/_downloads/v0.80.9.txt
Adjusting CRUSH maps
--------------------
* This point release fixes several issues with CRUSH that trigger
excessive data migration when adjusting OSD weights. These are most
obvious when a very small weight change (e.g., a change from 0 to
0.01) triggers a large amount of movement, but the same set of bugs
can also lead to excessive (though less noticeable) movement in
other cases.
However, because the bug may already have affected your cluster,
fixing it may trigger movement *back* to the more correct location.
For this reason, you must manually opt-in to the fixed behavior.
To set the new tunable and correct the behavior::
ceph osd crush set-tunable straw_calc_version 1
Note that this change will have no immediate effect. However, from
this point forward, any 'straw' bucket in your CRUSH map that is
adjusted will get non-buggy internal weights, and that transition
may trigger some rebalancing.
You can estimate how much rebalancing will eventually be necessary
on your cluster with::
ceph osd getcrushmap -o /tmp/cm
crushtool -i /tmp/cm --num-rep 3 --test --show-mappings > /tmp/a 2>&1
crushtool -i /tmp/cm --set-straw-calc-version 1 -o /tmp/cm2
crushtool -i /tmp/cm2 --reweight -o /tmp/cm2
crushtool -i /tmp/cm2 --num-rep 3 --test --show-mappings > /tmp/b 2>&1
wc -l /tmp/a # num total mappings
diff -u /tmp/a /tmp/b | grep -c ^+ # num changed mappings
Divide the number of changed mappings by the total number of mappings
to estimate the fraction that will move (a one-liner for this appears
below). We've found that most clusters are under 10%.
You can force all of this rebalancing to happen at once with::
ceph osd crush reweight-all
Otherwise, it will happen at some unknown point in the future when
CRUSH weights are next adjusted.
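To turn the two counts above into a percentage, a small shell sketch
(same paths as in the commands above)::

    total=$(wc -l < /tmp/a)
    changed=$(diff -u /tmp/a /tmp/b | grep -c ^+)
    awk -v c="$changed" -v t="$total" 'BEGIN { printf "%.1f%% of mappings change\n", 100*c/t }'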
Notable Changes
---------------
* ceph-fuse: flock, fcntl lock support (Yan, Zheng, Greg Farnum)
* crush: fix straw bucket weight calculation, add straw_calc_version
tunable (#10095 Sage Weil)
* crush: fix tree bucket (Rongzu Zhu)
* crush: fix underflow of tree weights (Loic Dachary, Sage Weil)
* crushtool: add --reweight (Sage Weil)
* librbd: complete pending operations before losing image (#10299 Jason
Dillaman)
* librbd: fix read caching performance regression (#9854 Jason Dillaman)
* librbd: gracefully handle deleted/renamed pools (#10270 Jason Dillaman)
* mon: fix dump of chooseleaf_vary_r tunable (Sage Weil)
* osd: fix PG ref leak in snaptrimmer on peering (#10421 Kefu Chai)
* osd: handle no-op write with snapshot (#10262 Sage Weil)
* radosgw-admin: create subuser when creating user (#10103 Yehuda Sadeh)
* rgw: change multipart upload id magic (#10271 Georgios Dimitrakakis,
Yehuda Sadeh)
* rgw: don't overwrite bucket/object owner when setting ACLs (#10978
Yehuda Sadeh)
* rgw: enable IPv6 for embedded civetweb (#10965 Yehuda Sadeh)
* rgw: fix partial swift GET (#10553 Yehuda Sadeh)
* rgw: fix quota disable (#9907 Dong Lei)
* rgw: index swift keys appropriately (#10471 Hemant Burman, Yehuda Sadeh)
* rgw: make setattrs update bucket index (#5595 Yehuda Sadeh)
* rgw: pass civetweb configurables (#10907 Yehuda Sadeh)
* rgw: remove swift user manifest (DLO) hash calculation (#9973 Yehuda
Sadeh)
* rgw: return correct len for 0-len objects (#9877 Yehuda Sadeh)
* rgw: S3 object copy content-type fix (#9478 Yehuda Sadeh)
* rgw: send ETag on S3 object copy (#9479 Yehuda Sadeh)
* rgw: send HTTP status reason explicitly in fastcgi (Yehuda Sadeh)
* rgw: set ulimit -n from sysvinit (el6) init script (#9587 Sage Weil)
* rgw: update swift subuser permission masks when authenticating (#9918
Yehuda Sadeh)
* rgw: URL decode query params correctly (#10271 Georgios Dimitrakakis,
Yehuda Sadeh)
* rgw: use attrs when reading object attrs (#10307 Yehuda Sadeh)
* rgw: use \r\n for http headers (#9254 Benedikt Fraunhofer, Yehuda Sadeh)
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://ceph.com/download/ceph-0.80.9.tar.gz
* For packages, see http://ceph.com/docs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy
Hi All,
This is a new release of ceph-deploy that changes a couple of behaviors.
On RPM-based distros, ceph-deploy will now automatically enable
check_obsoletes in the Yum priorities plugin. This resolves an issue
many community members hit where package dependency resolution was
breaking due to conflicts between upstream packaging (hosted on
ceph.com) and downstream (i.e., Fedora or EPEL).
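ceph-deploy makes this change in the Yum priorities plugin
configuration; the equivalent manual edit (stock path on RHEL/CentOS)
is::

    # /etc/yum/pluginconf.d/priorities.conf
    [main]
    enabled = 1
    check_obsoletes = 1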
The other important change is that when using ceph-deploy to install
Ceph packages on a RHEL machine, the --release flag *must* be used if
you want to install upstream packages. In other words, if you want to
install Giant on a RHEL machine, you would need to use "ceph-deploy
install --release giant". If the --release flag is not used,
ceph-deploy will expect to use downstream packages on RHEL. This is
documented at [1].
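For example, to get upstream Giant packages on a RHEL machine
(hostname is a placeholder)::

    ceph-deploy install --release giant node1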
The full changelog can be seen at [2].
Please update!
- Travis
[1] http://ceph.com/ceph-deploy/docs/install.html#distribution-notes
[2] http://ceph.com/ceph-deploy/docs/changelog.html#id1
This is the first (and possibly final) point release for Giant. Our focus
on stability fixes will be directed towards Hammer and Firefly.
We recommend that all v0.87 Giant users upgrade to this release.
Upgrading
---------
* Due to a change in Linux kernel version 3.18 and the limits of the
FUSE interface, ceph-fuse needs to be mounted as root on at least some
systems. See issues #9997, #10277, and #10542 for details.
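For example, mounting as root with a placeholder monitor address and
mountpoint::

    sudo ceph-fuse -m mon1:6789 /mnt/cephfs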
Notable Changes
---------------
* build: disable stack-execute bit on assembler objects (#10114 Dan Mick)
* build: support boost 1.57.0 (#10688 Ken Dreyer)
* ceph-disk: fix dmcrypt file permissions (#9785 Loic Dachary)
* ceph-disk: run partprobe after zap, behave with partx or partprobe
(#9665 #9721 Loic Dachary)
* cephfs-journal-tool: fix import for aged journals (#9977 John Spray)
* cephfs-journal-tool: fix journal import (#10025 John Spray)
* ceph-fuse: use remount to trim kernel dcache (#10277 Yan, Zheng)
* common: add cctid meta variable (#6228 Adam Crume)
* common: fix dump of shard for ghobject_t (#10063 Loic Dachary)
* crush: fix bucket weight underflow (#9998 Pawel Sadowski)
* erasure-code: enforce chunk size alignment (#10211 Loic Dachary)
* erasure-code: regression test suite (#9420 Loic Dachary)
* erasure-code: relax cauchy w restrictions (#10325 Loic Dachary)
* libcephfs,ceph-fuse: allow xattr caps on inject_release_failure (#9800
John Spray)
* libcephfs,ceph-fuse: fix cap flush tid comparison (#9869 Greg Farnum)
* libcephfs,ceph-fuse: new flag to indicate sorted dcache (#9178 Yan,
Zheng)
* libcephfs,ceph-fuse: prune cache before reconnecting to MDS (Yan, Zheng)
* librados: limit number of in-flight read requests (#9854 Jason Dillaman)
* libradospy: fix thread shutdown (#8797 Dan Mick)
* libradosstriper: fix locking issue in truncate (#10129 Sebastien Ponce)
* librbd: complete pending ops before closing image (#10299 Jason Dillaman)
* librbd: fix error path on image open failure (#10030 Jason Dillaman)
* librbd: gracefully handle deleted/renamed pools (#10270 Jason Dillaman)
* librbd: handle errors when creating ioctx while listing children (#10123
Jason Dillaman)
* mds: fix compat version in MClientSession (#9945 John Spray)
* mds: fix journaler write error handling (#10011 John Spray)
* mds: fix locking for file size recovery (#10229 Yan, Zheng)
* mds: handle heartbeat_reset during shutdown (#10382 John Spray)
* mds: store backtrace for straydir (Yan, Zheng)
* mon: allow tiers for FS pools (#10135 John Spray)
* mon: fix caching of last_epoch_clean, osdmap trimming (#9987 Sage Weil)
* mon: fix 'fs ls' on peons (#10288 John Spray)
* mon: fix MDS health status from peons (#10151 John Spray)
* mon: fix paxos off-by-one (#9301 Sage Weil)
* msgr: simple: do not block on takeover while holding global lock (#9921
Greg Farnum)
* osd: deep scrub must not abort if hinfo is missing (#10018 Loic Dachary)
* osd: fix misdirected op detection (#9835 Sage Weil)
* osd: fix past_interval display for acting (#9752 Loic Dachary)
* osd: fix PG peering backoff when behind on osdmaps (#10431 Sage Weil)
* osd: handle no-op write with snapshot case (#10262 Sage Weil)
* osd: use fast-dispatch (Sage Weil, Greg Farnum)
* rados: fix write to /dev/null (Loic Dachary)
* radosgw-admin: create subuser when needed (#10103 Yehuda Sadeh)
* rbd: avoid invalidating aio_write buffer during image import (#10590
Jason Dillaman)
* rbd: fix export with images > 2GB (Vicente Cheng)
* rgw: change multipart upload id magic (#10271 Georgios Dimitrakakis,
Yehuda Sadeh)
* rgw: check keystone auth for S3 POST (#10062 Abhishek Lekshmanan)
* rgw: check timestamp for S3 keystone auth (#10062 Abhishek Lekshmanan)
* rgw: fix partial GET with swift (#10553 Yehuda Sadeh)
* rgw: fix quota disable (#9907 Dong Lei)
* rgw: fix rare corruption of object metadata on put (#9576 Yehuda Sadeh)
* rgw: fix S3 object copy content-type (#9478 Yehuda Sadeh)
* rgw: headers end with \r\n (#9254 Benedikt Fraunhofer, Yehuda Sadeh)
* rgw: remove swift user manifest DLO hash calculation (#9973 Yehuda
Sadeh)
* rgw: return correct len when len is 0 (#9877 Yehuda Sadeh)
* rgw: return X-Timestamp field (#8911 Yehuda Sadeh)
* rgw: run radosgw as apache with systemd (#10125)
* rgw: send ETag on S3 object copy (#9479 Yehuda Sadeh)
* rgw: send HTTP status reason explicitly in fastcgi (Yehuda Sadeh)
* rgw: set length for keystone token validation (#7796 Mark Kirkwood,
Yehuda Sadeh)
* rgw: set ulimit -n on sysvinit before starting daemon (#9587 Sage Weil)
* rgw: update bucket index on set_attrs (#5595 Yehuda Sadeh)
* rgw: update swift subuser permission masks when authenticating (#9918
Yehuda Sadeh)
* rgw: URL decode HTTP query params correctly (#10271 Georgios
Dimitrakakis, Yehuda Sadeh)
* rgw: use cached attrs while reading object attrs (#10307 Yehuda Sadeh)
* rgw: use strict_strtoll for content length (#10701 Axel Dunkel, Yehuda
Sadeh)
For more detailed information, see
* http://docs.ceph.com/docs/master/_downloads/v0.87.1.txt
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://ceph.com/download/ceph-0.87.1.tar.gz
* For packages, see http://ceph.com/docs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy
This is the second-to-last chunk of new stuff before Hammer. Big items
include additional checksums on OSD objects, proxied reads in the
cache tier, image locking in RBD, optimized OSD Transaction and
replication messages, and a big pile of RGW and MDS bug fixes.
Upgrading
---------
* The experimental 'keyvaluestore-dev' OSD backend has been renamed
'keyvaluestore' (for simplicity) and marked as experimental. To
enable this untested feature and acknowledge that you understand
that it is untested and may destroy data, you need to add the
following to your ceph.conf::

    enable experimental unrecoverable data corrupting features = keyvaluestore
* The following librados C API function calls take a 'flags' argument
whose value is now correctly interpreted::

    rados_write_op_operate()
    rados_aio_write_op_operate()
    rados_read_op_operate()
    rados_aio_read_op_operate()
The flags were not correctly being translated from the librados
constants to the internal values. Now they are. Any code that is
passing flags to these methods should be audited to ensure that they are
using the correct LIBRADOS_OP_FLAG_* constants.
* The 'rados' CLI 'copy' and 'cppool' commands now use the copy-from
operation, which means the latest CLI cannot run these commands against
pre-firefly OSDs (see the example after this list).
* The librados watch/notify API now includes a watch_flush() operation to
flush the async queue of notify operations. This should be called by
any watch/notify user prior to rados_shutdown().
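For example, copying a pool with the new copy-from-based cppool (pool
names and pg count are placeholders; the destination pool must already
exist)::

    ceph osd pool create rbd-copy 64
    rados cppool rbd rbd-copy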
Notable Changes
---------------
* add experimental features option (Sage Weil)
* build: fix 'make check' races (#10384 Loic Dachary)
* build: fix pkg names when libkeyutils is missing (Pankag Garg, Ken
Dreyer)
* ceph: make 'ceph -s' show PG state counts in sorted order (Sage Weil)
* ceph: make 'ceph tell mon.* version' work (Mykola Golub)
* ceph-monstore-tool: fix/improve CLI (Joao Eduardo Luis)
* ceph: show primary-affinity in 'ceph osd tree' (Mykola Golub)
* common: add TableFormatter (Andreas Peters)
* common: check syncfs() return code (Jianpeng Ma)
* doc: do not suggest dangerous XFS nobarrier option (Dan van der Ster)
* doc: misc updates (Nilamdyuti Goswami, John Wilkins)
* install-deps.sh: do not require sudo when root (Loic Dachary)
* libcephfs: fix dirfrag trimming (#10387 Yan, Zheng)
* libcephfs: fix mount timeout (#10041 Yan, Zheng)
* libcephfs: fix test (#10415 Yan, Zheng)
* libcephfs: fix use-after-free on umount (#10412 Yan, Zheng)
* libcephfs: include ceph and git version in client metadata (Sage Weil)
* librados: add watch_flush() operation (Sage Weil, Haomai Wang)
* librados: avoid memcpy on getxattr, read (Jianpeng Ma)
* librados: create ioctx by pool id (Jason Dillaman)
* librados: do notify completion in fast-dispatch (Sage Weil)
* librados: remove shadowed variable (Kefu Chai)
* librados: translate op flags from C APIs (Matthew Richards)
* librbd: differentiate between R/O vs R/W features (Jason Dillaman)
* librbd: exclusive image locking (Jason Dillaman)
* librbd: fix write vs import race (#10590 Jason Dillaman)
* librbd: gracefully handle deleted/renamed pools (#10270 Jason Dillaman)
* mds: asok command for fetching subtree map (John Spray)
* mds: constify MDSCacheObjects (John Spray)
* misc: various valgrind fixes and cleanups (Danny Al-Gaaf)
* mon: fix 'mds fail' for standby MDSs (John Spray)
* mon: fix stashed monmap encoding (#5203 Xie Rui)
* mon: implement 'fs reset' command (John Spray)
* mon: respect down flag when promoting standbys (John Spray)
* mount.ceph: fix spurious error message (#10351 Yan, Zheng)
* msgr: async: many fixes, unit tests (Haomai Wang)
* msgr: simple: retry binding to port on failure (#10029 Wido den
Hollander)
* osd: add fadvise flags to ObjectStore API (Jianpeng Ma)
* osd: add get_latest_osdmap asok command (#9483 #9484 Mykola Golub)
* osd: EIO on whole-object reads when checksum is wrong (Sage Weil)
* osd: filejournal: don't cache journal when not using direct IO (Jianpeng
Ma)
* osd: fix ioprio option (Mykola Golub)
* osd: fix scrub delay bug (#10693 Samuel Just)
* osd: fix watch reconnect race (#10441 Sage Weil)
* osd: handle no-op write with snapshot (#10262 Sage Weil)
* osd: journal: fix journal zeroing when direct IO is enabled (Xie Rui)
* osd: keyvaluestore: cleanup dead code (Ning Yao)
* osd, mds: 'ops' as shorthand for 'dump_ops_in_flight' on asok (Sage
Weil)
* osd: memstore: fix size limit (Xiaoxi Chen)
* osd: misc scrub fixes (#10017 Loic Dachary)
* osd: new optimized encoding for ObjectStore::Transaction (Dong Yuan)
* osd: optimize filter_snapc (Ning Yao)
* osd: optimize WBThrottle map with unordered_map (Ning Yao)
* osd: proxy reads during cache promote (Zhiqiang Wang)
* osd: proxy read support (Zhiqiang Wang)
* osd: remove legacy classic scrub code (Sage Weil)
* osd: remove unused fields in MOSDSubOp (Xiaoxi Chen)
* osd: replace MOSDSubOp messages with simpler, optimized MOSDRepOp
(Xiaoxi Chen)
* osd: store whole-object checksums on scrub, write_full (Sage Weil)
* osd: verify kernel is new enough before using XFS extsize ioctl, enable
by default (#9956 Sage Weil)
* rados: use copy-from operation for copy, cppool (Sage Weil)
* rgw: change multipart upload id magic (#10271 Yehuda Sadeh)
* rgw: decode http query params correctly (#10271 Yehuda Sadeh)
* rgw: fix content length check (#10701 Axel Dunkel, Yehuda Sadeh)
* rgw: fix partial GET in swift (#10553 Yehuda Sadeh)
* rgw: fix shutdown (#10472 Yehuda Sadeh)
* rgw: include XML ns on get ACL request (#10106 Yehuda Sadeh)
* rgw: misc fixes (#10307 Yehuda Sadeh)
* rgw: only track cleanup for objects we write (#10311 Yehuda Sadeh)
* rgw: tweak error codes (#10329 #10334 Yehuda Sadeh)
* rgw: use gc for multipart abort (#10445 Aaron Bassett, Yehuda Sadeh)
* sysvinit: fix race in 'stop' (#10389 Loic Dachary)
* test: fix bufferlist tests (Jianpeng Ma)
* tests: improve docker-based tests (Loic Dachary)
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://ceph.com/download/ceph-0.92.tar.gz
* For packages, see http://ceph.com/docs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy