Hi everyone,
Ceph Days are coming to New York City again this year, co-hosted by
Bloomberg Engineering and Clyso!
We're planning a full day of Ceph content, well-timed for learning about
the latest and greatest Squid release.
https://ceph.io/en/community/events/2024/ceph-days-nyc/
We're opening the CFP for presenters today -- it will close on March 26th,
so please get your proposals in quickly!
Registration is also open now and space is limited, so book now to reserve
your seat!
Looking forward to seeing you there!
-- dan
We're happy to announce the first backport release in the Reef series.
It is also the first release in the series to ship with Debian packages,
built for Debian Bookworm.
We recommend all users update to this release.
https://ceph.io/en/news/blog/2023/v18-2-1-reef-released/
Notable Changes
---------------
* RGW: S3 multipart uploads using Server-Side Encryption now replicate
correctly in multi-site. Previously, the replicas of such objects were
corrupted on decryption. A new tool,
``radosgw-admin bucket resync encrypted multipart``, can be used to
identify these original multipart uploads. The ``LastModified``
timestamp of any identified object is incremented by 1ns to cause peer
zones to replicate it again. For multi-site deployments that make any
use of Server-Side Encryption, we recommend running this command
against every bucket in every zone after all zones have upgraded.
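A usage sketch (the bucket name below is illustrative; in practice you
would iterate over the output of ``radosgw-admin bucket list`` in each
zone):

    # identify affected SSE multipart uploads and bump their
    # LastModified timestamps so peer zones replicate them again
    radosgw-admin bucket resync encrypted multipart --bucket=my-bucket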
* CEPHFS: The MDS now evicts clients that are not advancing their
request tids, since such clients cause a large buildup of session
metadata, resulting in the MDS going read-only when the RADOS operation
exceeds the size threshold. The ``mds_session_metadata_threshold``
config option controls the maximum size that the (encoded) session
metadata can grow to.
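As a sketch, the threshold can be adjusted with ``ceph config`` (the
value below is illustrative, not a recommendation):

    # cap how large the encoded session metadata may grow before the
    # offending client is evicted
    ceph config set mds mds_session_metadata_threshold 16777216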
* RGW: New tools have been added to radosgw-admin for identifying and
correcting issues with versioned bucket indexes. Historical bugs with the
versioned bucket index transaction workflow made it possible for the index
to accumulate extraneous "book-keeping" olh entries and plain placeholder
entries. In some specific scenarios where clients made concurrent requests
referencing the same object key, it was likely that a lot of extra index
entries would accumulate. When a significant number of these entries are
present in a single bucket index shard, they can cause high bucket listing
latencies and lifecycle processing failures. To check whether a versioned
bucket has unnecessary olh entries, users can now run ``radosgw-admin
bucket check olh``. If the ``--fix`` flag is used, the extra entries will
be safely removed. Distinct from the issue described thus far, it is
also possible that some versioned buckets are maintaining extra unlinked
objects that are not listable via the S3/Swift APIs. These extra objects
are typically a result of PUT requests that exited abnormally, in the middle
of a bucket index transaction - so the client would not have received a
successful response. Bugs in prior releases made these unlinked objects easy
to reproduce with any PUT request that was made on a bucket that was actively
resharding. Besides the extra space that these hidden, unlinked objects
consume, there can be another side effect in certain scenarios, caused by
the nature of the failure mode that produced them, where a client of a bucket
that was a victim of this bug may find the object associated with
the key to
be in an inconsistent state. To check whether a versioned bucket has unlinked
entries, users can now run ``radosgw-admin bucket check unlinked``. If the
``--fix`` flag is used, the unlinked objects will be safely removed. Finally,
a third issue made it possible for versioned bucket index stats to be
accounted inaccurately. The tooling for recalculating versioned bucket stats
also had a bug, and was not previously capable of fixing these inaccuracies.
This release resolves those issues and users can now expect that the existing
``radosgw-admin bucket check`` command will produce correct results. We
recommend that users with versioned buckets, especially those that existed
on prior releases, use these new tools to check whether their buckets are
affected and to clean them up accordingly.
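A usage sketch (the bucket name is illustrative):

    # report extraneous olh entries without modifying the index
    radosgw-admin bucket check olh --bucket=my-bucket
    # remove them once the report looks correct
    radosgw-admin bucket check olh --bucket=my-bucket --fix
    # likewise for unlinked objects left behind by interrupted PUTs
    radosgw-admin bucket check unlinked --bucket=my-bucket --fix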
* mgr/snap-schedule: For clusters with multiple CephFS file systems, all the
snap-schedule commands now expect the '--fs' argument.
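For example, assuming a file system named ``cephfs`` (the name, path,
and schedule below are illustrative):

    # with multiple file systems, --fs must now be given explicitly
    ceph fs snap-schedule add / 1h --fs cephfs
    ceph fs snap-schedule status / --fs cephfs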
* RADOS: A POOL_APP_NOT_ENABLED health warning is now reported if an
application is not enabled for a pool, irrespective of whether the pool
is in use. Always set an ``application`` label on a pool to avoid the
POOL_APP_NOT_ENABLED health warning for that pool. Users can
temporarily mute this warning with
``ceph health mute POOL_APP_NOT_ENABLED``.
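For example (the pool name and application below are illustrative):

    # tag the pool with the application that uses it
    ceph osd pool application enable mypool rbd
    # or temporarily mute the warning instead
    ceph health mute POOL_APP_NOT_ENABLED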
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-18.2.1.tar.gz
* Containers at https://quay.io/repository/ceph/ceph
* For packages, see https://docs.ceph.com/en/latest/install/get-packages/
* Release git sha1: 7fe91d5d5842e04be3b4f514d6dd990c54b29c76
We're very happy to announce the first stable release of the Reef series.
We express our gratitude to all members of the Ceph community who
contributed by proposing pull requests, testing this release,
providing feedback, and offering valuable suggestions.
Major Changes from Quincy:
- RADOS: RocksDB has been upgraded to version 7.9.2.
- RADOS: There have been significant improvements to RocksDB iteration
overhead and performance.
- RADOS: The perf dump and perf schema commands have been deprecated
in favor of the new counter dump and counter schema commands (see the
sketch after this list).
- RADOS: Cache tiering is now deprecated.
- RADOS: A new feature, the "read balancer", is now available, which
allows users to balance primary PGs per pool on their clusters.
- RGW: Bucket resharding is now supported for multi-site configurations.
- RGW: There have been significant improvements to the stability and
consistency of multi-site replication.
- RGW: Compression is now supported for objects uploaded with
Server-Side Encryption.
- Dashboard: There is a new Dashboard page with improved layout.
Active alerts and some important charts are now displayed inside
cards.
- RBD: Support for layered client-side encryption has been added.
- Telemetry: Users can now opt in to participate in a leaderboard in
the telemetry public dashboards.
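As a brief sketch of the counter commands mentioned above (the daemon
name is illustrative):

    # replacements for the deprecated 'perf dump' / 'perf schema'
    ceph tell osd.0 counter dump
    ceph tell osd.0 counter schema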
We encourage you to read the full release notes at
https://ceph.io/en/news/blog/2023/v18-2-0-reef-released/
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-18.2.0.tar.gz
* Containers at https://quay.io/repository/ceph/ceph
* For packages, see https://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 5dd24139a1eada541a3bc16b6941c5dde975e26d
Did you know? Every Ceph release is built and tested on resources
funded directly by the non-profit Ceph Foundation.
If you would like to support this and our other efforts, please
consider joining now https://ceph.io/en/foundation/.
Hi everyone!
Here's a quick community update.
Ceph Days
========
- [Ceph Days Seoul](https://ceph.io/en/community/events/2023/ceph-days-korea/) June 14
- [Ceph Days Vancouver](https://ceph.io/en/community/events/2023/ceph-days-vancouver/) June 15
We are still looking for speakers for Ceph Days Vancouver until May 17, so if you are attending the OpenInfra Summit, consider adding Ceph Days Vancouver to your schedule and taking advantage of this speaking opportunity!
Registrants will receive a significant discount to attend both conferences! But hurry, prices go up on May 12.
[Call for Proposals](https://survey.zohopublic.com/zs/TVCCCQ)
[Register](https://openinfrafoundation.formstack.com/forms/ceph_day_at_openi…
If you want to host your local Ceph Days, please contact me directly for assistance.
Cephalocon 2023
=============
After a couple of cancellations due to the challenges of the pandemic, the Ceph community finally came together in large numbers on April 16-18. The first day focused on the future development of Ceph at the Developer Summit. The following two days were packed with breakout sessions by the community on a wide range of topics around Ceph.
If you couldn't join us, the recordings are now available on our Ceph YouTube channel [Cephalocon 2023 playlist](https://www.youtube.com/playlist?list=PLrBUGiINAakPd9nuoorqeOuS9P….
Blog
===
- [Celebrating one exabyte of Ceph storage!](https://ceph.io/en/news/blog/2023/telemetry-celebrate-1-exabyte/)
- [Ceph Reef Freeze Part 1: RBD Performance](https://ceph.io/en/news/blog/2023/reef-freeze-rbd-performance/)
- [Ceph Reef Freeze Part 2: RGW Performance](https://ceph.io/en/news/blog/2023/reef-freeze-rgw-performance/)
- [Introducing a new landing page](https://ceph.io/en/news/blog/2023/landing-page/)
--
Mike Perez
Community Manager
Ceph Foundation
As an open source project, our goal is to create software that is
accessible, transparent, and inclusive. We believe that user feedback
is a crucial part of achieving that goal. By taking just a few minutes
to complete our survey, you can help shape the future of our project
and ensure that it remains a valuable resource for the wider
community.
https://survey.zohopublic.com/zs/s2D7RQ
Here are a few reasons why we believe taking our user survey is worth your time:
Your voice matters: We want to hear from as many users as possible to
ensure that we are meeting the needs of a diverse range of people and
use cases. Your feedback will help us make informed decisions about
the direction of the project.
We are committed to transparency: By sharing your thoughts and
opinions with us, you can help us maintain transparency and
accountability in our development process. Your feedback will help us
identify areas where we can improve our documentation, communication,
and community engagement.
We want to make the project better: We are constantly striving to
improve our project and make it more useful for our users. Your
feedback will help us identify areas where we can improve our
features, functionality, and user experience.
The survey should take no more than 10-15 minutes to complete, and all
responses will be kept confidential. Your participation is completely
voluntary, and you can opt out at any time.
--
Mike Perez