I have cloned the Ceph repo, checked out the Pacific branch, and am trying
to add some custom logging to the rgw or radosgw code, specifically the bucket
deletion path. I opened a ticket for a potential Ceph bug found when deleting
buckets with large objects (1.4 TB or larger).
From my understanding, I don't need to build the whole project and may be
able to just build rgw and radosgw libs.
Does anyone have any insight on adding some additional logging?
Senior Software Engineer
cat.felts(a)osnexus.com | 1.360.633.6247
What are the conditions for a client_dentry_callback_t to be triggered?
Also, what's the use case for this callback?
FWIW, I've not been able to trigger my client_dentry_callback_t callback. On
the other hand, my client_ino_callback_t callback gets triggered every now
and then in my test setup (a single-node 16.2.5 VM deployment with a Linux
kernel client mount competing with a custom user-space client mount).
Thanks a lot!
We're happy to announce the 6th backport release in the Pacific series.
We recommend that all users update to this release. For detailed release
notes with links and a changelog, please refer to the official blog entry at
* MGR: The pg_autoscaler has a new default 'scale-down' profile which
provides more performance from the start for new pools (for newly
created clusters). Existing clusters will retain the old behavior, now
called the 'scale-up' profile. For more details, see:
* CephFS: the upgrade procedure for CephFS is now simpler. It is no
longer necessary to stop all MDS before upgrading the sole active MDS.
After disabling standby-replay, reducing max_mds to 1, and waiting for
the file systems to become stable (each fs with 1 active and 0 stopping
daemons), a rolling upgrade of all MDS daemons can be performed.
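As a rough operational sketch of the simplified procedure above (assuming a
file system named "cephfs"; the actual daemon upgrade/restart step depends on
your deployment tooling, and your original max_mds value may differ):

```shell
# 1. Disable standby-replay and reduce to a single active MDS
ceph fs set cephfs allow_standby_replay false
ceph fs set cephfs max_mds 1

# 2. Wait until the fs is stable: 1 active and 0 stopping MDS daemons
ceph fs status cephfs

# 3. Upgrade and restart each MDS daemon in turn (tooling-specific),
#    then restore the previous settings, e.g.:
ceph fs set cephfs max_mds 2
ceph fs set cephfs allow_standby_replay true
```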
* Dashboard: now allows users to set up and display a custom message
(MOTD, warning, etc.) in a sticky banner at the top of the page. For
more details, see:
* Several fixes in BlueStore, including a fix for the deferred write
regression, which led to excessive RocksDB flushes and compactions.
Previously, when bluestore_prefer_deferred_size_hdd was equal to or more
than bluestore_max_blob_size_hdd (both set to 64K), all the data was
deferred, which led to increased consumption of the column family used
to store deferred writes in RocksDB. Now, the
bluestore_prefer_deferred_size parameter independently controls deferred
writes, and only writes smaller than this size use the deferred write path.
* The default value of osd_client_message_cap has been set to 256, to
provide better flow control by limiting the maximum number of in-flight
client requests.
* PGs no longer show an active+clean+scrubbing+deep+repair state when
osd_scrub_auto_repair is set to true, for regular deep-scrubs with no
errors found.
* The ceph-mgr-modules-core Debian package no longer recommends
ceph-mgr-rook. The latter depends on python3-numpy, which cannot be
imported multiple times in different Python sub-interpreters when the
python3-numpy version is older than 1.19. Since apt-get installs
Recommends packages by default, ceph-mgr-rook was always installed along
with the ceph-mgr Debian package as an indirect dependency. If your
workflow depends on this behavior, you might want to install ceph-mgr-rook
separately.
* This is the first release built for Debian Bullseye.
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-16.2.6.tar.gz
* Containers at https://hub.docker.com/r/ceph/ceph/tags?name=v16.2.6
* For packages, see https://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: ee28fb57e47e9f88813e24bbf4c14496ca299d31
We are glad to share that Ceph will be participating in Grace Hopper
Celebration #OpenSourceDay2021 as a featured project, yet again!
In this all-day hackathon, mentors will be on hand throughout the event to
help new contributors make their first contributions to Ceph.
All those who have already registered for GHC this year are welcome to
join. For those who haven't, we encourage you to join us next year! Please
share this with anybody who would like to contribute to Ceph or any
other open source project. This is a great opportunity to make your
first open source contribution.