Hi Ceph Developers,
The February CDM is coming up tomorrow, *Wednesday, February 1st @ 16:00
UTC*. See more meeting details below.
Please add any topics you'd like to discuss to the agenda:
See you there,
UTC: Wednesday, February 1, 16:00 UTC
Mountain View, CA, US: Wednesday, February 1, 8:00 PST
Phoenix, AZ, US: Wednesday, February 1, 9:00 MST
Denver, CO, US: Wednesday, February 1, 9:00 MST
Huntsville, AL, US: Wednesday, February 1, 10:00 CST
Raleigh, NC, US: Wednesday, February 1, 11:00 EST
London, England: Wednesday, February 1, 16:00 GMT
Paris, France: Wednesday, February 1, 17:00 CET
Helsinki, Finland: Wednesday, February 1, 18:00 EET
Tel Aviv, Israel: Wednesday, February 1, 18:00 IST
Pune, India: Wednesday, February 1, 21:30 IST
Brisbane, Australia: Thursday, February 2, 2:00 AEST
Singapore, Asia: Thursday, February 2, 0:00 +08
Auckland, New Zealand: Thursday, February 2, 5:00 NZDT
Software Engineer, Ceph Storage
Red Hat Inc. <https://www.redhat.com>
My name is Abhinav Ohri. I am a UG student interested in contributing to
the open-source community. I found your organisation interesting and
would love to start contributing to your project. I would be glad
if you could point me to some "good first bug" issues.
The weekly performance meeting will be starting in approximately 5
minutes at 8AM PST. Today let's continue the conversation with Josh B
about the latency spikes they are seeing in production. Please feel
free to add your own topic to the etherpad as well!
Here is the summary of the CLT meeting held on Jan 25th 2023:
- Following up on the recent issues with the LRC, there is a desire to
standardize the ops processes, clarify responsibilities and
escalations now that there is no longer a single person responsible
for the infrastructure. One idea is to add health alerts to the Slack
#sepia channel, which David Orman volunteered to look into. It was
also suggested to set up a weekly infra call to organize, and to
eventually scrub the Sepia tracker issues weekly. Since no good
time could be found for this meeting, it was agreed to use the first
30 minutes of the CLT weekly for the time being.
- Dev / Community chat seems to be centralizing on Slack, which looks
popular but has a couple of issues. First, there was a limitation on
allowed domains; Mike has cleared this and anyone should be able to
register now. There is also work ongoing to bridge between Slack and
IRC. (In an informal poll of the CLT, 13 responded that they will use
Slack directly, and 2 will continue using IRC).
- Next there was a discussion about the status of the upcoming Reef
release. Josh asked that leads please update the ceph-backlog trello.
It was then discussed whether the release timing should be delayed to
account for the build and test infrastructure issues these past
months. All leads chimed in, and it was agreed that the freeze will be
moved back to the end of February, with the new target release date
being end of June. The team will still target an RC to be available
for Cephalocon mid-April.
- tracker.ceph.com is often overloaded -- Adam will check the
configuration with respect to rate limits or throttles.
- There is a significant backlog in the ML moderation queues. (And
also in the tracker account signup queue, which Ilya and Adam
discussed how to clear). It was noted that the ML moderation is
currently manual, via the web UI -- it would be appreciated if anyone
knows how to enable moderation by email or some other automation.
- Mike announced that ceph-devel is now archived to MARC at
https://marc.info/?l=ceph-dev&r=1&b=202301&w=2 -- It was asked if MARC
allows importing historical mails, which e.g. could be bundled into an
mbox and uploaded somewhere. Mike will inquire.
- Lastly, major kudos to everyone: Pacific 16.2.11 is in the
process of being published! Some user confusion re: the publishing
process was discussed (e.g. currently packages trickle into the repos
prior to release notes being merged or published). It was proposed
that going forward the notes+announcement would come out before the
repos would be updated.
most of rgw's test suites depend on the python nose library and, while
we knew it was ancient and unmaintained, it was never worth the effort
to rewrite tests. with python 3.9, nose stopped working entirely
(https://github.com/nose-devs/nose/issues/1099), but everything still
worked in upstream testing with older python. ubuntu 22.04 only ships
python 3.10, so this has finally become a blocker for testing the reef
release. in https://github.com/ceph/s3-tests/pull/482, all of s3-tests' nose
dependencies were replaced with pytest.
https://github.com/ceph/ceph/pull/49826 was just merged to run that
updated version in ceph's qa suites. we only validated that against
the rgw suite, but i know there are other suites that run s3-tests -
please let me know if any of those start failing
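to illustrate the shape of the change, here's a minimal sketch of what a nose-to-pytest conversion typically looks like. the test and helper names below are hypothetical, not actual s3-tests code:

```python
# nose style (broken on newer python):
#
#   from nose.tools import eq_ as eq
#   from nose.plugins.attrib import attr
#
#   @attr('fails_on_aws')
#   def test_bucket_list_empty():
#       eq(list_bucket('test'), [])
#
# pytest style: plain asserts replace nose.tools helpers, and
# pytest markers replace nose's @attr attributes.
import pytest

def list_bucket(name):
    # stand-in for a real S3 call; a hypothetical helper
    return []

@pytest.mark.fails_on_aws
def test_bucket_list_empty():
    assert list_bucket('test') == []
```

with markers in place, attribute-based test selection like `nosetests -a '!fails_on_aws'` becomes `pytest -m 'not fails_on_aws'`.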
other rgw tests that still need conversion to pytest:
We're happy to announce the 11th backport release in the Pacific series.
We recommend that all users update to this release. For detailed release
notes with links & changelog please refer to the official blog entry
* Cephfs: The 'AT_NO_ATTR_SYNC' macro is deprecated, please use the standard
'AT_STATX_DONT_SYNC' macro. The 'AT_NO_ATTR_SYNC' macro will be removed in
a future release.
* Trimming of PGLog dups is now controlled by the size instead of the version.
This fixes the PGLog inflation issue that was happening when the on-line
(in OSD) trimming got jammed after a PG split operation. Also, a new off-line
mechanism has been added: `ceph-objectstore-tool` got a `trim-pg-log-dups` op
that targets situations where an OSD is unable to boot due to those dups.
If that is the case, the "You can be hit by THE DUPS BUG" warning
will be visible in the OSD logs.
Relevant tracker: https://tracker.ceph.com/issues/53729
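As a sketch of what the off-line invocation looks like, the snippet below builds the `ceph-objectstore-tool` command line. The data path and pgid are hypothetical placeholders; the real op must be run against a stopped OSD:

```python
import shlex

def trim_pg_log_dups_cmd(osd_data_path, pgid):
    """Build the argv for the off-line `trim-pg-log-dups` op.
    Both arguments here are hypothetical examples, not defaults."""
    return ["ceph-objectstore-tool",
            "--data-path", osd_data_path,
            "--pgid", pgid,
            "--op", "trim-pg-log-dups"]

cmd = trim_pg_log_dups_cmd("/var/lib/ceph/osd/ceph-0", "2.3")
print(shlex.join(cmd))
```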
* RBD: the `rbd device unmap` command gained a `--namespace` option. Support
  for namespaces was added to RBD in Nautilus 14.2.0, and it has been
  possible to map and unmap images in namespaces using the `image-spec`
  syntax since then, but the corresponding option, available in most other
  commands, was missing.
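  To illustrate, the snippet below renders the two equivalent ways of
  unmapping an image in a namespace; the pool, namespace, and image names
  are hypothetical:

```python
# Hypothetical names for illustration only.
pool, ns, image = "mypool", "myns", "myimage"

# image-spec syntax, usable since Nautilus:
spec_form = f"rbd device unmap {pool}/{ns}/{image}"

# the newly added explicit option form:
opt_form = f"rbd device unmap --pool {pool} --namespace {ns} --image {image}"

print(spec_form)
print(opt_form)
```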
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-16.2.11.tar.gz
* Containers at https://quay.io/repository/ceph/ceph
* For packages, see https://docs.ceph.com/en/latest/install/get-packages/
* Release git sha1: 3cf40e2dca667f68c6ce3ff5cd94f01e711af894
this refers to the 'Merge pull request' button on github, and its
options for the merge strategy. the only option enabled for the
ceph.git repo is 'create a merge commit', and that's the right default
for main and stable release branches
but for feature branches that aren't ready to merge to main, i'd like
the option to use 'rebase and merge'. that way, we can target pull
requests at the feature branch and merge them without merge commits.
then once the feature branch is stable and ready, we can open a pull
request to merge the feature branch to main
as i understand it, there's a policy not to include merge commits in
pull requests, though i don't see it documented in
is there a way to adjust github's merge strategy so that the 'create a
merge commit' option is only enforced for stable branches?
Happy New Year all!
This release remains in "in progress"/"on hold" status while we sort
out the infrastructure-related issues.
Unless I hear objections, I suggest doing a full rebase/retest QE
cycle (adding recently merged PRs) once sepia is back online, since
this is taking much longer than anticipated.
On Thu, Dec 15, 2022 at 9:14 AM Yuri Weinstein <yweinste(a)redhat.com> wrote:
> Details of this release are summarized here:
> Release Notes - TBD
> Seeking approvals for:
> rados - Neha (https://github.com/ceph/ceph/pull/49431 is still being
> tested and will be merged soon)
> rook - Sébastien Han
> cephadm - Adam
> dashboard - Ernesto
> rgw - Casey (rgw will be rerun on the latest SHA1)
> rbd - Ilya, Deepika
> krbd - Ilya, Deepika
> fs - Venky, Patrick
> upgrade/nautilus-x (pacific) - Neha, Laura
> upgrade/octopus-x (pacific) - Neha, Laura
> upgrade/pacific-p2p - Neha, Laura
> powercycle - Brad
> ceph-volume - Guillaume, Adam K
I've been looking at Ceph's Jaeger tracing support, and was wondering why it uses port 6799 instead of the default 6831.
This doesn't match the documented directions for running Jaeger with vstart.sh, and the port is also hard-coded in common/tracer.cc. Is there a reason Ceph doesn't use the standard port, and could it be made configurable?