hi Ernesto and lists,
>  https://github.com/ceph/ceph/pull/47501
are we planning to backport this to quincy so we can support centos 9
there? enabling that upgrade path on centos 9 was one of the
conditions for dropping centos 8 support in reef, which i'm still keen
to see through.
if not, can we find another resolution to
https://tracker.ceph.com/issues/58832? as i understand it, all of
those python packages exist in centos 8. do we know why they were
dropped for centos 9? have we looked into making those available in
epel? (cc Ken and Kaleb)
On Fri, Sep 2, 2022 at 12:01 PM Ernesto Puerta <epuertat(a)redhat.com> wrote:
> Hi Kevin,
>> Isn't this one of the reasons containers were pushed, so that the packaging isn't as big a deal?
> Yes, but the Ceph community has a strong commitment to provide distro packages for those users who are not interested in moving to containers.
>> Is it the continued push to support lots of distros without using containers that is the problem?
> If not a problem, it definitely makes it more challenging. Compiled components often sort this out by statically linking deps whose packages are not widely available in distros. The approach we're proposing here would be the closest equivalent to static linking for interpreted code (bundling).
> Thanks for sharing your questions!
> Kind regards,
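The bundling approach described above can be sketched in a few lines; this is a minimal illustration of the idea, and the `_vendor` directory layout and function name here are hypothetical, not Ceph's actual packaging:

```python
import os
import sys


def prefer_vendored_deps(pkg_root):
    """Prepend a bundled "_vendor" directory to sys.path so the copies
    of third-party modules shipped with the package take precedence
    over whatever the distro provides (or fails to provide) -- the
    interpreted-code analogue of static linking."""
    vendor = os.path.join(pkg_root, "_vendor")
    if os.path.isdir(vendor):
        sys.path.insert(0, vendor)
```

The trade-off is the usual one for static linking: the package no longer depends on the distro shipping the right python packages, at the cost of carrying (and rebuilding for) its own copies.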
Details of this release are summarized here:
Release Notes - TBD
The reruns were in the queue for 4 days because of some slowness issues.
The core team (Neha, Radek, Laura, and others) are trying to narrow
down the root cause.
Seeking approvals/reviews for:
rados - Neha, Radek, Travis, Ernesto, Adam King (we still have to test
and merge at least one PR https://github.com/ceph/ceph/pull/50575 for
rgw - Casey
fs - Venky (the fs suite has an unusually high number of failed jobs;
any reason to suspect it in the observed slowness?)
orch - Adam King
rbd - Ilya
krbd - Ilya
upgrade/octopus-x - Laura is looking into failures
upgrade/pacific-x - Laura is looking into failures
upgrade/quincy-p2p - Laura is looking into failures
client-upgrade-octopus-quincy-quincy - missing packages, Adam Kraitman
is looking into it
powercycle - Brad
ceph-volume - needs a rerun on the merged PRs
Please reply to this email with approval and/or trackers of known
issues/PRs to address them.
Also, share any findings or hypotheses about the slowness in the
execution of the suite.
Josh, Neha - gibba and LRC upgrades pending major suites approvals.
RC release - pending major suites approvals.
the rgw suite just started hitting a tcmalloc crash on main against
ubuntu 20.04: https://tracker.ceph.com/issues/59269
> src/tcmalloc.cc:332] Attempt to free invalid pointer 0x55e8173eebd0
this happens during startup of one of our librgw_file test cases
https://tracker.ceph.com/issues/58219 tracks a similar crash from the
fs suite in december, which apparently went away after reverting a
change to ceph-dencoder's linkage, but it doesn't sound like we ever
found the root cause
is this crash showing up anywhere else? has anything changed with
respect to tcmalloc versions or linkage recently?
On April 1, 2023, the Ceph GitHub organization will start requiring
two-factor authentication for all accounts (both members and outside
collaborators). Any account that doesn't have two-factor
authentication enabled on that date will be automatically removed from
the organization. Disabling two-factor authentication on the account
after that date would also remove it from the organization.
If you don't have two-factor authentication enabled on your account
already (some core developers still don't!), follow instructions at 
to set it up. See  for a good explanation of why it matters.
I will be taking a very much needed holiday starting tomorrow and thus
will not be running the next two performance meetings. Josh has kindly
accepted the responsibility for running them however!
If there are problems with the Jitsi transition I'm not responsible. ;)
Bluejeans (Assuming it's not Jitsi!):
We are looking for input on a new feature to move clog message storage
out of the monstore db; refer to the trello card for more details
on this topic.
Currently, every clog message is written to the monstore db. In a
catastrophic failure, debug/warning messages can generate clog messages
thousands of times per second, which makes the monstore db grow at an
exponential rate.
The primary use cases for the logm entries in the monstore db are:
- The "ceph log last" command, to get historical clog entries
- The Ceph dashboard (the mgr subscribes to log-info, which propagates
clog messages to the dashboard)
@Patrick Donnelly <pdonnell(a)redhat.com> suggested a viable solution to move
the cluster log storage to a new mgr module which handles the "ceph log
last" command. The clog data can be stored in the .mgr pool via
Alternatively, if we do not want to remove logm storage from the
monstore db, the other solutions would be:
- Stop writing logm entries to mon db if there are excessive entries
- Filter out clog DBG entries and only log WRN/INF/ERR entries.
Looking forward to additional perspectives around this topic. Feel free
to add your input to the trello card or reply to this email thread.
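As a rough sketch of the filtering option, assuming clog entries arrive as (level, message) pairs — the entry format and function name here are illustrative, not the actual mon interfaces:

```python
# Levels worth persisting for "ceph log last" and the dashboard;
# DBG entries are dropped before they ever reach the monstore db.
PERSISTED_LEVELS = {"WRN", "INF", "ERR"}


def entries_to_persist(entries):
    """Filter a batch of clog entries, keeping only WRN/INF/ERR.
    'entries' is an iterable of (level, message) tuples."""
    return [(lvl, msg) for lvl, msg in entries
            if lvl in PERSISTED_LEVELS]
```

Since DBG entries dominate the write volume in the failure scenario described above, dropping them at this point would bound monstore db growth without affecting either of the two use cases.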
- gibba cluster has been upgraded
- saw known issue with slow backfilling, will use gibba cluster to
test fix for it but fix is not targeted for 17.2.6 release
- Still need to upgrade LRC
- Once LRC is upgraded and everything is approved, will release rc
- release notes PR is open
- cephalocon dev summit
- draft schedule: https://pad.ceph.com/p/cephalocon-dev-summit-2023
- want to do some planning for the next few years here since that
goes better in person
- Potentially some panel involving users
- cache tiering
- summary of last week's discussion for those who missed it
- plan is still to deprecate but not remove in reef
     - at a minimum, some docs work will be done for the deprecation,
possibly with added warnings where feasible
- Enforcing 2FA on github to be in the Ceph org
- Not a lot of action following previous email about this
- Need to do something about the bots in the ceph org
- CephaloBot @ceph-jenkins
- Ceph Sepia lab robot @sepia-robot
- Red Hat Ceph Storage Jenkins bot @rhcs-jenkins
- Want infra team to look at this
     - Have cc'd Dan Mick, Adam Kraitman, Zack Cerza on this email,
hoping one of them could determine what these bots do and what the
impact would be if they were removed from the ceph org.
- Discussion on scale testing
- Currently, Gibba is being used for other things so we don't have
any cluster with large logical scale
- Might be able to get some scale or performance testing from
community members, but nothing certain
- Gibba could be brought back to large logical scale later if we have
features that need the scale testing
- Transitioning to Jitsi
- Bluejeans will be unavailable as of April 1st
- Discussed recording process for Jitsi meetings
   - Mike Perez will make some new Jitsi meeting rooms for teams who
need them
- Upgrade suites
   - Some of the upgrade suites start from tagged releases, whereas
they used to start from the tip of the stable branch
- wouldn't be too hard to have them use builds for the tip of the
branch, just need to go through and change them
   - some of the tests are specifically peer-to-peer tests covering
upgrades from certain point releases to others, so those should not be
changed
- Adam King
Recently I published a blog post on s3-select.
The blog discusses what it is and why it is needed.
The last paragraph discusses Trino (an analytic SQL engine) and its
integration with Ceph/s3select.
That integration is still in progress, and it is quite promising.
Trino not only provides comprehensive SQL support but also provides
scalable processing of SQL statements.
We will be glad to hear your ideas and comments.
1) copy a snapshot to an image,
2) no need to copy snapshots,
3) no dependency after copy,
4) all same image format 2.
In that case, is rbd cp the same as rbd clone + rbd flatten?
I ran some tests and it seems like it, but I want to confirm, in case
I'm missing anything.
Also, cp seems a bit faster than clone + flatten; is that true?
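For reference, the two paths being compared look roughly like this (pool/image names are placeholders; run against a test pool):

```shell
# path 1: clone, then flatten to remove the parent dependency
rbd snap create mypool/parent@snap1
rbd snap protect mypool/parent@snap1   # not needed with clone v2
rbd clone mypool/parent@snap1 mypool/child
rbd flatten mypool/child               # copies all data into the child

# path 2: direct copy of the snapshot
rbd cp mypool/parent@snap1 mypool/copy
```

One observable difference worth checking in your tests: after path 1 the source snapshot was protected and cloned at some point, while rbd cp never creates a parent/child relationship in the first place.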