Welcome to Aviv Caro as the new Ceph NVMe-oF lead
Reef status:
* reef 18.1.3 built, gibba cluster upgraded, plan to publish this week
* https://pad.ceph.com/p/reef_final_blockers all resolved except for
bookworm builds https://tracker.ceph.com/issues/61845
* only blocker fixes will merge to reef, so the final release matches the last rc
Planning for distribution updates earlier in the release process:
* centos 9 testing wasn't enabled for reef until very late
-- partly because of missing python dependencies
-- enabling it required fixes to every component's test suites, so
nothing could merge until everything was fixed
* also applies to major dependencies like boost and rocksdb
-- boost upgrade on main disrupted testing on other release branches
-- build containerization in CI would help a lot here; discussion
continues tomorrow in the Ceph Infrastructure meeting
Improving the documentation/procedure for deploying a vstart cluster:
* including installation of dependencies and compilation
-- add test coverage on fresh distros to verify that all required
dependencies are installed
* README.md will be the canonical guide
CDS concluded yesterday:
* recordings at
https://ceph.io/en/community/events/2023/ceph-developer-summit-squid/
* component leads to update the Ceph backlog on Trello
Hi Folks,
The weekly performance meeting will be starting in approximately 45
minutes at 8AM PST. Today, Esmaeil Mirvakili will be presenting his work
on CoDel support in BlueStore to help reduce buffer bloat! Should be a
very interesting talk!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Meeting URL:
https://meet.jit.si/ceph-performance
Mark
--
Best Regards,
Mark Nelson
Head of R&D (USA)
Clyso GmbH
p: +49 89 21552391 12
a: Loristraße 8 | 80335 München | Germany
w: https://clyso.com | e: mark.nelson(a)clyso.com
We are hiring: https://www.clyso.com/jobs/
Hello,
As of yesterday, the main branch now uses version 1.82 of Boost.
The main driver of the change is improved functionality in Boost.Asio.
C++20 coroutines are the biggest motivator, but there are other nice
things like a type-erased handler class.
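For anyone who hasn't tried it yet, here is a minimal standalone
sketch of the coroutine style this enables (an illustration only, not
Ceph code; the type-erased handler mentioned above is, to my
knowledge, asio::any_completion_handler):

  // build sketch: g++ -std=c++20 demo.cpp -pthread (Asio used header-only)
  #include <boost/asio/awaitable.hpp>
  #include <boost/asio/co_spawn.hpp>
  #include <boost/asio/detached.hpp>
  #include <boost/asio/io_context.hpp>
  #include <boost/asio/steady_timer.hpp>
  #include <boost/asio/this_coro.hpp>
  #include <boost/asio/use_awaitable.hpp>
  #include <chrono>
  #include <iostream>

  namespace asio = boost::asio;

  // co_await suspends the coroutine instead of blocking the thread, so a
  // single io_context thread can interleave many of these.
  asio::awaitable<void> wait_and_greet()
  {
    auto ex = co_await asio::this_coro::executor;
    asio::steady_timer timer(ex, std::chrono::seconds(1));
    co_await timer.async_wait(asio::use_awaitable);
    std::cout << "hello from a C++20 coroutine\n";
  }

  int main()
  {
    asio::io_context ctx;
    asio::co_spawn(ctx, wait_and_greet(), asio::detached);
    ctx.run();
  }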
A minor caveat: Due to an unfixed ODR violation introduced in
Boost.Phoenix 1.81 (https://github.com/boostorg/phoenix/issues/111),
we disable Phoenix's tuple support, also introduced in 1.81. Hopefully
the bug will be fixed in the next version.
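For anyone who needs to reproduce the workaround in their own tree:
the disable amounts to pre-defining the include guard of the offending
header so its body is never compiled. A minimal sketch (the guard
macro name is my reading of boost/phoenix/stl/tuple.hpp; please verify
against your Boost version):

  // Pre-define the assumed include guard of boost/phoenix/stl/tuple.hpp
  // so Phoenix's tuple support, whose non-inline placeholder definitions
  // trigger the ODR violation, is skipped entirely. In a build system
  // this would typically be passed as -DBOOST_PHOENIX_STL_TUPLE_H_.
  #define BOOST_PHOENIX_STL_TUPLE_H_
  #include <boost/phoenix.hpp>

  int main() { return 0; }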
Thank you.
Hi folks,
Today we discussed:
- Reef is almost ready! The remaining issues are tracked in [1]. In
particular, an epel9 package is holding back the release.
- Vincent Hsu, Storage Group CTO of IBM, presented a proposal outline
for a Ceph Foundation Client Council. This council would be composed
of 10-25 invited significant operators or users of Ceph. The function
of the council is to provide essential feedback on use-cases,
pain-points, and successes arising during their use of Ceph. This
feedback will be used to steer development and initiatives. More
information on this will be forthcoming once the proposal is
finalized.
The monthly user <-> dev meeting will be reevaluated in light of
this, possibly continuing on as usual.
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
hi Ernesto and lists,
> [1] https://github.com/ceph/ceph/pull/47501
are we planning to backport this to quincy so we can support centos 9
there? enabling that upgrade path on centos 9 was one of the
conditions for dropping centos 8 support in reef, which i'm still keen
to do
if not, can we find another resolution to
https://tracker.ceph.com/issues/58832? as i understand it, all of
those python packages exist in centos 8. do we know why they were
dropped for centos 9? have we looked into making those available in
epel? (cc Ken and Kaleb)
On Fri, Sep 2, 2022 at 12:01 PM Ernesto Puerta <epuertat(a)redhat.com> wrote:
>
> Hi Kevin,
>
>>
>> Isn't this one of the reasons containers were pushed, so that the packaging isn't as big a deal?
>
>
> Yes, but the Ceph community has a strong commitment to provide distro packages for those users who are not interested in moving to containers.
>
>> Is it the continued push to support lots of distros without using containers that is the problem?
>
>
> If not a problem, it definitely makes it more challenging. Compiled components often sort this out by statically linking deps whose packages are not widely available in distros. The approach we're proposing here would be the closest equivalent to static linking for interpreted code (bundling).
>
> Thanks for sharing your questions!
>
> Kind regards,
> Ernesto
Hi,
How does RBD mirror track mirroring progress on local storage?
Say RBD mirror is running on host-1. When host-1 goes down, we start
RBD mirror on host-2. In that case, will the RBD mirror on host-2
pick up and continue the mirroring?
Thanks!
Tony
Hi Folks,
The weekly performance meeting tomorrow is canceled. Unfortunately I
have a conflict and won't be able to make it. We'll reconvene on July
20th when Esmaeil Mirvakili will present his work on CoDel in
BlueStore. See you then!
Thanks,
Mark
--
Best Regards,
Mark Nelson
Head of R&D (USA)
Clyso GmbH
p: +49 89 21552391 12
a: Loristraße 8 | 80335 München | Germany
w: https://clyso.com | e: mark.nelson(a)clyso.com
We are hiring: https://www.clyso.com/jobs/
Hi everyone,
This is the second release candidate for Reef.
The Reef release comes with a new RocksDB version (7.9.2) [0], which
incorporates several performance improvements and features. Our
internal testing doesn't show any side effects from the new version,
but we are very eager to hear community feedback on it. This is the
first release with the ability to tune RocksDB settings per column
family [1], which allows more granular tunings to be applied to the
different kinds of data stored in RocksDB. Reef ships with a new set
of settings chosen to optimize performance for most kinds of
workloads; a few cases see a slight penalty, but it is outweighed by
large improvements in compactions and write amplification for use
cases such as RGW. We highly encourage community members to give
these a try against their own performance benchmarks and use cases.
The detailed list of RocksDB and BlueStore changes can be found at
https://pad.ceph.com/p/reef-rc-relnotes.
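For anyone who wants to experiment before the final release: the
per-column-family settings hang off BlueStore's RocksDB sharding
option. The fragment below is purely illustrative (hypothetical
values, not the shipped Reef defaults; see the pad above for those):

  # illustrative ceph.conf fragment, values are made up
  [osd]
  # each entry names a BlueStore column-family prefix, optionally with a
  # shard count; an "=key=value" suffix attaches RocksDB options to that CF
  bluestore_rocksdb_cfs = m(3) p(3,0-12) O(3)=block_cache={type=binned_lru}

Note that changing the sharding of an existing OSD requires an offline
reshard with ceph-bluestore-tool.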
If any of our community members would like to help us with performance
investigations or regression testing of the Reef release candidate,
please feel free to provide feedback via email or in
https://pad.ceph.com/p/reef_scale_testing. For more active
discussions, please use the #ceph-at-scale slack channel in
ceph-storage.slack.com.
This RC has gone through only partial testing due to issues we are
experiencing in the sepia lab.
Please try it out and report any issues you encounter. Happy testing!
Thanks,
YuriW
Hello,
We have had some main branch PRs tagged with core + needs-qa +
wip-yuri-testing just sit there for a while (e.g. [1], [2]). In the
CLT meeting it became clear that there is some confusion about the
process: for example, Yuri doesn't pick up main branch PRs on his own
and instead expects a nod from someone on the RADOS team. This is
because, unlike with release branch PRs, the relative priority isn't
always clear.
On the other hand, not all PRs that get tagged with core by the labeler
necessarily need a RADOS run. For example, src/mon/MDSMonitor.cc would
probably be better covered by the CephFS suite.
Questions:
- What are the expectations from the RADOS team here? Is the core +
needs-qa combination sufficient, or is something else required?
- Since RADOS is kind of a catch-all suite, is there anyone sweeping
PRs tagged with needs-qa but not core (e.g. common or build/ops) for
inclusion in RADOS runs? Such PRs tend to linger for months (e.g.
[3], [4]). Perhaps we need to introduce a new needs-rados-qa label
specifically for such cases? If used on PRs that are already tagged
with core, it would make the request more explicit.
[1] https://github.com/ceph/ceph/pull/50503
[2] https://github.com/ceph/ceph/pull/52124
[3] https://github.com/ceph/ceph/pull/48672
[4] https://github.com/ceph/ceph/pull/50301
Thanks,
Ilya