Hi Folks,
The weekly performance meeting will be starting in approximately 15
minutes at 8AM PST. Today I'm hoping we'll get some feedback from David
Orman's team about what they are seeing with RocksDB performance
improvements and hopefully an update from Josh Baergen on write
amplification improvements. Please feel free to add your own topics!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Mark
Did anything change recently with respect to the Python bindings for
rados? The rgw suite just started failing this way on both centos and
ubuntu: https://tracker.ceph.com/issues/58643
Are any other qa suites hitting this?
There are occasions when I would like a branch pushed to ceph-ci to only trigger one build, say centos9+x86_64. I created a PR for that specific case[1], but I’m wondering how feasible it would be to create a more generalized solution. It feels wasteful to launch, say, 7 builds when you’re only interested in 1, and it may slow down other developers.
My thoughts aren’t fully concrete yet, in part because I don’t know the details of the code that interprets those .yml files, but here’s a sketch…
1. Those .yml files look for a tag in the branch name along the lines of “build-file” using the existing regex mechanism already leveraged in ceph-dev-new-trigger.yml.
2. When that tag is found in the branch, use wget to pull a specific file from the top-level of that branch. Say the file is called “ceph-build.config”, I believe a command along the lines of the following would pull that one file without cloning the whole repository:
wget https://raw.githubusercontent.com/ceph/ceph-ci/${GIT_BRANCH}/ceph-build.con…
3. That file would then define the values of DISTROS, ARCHS, FLAVOR, and maybe also FORCE. BRANCH would be defined by the existing ${GIT_BRANCH} variable. I don’t know which format for that file would be easiest — .yml, key/value pairs, etc.
4. The build process would then continue as it does now.
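To make step 3 a bit more concrete, here is a rough sketch of what parsing such a file could look like if it used simple key/value pairs. The file name "ceph-build.config" and the keys are only illustrative; nothing here is an agreed-upon format.

```python
# Hypothetical sketch: parse a key/value "ceph-build.config" fetched from the
# top of the branch. File name, keys, and defaults are illustrative only.

def parse_build_config(text, defaults):
    """Parse KEY=VALUE lines, ignoring blank lines and '#' comments."""
    config = dict(defaults)
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

example = """
# build only one combination
DISTROS=centos9
ARCHS=x86_64
FLAVOR=default
"""
cfg = parse_build_config(example, {"FORCE": "false"})
```

Anything not set in the file would fall back to the defaults the build scripts use today, so existing branches without the tag would be unaffected.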
Perhaps there are hurdles that I’m not seeing. But I thought it’d be worth putting the idea out there. Thanks for considering,
Eric
[1] https://github.com/ceph/ceph-build/pull/2104
Hi Ceph Developers,
Ceph is applying to be part of Google Summer of Code
<https://summerofcode.withgoogle.com/> (GSoC) and is going to be part of
Outreachy <http://outreachy.org> this summer.
If you have any ideas for intern projects please add them to the pad below.
Projects are due by *Thursday, February 23rd*, and will be added to
ceph.io as they come in.
https://pad.ceph.com/p/project-ideas
I will be reaching out to tech leads and other previous mentors within the
community over the next week.
Best,
Ali
Hi Folks,
The weekly performance meeting will be starting in approximately 20
minutes at 8AM PST. Today David Orman's team at 11:11 Systems will talk
about some of their experiments with the RocksDB tunings we made in the
main branch. I would also like to talk a little bit about ideas to
reduce seeks on HDD-based clusters if there is time. Please feel free
to add your own topics!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Mark
distro testing for reef
* https://github.com/ceph/ceph/pull/49443 adds centos9 and ubuntu22 to
supported distros
* centos9 blocked by teuthology bug https://tracker.ceph.com/issues/58491
- lsb_release command no longer exists, use /etc/os-release instead
- ceph stopped depending on lsb_release in 2021 with
https://github.com/ceph/ceph/pull/42770
* ubuntu22 not blocked by teuthology, but the new python version
breaks most of the rgw tests
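On the lsb_release point above: since /etc/os-release is the replacement, a minimal sketch of reading it could look like the following (parsing rules per os-release(5); the sample content is illustrative).

```python
# Minimal sketch: derive distro identity from /etc/os-release content
# instead of the removed lsb_release command.

def parse_os_release(text):
    """Parse KEY=value pairs from os-release content, stripping quotes."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info[key] = value.strip().strip('"').strip("'")
    return info

# Example content; on a real host, read it from /etc/os-release instead.
sample = 'NAME="CentOS Stream"\nVERSION_ID="9"\nID="centos"\n'
info = parse_os_release(sample)
```

The ID and VERSION_ID fields give roughly what lsb_release -is / -rs used to provide.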
can we drop centos8 or ubuntu20 support for reef?
* we usually support the latest centos and two ubuntu LTSs
* users need an upgrade path that doesn't require OS and ceph upgrade
at the same time
* we might be able to drop centos8 support for Reef by adding centos9
support to Quincy
* python versioning issues make longer-term support of older distros
problematic. related work:
- https://github.com/ceph/ceph/pull/41979
- https://github.com/ceph/ceph/pull/47501
ondisk format changes in minor releases
* https://github.com/ceph/ceph/pull/48915 introduced BlueFS log
changes in 16.2.11 that make it incompatible with previous Pacific
releases, so downgrades are no longer possible
- doc text tracked in https://tracker.ceph.com/issues/58625
* how do we prevent these issues in the future?
- better testing of mixed-version rgw/mds/mgr/etc
infrastructure update
* a planned network outage yesterday is still affecting the LRC
Hi Ceph Developers and Users,
Various upstream developers and I are working on adding labels to perf
counters (https://github.com/ceph/ceph/pull/48657).
We would like to understand the ramifications of changing the format of the
json dumped by the `perf dump` command for the Reef Release on users and
components of Ceph.
As an example from the PR, here is how an unlabeled counter is currently
dumped, compared with its new labeled counterpart:
"some unlabeled_counter": {
    "put_b": 1048576
},
"some labeled_counter": {
    "labels": {
        "Bucket": "bkt1",
        "User": "user1"
    },
    "counters": {
        "put_b": 1048576
    }
},
Here is an example given in the PR of the old style unlabeled counters
being dumped in the same format as the labeled counters:
"some unlabeled": {
    "labels": {},
    "counters": {
        "put_b": 1048576
    }
},
"some labeled": {
    "labels": {
        "Bucket": "bkt1",
        "User": "user1"
    },
    "counters": {
        "put_b": 1048576
    }
},
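To give a feel for the migration burden on consumers, here is an illustrative sketch (not actual mgr-module code) of a reader that tolerates both shapes during a transition. It uses the presence of a "counters" sub-object as a heuristic, which would misfire on an old-style counter literally named "counters".

```python
# Illustrative sketch: accept both the old flat 'perf dump' entry shape
# and the new {"labels": ..., "counters": ...} shape. The detection is a
# heuristic, not an official Ceph API.

def extract_counters(entry):
    """Return (labels, counters) for one perf dump entry, old or new style."""
    if isinstance(entry.get("counters"), dict):
        # New labeled style: counters nested under "counters"
        return entry.get("labels", {}), entry["counters"]
    # Old unlabeled style: counters live directly in the entry
    return {}, entry

old_style = {"put_b": 1048576}
new_style = {"labels": {"Bucket": "bkt1"}, "counters": {"put_b": 1048576}}
```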
Would users/consumers of these counters be opposed to changing the format?
If so, why?
As far as I know, there are ceph-mgr modules related to Prometheus and
telemetry that consume the current unlabeled counters. This topic will
also be discussed at the upcoming Ceph Developer Monthly (EMEA).
Best,
Ali