this ceph-object-corpus repo is the basis of our ceph-dencoder test
src/test/encoding/readable.sh, which verifies that we can still decode
all of the data structures encoded by older ceph versions
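for context, readable.sh effectively runs a round-trip check per archived object, roughly like the sketch below (simplified and from memory; the real script iterates over every version/type in the archive):

  # simplified sketch of the per-object check (variables stand in for
  # the archive layout: archive/$version/objects/$type/$object)
  obj=archive/$version/objects/$type/$object
  # decode the archived encoding with today's code and dump it
  ceph-dencoder type $type import $obj decode dump_json > /tmp/a
  # decode, re-encode with today's code, decode again, and dump
  ceph-dencoder type $type import $obj decode encode decode dump_json > /tmp/b
  # any difference is the "resulted in a different dump" failure shown below
  cmp /tmp/a /tmp/b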
i'd like to raise awareness that this ceph-object-corpus repo hasn't
been updated with new encodings since pacific 16.2.0, so we've been
missing important regression test coverage ever since
Nitzan prepared the encodings for reef 18.2.0 in
https://github.com/ceph/ceph-object-corpus/pull/17, but those haven't
merged yet. i had opened https://github.com/ceph/ceph/pull/54735 to
test that, but 'make check' identified failures like:
> The following tests FAILED:
> 147 - readable.sh (Failed)
>
> **** reencode of /home/jenkins-build/build/workspace/ceph-pull-requests/ceph-object-corpus/archive/18.2.0/objects/chunk_refs_t/ccb69d9ecd572c1f6ed9598899773cf1 resulted in a different dump ****
can we find a way to prioritize this? it would be great to have these
reef encodings while we're validating the squid release
there was a consensus to drop support for ubuntu focal and centos
stream 8 with the squid release, and i'd love to remove those distros
from the shaman build matrix for squid and main branches asap
however, i see that quincy never supported ubuntu jammy, so our quincy
upgrade tests still have to run against focal. that means we'd still
have to build focal packages for squid
would it be possible to start building jammy packages for quincy to
allow those upgrade tests to run jammy instead?
this isn't an issue on the centos side because we've been building
centos 9 packages for quincy even though it's not listed in
https://docs.ceph.com/en/latest/start/os-recommendations/#platforms
Hi Guillaume,
We need to move the Squid release to use a CentOS Stream 9 image
instead of 8. (CentOS Stream 8 is EOL in a few months.)
We also want to move Ceph's Dockerfile to CentOS Stream 9
(https://github.com/ceph/ceph-container/issues/2171)
What is the next step to accomplish these goals?
good morning,
i am trying to understand how ceph snapshots work.
i have read that snaps are COW, which means, if i am correct, that if a
new write updates an existing block on a volume, the "old" block is
copied to the snap before it is overwritten on the original volume, am
i right?
so, i created a volume, say 10 GB in size, empty, then created a snap.
so, coming to my doubt: why is the snap 10 GB in size? it should be 0,
because no new writes were done, am i right?
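in case it helps, this is roughly what i did (pool/image names are just
examples; i understand `rbd du` can show provisioned vs actually used
space, but correct me if that's wrong):

  # create an empty 10 GB image, then snapshot it
  rbd create mypool/myimage --size 10240
  rbd snap create mypool/myimage@mysnap
  # compare provisioned size vs space actually used
  rbd du mypool/myimage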
thank you VERY much for your time.
Hi folks,
Today we discussed:
- [casey] on dropping ubuntu focal support for squid
- Discussion thread:
https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/ONAWOAE7MPMT7CP6KH…
- Quincy doesn't build jammy packages, so quincy->squid upgrade tests
have to run on focal
- proposing to add jammy packages for quincy to enable that upgrade path
(from 17.2.8+)
- https://github.com/ceph/ceph-build/pull/2206
- Need to indicate that Quincy clusters must upgrade to jammy before
upgrading to Squid.
- T release name: https://pad.ceph.com/p/t
- Tentacle wins!
- Patrick to do release kick-off
- Cephalocon news?
- Planning is in progress; no news, as the knowledgeable parties were
not present for this meeting.
- Volunteers for compiling the Contributor Credits?
- https://tracker.ceph.com/projects/ceph/wiki/Ceph_contributors_list_maintena…
- Laura will give it a try.
- Plan for tagged vs. named Github milestones?
- Continue using priority order for qa testing: exhaust testing on
tagged milestone, then go to "release" catch-all milestone
- v18.2.2 hotfix release next
- Reef HEAD is still cooking with to-be-addressed upgrade issues.
- v19.1.0 (first Squid RC)
- two rgw features still waiting to go into squid
- cephfs quiesce feature to be backported
- Nightlies crontab to be updated by Patrick.
- v19.1.0 milestone: https://github.com/ceph/ceph/milestone/21
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
Hello everyone,
As part of the nvmeof monitor PR, a new dependency on gRPC/C++ was introduced to the ceph codebase.
No issues with centos9; however, on Ubuntu we stumbled upon an unexpected issue: we declared grpc as a ceph .deb package build dependency under debian/control, but the Ubuntu grpc packages do not include CMake files, see https://github.com/grpc/grpc/issues/29977
For instance, 'make check' runs are failing with "CMake Error at src/CMakeLists.txt:900 (find_package): gRPC". This issue renders the Ubuntu packages unusable.
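A quick way to confirm the packaging gap (the exact package names here are my assumption about what debian/control pulls in):

  # on the Ubuntu builder: the grpc -dev packages ship no CMake config
  # files, so find_package(gRPC) has nothing to find
  dpkg -L libgrpc++-dev | grep -i cmake   # expect no output
  # on CentOS Stream 9 the grpc-devel package does ship them
  rpm -ql grpc-devel | grep -i cmake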
Could anyone assist with fixing the Ubuntu grpc packages?
Thank you,
~baum
Redouane and Avan came to me with an issue with RGW-related metrics
that warrants a broader community discussion for all daemons. For more
information, the issue is being tracked at
https://tracker.ceph.com/issues/64598
Currently, the RGW-related metrics consumed by Prometheus are generated
by combining two parts:
1. The RGW perf counters: these counters are gathered by the
ceph-exporter by parsing the output of the rgw command `ceph counter
dump` (see the sketch after this list).
2. The RGW metadata (daemon, ceph-version, hostname, etc.): this
information is generated by the prometheus mgr module.
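For reference, the exporter-side scrape of part 1 looks roughly like this (the socket path is illustrative):

  # ask a running rgw for its labeled perf counters over the admin socket
  ceph --admin-daemon /var/run/ceph/ceph-client.rgw.foo.asok counter dump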
To combine the two parts, ceph-exporter uses a key field called
instance_id, which is generated as follows:
1. On the ceph-exporter side, the asok admin socket filename is parsed
to extract the daemon_id, from which the instance_id is derived (see
the sketch after this list).
2. On the prometheus-mgr module side, the orchestrator (cephadm or
rook) is called to get the daemon_id, and the instance_id is then
derived from it.
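To sketch step 1 (illustrative only, not the actual exporter code; the socket naming scheme shown is an assumption):

  # derive daemon_id by picking apart the socket filename
  sock=/var/run/ceph/ceph-client.rgw.foo.asok
  daemon_id=$(basename "$sock" .asok | sed 's/^ceph-client\.rgw\.//')  # -> foo
  # any orchestrator that names its sockets differently breaks this step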
This approach/design suffers from the following issues:
1. It creates a strong dependency between the prometheus-mgr module and
the orchestrator module (this has already caused issues in Rook
environments; ceph v18.2.1 metrics are completely broken because of
this).
2. instance_id handling on the ceph-exporter side is fragile, as it
relies on parsing the socket filename.
3. instance_id generation is error-prone, as it depends on how
daemon_ids are handled by the orchestrator module (which differs
between rook and cephadm).
The issue for RGW is that with certain orchestrators, for example Rook,
there is a mismatch between the instance IDs for the metrics emitted by
the exporter and the metrics from the prometheus manager module.
This has ramifications when running queries in Prometheus, where the
instance id is the primary key joining the metrics.
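For example, a dashboard-style join like the one below comes back empty when the two sides disagree on instance_id (metric names are from memory and may differ by release):

  # join counter samples with metadata labels on instance_id; a mismatch
  # makes the result set empty and the dashboard panels go blank
  promtool query instant http://prometheus:9090 \
    'ceph_rgw_req * on (instance_id) group_left (ceph_daemon, hostname) ceph_rgw_metadata'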
There are many options for solutions, and I'd be happy to hear the
community's thoughts.
Here are ours (from Avan, Redouane, and me):
1. We think daemon specific metrics meant for Prometheus should only be
emitted from one place, and that place should be the newer ceph-exporter.
2. We discussed having a command you can run on an admin socket that would
emit all of the metadata that is currently being sent by the manager
module. This way we're not relying on parsing file names anymore.
3. The prometheus-mgr module will still exist and will be used to emit
cluster-wide metrics.
The command could be something like `ceph who-am-i` that you would
expect to work on any daemon's admin socket, or something
daemon-specific like `ceph rgw-info`.
In other words, move the metadata source from the mgr-prometheus module
to the ceph-exporter and use this new command `ceph who-am-i` to get
it. This way, each ceph daemon will be self-sufficient and able to
provide the metadata needed to label/tag its metrics.
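To make this concrete, the exporter-side flow could look something like the sketch below; the command name and output fields are hypothetical, since this is the proposal rather than anything that exists today:

  # proposed: each daemon reports its own identity over the admin socket
  ceph --admin-daemon /var/run/ceph/ceph-client.rgw.foo.asok who-am-i
  # hypothetical output the exporter would use to label its counters:
  # { "daemon_type": "rgw", "daemon_id": "foo", "instance_id": "...",
  #   "hostname": "host1", "ceph_version": "18.2.x" }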
At the moment this affects at least two daemons, rgw and rbd-mirror,
but by following the approach above and introducing the new generic
command, we can apply the same pattern to other legacy (or new)
daemons.
Looking forward to hearing other thoughts,
Ali, Redouane, Avan
Hi Ceph Developers,
Ceph is going to be part of Google Summer of Code
<https://summerofcode.withgoogle.com/> (GSoC) this summer.
If you have any ideas for intern projects please add them to the pad below.
Projects are due by *Tuesday, March 12th* and will be added to
ceph.io as they come in.
https://pad.ceph.com/p/project-ideas
I will be reaching out to tech leads and other previous mentors within the
community over the next week.
Best,
Ali
Greetings!
I hope this message finds you well. I am writing to express my admiration for the innovative technology that your organisation is working on for GSoC 2024. I have been following your work and find it truly amazing.
Currently, I am committed to another organisation and may not be able to contribute to your projects at this time. However, I noticed that your organisation sometimes faces a shortage of proposals. If such a situation arises, please consider this email an expression of my interest.
I hold a strong interest in your organisation and would be more than willing to submit a comprehensive proposal should the need arise. I have also been an active contributor to open source projects and have decent programming experience.
Thank you. I look forward to the possibility of future interactions.
Best regards,
Akhilesh.