Greetings folks, next week's meeting is at the APAC-friendly time: 2100 ET on Mar 3rd
We've got 3 topics on the agenda so far:
preloaded EC profiles - kbader -
In general the idea is to make it easier to tune ceph out of the box
with profiles of common settings across the ceph stack.
build/test optimizations - joshd/nojha -
Brainstorming about ways to approach efficient use of the sepia lab
and improve testing and build times for developers. This will be a
gsoc/outreachy project this summer.
libcephsqlite - batrick - https://github.com/ceph/ceph/pull/39191
A single-client SQL interface for rados - the first application is as
a persistence interface for mgr modules.
See you there!
Details of this release are summarized here:
While some suites are still rerunning failed jobs, we are seeking approvals for:
rados - Neha, Sebastian
rgw - Casey
rbd - Jason
krbd - Ilya
fs, kcephfs, multimds - Patrick
upgrade/client-upgrade-luminous-octopus - Josh
upgrade/client-upgrade-mimic-octopus - Josh
ceph-volume - Jan, Dimitri
I want to invite you to apply to an internship program called Outreachy!
Outreachy provides three-month internships to work in Free and Open
Source Software (FOSS). Outreachy internship projects may include
programming, user experience, documentation, illustration, graphical
design, or data science. Interns often find employment after their
internship with Outreachy sponsors or jobs that use the skills they
learned during their internship.
Ceph has had ten projects submitted in Outreachy since 2018. Now we can
submit more projects for our May-August 2021 round!
Projects can be coordinated on the etherpad:
Projects need to be submitted by the mentor here for approval:
Outreachy internships run twice a year. The internships run from May to
August and December to March. Interns are paid a stipend of $6,000 USD
for the three months of work.
Outreachy internships are entirely remote and are open to applicants
around the world. Interns work remotely with experienced mentors. We
expressly invite women (both cis and trans), trans men, and genderqueer
people to apply. We also expressly invite applications from residents
and nationals of the United States of any gender who are Black/African
American, Hispanic/Latin@, Native American/American Indian, Alaska
Native, Native Hawaiian, or Pacific Islander. Anyone who faces
under-representation, systematic bias, or discrimination in their
country's technology industry is invited to apply. More details and
eligibility criteria can be found here:
The next Outreachy internship round is from May 24, 2021, until Aug. 24, 2021.
Initial applications are currently open. Initial applications are due on
Feb. 22, 2021, at 4 pm UTC. Apply today:
Applying to Outreachy is a little different than other internship
programs. You'll fill out an initial application. If your initial
application is approved, you'll move onto the five-week contribution
phase. During the contribution phase, you'll make contact with project
mentors and contribute to the project. Outreachy organizers have found
that the most successful applicants contact mentors early, ask many
questions, and continually submit contributions throughout the
contribution phase.
Please let Ali or me know if you have any questions about the program.
The Outreachy organizers (Karen Sandler, Sage Sharp, Marina
Zhurakhinskaya, Cindy Pallares, and Tony Sebro) can all be reached
through our contact form:
We hope you'll help us spread the word about Outreachy internships!
This Sunday (21/2) I plan to upgrade both jenkins.ceph.com and
2.jenkins.ceph.com to version 2.263.4
The expected downtime is approximately one hour
I will send another notice once it's done
Hi Ceph Developers,
Ceph is applying to be part of Google Summer of Code
<https://summerofcode.withgoogle.com/> (GSoC) and is going to be part of
Outreachy <http://outreachy.org> this summer.
If you have any ideas for intern projects please add them to the pad below.
Projects are due by Friday, March 5th, though we want to get a few up on
the website earlier.
I will be reaching out to tech leads and other previous mentors within the
community over the next week.
One of the things that ceph-ansible does that cephadm does not is
automatically adjust the osd_memory_target based on the amount of memory
and number of OSDs on the host. The approach is pretty simple: if the
not_hci flag is set, then we take some factor (.7) * total memory and
divide by osd count, and set osd_memory_target accordingly.
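For illustration, that calculation amounts to something like the sketch below
(a hypothetical helper with made-up names, not the actual ceph-ansible code;
only the not_hci gate and the 0.7 factor come from the description above):

    # Hypothetical sketch of the ceph-ansible-style calculation, not the
    # actual ceph-ansible code.
    def osd_memory_target(total_memory_bytes, osd_count, factor=0.7, not_hci=True):
        # Only autotune in the non-hyperconverged case; otherwise leave the
        # default osd_memory_target alone.
        if not not_hci or osd_count == 0:
            return None
        # Take a fraction of host memory and split it evenly across the
        # OSDs on the host.
        return int(total_memory_bytes * factor / osd_count)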
We'd like to bring this "hands off" behavior over to cephadm in some form,
ideally one that is a bit more sophisticated. The current proposal is
written up in this pad:
The key ideas:
- An autotune_memory_target config option (boolean) can be enabled for a
specific daemon, class, host, or the whole cluster
- It will apply not just to OSDs but to other daemons. Currently mon and osd
have a memory_target config; MDS has a similar setting for its cache that
we can work with. The other daemons don't have configurable memory
footprints (for the most part), but we can come up with a conservative
estimate of their *usage*, perhaps scaled with cluster size
- The autotuner would look at other (ceph) daemons on the host that aren't
being tuned and deduct that memory from the total available, and then divvy
up the remaining memory among the autotuned daemons. Initially we'd only
support autotuning osd, mon, and mds.
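To make the divvying step above concrete, a rough sketch follows (hypothetical
names, not the cephadm implementation; a real version might weight by daemon
type rather than splitting evenly):

    # Hypothetical sketch of the proposed divvying logic, not the cephadm
    # implementation.
    def autotune_targets(total_memory_bytes, untuned_usage_bytes, autotuned_daemons):
        # untuned_usage_bytes: estimated memory of co-located daemons we are
        # not tuning (e.g. rgw, mgr), deducted from the host total.
        # autotuned_daemons: names of the osd/mon/mds daemons being autotuned,
        # e.g. ['osd.0', 'osd.1', 'mon.a'].
        remaining = total_memory_bytes - sum(untuned_usage_bytes)
        if remaining <= 0 or not autotuned_daemons:
            return {}
        per_daemon = remaining // len(autotuned_daemons)
        return {name: per_daemon for name in autotuned_daemons}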
There are some open questions about how to implement some of the details.
For instance, the current proposal doesn't record what the "tuned" memory
is in ceph anywhere, but rather would look at what 'cephadm ls' says about
the deployed containers' limits, compare that to what it thinks should
happen, and redeploy daemons as needed.
It might also make sense to allow these targets to be expressed via the
'service spec' yaml, which is more or less equivalent to the CRs in Rook
(which currently do allow osd memory target to be set).
I'm not entirely sure if this should be an orchestrator function (i.e.,
work with both rook and cephadm) or cephadm-only. It would be helpful to
hear whether this sort of function would make any sense in a rook
cluster... are there cases where the kubernetes nodes are dedicated to ceph
and it makes sense to magically slurp up all of the hosts' RAM? We
probably don't want to be in the business of checking with kubernetes about
unused host resources and adjusting things whenever the k8s scheduler does
something on the host...