There will be a DocuBetter meeting on Thursday, 25 Feb 2021 at 0200 UTC.
We will discuss the Google Season of Docs proposals, the cephadm docs
reorganization being undertaken by Sebastian, and the Teuthology Guide
cleanup.
DocuBetter Meeting -- APAC
25 Feb 2021
0200 UTC
https://bluejeans.com/908675367
https://pad.ceph.com/p/Ceph_Documentation
Greetings folks, next week's meeting is at an APAC-friendly time: 2100 ET on Mar 3rd.
We've got 3 topics on the agenda so far:
preloaded EC profiles - kbader -
https://pad.ceph.com/p/preloaded-ec+rados-profiles
In general, the idea is to make it easier to tune Ceph out of the box
with profiles of common settings across the Ceph stack.
build/test optimizations - joshd/nojha -
https://pad.ceph.com/p/test-optimizations-2021
Brainstorming about ways to approach efficient use of the sepia lab
and improve testing and build times for developers. This will be a
gsoc/outreachy project this summer.
libcephsqlite - batrick - https://github.com/ceph/ceph/pull/39191
A single-client SQL interface for rados - the first application is as
a persistence interface for mgr modules.
See you there!
Josh
Please add the labels (label:nautilus-batch-1 label:needs-qa) to PRs needed
for the next nautilus point release ASAP, as we are getting ready to lock
the repo and start QE validation.
Thx
YuriW
Details of this release summarized here:
https://tracker.ceph.com/issues/49241#note-1
While some suites are still rerunning failed jobs, we are seeking approvals
and reviews:
rados - Neha, Sebastian
rgw - Casey
rbd - Jason
krbd - Ilya
fs, kcephfs, multimds - Patrick
upgrade/client-upgrade-luminous-octopus - Josh
upgrade/client-upgrade-mimic-octopus - Josh
ceph-volume - Jan, Dimitri
Thx
YuriW
Hi everyone,
I want to invite you to apply to an internship program called Outreachy!
Outreachy provides three-month internships to work in Free and Open
Source Software (FOSS). Outreachy internship projects may include
programming, user experience, documentation, illustration, graphical
design, or data science. Interns often find employment after their
internship with Outreachy sponsors or jobs that use the skills they
learned during their internship.
Ceph has had ten projects submitted to Outreachy since 2018. Now we can
submit more projects for the May-August 2021 round!
Project ideas can be coordinated on the etherpad:
https://pad.ceph.com/p/project-ideas
Projects need to be submitted by the mentor here for approval:
https://www.outreachy.org/communities/cfp/ceph/
Outreachy internships run twice a year. The internships run from May to
August and December to March. Interns are paid a stipend of $6,000 USD
for the three months of work.
Outreachy internships are entirely remote and are open to applicants
around the world. Interns work remotely with experienced mentors. We
expressly invite women (both cis and trans), trans men, and genderqueer
people to apply. We also expressly invite applications from residents
and nationals of the United States of any gender who are Black/African
American, Hispanic/Latin@, Native American/American Indian, Alaska
Native, Native Hawaiian, or Pacific Islander. Anyone who faces
under-representation, systematic bias, or discrimination in their
country's technology industry is invited to apply. More details and
eligibility criteria can be found here:
https://www.outreachy.org/apply/eligibility/
The next Outreachy internship round is from May 24, 2021, until Aug. 24,
2021.
Initial applications are currently open. Initial applications are due on
Feb. 22, 2021, at 4 pm UTC. Apply today:
https://www.outreachy.org/apply/
Applying to Outreachy is a little different than other internship
programs. You'll fill out an initial application. If your initial
application is approved, you'll move onto the five-week contribution
phase. During the contribution phase, you'll make contact with project
mentors and contribute to the project. Outreachy organizers have found
that the strongest applicants contact mentors early, ask many
questions, and continually submit contributions throughout the
contribution phase.
Please let Ali or me know if you have any questions about the program.
The Outreachy organizers (Karen Sandler, Sage Sharp, Marina
Zhurakhinskaya, Cindy Pallares, and Tony Sebro) can all be reached
through our contact form:
https://www.outreachy.org/contact/contact-us/.
We hope you'll help us spread the word about Outreachy internships!
--
Mike Perez
Hey all,
As of today, we will no longer be building packages for upstream
ceph.git branches each time a PR is merged. Instead, we will build each
active (master, pacific, octopus, nautilus) branch twice daily ONLY if
there were changes since the last build.
https://github.com/ceph/ceph-build/pull/1743 makes this change.
This job is now disabled:
https://jenkins.ceph.com/view/all/job/ceph-dev-trigger/
This job will handle ceph.git dev builds:
https://jenkins.ceph.com/job/ceph-dev-nightly/
The purpose of this change is to reduce the load on the CI builders.
There is no change to dev branches in ceph-ci.git.
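The "build only if there were changes" check can be sketched as a simple
SHA comparison per branch (a hypothetical illustration; the actual logic
lives in the ceph-build PR above):

```python
def branches_to_build(current_heads: dict, last_built: dict) -> list:
    """Return the branches whose head commit moved since the last
    nightly build. Both arguments map branch name -> commit SHA;
    a branch with no recorded SHA is treated as changed."""
    return [branch for branch, sha in current_heads.items()
            if last_built.get(branch) != sha]

# e.g. only pacific changed since the previous twice-daily run
pending = branches_to_build(
    {"master": "abc1", "pacific": "def9", "octopus": "aa00"},
    {"master": "abc1", "pacific": "def2", "octopus": "aa00"})
# pending == ["pacific"]
```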
--
David Galloway
Senior Systems Administrator
Ceph Engineering
Hi Folks,
The performance meeting will start in about 25 minutes! This week, Kyle
Bader would like to talk about some of the work he's been doing for high
performance deployments and also discuss EC and cache tiering. Hope to
see you there!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Mark
Hey All,
This Sunday (21/2) I plan to upgrade both jenkins.ceph.com and
2.jenkins.ceph.com to version 2.263.4
The expected downtime is approximately one hour
I will send another notice once it's done
Thanks,
--
Adam Kraitman
Systems Administrator
Ceph Engineering
IRC: akraitma
Hi Ceph Developers,
Ceph is applying to be part of Google Summer of Code
<https://summerofcode.withgoogle.com/> (GSoC) and is going to be part of
Outreachy <http://outreachy.org> this summer.
If you have any ideas for intern projects please add them to the pad below.
Projects are due by Friday, March 5th, though we want to get a few up on
the website earlier.
https://pad.ceph.com/p/project-ideas
I will be reaching out to tech leads and other previous mentors within the
community over the next week.
Best,
Ali
Hi everyone,
One of the things that ceph-ansible does that cephadm does not is
automatically adjust the osd_memory_target based on the amount of memory
and number of OSDs on the host. The approach is pretty simple: if the
not_hci flag is set, we take some factor (0.7) of total memory, divide by
the OSD count, and set osd_memory_target accordingly.
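As a rough sketch (not ceph-ansible's actual code), that heuristic works
out to something like:

```python
def osd_memory_target(total_mem_bytes: int, osd_count: int,
                      factor: float = 0.7) -> int:
    """Hypothetical sketch of the ceph-ansible heuristic: reserve a
    fraction of host memory for OSDs and split it evenly among them."""
    if osd_count <= 0:
        raise ValueError("need at least one OSD")
    return int(total_mem_bytes * factor) // osd_count

# e.g. a 64 GiB host with 8 OSDs gets roughly 5.6 GiB per OSD
target = osd_memory_target(64 * 2**30, 8)
```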
We'd like to bring this "hands off" behavior over to cephadm in some form,
ideally one that is a bit more sophisticated. The current proposal is
written up in this pad:
https://pad.ceph.com/p/autotune_memory_target
The key ideas:
- An autotune_memory_target config option (boolean) can be enabled for a
specific daemon, class, host, or the whole cluster
- It will apply not just to OSDs but other daemons. Currently mon and osd
have a memory_target config; MDS has a similar setting for its cache that
we can work with. The other daemons don't have configurable memory
footprints (for the most part) but we can come up with some estimate of
their *usage* based on some conservative estimates, maybe with cluster size
factored in.
- The autotuner would look at other (ceph) daemons on the host that aren't
being tuned and deduct that memory from the total available, and then divvy
up the remaining memory among the autotuned daemons. Initially we'd only
support autotuning osd, mon, and mds.
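A minimal sketch of the deduct-and-divvy step, with illustrative names and
a flat even split (the actual cephadm design may estimate and weight
daemons differently):

```python
AUTOTUNED = {"osd", "mon", "mds"}  # per the proposal, initially supported

def divvy_memory(total_bytes: int, daemons: list) -> dict:
    """daemons: list of (name, daemon_type, est_usage_bytes) tuples.
    est_usage_bytes is the conservative estimate for daemons that are
    not autotuned; it is ignored for autotuned daemons.
    Returns a memory target per autotuned daemon."""
    tuned = [name for name, dtype, _ in daemons if dtype in AUTOTUNED]
    # deduct memory for daemons we are not tuning...
    reserved = sum(usage for _, dtype, usage in daemons
                   if dtype not in AUTOTUNED)
    if not tuned:
        return {}
    # ...and divvy the remainder evenly among the autotuned ones
    share = max(0, total_bytes - reserved) // len(tuned)
    return {name: share for name in tuned}

# e.g. a 32 GiB host: 2 OSDs, 1 mon, plus an rgw we estimate at 1 GiB
targets = divvy_memory(32 * 2**30,
                       [("osd.0", "osd", 0), ("osd.1", "osd", 0),
                        ("mon.a", "mon", 0), ("rgw.a", "rgw", 2**30)])
```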
There are some open questions about how to implement some of the details,
though.
For instance, the current proposal doesn't record what the "tuned" memory
is in ceph anywhere, but rather would look at what 'cephadm ls' says about
the deployed containers' limits, compare that to what it thinks should
happen, and redeploy daemons as needed.
It might also make sense to allow these targets to be expressed via the
'service spec' yaml, which is more or less equivalent to the CRs in Rook
(which currently do allow osd memory target to be set).
I'm not entirely sure if this should be an orchestrator function (i.e.,
work with both rook and cephadm) or cephadm-only. It would be helpful to
hear whether this sort of function would make any sense in a rook
cluster... are there cases where the kubernetes nodes are dedicated to ceph
and it makes sense to magically slurp up all of the hosts' RAM? We
probably don't want to be in the business of checking with kubernetes about
unused host resources and adjusting things whenever the k8s scheduler does
something on the host...
sage