Hey folks, now that Pacific is out I wanted to bring up docs backports.
Today, docs.ceph.com shows master by default, with an appropriate
warning at the top that it represents a development version.
Since the primary audience of the docs is users, not developers, I
suggest that we switch the default branch to the latest stable, i.e.
pacific, and apply the normal backport process to docs that are
relevant to the latest stable release as well.
To kickstart things, I'll prepare a backport of the existing
doc changes since the pacific release.
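The backport flow described above can be sketched as a plain cherry-pick sequence (branch and remote names and the SHA placeholder are illustrative, not a prescribed workflow):

```shell
# illustrative sketch of a docs backport: cherry-pick doc commits from
# master onto the pacific branch (remote/branch names are assumptions)
git checkout -b wip-doc-backport-pacific upstream/pacific
git cherry-pick -x <sha-of-doc-commit>    # -x records the original commit id
git push origin wip-doc-backport-pacific  # then open a PR against pacific
```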
What do folks think?
(Forwarding to the correct list, I got the kernel ceph-devel by
mistake, sorry to anyone getting this twice!)
---------- Forwarded message ---------
From: John Spray <jcspray(a)gmail.com>
Date: Mon, Aug 30, 2021 at 4:09 PM
Subject: Rust async bindings for rados
Earlier in the year I was having fun learning Rust, and one of the
things I did was to extend the existing ceph-rust crate with
async+streams support. Rust's async stream support is very nice, and
makes writing small+fast gateway-like things quite low effort.
I'm not actively working on this any more but I've parked it in a PR
here for anyone who's interested
https://github.com/ceph/ceph-rust/pull/79 -- I thought I'd drop a note
to the list just in case it's of interest.
All the best,
On Wed, May 19, 2021 at 11:32:04AM +0800, Zhi Zhang wrote:
> On Wed, May 19, 2021 at 11:19 AM Zhi Zhang <zhang.david2011(a)gmail.com> wrote:
> > On Tue, May 18, 2021 at 10:58 PM Mykola Golub <to.my.trociny(a)gmail.com>
> > wrote:
> > >
> > > Could you please provide the full rbd-nbd log? If it is too large for
> > > the attachment then may be via some public url?
> > ceph.rbd-client.log.bz2
> > <https://drive.google.com/file/d/1TuiGOrVAgKIJ3BUmiokG0cU12fnlQ3GR/view?usp=…>
> > I uploaded it to Google Drive. Please check it out.
> We found the reader_entry thread got zero bytes when trying to read the nbd
> request header, then rbd-nbd exited and closed the socket. But we haven't
> figured out why it read zero bytes.
Ok. I was hoping to find some hint in the log, why the read from the
kernel could return without data, but I don't see it.
From experience it could happen when rbd-nbd got stuck or was too
slow, so the kernel failed after a timeout, but that looked different in
the logs AFAIR. Anyway, you can try increasing the timeout using the
rbd-nbd --timeout (--io-timeout in newer versions) option. The default
is 30 sec.
If that does not help, you may find a clue by increasing the kernel
debug level for nbd (it seems that is possible to do).
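As a sketch of both suggestions (pool/image names are placeholders; the dynamic-debug step assumes debugfs is mounted and the kernel was built with CONFIG_DYNAMIC_DEBUG):

```shell
# map with a larger I/O timeout; the option is --timeout in older
# rbd-nbd releases and --io-timeout in newer ones
rbd-nbd map --io-timeout 120 mypool/myimage

# enable dynamic debug output for the nbd kernel module (as root)
echo 'module nbd +p' > /sys/kernel/debug/dynamic_debug/control
dmesg -w   # watch for nbd messages while reproducing the problem
```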
One week left for talk submissions for linux.conf.au 2022 (virtualized
for the second time, January 14-16 2022).
-------- Forwarded Message --------
Subject: [lca-announce] linux.conf.au 2022 - Call for Sessions now open!
Date: Tue, 10 Aug 2021 08:07:01 +1000
From: linux.conf.au Announcements <lca-announce(a)lists.linux.org.au>
To: LCA Announce <lca-announce(a)lists.linux.org.au>, Linux Aus
The linux.conf.au 2022 Call for Sessions is now open. It will stay open
until 11:59pm 5 September 2021 Anywhere on Earth (AoE).
The theme for 2022 is 'community'.
After the challenges of the past year, our community has explored ways
to rebuild the connections we were used to having face-to-face. Many
Open Source communities already had their roots online, so how can this
be applied to other areas, and how can we keep people interested as they
shift to living even more of their lives online? How can we keep in
contact with connections in other countries in a virtual way?
If you have ideas or developments you'd like to share with the open
source community at linux.conf.au, we'd love to hear from you.
## Call for Sessions
The main conference runs on Saturday 15 and Sunday 16 January, with
multiple streams catering for a wide range of interest areas.
We invite you to submit a session proposal on a topic you are familiar
with via our proposals portal at
Talks are 45 minute presentations on a single topic presented in lecture
format. Each accepted talk will receive one ticket to attend the conference.
## Call for Miniconfs
We are pleased to announce we will again have four Miniconfs on the
first day, Friday 14 January 2022. These are:
* GO GLAM meets Community
* Open Hardware
* System Administration
Based on feedback over the past few years, we will be introducing two
major changes for Miniconfs in 2022: all presentations will be 30 minutes
long, and each accepted presentation will receive one ticket to the
conference.
The Call for Miniconf Sessions is now open on our website, so we
encourage you to submit your proposal today. Check out our Miniconfs
page at https://lca2022.linux.org.au/programme/miniconfs/ for more
information.
## No need to book flights or hotels
Don't forget: the conference will be an online, virtual experience. This
means our speakers will be beaming in from their own homes or workplaces.
The organising team will be able to help speakers with their tech
set-up. Each accepted presenter will have a tech check prior to the
event to smooth out any difficulties and there will be an option to
## Introducing LCA Local
We know many of you have missed the experience of a face-to-face
conference and in 2022 we are launching LCA Local.
While our conference will be online, we are inviting people to join
others in their local area and participate in the conference together.
More information and an expression of interest form for LCA Local will
be released soon.
## Have you got an idea?
You can find out how to submit your session proposals at
https://lca2022.linux.org.au/programme/proposals/. If you have any other
questions, you can contact us via email at contact(a)lca2022.linux.org.au.
The session selection committee is looking forward to reading your
submissions. We would also like to thank them for coming together and
volunteering their time to help put this conference together.
linux.conf.au 2022 Organising Team
Read this online at https://lca2022.linux.org.au/news/call-for-sessions/
I'd like to share a few updates on completed/ongoing RADOS and Crimson
projects with the community.
Significant PRs merged
- Remove allocation metadata from RocksDB - should significantly
improve small write performance
- PG Autoscaler scale-down profile - default in new clusters for
better performance out of the box (pending pacific release)
- Support in msgr 2.0 for on-wire compression for osd-osd communication
- BlueStore deferred writes behavior for large writes on spinners
(pending pacific release)
- Fix for the ceph-bluestore-tool reshard option (pending pacific release)
- New perf channel in telemetry to capture performance metrics (work in progress)
- Rook integration with Crimson has started, Radek working on fixing
issues as they come up
- Seastore: LBA rewrite merged
- Seastore: work continues on extent manager PR (prerequisite for
multi-device, tiering, and pmem)
- Seastore: several improvements to metrics, will be important for
evaluating options for performance improvements
- More interruptible_future stabilization
- More progress on QoS for background activities in the OSD. The
process of setting appropriate mclock parameters has been automated
and now happens once during OSD startup (pending pacific release)
- Ongoing work to capture better scrub statistics needed for QoS
- Work on client vs client QoS has started
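For reference, the automated mclock tuning mentioned above hangs off the osd_mclock_profile option; a minimal sketch of inspecting and overriding it (osd.0 is a placeholder; the built-in profile names are from the Pacific-era docs):

```shell
# show the active mclock profile on a running osd
ceph config get osd.0 osd_mclock_profile

# switch all osds to favor client i/o over background work;
# other built-in profiles: balanced, high_recovery_ops, custom
ceph config set osd osd_mclock_profile high_client_ops
```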
Neha & Sam
is there any news or plans to support the current Debian stable?
It was released at the beginning of August, and the question is: should I
use a different distro, or can I wait for packages?
in the rgw refactoring meetings, we've been discussing ways to improve
space utilization for workloads of mixed object sizes
i think it's worth bringing this up in Mark's performance call as well,
to explore other options from the osd/librados perspective
most of our discussion so far has centered around ways to use s3's
storage classes (which rgw maps to different rados pools) as a way to
direct object uploads to an appropriately-configured pool depending on
the object's size. for example, all objects under 1M would be assigned
to a SMALL storage class, while the rest go to LARGE. doing this
directly is tricky, because http requests don't always tell us the
full object size up front. this strategy could also lead to confusion
in s3 applications, because the storage class is a visible part of the
protocol and clients expect to have control over it
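for illustration, the storage class is just a request header in the s3 protocol, so any client that lets you set x-amz-storage-class can opt into a class explicitly (the SMALL class name, the endpoint, and anonymous access are all assumptions here):

```shell
# illustrative only: unauthenticated PUT against a test rgw endpoint;
# SMALL is a hypothetical storage class name
curl -X PUT --data-binary @tiny.txt \
  -H "x-amz-storage-class: SMALL" \
  http://rgw.example.com:8000/mybucket/tiny.txt
```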
you can read more about storage classes and rgw pool placement in
https://docs.ceph.com/en/latest/radosgw/placement/. essentially, each
bucket chooses a 'placement target' on creation, and that placement
target defines which storage classes are available for its object
uploads. each storage class defines the rados pool to use for the
object data. each placement target has a default storage class called
STANDARD which is used for object uploads that don't specify a storage
class. this STANDARD pool is also used to store all of the bucket's
head objects, regardless of their storage class. objects uploaded to
the STANDARD storage class store up to 4MB of data in the head object,
and the rest in tail objects of the same pool. objects uploaded to
other storage classes only store metadata in the head object - all of
their data goes in tail objects in their own pool
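a rough sketch of wiring up such a class with radosgw-admin, following the placement docs linked above (zonegroup/zone/pool names are assumptions):

```shell
# advertise a SMALL storage class in the zonegroup's default placement
radosgw-admin zonegroup placement add \
    --rgw-zonegroup default \
    --placement-id default-placement \
    --storage-class SMALL

# back it with its own rados pool in the zone
radosgw-admin zone placement add \
    --rgw-zone default \
    --placement-id default-placement \
    --storage-class SMALL \
    --data-pool default.rgw.small.data
```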
in today's call, Yehuda made the observation that for this use case,
it would be ideal to put all head objects in a pool with small
min_alloc_size and all tails in larger-sized pools. this way, even
though we don't necessarily know the full object size up front, we'd
still place all small objects in the correctly-sized pool, with larger
objects spilling over into their own tail pools
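one way to get a small-min_alloc_size pool today is at osd provisioning time, since bluestore's allocation unit is fixed when the osd is created; a hedged sketch (osd ids, the device class, and pool names are assumptions):

```shell
# bluestore_min_alloc_size_* only takes effect at osd creation, so set
# it before provisioning the osds meant to hold small objects
ceph config set osd bluestore_min_alloc_size_hdd 4096

# tag those osds with a custom device class and build a crush rule for it
# (any existing class must be removed first with rm-device-class)
ceph osd crush rm-device-class osd.10 osd.11 osd.12
ceph osd crush set-device-class small-alloc osd.10 osd.11 osd.12
ceph osd crush rule create-replicated small-rule default host small-alloc
ceph osd pool create default.rgw.small.data 64 64 replicated small-rule
```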
this doesn't quite match up with our existing implementation though,
because we put the STANDARD storage class' tail objects in the same
pool as the head objects, and other storage classes only store data in
their own tail pools
so i suggested an additional option to specify a 'head object pool' in
the placement target that's independent of its storage classes. when
specified, all head objects would be written to that pool instead,
along with a configurable amount of data. benefits of this strategy
would be that it preserves the storage class behavior that clients
expect, and enables an optional configuration for a space-optimized
head object pool
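to make the proposal concrete, the zone placement json might grow a head-pool field alongside the existing ones; the fields marked hypothetical below are not actual rgw options today:

```shell
radosgw-admin zone get --rgw-zone default
# "placement_pools": [
#   { "key": "default-placement",
#     "val": {
#       "index_pool": "default.rgw.buckets.index",
#       "data_extra_pool": "default.rgw.buckets.non-ec",
#       "storage_classes": {
#         "STANDARD": { "data_pool": "default.rgw.buckets.data" }
#       },
#       "head_pool": "default.rgw.buckets.head",   # hypothetical
#       "head_data_size": 65536                    # hypothetical
#     } } ]
```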