Hey folks, now that Pacific is out I wanted to bring up docs backports.
Today, docs.ceph.com shows master by default, with an appropriate
warning at the top that it represents a development version.
Since the primary audience of the docs is users, not developers, I
suggest that we switch the default branch to the latest stable, i.e.
pacific, and apply the normal backport process to docs that are
relevant to the latest stable release as well.
To kickstart things, I'll prepare a backport of the existing
doc changes since the pacific release.
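For anyone who wants to pitch in, the mechanics would look roughly like this (branch and remote names are assumptions, and the result would still go through the usual backport PR review):

```sh
# hypothetical sketch: collect doc-only commits from master onto pacific
git checkout -b wip-doc-backports-pacific origin/pacific
git log --oneline origin/pacific..origin/master -- doc/   # list doc changes since the release
git cherry-pick -x <commit>...                            # pick the relevant ones, then open a PR
```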
What do folks think?
On Wed, May 19, 2021 at 11:32:04AM +0800, Zhi Zhang wrote:
> On Wed, May 19, 2021 at 11:19 AM Zhi Zhang <zhang.david2011(a)gmail.com> wrote:
> > On Tue, May 18, 2021 at 10:58 PM Mykola Golub <to.my.trociny(a)gmail.com>
> > wrote:
> > >
> > > Could you please provide the full rbd-nbd log? If it is too large for
> > > the attachment then may be via some public url?
> > ceph.rbd-client.log.bz2
> > <https://drive.google.com/file/d/1TuiGOrVAgKIJ3BUmiokG0cU12fnlQ3GR/view?usp=…>
> > I uploaded it to Google Drive. Please check it out.
> We found that the reader_entry thread got zero bytes when trying to read the
> nbd request header, after which rbd-nbd exited and closed the socket. But we
> haven't figured out why the read returned zero bytes.
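As background, a zero-byte read from a socket is not itself an error: it is how the kernel signals EOF, i.e. that the other end has closed the connection. A minimal illustration (plain sockets, not nbd):

```python
import socket

# A zero-byte read is the kernel's EOF signal: the peer closed its end.
a, b = socket.socketpair()
b.close()             # peer goes away...
data = a.recv(1024)   # ...so the next read returns zero bytes
assert data == b""    # EOF, not a timeout or I/O error
a.close()
```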
Ok. I was hoping to find some hint in the log about why the read from the
kernel could return without data, but I don't see it.
From experience this can happen when rbd-nbd gets stuck or is too slow,
so the kernel gives up after a timeout, but that looked different in
the logs AFAIR. Anyway, you can try increasing the timeout using the
rbd-nbd --timeout option (--io-timeout in newer versions). The default
is 30 sec.
If that does not help, you will probably find a clue by increasing the
kernel debug level for nbd (it seems this is possible to do).
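For example, something like the following (the image name is a placeholder, and the dynamic-debug step assumes a kernel built with CONFIG_DYNAMIC_DEBUG and debugfs mounted):

```sh
# map with a larger I/O timeout (newer rbd-nbd; older versions use --timeout)
rbd-nbd map --io-timeout 120 rbd/myimage

# enable verbose nbd messages in the kernel via dynamic debug
echo 'module nbd +p' > /sys/kernel/debug/dynamic_debug/control
```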
As nautilus is nearing its EOL, we are planning to do the next (maybe the
last) point release, nautilus 14.2.22, in the first half of June 2021.
If you have any code changes that should be included, please raise PRs and
add the labels "nautilus -batch1" and "needs-qa", so they can be tested
and merged in time for 14.2.22.
With Pacific (16.2.0) out the door, we have the persistent WriteBack
cache for RBD:
Has anybody run benchmarks with this RBD cache?
I'm mainly interested in the QD=1, bs=4k performance.
I don't have any suitable hardware available to run benchmarks on yet.
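For reference, a QD=1 bs=4k random-write run against an RBD image could be described with a job file along these lines (a sketch: the pool/image/client names are placeholders, and it assumes fio built with the rbd engine):

```ini
; qd1-4k.fio -- hypothetical job; 'rbd' pool and 'cache-test' image are placeholders
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=cache-test
rw=randwrite
bs=4k
iodepth=1
direct=1
runtime=60
time_based

[qd1-4k-write]
```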
We have two issues:
Both are caused by numpy not supporting Python sub-interpreters. Unfortunately,
the latter issue came up in the most recent Octopus validations. I suspect
it is only a matter of time until our users are affected by it.
Note that removing numpy is not easy, as kubernetes-client depends
(transitively) on numpy.
The performance meeting will be starting in about 70 minutes. Today the
only topic I have is a discussion of whether it makes sense to do
another CephFS submission for the ISC21 IO500 competition.
Please feel free to add your own topic!