Hi Xie,
Find my response inline.
On Mon, Jun 3, 2019 at 11:11 PM <xie.xingguo(a)zte.com.cn> wrote:
>
> Josh & Neha,
>
> The partial recovery PR (https://github.com/ceph/ceph/pull/21722, I'd consider "incremental recovery" a better name, btw)
>
> was merged into master a couple of weeks ago, which is great.
>
> However, I don't really like the way it traces the modified content - it actually traces the unmodified/clean parts of the specific object instead,
>
> which is neither straightforward nor super efficient.
>
> Can we change the design to record dirty regions instead? I can think of two benefits of doing so:
In order to do any kind of partial (or incremental) recovery we need to
keep track of dirty/clean regions; the PR we merged chose to track
clean regions. If you can make a case for using dirty regions instead,
by a) coming up with an implementation and b) backing it up with
reasoning and numbers that prove it is better, we'll be happy to take a
look at it.
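To make the trade-off concrete: clean regions and dirty regions describe the
same information, since each is the complement of the other within the object's
extent. A minimal sketch in plain Python (illustration only, not the merged
PR's actual data structures) to show the relationship:

    def complement(intervals, object_size):
        """Return the gaps between sorted, non-overlapping (offset, length)
        intervals within [0, object_size), i.e. the complementary region set."""
        out, cursor = [], 0
        for off, length in sorted(intervals):
            if off > cursor:
                out.append((cursor, off - cursor))
            cursor = max(cursor, off + length)
        if cursor < object_size:
            out.append((cursor, object_size - cursor))
        return out

    # One 4 KiB overwrite at offset 8 KiB inside a 4 MiB object:
    dirty = [(8192, 4096)]
    clean = complement(dirty, 4 * 1024 * 1024)   # [(0, 8192), (12288, 4182016)]
    assert complement(clean, 4 * 1024 * 1024) == dirty

In this example the dirty set has one interval fewer than the clean set, which
is the size relationship point 1 below relies on.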
>
> 1. dirty_regions are smaller (dirty_regions.size() should always be clean_regions.size() - 1), which as a result can save us approximately
>
> 3000 (pg log entries) * 16 bytes * 100 (pgs per osd) ≈ 4.8 MB of memory per osd, as well as bluestore db space
>
> 2. we can re-use the existing modified_ranges of OpContext to track the data regions modified by an op
This sounds like a good idea to me.
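For reference, a quick back-of-the-envelope check of the memory estimate in
point 1, using the figures quoted above and assuming one extra 16-byte
(offset, length) interval per pg log entry:

    pg_log_entries = 3000      # per PG, figure quoted above
    bytes_per_interval = 16    # one (offset, length) pair of two 64-bit values
    pgs_per_osd = 100          # figure quoted above
    print(pg_log_entries * bytes_per_interval * pgs_per_osd / 1e6)  # ~4.8 MB per OSD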
Thanks,
Neha
>
>
> What do you think?
Hi,
I've asked about this in IRC already, but due to timezone foo ceph-devel might
be more effective.
I was wondering whether there is a plan or expectation that creating cephfs
subvolumes using a luminous ceph_volume_client on a nautilus cluster (or any
other sensible version combination) should work?
Currently this does not work, because the volume client uses the now removed
'ceph mds dump' command. The fix is straightforward, but depending on whether
this combination is supposed to work it could be more complex (essentially
making ceph_volume_client aware of the version of the ceph cluster).
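For illustration, here is a minimal sketch of the straightforward variant (the
helper name and the JSON field access are assumptions of mine, this is not the
actual ceph_volume_client code): prefer the newer 'fs dump' mon command and
fall back to 'mds dump' on clusters that still provide it.

    import json

    def get_mds_map(mon_command):
        """mon_command(prefix) is assumed to send a mon command with
        format=json and return the raw JSON output as a string."""
        try:
            # Newer clusters: "fs dump" replaces the removed "mds dump".
            fs_map = json.loads(mon_command("fs dump"))
            # Assumes a single filesystem; a real client would pick it by name.
            return fs_map["filesystems"][0]["mdsmap"]
        except Exception:
            # Older clusters that still ship "mds dump".
            return json.loads(mon_command("mds dump"))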
I'm aware of the current refactor of the volume client as a mgr module. Will we
backport this to luminous? Or is there an expectation that the volume client and
the ceph cluster have to run the same version?
Best,
Jan
--
Jan Fajerski
Engineer Enterprise Storage
SUSE Linux GmbH, GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
Hi list,
I came across some strange MDS behaviour recently where it is not possible to
start an MDS on a machine that happens to have the hostname "admin".
This turns out to be caused by this code
https://github.com/ceph/ceph/blob/master/src/common/entity_name.cc#L128 which is
called by ceph-mds here:
https://github.com/ceph/ceph/blob/master/src/ceph_mds.cc#L116.
Together with the respective systemd unit file (which passes "--id %i") this
prevents starting an MDS on a machine with the hostname admin.
Is this just old code and chance, or is there a reason behind this? The MDS is
the only daemon doing that, though I did not check whether other daemons have
similar checks.
Best,
Jan
--
Jan Fajerski
Engineer Enterprise Storage
SUSE Linux GmbH, GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)