On Wed, Jun 30, 2021 at 12:42 AM Manuel Holtgrewe <zyklenfrei(a)gmail.com> wrote:
> I'm sorry if I'm asking for the obvious or missing a previous discussion
> of this, but I could not find the answer to my question online. I'd be
> happy just to be pointed in the right direction.
>
> The cephfs-mirror tool in Pacific looks extremely promising. How does it
> work exactly? Is it based on files and (recursive) ctime, or rather on
> object information? Does it handle incremental changes (only) between
> snapshots?
We started out with a design that used recursive stats (rctime) to
identify changes in a directory tree, but soon hit a dead end with bugs
(in recursive stats, especially for snapshots) that were not
straightforward to fix. Milind (cc'd) has made some good progress on
fixing those. So, the mirror daemon design progressed to identifying
changes between snapshots based on file mtime. This means a full walk
over a directory tree to identify differences, which is not very
performant for huge data sets. Moreover, this incremental mechanism is
used between the current snapshot (to-be-synced) and the previous
snapshot -- since the previous snapshot contents are already present on
the remote file system (for a directory root), the mirror daemon can do
a local snapshot comparison to identify the changes between the two.
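To make the full-walk idea concrete, here is a minimal sketch in Python
(not the actual cephfs-mirror implementation, which is C++ on top of
libcephfs; all names here are illustrative) of diffing two read-only
snapshot trees of the same directory by per-file mtime:

```python
# Sketch only: compare two snapshots of a directory tree by walking both
# and diffing per-file mtimes. Paths and helper names are illustrative.
import os

def snapshot_diff(prev_snap: str, cur_snap: str):
    """Return (changed_or_new, deleted) paths relative to the snapshot roots."""
    def index(root):
        files = {}
        for dirpath, _dirs, names in os.walk(root):
            for name in names:
                full = os.path.join(dirpath, name)
                rel = os.path.relpath(full, root)
                files[rel] = os.stat(full).st_mtime
        return files

    prev, cur = index(prev_snap), index(cur_snap)
    # A path counts as changed if it is new or its mtime differs.
    changed = [p for p, mtime in cur.items()
               if p not in prev or prev[p] != mtime]
    deleted = [p for p in prev if p not in cur]
    return changed, deleted
```

Note that this has to visit every file in both snapshots, which is why
the full walk scales poorly for huge trees, as mentioned above.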
> There is an issue related to this that mentions recursive ctime. But
> that would mean that users could "rsync -a" data to the file system and
> this would not get synchronized.
Do you mean the rctime not being synchronized?
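As a side note on the "rsync -a" concern: a timestamp-preserving copy
carries the source's old mtime, while the ctime of the newly created
inode is still set at copy time. A small illustration, purely a sketch,
with shutil.copy2 standing in for rsync -a:

```python
# Illustration of the "rsync -a" concern: a timestamp-preserving copy
# (shutil.copy2 here, analogous to rsync -a) carries the source mtime,
# so an mtime-only comparison cannot tell that the copy is new data.
import os
import shutil
import tempfile

base = tempfile.mkdtemp()
src = os.path.join(base, "src.txt")
dst = os.path.join(base, "dst.txt")

with open(src, "w") as f:
    f.write("payload")
old = 946684800  # an old timestamp (2000-01-01 UTC)
os.utime(src, (old, old))

shutil.copy2(src, dst)  # copies data and metadata, including mtime

assert os.stat(dst).st_mtime == old  # the copy looks "old" by mtime
assert os.stat(dst).st_ctime > old   # but ctime still reflects the copy
```

This is why ctime-based detection would still notice such a copy, while
a purely mtime-based heuristic could miss it.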
> I have good experience with ZFS, which is able to identify the changes
> between two snapshots A and B and then transfer only those changes (at
> a sub-file level, on the ZFS equivalent of blocks, to my understanding)
> to another server with the same file system that is in the exact state
> of snapshot A. Does cephfs-mirror work the same way?
Not really. The mirror daemon uses libcephfs to talk to local and
remote file systems. There is no mirror daemon instance running on the
remote cluster to "receive" data.
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io