On Mon, Jun 8, 2020 at 6:18 PM Hans van den Bogert <hansbogert(a)gmail.com> wrote:
Rather unsatisfactory not to know where it really went wrong, but I completely
removed all traces of the peer settings and auth keys, redid the peer
bootstrap, and this did result in a working sync.
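For the record, the reset amounted to roughly the following; the peer UUID and
site names below are placeholders, not the actual values from my clusters:

```shell
# On the secondary cluster: remove the stale peer
# (the peer UUID comes from "rbd mirror pool info replicapool")
rbd mirror pool peer remove replicapool <peer-uuid>

# On the primary cluster: generate a fresh bootstrap token
rbd mirror pool peer bootstrap create --site-name site-a replicapool > bootstrap-token

# Back on the secondary cluster: import the token
rbd mirror pool peer bootstrap import --site-name site-b \
    --direction rx-only replicapool bootstrap-token
```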
My initial mirror config stemmed from Nautilus and was configured for
journal-based mirroring at the pool level. Perhaps transitioning to an
image-based snapshot config has some problems? But that's just guessing.
Perhaps -- snapshot-based mirroring is not supported on a pool
configured for journal-based mirroring.
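If the pool is still in pool-level (journal) mirroring mode, the move to
snapshot-based mirroring would look something like this -- a sketch only, with
your pool name and an illustrative image name:

```shell
# Check which mirroring mode the pool is currently in
rbd mirror pool info replicapool

# Snapshot-based mirroring requires per-image ("image") mode on the pool ...
rbd mirror pool enable replicapool image

# ... and explicit enablement in snapshot mode on each image
rbd mirror image enable replicapool/<image-name> snapshot
```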
Thanks for the follow-up though!
Regards,
Hans
On Mon, Jun 8, 2020, 13:38 Jason Dillaman <jdillama(a)redhat.com> wrote:
>
> On Sun, Jun 7, 2020 at 8:06 AM Hans van den Bogert <hansbogert(a)gmail.com> wrote:
> >
> > Hi list,
> >
> > I've awaited Octopus for a long time to be able to use mirroring with
> > snapshotting, since my setup does not allow for journal-based
> > mirroring. (K8s/Rook 1.3.x with Ceph 15.2.2)
> >
> > However, I seem to be stuck. I've come to the point where, on the
> > cluster where the (non-active) replicas should reside, I get this:
> >
> > ```
> > rbd mirror pool status -p replicapool --verbose
> >
> > ...
> > pvc-f7ca0b55-ed38-4d9f-b306-7db6a0157e2e:
> > global_id: d3a301f2-4f54-4e9e-b251-c55ddbb67dc6
> > state: up+starting_replay
> > description: starting replay
> > service: a on nldw1-6-26-1
> > last_update: 2020-06-07 11:54:54
> > ...
> > ```
> >
> > That seems good, right? But I don't see any actual data being copied
> > into the failover cluster.
> >
> > Anybody have any ideas on what to check?
>
> Can you look at the log files for the "rbd-mirror" daemon? I wonder if
> it starts and then quickly fails.
>
> > Also, is it correct that you won't see mirror snapshots with the
> > 'normal' `rbd snap` commands?
>
> Yes, "rbd snap ls" only shows user-created snapshots by default. You
> can use "rbd snap ls --all" to see all snapshots on an image.
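> For example, using the image from your status output:
>
> ```shell
> rbd snap ls --all replicapool/pvc-f7ca0b55-ed38-4d9f-b306-7db6a0157e2e
> ```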
>
> > Thanks in advance,
> >
> > Hans
> > _______________________________________________
> > ceph-users mailing list -- ceph-users(a)ceph.io
> > To unsubscribe send an email to ceph-users-leave(a)ceph.io
> >
>
>
> --
> Jason
>
--
Jason