It reported an error when I first promoted the non-primary image, but the
command executed successfully after a while, without '--force'.
error:
"rbd: error promoting image to primary
2020-06-09 19:56:30.662 7f27e17fa700 -1 librbd::mirror::PromoteRequest: 0x558fa971fd20 handle_get_info: image is still primary within a remote cluster
2020-06-09 19:56:30.662 7f2804362b00 -1 librbd::api::Mirror: image_promote: failed to promote image"
Besides that, I wrote data by appending the output of `date` to the end of a
file, and flushed data with 'echo 3 > /proc/sys/vm/drop_caches'.
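As a side note, drop_caches only evicts clean pages and does not write dirty data back to the device, so a sync is needed first for the appended data to actually reach the image. A minimal sketch of the write-and-flush step (the mount point is a hypothetical example):

```shell
# Hypothetical mount point of the rbd_blk image; defaults to a scratch
# directory so the sketch can be dry-run anywhere.
MNT=${MNT:-/tmp/rbd_blk_demo}
mkdir -p "$MNT"

date >> "$MNT/testfile"   # append a timestamp marker

# Flush dirty page-cache data to the device first...
sync

# ...then drop the (now clean) caches; this needs root, so guard it.
if [ "$(id -u)" -eq 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || true
fi

echo "marker lines in $MNT/testfile: $(wc -l < "$MNT/testfile")"
```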
Jason Dillaman <jdillama(a)redhat.com> wrote on Tue, Jun 9, 2020 at 7:48 PM:
On Tue, Jun 9, 2020 at 7:26 AM Zhenshi Zhou <deaderzzs(a)gmail.com> wrote:
I did promote the non-primary image; otherwise I couldn't have disabled the
image mirror.
OK, that means that 100% of the data was properly transferred, since it
needs to replay previous events before it can get to the demotion event
and replay that, so that you could non-force promote. How are you
writing to the original primary image? Are you flushing your data?
Jason Dillaman <jdillama(a)redhat.com> wrote on Tue, Jun 9, 2020 at 7:19 PM:
>
> On Mon, Jun 8, 2020 at 11:42 PM Zhenshi Zhou <deaderzzs(a)gmail.com> wrote:
> >
> > I have just done a test on rbd-mirror, following these steps:
> > 1. deploy two new clusters, clusterA and clusterB
> > 2. configure one-way replication from clusterA to clusterB with rbd-mirror
> > 3. write data to rbd_blk on clusterA once every 5 seconds
> > 4. get information with 'rbd mirror image status rbd_blk'; the "state" is "up+replaying"
> > 5. demote the image on clusterA (I just want to stop syncing and switch the client connection to clusterB)
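For reference, the setup in step 2 and the demotion in step 5 can be sketched roughly as below; the pool name `rbd`, the client name, and a journal-based (pre-Octopus) configuration are all assumptions:

```shell
# On both clusters: enable pool-mode mirroring for the 'rbd' pool
rbd --cluster clusterA mirror pool enable rbd pool
rbd --cluster clusterB mirror pool enable rbd pool

# Journal-based mirroring requires the journaling feature on the image
rbd --cluster clusterA feature enable rbd/rbd_blk journaling

# One-way replication: register clusterA as a peer of clusterB and run
# the rbd-mirror daemon only on clusterB
rbd --cluster clusterB mirror pool peer add rbd client.admin@clusterA

# Step 5: demote the image on clusterA when switching clients over
rbd --cluster clusterA mirror image demote rbd/rbd_blk
```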
> >
> > The result is:
> > 1. I find that the "last_update" from "rbd mirror image status"
> > updates every 30 seconds, which means I will lose at most 30s of data
>
> The status updates are throttled but not the data transfer.
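For anyone reproducing this test, the throttled status can simply be polled while writes are ongoing; the cluster name and the pool/image spec below are assumptions:

```shell
# Poll the mirror status on the secondary cluster every 5 seconds;
# "last_update" advances in ~30s steps even though data flows continuously.
watch -n 5 "rbd --cluster clusterB mirror image status rbd/rbd_blk"
```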
>
> > 2. I stopped syncing at 11:02, but the data for rbd_blk on
> > clusterB is no newer than 10:50.
>
> After demotion, you need to promote on the original non-primary side
> (without the --force option). It won't let you non-force promote unless it
> has all the data copied. It will continue to copy data while the other
> side has been demoted until it fully catches up.
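The failover sequence described above can be sketched as follows (pool/image name is a hypothetical example):

```shell
# After 'rbd mirror image demote' on the old primary, wait for the
# non-primary side to catch up, then promote it WITHOUT --force:
rbd --cluster clusterB mirror image promote rbd/rbd_blk

# --force is only for disaster scenarios where the old primary is gone;
# a forced promote can lose whatever data had not yet been replayed:
# rbd --cluster clusterB mirror image promote --force rbd/rbd_blk
```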
>
> >
> > Did I have the wrong steps in the switching progress?
> >
> >
> >
> > Zhenshi Zhou <deaderzzs(a)gmail.com> wrote on Tue, Jun 9, 2020 at 8:57 AM:
> >>
> >> Well, I'm afraid that the image didn't replay continuously, which
> >> means I have some data loss. The "rbd mirror image status" shows the
> >> image is replaying and its time is just before I demoted the primary
> >> image. I lost about 24 hours' data and I'm not sure whether there is
> >> an interval between synchronizations.
> >>
> >> I use version 14.2.9 and I deployed one-way mirroring.
> >>
> >> Zhenshi Zhou <deaderzzs(a)gmail.com> wrote on Fri, Jun 5, 2020 at 10:22 AM:
> >>>
> >>> Thank you for the clarification. That's very clear.
> >>>
> >>> Jason Dillaman <jdillama(a)redhat.com> wrote on Fri, Jun 5, 2020 at 12:46 AM:
> >>>>
> >>>> On Thu, Jun 4, 2020 at 3:43 AM Zhenshi Zhou <deaderzzs(a)gmail.com> wrote:
> >>>> >
> >>>> > My situation is that the primary image is being used while
> >>>> > rbd-mirror syncs. I want to know the period between two successive
> >>>> > rbd-mirror transfers of the incremental data.
> >>>> > I will look into those options you provided, thanks a lot :)
> >>>>
> >>>> When using the original (pre-Octopus) journal-based mirroring, once
> >>>> the initial sync completes to transfer the bulk of the image data from
> >>>> a point-in-time dynamic snapshot, any changes post-sync will be
> >>>> replayed continuously from the stream of events written to the journal
> >>>> on the primary image. The "rbd mirror image status" against the
> >>>> non-primary image will provide more details about the current state of
> >>>> the journal replay.
> >>>>
> >>>> With the Octopus release, we now also support snapshot-based mirroring
> >>>> where we transfer any image deltas between two mirroring snapshots.
> >>>> These mirroring snapshots are different from user-created snapshots
> >>>> and their lifetime is managed by RBD mirroring (i.e. they are
> >>>> automatically pruned when no longer needed). This version of mirroring
> >>>> probably more closely relates to your line of questioning, since the
> >>>> period of replication is whatever period at which you create new
> >>>> mirroring snapshots (provided your two clusters can keep up).
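For snapshot-based mirroring in Octopus, the replication period is driven by mirror-snapshot creation, which can be manual or scheduled; a rough sketch, with pool/image names as assumptions:

```shell
# Enable snapshot-based (rather than journal-based) mirroring per image
rbd mirror image enable rbd/rbd_blk snapshot

# Trigger one replication point manually...
rbd mirror image snapshot rbd/rbd_blk

# ...or let RBD create mirror-snapshots on a schedule, e.g. every 10 minutes
rbd mirror snapshot schedule add --pool rbd --image rbd_blk 10m
```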
> >>>>
> >>>> >
> >>>> > Eugen Block <eblock(a)nde.ag> wrote on Thu, Jun 4, 2020 at 3:28 PM:
> >>>> >
> >>>> > > The initial sync is a full image sync, the rest is based on the
> >>>> > > object sets created. There are several options to control the
> >>>> > > mirroring, for example:
> >>>> > >
> >>>> > > rbd_journal_max_concurrent_object_sets
> >>>> > > rbd_mirror_concurrent_image_syncs
> >>>> > > rbd_mirror_leader_max_missed_heartbeats
> >>>> > >
> >>>> > > and many more. I'm not sure I fully understand what you're asking,
> >>>> > > maybe you could rephrase your question?
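Options like these can be inspected and changed with the usual config commands; the value shown is only an example, not a recommendation:

```shell
# Show the current value of a mirroring option
ceph config get client rbd_mirror_concurrent_image_syncs

# Raise the number of images synced concurrently (example value)
ceph config set client rbd_mirror_concurrent_image_syncs 5
```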
> >>>> > >
> >>>> > >
> >>>> > > Quoting Zhenshi Zhou <deaderzzs(a)gmail.com>:
> >>>> > >
> >>>> > > > Hi Eugen,
> >>>> > > >
> >>>> > > > Thanks for the reply. If rbd-mirror constantly synchronizes
> >>>> > > > changes, at what frequency does it replay? I don't find any
> >>>> > > > options I can configure.
> >>>> > > >
> >>>> > > > Eugen Block <eblock(a)nde.ag> wrote on Thu, Jun 4, 2020 at 2:54 PM:
> >>>> > > >
> >>>> > > >> Hi,
> >>>> > > >>
> >>>> > > >> that's the point of rbd-mirror: to constantly replay changes
> >>>> > > >> from the primary image to the remote image (if the rbd
> >>>> > > >> journaling feature is enabled).
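Journal-based replay requires the journaling feature, which itself depends on exclusive-lock; a sketch with hypothetical pool/image names:

```shell
# Exclusive-lock is a prerequisite for journaling
rbd feature enable rbd/rbd_blk exclusive-lock
rbd feature enable rbd/rbd_blk journaling

# Verify which features are enabled on the image
rbd info rbd/rbd_blk | grep features
```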
> >>>> > > >>
> >>>> > > >>
> >>>> > > >> Quoting Zhenshi Zhou <deaderzzs(a)gmail.com>:
> >>>> > > >>
> >>>> > > >> > Hi all,
> >>>> > > >> >
> >>>> > > >> > I'm going to deploy rbd-mirror in order to sync an image
> >>>> > > >> > from clusterA to clusterB.
> >>>> > > >> > The image will be in use while syncing. I'm not sure whether
> >>>> > > >> > rbd-mirror will sync the image continuously or not. If not,
> >>>> > > >> > I will inform clients not to write data to it.
> >>>> > > >> >
> >>>> > > >> > Thanks. Regards
> >>>> > > >> > _______________________________________________
> >>>> > > >> > ceph-users mailing list -- ceph-users(a)ceph.io
> >>>> > > >> > To unsubscribe send an email to ceph-users-leave(a)ceph.io
> >>>> > >
> >>>> > >
> >>>> > >
> >>>> > >
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Jason
> >>>>