________________________________
From: "Jason Dillaman" <jdillama(a)redhat.com>
To: "adamb" <adamb(a)medent.com>
Cc: "Eugen Block" <eblock(a)nde.ag>ag>, "ceph-users"
<ceph-users(a)ceph.io>io>, "Matt Wilder" <matt.wilder(a)bitmex.com>
Sent: Thursday, January 21, 2021 9:25:11 AM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
On Thu, Jan 21, 2021 at 8:34 AM Adam Boyhan <adamb(a)medent.com> wrote:
When cloning the snapshot on the remote cluster I can't see my ext4 filesystem.
Using the same exact snapshot on both sides. Shouldn't this be consistent?
Yes. Has the replication process completed ("rbd mirror image status CephTestPool1/vm-100-disk-0")?
Primary Site
root@Ccscephtest1:~# rbd snap ls --all CephTestPool1/vm-100-disk-0 | grep TestSnapper1
10621 TestSnapper1 2 TiB Thu Jan 21 08:15:22 2021 user
root@Ccscephtest1:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool1/vm-100-disk-0-CLONE
root@Ccscephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-0-CLONE --id admin --keyring /etc/ceph/ceph.client.admin.keyring
/dev/nbd0
root@Ccscephtest1:~# mount /dev/nbd0 /usr2
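When retesting, it is worth tearing the test clone down cleanly first; a minimal sketch, assuming nothing else is holding /usr2:

umount /usr2
rbd-nbd unmap /dev/nbd0
rbd rm CephTestPool1/vm-100-disk-0-CLONE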
Secondary Site
root@Bunkcephtest1:~# rbd snap ls --all CephTestPool1/vm-100-disk-0 | grep TestSnapper1
10430 TestSnapper1 2 TiB Thu Jan 21 08:20:08 2021 user
root@Bunkcephtest1:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool1/vm-100-disk-0-CLONE
root@Bunkcephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-0-CLONE --id admin --keyring /etc/ceph/ceph.client.admin.keyring
/dev/nbd0
root@Bunkcephtest1:~# mount /dev/nbd0 /usr2
mount: /usr2: wrong fs type, bad option, bad superblock on /dev/nbd0, missing codepage or helper program, or other error.
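A few diagnostics worth running on the secondary before anything else; blkid and file will tell you whether the clone has any recognizable superblock at all, and hashing the full snapshot export on both clusters (slow on a 2 TiB image, but unambiguous) will tell you whether the two snapshots really hold the same data:

blkid /dev/nbd0
file -s /dev/nbd0
# run this on both clusters; matching digests mean identical snapshot contents
rbd export CephTestPool1/vm-100-disk-0@TestSnapper1 - | md5sum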
________________________________
From: "adamb" <adamb(a)medent.com>
To: "dillaman" <dillaman(a)redhat.com>
Cc: "Eugen Block" <eblock(a)nde.ag>ag>, "ceph-users"
<ceph-users(a)ceph.io>io>, "Matt Wilder" <matt.wilder(a)bitmex.com>
Sent: Wednesday, January 20, 2021 3:42:46 PM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
Awesome information. I knew I had to be missing something.
All of my clients will be far newer than Mimic, so I don't think that will be an
issue.
Added the following to my ceph.conf on both clusters.
rbd_default_clone_format = 2
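An alternative, assuming your clients are new enough to read options from the monitors' config database (Mimic and later), is to set it centrally instead of editing ceph.conf on every node:

ceph config set global rbd_default_clone_format 2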
root@Bunkcephmon2:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool2/vm-100-disk-0-CLONE
root@Bunkcephmon2:~# rbd ls CephTestPool2
vm-100-disk-0-CLONE
I am sure I will be back with more questions. Hoping to replace our Nimble storage with
Ceph and NVMe.
Appreciate it!
________________________________
From: "Jason Dillaman" <jdillama(a)redhat.com>
To: "adamb" <adamb(a)medent.com>
Cc: "Eugen Block" <eblock(a)nde.ag>ag>, "ceph-users"
<ceph-users(a)ceph.io>io>, "Matt Wilder" <matt.wilder(a)bitmex.com>
Sent: Wednesday, January 20, 2021 3:28:39 PM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
On Wed, Jan 20, 2021 at 3:10 PM Adam Boyhan <adamb(a)medent.com> wrote:
That's what I thought as well, especially based on this.
Note
You may clone a snapshot from one pool to an image in another pool. For example, you may
maintain read-only images and snapshots as templates in one pool, and writeable clones in
another pool.
root@Bunkcephmon2:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool2/vm-100-disk-0-CLONE
2021-01-20T15:06:35.854-0500 7fb889ffb700 -1 librbd::image::CloneRequest: 0x55c7cf8417f0 validate_parent: parent snapshot must be protected
root@Bunkcephmon2:~# rbd snap protect CephTestPool1/vm-100-disk-0@TestSnapper1
rbd: protecting snap failed: (30) Read-only file system
You have two options: (1) protect the snapshot on the primary image so
that the protection status replicates or (2) utilize RBD clone v2
which doesn't require protection but does require Mimic or later
clients [1].
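Roughly, for each option; the protect has to run against the primary cluster, where the image is writable, and the protected status should then replicate over:

# option 1: on the primary cluster
rbd snap protect CephTestPool1/vm-100-disk-0@TestSnapper1
# option 2: force a v2 clone for just this one command, no protection needed
rbd clone --rbd-default-clone-format 2 CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool2/vm-100-disk-0-CLONE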
From: "Eugen Block" <eblock(a)nde.ag>
To: "adamb" <adamb(a)medent.com>
Cc: "ceph-users" <ceph-users(a)ceph.io>io>, "Matt Wilder"
<matt.wilder(a)bitmex.com>
Sent: Wednesday, January 20, 2021 3:00:54 PM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
But you should be able to clone the mirrored snapshot on the remote
cluster even though it’s not protected, IIRC.
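If in doubt about which format a clone ended up with after the fact, rbd info on the child shows it; v2 children carry the clone-child op feature (using the clone name from later in this thread):

rbd info CephTestPool2/vm-100-disk-0-CLONE | grep -E 'op_features|parent'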
Zitat von Adam Boyhan <adamb(a)medent.com>:
Two separate 4-node clusters with 10 OSDs in each node. Micron 9300
NVMe drives are the OSDs. Heavily based on the Micron/Supermicro
white papers.
When I attempt to protect the snapshot on a remote image, it fails
with a read-only error.
root@Bunkcephmon2:~# rbd snap protect CephTestPool1/vm-100-disk-0@TestSnapper1
rbd: protecting snap failed: (30) Read-only file system
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io
[1]
https://ceph.io/community/new-mimic-simplified-rbd-image-cloning/
--
Jason