That's what I thought as well, especially based on this.
Note
You may clone a snapshot from one pool to an image in another pool. For example, you may
maintain read-only images and snapshots as templates in one pool, and writeable clones in
another pool.
root@Bunkcephmon2:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1
CephTestPool2/vm-100-disk-0-CLONE
2021-01-20T15:06:35.854-0500 7fb889ffb700 -1 librbd::image::CloneRequest: 0x55c7cf8417f0
validate_parent: parent snapshot must be protected
root@Bunkcephmon2:~# rbd snap protect CephTestPool1/vm-100-disk-0@TestSnapper1
rbd: protecting snap failed: (30) Read-only file system
From: "Eugen Block" <eblock(a)nde.ag>
To: "adamb" <adamb(a)medent.com>
Cc: "ceph-users" <ceph-users(a)ceph.io>, "Matt Wilder"
<matt.wilder(a)bitmex.com>
Sent: Wednesday, January 20, 2021 3:00:54 PM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
But you should be able to clone the mirrored snapshot on the remote
cluster even though it’s not protected, IIRC.
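That matches clone v2 behavior: since Mimic, a clone can reference an unprotected parent snapshot when the clone format is 2, which is why protecting the snapshot on the read-only mirror target shouldn't be needed. A sketch using the pool and image names from this thread (the config-override flag is standard rbd CLI syntax, but verify against your release):

```shell
# Clone v2 does not require the parent snapshot to be protected.
# Request format 2 explicitly for this one clone:
rbd clone --rbd-default-clone-format 2 \
    CephTestPool1/vm-100-disk-0@TestSnapper1 \
    CephTestPool2/vm-100-disk-0-CLONE

# Or make v2 the cluster-wide default so plain "rbd clone" works
# against unprotected snapshots:
ceph config set global rbd_default_clone_format 2
```

Note that v2 clones are only usable by clients that understand the newer format, so older librbd clients in the environment may be a reason to keep the per-command override instead of the global default.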
Zitat von Adam Boyhan <adamb(a)medent.com>:
Two separate 4-node clusters with 10 OSDs in each node. Micron 9300
NVMe drives are the OSDs. Heavily based on the Micron/Supermicro
white papers.
When I attempt to protect the snapshot on a remote image, it errors
with read only.
root@Bunkcephmon2:~# rbd snap protect
CephTestPool1/vm-100-disk-0@TestSnapper1
rbd: protecting snap failed: (30) Read-only file system
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io