On Thu, 30 Jan 2020 at 15:29, Adam Boyhan <adamb(a)medent.com> wrote:
We are looking to roll out an all-flash Ceph cluster as storage for our
cloud solution. The OSDs will be on slightly slower Micron 5300 PRO
drives, with WAL/DB on Micron 7300 MAX NVMe drives.
My main concern about whether Ceph can fit the bill is its snapshot
capabilities.
For each RBD we would like the following snapshots:
8x 30-minute snapshots (covering the latest 4 hours)
With our current solution (HPE Nimble) we simply pause all write IO on the
10-minute mark for roughly 2 seconds and then take a snapshot of the
entire Nimble volume. Each VM within the Nimble volume sits on a Linux
Logical Volume, so it's easy for us to take one big snapshot and only
get access to a specific client's data.
Are there any options for automating the management/retention of snapshots
within Ceph besides some bash scripts? Is there any way to take snapshots of
all RBDs within a pool at a given time?
You could make a snapshot of the whole pool; that would cover all RBDs in
it, I gather?
https://docs.ceph.com/docs/nautilus/rados/operations/pools/#make-a-snapshot…
But if you need to work with each snapshot from different points in time
in parallel, clone them one by one, and so forth, doing it per RBD would
be better.
https://docs.ceph.com/docs/nautilus/rbd/rbd-snapshot/
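Lacking a built-in scheduler, the per-RBD approach usually ends up as a
small loop around `rbd ls` and `rbd snap create`, driven by cron. A minimal
sketch (pool name, image names, and the snapshot naming scheme are made-up
assumptions; with DRY_RUN=1 the rbd commands are only printed, so the loop
can be tried without a cluster):

```shell
#!/bin/sh
# Sketch: take a timestamped snapshot of every RBD image in a pool.
POOL="${POOL:-rbd}"
STAMP="$(date -u +%Y%m%d-%H%M)"
DRY_RUN="${DRY_RUN:-1}"   # 1 = only print the rbd commands

if [ "$DRY_RUN" = "1" ]; then
    # Stubbed `rbd ls` output so the loop can run without a cluster.
    IMAGES="vm-101-disk-0 vm-102-disk-0"
else
    IMAGES="$(rbd ls "$POOL")"
fi

for IMG in $IMAGES; do
    CMD="rbd snap create $POOL/$IMG@auto-$STAMP"
    if [ "$DRY_RUN" = "1" ]; then
        echo "$CMD"
    else
        $CMD
    fi
done
# Retention would be the mirror image: per image, list snapshots with
# `rbd snap ls` and remove the oldest with `rbd snap rm`, keeping the
# newest 8 for a 4-hour window of 30-minute snapshots.
```

Run every 30 minutes from cron with DRY_RUN=0; it is not atomic across
images, though, which is the trade-off versus a single pool snapshot.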
--
May the most significant bit of your life be positive.