For some reason I’d thought replication between clusters was an “official” method of
backing up.
On May 29, 2020, at 4:31 PM,
<DHilsbos(a)performair.com> wrote:
Ludek;
As a clustered system, Ceph isn't really intended to be backed up; it's designed
to take quite a beating and preserve your data.
From a broader disaster recovery perspective, here's how I architected my clusters:
Our primary cluster is laid out in such a way that an entire rack can fail without reads or
writes being impacted, much less data integrity. On top of that, our RadosGW is a
multi-site setup that automatically sends a copy of every object to a second cluster at a
different location.
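A multi-site setup along those lines is configured roughly as follows; the realm, zonegroup, zone names, endpoints, and keys below are placeholders, not our actual layout:

```shell
# On the primary cluster: create a realm, a master zonegroup, and a master zone.
radosgw-admin realm create --rgw-realm=example --default
radosgw-admin zonegroup create --rgw-zonegroup=us \
    --endpoints=http://rgw1.example.com:8080 --master --default
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
    --endpoints=http://rgw1.example.com:8080 --master --default
radosgw-admin period update --commit

# On the second cluster: pull the realm and create a secondary zone,
# authenticating with the system user's keys. RGW then replicates
# every object between the two zones automatically.
radosgw-admin realm pull --url=http://rgw1.example.com:8080 \
    --access-key=<key> --secret=<secret>
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
    --endpoints=http://rgw2.example.com:8080 \
    --access-key=<key> --secret=<secret>
radosgw-admin period update --commit
```

This omits creating the system user and wiring the zone names into the RGW config; the multi-site docs linked below cover the full procedure.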
Thus my disaster recovery looks like this:
One rack or less: no user impact; rebuild the rack.
Two racks: users are unable to add objects, but existing data is safe; rebuild the
cluster (or fail over as below).
Whole site: promote the second site to master and continue.
No backup or recovery is necessary.
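The whole-site failover is roughly the documented RGW procedure for promoting a secondary zone; the zone name below is a placeholder:

```shell
# Run on the surviving (secondary) site to promote its zone to master.
radosgw-admin zone modify --rgw-zone=us-west --master --default
radosgw-admin period update --commit

# Restart the RadosGW instances so they pick up the new period.
systemctl restart ceph-radosgw.target
```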
You might look at the multi-site documentation:
https://docs.ceph.com/docs/master/radosgw/multisite/
I had a long conversation with our owner on this same topic, and about how the organization
would have to move from a "Backup & Recover" mindset to a "Disaster
Recovery" mindset. It worked well for us, as we were looking to move more towards
risk-analysis-based approaches anyway.
Thank you,
Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International, Inc.
DHilsbos(a)PerformAir.com
www.PerformAir.com
-----Original Message-----
From: Ludek Navratil [mailto:ludek.navratil@yahoo.co.uk]
Sent: Wednesday, February 5, 2020 6:57 AM
To: ceph-users(a)ceph.io
Subject: [ceph-users] OSD backups and recovery
Hi all,
what is the best approach to OSD backups and recovery? We use only RadosGW with the S3 API,
and I need to back up the contents of S3 buckets. Currently I sync the S3 buckets to a local
filesystem and back up the contents using Amanda.
I believe there must be a better way to do this, but I couldn't find it in the docs.
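The sync step is just something like the following (the endpoint and bucket name are placeholders):

```shell
# Mirror a bucket to the local filesystem, pointing the AWS CLI at RadosGW.
aws s3 sync s3://my-bucket /backup/my-bucket \
    --endpoint-url http://rgw.example.com:8080
```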
I know that one option is to setup an archive zone, but it requires an additional ceph
cluster that needs to be maintained and looked after. I would rather avoid that.
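For what it's worth, an archive zone is created like a normal secondary zone but with the archive tier type, which makes it retain every version of every object synced from the master zone (names and endpoint below are placeholders):

```shell
# On the additional cluster, after pulling the realm:
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=archive \
    --endpoints=http://rgw-archive.example.com:8080 --tier-type=archive
radosgw-admin period update --commit
```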
How can I back up an entire Ceph cluster? Or individual OSDs, in a way that will allow me
to recover the data correctly?
Many thanks,
Ludek
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io