Why are 5 MONs required instead of 3?
huxiaoyu(a)horebdata.cn
From: Freddy Andersen
Date: 2021-02-12 16:05
To: huxiaoyu(a)horebdata.cn; Marc; Michal Strnad; ceph-users
Subject: Re: [ceph-users] Re: Backups of monitor
I would say production should have 5 MON servers
From: huxiaoyu(a)horebdata.cn <huxiaoyu(a)horebdata.cn>
Date: Friday, February 12, 2021 at 7:59 AM
To: Marc <Marc(a)f1-outsourcing.eu>, Michal Strnad <michal.strnad(a)cesnet.cz>,
ceph-users <ceph-users(a)ceph.io>
Subject: [ceph-users] Re: Backups of monitor
Normally any production Ceph cluster will have at least 3 MONs; does it really need a
backup of the MONs?
samuel
huxiaoyu(a)horebdata.cn
From: Marc
Date: 2021-02-12 14:36
To: Michal Strnad; ceph-users(a)ceph.io
Subject: [ceph-users] Re: Backups of monitor
So why not create an extra monitor, start it only when you want to make a backup, wait
until it is up to date, stop it, and then back it up?
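The "spare monitor" cycle suggested here could be sketched roughly as below. This is only an outline, not a tested procedure: the mon id "backup", the default data-dir layout, and systemd-managed daemons are all assumptions, and in practice you would poll until the spare mon has actually synced rather than just checking quorum once. With DRY_RUN=1 (the default here) the script only records the commands it would run.

```shell
#!/bin/sh
# Sketch: start a normally-stopped spare mon, let it sync, stop it,
# and archive its quiescent store. All names/paths are illustrative.
set -eu

MON_ID="${MON_ID:-backup}"
MON_DIR="/var/lib/ceph/mon/ceph-${MON_ID}"
ARCHIVE="/root/mon-${MON_ID}-$(date +%Y%m%d).tar.gz"
CMDS=""

run() {
    # record the command; execute it only when DRY_RUN is disabled
    CMDS="${CMDS}+ $*
"
    [ "${DRY_RUN:-1}" = "1" ] || "$@"
}

run systemctl start "ceph-mon@${MON_ID}"   # bring the spare mon up
run ceph quorum_status                     # verify it joined and synced (poll in practice)
run systemctl stop "ceph-mon@${MON_ID}"    # take it back out of quorum
# archive the now-quiescent mon store
run tar czf "$ARCHIVE" -C "$(dirname "$MON_DIR")" "$(basename "$MON_DIR")"
```

While the spare mon is up the cluster briefly has an even monitor count, which is one reason to keep the window short.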
-----Original Message-----
From: Michal Strnad <michal.strnad(a)cesnet.cz>
Sent: 11 February 2021 21:15
To: ceph-users(a)ceph.io
Subject: [ceph-users] Backups of monitor
Hi all,
We are looking for a proper solution for backing up the monitors (all the maps
that they hold). On the internet we found advice to stop one of the
monitors, back it up (dump its store), and start the daemon again. But this is
not the right approach, due to the risk of losing quorum and the need to
resynchronize after the monitor is back online.
Our goal is to have at least some (recent) metadata about the objects in the
cluster as a last resort, for when all monitors are in very bad
shape and we cannot start any of them. Maybe there is another
approach, but we are not aware of it.
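One lightweight way to keep recent last-resort metadata, without touching the mon daemons at all, is to periodically export the cluster maps while the cluster is healthy. A rough sketch (the output path is an assumption, an admin keyring is assumed to be in place, and DRY_RUN=1, the default here, only records the commands):

```shell
#!/bin/sh
# Sketch: export the main cluster maps so some recent metadata survives
# even if every monitor store is lost. Paths are illustrative.
set -eu

OUT="/root/ceph-maps-$(date +%Y%m%d)"
CMDS=""

run() {
    # record the command; execute it only when DRY_RUN is disabled
    CMDS="${CMDS}+ $*
"
    [ "${DRY_RUN:-1}" = "1" ] || "$@"
}

run mkdir -p "$OUT"
run ceph mon getmap -o "$OUT/monmap"           # monitor map
run ceph osd getmap -o "$OUT/osdmap"           # OSD map
run ceph osd getcrushmap -o "$OUT/crushmap"    # CRUSH map
run ceph auth export -o "$OUT/auth.export"     # auth keys/caps
```

These exports are not a substitute for a mon store backup, but they capture the maps a rebuilt monitor would most need.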
We are running the latest nautilus and three monitors on every cluster.
Note: We don't want to use more monitors than three.
Thank you
Cheers
Michal
--
Michal Strnad
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io