The daemons restart (for *some* releases) because of this:
In short, if the selinux module changes and you have selinux enabled,
then midway through the yum update a `systemctl restart ceph.target`
gets issued.
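
If you want to confirm that on your hosts before updating, you can
inspect the rpm scriptlets; a quick check (assuming the restart lives
in the ceph-selinux package's scriptlets on your release, which is
where I'd expect it):

    # dump the package scriptlets and look for the restart
    rpm -q --scripts ceph-selinux | grep -B3 -A3 'ceph.target'

If that grep matches, the update will bounce your daemons whenever the
selinux module version changes.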
For the rest -- I think you should focus on getting the PGs all
active+clean as soon as possible, because the degraded and remapped
states are what drive the mon store / osdmap growth.
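
If you want to watch that growth while you work, something like the
below is enough (the store.db path is the default; adjust for your
deployment):

    ceph osd dump | head -1               # prints the current osdmap epoch
    du -sh /var/lib/ceph/mon/*/store.db   # mon store size, run on each mon host

Re-run those every few minutes; the epoch should mostly stop climbing
once the PGs are clean.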
This kind of scenario is why we wrote this tool:
It will use pg-upmap-items to force the PGs to the OSDs where they
currently are, so they go active+clean right away instead of waiting
for backfill.
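
Roughly speaking, for each remapped PG it emits the equivalent of the
following (the PG id and OSD ids here are made up, purely to show the
syntax, and upmap requires require-min-compat-client luminous or
newer):

    ceph pg ls remapped                # list the PGs that are remapped
    ceph osd pg-upmap-items 1.7f 5 3   # hypothetical: in PG 1.7f's up set, replace osd.5 with osd.3

That makes the up set match the acting set again, so the PG goes
active+clean and the pending backfill is cancelled.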
But there is some clarification needed before you go ahead with that.
Could you share the output of `ceph status` and `ceph health detail`?
On Mon, Mar 22, 2021 at 12:05 PM Sam Skipsey <aoanla(a)gmail.com> wrote:
> Hi everyone:
> I posted to the list on Friday morning (UK time), but apparently my email
> is still in moderation (I got a note from the list bot that it was held
> for moderation, but no updates since).
> Since this is a bit urgent - we have ~3PB of storage offline - I'm posting
> again here.
> To save retyping the whole thing, I will direct you to a copy of the email
> I wrote on Friday:
> (Since that was sent, we did successfully add big SSDs to the MON hosts, so
> they no longer risk filling their disks with the growing store.db.)
> I would appreciate any advice - assuming this also doesn't get stuck in
> moderation queues.
> Sam Skipsey (he/him, they/them)