I'm attempting to deep scrub all the PGs to see if that helps clear up
some accounting issues, but that's going to take a really long time on
2PB of data.
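
(Roughly how I'm kicking them off, in case it's useful — iterating OSD IDs so
I don't have to enumerate every PG; the loop is just my approach, not an
official procedure:

    for osd in $(ceph osd ls); do
        ceph osd deep-scrub "$osd"   # ask each OSD to deep scrub its PGs
    done
)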
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Thu, Apr 8, 2021 at 9:48 PM Robert LeBlanc <robert(a)leblancnet.us> wrote:
>
> Good thought. The storage for the monitor data is a RAID-0 over three
> NVMe devices. Watching iostat, they are essentially idle, maybe 0.8% to
> 1.4% utilization for a second every minute or so.
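>
> (For reference, I'm just watching extended per-device stats, something like:
>
>     iostat -xmt 1 nvme0n1 nvme1n1 nvme2n1   # device names here are examples
>
> and the %util column barely moves.)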
> ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
>
> On Thu, Apr 8, 2021 at 7:48 PM Zizon Qiu <zzdtsv(a)gmail.com> wrote:
> >
> > Could it be related to some kind of disk issue on the node that mon is
> > located on, which may occasionally slow down IO and, in turn, the rocksdb?
> >
> >
> > On Fri, Apr 9, 2021 at 4:29 AM Robert LeBlanc <robert(a)leblancnet.us> wrote:
> >>
> >> I found this thread that matches a lot of what I'm seeing. I see the
> >> ms_dispatch thread going to 100%, but I'm down to a single MON, the
> >> recovery is done, and the rocksdb MON database is ~300MB. I've tried
> >> all the settings mentioned in that thread with no noticeable
> >> improvement. I was hoping that once the recovery was done (backfills
> >> to reformatted OSDs) it would clear up, but not yet. So any other
> >> ideas would be really helpful. Our MDS is functioning, but stalls a
> >> lot because the mons miss heartbeats.
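> >>
> >> (In case anyone wants to reproduce the observation, a per-thread view of
> >> the mon process is what shows ms_dispatch pegged, e.g.
> >>     top -H -p $(pidof ceph-mon)
> >> on the mon host.)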
> >>
> >> mon_compact_on_start = true
> >> rocksdb_cache_size = 1342177280
> >> mon_lease = 30
> >> mon_osd_cache_size = 200000
> >> mon_sync_max_payload_size = 4096
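> >>
> >> (Most of these went into ceph.conf under [mon] followed by a mon restart;
> >> for quick experiments the sync payload size can also be changed at runtime,
> >> something like:
> >>     ceph config set mon mon_sync_max_payload_size 4096
> >> or, on releases without the config database,
> >>     ceph tell mon.* injectargs '--mon_sync_max_payload_size=4096'
> >> mon_compact_on_start only takes effect on the next mon restart.)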
> >>
> >> ----------------
> >> Robert LeBlanc
> >> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
> >>
> >> On Thu, Apr 8, 2021 at 1:11 PM Stefan Kooman <stefan(a)bit.nl> wrote:
> >> >
> >> > On 4/8/21 6:22 PM, Robert LeBlanc wrote:
> >> > > I upgraded our Luminous cluster to Nautilus a couple of weeks ago and
> >> > > converted the last batch of FileStore OSDs to BlueStore about 36 hours
> >> > > ago. Yesterday our monitor cluster went nuts and started constantly
> >> > > calling elections because monitor nodes were at 100% CPU and wouldn't
> >> > > respond to heartbeats. I reduced the monitor cluster to one to prevent
> >> > > the constant elections, and that let the system limp along until the
> >> > > backfills finished. There are long stretches of time where ceph
> >> > > commands hang while the CPU is at 100%; when the CPU drops I see a lot
> >> > > of work getting done in the monitor logs, which stops as soon as the
> >> > > CPU is at 100% again.
> >> >
> >> >
> >> > Try reducing mon_sync_max_payload_size to 4096. I have seen Frank
> >> > Schilder advise this several times because of monitor issues, most
> >> > recently for a cluster that got upgraded from Luminous -> Mimic ->
> >> > Nautilus.
> >> >
> >> > Worth a shot.
> >> >
> >> > Otherwise I'll try to look into it in depth and see if I can come up
> >> > with something smart (for now I need to go catch some sleep).
> >> >
> >> > Gr. Stefan