Actually it's the opposite: I had enabled it with `ceph config set
'osd.*' osd_scrub_during_recovery true`, but there was still no scrubbing.
I've now started to think that the change to `osd_scrub_during_recovery`
did not take effect immediately. I waited for some time and then reverted
it back to the default `false`. Once the balancing was nearly done,
scrubbing started. Is it possible that all OSDs in the ACTING and UP
sets of backfilling PGs are omitted from scrubbing?
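
(For reference, the value a running daemon is actually using can be
checked with something like the following; osd.0 here is just an
example daemon id:

    ceph config get osd.0 osd_scrub_during_recovery   # value stored in the mon config database
    ceph config show osd.0 osd_scrub_during_recovery  # value the running daemon reports

so it's possible to confirm whether the daemons picked up the change at all.)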
P.S. I think there's a mismatch with the documentation, which states
the default is `true`:
https://docs.ceph.com/docs/nautilus/rados/configuration/osd-config-ref/#scr…
while the code says it's `false`:
https://github.com/ceph/ceph/blob/46324c2c263175ec562b5ecc178b2c74753c0a84/…
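
(The built-in default can also be queried from the cluster itself rather
than from the source, e.g.:

    ceph config help osd_scrub_during_recovery

which prints the option's description along with its default value.)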
On Fri, May 29, 2020 at 1:10 PM Paul Emmerich <paul.emmerich(a)croit.io> wrote:
Did you disable "osd scrub during recovery"?
Paul
On Fri, May 29, 2020 at 12:04 AM Vytenis A <vytenis.adm(a)gmail.com> wrote:
>
> Forgot to mention the Ceph version we're running: Nautilus 14.2.9
>
> On Fri, May 29, 2020 at 12:44 AM Vytenis A <vytenis.adm(a)gmail.com> wrote:
> >
> > Hi list,
> >
> > We have had the balancer plugin running in upmap mode for a while now:
> >
> > health: HEALTH_OK
> >
> > pgs:
> > 1973 active+clean
> > 194 active+remapped+backfilling
> > 73 active+remapped+backfill_wait
> >
> > recovery: 588 MiB/s, 343 objects/s
> >
> >
> > Our objects are stored on an EC pool. We got a PG_NOT_DEEP_SCRUBBED
> > alert and noticed that no scrubbing (literally zero) has been done
> > since the balancing started. Does anyone have any ideas why this is
> > happening?
> >
> > "pg deep-scrub <pgid>" did not help.
> >
> > Thanks!
> >
> >
> > --
> > Vytenis
>
>
>
> --
> Vytenis
--
Vytenis