Don't know if this will help you, but we do all our scrubbing manually
with cron tasks, always picking the oldest non-scrubbed PG.
And to check on scrubbing we use this, which reports the currently
active scrubs:
ceph pg ls scrubbing | sort -k18 -k19 | head -n 20
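A minimal sketch of the cron job side, assuming jq is installed and that
"ceph pg dump pgs -f json" exposes pg_stats[].pgid and
.last_deep_scrub_stamp (field names as on recent releases; adjust as
needed):

#!/usr/bin/env bash
# sketch: deep-scrub the PG with the oldest deep-scrub stamp (run from cron)
set -euo pipefail
oldest_pg=$(ceph pg dump pgs -f json 2>/dev/null |
  jq -r '.pg_stats | sort_by(.last_deep_scrub_stamp) | .[0].pgid')
echo "deep-scrubbing ${oldest_pg}"
ceph pg deep-scrub "${oldest_pg}"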
For us a scrub takes 5 minutes +/- maybe 3, and a deep scrub 40 minutes
+/- 10; all slow HDDs.
hth Joe
>>> Jayjeet Chakraborty <jayjeetc@ucsc.edu> 10/23/2023 1:59 PM >>>
Hi Reto,
Thanks a lot for the instructions. I tried the same, but still couldn't
trigger scrubbing deterministically. The first time I initiated
scrubbing, I saw the scrubbing status in ceph -s, but on subsequent
attempts I didn't see any scrubbing status. Do you know what might be
going on? Any ideas would be appreciated. Thanks.
Best Regards,
Jayjeet Chakraborty
Ph.D. Student
Department of Computer Science and Engineering
University of California, Santa Cruz
Email: jayjeetc@ucsc.edu
On Wed, Oct 18, 2023 at 7:47 AM Reto Gysi <rlgysi@gmail.com> wrote:
Hi
I haven't updated to reef yet. I've tried this on quincy.
# create a testfile on cephfs.rgysi.data pool
root@zephir:/home/rgysi/misc# echo cephtest123 > cephtest.txt
#list inode of new file
root@zephir:/home/rgysi/misc# ls -i cephtest.txt
1099518867574 cephtest.txt
# convert inode value to hex value
root@zephir:/home/rgysi/misc# printf "%x" 1099518867574
100006e7876
# search for this value in the rados pool cephfs.rgysi.data, to find
object(s)
root@zephir:/home/rgysi/misc# rados -p cephfs.rgysi.data ls | grep 100006e7876
100006e7876.00000000
# find pg for the object
root@zephir:/home/rgysi/misc# ceph osd map cephfs.rgysi.data 100006e7876.00000000
osdmap e105365 pool 'cephfs.rgysi.data' (25) object '100006e7876.00000000' -> pg 25.ee1befa1 (25.1) -> up ([0,2,8], p0) acting ([0,2,8], p0)
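(Aside: the lookup above can be done in one step; a sketch, assuming GNU
stat and that the file's first object carries the .00000000 suffix as
shown:)

# sketch: file -> inode -> hex -> PG in one go
ino_hex=$(printf '%x' "$(stat -c %i cephtest.txt)")
ceph osd map cephfs.rgysi.data "${ino_hex}.00000000"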
# initiate a deep-scrub for this pg
root@zephir:/home/rgysi/misc# ceph pg deep-scrub 25.1
instructing pg 25.1 on osd.0 to deep-scrub
# check status of scrubbing
root@zephir:/home/rgysi/misc# ceph pg ls scrubbing
PG    OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES        OMAP_BYTES*  OMAP_KEYS*  LOG   STATE                        SINCE  VERSION         REPORTED        UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING
25.1  37774    0         0          0        62869823142  0            0           2402  active+clean+scrubbing+deep  7s     105365'1178098  105365:8066292  [0,2,8]p0  [0,2,8]p0  2023-10-18T05:17:48.631392+0000  2023-10-08T11:30:58.883164+0000  3                    deep scrubbing for 1s
Best Regards,
Reto
On Wed, Oct 18, 2023 at 4:24 PM Jayjeet Chakraborty <jayjeetc@ucsc.edu> wrote:
> Hi all,
>
> Just checking if someone had a chance to go through the scrub trigger
> issue above. Thanks.
>
> Best Regards,
> Jayjeet Chakraborty
> Ph.D. Student
> Department of Computer Science and Engineering
> University of California, Santa Cruz
> Email: jayjeetc@ucsc.edu
>
>
> On Mon, Oct 16, 2023 at 9:01 PM Jayjeet Chakraborty <jayjeetc@ucsc.edu> wrote:
>
> > Hi all,
> >
> > I am trying to trigger deep scrubbing in Ceph Reef (18.2.0) on demand
> > on a set of files that I randomly write to CephFS. I have tried both
> > invoking deep-scrub on CephFS using ceph tell and just deep scrubbing
> > a particular PG. Unfortunately, none of that seems to be working for
> > me. I am monitoring the ceph status output, and it never shows any
> > scrubbing information. Can anyone please help me out with this? In a
> > nutshell, I need Ceph to scrub for me anytime I want. I am using Ceph
> > with the default scrub configs. Thanks all.
> >
> > Best Regards,
> > Jayjeet Chakraborty
> > Ph.D. Student
> > Department of Computer Science and Engineering
> > University of California, Santa Cruz
> > Email: jayjeetc@ucsc.edu
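(For the ceph tell path mentioned above: the CephFS-side scrub is the
MDS forward scrub, which is reported by the MDS rather than as PG
scrubbing in ceph -s; a sketch, assuming a filesystem named cephfs and
rank 0:)

ceph tell mds.cephfs:0 scrub start / recursive   # recursive forward scrub from the root
ceph tell mds.cephfs:0 scrub status              # progress is reported here, not in ceph -s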
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io