Are you using an EC pool?
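(To check, the pool listing shows each pool's type; erasure-coded pools read "erasure" where replicated pools read "replicated":)

```shell
# Show each pool's type, replication/EC settings, and CRUSH rule
ceph osd pool ls detail
```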
On Wed, May 31, 2023 at 11:04 AM Ben <ruidong.gao(a)gmail.com> wrote:
Thank you, Patrick, for the help.
The random write tests perform well enough, though, so I wonder why the read test is so
poor with the same configuration (read bandwidth of about 15 MB/s vs. 400 MB/s for
writes). In particular, the slow-request log entries seem unrelated to the test ops. I am
wondering whether it is something in the CephFS kernel client?
Any other thoughts?
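(One way to narrow this down, sketched below under the assumption of a host with a direct kernel mount at /mnt/cephfs, is to drop the page cache and rerun the same fio read test outside Kubernetes; if it is still slow there, the k8s/CSI layer can be ruled out:)

```shell
# Drop the page cache so reads actually hit the cluster (run as root)
sync; echo 3 > /proc/sys/vm/drop_caches
# Rerun the same random-read test against a direct kernel mount
# (/mnt/cephfs is a placeholder for wherever the filesystem is mounted)
fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G \
    -numjobs=5 -runtime=500 -group_reporting -directory=/mnt/cephfs \
    -name=Rand_Read_Direct
```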
Patrick Donnelly <pdonnell(a)redhat.com> wrote on Wed, May 31, 2023 at 00:58:
>
> On Tue, May 30, 2023 at 8:42 AM Ben <ruidong.gao(a)gmail.com> wrote:
> >
> > Hi,
> >
> > We are running a couple of performance tests on CephFS using fio. fio runs
> > in a k8s pod, and 3 pods are up, all mounting the same PVC backed by a CephFS
> > volume. Here is the command line for random read:
> > fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G
> > -numjobs=5 -runtime=500 -group_reporting -directory=/tmp/cache
> > -name=Rand_Read_Testing_$BUILD_TIMESTAMP
> > The random read performs very slowly. Here is the cluster log from
> > the dashboard:
> > [...]
> > Any suggestions on the problem?
>
> Your random read workload is too extreme for your cluster of OSDs.
> It's causing slow metadata ops for the MDS. To resolve this, we would
> normally suggest allocating a set of OSDs on SSDs for use by the
> CephFS metadata pool to isolate the workloads.
>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Red Hat Partner Engineer
> IBM, Inc.
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>
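(Concretely, the metadata-on-SSD isolation suggested above can be sketched roughly as follows; "cephfs_metadata" is a placeholder for the actual metadata pool name:)

```shell
# Create a replicated CRUSH rule restricted to SSD-class devices
ceph osd crush rule create-replicated ssd-only default host ssd
# Point the CephFS metadata pool at that rule
# ("cephfs_metadata" is an assumed name; substitute the real one)
ceph osd pool set cephfs_metadata crush_rule ssd-only
```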
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D