Thank you for the reply.

The dd read is 410 KB/s; the fio read is 991.23 MB/s.

Even scaling dd by the fio job count, 410 KB/s * 30 / 1024 ≈ 12 MB/s, which is still hugely different from fio's 991.23 MB/s.
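For an apples-to-apples check, a single-job fio run with the same queue depth dd effectively uses (one outstanding 4k read at a time) should land much closer to the dd number. This is just a sketch of the original fio command with numjobs dropped to 1, assuming the same test file path:

fio --filename=/mnt/test1 -direct=1 -iodepth 1 -thread -rw=read -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=10 -group_reporting -name=single

Conversely, launching 30 dd processes in parallel on separate files would be the matching comparison on the dd side.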





At 2019-08-27 22:31:51, jesper@krogh.cc wrote:
The concurrency is widely different: 1 vs 30.

Jesper 





Tuesday, 27 August 2019, 16.25 +0200 from linghucongsong@163.com <linghucongsong@163.com>:
Why is the performance difference between dd and fio so huge?

I have 25 OSDs with 8TB HDDs. With dd I only get 410 KB/s read performance, but with fio I get 991.23 MB/s read performance.

Like below:

Thanks in advance!

root@Server-d5754749-cded-4964-8129-ba1accbe86b3:~# time dd of=/dev/zero if=/mnt/testw.dbf bs=4k count=10000 iflag=direct
10000+0 records in
10000+0 records out
40960000 bytes (41 MB, 39 MiB) copied, 99.9445 s, 410 kB/s

real    1m39.950s
user    0m0.040s
sys     0m0.292s



root@Server-d5754749-cded-4964-8129-ba1accbe86b3:~# fio --filename=/mnt/test1 -direct=1 -iodepth 1 -thread -rw=read -ioengine=libaio -bs=4k -size=1G -numjobs=30 -runtime=10 -group_reporting -name=mytest 
mytest: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 30 threads
Jobs: 30 (f=30): [R(30)] [100.0% done] [1149MB/0KB/0KB /s] [294K/0/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=30): err= 0: pid=5261: Tue Aug 27 13:37:28 2019
  read : io=9915.2MB, bw=991.23MB/s, iops=253752, runt= 10003msec
    slat (usec): min=2, max=200020, avg=39.10, stdev=1454.14
    clat (usec): min=1, max=160019, avg=38.57, stdev=1006.99
     lat (usec): min=4, max=200022, avg=87.37, stdev=1910.99
    clat percentiles (usec):
     |  1.00th=[    1],  5.00th=[    1], 10.00th=[    1], 20.00th=[    1],
     | 30.00th=[    1], 40.00th=[    1], 50.00th=[    1], 60.00th=[    1],
     | 70.00th=[    1], 80.00th=[    2], 90.00th=[    2], 95.00th=[    2],
     | 99.00th=[  612], 99.50th=[  684], 99.90th=[  780], 99.95th=[ 1020],
     | 99.99th=[56064]
    bw (KB  /s): min= 7168, max=46680, per=3.30%, avg=33460.79, stdev=12024.35
    lat (usec) : 2=73.62%, 4=22.38%, 10=0.05%, 20=0.03%, 50=0.01%
    lat (usec) : 100=0.01%, 250=0.03%, 500=1.93%, 750=1.75%, 1000=0.14%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
    lat (msec) : 100=0.03%, 250=0.01%
  cpu          : usr=1.83%, sys=4.30%, ctx=104743, majf=0, minf=59
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=2538284/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=9915.2MB, aggrb=991.23MB/s, minb=991.23MB/s, maxb=991.23MB/s, mint=10003msec, maxt=10003msec

Disk stats (read/write):
  vdb: ios=98460/0, merge=0/0, ticks=48840/0, in_queue=49144, util=17.28%




