I did a quick test with the write cache off [1], and have the impression
that a simple 2-minute rados bench performed a bit worse on my slow HDDs.
[1]
# For each mounted OSD: stop the daemon, disable the drive's write
# cache with smartctl, then start the daemon again.
# Note: the sed 's/1 / /' strips the partition number and assumes it is 1.
IFS=$'\n' && for line in `mount | grep 'osd/ceph' | awk '{print $1" "$3}' | sed -e 's/1 / /' -e 's#/var/lib/ceph/osd/ceph-##'`; do
    IFS=' '
    arr=($line)   # arr[0] = block device, arr[1] = OSD id
    service ceph-osd@${arr[1]} stop && smartctl -s wcache,off ${arr[0]} && service ceph-osd@${arr[1]} start
done
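The parsing in the loop above can be made a bit more robust; here is a
minimal sketch of extracting the device and OSD id from a sample mount
line (the sample line and the trailing-digit strip are my assumptions,
not from the original script, which only handles partition number 1):

```shell
# Hypothetical mount line, as 'mount' would print it for an OSD.
line='/dev/sdb1 on /var/lib/ceph/osd/ceph-5 type xfs (rw,noatime)'

# Strip any trailing partition digits to get the whole-disk device.
dev=$(echo "$line" | awk '{print $1}' | sed 's/[0-9]*$//')
# Take everything after the last 'ceph-' to get the OSD id.
id=$(echo "$line" | awk '{print $3}' | sed 's#.*/ceph-##')

echo "$dev $id"   # prints: /dev/sdb 5
```

This avoids the `sed 's/1 / /'` trick, which silently fails for any
partition number other than 1.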
-----Original Message-----
To: Paul Emmerich
Cc: BenoƮt Knecht; s.priebe(a)profihost.ag; ceph-users(a)ceph.io
Subject: [ceph-users] Re: High ceph_osd_commit_latency_ms on Toshiba MG07ACA14TE HDDs
Hi,
https://yourcmc.ru/wiki/Ceph_performance author here %)
Disabling the write cache is REALLY bad for SSDs without capacitors
(i.e. consumer SSDs), and it's also bad for HDDs whose firmware doesn't
have this bug-o-feature. The bug is really common, though; I have no
idea where it comes from. On those drives, when you "disable" the write
cache you actually "enable" a non-volatile write cache. Seagate EXOS
drives also behave like that: it seems most EXOS drives have an SSD
cache even though it's not mentioned in the specs, and it gets enabled
when you do hdparm -W 0. In theory, though, hdparm -W 0 may hurt linear
write performance even on those HDDs.
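For reference, the cache state can be queried before and after toggling
it; a minimal sketch, assuming a SATA drive at /dev/sdb (the device
name is a placeholder):

```shell
# Query the current write-cache setting (two independent views of it).
hdparm -W /dev/sdb            # reports "write-caching = 1 (on)" or 0 (off)
smartctl -g wcache /dev/sdb   # reports "Write cache is: Enabled/Disabled"

# Disable the volatile write cache.
hdparm -W 0 /dev/sdb
```

Note that hdparm -W 0 does not persist across power cycles on all
drives, whereas smartctl -s wcache,off goes through the SCSI/ATA
settings and may behave differently per firmware.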
Well, what I was saying was "does it hurt to unconditionally run
hdparm -W 0 on all disks?"
Which disk would suffer from this? I haven't seen any disk where this
would be a bad idea.
Paul
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io