Ah, OK, misunderstood the question.
In my experience, no. I run the corresponding smartctl command on every drive just before
the OSD daemon starts. I use smartctl because the same command applies to both SAS and
SATA drives (otherwise you need to choose between hdparm and sdparm). All SAS drives I
have received came with the write cache disabled by default, however.
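As a rough sketch of that pre-start step (assumptions: a reasonably recent smartmontools
is installed, and the `/dev/sd?` glob matches your data drives; adapt both to your setup):

```shell
#!/bin/sh
# Sketch: disable the volatile write cache on each drive before starting
# the OSD daemons. smartctl's "-s wcache,off" accepts the same syntax for
# SATA and SAS drives, unlike hdparm (ATA only) / sdparm (SCSI only).
# The device glob below is an example; adjust it for your hosts.

# Build the command for one device; kept as a function so a wrapper can
# log or execute it per drive.
wcache_off_cmd() {
    echo "smartctl -s wcache,off $1"
}

# Print (rather than run) the commands for all matching block devices.
for dev in /dev/sd?; do
    if [ -b "$dev" ]; then
        wcache_off_cmd "$dev"
    fi
done
```

Running the printed commands (e.g. via a systemd unit ordered before the OSD service) is
left to the operator; this only shows the per-drive invocation.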
I think the blog post gives a very good explanation of why disabling the volatile write
cache on any drive is either beneficial or has no effect, and is therefore always safe
(and recommended). At least that is how I read it, and I have no contradicting evidence.
To get back to the last part of your question: I think if the OSD daemon just did this by
default, a lot of people would have a better life.
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Paul Emmerich <paul.emmerich@croit.io>
Sent: 24 June 2020 17:39:16
To: Frank Schilder
Cc: Frank R; Benoît Knecht; s.priebe@profihost.ag; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: High ceph_osd_commit_latency_ms on Toshiba MG07ACA14TE HDDs
Well, what I was saying was: "does it hurt to unconditionally run hdparm -W 0 on all
disks?" Which disk would suffer from this? I haven't seen any disk where this would be a
bad idea.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at
https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Wed, Jun 24, 2020 at 5:35 PM Frank Schilder
<frans@dtu.dk> wrote:
Yes, a non-volatile write cache helps, as described in the wiki. When you disable the
write cache with hdparm, it actually disables only the volatile write cache. That's why
SSDs with power-loss protection are recommended for Ceph.
A SAS/SATA SSD without any write cache will perform poorly no matter what.
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Paul Emmerich <paul.emmerich@croit.io>
Sent: 24 June 2020 17:30:51
To: Frank R
Cc: Benoît Knecht; s.priebe@profihost.ag; ceph-users@ceph.io
Subject: [ceph-users] Re: High ceph_osd_commit_latency_ms on Toshiba MG07ACA14TE HDDs
Has anyone ever encountered a drive with a write cache that actually
*helped*?
I haven't.
As in: would it be a good idea for the OSD to just disable the write cache
on startup? Worst case it doesn't do anything, best case it improves
latency.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at
https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Wed, Jun 24, 2020 at 3:49 PM Frank R
<frankaritchie@gmail.com> wrote:
FYI, there is an interesting note on disabling the write cache here:
https://yourcmc.ru/wiki/index.php?title=Ceph_performance&mobileaction=t…
On Wed, Jun 24, 2020 at 9:45 AM Benoît Knecht <bknecht@protonmail.ch> wrote:
Hi Igor,
Igor Fedotov wrote:
> for the sake of completeness one more experiment please if possible:
>
> turn off write cache for HGST drives and measure commit latency once again.
I just did the same experiment with HGST drives, and disabling the write cache on those
drives brought the latency down from about 7.5ms to about 4ms.
So it seems disabling the write cache across the board would be advisable in our case.
Is it recommended in general, or specifically when the DB+WAL is on the same hard drive?
Stefan, Mark, are you disabling the write cache on your HDDs by default?
Cheers,
--
Ben
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io