Benoît, what are the write cache settings in your case?
And do you see any difference after disabling it, if any?
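For reference, this is roughly how the volatile write cache can be inspected and disabled per drive on Linux. This is a sketch, not a recommendation for your exact setup: `/dev/sdX` is a placeholder for one of the Toshiba drives, and which tool applies depends on whether the drive is attached as SATA or SAS/SCSI.

```shell
# SATA drives: check whether the volatile write cache is enabled
hdparm -W /dev/sdX

# SATA drives: disable the write cache
# (hdparm settings are typically not persistent across power cycles,
# so a udev rule or startup script is often used to reapply this)
hdparm -W 0 /dev/sdX

# SAS/SCSI drives: query and clear the WCE (write cache enable) bit
sdparm --get=WCE /dev/sdX
sdparm --set=WCE=0 --save /dev/sdX
```

Note that with Ceph it is worth testing this on a few OSDs first and watching `ceph_osd_commit_latency_ms` before and after, since the effect of disabling the cache varies by drive model and firmware.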
Thanks,
Igor
On 6/24/2020 3:16 PM, Mark Nelson wrote:
> This isn't the first time I've seen drive cache cause problematic
> latency issues, and not always from the same manufacturer.
> Unfortunately it seems like you really have to test the drives you
> want to use before deploying them to make sure you don't run into
> issues.
>
>
> Mark
>
>
> On 6/24/20 6:36 AM, Stefan Priebe - Profihost AG wrote:
>> Hi Ben,
>>
>> Yes, we had the same issues and switched to Seagate for that reason.
>>
>> You can fix at least a big part of it by disabling the write cache of
>> those drives - generally speaking, it seems the Toshiba firmware is
>> broken.
>>
>> I was not able to find a newer firmware version.
>>
>> Greets,
>> Stefan
>>
>> On 24.06.20 at 09:43, Benoît Knecht wrote:
>>> Hi,
>>>
>>> We have a Nautilus (14.2.9) Ceph cluster with two types of HDDs:
>>>
>>> - TOSHIBA MG07ACA14TE [1]
>>> - HGST HUH721212ALE604 [2]
>>>
>>> They're all bluestore OSDs with no separate DB+WAL and part of the
>>> same pool.
>>>
>>> We noticed that while the HGST OSDs have a commit latency of about
>>> 15ms, the Toshiba OSDs hover around 150ms (these values come from
>>> the `ceph_osd_commit_latency_ms` metric in Prometheus).
>>>
>>> On paper, it seems like those drives have very similar specs, so
>>> it's not clear to me why we're seeing such a large difference when
>>> it comes to commit latency.
>>>
>>> Has anyone had any experience with those Toshiba drives? Or looking
>>> at the specs, do you spot anything suspicious?
>>>
>>> And if you're running a Ceph cluster with various disk
>>> brands/models, have you ever noticed some of them standing out when
>>> looking at `ceph_osd_commit_latency_ms`?
>>>
>>> Thanks in advance for your feedback.
>>>
>>> Cheers,
>>>
>>> --
>>> Ben
>>>
>>> [1]:
>>>
https://toshiba.semicon-storage.com/content/dam/toshiba-ss/asia-pacific/doc…
>>> [2]:
>>>
https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/p…
>>> _______________________________________________
>>> ceph-users mailing list -- ceph-users(a)ceph.io
>>> To unsubscribe send an email to ceph-users-leave(a)ceph.io