We have older LSI RAID controllers with no HBA/JBOD option, so we expose the single disks
as RAID0 devices. Shouldn't Ceph then be unaware of the cache status?
But digging deeper into it, it seems that 1 out of 4 servers is performing a lot better and
has super low commit/apply latencies, while the others show a lot more (20+) on heavy writes.
This only applies to the SSDs; for the HDDs I can't see a difference...
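One way to check whether the drives themselves behave differently across the four servers is a sync-write fio run against each SSD: `--sync=1 --direct=1` forces flushed 4K writes, which is exactly the pattern where a volatile write cache makes or breaks latency. A minimal sketch (device names are placeholders; it only prints the commands as a dry run, since running fio against a raw device is destructive):

```shell
# Hypothetical device names; substitute the RAID0 VDs backing your SSD OSDs.
# Dry run: the fio command is only printed. Remove 'echo' to actually run it
# (DESTRUCTIVE: writes directly to the device).
for dev in /dev/sdb /dev/sdc; do
  echo fio --name=synctest --filename="$dev" --rw=write --bs=4k \
       --sync=1 --direct=1 --iodepth=1 --numjobs=1 --runtime=30 --time_based
done
```

If the one fast server reports vastly higher sync-write IOPS than the others, the difference is below Ceph, in the drive/controller cache settings.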
-----Original Message-----
From: Frank Schilder <frans(a)dtu.dk>
Sent: Monday, 31 August 2020 13:19
To: VELARTIS Philipp Dürhammer <p.duerhammer(a)velartis.at>;
'ceph-users(a)ceph.io' <ceph-users(a)ceph.io>
Subject: Re: Can 16 server grade ssd's be slower than 60 hdds? (no extra journals)
Yes, they can - if the volatile write cache is not disabled. There are many threads on this,
some of them recent. Search for "disable write cache" and/or "disable volatile
write cache".
You will also find different methods of doing this automatically.
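Assuming plain SATA/SAS drives behind the RAID0 VDs that honor standard cache commands (on some MegaRAID setups you may need the vendor CLI instead), disabling the volatile write cache looks roughly like this sketch. Device names are placeholders, and the commands are only printed as a dry run:

```shell
# Hypothetical device names; substitute the devices backing your OSDs.
# Dry run: commands are only printed. Remove 'echo' to actually apply.
for dev in /dev/sdb /dev/sdc; do
  # hdparm -W 0 disables the drive's volatile write cache (SATA)
  echo hdparm -W 0 "$dev"
  # sdparm equivalent for SAS/SCSI devices; --save persists the setting
  echo sdparm --set=WCE=0 --save "$dev"
done
```

For the automatic variants people mention, a udev rule that runs the same command on device add/change is a common approach, so the setting survives reboots and disk replacements.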
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: VELARTIS Philipp Dürhammer <p.duerhammer(a)velartis.at>
Sent: 31 August 2020 13:02:45
To: 'ceph-users(a)ceph.io'
Subject: [ceph-users] Can 16 server grade ssd's be slower than 60 hdds? (no extra
journals)
I have a production cluster with 60 OSDs and no extra journals. It's performing okay. Now I
added an extra SSD pool with 16 Micron 5100 MAX. And the performance is a little slower than
or equal to the 60-HDD pool, for 4K random as well as sequential reads. All on a dedicated
2x 10G network. The HDDs are still on FileStore, the SSDs on BlueStore. Ceph Luminous.
What should be possible: 16 SSDs vs. 60 HDDs, no extra journals?