Wow, 34K IOPS at 4k iodepth 1 😊
How many nodes, SSDs and what network?
I can't find any firmware for the LSI card anymore...
-----Original Message-----
From: Marc Roos <M.Roos(a)f1-outsourcing.eu>
Sent: Tuesday, September 1, 2020 23:33
To: VELARTIS Philipp Dürhammer <p.duerhammer(a)velartis.at>; reed.dier <reed.dier(a)focusvq.com>
Cc: ceph-users <ceph-users(a)ceph.io>
Subject: RE: [ceph-users] Re: Can 16 server grade ssd's be slower than 60 hdds? (no extra journals)
Sorry, I am not fully aware of what has already been discussed in this thread, but can't you flash these LSI Logic cards to JBOD? I have done this with my 9207 using sas2flash.
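For reference, the flash sequence on my 9207 was roughly the following (the firmware and BIOS file names here are only examples; use the ones from the IT-mode package for your exact card):

sas2flash -listall
sas2flash -o -f 9207-8.bin -b mptsas2.rom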
I have attached my fio test of the Micron 5100 Pro/5200 SSDs (MTFDDAK1T9TCC). They perform similarly to my Samsung SM863a 1.92TB. The only weird thing is that the rw-4k result is 3x slower on the Micron.
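The kind of fio job I use for this is roughly the following (the device path is a placeholder, and note that it overwrites data on the device):

fio --name=4k-randwrite --filename=/dev/sdX --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=1 --runtime=60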
-----Original Message-----
To: 'Reed Dier'
Cc: 'ceph-users(a)ceph.io'
Subject: [ceph-users] Re: Can 16 server grade ssd's be slower than 60 hdds? (no extra journals)
Thank you. I was already working in this direction. The situation is a lot better now, but I think I can still get far better.
I could set the controller to writethrough, direct, and no read-ahead for the SSDs.
But I cannot disable the pdcache ☹. There is an option set in the controller, "Block SSD Write Disk Cache Change = Yes", which does not permit deactivating the SSD cache. I could not find any solution on Google for changing this setting on this controller (LSI MegaRAID SAS 9271-8i).
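At least the current state can be inspected with storcli, roughly like this (assuming controller 0; adjust the selectors to your setup):

storcli /c0 show all | grep -i "write disk cache"
storcli /c0/vall show all | grep -i cache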
I don't know how much performance gain deactivating the SSD cache will bring. At least the Micron 5200 MAX has capacitors, so I hope it is safe against data loss in case of a power failure. I wrote a request to LSI/Broadcom asking whether they know how I can change this setting. This is really annoying.
I will check the CPU power settings. I also read somewhere that they can improve IOPS a lot (if they are set badly).
At the moment I get 600 IOPS at 4k random write with 1 thread and iodepth 1. I get 40K 4k random IOPS for some instances with iodepth 32. It's not spectacular, but a lot better than before. Reads are around 100K IOPS. That is for 16 SSDs and 2 x dual 10G NICs.
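For reference, a single-client 4k queue-depth-1 run against the pool can be reproduced with fio's rbd engine, roughly like this (the pool and image names are placeholders):

fio --name=4k-qd1 --ioengine=rbd --clientname=admin --pool=ssd-pool --rbdname=testimg --direct=1 --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --runtime=60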
I was reading that good tuning and hardware configuration can get more than 2000 IOPS out of the SSDs on a single thread. I know that Ceph does not shine with a single thread, but 600 IOPS is not very much...
philipp
-----Original Message-----
From: Reed Dier <reed.dier(a)focusvq.com>
Sent: Tuesday, September 1, 2020 22:37
To: VELARTIS Philipp Dürhammer <p.duerhammer(a)velartis.at>
Cc: ceph-users(a)ceph.io
Subject: Re: [ceph-users] Can 16 server grade ssd's be slower than 60 hdds? (no extra journals)
If using storcli/perccli for manipulating the LSI controller, you can disable the on-disk
write cache with:
storcli /cx/vx set pdcache=off
You can also ensure that you turn off write caching at the controller level with:
storcli /cx/vx set iopolicy=direct
storcli /cx/vx set wrcache=wt
You can also tweak the readahead value for the vd if you want, though with an ssd, I
don't think it will be much of an issue.
storcli /cx/vx set rdcache=nora
I'm sure the megacli alternatives are available with some quick searches.
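The rough MegaCli equivalents should be something like this (double-check the exact syntax against your MegaCli version; the -LAll/-aAll selectors apply to all logical drives on all adapters):

MegaCli -LDSetProp WT -LAll -aAll
MegaCli -LDSetProp Direct -LAll -aAll
MegaCli -LDSetProp NORA -LAll -aAll
MegaCli -LDSetProp -DisDskCache -LAll -aAll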
You may also want to check your C-states and P-states to make sure there aren't any aggressive power-saving features getting in the way.
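A quick way to check and pin them (assuming the cpupower tool is installed):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cpupower idle-info
cpupower frequency-set -g performance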
Reed
On Aug 31, 2020, at 7:44 AM, VELARTIS Philipp Dürhammer <p.duerhammer(a)velartis.at> wrote:
We have older LSI RAID controllers with no HBA/JBOD option, so we expose the single disks as RAID0 devices. Ceph should not be aware of the cache status?
But digging deeper into it, it seems that 1 out of 4 servers is performing a lot better and has super low commit/apply latencies, while the others have a lot more (20+) on heavy writes. This just applies for the SSDs; for the HDDs I can't see a difference...
-----Original Message-----
From: Frank Schilder <frans(a)dtu.dk>
Sent: Monday, August 31, 2020 13:19
To: VELARTIS Philipp Dürhammer <p.duerhammer(a)velartis.at>; 'ceph-users(a)ceph.io' <ceph-users(a)ceph.io>
Subject: Re: Can 16 server grade ssd's be slower than 60 hdds? (no extra journals)
Yes, they can, if the volatile write cache is not disabled. There are many threads on this, including recent ones. Search for "disable write cache" and/or "disable volatile write cache".
You will also find different methods of doing this automatically.
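For directly attached disks, the manual variants are roughly these (device names are placeholders):

hdparm -W 0 /dev/sdX          # SATA
sdparm --set WCE=0 /dev/sdX   # SAS

and for the automatic route, an example udev rule could look like this (the rule file path and the match pattern are only an illustration):

ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/usr/sbin/hdparm -W 0 /dev/%k"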
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: VELARTIS Philipp Dürhammer <p.duerhammer(a)velartis.at>
Sent: 31 August 2020 13:02:45
To: 'ceph-users(a)ceph.io'
Subject: [ceph-users] Can 16 server grade ssd's be slower than 60 hdds? (no extra journals)
I have a production cluster with 60 OSDs and no extra journals. It is performing okay. Now I added an extra SSD pool with 16 Micron 5100 MAX, and the performance is slightly slower than or equal to the 60-HDD pool, for 4K random as well as sequential reads. All on a dedicated 2 x 10G network.
The HDDs are still on FileStore; the SSDs are on BlueStore. Ceph Luminous.
What should be possible with 16 SSDs vs. 60 HDDs and no extra journals?
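(For a rough apples-to-apples comparison of the two pools, something like rados bench should work, e.g.

rados bench -p ssd-pool 60 write -b 4096 -t 32 --no-cleanup
rados bench -p ssd-pool 60 rand -t 32

where ssd-pool is a placeholder for the actual pool name.)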
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io