I haven't run fio against a single disk, but I did run fio against the Ceph cluster. The
cluster has 12 nodes, and each node has the same disks (2 NVMes for cache, 3 SSDs as
OSDs, and 4 HDDs also as OSDs).
Only two nodes have this problem, and those two nodes have crashed many times (at least 4
times). The others are fine, which is strange.
This cluster has been running for more than half a year.
Thanks,
zx
On 22 Feb 2021, at 18:37, Marc
<Marc(a)f1-outsourcing.eu> wrote:
Could your problems simply be because the Samsung 970 PRO is not suitable for this?
Have you run fio tests to make sure it would work ok?
https://yourcmc.ru/wiki/Ceph_performance
https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_…
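For reference, the kind of test usually suggested for judging journal/db/wal suitability is a single-threaded 4k synchronous write run against the raw device. A minimal fio job file might look like the sketch below; the device path is a placeholder, and note that a raw-device write test is destructive, so only run it on a disk whose data you do not need:

```ini
; Sketch of a 4k sync-write fio job, the pattern that roughly
; approximates Ceph journal/WAL load.
; filename is a PLACEHOLDER -- destructive, use a spare device!
[global]
ioengine=libaio
direct=1
sync=1
time_based
runtime=60

[wal-test]
filename=/dev/nvme0n1
rw=write
bs=4k
iodepth=1
numjobs=1
```

Consumer drives without power-loss protection often show very low IOPS under this pattern even when their headline benchmark numbers look good, which is why it is the usual acceptance test before using a drive for Ceph metadata.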
-----Original Message-----
Sent: 22 February 2021 03:16
users(a)ceph.io>
Subject: [ceph-users] Re: Ceph nvme timeout and then aborting
Thanks for your reply!
Yes, it is an NVMe. Each node has two NVMes for db/wal: one for the SSDs (0-2)
and another for the HDDs (3-6).
I have no spare drive to try.
It's very strange: the load was not very high at that time, and both the SSDs
and the NVMe seem healthy.
If I cannot fix it, I am afraid I will need to set up more nodes and then
remove the OSDs that use this NVMe?
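If you do evacuate the OSDs backed by the suspect NVMe, the usual sequence is to mark them out, let the data migrate off, then stop and purge each one. A rough sketch (the OSD IDs 3-6 are assumed from the layout above; adjust them to your cluster, and only proceed once the cluster is otherwise healthy):

```
# Sketch only -- OSD ids are placeholders for the OSDs
# sharing the failing NVMe.
for id in 3 4 5 6; do
    ceph osd out "$id"       # begin migrating data off the OSD
done

# wait until recovery finishes and all PGs are active+clean
ceph -s

# then, per OSD: stop the daemon and remove it from the cluster
systemctl stop ceph-osd@3
ceph osd purge 3 --yes-i-really-mean-it
```

Doing this one OSD (or one node) at a time keeps the amount of in-flight recovery manageable and avoids reducing redundancy more than necessary.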
Thanks,
zx
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io