Hey Igor,
we are currently using these disks - all SATA attached (is it normal for
some OSDs to have no wear counter?):
# ceph device ls | awk '{print $1}' | cut -f 1,2 -d _ | sort | uniq -c
18 SAMSUNG_MZ7KH3T8 (4TB)
126 SAMSUNG_MZ7KM1T9 (2TB)
24 SAMSUNG_MZ7L37T6 (8TB)
1 TOSHIBA_THNSN81Q (2TB) (ceph device ls shows 16% wear, so we may
remove this one)
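(For the drives where the wear counter is missing, one way to cross-check
is to read the SMART data directly; a minimal sketch - the device id and
/dev path are placeholders:
# ceph device get-health-metrics <devid>
# smartctl -A /dev/sdX | grep -i wear
On Samsung SATA SSDs the wear level is typically SMART attribute 177,
Wear_Leveling_Count.)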
These are the CPUs in the storage hosts:
# ceph osd metadata | grep -F '"cpu": "' | sort -u
"cpu": "Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz",
"cpu": "Intel(R) Xeon(R) Silver 4116 CPU @ 2.10GHz",
The hosts have between 128GB and 256GB of memory and each has between 20
and 30 OSDs.
DB and OSD data share the same device; there is no extra device for DB/WAL.
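(If useful for comparison, the colocation also shows up in the OSD
metadata; a minimal check - osd.0 is just an example, and with everything
on one device I'd expect the bluefs_dedicated_db/bluefs_dedicated_wal
fields to be "0":
# ceph osd metadata 0 | grep bluefs_dedicated)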
Judging by your IOPS numbers, it looks like we are around the same level.
I am curious if the performance will stay at the current level or degrade
over time.
On Mon, 27 Mar 2023 at 13:42, Igor Fedotov <igor.fedotov(a)croit.io> wrote:
Hi Boris,
I wouldn't recommend taking absolute "osd bench" numbers too seriously.
It's definitely not a full-scale quality benchmark tool.
The idea was just to make a brief comparison of the OSDs from c1 and c2.
And for your reference, here are the IOPS numbers I'm getting in my lab
with data/DB colocated:
1) OSD on top of Intel S4600 (SATA SSD) - ~110 IOPS
2) OSD on top of Samsung DCT 983 (M.2 NVMe) - 310 IOPS
3) OSD on top of Intel 905p (Optane NVMe) - 546 IOPS.
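For anyone who wants to reproduce such a measurement, a single-OSD run of
the built-in bench looks roughly like the sketch below (osd.0 and the
sizes are just examples; this writes 12MB in 4KB IOs, which should be
within the default bench limits for that block size, and reports the
achieved throughput and IOPS in the output):
# ceph tell osd.0 bench 12288000 4096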
Could you please provide a bit more info on the H/W and OSD setup?
What are the disk models? NVMe or SATA? Are DB and main disk shared?
Thanks,
Igor