Hi Roman,
On Mon, Jan 13, 2020 at 5:36 PM Roman Penyaev <rpenyaev(a)suse.de> wrote:
> I do not understand. I talk about simple comparison metric for any
> storage application - IOPS. Since both storage applications
> (legacy-osd, crimson-osd) share absolutely the same Ceph spec - that
> is a fair choice.
That way you're actually thinking about IOPS from an OSD instance
*disregarding how many hardware resources it consumes* to serve your
workload. Such a comparison ignores an absolutely fundamental
difference in architecture:
* crimson-osd is single-threaded at the moment. It won't consume more
than one CPU core. That's by design.
* ceph-osd is multi-threaded. By default a single instance has up to
16 `tp_osd_tp` and 3 `msgr-worker-n` threads, which translates into a
theoretical upper bound of 19 CPU cores. In practice the usage is of
course much lower, but still far above crimson-osd's.
Both implementations share the same constraint: the amount invested in
hardware to run the cluster. How many IOPS you get from that hardware
is determined by the OSD's *computational efficiency*.
The goal is to maximize IOPS from a fixed set of hardware or,
equivalently, to minimize the hardware resources needed to deliver a
given number of IOPS.
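To make the point concrete, here is a tiny, purely illustrative Python
sketch of the normalized metric I mean: IOPS per CPU core consumed,
rather than raw IOPS per OSD instance. All numbers below are invented
for the sake of the example, not measurements of either OSD:

```python
def iops_per_core(iops: float, cpu_cores_used: float) -> float:
    """Computational efficiency: IOPS delivered per CPU core consumed."""
    return iops / cpu_cores_used

# crimson-osd is capped at one core by design; ceph-osd may use several.
# These figures are hypothetical, chosen only to illustrate the metric.
crimson_efficiency = iops_per_core(iops=50_000, cpu_cores_used=1.0)
legacy_efficiency = iops_per_core(iops=120_000, cpu_cores_used=6.0)

print(f"crimson-osd: {crimson_efficiency:.0f} IOPS/core")  # 50000 IOPS/core
print(f"ceph-osd:    {legacy_efficiency:.0f} IOPS/core")   # 20000 IOPS/core
```

With these made-up numbers the legacy OSD "wins" on raw IOPS, yet loses
on the per-core metric, which is exactly why comparing raw IOPS alone
is misleading here.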
The problem is strikingly similar to the performance-per-watt metric
and CPU power efficiency. Electrical and cooling power is a scarce
resource, just like the number of CPU cores in a Ceph cluster.
Regards,
Radek