On Mon, Apr 6, 2020 at 11:29 AM <ulrich.weigand(a)de.ibm.com> wrote:
Hi Sage,
I can provide some estimates and general guidance.
Thanks, this is really helpful!
First, the test suites are currently targeted to run on 'smithi' nodes,
which are relatively low-powered x86 1U machines with a single NVMe
divided into 4 scratch LVs (+ an HDD for boot + logs). (This is somewhat
arbitrary--it's just the hardware we picked, so the tests are written to
target that.)
What's the size of those LVs? Do the tests require a particular minimum size?
See more about the hardware here:
https://wiki.sepia.ceph.com/doku.php?id=hardware:smithi
cmeno@smithi110:~$ sudo pvs
  PV           VG      Fmt  Attr PSize    PFree
  /dev/nvme0n1 vg_nvme lvm2 a--  <372.61g 100.00m
cmeno@smithi110:~$ sudo lvs
  LV   VG      Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_1 vg_nvme -wi-a----- 89.40g
  lv_2 vg_nvme -wi-ao---- 89.40g
  lv_3 vg_nvme -wi-ao---- 89.40g
  lv_4 vg_nvme -wi-ao---- 89.40g
  lv_5 vg_nvme -wi-ao---- 14.90g
cmeno@smithi110:~$
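As a quick sanity check on those numbers (just the arithmetic, nothing Ceph-specific): the four 89.40 GiB scratch LVs plus the 14.90 GiB LV account for essentially the whole physical volume, which matches the ~100 MiB of PFree shown above:

```shell
# Sum the five LV sizes (GiB) from the lvs output above:
# four 89.40 GiB scratch LVs plus one 14.90 GiB LV.
total=$(awk 'BEGIN { printf "%.2f", 89.40 * 4 + 14.90 }')
echo "Total LV size: ${total} GiB"   # 372.50 GiB, vs. a <372.61 GiB PV
```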
cheers,
Christina
I've been telling the aarch64 folks that we probably want at least 25-50
similarly-sized nodes in order to run the test suites in a reasonable
amount of time (e.g., the minimal rados suite in about a day, not days).
I'm not really sure how this maps on the Z hardware, but hopefully this
provides some guidance!
Yes, it does! We'll certainly have to tune this a bit depending on how
everything performs on Z hardware, but this gives us a starting point.
Bye,
Ulrich
_______________________________________________
Dev mailing list -- dev(a)ceph.io
To unsubscribe send an email to dev-leave(a)ceph.io