On Thu, 12 Sep 2019, Gregory Farnum wrote:
> Have we given up doing any real testing on hard drives
> in the lab?
The remaining mira nodes keep being cannibalized into the LRC or as
Jenkins slaves so the numbers there keep dropping, plus they're aging
out.
I don't have a strong opinion here about whether we need to retain an HDD
pool for (presumably?) rados. FWIW I never run the rados suite on mira.
> If we're not interested in that specifically (or maybe even if we
> are), I think we should explore some more dynamic allocation of
> machines into jenkins/build slaves and out of the pools before we
> commit to buying hardware. We've got hundreds of servers that are
> mostly statically assigned to tasks, and the ratios we're interested
> in are continuing to vary as we build up more jenkins pipelines in
> some areas of the code.
I think this is an orthogonal concern that's blocked by the time and effort
needed to fiddle with jenkins. And we can't use these machines for the
primary teuthology pool because the performance and configurations are too
different, so making that investment won't reduce the delays we're now
seeing in getting teuthology test suites to complete.
That said, yes, I think it would be great to do a pass over all of the old
stuff in the lab and figure out how to make better use of it!
sage
> -Greg
>
> On Thu, Sep 12, 2019 at 8:33 AM Sage Weil <sage(a)newdream.net> wrote:
> >
> > Hi all,
> >
> > The foundation has some money to spend on hardware. Given a budget of
> > ~$200k, what would we buy?
> >
> > I see two main options:
> >
> > - More smithi-class machines. We would presumably take a closer look at
> > the available machines to make a reasonable choice, since it's been a few
> > years now... it might make sense to buy something a bit faster. I think
> > ideally, though, they would go into the same machine pool/class, since
> > that tends to lead to better overall utilization of the hardware.
> >
> > - Build machines. If we buy more beefy boxes, we could reduce our
> > reliance on OVH cloud instances, which are averaging around $15k/month.
> > It'll be a larger initial outlay, but the ongoing cost would come down
> > (particularly since we're fortunate enough to have essentially free
> > datacenter space).
> >
> > Thoughts?
> > sage
> > _______________________________________________
> > Sepia mailing list -- sepia(a)ceph.io
> > To unsubscribe send an email to sepia-leave(a)ceph.io
>
>