Hello,
> Finer grained ability to allocate resources to services. (This process
> gets 2g of ram and 1 cpu)
Do you really believe this is a benefit? How can it be a benefit to have
crashing or slow OSDs? It sounds cool, but it doesn't work in most
environments I have ever had my hands on.
We often encounter clusters that fall apart or have a meltdown just because
they run out of memory, and we use tricks like zram to help them out and
recover their clusters. If I now enforce limits per container/OSD in a finer
grained way, it will just blow up even more.
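For reference, the per-service limits being debated here are typically declared in the container runtime or orchestrator configuration. A minimal compose-style sketch, using the example figures from the thread (the image tag and values are illustrative assumptions, not a recommended Ceph deployment spec):

```yaml
# Illustrative compose-style fragment -- not an official Ceph deployment spec.
services:
  osd:
    image: quay.io/ceph/ceph:v16   # hypothetical image tag
    deploy:
      resources:
        limits:
          memory: 2g    # "2g of ram" from the example above
          cpus: "1.0"   # "1 cpu"
```

The point of contention is what the runtime does when such a limit is hit: a hard memory cap means the process is killed (OOM) rather than merely slowed, which is exactly the failure mode described for memory-starved OSDs.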
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges(a)croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
On Wed, Mar 17, 2021 at 6:59 PM Fox, Kevin M <Kevin.Fox(a)pnnl.gov> wrote:
> There are a lot of benefits to containerization that are hard to achieve
> without it.
> Finer grained ability to allocate resources to services. (This process
> gets 2g of ram and 1 cpu)
> Security is better: only minimal software is available within the
> container, so on service compromise it's harder to escape.
> Ability to run exactly what was tested / released by upstream. Fewer
> issues with version mismatches. Especially useful across different distros.
> Easier to implement orchestration on top which enables some of the
> advanced features such as easy to allocate iscsi/nfs volumes. Ceph is
> finally doing so now that it is focusing on containers.
> And much more.
>
> ________________________________________
> From: Teoman Onay <tonay(a)redhat.com>
> Sent: Wednesday, March 17, 2021 10:38 AM
> To: Matthew H
> Cc: Matthew Vernon; ceph-users
> Subject: [ceph-users] Re: ceph-ansible in Pacific and beyond?
>
>
>
> A containerized environment just makes troubleshooting more difficult:
> getting access to and retrieving details on Ceph processes isn't as
> straightforward as with a non-containerized infrastructure. I am still not
> convinced that containerizing everything brings any benefit except the
> collocation of services.
>
> On Wed, Mar 17, 2021 at 6:27 PM Matthew H <matthew.heler(a)hotmail.com>
> wrote:
>
> > There should not be any performance difference between an
> un-containerized
> > version and a containerized one.
> >
> > The shift to containers makes sense, as this is the general direction
> > that the industry as a whole is taking. I would suggest giving cephadm a
> > try; it's relatively straightforward and significantly faster for
> > deployments than ceph-ansible is.
> >
> > ________________________________
> > From: Matthew Vernon <mv3(a)sanger.ac.uk>
> > Sent: Wednesday, March 17, 2021 12:50 PM
> > To: ceph-users <ceph-users(a)ceph.io>
> > Subject: [ceph-users] ceph-ansible in Pacific and beyond?
> >
> > Hi,
> >
> > I caught up with Sage's talk on what to expect in Pacific (
> > https://gcc02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.youtu…
> > ) and there was no mention of ceph-ansible at all.
> >
> > Is it going to continue to be supported? We use it (and uncontainerised
> > packages) for all our clusters, so I'd be a bit alarmed if it was going
> > to go away...
> >
> > Regards,
> >
> > Matthew
> >
> >
> > --
> > The Wellcome Sanger Institute is operated by Genome Research
> > Limited, a charity registered in England with number 1021457 and a
> > company registered in England with number 2742969, whose registered
> > office is 215 Euston Road, London, NW1 2BE.
> > _______________________________________________
> > ceph-users mailing list -- ceph-users(a)ceph.io
> > To unsubscribe send an email to ceph-users-leave(a)ceph.io
> >
> >
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>