We have our object storage endpoint FQDN round-robin DNS'd to 2 IPs.
Those 2 IPs are managed by keepalived across 3 servers running haproxy,
where each haproxy instance listens on both round-robin'd IPs and load
balances to 5 servers running radosgw.
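A minimal sketch of what one such haproxy instance could look like; the
VIP addresses, backend addresses, and names below are hypothetical
stand-ins, not a real config:

```haproxy
# Sketch only: 10.0.0.11/10.0.0.12 stand in for the two keepalived VIPs,
# rgw1..rgw5 for the five radosgw hosts (7480 is the radosgw default port).
frontend rgw_in
    mode http
    # Bind both round-robin'd VIPs; whichever server currently holds a
    # VIP (per keepalived) answers on it. Binding an address that may not
    # be local yet requires net.ipv4.ip_nonlocal_bind=1 on the host.
    bind 10.0.0.11:80
    bind 10.0.0.12:80
    default_backend rgw_out

backend rgw_out
    mode http
    balance roundrobin
    server rgw1 10.0.1.1:7480 check
    server rgw2 10.0.1.2:7480 check
    server rgw3 10.0.1.3:7480 check
    server rgw4 10.0.1.4:7480 check
    server rgw5 10.0.1.5:7480 check
```

With this layout, either VIP can fail over between the three haproxy
servers without the clients noticing, and haproxy's health checks take a
dead radosgw out of rotation.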
On Fri, Sep 4, 2020 at 12:35 PM Oliver Freyermuth
<freyermuth(a)physik.uni-bonn.de> wrote:
>
> Hi,
>
> Am 04.09.20 um 18:20 schrieb DHilsbos(a)performair.com:
> > All;
> >
> > We've been running RadosGW on our Nautilus cluster for a while, and
> > we're going to be adding iSCSI capabilities to our cluster via 2
> > additional servers.
> >
> > I intend to also run RadosGW on these servers, which raises the
> > question of how to "load balance" them. I don't believe that we need
> > true load balancing (i.e. through a dedicated proxy), and I'd rather
> > not add the complexity or a single point of failure.
> >
> > The question then is: does RadosGW play nicely with round-robin DNS?
> > The real question here is whether RadosGW maintains internal client
> > state locally between connections. I would expect it's safe, given
> > that it is HTTP, but I'd prefer to verify.
>
> I am also very interested in the answer: we have been operating 3 RGW
> instances with DNS load balancing for over a year, but never asked this
> question and did not observe any issues. It would of course be nice to
> see this confirmed ;-).
>
> However, most of our clients use systemd-resolved for DNS caching, as
> most Linux distributions do nowadays. This breaks load balancing on the
> client side (i.e. each client stays with the address chosen by its
> initial DNS query for the cache period) [0], so if there is an issue,
> it might be hidden by the way most of our client systems behave.
>
> Cheers,
> Oliver
>
> [0] https://github.com/systemd/systemd/issues/16297
>
>
> >
> > Thank you,
> >
> > Dominic L. Hilsbos, MBA
> > Director - Information Technology
> > Perform Air International Inc.
> > DHilsbos(a)PerformAir.com
> >
> > www.PerformAir.com
> >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users(a)ceph.io
> > To unsubscribe send an email to ceph-users-leave(a)ceph.io
> >
>
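On the systemd-resolved point: a client that wants real spreading can
resolve all A records itself and rotate across them per request, instead
of trusting the stub resolver's cached answer. A minimal Python sketch
(the addresses are hypothetical stand-ins for the two VIPs):

```python
# Sketch: resolve every A record behind the endpoint name and rotate
# across them per request, instead of relying on the stub resolver
# (systemd-resolved caches one answer for the TTL, pinning the client).
import itertools
import socket

def resolve_all(hostname, port=80):
    """Return all distinct IPv4 addresses behind a name."""
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET,
                               socket.SOCK_STREAM)
    # de-duplicate while preserving the order the resolver returned
    return list(dict.fromkeys(info[4][0] for info in infos))

def address_cycle(addresses):
    """Round-robin iterator over the resolved addresses."""
    return itertools.cycle(addresses)

# Hypothetical stand-ins for the two round-robin'd VIPs:
rr = address_cycle(["10.0.0.11", "10.0.0.12"])
print(next(rr), next(rr), next(rr))  # 10.0.0.11 10.0.0.12 10.0.0.11
```

This only helps clients you control, of course; off-the-shelf S3 clients
will still go through the system resolver.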