object? If it's not possible, would the best course of action be to have
standby hardware and quickly recreate the node, or perhaps to run the
gateways more ephemerally, from a VM or container?
Thanks again.
Respectfully,
*Wes Dillingham*
wes@wesdillingham.com
LinkedIn <http://www.linkedin.com/in/wesleydillingham>
On Tue, Dec 3, 2019 at 2:45 PM Mike Christie <mchristi@redhat.com> wrote:
I do not think it's going to do what you want when the node you want to
delete is down.
It looks like we only temporarily stop the gw from being exported. It
does not update the gateway.conf config object, because we make the
config removal call on the node we want to delete.
So gwcli will report success and the ls command will show the gateway as
no longer running/exported, but if you restart the rbd-target-api
service it will show up again.
There is an internal command to do what you want. I will post a PR for
gwcli so it can be used by the dashboard.
On 12/03/2019 01:19 PM, Jason Dillaman wrote:
If I recall correctly, the recent ceph-iscsi
release supports the
removal of a gateway via the "gwcli". I think the Ceph dashboard can
do that as well.
On Tue, Dec 3, 2019 at 1:59 PM Wesley Dillingham
<wes@wesdillingham.com> wrote:
>
> We utilize 4 iSCSI gateways in a cluster and have noticed the
> following during patching cycles, when we sequentially reboot single
> iSCSI gateways:
>
> "gwcli" often hangs on the still-up iSCSI gateways, but sometimes it
> still functions and gives the message:
>
> "1 gateway is inaccessible - updates will be disabled"
>
> This got me thinking about the course of action should an iSCSI
> gateway fail permanently or semi-permanently, say from a hardware
> issue. What would be the best way to instruct the remaining iSCSI
> gateways that one of them is no longer available, so that they allow
> updates again and take ownership of the now-defunct node's LUNs?
>
> I'm guessing that pulling down the RADOS config object, rewriting it,
> and re-putting it, followed by an rbd-target-api restart, might do the
> trick, but I am hoping there is a more "in-band" and less potentially
> devastating way to do this.
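[Editor's note: the config-rewrite workaround described above can be sketched in Python. This is a hedged illustration only: the exact JSON schema of the ceph-iscsi config object varies by version, and the top-level "gateways" map and per-target "portals" map used here are simplified assumptions, as is the object name gateway.conf in the rbd pool. The function only edits an in-memory copy; wiring it to rados get/put and restarting rbd-target-api on the surviving nodes is left to the operator.]

```python
import copy

def remove_gateway(cfg, hostname):
    """Return a copy of a ceph-iscsi-style config dict with all
    entries for the given (dead) gateway hostname removed.

    Assumes a simplified schema: a top-level "gateways" map keyed by
    hostname, and a per-target "portals" map keyed the same way.
    """
    cfg = copy.deepcopy(cfg)  # never mutate the pulled-down config in place
    cfg.get("gateways", {}).pop(hostname, None)
    for target in cfg.get("targets", {}).values():
        target.get("portals", {}).pop(hostname, None)
    return cfg

# Example with a synthetic two-gateway config:
config = {
    "gateways": {"gw1": {}, "gw2": {}},
    "targets": {
        "iqn.2019-12.com.example:target1": {
            "portals": {"gw1": {}, "gw2": {}},
        },
    },
}
cleaned = remove_gateway(config, "gw2")
```

Keeping a backup of the original object before the put (and stopping the surviving rbd-target-api instances while editing) would reduce the blast radius of a mistake.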
>
> Thanks for any insights.
>
> Respectfully,
>
> Wes Dillingham
> wes@wesdillingham.com
> LinkedIn
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-leave@ceph.io