Yes, I have tried it and had no problem. On a 5-node cluster I run 2x rgw, and sometimes for debugging I add a new rgw and delete it
after the test. I'm using Nautilus and have never tried it on Pacific, but it should
work the same way. Try it in a test environment first.
On Fri, Apr 23, 2021 at 04:51, Szabo, Istvan (Agoda) <
Istvan.Szabo(a)agoda.com> wrote:
Have you ever tried this? Did it work for you?
Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo(a)agoda.com
---------------------------------------------------
On Apr 22, 2021, at 18:30, by morphin <morphinwithyou(a)gmail.com> wrote:
Hello.
It's easy. In ceph.conf, copy the rgw section and change three things:
1. name
2. log path
3. client port
After that, feel free to start the rgw service with systemctl. Check the service
status and tail the rgw log file. Try a read or write and check the logs.
If everything works as expected, you are ready to add the new service
to the load balancer, if you have one.
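As a sketch of the steps above (the host name, instance names, ports, and log paths here are assumptions for illustration, not taken from the thread), the duplicated section in ceph.conf might look like:

```ini
# Existing radosgw instance (assumed names/ports for illustration)
[client.rgw.s101.a]
host = s101
rgw frontends = beast port=7480
log file = /var/log/ceph/ceph-rgw-s101.a.log

# Second instance on the same host: new name, new log path, new port
[client.rgw.s101.b]
host = s101
rgw frontends = beast port=7481
log file = /var/log/ceph/ceph-rgw-s101.b.log
```

Then, under the same assumed naming, start it with something like `systemctl start ceph-radosgw@rgw.s101.b` and watch the log with `tail -f /var/log/ceph/ceph-rgw-s101.b.log`.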
On Thu, Apr 22, 2021 at 14:00, ivan(a)z1storage.com <ivan(a)z1storage.com>
wrote:
Does anyone know how to create more than 1 rgw per host? Surely it's not
a rare configuration.
On 2021/04/19 17:09, ivan(a)z1storage.com wrote:
Hi Sebastian,
Thank you. Is there a way to create more than 1 rgw per host until
this new feature is released?
On 2021/04/19 11:39, Sebastian Wagner wrote:
Hi Ivan,
this is a feature that is not yet released in Pacific. It seems the
documentation is a bit ahead of time right now.
Sebastian
On Fri, Apr 16, 2021 at 10:58 PM ivan(a)z1storage.com
<ivan(a)z1storage.com> wrote:
Hello,
According to the documentation, 'ceph orch' accepts a count-per-host key,
but it does not work for me:

:~# ceph orch apply rgw z1 sa-1 --placement='label:rgw count-per-host:2' --port=8000 --dry-run
Error EINVAL: Host and label are mutually exclusive

Why does it mention Host at all if I don't specify any hosts,
just labels?
~# ceph orch host ls
HOST ADDR LABELS STATUS
s101 s101 mon rgw
s102 s102 mgr mon rgw
s103 s103 mon rgw
s104 s104 mgr mon rgw
s105 s105 mgr mon rgw
s106 s106 mon rgw
s107 s107 mon rgw
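For reference, in cephadm releases where this feature has landed, count-per-host is usually expressed in a service spec file rather than on the command line; a sketch under that assumption (the service_id and port are taken from the command above, the exact spec keys may differ by release):

```yaml
service_type: rgw
service_id: z1.sa-1
placement:
  label: rgw
  count_per_host: 2
spec:
  rgw_frontend_port: 8000
```

This would then be applied with something like `ceph orch apply -i rgw-spec.yaml --dry-run`.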
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io