Hello, Janne,
Janne Johansson wrote:
Den mån 29 jan. 2024 kl 08:11 skrev Jan Kasprzak
<kas(a)fi.muni.cz>:
Is it possible to install a new radosgw instance manually?
If so, how can I do it?
We are doing it, and I found the same docs issue recently, so Zac
pushed me to provide a skeleton (at least) for such a page. I have
recently done a manual Quincy cluster install with RGWs, so I will
condense what I did into something that can be used for the docs later on
(I'll leave it to Zac to format and merge).
Really short version for you:
Install radosgw debs/rpms on the rgw box(es)
On one of the mons or a box with admin ceph auth run
ceph auth get-or-create client.short-hostname-of-rgw mon 'allow rw'
osd 'allow rwx'
OK, I was looking for something like ... mon 'allow profile radosgw',
which can be set, but does not work. This was my main problem, apparently;
mon 'allow rw' works.
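For anyone who already created the rgw user with the non-working profile caps, a small sketch of replacing them in place (the client name below assumes the short-hostname convention from the steps above):

```shell
# 'ceph auth caps' replaces ALL caps for the entity, so both the
# mon and osd caps must be given again in the working form.
ceph auth caps client.$(hostname -s) mon 'allow rw' osd 'allow rwx'
# Verify what the entity now has:
ceph auth get client.$(hostname -s)
```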
On each of the rgw box(es)
create a ceph-user owned dir, for instance like this
install -d -o ceph -g ceph /var/lib/ceph/radosgw/ceph-$(hostname -s)
inside this dir, put the key (or the first two lines of it) you got
from the above ceph auth get-or-create
vi /var/lib/ceph/radosgw/ceph-$(hostname -s)/keyring
Figure out what URL rgw should answer on, and all the related config
parts, but that would be the same
for manual and cephadm/orchestrated installs.
and now you should be able to start the service with
systemctl start ceph-radosgw@$(hostname -s).service
Works for me, thanks.
The last part may or may not act up a bit, due to two things. One is
that the service may have tried starting lots of times after the deb/rpm got
installed, but long before you added the usable key for it, so a
slight boxing match with systemd might be in order: stop the
service, reset-failed on it, and then restart it. (Also
check that it is enabled, so it starts on the next boot.)
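The systemd cleanup described above might look like this (unit name derived from the short hostname, as in the earlier steps):

```shell
UNIT=ceph-radosgw@$(hostname -s).service
systemctl stop "$UNIT"          # stop any half-started instance
systemctl reset-failed "$UNIT"  # clear the start-limit/failed state
systemctl enable --now "$UNIT"  # enable at boot and start immediately
systemctl status "$UNIT"        # confirm it is active (running)
```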
No problem with that on my systems.
Secondly, I also tend to run into this issue* where rgw (and other
parts of ceph!) can't create pools if they don't specify PG numbers,
which rgw no longer does. If you get this error, you end up
having to create all the pools manually yourself (from a mon/admin
host or from the rgw box, but doing it from the rgw requires a lot more
specifying of usernames and keyfile locations than on the default admin-key
hosts).
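If you hit that bug, one possible sketch for pre-creating the default-zone pools by hand (the pool names match the list further down in this mail; the PG count of 32 is only an illustrative choice, assuming the autoscaler adjusts it later):

```shell
for pool in .rgw.root default.rgw.log default.rgw.control default.rgw.meta; do
    ceph osd pool create "$pool" 32           # explicit pg_num works around the bug
    ceph osd pool application enable "$pool" rgw
done
```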
*)
https://tracker.ceph.com/issues/62770
This ticket has a VERY SIMPLE method of testing whether a ceph version
has this problem or not: just
run "ceph osd pool create some-name" and see whether it fails unless
you add a number after the name.
The help is quite clear that all other parameters are meant to be optional:
osd pool create <pool> [<pg_num:int>] [<pgp_num:int>]
[<pool_type:replicated|erasure>] [<erasure_code_profile>] [<rule>]
[<expected_num_objects:int>] [<size:int>] [<pg_num_min:int>]
[<pg_num_max:int>] [<autoscale_mode:on|off|warn>] [--bulk]
[<target_size_bytes:int>] [<target_size_ratio:float>] : create pool
Also OK on my system.
If there is (planned) documentation of manual rgw bootstrapping,
it would be nice to also have the names of the required pools listed there.
So, thanks for a helpful reply! To sum it up, the following
worked for me (on an AlmaLinux 9 host with the client.admin keyring and Ceph Reef):
====================================================================
RGWNAME=`hostname -s`
echo "RGW name is $RGWNAME"
cat >> /etc/ceph/ceph.conf <<EOF
[client.$RGWNAME]
rgw_frontends = beast port=8088
EOF
# modify the configuration to suit your needs, for example enable SSL with
# rgw_frontends = beast ssl_port=4443
#     ssl_certificate=/etc/pki/tls/certs/$RGWNAME.crt+bundle
#     ssl_private_key=/etc/pki/tls/private/$RGWNAME.key
mkdir /var/lib/ceph/radosgw/ceph-$RGWNAME
ceph auth get-or-create client.$RGWNAME mon 'allow rw' osd 'allow rwx' \
	-o /var/lib/ceph/radosgw/ceph-$RGWNAME/keyring
chown -R ceph:ceph /var/lib/ceph/radosgw/ceph-$RGWNAME
systemctl enable --now ceph-radosgw@$RGWNAME
# verify it works
curl http://127.0.0.1:8088
ceph osd pool ls
# should print at least the following pools:
# .rgw.root
# default.rgw.log
# default.rgw.control
# default.rgw.meta
# ... and maybe also these, after some buckets are created:
# default.rgw.buckets.index
# default.rgw.buckets.non-ec
# default.rgw.buckets.data
====================================================================
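As a further smoke test beyond curl, one could create a throw-away S3 user; the uid and display name below are, of course, just examples:

```shell
# Creates a user and prints JSON containing an access_key/secret_key
# pair usable with any S3 client pointed at the rgw endpoint above.
radosgw-admin user create --uid=testuser --display-name="Test User"
radosgw-admin user info --uid=testuser
```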
-Yenya
--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| https://www.fi.muni.cz/~kas/                        GPG: 4096R/A45477D5 |
We all agree on the necessity of compromise. We just can't agree on
when it's necessary to compromise. --Larry Wall