Hi,
In the documentation at
https://docs.ceph.com/docs/nautilus/rbd/iscsi-target-cli/ it is stated
that you need at least CentOS 7.5 with at least kernel 4.16, and that
tcmu-runner and ceph-iscsi should be installed "from your Linux
distribution's software repository".
However, CentOS provides packages for neither tcmu-runner nor ceph-iscsi.
Where do I get these RPMs from?
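For reference, the install step I read the docs as describing (package
names taken from the documentation) is simply:

yum install tcmu-runner ceph-iscsi

but, as mentioned above, stock CentOS has no such packages.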
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Mandatory information per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin
Good day,
We have a Ceph cluster that provides object storage and integrates
with OpenStack. Each OpenStack project/tenant is given a radosgw user,
which allows all Keystone users of that project to access the object
storage as that single radosgw user. The radosgw user name is the
project id of the OpenStack project/tenant.
Sometimes we have use cases where we want to access the object storage
outside of the Swift API, using tools like the aws-cli or home-grown
Java applications. For those cases we generate an S3 access/secret key
pair for the project's radosgw user, which then has full access to the
object storage of that OpenStack project/tenant.
What we want to know is whether it is possible to provide granular
access to containers within a single OpenStack project using S3 access
keys or S3 subusers. I know that the Swift API has ACLs that can limit
access by Keystone user, but we are exploring the possibility of doing
this with S3 and S3 bucket policies, so that the tools our team is
developing (open source) are more transferable between AWS S3 and
RADOS Gateway.
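To make the question concrete, the kind of setup we are imagining looks
roughly like the following (names, the endpoint and the policy are made
up, and we have not verified that RGW accepts a subuser as a
bucket-policy principal):

# additional S3 credentials under the project's radosgw user
radosgw-admin subuser create --uid=<project-id> \
    --subuser=<project-id>:readonly \
    --key-type=s3 --gen-access-key --gen-secret

# restrict that identity to a single bucket via a bucket policy,
# applied with the aws-cli against the RGW endpoint
aws --endpoint-url https://rgw.example.com s3api put-bucket-policy \
    --bucket project-logs --policy file://policy.json

where policy.json would contain something like:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/<project-id>:readonly"]},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::project-logs",
                 "arn:aws:s3:::project-logs/*"]
  }]
}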
Thanks all,
Jared Baker
Cloud Architect, OICR
Hi,
On my Ceph OSD servers I am seeing a lot of "out of memory" messages.
The servers are configured with:
- 32 GB of memory
- 11 HDDs of 3.5 TB each (plus 2 HDDs for the system)
And the error messages are:
[101292.017968] Out of memory: Kill process 2597 (ceph-osd) score 102 or sacrifice child
[101292.018836] Killed process 2597 (ceph-osd) total-vm:5048008kB, anon-rss:3002648kB, file-rss:0kB, shmem-rss:0kB
Top result:
  PID USER PR NI    VIRT    RES SHR S %CPU %MEM    TIME+ COMMAND
75469 ceph 20  0 4982324 3,988g   0 S  0,7 12,7  3:45.05 ceph-osd
75499 ceph 20  0 5095896 3,710g   0 S  0,3 11,8  4:08.93 ceph-osd
65848 ceph 20  0 5713748 3,329g   0 S  0,0 10,6 68:49.12 ceph-osd
67237 ceph 20  0 5580720 3,155g   0 S  0,0 10,1 57:15.99 ceph-osd
71113 ceph 20  0 5557608 3,101g   0 S  0,3  9,9 36:47.69 ceph-osd
74745 ceph 20  0 5117212 3,062g   0 S  3,7  9,8  8:13.56 ceph-osd
72494 ceph 20  0 5621156 2,828g   0 S  0,3  9,0 27:19.97 ceph-osd
70954 ceph 20  0 5765016 2,571g   0 S  0,3  8,2 40:02.00 ceph-osd
74817 ceph 20  0 5139328 2,510g   0 S  0,3  8,0  7:24.33 ceph-osd
76523 ceph 20  0 3324820 2,422g   0 S  0,3  7,7  0:55.54 ceph-osd
Is there a way to limit or reduce the memory usage of each OSD daemon?
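The only knob I have found so far is osd_memory_target, e.g. in
ceph.conf (I have not tried it yet, and as far as I understand it only
applies to BlueStore OSDs on recent enough releases):

[osd]
# aim for roughly 2 GB of RAM per OSD daemon (value in bytes)
osd_memory_target = 2147483648

Is that the right parameter to tune, or is there a better approach?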
Thank you,
Regards,
Sylvain PORTIER.
I'd like to test how reweighting an OSD will change how the PGs map in the
cluster.
I suspect that I'd dump the CRUSH map and the PGs I'm interested in,
then use osdmaptool. What I don't understand is how to use osdmaptool
to set the reweight and then query a PG, or the whole set of PGs, that
I'm interested in. I also suspect that, if I'm happy with the new map,
I could inject it into the cluster instead of having to run reweight on
the OSD(s).
This is a Jewel cluster, and I'm trying to calculate OSD usage offline
and then inject a map that is more evenly distributed, rather than
doing a reweight, waiting for the PG moves (which take a long time),
and then rinsing and repeating over and over again.
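The rough workflow I have pieced together so far (untested, and it
changes CRUSH weights rather than the reweight override, so I may well
be missing the right osdmaptool option) is:

# grab the current osdmap and extract its CRUSH map
ceph osd getmap -o osdmap.bin
osdmaptool osdmap.bin --export-crush crush.bin
crushtool -d crush.bin -o crush.txt

# edit the weights in crush.txt, recompile, put it back into the osdmap
crushtool -c crush.txt -o crush.new
osdmaptool osdmap.bin --import-crush crush.new

# check how PGs would map with the modified map
# (the pool id and pg id here are just examples)
osdmaptool osdmap.bin --test-map-pgs --pool 1
osdmaptool osdmap.bin --test-map-pg 1.2f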
Thanks,
Robert LeBlanc
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Fri, Sep 6, 2019 at 12:00 PM Wesley Dillingham
<wdillingham(a)godaddy.com> wrote:
>
> the iscsi-gateway.cfg seemingly allows a cephx user other than client.admin to be used; however, the comments in the documentation say specifically to use client.admin.
Hmm, can you point out where this is in the docs? Originally,
tcmu-runner didn't support the ability to change the user id, but that
has been available for about a year now [1].
> Other than pointing the cfg file at the appropriate key/user with "gateway_keyring", and giving that client read caps on the mons and full access to the pool configured for iSCSI, are there any other particular steps / settings / actions needed?
Just use "profile rbd" for your caps to keep it simple.
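Something along these lines should be all you need (the client name,
pool and keyring path are just examples):

ceph auth get-or-create client.iscsi \
    mon 'profile rbd' \
    osd 'profile rbd pool=rbd' \
    -o /etc/ceph/ceph.client.iscsi.keyring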
> It seems prudent not to use client.admin, but I don't want to end up with unstable behavior or an untested setup.
>
> Thanks.
>
> Respectfully,
>
> Wes Dillingham
> wdillingham(a)godaddy.com
> Site Reliability Engineer IV - Platform Storage / Ceph
>
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
[1] https://github.com/open-iscsi/tcmu-runner/commit/c85ccdcfb7f4b17926eda1df89…
--
Jason
the iscsi-gateway.cfg seemingly allows a cephx user other than client.admin to be used; however, the comments in the documentation say specifically to use client.admin.
Other than pointing the cfg file at the appropriate key/user with "gateway_keyring", and giving that client read caps on the mons and full access to the pool configured for iSCSI, are there any other particular steps / settings / actions needed?
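For context, what I have in mind is something along these lines in
iscsi-gateway.cfg (the names and paths are just examples from my
reading of the docs, not a verified config):

[config]
cluster_name = ceph
gateway_keyring = ceph.client.iscsi.keyring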
It seems prudent not to use client.admin, but I don't want to end up with unstable behavior or an untested setup.
Thanks.
Respectfully,
Wes Dillingham
wdillingham(a)godaddy.com
Site Reliability Engineer IV - Platform Storage / Ceph
So, whilst debugging the behaviour in the first thread I created, I needed to create and then destroy pools (to avoid running out of placement groups).
So, I did something like:
ceph osd pool create ec2pool 2048 2048 erasure glasgow-eci-test ec2pool 0
ceph osd pool create ec3pool 2048 2048 erasure glasgow-eci-test2 ec3pool 0
(for two different types of ecpool)
and then removed them with
ceph osd pool rm ec2pool ec2pool --yes-i-really-really-mean-it
ceph osd pool rm ec3pool ec3pool --yes-i-really-really-mean-it
Now, however, something seems to have broken, as if I attempt:
ceph osd pool create ec4pool 2048 2048 erasure glasgow-eci-test3 ec4pool 0
it fails with
Error ENOENT: specified rule ec4pool doesn't exist
(which, of course, it does not; the whole point of that syntax is that Ceph builds the CRUSH rule for me and names it appropriately, and that worked every previous time).
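My understanding is that the shorthand is just equivalent to creating
the rule explicitly first and then referencing it, i.e. roughly
(untested):

ceph osd crush rule create-erasure ec4pool glasgow-eci-test3
ceph osd pool create ec4pool 2048 2048 erasure glasgow-eci-test3 ec4pool 0

so I am surprised it now complains about a rule I never had to create
by hand before.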
ceph health still returns HEALTH_OK.
Any suggestions? I've googled around a bit on this, but I can't seem to find anyone discussing it...
Sam
Hi,
is there any chance the list admins could copy the pipermail archive
from lists.ceph.com over to lists.ceph.io? It seems to contain an awful
lot of messages that are referred to elsewhere by their archive URL,
many (all?) of which now appear to lead to 404s.
Example: google "Set existing pools to use hdd device class only". The
top hit is a link to
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-August/029078.html:
$ curl -IL http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-August/029078.html
HTTP/1.1 301 Moved Permanently
Server: nginx/1.10.3 (Ubuntu)
Date: Thu, 29 Aug 2019 12:48:13 GMT
Content-Type: text/html
Content-Length: 194
Connection: keep-alive
Location: https://lists.ceph.io/pipermail/ceph-users-ceph.com/2018-August/029078.html
Strict-Transport-Security: max-age=31536000
HTTP/1.1 404 Not Found
Server: nginx
Date: Thu, 29 Aug 2019 12:48:14 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 3774
Connection: keep-alive
X-Frame-Options: SAMEORIGIN
Vary: Accept-Language, Cookie
Content-Language: en
Or maybe this is just a redirect rule that needs to be cleverer or more
specific, rather than the apparent catch-all .com/.io redirect?
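For example (purely illustrative, I obviously don't know what the
actual nginx configuration looks like), something that keeps the old
pipermail tree served locally and only redirects the rest:

location /pipermail/ {
    # path is a guess at the usual Mailman 2 public archive location
    alias /var/lib/mailman/archives/public/;
}
location / {
    return 301 https://lists.ceph.io$request_uri;
}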
Cheers,
Florian