Hi, we have a Ceph 17.2.6 cluster with radosgw and a couple of buckets in it.
We use it for backups with object lock directly from Veeam.
After a few backups we got:
HEALTH_WARN 2 large omap objects
[WRN] LARGE_OMAP_OBJECTS: 2 large omap objects
    2 large objects found in pool 'backup.rgw.buckets.index'
    Search the cluster log for 'Large omap object found' for more details.
What is causing this? Could we set a bigger threshold for the omap size and safely ignore this warning, or is there a real issue?
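(For context, by a bigger threshold I mean something along these lines; as far as I understand, the warning is driven by the OSD deep-scrub omap thresholds, and the value below is only an example, not a recommendation.)

# show the current key-count threshold that triggers the warning
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
# raise it, e.g. to 300000 keys (example value only)
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 300000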
Best regards
Ceph version is 16.2.13.
The pg_num is 1024 and the target_pg_num is 32; there is no data in the '.rgw.buckets.index' pool, but it is spending a long time reducing the pg_num.
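(In case it helps, this is roughly how I am watching the merge; the last command is my understanding of the knob that paces how fast pg_num is reduced, so treat it as an assumption rather than a confirmed fix.)

# pg_num should step down gradually from 1024 towards 32
ceph osd pool get .rgw.buckets.index pg_num
ceph osd pool get .rgw.buckets.index pgp_num
# mgr option that, as I understand it, limits how much data may be
# misplaced at once and therefore throttles the merge (default 0.05)
ceph config set mgr target_max_misplaced_ratio 0.10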
Hi Cephers,
In a multisite config, with one zonegroup and 2 zones, when I look at
`radosgw-admin zonegroup get`,
I see these two parameters by default:
"log_meta": "false",
"log_data": "true",
Where can I find documentation on these? I can't find any.
I set log_meta to true, because, why not?
Is it a bad thing?
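(For reference, this is roughly how I set it; the zonegroup name and the file name are just what I used, adapt as needed.)

# dump the zonegroup, edit "log_meta" to "true", then load it back
radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
radosgw-admin zonegroup set --rgw-zonegroup=default --infile zonegroup.json
# commit the period so the change is propagated to both zones
radosgw-admin period update --commit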
Hi,
I'm installing a new instance (my first) of Ceph. Our cluster runs
AlmaLinux 9 + Quincy. Now I'm dealing with CephFS and quotas. I read the
documentation about setting up quotas with virtual attributes (xattrs) and
about creating volumes and subvolumes with a predefined size. I cannot tell
which is the best option for us.
Currently we create a directory with a project name and some subdirectories
inside.
I would like to understand the difference between both options.
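(To make the question concrete, these are the two approaches I am comparing; paths, names and sizes are just examples from my tests.)

# option A: plain project directory with an xattr quota (here ~1 TB)
setfattr -n ceph.quota.max_bytes -v 1000000000000 /mnt/cephfs/projects/myproject
getfattr -n ceph.quota.max_bytes /mnt/cephfs/projects/myproject

# option B: a managed subvolume created with a size (quota handled for me)
ceph fs subvolume create cephfs myproject --size 1000000000000
ceph fs subvolume getpath cephfs myproject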
Thanks in advance.
--
Dario Graña
PIC (Port d'Informació Científica)
Campus UAB, Edificio D
E-08193 Bellaterra, Barcelona
http://www.pic.es
Avis - Aviso - Legal Notice: http://legal.ifae.es
Dear Ceph folks,
In a Ceph cluster there can be multiple points (e.g. librbd clients) able to execute rbd commands. My question is: is there a method to reliably record or keep a full history of every rbd command that has ever been executed? This would be helpful for auditors as well as for system operators.
Any ideas?
Samuel
huxiaoyu(a)horebdata.cn
I'm in the process of exploring if it is worthwhile to add RadosGW to
our existing ceph cluster. We've had a few internal requests for
exposing the S3 API for some of our business units; right now we just
use the ceph cluster for VM disk image storage via RBD.
Everything looks pretty straightforward until we hit multitenancy. The
page on multi-tenancy doesn't dive into permission delegation:
https://docs.ceph.com/en/quincy/radosgw/multitenancy/
The end goal I want is to be able to create a single user per tenant
(business unit) which will act as their 'administrator', where they can
then do basically whatever they want inside their tenant sandbox (though
I don't think we need more advanced cases like creation of roles or
policies, just creating/deleting their own users, buckets, and objects). I was
hopeful this would just work, but when I asked on the ceph IRC channel on
OFTC I was told that once I grant a user caps="users=*", they would be
allowed to create users *outside* of their own tenant using the Rados
Admin API, and that I should explore IAM roles.
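For reference, this is the kind of per-tenant admin I mean (tenant and user names are made up, and the caps line is exactly the kind of grant I was warned lets them escape the sandbox):

# create the tenant's "administrator" inside its own tenant namespace
radosgw-admin user create --tenant acme --uid admin --display-name "Acme Admin"
# grant user administration caps; the tenant$uid form needs single quotes
# so the shell does not expand $admin
radosgw-admin caps add --uid 'acme$admin' --caps "users=*"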
I think it would make sense to add a feature, such as a flag that can be
set on a user, to ensure they stay in their "sandbox". I'd assume this
is probably a common use-case.
Anyhow, if it's possible to do today using IAM roles/policies, then
great; unfortunately this is my first time looking at this stuff and
there are some things that aren't immediately obvious.
I saw this online about AWS itself and creating a permissions boundary,
but that's for allowing creation of roles within a boundary:
https://www.qloudx.com/delegate-aws-iam-user-and-role-creation-without-givi…
I'm not sure which "Action" is associated with the Rados Admin API's
create-user call for applying a boundary so that the user can only create
users under their own tenant name.
https://docs.ceph.com/en/quincy/radosgw/adminops/#create-user
Any guidance on this would be extremely helpful.
Thanks!
-Brad
Hi,
I'm having an issue with crash daemons on Pacific 16.2.13 hosts. ceph-crash
throws the following error on all hosts:
ERROR:ceph-crash:directory /var/lib/ceph/crash/posted does not exist;
please create
ERROR:ceph-crash:directory /var/lib/ceph/crash/posted does not exist;
please create
ERROR:ceph-crash:directory /var/lib/ceph/crash/posted does not exist;
please create
ceph-crash runs in Docker; the container has the directory mounted: -v
/var/lib/ceph/3f50555a-ae2a-11eb-a2fc-ffde44714d86/crash:/var/lib/ceph/crash:z
The mount works correctly:
18:26 [root@ceph02 /var/lib/ceph/3f50555a-ae2a-11eb-a2fc-ffde44714d86]# ls -al crash/posted/
total 8
drwx------ 2 nobody nogroup 4096 May 6 2021 .
drwx------ 3 nobody nogroup 4096 May 6 2021 ..
18:26 [root@ceph02 /var/lib/ceph/3f50555a-ae2a-11eb-a2fc-ffde44714d86]# touch crash/posted/a
18:26 [root@ceph02 /var/lib/ceph/3f50555a-ae2a-11eb-a2fc-ffde44714d86]# docker exec -it c0cd2b8022d8 bash
[root@ceph02 /]# ls -al /var/lib/ceph/crash/posted/
total 8
drwx------ 2 nobody nobody 4096 Jun 1 18:26 .
drwx------ 3 nobody nobody 4096 May 6 2021 ..
-rw-r--r-- 1 root root 0 Jun 1 18:26 a
I.e. the directory actually exists and is correctly mounted in the crash
container, yet ceph-crash says it doesn't exist. How can I convince it
that the directory is there?
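(One thing I have been trying to rule out, in case it is relevant: whether ceph-crash is simply running as a less-privileged user that cannot traverse those root-only, nobody-owned directories, which could make the path look nonexistent to it. The 'ceph' user below is an assumption about what exists in the image.)

# from the host: which user is ceph-crash actually running as?
docker top c0cd2b8022d8
# repeat the listing as a non-root user inside the container
docker exec -u ceph c0cd2b8022d8 ls -al /var/lib/ceph/crash/posted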
Best regards,
Zakhar
Hi Ceph users,
We have 3 clusters running Pacific 16.2.9, all set up in a multisite configuration with no data replication (we wanted to use per-bucket policies but never got them working to our satisfaction). All of the resharding documentation I've found regarding multisite is centred around multisite with data replication and having to reshard from the primary region and re-replicate the data. But our data is spread amongst the regions and may not be in the primary region.
Testing has shown that resharding from the primary region, in the case of a bucket with data only in a remote region, results in the remote bucket losing its ability to list contents (seemingly breaking the index in the remote region).
Is there a way (besides waiting for reef and dynamic bucket resharding for multisite) to reshard buckets in this setup?
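(For reference, the reshard we tested from the primary region was simply along these lines; the bucket name and shard count are only examples.)

radosgw-admin bucket reshard --bucket=example-bucket --num-shards=101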
Cheers,
Danny
Danny Webb
Principal OpenStack Engineer
Danny.Webb(a)thehutgroup.com
Dear ceph folks,
I bumped into a very interesting challenge: how to securely erase an RBD image's data without any encryption?
The motivation is to ensure that there is no information leak on the OSDs after deleting a user-specified RBD image, without the extra burden of using RBD encryption.
Any ideas or suggestions are highly appreciated.
Samuel
huxiaoyu(a)horebdata.cn