Hello.
I'm trying to monitor per-user bucket usage, and for that I need to list
and count the number of buckets each user has. Is it possible to get this
information somewhere?
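Something like the following is what I have in mind (just a sketch on my
side, assuming admin access to radosgw-admin and jq installed; untested):

# List every RGW user and count the buckets each one owns.
# Both commands print JSON arrays, so jq can iterate and count.
for u in $(radosgw-admin metadata list user | jq -r '.[]'); do
  n=$(radosgw-admin bucket list --uid="$u" | jq 'length')
  echo "$u: $n buckets"
done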
Thanks, Marcelo
Hi,
I have Ceph 15.2.4 running in Docker. How do I configure it to use a
specific data pool? I tried putting the following lines in ceph.conf, but
the change is not taking effect:
[client.myclient]
rbd default data pool = Mydatapool
I need this in order to use an erasure-coded pool with CloudStack.
Can anyone help me? Where is the ceph.conf that I need to edit?
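Would something like this be the equivalent for a containerized cluster?
(a guess on my part, reusing the client.myclient / Mydatapool names from
my ceph.conf attempt above; untested):

# Set the option in the cluster's central config database instead of
# a local ceph.conf, then read it back to verify.
ceph config set client.myclient rbd_default_data_pool Mydatapool
ceph config get client.myclient rbd_default_data_pool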
Thanks.
Hi
Thanks for the reply.
cephadm starts the Ceph containers automatically. How can I set
privileged mode on a container that cephadm manages?
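Would editing the generated unit files be the way? Something like this is
what I imagine (a guess only; <fsid> and nfs.foo are placeholders for my
cluster id and the ganesha daemon name, and I have not tried it):

# Add --privileged to the podman/docker invocation in the daemon's
# unit.run, then restart the matching systemd unit.
vi /var/lib/ceph/<fsid>/nfs.foo/unit.run   # append --privileged to the run line
systemctl restart ceph-<fsid>@nfs.foo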
> On 23/9/20 at 13:24, Daniel Gryniewicz wrote:
>> NFSv3 needs privileges to connect to the portmapper. Try running
>> your docker container in privileged mode, and see if that helps.
>>
>> Daniel
>>
>> On 9/23/20 11:42 AM, Gabriel Medve wrote:
>>> Hi,
>>>
>>> I have Ceph 15.2.5 running in Docker. I configured NFS Ganesha
>>> with NFS version 3, but I cannot mount it.
>>> If I configure Ganesha with NFS version 4, I can mount without
>>> problems, but I need version 3.
>>>
>>> The error is mount.nfs: Protocol not supported
>>>
>>> Can anyone help me?
>>>
>>> Thanks.
>>>
Hi all,
I get these log messages all the time, sometimes also directly to the terminal:
kernel: ceph: mdsmap_decode got incorrect state(up:standby-replay)
The cluster is healthy, and the MDS being complained about is indeed both
configured and running as a standby-replay daemon. These messages show up
at least every hour, but sometimes with much higher frequency.
A google search did not bring up anything useful.
Can anyone shed some light on what this message means?
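For completeness, this is roughly how the standby-replay daemon is set up
on our side (the file system name cephfs is a placeholder here):

# Enable standby-replay for the file system and check the daemon states.
ceph fs set cephfs allow_standby_replay true
ceph fs status cephfs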
Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
Hi.
I'm a newbie with CephFS and I have some questions about how per-MDS
journals work.
In Sage's paper (OSDI '06), I read that each MDS has its own journal and
lazily flushes metadata modifications to the OSD cluster.
What I'm wondering is that some directory operations, like rename, touch
multiple pieces of metadata that may live on two or more MDSs and their
journals, so I think some mechanism is needed to construct a transaction
spanning multiple journals, similar to distributed transaction mechanisms.
Could anybody explain how per-MDS journals work in such directory
operations, or recommend some references about it?
Thanks.
kyujin.
Hello,
We have a functional Ceph setup with a pair of S3 RGW instances in front
that is accessed via the domain A.B.C.D.
A new client now asks for access to the already existing buckets, but
using the domain E.C.D. This scenario is not discussed in the docs.
From looking at the code and from trying it, RGW apparently does not
support multiple domains in the rgw_dns_name variable.
But reading through parts of the code (I am no dev, and my C++ is 25
years rusty), I get the impression that maybe we could just add a second
pair of RGW S3 servers that would serve the same buckets, but under a
different domain.
Am I wrong? Assuming this works, is it unintended behaviour that the
Ceph team might remove down the road?
Is there another solution that I might have missed? We do not have
multi-zone and there are no plans for it. And CNAME (rgw_resolve_cname)
seems to be of use only with static sites (again, judging from my poor
code-reading abilities).
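Concretely, what I have in mind is something like this (a sketch only,
untested; client.rgw.a / client.rgw.b are placeholders for the instance
names):

# The existing pair keeps answering for the current domain ...
ceph config set client.rgw.a rgw_dns_name A.B.C.D
# ... and a hypothetical second pair would answer for the new one.
ceph config set client.rgw.b rgw_dns_name E.C.D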
Thank you
Hello -
We're using libRADOS directly for communication between our services.
Some of its features are faster and richer for our use cases than an S3
gateway.
But we do want to leverage the ES (Elasticsearch) metadata search, and it
appears that the metadata search is built on the object gateway.
The question is: do files which are written directly to an OSD get
replicated through the gateway, or is it only files written through the
gateway that get replicated?
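To illustrate the two write paths I mean (mypool, myobject and the s3cmd
target are placeholders):

# Direct write with librados tooling, bypassing the gateway:
rados -p mypool put myobject ./file
# Write through the S3 gateway:
s3cmd put ./file s3://mybucket/myobject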
Thanks.
Cary