Hi Greg,
Can you please share the API details for COPY_FROM, or any reference
document?
Thanks,
Muthu
On Wed, Jul 3, 2019 at 4:12 AM Brad Hubbard <bhubbard(a)redhat.com> wrote:
Argh! yes, good idea. We really should document that!

On Wed, Jul 3, 2019 at 4:25 AM Gregory Farnum <gfarnum(a)redhat.com> wrote:
I'm not sure how or why you'd get an object class involved in doing
this in the normal course of affairs.
There's a copy_from op that a client can send and which copies an
object from another OSD into the target object. That's probably the
primitive you want to build on. Note that the OSD doesn't do much
consistency checking (it validates that the object version matches an
input, but if they don't it just returns an error) so the client
application is responsible for any locking needed.
-Greg
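
For illustration, here is a rough librados (C++) sketch of driving copy_from
from the client side so the copies are made inside the cluster rather than
re-sent over the network. The pool and object names are invented, and the
exact ObjectWriteOperation::copy_from() signature has varied across Ceph
releases, so check the librados.hpp shipped with your version before relying
on it:

    // Sketch only: create N copies of "object_x" inside the cluster without
    // pushing the payload from the client N times. "video-pool" and the
    // object names are placeholders.
    #include <rados/librados.hpp>
    #include <iostream>
    #include <sstream>

    int main() {
      librados::Rados cluster;
      cluster.init2("client.admin", "ceph", 0);      // user, cluster name, flags
      cluster.conf_read_file("/etc/ceph/ceph.conf");
      if (cluster.connect() < 0) return 1;

      librados::IoCtx io;
      if (cluster.ioctx_create("video-pool", io) < 0) return 1;

      // Pin the copies to the source object's current version, since the OSD
      // only checks the version it is given, as Greg notes above.
      uint64_t size; time_t mtime;
      if (io.stat("object_x", &size, &mtime) < 0) return 1;
      uint64_t src_version = io.get_last_version();

      for (int i = 1; i <= 100; ++i) {
        std::ostringstream dst;
        dst << "object_x" << i;
        librados::ObjectWriteOperation op;
        // (src oid, src ioctx, src version, fadvise flags); verify the exact
        // overload in your release's librados.hpp.
        op.copy_from("object_x", io, src_version, 0);
        if (io.operate(dst.str(), &op) < 0)
          std::cerr << "copy to " << dst.str() << " failed\n";
      }
      cluster.shutdown();
      return 0;
    }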
On Tue, Jul 2, 2019 at 3:49 AM Brad Hubbard <bhubbard(a)redhat.com> wrote:
>
> Yes, this should be possible using an object class which is also a
> RADOS client (via the RADOS API). You'll still have some client
> traffic as the machine running the object class will still need to
> connect to the relevant primary OSD and send the write (presumably in
> some situations though this will be the same machine).
>
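
To make this concrete, below is a minimal, hypothetical object-class skeleton
(the "fanout" class and "make_copies" method names are invented) showing where
such a server-side fan-out could hook in. It only reads the object it was
invoked on; actually writing the copies out under other names would need the
embedded RADOS client described above, since a class method can only modify
its own object, so that part is omitted. A client would invoke it with
IoCtx::exec(oid, "fanout", "make_copies", in, out).

    // Hypothetical skeleton of an object class; not a working fan-out.
    #include "objclass/objclass.h"

    CLS_VER(1,0)
    CLS_NAME(fanout)

    cls_handle_t h_class;
    cls_method_handle_t h_make_copies;

    static int make_copies(cls_method_context_t hctx,
                           ceph::bufferlist *in, ceph::bufferlist *out)
    {
      // Read the object this method was invoked on.
      uint64_t size;
      time_t mtime;
      int r = cls_cxx_stat(hctx, &size, &mtime);
      if (r < 0)
        return r;

      ceph::bufferlist data;
      r = cls_cxx_read(hctx, 0, size, &data);
      if (r < 0)
        return r;

      // 'in' would carry the target names (or a copy count) encoded by the
      // caller. Writing 'data' to those names would happen here, via a RADOS
      // client connection as described above.
      CLS_LOG(5, "make_copies: read %llu bytes", (unsigned long long)size);
      return 0;
    }

    CLS_INIT(fanout)
    {
      cls_register("fanout", &h_class);
      cls_register_cxx_method(h_class, "make_copies", CLS_METHOD_RD,
                              make_copies, &h_make_copies);
    }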
> On Tue, Jul 2, 2019 at 4:08 PM nokia ceph <nokiacephusers(a)gmail.com> wrote:
> >
> > Hi Brett,
> >
> > I think I was wrong here in the requirement description. It is not
> > about data replication; we need the same content stored under
> > different object names.
> > We store video content inside the Ceph cluster, and our new
> > requirement is to store the same content for different users, hence
> > the need for the same content under different object names. If a
> > client sends a write request for object x and sets the number of
> > copies to 100, then the cluster has to clone 100 copies of object x
> > and store them as object x1, object x2, etc. Currently this is done
> > on the client side, where object x1, object x2, ..., object x100 are
> > cloned inside the client and a write request is sent for all 100
> > objects, which we want to avoid in order to reduce network
> > consumption.
> >
> > Similar use cases are RBD snapshot and RadosGW copy.
> >
> > Is this possible with an object class?
> >
> > thanks,
> > Muthu
> >
> >
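
For contrast, the current client-side approach described above boils down to
something like this sketch, where the full payload crosses the network once
per target name (the helper and naming scheme are placeholders):

    #include <rados/librados.hpp>
    #include <sstream>

    // Current client-side fan-out (the approach being avoided): the same
    // payload is written over the wire once per target name.
    void clone_client_side(librados::IoCtx& io, const librados::bufferlist& data,
                           int copies) {
      for (int i = 1; i <= copies; ++i) {
        std::ostringstream oid;
        oid << "object_x" << i;            // placeholder naming scheme
        librados::bufferlist bl = data;    // write_full takes a non-const ref
        io.write_full(oid.str(), bl);      // full payload sent each time
      }
    }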
> > On Mon, Jul 1, 2019 at 7:58 PM Brett Chancellor <bchancellor(a)salesforce.com> wrote:
> >>
> >> Ceph already does this by default. For each replicated pool, you can
> >> set the 'size', which is the number of copies you want Ceph to
> >> maintain. The accepted norm for replicas is 3, but you can set it
> >> higher if you want to incur the performance penalty.
> >>
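
For completeness, the per-pool replica count Brett refers to is set like this
(the pool name is a placeholder):

    ceph osd pool set videos size 3
    ceph osd pool get videos size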
> >> On Mon, Jul 1, 2019, 6:01 AM nokia ceph <nokiacephusers(a)gmail.com> wrote:
> >>>
> >>> Hi Brad,
> >>>
> >>> Thank you for your response, and we will check this video as well.
> >>> Our requirement is that, while writing an object into the cluster,
> >>> if we can provide the number of copies to be made, the network
> >>> consumption between client and cluster will be only for one object
> >>> write; the cluster will then clone/copy the multiple objects and
> >>> store them inside the cluster.
> >>>
> >>> Thanks,
> >>> Muthu
> >>>
> >>> On Fri, Jun 28, 2019 at 9:23 AM Brad Hubbard <bhubbard(a)redhat.com> wrote:
> >>>>
> >>>> On Thu, Jun 27, 2019 at 8:58 PM nokia ceph <nokiacephusers(a)gmail.com> wrote:
> >>>> >
> >>>> > Hi Team,
> >>>> >
> >>>> > We have a requirement to create multiple copies of an object.
> >>>> > Currently we are handling it on the client side by writing them
> >>>> > as separate objects, and this causes huge network traffic between
> >>>> > client and cluster.
> >>>> > Is there a possibility of cloning an object into multiple copies
> >>>> > using the librados API?
> >>>> > Please share the documentation details if it is feasible.
> >>>>
> >>>> It may be possible to use an object class to accomplish what you
> >>>> want to achieve, but the more we understand what you are trying to
> >>>> do, the better the advice we can offer (at the moment your
> >>>> description sounds like replication, which is already part of RADOS
> >>>> as you know).
> >>>>
> >>>> More on object classes from Cephalocon Barcelona in May this year:
> >>>>
> >>>> https://www.youtube.com/watch?v=EVrP9MXiiuU
> >>>>
> >>>> >
> >>>> > Thanks,
> >>>> > Muthu
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Cheers,
> >>>> Brad
> >>>
>
>
>
> --
> Cheers,
> Brad
--
Cheers,
Brad