Hello Casey. Thanks again. I hadn't quite understood how objects work in
Ceph, and your explanation made it much clearer.
Thank you, Marcelo
On Thu, Feb 11, 2021 at 1:36 PM Casey Bodley <cbodley(a)redhat.com> wrote:
On Thu, Feb 11, 2021 at 9:31 AM Marcelo <raxidex(a)gmail.com> wrote:
Hi Casey, thank you for the reply.
I was wondering: since the placement target is kept in the bucket metadata
in the index, wouldn't it be possible to keep the storage-class
information in the index entry for each object as well? Or did I get it
wrong, and there is no object metadata in the index at all, just a
listing of the objects?
the bucket index is for bucket listing, so each entry in the index
stores enough metadata (mtime, etag, size, etc.) to satisfy the
s3/swift bucket listing APIs. this does include the storage class for
each object.
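As a hedged illustration of what those index entries look like, they can be dumped directly with radosgw-admin (the bucket name "mybucket" here is a placeholder):

```shell
# Dump the raw bucket index entries; each entry carries the listing
# metadata (mtime, etag, size, storage class) that backs bucket listing.
radosgw-admin bi list --bucket=mybucket
```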
but GetObject requests don't read from the bucket index; they just
look for a 'head object' with the object's name.
for objects in the default storage class, we also store the first
chunk (4M) of data in the head object - so a GetObject request can
satisfy small object reads in a single round trip.
for objects in non-default storage classes, we need one level of
indirection to locate the data. we *could* potentially go through the
bucket index for this, but the index itself is optional (see indexless
buckets) and has a looser consistency model than the head object,
which we can write atomically when an upload finishes
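As a hedged illustration (pool and object names taken from the output further down in this thread), the head object's attributes can be listed directly with rados; among them is the manifest that records where the object's data actually lives:

```shell
# List the RADOS xattrs on the head object in the default storage-class
# pool; the manifest xattr is what lets GetObject locate the tail data,
# e.g. in a non-default storage class pool.
rados -p default.rgw.buckets.data listxattr \
    d86dade5-d401-427b-870a-0670ec3ecb65.385198.4_LICENSE
```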
Thanks again, Marcelo.
On Wed, Feb 10, 2021 at 11:43 AM Casey Bodley <cbodley(a)redhat.com> wrote:
> On Wed, Feb 10, 2021 at 8:31 AM Marcelo <raxidex(a)gmail.com> wrote:
> >
> > Hello all!
> >
> > We have a cluster where there are HDDs for data and NVMes for journals
> > and indexes. We recently added pure-SSD hosts and created a storage
> > class SSD. To do this, we created a default.rgw.hot.data pool,
> > associated a crush rule using SSD, and created a HOT storage class in
> > the placement target. The problem is that when we upload an object
> > using the HOT storage class, it shows up in both the STANDARD storage
> > class pool and the HOT pool.
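The setup described above can be sketched with radosgw-admin; this is a hedged sketch that assumes the default zonegroup/zone and the default-placement target (adjust names to the actual deployment):

```shell
# Add a HOT storage class to the default placement target in the
# zonegroup, then bind it to the SSD-backed data pool in the zone.
radosgw-admin zonegroup placement add \
    --rgw-zonegroup default \
    --placement-id default-placement \
    --storage-class HOT
radosgw-admin zone placement add \
    --rgw-zone default \
    --placement-id default-placement \
    --storage-class HOT \
    --data-pool default.rgw.hot.data
```

Clients then select the class per upload, e.g. via the S3 `x-amz-storage-class: HOT` request header.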
> >
> > STANDARD pool:
> > # rados -p default.rgw.buckets.data ls
> > d86dade5-d401-427b-870a-0670ec3ecb65.385198.4_LICENSE
> >
> > # rados -p default.rgw.buckets.data stat d86dade5-d401-427b-870a-0670ec3ecb65.385198.4_LICENSE
> > default.rgw.buckets.data/d86dade5-d401-427b-870a-0670ec3ecb65.385198.4_LICENSE mtime 2021-02-09 14:54:14.000000, size 0
> >
> >
> > HOT pool:
> > # rados -p default.rgw.hot.data ls
> > d86dade5-d401-427b-870a-0670ec3ecb65.385198.4__shadow_.rmpla1NTgArcUQdSLpW4qEgTDlbhn9f_0
> >
> > # rados -p default.rgw.hot.data stat d86dade5-d401-427b-870a-0670ec3ecb65.385198.4__shadow_.rmpla1NTgArcUQdSLpW4qEgTDlbhn9f_0
> > default.rgw.hot.data/d86dade5-d401-427b-870a-0670ec3ecb65.385198.4__shadow_.rmpla1NTgArcUQdSLpW4qEgTDlbhn9f_0 mtime 2021-02-09 14:54:14.000000, size 15220
> >
> > The object itself is in the HOT pool, but it also creates this other
> > object, similar to an index, in the STANDARD pool. Monitoring with
> > iostat, we noticed that this behavior generates unnecessary I/O on
> > disks that should not need to be touched.
> >
> > Why this behavior? Are there any ways around it?
> >
> this object in the STANDARD pool is called the 'head object', and it
> holds the s3 object's metadata - including an attribute that says
> which storage class the object's data is in.
>
> when an S3 client downloads the object with a 'GET /bucket/LICENSE'
> request, it doesn't specify the storage class, so radosgw has to find
> its head object in a known location (the bucket's default storage
> class pool) in order to figure out which pool holds the object's data.
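To see this from the client side, a hedged sketch with the AWS CLI (endpoint, bucket, and credentials are placeholders): HeadObject returns the object's StorageClass even though the request never names it, because radosgw resolves it from the head object's metadata:

```shell
# HeadObject takes no storage class; radosgw reads it from the head
# object and reports it back in the response.
aws --endpoint-url http://rgw.example.com s3api head-object \
    --bucket mybucket --key LICENSE
```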
> >
> >
> > Thanks, Marcelo
> > _______________________________________________
> > ceph-users mailing list -- ceph-users(a)ceph.io
> > To unsubscribe send an email to ceph-users-leave(a)ceph.io