That's right, radosgw doesn't do accounting per storage class. All you
have to go on is the rados-level pool stats for those storage classes.
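For example, something like this (pool names below are just placeholders; check your own zone config for the actual data_pool behind each storage class):

    # which rados pool backs each storage class in this zone's placement config
    radosgw-admin zone get | jq '.placement_pools[].val.storage_classes'

    # then read the rados-level usage for that pool, e.g. a hypothetical
    # "cold" data pool:
    ceph df detail | grep default.rgw.buckets.data-cold

Keep in mind those numbers are aggregated across every bucket using that
storage class, so there is no per-bucket breakdown.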
On Mon, Sep 7, 2020 at 7:05 AM Tobias Urdin <tobias.urdin(a)binero.com> wrote:
Hello,
Anybody have any feedback or ways they have resolved this issue?
Best regards
________________________________
From: Tobias Urdin <tobias.urdin(a)binero.com>
Sent: Wednesday, August 26, 2020 3:01:49 PM
To: ceph-users(a)ceph.io
Subject: [ceph-users] Storage class usage stats
Hello,
I've been trying to understand whether there is any way to get usage information per
storage class for buckets.
Since there is no such information available from the "radosgw-admin bucket stats"
command or any other endpoint, I
tried browsing the source code, but couldn't find any references where the storage
class would be exposed in such a way.
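For example, the "usage" section in the bucket stats output only seems to be broken
down per RGW section and not per storage class, roughly like this (output trimmed,
bucket name is just an example):

    radosgw-admin bucket stats --bucket=my-bucket
    ...
    "usage": {
        "rgw.main": {
            "size": 1024,
            "num_objects": 3
        }
    },
    ...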
It also seems that RadosGW today does not save any counters on the number of objects
stored per storage class when it
collects usage stats, which means there is no such metadata saved for a bucket.
I was hoping it was at least saved but not exposed, because that would have been an
easier fix than adding support to count the number of objects per storage class based on
operations, which would touch a lot of places and mean writing to the bucket metadata on
each op :(
Is my assumption correct that there is no way to retrieve such information, meaning
there is no way to measure such usage?
If the answer is yes, I assume the only way to get something that could be measured would
be to instead use multiple placement
targets, since that is exposed in the bucket info. The downside, though, is that
you lose a lot of functionality related to lifecycle transitions
and moving a single object to another storage class.
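For example, if I understand the docs correctly, a bucket could be created against a
dedicated placement target via the S3 LocationConstraint extension (placement target
name below is just an example, and I would have to double-check the exact syntax):

    # LocationConstraint is "<zonegroup api name>:<placement target id>"
    aws --endpoint-url https://rgw.example.com s3api create-bucket \
        --bucket cold-data \
        --create-bucket-configuration LocationConstraint=default:cold-placement

and then the usage per placement target would at least be visible through its own pools.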
Best regards
Tobias
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io