Hello Eugen,
Below is the output:

`ceph osd df`:

[image attachment: ceph osd df output]

`ceph osd tree`:

[image attachment: ceph osd tree output]

On Thu, 28 Nov 2019 at 14:54, Eugen Block <eblock@nde.ag> wrote:
Hi,

can you share the output of `ceph osd df` and `ceph osd tree`?
The smallest of your OSDs will be the bottleneck. Since Ceph tries to 
distribute the data evenly across all OSDs, you won't be able to use the 
full capacity of the large OSDs, at least not without adjusting your setup.
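If you really want a pool to live only on the big OSDs, one option (just a 
sketch, not tested against your cluster; the OSD IDs and the class name are 
placeholders) is to give them their own device class and a dedicated CRUSH rule:

  ceph osd crush rm-device-class osd.2 osd.3        # clear the auto-assigned class
  ceph osd crush set-device-class big osd.2 osd.3   # "big" is a made-up class name
  ceph osd crush rule create-replicated big-only default host big
  ceph osd pool set cephfs_data crush_rule big-only

Keep in mind that with only two large OSDs, a replicated pool with size 3 and 
a host failure domain could not place all replicas under such a rule.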

Regards,
Eugen


Zitat von Alokkumar Mahajan <alokkumar.mahajan@gmail.com>:

> Thanks Wido.
> @ "But normally CephFS will be able to use all the space inside your Ceph
> cluster."
> So you are saying that even if I see the size for the CephFS pools as 55 GB,
> they can still use the whole 600 GB (or whatever disk space is available) in
> the cluster?
>
> This is what I have with pg_num = 150 (for data) and 32 (for metadata) in my
> cluster.
>
> Pool         Type      Size         Usage
> cephfs_data  data      55.5366 GiB  4%
> cephfs_meta  metadata  55.7469 GiB
>
> Thanks
>
>
> On Thu, 28 Nov 2019 at 13:49, Wido den Hollander <wido@42on.com> wrote:
>
>>
>>
>> On 11/28/19 6:41 AM, Alokkumar Mahajan wrote:
>> > Hello,
>> > I am new to Ceph and am currently working on setting up a CephFS and RBD
>> > environment. I have successfully set up a Ceph cluster with 4 OSDs (2
>> > OSDs of 50 GB and 2 OSDs of 300 GB).
>> >
>> > But while setting up CephFS, the size I see allocated for the CephFS data
>> > and metadata pools is 55 GB, whereas I want to have 300 GB assigned to
>> > CephFS.
>> >
>> > I tried using the "target_size_bytes" flag while creating the pool, but it
>> > does not work (it says "invalid command"). Same result when I use
>> > target_size_bytes with `ceph osd pool set` after creating the pool.
>> >
>> > I am not sure if I am doing something silly here.
>> >
>> > Can someone please guide me on this?
>> >
>>
>>
>> You can set quotas on CephFS itself or on the RADOS pool backing the CephFS
>> 'data' (I haven't tried the latter, though).
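>> For example (path and size are placeholders, untested here):
>>
>>   setfattr -n ceph.quota.max_bytes -v 322122547200 /mnt/cephfs/somedir  # 300 GiB quota on a CephFS directory
>>   ceph osd pool set-quota cephfs_data max_bytes 322122547200            # hard quota on the backing data pool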
>>
>> But normally CephFS will be able to use all the space inside your Ceph
>> cluster.
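>>
>> The 55 GB you see is most likely the MAX AVAIL that `ceph df detail` reports
>> for the pool: an estimate derived from the OSD that will fill up first and
>> the pool's replication factor, not a fixed allocation. For example:
>>
>>   ceph df detail                       # POOLS section: MAX AVAIL per pool
>>   ceph osd pool get cephfs_data size   # replication factor of the pool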
>>
>> It's not that you can easily just allocate X GB/TB to CephFS.
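>>
>> If target_size_bytes is what you were after: as far as I know that is only a
>> hint for the pg_autoscaler (Nautilus and newer) so it can pick a sensible
>> pg_num up front; it does not reserve or allocate capacity. Roughly (untested
>> here):
>>
>>   ceph mgr module enable pg_autoscaler
>>   ceph osd pool set cephfs_data target_size_bytes 300G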
>>
>> Wido
>>
>> > Thanks in advance!
>> >
>>


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io