You can also expand the OSD: ceph-bluestore-tool has a command for growing an
OSD in place. I'm not 100% sure that solves the RocksDB out-of-space issue, but
I think it will. If not, you can move RocksDB to a separate block device.
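For reference, a sketch of that expansion workflow using ceph-bluestore-tool's
`bluefs-bdev-expand` subcommand. It assumes the underlying device or LV has
already been grown, the OSD is stopped first, and the usual systemd unit name
and data path; adjust both for your deployment (Rook containers differ):

```shell
# Stop the OSD before touching its store (systemd unit name assumed).
systemctl stop ceph-osd@0

# Tell BlueFS to expand into the newly available space on the block device.
# --path points at the OSD's data directory (default layout assumed).
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0

# Bring the OSD back up.
systemctl start ceph-osd@0
```

The expand step only makes the extra space visible to BlueFS; growing the
partition or logical volume underneath it has to happen first.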
September 22, 2020 7:31 PM, "George Shuklin" <george.shuklin(a)gmail.com>
wrote:
> As far as I know, BlueStore doesn't like very small devices. Normally the
> OSD should stop accepting writes at the full mark, but if the device is too
> small that may come too late and BlueFS can run out of space.
>
> Two things:
> 1. Don't use OSDs that are too small.
> 2. Keep a spare area on the drive. I usually reserve 1% for emergency
> extension (and to give the SSD firmware a bit of space to breathe).
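One way to implement that 1% reserve, assuming the OSD sits on LVM (the volume
group name `ceph-vg` and LV name `osd-block` here are hypothetical):

```shell
# Carve the OSD logical volume out of 99% of the volume group, leaving
# ~1% of extents unallocated as an emergency extension margin (which also
# gives SSD firmware a little extra over-provisioning headroom).
lvcreate -l 99%VG -n osd-block ceph-vg

# In an emergency, hand the reserve to the LV...
lvextend -l +100%FREE /dev/ceph-vg/osd-block
# ...then grow BlueFS into it while the OSD is stopped:
#   ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-N
```

The point of the reserve is that it can be released without touching any other
volume, so recovery doesn't depend on finding new hardware first.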
>
> On Wed, Sep 23, 2020, 01:03 Ivan Kurnosov <zerkms(a)zerkms.com> wrote:
>
>> Hi,
>>
>> this morning I woke up to a degraded test Ceph cluster (managed by Rook,
>> but that doesn't really change anything for the question I'm about to ask).
>>
>> After checking the logs I found that BlueStore on one of the OSDs had run
>> out of space.
>>
>> Some cluster details:
>>
>> ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus
>> (stable)
>> it runs on 3 small OSDs, 10 GB each
>>
>> `ceph osd df` showed a RAW USE of about 4.5 GB on every OSD, happily
>> reporting about 5.5 GB AVAIL.
>>
>> Yet:
>>
>> ...
>> So, my question is: how could I have prevented this? According to the
>> monitoring I have (Prometheus), the OSDs are healthy and have plenty of
>> space, yet in reality they do not.
>>
>> What command (and Prometheus metric) would help me understand the actual
>> BlueStore usage? Or am I missing something?
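For what it's worth, BlueFS keeps its own space-accounting counters that
`ceph osd df` does not surface. A sketch of how to inspect them, assuming the
default admin-socket setup; the exact exported metric names in the comment
below come from the mgr prometheus module and may vary between releases, so
verify them against your own /metrics endpoint:

```shell
# Ask the OSD for its BlueFS perf counters via the admin socket; look at
# db_used_bytes vs db_total_bytes (and the slow_* counters for spillover).
ceph daemon osd.0 perf dump bluefs

# On the Prometheus side (metric names assumed, check your scrape output),
# a ratio along these lines could be alerted on before BlueFS fills up:
#   ceph_bluefs_db_used_bytes / ceph_bluefs_db_total_bytes > 0.9
```

An alert on that ratio would have fired here even while the cluster-level
AVAIL figure still looked comfortable.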
>>
>> Oh, and I "fixed" the cluster by expanding the broken osd.0 onto a larger
>> 15 GB volume. The other 2 OSDs still run on 10 GB volumes.
>>
>> Thanks in advance for any thoughts.
>>
>> --
>> With best regards, Ivan Kurnosov
>> _______________________________________________
>> ceph-users mailing list -- ceph-users(a)ceph.io
>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>