On 13. Dec 2020, at 16:49, Anthony D'Atri <anthony.datri(a)gmail.com> wrote:
I suspect so, if rbd-mirror is fully disabled. If it’s still enabled for the pool or
image, removing it may fail.
Turn it off and we’ll both find out for sure.
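
For reference, a minimal sketch of how mirroring can be checked and switched off first; the pool/image names are taken from the rbd du output further down in this thread, so adjust them to your setup:

rbd mirror pool info ssdshop
rbd mirror image status ssdshop/shp-de-300gb.rbd
rbd mirror image disable ssdshop/shp-de-300gb.rbd
rbd mirror pool disable ssdshop
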
On Dec 13, 2020, at 7:36 AM, mk <mk(a)pop.de> wrote:
In fact, journaling was enabled. Is it enough to disable the feature so that the pool shrinks automatically again, or are additional actions still required?
—
Max
On 13. Dec 2020, at 15:53, Anthony D'Atri <anthony.datri(a)gmail.com> wrote:
rbd status
rbd info
If the ‘journaling’ flag is enabled, use ‘rbd feature’ to remove it from the image.
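
Concretely, something along these lines should show and clear the flag (pool/image names taken from the rbd du output below; adjust as needed):

rbd status ssdshop/shp-de-300gb.rbd
rbd info ssdshop/shp-de-300gb.rbd | grep features
rbd feature disable ssdshop/shp-de-300gb.rbd journaling
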
> On Dec 13, 2020, at 6:22 AM, mk <mk(a)pop.de> wrote:
>
> Yes, a few months ago I enabled mirroring for a few weeks and then disabled it again. Are there any additional actions that have to be taken regarding journaling as well?
> FYI, I also copied the rbd image into a newly created pool, but after a few weeks the new pool grew to 11 TB again, which is the current state.
> --
> BR
> Max
>
>> On 13. Dec 2020, at 14:34, Jason Dillaman <jdillama(a)redhat.com> wrote:
>>
>>> On Sun, Dec 13, 2020 at 6:03 AM mk <mk(a)pop.de> wrote:
>>>
>>> rados ls -p ssdshop
>>> outputs 20MB of lines without any bench prefix
>>> ...
>>> rbd_data.d4993cc3c89825.00000000000074ec
>>> rbd_data.d4993cc3c89825.0000000000001634
>>> journal_data.83.d4993cc3c89825.333485
>>> journal_data.83.d4993cc3c89825.380648
>>> journal_data.83.d4993cc3c89825.503838
>>> ...
>>
>> If you have journal objects indexed at 333485 and 503838, that's
>> nearly 2 TiB of data right there. Sounds like you enabled journaling
>> and mirroring but perhaps never turned it off when you stopped using
>> it?
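>>
>> (Back-of-the-envelope from the numbers above: if each journal_data object holds
>> roughly 4 MiB, which is what the ~2 TiB figure implies, then 503838 x 4 MiB
>> comes out to about 1.9 TiB of journal data alone.)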
>>
>>>> On 13. Dec 2020, at 11:05, Anthony D'Atri <anthony.datri(a)gmail.com> wrote:
>>>>
>>>> Any chance you might have orphaned `rados bench` objects? This happens more than one might think.
>>>>
>>>> `rados ls -p ssdshop > /tmp/out`
>>>>
>>>> Inspect the result. You should see a few administrative objects and some header and data objects for the RBD volume. If you see a zillion with names like `bench*`, there’s your culprit. Those can be cleaned up.
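>>>>
>>>> Something like this should count and clean them up, if memory serves (pool name
>>>> taken from this thread; `rados cleanup` only removes objects left behind by `rados bench`):
>>>>
>>>> rados -p ssdshop ls | grep -c '^benchmark_data'
>>>> rados -p ssdshop cleanup --prefix benchmark_data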
>>>>
>>>>
>>>>
>>>>> On Dec 12, 2020, at 11:42 PM, mk <mk(a)pop.de> wrote:
>>>>>
>>>>> Hi folks,
>>>>> my cluster shows strange behavior: the only SSD pool on the cluster, with repsize 3 and pg/pgp size 512, contains a 300 GB rbd image with only one snapshot, yet occupies 11 TB of space!
>>>>>
>>>>> I have tried objectmap check / rebuild, fstrim, etc., which could not solve the problem; any help would be appreciated.
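>>>>>
>>>>> For reference, those steps would be roughly the following (the fstrim mountpoint
>>>>> is a placeholder, to be run inside the guest that uses the image):
>>>>>
>>>>> rbd object-map check ssdshop/shp-de-300gb.rbd
>>>>> rbd object-map rebuild ssdshop/shp-de-300gb.rbd
>>>>> fstrim -v /path/to/mountpoint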
>>>>>
>>>>>
>>>>> ceph version 14.2.7 nautilus (stable)
>>>>>
>>>>>
>>>>> ceph df
>>>>> -------
>>>>> RAW STORAGE:
>>>>> CLASS SIZE AVAIL USED RAW USED %RAW USED
>>>>> hdd 107 TiB 68 TiB 39 TiB 39 TiB 36.45
>>>>> ssd 21 TiB 11 TiB 11 TiB 11 TiB 50.78
>>>>> TOTAL 128 TiB 78 TiB 50 TiB 50 TiB 38.84
>>>>>
>>>>> POOLS:
>>>>> POOL       ID    STORED     OBJECTS    USED      %USED    MAX AVAIL
>>>>> ssdshop    83    3.5 TiB    517.72k    11 TiB    96.70      124 GiB
>>>>>
>>>>>
>>>>> rados df
>>>>> --------
>>>>> POOL_NAME    USED      OBJECTS    CLONES    COPIES     MISSING_ON_PRIMARY    UNFOUND    DEGRADED    RD_OPS      RD        WR_OPS      WR         USED COMPR    UNDER COMPR
>>>>> ssdshop      11 TiB    537040     28316     1611120    0                     0          0           11482773    15 GiB    44189589    854 GiB    0 B           0 B
>>>>>
>>>>>
>>>>>
>>>>> rbd du -p ssdshop
>>>>> -----------------
>>>>> NAME PROVISIONED USED
>>>>> shp-de-300gb.rbd@snap_2020-12-12_20:30:00 300 GiB 289 GiB
>>>>> shp-de-300gb.rbd 300 GiB 109 GiB
>>>>> <TOTAL> 300 GiB 398 GiB
>>>>>
>>>>>
>>>>> crush_rule
>>>>> -----------
>>>>> rule ssd {
>>>>> id 3
>>>>> type replicated
>>>>> min_size 1
>>>>> max_size 10
>>>>> step take dc1 class ssd
>>>>> step chooseleaf firstn 2 type rack
>>>>> step emit
>>>>> step take dc2 class ssd
>>>>> step chooseleaf firstn -1 type rack
>>>>> step emit
>>>>> }
>>>>>
>>>>> BR
>>>>> Max
>>
>>
>>
>> --
>> Jason
>
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io