I found a merge request: ceph-mon has a new option, mon_sync_max_payload_keys:
https://github.com/ceph/ceph/commit/d6037b7f484e13cfc9136e63e4cf7fac6ad6896…
My value for mon_sync_max_payload_size is 4096.
If mon_sync_max_payload_keys is left at its default, the ceph mons may fail to sync because syncing becomes too slow.
What do you think?
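For reference, this is how I am checking and experimenting with these values (a sketch using the admin socket and injectargs; the key-count value below is just an example, not a recommendation):

ceph daemon mon.$(hostname -s) config get mon_sync_max_payload_size
ceph daemon mon.$(hostname -s) config get mon_sync_max_payload_keys
# try a different key cap at runtime, without restarting the mons:
ceph tell mon.* injectargs '--mon_sync_max_payload_keys=1000'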
Hi Friends,
We have 2 Ceph clusters on campus, and we set up the second cluster as the DR
solution.
The images on the DR side are always behind the master.
Ceph version: 12.2.11
VMWARE_LUN0:
global_id: 23460954-6986-4961-9579-0f2a1e58e2b2
state: up+replaying
description: replaying, master_position=[object_number=2632711,
tag_tid=24, entry_tid=1967382595], mirror_position=[object_number=1452837,
tag_tid=24, entry_tid=456440697], entries_behind_master=1510941898
last_update: 2020-11-30 14:13:38
VMWARE_LUN1:
global_id: cb579579-13b0-4522-b65f-c64ec44cbfaf
state: up+replaying
description: replaying, master_position=[object_number=1883943,
tag_tid=28, entry_tid=1028822927], mirror_position=[object_number=1359161,
tag_tid=28, entry_tid=358296085], entries_behind_master=670526842
last_update: 2020-11-30 14:13:33
Any suggestions on tuning, or any parameters we can set on rbd-mirror, to speed
up the replication? Both clusters have very little activity.
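(For context, the only knobs I have found so far are the journal fetch and payload sizes; I am not sure they all exist in 12.2, so treat this as a sketch of what I am considering rather than something I have verified:

[client]
rbd_mirror_journal_max_fetch_bytes = 33554432   # on the rbd-mirror daemon's host
rbd_journal_max_payload_bytes = 8388608         # on the clients writing the journal

followed by a restart of the rbd-mirror daemon.)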
Appreciate your help.
Thanks,
-Vikas
Hi,
we created multiple CephFS filesystems; this involved deploying multiple MDS services using `ceph orch apply mds [...]`. Worked like a charm.
Now the filesystem has been removed and its leftovers should be removed as well, but I can't delete the services because the cephadm/orchestrator module keeps recreating them. What is the "official" way to delete such an applied service set? Setting the placement size to 0 is not possible in Ceph 15.
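Is `ceph orch rm` supposed to handle this? A sketch of what I have in mind, with `myfs` as a placeholder for the removed filesystem's name (I am not sure at which 15.2.x point release this subcommand became usable for service specs):

ceph orch ls mds
ceph orch rm mds.myfs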
Kind regards,
Michael
In fact journaling was enabled. Is it enough to disable the feature so that the pool shrinks automatically again, or are additional actions required?
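I.e. would something like this be sufficient (pool and image names taken from the output quoted below)?

rbd feature disable ssdshop/shp-de-300gb.rbd journaling

And then just wait for the journal_data.* objects to be cleaned up and the space reclaimed?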
—
Max
> On 13. Dec 2020, at 15:53, Anthony D'Atri <anthony.datri(a)gmail.com> wrote:
>
> rbd status
> rbd info
>
> If the ‘journaling’ flag is enabled, use ‘rbd feature’ to remove it from the image.
>
>
>
>> On Dec 13, 2020, at 6:22 AM, mk <mk(a)pop.de> wrote:
>>
>> Yes, a few months ago I enabled mirroring for a few weeks and then disabled it again. Are there any additional actions that have to be taken regarding journaling as well?
>> FYI, I also copied the RBD image into a newly created pool, but after a few weeks the new pool grew to 11TB again, which is the current state.
>> --
>> BR
>> Max
>>
>>> On 13. Dec 2020, at 14:34, Jason Dillaman <jdillama(a)redhat.com> wrote:
>>>
>>>> On Sun, Dec 13, 2020 at 6:03 AM mk <mk(a)pop.de> wrote:
>>>>
>>>> rados ls -p ssdshop
>>>> outputs 20MB of lines without any bench prefix
>>>> ...
>>>> rbd_data.d4993cc3c89825.00000000000074ec
>>>> rbd_data.d4993cc3c89825.0000000000001634
>>>> journal_data.83.d4993cc3c89825.333485
>>>> journal_data.83.d4993cc3c89825.380648
>>>> journal_data.83.d4993cc3c89825.503838
>>>> ...
>>>
>>> If you have journal objects indexed at 333485 and 503838, that's
>>> nearly 2 TiB of data right there. Sounds like you enabled journaling
>>> and mirroring but perhaps never turned it off when you stopped using
>>> it?
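>>> (Roughly: 503838 - 333485 gives ~170k live journal objects; multiplied
>>> by the per-object journal size, that easily lands in the TiB range,
>>> which also matches the gap between the pool's STORED value and `rbd du`.)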
>>>
>>>>> On 13. Dec 2020, at 11:05, Anthony D'Atri <anthony.datri(a)gmail.com> wrote:
>>>>>
>>>>> Any chance you might have orphaned `rados bench` objects ? This happens more than one might think.
>>>>>
>>>>> `rados -p ssdshop ls > /tmp/out`
>>>>>
>>>>> Inspect the result. You should see a few administrative objects, some header and data objects for the RBD volume. If you see a zillion with names like `bench*` there’s your culprit. Those can be cleaned up.
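>>>>> For the cleanup, the rados bench tooling has a dedicated subcommand
>>>>> (adjust the prefix to the names you actually see):
>>>>>
>>>>> `rados -p ssdshop cleanup --prefix benchmark_data`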
>>>>>
>>>>>
>>>>>
>>>>>> On Dec 12, 2020, at 11:42 PM, mk <mk(a)pop.de> wrote:
>>>>>>
>>>>>> Hi folks,
>>>>>> my cluster shows strange behavior: the only SSD pool on the cluster, with repsize 3 and pg/pgp_num 512,
>>>>>> contains a 300 GB RBD image plus a single snapshot, yet occupies 11 TB of space!
>>>>>>
>>>>>> I have tried objectmap check / rebuild, fstrim, etc., none of which solved the problem; any help would be appreciated.
>>>>>>
>>>>>>
>>>>>> ceph version 14.2.7 nautilus (stable)
>>>>>>
>>>>>>
>>>>>> ceph df
>>>>>> -------
>>>>>> RAW STORAGE:
>>>>>> CLASS SIZE AVAIL USED RAW USED %RAW USED
>>>>>> hdd 107 TiB 68 TiB 39 TiB 39 TiB 36.45
>>>>>> ssd 21 TiB 11 TiB 11 TiB 11 TiB 50.78
>>>>>> TOTAL 128 TiB 78 TiB 50 TiB 50 TiB 38.84
>>>>>>
>>>>>> POOLS:
>>>>>> POOL ID STORED OBJECTS USED %USED MAX AVAIL
>>>>>> ssdshop 83 3.5 TiB 517.72k 11 TiB 96.70 124 GiB
>>>>>>
>>>>>>
>>>>>> rados df
>>>>>> --------
>>>>>> POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR USED COMPR UNDER COMPR
>>>>>> ssdshop 11 TiB 537040 28316 1611120 0 0 0 11482773 15 GiB 44189589 854 GiB 0 B 0 B
>>>>>>
>>>>>>
>>>>>>
>>>>>> rbd du -p ssdshop
>>>>>> -----------------
>>>>>> NAME PROVISIONED USED
>>>>>> shp-de-300gb.rbd@snap_2020-12-12_20:30:00 300 GiB 289 GiB
>>>>>> shp-de-300gb.rbd 300 GiB 109 GiB
>>>>>> <TOTAL> 300 GiB 398 GiB
>>>>>>
>>>>>>
>>>>>> crush_rule
>>>>>> -----------
>>>>>> rule ssd {
>>>>>> id 3
>>>>>> type replicated
>>>>>> min_size 1
>>>>>> max_size 10
>>>>>> step take dc1 class ssd
>>>>>> step chooseleaf firstn 2 type rack
>>>>>> step emit
>>>>>> step take dc2 class ssd
>>>>>> step chooseleaf firstn -1 type rack
>>>>>> step emit
>>>>>> }
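>>>>>> (If I read the rule correctly: with repsize 3 it ends up placing two replicas on SSD racks in dc1 and the remaining one in dc2.)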
>>>>>>
>>>>>> BR
>>>>>> Max
>>>
>>> --
>>> Jason
>>
Dear all,
It seems that by default the Grafana web page embedded in the Ceph dashboard is publicly available in read-only mode. More specifically, the Grafana configuration inside the container running the Grafana instance contains the following settings (from the template /usr/share/ceph/mgr/cephadm/templates/services/grafana/grafana.ini.j2):
[auth.anonymous]
enabled = true
org_name = 'Main Org.'
org_role = 'Viewer'
Do you think this might be a security concern? Is there a way to enforce authentication for the read-only mode as well? I wasn't able to find any documentation on how to configure Grafana in this setup. The only thing I found that might be related to this issue is the following: https://tracker.ceph.com/issues/45372.
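In case it helps frame the question: what I was considering is cephadm's config-key template override, i.e. something like the sketch below (untested on my side; the config-key path and redeploy step are my reading of the cephadm docs, so please correct me if this is not the supported way):

# grafana.ini: same content as the template, but with
#   [auth.anonymous]
#   enabled = false
ceph config-key set mgr/cephadm/services/grafana/grafana.ini -i ./grafana.ini
ceph orch redeploy grafana

Would that break the dashboard's embedded panels, which presumably rely on anonymous access?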
Regards,
Alessandro Piazza