Hi all,
We have a new issue in our Nautilus cluster.
The large omap warning seems to be more common for RGW usage, but we
currently only use CephFS and RBD. I found one thread [1] regarding the
metadata pool, but it doesn't really help in our case.
The deep-scrub of PG 36.6 brought up this message (deep-scrub finished
with "ok"):
2019-09-30 20:18:22.548401 osd.9 (osd.9) 275 : cluster [WRN] Large
omap object found. Object: 36:654134d2:::mds0_openfiles.0:head Key
count: 238621 Size (bytes): 9994510
I checked the xattrs (none) and the omap header:
ceph01:~ # rados -p cephfs-metadata listxattr mds0_openfiles.0
ceph01:~ # rados -p cephfs-metadata getomapheader mds0_openfiles.0
header (42 bytes) :
00000000 13 00 00 00 63 65 70 68 20 66 73 20 76 6f 6c 75 |....ceph fs volu|
00000010 6d 65 20 76 30 31 31 01 01 0d 00 00 00 74 c3 12 |me v011......t..|
00000020 00 00 00 00 00 01 00 00 00 00 |..........|
0000002a
ceph01:~ # ceph fs volume ls
[
    {
        "name": "cephfs"
    }
]
The respective OSD has the default large_omap thresholds:
ceph02:~ # ceph daemon osd.9 config show | grep large_omap
"osd_deep_scrub_large_omap_object_key_threshold": "200000",
"osd_deep_scrub_large_omap_object_value_sum_threshold": "1073741824",
Can anyone point me to a solution for this?
Best regards,
Eugen
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033813.html
Hello!
We have a small Proxmox farm running Ceph across three nodes.
Each node has 6 disks, each with a capacity of 4 TB.
Only one pool has been created on these disks, with size 2 and min_size 1.
In theory, this pool should have a capacity of 32.74 TB, but the
ceph df command reports only 22.4 TB (USED + MAX AVAIL = 16.7 + 5.7).
How can this difference be explained?
*ceph version:* 12.2.12-pve1
*ceph df command out:*
POOLS:
    NAME          ID  QUOTA OBJECTS  QUOTA BYTES  USED     %USED  MAX AVAIL  OBJECTS  DIRTY  READ     WRITE   RAW USED
    ala01vf01p01  7   N/A            N/A          16.7TiB  74.53  5.70TiB    4411119  4.41M  2.62GiB  887MiB  33.4TiB
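As a sanity check on those figures (a rough sketch, assuming USED counts the user data once and RAW USED counts every replica on disk):

```python
# Values copied from the ceph df output above.
used_tib = 16.7       # USED: user data, counted once
raw_used_tib = 33.4   # RAW USED: all replicas on disk
pool_size = 2         # replication factor (size 2/1)

# With 2x replication, RAW USED should be roughly USED * size.
assert abs(used_tib * pool_size - raw_used_tib) < 0.1
```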
*crush map:*
host n01vf01 {
    id -3 # do not change unnecessarily
    id -4 class hdd # do not change unnecessarily
    id -18 class nvme # do not change unnecessarily
    # weight 22.014
    alg straw2
    hash 0 # rjenkins1
    item osd.0 weight 3.669
    item osd.13 weight 3.669
    item osd.14 weight 3.669
    item osd.15 weight 3.669
    item osd.16 weight 3.669
    item osd.17 weight 3.669
}
host n02vf01 {
    id -5 # do not change unnecessarily
    id -6 class hdd # do not change unnecessarily
    id -19 class nvme # do not change unnecessarily
    # weight 22.014
    alg straw2
    hash 0 # rjenkins1
    item osd.1 weight 3.669
    item osd.8 weight 3.669
    item osd.9 weight 3.669
    item osd.10 weight 3.669
    item osd.11 weight 3.669
    item osd.12 weight 3.669
}
host n04vf01 {
    id -34 # do not change unnecessarily
    id -35 class hdd # do not change unnecessarily
    id -36 class nvme # do not change unnecessarily
    # weight 22.014
    alg straw2
    hash 0 # rjenkins1
    item osd.7 weight 3.669
    item osd.27 weight 3.669
    item osd.24 weight 3.669
    item osd.25 weight 3.669
    item osd.26 weight 3.669
    item osd.28 weight 3.669
}
root default {
    id -1 # do not change unnecessarily
    id -2 class hdd # do not change unnecessarily
    id -21 class nvme # do not change unnecessarily
    # weight 66.042
    alg straw2
    hash 0 # rjenkins1
    item n01vf01 weight 22.014
    item n02vf01 weight 22.014
    item n04vf01 weight 22.014
}
rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
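Working from the crush weights above, the theoretical ceiling can be estimated (a rough sketch; note that MAX AVAIL is typically derived from the fullest OSD and the full ratio, so USED + MAX AVAIL will usually sit below this figure):

```python
# Estimate pool capacity from the crush weights in the map above.
hosts = 3
osds_per_host = 6
weight_tib = 3.669            # crush weight per OSD (a 4 TB disk is ~3.64 TiB)
pool_size = 2                 # replicated pool, size 2

raw_tib = hosts * osds_per_host * weight_tib   # ~66.042 TiB raw, matching root default
usable_tib = raw_tib / pool_size               # ~33 TiB ceiling with size 2
```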