Hey,
What is the current status of Kinetic KV support in Ceph?
I'm asking because:
https://www.crn.com.au/news/seagate-quietly-bins-open-storage-project-519345 ..
and the fact that kinetic-cpp-client hasn't been updated in four years and
only compiles against OpenSSL 1.0.2, which will become EOL by the end of
2019.
Or am I totally wrong?
Thank you in advance for your reply,
/Johan
Hi folks,
Originally our osd tree looked like this:
ID   CLASS  WEIGHT      TYPE NAME          STATUS  REWEIGHT  PRI-AFF
 -1         2073.15186  root default
-14          176.63100      rack s01-rack
-19          176.63100          host s01
<snip osds>
-15          171.29900      rack s02-rack
-20          171.29900          host s02
<snip osds>
etc. You get the idea. It was a legacy thing, as we've been upgrading this
cluster since probably Firefly, and started with way less hardware.
The crush rule was set up like this originally:
step take default
step chooseleaf firstn 0 type rack
which we have modified to
step take default
step chooseleaf firstn 0 type host
taking advantage of chooseleaf's behavior (e.g. searching in depth instead
of at just a single level).
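For context, the full rule as decompiled by crushtool now looks roughly like
this (rule name, id and the size limits are placeholders here; only the steps
are the real ones):

rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}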
Now we thought we could get rid of the rack buckets simply by moving the
host buckets to the root using "ceph osd crush move s01 root=default";
however, this resulted in a bunch of data movement.
Swapping the IDs manually in the crushmap seems to work (verified via
crushtool's --compare), e.g. changing the ID of s01 to s01-rack's and vice
versa, including all shadow trees.
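For reference, the edit/verify cycle looks roughly like this (file names are
just placeholders):

ceph osd getcrushmap -o crushmap.orig
crushtool -d crushmap.orig -o crushmap.txt
# edit crushmap.txt: swap the IDs of s01 and s01-rack (and of their
# shadow ~class buckets), then recompile
crushtool -c crushmap.txt -o crushmap.new
# report mapping differences between the old and the new map
crushtool -i crushmap.orig --compare crushmap.new
# and only once --compare shows no remapping, inject it:
# ceph osd setcrushmap -i crushmap.new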
Looking around I saw that there is a swap-bucket command, but that does not
swap the IDs, just the bucket contents, so it would result in data movement.
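(If I read the docs right, it is invoked as something like

ceph osd crush swap-bucket s01 s01-rack

but since it only swaps the contents and not the IDs, the CRUSH placement
would still change.)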
Other than manually editing the crushmap, is there a better way to achieve
this? Is this approach optimal?
Cheers,
Zoltan
Hi,
I recently upgraded my cluster from 12.2 to 14.2 and I'm having some
trouble getting the mgr Grafana dashboards working.
I set up Prometheus and Grafana per
https://docs.ceph.com/docs/nautilus/mgr/prometheus/#mgr-prometheus
However, for the OSD Disk Performance Statistics graphs on the Host Details
dashboard I'm getting the following error:
"found duplicate series for the match group {device="dm-5",
instance=":9100"} on the right hand-side of the operation:
[{name="ceph_disk_occupation", ceph_daemon="osd.13", db_device="/dev/dm-8",
device="dm-5", instance=":9100", job="ceph"}, {name="ceph_disk_occupation",
ceph_daemon="osd.15", db_device="/dev/dm-10", device="dm-5",
instance=":9100", job="ceph"}];many-to-many matching not allowed: matching
labels must be unique on one side"
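As far as I can tell the panels join the node_exporter disk metrics against
ceph_disk_occupation on (instance, device), so a stripped-down version of the
failing expression would be something like the following (metric names are
illustrative, not the exact panel query):

# simplified sketch of the kind of join the dashboard panels do
irate(node_disk_io_time_ms[5m])
  * on (instance, device) group_left (ceph_daemon)
  ceph_disk_occupation
# osd.13 and osd.15 both export ceph_disk_occupation with device="dm-5"
# and instance=":9100", so the right-hand side is not unique per
# (instance, device) pair and Prometheus rejects the match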
This also happens on the following graphs:
Host Overview/AVG Disk Utilization
Host Details/OSD Disk Performance Statistics/*
Also the following graphs show no data points:
OSD Details/Physical Device Performance/*
Prometheus version: 2.12.0
node_exporter version: 0.15.2
Grafana version: 6.3.3
Note that my OSDs all have separate data and RocksDB devices. I have also
upgraded all the OSDs to Nautilus via ceph-bluestore-tool repair.
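If it helps, the raw series can be pulled straight from the mgr prometheus
module to see which instance/device/db_device labels it attaches (9283 is the
mgr port from ceph_targets.yml below):

# dump the disk-occupation series directly from one of the mgr endpoints
curl -s http://nas-osd-01:9283/metrics | grep '^ceph_disk_occupation'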
Any idea what's needed to fix this?
Thanks
Below are the Prometheus config files:
prometheus.yml:
global:
  scrape_interval: 5s
  evaluation_interval: 5s
scrape_configs:
  - job_name: 'node'
    file_sd_configs:
      - files:
          - node_targets.yml
  - job_name: 'ceph'
    honor_labels: true
    file_sd_configs:
      - files:
          - ceph_targets.yml
----
node_targets.yml:
[
  {
    "targets": [ "nas-osd-01:9100" ],
    "labels": {
      "instance": "nas-osd-01"
    }
  },
  {
    "targets": [ "nas-osd-02:9100" ],
    "labels": {
      "instance": "nas-osd-02"
    }
  },
  {
"targets": [ "nas-osd-02:9100" ],
"labels": {
"instance": "nas-osd-03"
}
}
]
---
ceph_targets.yml:
[
  {
    "targets": [ "nas-osd-01:9283" ],
    "labels": {}
  },
  {
    "targets": [ "nas-osd-02:9283" ],
    "labels": {}
  },
  {
    "targets": [ "nas-osd-03:9283" ],
    "labels": {}
  }
]