in the logs.
Please make sure `ceph mon ok-to-stop vx-rg23-rk65-u43-130`
succeeds.
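For context, the `ok-to-stop` check comes down to monitor quorum arithmetic: a strict majority of monitors must remain up. A minimal sketch in Python (hypothetical helper names, not cephadm's actual implementation):

```python
def quorum_needed(n_mons: int) -> int:
    # Paxos requires a strict majority of monitors to be up
    return n_mons // 2 + 1

def ok_to_stop(n_mons: int, n_to_stop: int = 1) -> bool:
    # Safe only if the remaining monitors still form a majority
    return n_mons - n_to_stop >= quorum_needed(n_mons)

print(ok_to_stop(2))  # False: with 2 mons, stopping either one breaks quorum
print(ok_to_stop(3))  # True: 2 of 3 remain, still a majority
```

With only two monitors in this cluster (the `ceph versions` output further down shows "mon": 2), the check can never succeed, which is why the upgrade loops on "NOT safe to stop".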
On 22.05.20 at 19:28, Gencer W. Genç wrote:
Hi Sebastian,
I cannot see my replies here, so I am pasting the attachment into the message body:
2020-05-21T18:52:36.813+0000 7faf19f20040 0 set uid:gid to 167:167 (ceph:ceph)
2020-05-21T18:52:36.813+0000 7faf19f20040 0 ceph version 15.2.2
(0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable), process ceph-mgr, pid 1
2020-05-21T18:52:36.817+0000 7faf19f20040 0 pidfile_write: ignore empty --pid-file
2020-05-21T18:52:36.853+0000 7faf19f20040 1 mgr[py] Loading python module
'alerts'
2020-05-21T18:52:36.957+0000 7faf19f20040 1 mgr[py] Loading python module
'balancer'
2020-05-21T18:52:37.029+0000 7faf19f20040 1 mgr[py] Loading python module
'cephadm'
2020-05-21T18:52:37.237+0000 7faf19f20040 1 mgr[py] Loading python module
'crash'
2020-05-21T18:52:37.333+0000 7faf19f20040 1 mgr[py] Loading python module
'dashboard'
2020-05-21T18:52:37.981+0000 7faf19f20040 1 mgr[py] Loading python module
'devicehealth'
2020-05-21T18:52:38.045+0000 7faf19f20040 1 mgr[py] Loading python module
'diskprediction_local'
2020-05-21T18:52:38.221+0000 7faf19f20040 1 mgr[py] Loading python module
'influx'
2020-05-21T18:52:38.293+0000 7faf19f20040 1 mgr[py] Loading python module
'insights'
2020-05-21T18:52:38.425+0000 7faf19f20040 1 mgr[py] Loading python module
'iostat'
2020-05-21T18:52:38.489+0000 7faf19f20040 1 mgr[py] Loading python module
'k8sevents'
2020-05-21T18:52:39.077+0000 7faf19f20040 1 mgr[py] Loading python module
'localpool'
2020-05-21T18:52:39.133+0000 7faf19f20040 1 mgr[py] Loading python module
'orchestrator'
2020-05-21T18:52:39.277+0000 7faf19f20040 1 mgr[py] Loading python module
'osd_support'
2020-05-21T18:52:39.433+0000 7faf19f20040 1 mgr[py] Loading python module
'pg_autoscaler'
2020-05-21T18:52:39.545+0000 7faf19f20040 1 mgr[py] Loading python module
'progress'
2020-05-21T18:52:39.633+0000 7faf19f20040 1 mgr[py] Loading python module
'prometheus'
2020-05-21T18:52:40.013+0000 7faf19f20040 1 mgr[py] Loading python module
'rbd_support'
2020-05-21T18:52:40.253+0000 7faf19f20040 1 mgr[py] Loading python module
'restful'
2020-05-21T18:52:40.553+0000 7faf19f20040 1 mgr[py] Loading python module
'rook'
2020-05-21T18:52:41.229+0000 7faf19f20040 1 mgr[py] Loading python module
'selftest'
2020-05-21T18:52:41.285+0000 7faf19f20040 1 mgr[py] Loading python module
'status'
2020-05-21T18:52:41.357+0000 7faf19f20040 1 mgr[py] Loading python module
'telegraf'
2020-05-21T18:52:41.421+0000 7faf19f20040 1 mgr[py] Loading python module
'telemetry'
2020-05-21T18:52:41.581+0000 7faf19f20040 1 mgr[py] Loading python module
'test_orchestrator'
2020-05-21T18:52:41.937+0000 7faf19f20040 1 mgr[py] Loading python module
'volumes'
2020-05-21T18:52:42.121+0000 7faf19f20040 1 mgr[py] Loading python module
'zabbix'
2020-05-21T18:52:42.189+0000 7faf06a1a700 0 ms_deliver_dispatch: unhandled message
0x556226c8e6e0 mon_map magic: 0 v1 from mon.1 v2:192.168.0.3:3300/0
2020-05-21T18:52:43.557+0000 7faf06a1a700 1 mgr handle_mgr_map Activating!
2020-05-21T18:52:43.557+0000 7faf06a1a700 1 mgr handle_mgr_map I am now activating
2020-05-21T18:52:43.665+0000 7faed44a7700 0 [balancer DEBUG root] setting log level
based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.665+0000 7faed44a7700 1 mgr load Constructed class from module:
balancer
2020-05-21T18:52:43.665+0000 7faed44a7700 0 [cephadm DEBUG root] setting log level based
on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.689+0000 7faed44a7700 1 mgr load Constructed class from module:
cephadm
2020-05-21T18:52:43.689+0000 7faed44a7700 0 [crash DEBUG root] setting log level based
on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.689+0000 7faed44a7700 1 mgr load Constructed class from module:
crash
2020-05-21T18:52:43.693+0000 7faed44a7700 0 [dashboard DEBUG root] setting log level
based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.693+0000 7faed44a7700 1 mgr load Constructed class from module:
dashboard
2020-05-21T18:52:43.693+0000 7faed44a7700 0 [devicehealth DEBUG root] setting log level
based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.693+0000 7faed44a7700 1 mgr load Constructed class from module:
devicehealth
2020-05-21T18:52:43.701+0000 7faed44a7700 0 [iostat DEBUG root] setting log level based
on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.701+0000 7faed44a7700 1 mgr load Constructed class from module:
iostat
2020-05-21T18:52:43.709+0000 7faed44a7700 0 [orchestrator DEBUG root] setting log level
based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.709+0000 7faed44a7700 1 mgr load Constructed class from module:
orchestrator
2020-05-21T18:52:43.717+0000 7faed44a7700 0 [osd_support DEBUG root] setting log level
based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.717+0000 7faed44a7700 1 mgr load Constructed class from module:
osd_support
2020-05-21T18:52:43.717+0000 7faed44a7700 0 [pg_autoscaler DEBUG root] setting log level
based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.721+0000 7faed44a7700 1 mgr load Constructed class from module:
pg_autoscaler
2020-05-21T18:52:43.721+0000 7faed44a7700 0 [progress DEBUG root] setting log level
based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.721+0000 7faed44a7700 1 mgr load Constructed class from module:
progress
2020-05-21T18:52:43.729+0000 7faed44a7700 0 [prometheus DEBUG root] setting log level
based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.729+0000 7faed44a7700 1 mgr load Constructed class from module:
prometheus
2020-05-21T18:52:43.733+0000 7faed44a7700 0 [rbd_support DEBUG root] setting log level
based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:44.761+0000 7faed4ca8700 0 log_channel(cluster) log [DBG] : pgmap v4:
97 pgs: 33 undersized+peered, 64 undersized+degraded+peered; 4.8 GiB data, 5.8 GiB used,
44 TiB / 44 TiB avail; 1379/2758 objects degraded (50.000%)
2020-05-21T18:52:45.641+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v5:
97 pgs: 33 undersized+peered, 64 undersized+degraded+peered; 4.8 GiB data, 6.4 GiB used,
47 TiB / 47 TiB avail; 1379/2758 objects degraded (50.000%)
2020-05-21T18:52:47.645+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v6:
97 pgs: 33 undersized+peered, 64 undersized+degraded+peered; 4.8 GiB data, 6.4 GiB used,
47 TiB / 47 TiB avail; 1379/2758 objects degraded (50.000%)
2020-05-21T18:52:49.645+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v9:
97 pgs: 33 undersized+peered, 64 undersized+degraded+peered; 4.8 GiB data, 11 GiB used, 80
TiB / 80 TiB avail; 1379/2758 objects degraded (50.000%)
2020-05-21T18:52:49.805+0000 7faed4ca8700 0 log_channel(audit) log [DBG] :
from='client.134148 -' entity='client.admin' cmd=[{"prefix":
"orch upgrade start", "ceph_version": "15.2.2",
"target": ["mon-mgr", ""]}]: dispatch
2020-05-21T18:52:51.645+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v12:
97 pgs: 10 active+clean, 20 undersized+peered, 27 peering, 40 undersized+degraded+peered;
4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 889/2758 objects degraded (32.234%)
2020-05-21T18:52:51.817+0000 7faed44a7700 1 mgr load Constructed class from module:
rbd_support
2020-05-21T18:52:51.817+0000 7faed44a7700 0 [restful DEBUG root] setting log level based
on debug_mgr: WARNING (1/5)
2020-05-21T18:52:51.817+0000 7faed44a7700 1 mgr load Constructed class from module:
restful
2020-05-21T18:52:51.817+0000 7faed44a7700 0 [status DEBUG root] setting log level based
on debug_mgr: WARNING (1/5)
2020-05-21T18:52:51.817+0000 7faed44a7700 1 mgr load Constructed class from module:
status
2020-05-21T18:52:51.821+0000 7faecb0fb700 0 [restful WARNING root] server not running:
no certificate configured
2020-05-21T18:52:51.821+0000 7faed44a7700 0 [telemetry DEBUG root] setting log level
based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:51.825+0000 7faed44a7700 1 mgr load Constructed class from module:
telemetry
2020-05-21T18:52:51.825+0000 7faed44a7700 0 [volumes DEBUG root] setting log level based
on debug_mgr: WARNING (1/5)
2020-05-21T18:52:51.837+0000 7faed44a7700 1 mgr load Constructed class from module:
volumes
2020-05-21T18:52:51.853+0000 7faec48ee700 -1 client.0 error registering admin socket
command: (17) File exists
2020-05-21T18:52:51.853+0000 7faec48ee700 -1 client.0 error registering admin socket
command: (17) File exists
2020-05-21T18:52:51.853+0000 7faec48ee700 -1 client.0 error registering admin socket
command: (17) File exists
2020-05-21T18:52:51.853+0000 7faec48ee700 -1 client.0 error registering admin socket
command: (17) File exists
2020-05-21T18:52:51.853+0000 7faec48ee700 -1 client.0 error registering admin socket
command: (17) File exists
2020-05-21T18:52:53.645+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v14:
97 pgs: 10 active+clean, 20 undersized+peered, 27 peering, 40 undersized+degraded+peered;
4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 889/2758 objects degraded (32.234%)
2020-05-21T18:52:55.053+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade:
First pull of docker.io/ceph/ceph:v15.2.2
2020-05-21T18:52:55.649+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v15:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 180 KiB/s rd,
5.5 KiB/s wr, 19 op/s
2020-05-21T18:52:57.649+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v16:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 135 KiB/s rd,
4.1 KiB/s wr, 14 op/s
2020-05-21T18:52:59.649+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v17:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 123 KiB/s rd,
4.0 KiB/s wr, 15 op/s
2020-05-21T18:53:01.133+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade:
Target is docker.io/ceph/ceph:v15.2.2 with id
4569944bb86c3f9b5286057a558a3f852156079f759c9734e54d4f64092be9fa
2020-05-21T18:53:01.137+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade:
Checking mgr daemons...
2020-05-21T18:53:01.141+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade:
All mgr daemons are up to date.
2020-05-21T18:53:01.141+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade:
Checking mon daemons...
2020-05-21T18:53:01.649+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v18:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 111 KiB/s rd,
3.6 KiB/s wr, 15 op/s
2020-05-21T18:53:02.381+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade: It
is NOT safe to stop mon.vx-rg23-rk65-u43-130
2020-05-21T18:53:03.653+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v19:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 93 KiB/s rd,
3.0 KiB/s wr, 12 op/s
2020-05-21T18:53:05.653+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v20:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 93 KiB/s rd,
4.6 KiB/s wr, 13 op/s
2020-05-21T18:53:07.653+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v21:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 2.7 KiB/s rd,
1.8 KiB/s wr, 3 op/s
2020-05-21T18:53:09.658+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v22:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 2.7 KiB/s rd,
1.8 KiB/s wr, 3 op/s
2020-05-21T18:53:11.658+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v23:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 1023 B/s rd,
1.6 KiB/s wr, 1 op/s
2020-05-21T18:53:13.658+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v24:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 1.6 KiB/s wr, 0
op/s
2020-05-21T18:53:15.658+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v25:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 1.8 KiB/s wr, 1
op/s
2020-05-21T18:53:17.402+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade: It
is NOT safe to stop mon.vx-rg23-rk65-u43-130
2020-05-21T18:53:17.658+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v26:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 255 B/s wr, 0
op/s
2020-05-21T18:53:19.662+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v27:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 255 B/s wr, 0
op/s
2020-05-21T18:53:21.662+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v28:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 255 B/s wr, 0
op/s
2020-05-21T18:53:23.662+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v29:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 255 B/s wr, 0
op/s
2020-05-21T18:53:25.662+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v30:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 5.7 KiB/s wr, 0
op/s
2020-05-21T18:53:27.666+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v31:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 5.4 KiB/s wr, 0
op/s
2020-05-21T18:53:29.666+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v32:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 5.4 KiB/s wr, 0
op/s
2020-05-21T18:53:31.666+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v33:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 5.4 KiB/s wr, 0
op/s
2020-05-21T18:53:32.414+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade: It
is NOT safe to stop mon.vx-rg23-rk65-u43-130
2020-05-21T18:53:33.666+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v34:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 5.4 KiB/s wr, 0
op/s
2020-05-21T18:53:35.670+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v35:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 5.4 KiB/s wr, 0
op/s
2020-05-21T18:53:37.670+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v36:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:53:39.670+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v37:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:53:41.670+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v38:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:53:43.670+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v39:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:53:45.674+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v40:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:53:47.430+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade: It
is NOT safe to stop mon.vx-rg23-rk65-u43-130
2020-05-21T18:53:47.674+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v41:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:53:49.674+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v42:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:53:51.674+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v43:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:53:53.678+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v44:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:53:55.678+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v45:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 5.4 KiB/s wr, 0
op/s
2020-05-21T18:53:57.678+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v46:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 5.4 KiB/s wr, 0
op/s
2020-05-21T18:53:59.678+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v47:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 5.4 KiB/s wr, 0
op/s
2020-05-21T18:54:01.682+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v48:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 5.4 KiB/s wr, 0
op/s
2020-05-21T18:54:02.454+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade:
Target is docker.io/ceph/ceph:v15.2.2 with id
4569944bb86c3f9b5286057a558a3f852156079f759c9734e54d4f64092be9fa
2020-05-21T18:54:02.458+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade:
Checking mgr daemons...
2020-05-21T18:54:02.458+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade:
All mgr daemons are up to date.
2020-05-21T18:54:02.458+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade:
Checking mon daemons...
2020-05-21T18:54:03.614+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade: It
is NOT safe to stop mon.vx-rg23-rk65-u43-130
2020-05-21T18:54:03.682+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v49:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 5.4 KiB/s wr, 0
op/s
2020-05-21T18:54:05.682+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v50:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 5.4 KiB/s wr, 0
op/s
2020-05-21T18:54:07.682+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v51:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:09.686+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v52:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:11.686+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v53:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:13.690+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v54:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:15.691+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v55:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:17.691+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v56:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:18.631+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade: It
is NOT safe to stop mon.vx-rg23-rk65-u43-130
2020-05-21T18:54:19.691+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v57:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:21.691+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v58:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:23.691+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v59:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:25.695+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v60:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 170 B/s wr, 0
op/s
2020-05-21T18:54:27.695+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v61:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 170 B/s wr, 0
op/s
2020-05-21T18:54:29.695+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v62:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 170 B/s wr, 0
op/s
2020-05-21T18:54:31.695+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v63:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 170 B/s wr, 0
op/s
2020-05-21T18:54:33.647+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade: It
is NOT safe to stop mon.vx-rg23-rk65-u43-130
2020-05-21T18:54:33.695+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v64:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 170 B/s wr, 0
op/s
2020-05-21T18:54:35.699+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v65:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail; 170 B/s wr, 0
op/s
2020-05-21T18:54:37.699+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v66:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:39.699+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v67:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:41.699+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v68:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:43.703+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v69:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:45.703+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v70:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:47.703+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v71:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:48.663+0000 7faee6b7d700 0 log_channel(cephadm) log [INF] : Upgrade: It
is NOT safe to stop mon.vx-rg23-rk65-u43-130
2020-05-21T18:54:49.703+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v72:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:51.707+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v73:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:53.707+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v74:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:55.707+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v75:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:57.707+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v76:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
2020-05-21T18:54:59.711+0000 7faed5caa700 0 log_channel(cluster) log [DBG] : pgmap v77:
97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
Sebastian Wagner wrote:
Hi Gencer,
I'm going to need the full mgr log file.
Best,
Sebastian
On 20.05.20 at 15:07, Gencer W. Genç wrote:
> Ah yes,
>
> {
>     "mon": {
>         "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 2
>     },
>     "mgr": {
>         "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable)": 2
>     },
>     "osd": {
>         "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 24
>     },
>     "mds": {
>         "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 2
>     },
>     "overall": {
>         "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 28,
>         "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable)": 2
>     }
> }
>
> How can I fix this?
>
> Gencer.
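One way to read output like the above programmatically: a throwaway sketch (not a cephadm API; the version hashes below are abridged) that counts which daemon types are still behind the upgrade target:

```python
import json

# `ceph versions`-style output, abridged from the message above
versions_json = """{
    "mon": {"ceph version 15.2.1 (9fd2f65f...) octopus (stable)": 2},
    "mgr": {"ceph version 15.2.2 (0c857e98...) octopus (stable)": 2},
    "osd": {"ceph version 15.2.1 (9fd2f65f...) octopus (stable)": 24},
    "mds": {"ceph version 15.2.1 (9fd2f65f...) octopus (stable)": 2}
}"""

def behind_target(versions: dict, target: str) -> dict:
    """Count daemons per type that are not yet running `target`."""
    lagging = {}
    for daemon_type, counts in versions.items():
        if daemon_type == "overall":
            continue  # aggregate of the per-type sections
        n = sum(c for ver, c in counts.items()
                if f"ceph version {target} " not in ver)
        if n:
            lagging[daemon_type] = n
    return lagging

print(behind_target(json.loads(versions_json), "15.2.2"))
# {'mon': 2, 'osd': 24, 'mds': 2}
```

Here only the two mgr daemons have been upgraded, which matches cephadm's behavior of upgrading mgrs first and then stopping at the mon check.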
> On 20.05.2020 16:04:33, Ashley Merrick <singapore(a)amerrick.co.uk> wrote:
> Does:
>
> ceph versions
>
> show any services yet running on 15.2.2?
>
> ---- On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç <gencer(a)gencgiyen.com> wrote ----
>
>
> Hi Ashley,
>
> $ ceph orch upgrade status
> {
>     "target_image": "docker.io/ceph/ceph:v15.2.2",
>     "in_progress": true,
>     "services_complete": [],
>     "message": ""
> }
>
> Thanks,
> Gencer.
>
>
> On 20.05.2020 15:58:34, Ashley Merrick <singapore(a)amerrick.co.uk> wrote:
>
> What does
>
> ceph orch upgrade status
>
> show?
>
>
>
> ---- On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç <gencer(a)gencgiyen.com> wrote ----
>
>
> Hi,
>
> I have 15.2.1 installed on all machines. On the primary machine I executed the
> upgrade command:
>
> $ ceph orch upgrade start --ceph-version 15.2.2
>
>
> When I check ceph -s I see this:
>
> progress:
> Upgrade to docker.io/ceph/ceph:v15.2.2 (30m)
> [=...........................] (remaining: 8h)
>
It says 8 hours, but it has already been running for 3 hours and no upgrade has
been processed. It is stuck at
> this point.
>
> Is there any way to find out why it is stuck?
>
> Thanks,
> Gencer.
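For anyone hitting the same symptom: a few commands that usually reveal why an orchestrator upgrade stalls (a sketch against a live Octopus cluster; output will vary, and nothing below modifies data except the explicit pause/resume):

```shell
# Current target image and whether the orchestrator thinks it is progressing
ceph orch upgrade status

# Watch the cephadm log channel live; the "NOT safe to stop mon.*"
# messages seen in this thread appear here
ceph -W cephadm

# Or look at recent cephadm channel messages after the fact
ceph log last 50 info cephadm

# Pause/resume the upgrade if you need to intervene (e.g. deploy a third mon)
ceph orch upgrade pause
ceph orch upgrade resume
```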
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
--
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg). Geschäftsführer: Felix Imendörffer