I’m not exactly sure what I did, but it’s going through now. I ran

ceph orch upgrade check --ceph-version 16.2.7

(my current version), then did a pause and resume. Now daemons are upgrading to 16.2.11.
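For anyone hitting the same stall, the sequence described above, run from inside the cephadm shell, would look roughly like this. This is a sketch, not a guaranteed fix; the version numbers are the ones from this thread, so adjust them for your cluster:

```shell
# Sketch of the sequence from this thread (run inside "cephadm shell").
# Requires a live cluster; version numbers here are from this thread.
ceph orch upgrade check --ceph-version 16.2.7   # sanity-check hosts against a target version
ceph orch upgrade pause                          # pause the stuck upgrade
ceph orch upgrade resume                         # resume it; this is what kicked things loose here
ceph orch upgrade status                         # confirm in_progress/progress now move
```

The pause/resume pair appears to have restarted the orchestrator's upgrade loop; the check step mostly confirms hosts can pull and run the target image.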
-jeremy
On Monday, Feb 27, 2023 at 11:07 PM, Me <jeremy@skidrow.la> wrote:
[ceph: root@cn01 /]# ceph -W cephadm
  cluster:
    id:     bfa2ad58-c049-11eb-9098-3c8cf8ed728d
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum cn05,cn02,cn03,cn04,cn01 (age 111m)
    mgr: cn06.rpkpwg(active, since 7h), standbys: cn02.arszct, cn03.elmwhu
    mds: 2/2 daemons up, 2 standby
    osd: 35 osds: 35 up (since 111m), 35 in (since 5h)

  data:
    volumes: 2/2 healthy
    pools:   8 pools, 545 pgs
    objects: 8.13M objects, 7.7 TiB
    usage:   31 TiB used, 95 TiB / 126 TiB avail
    pgs:     545 active+clean

  io:
    client: 4.1 MiB/s rd, 885 KiB/s wr, 128 op/s rd, 14 op/s wr

  progress:
    Upgrade to quay.io/ceph/ceph:v16.2.11 (0s)
      [............................]
Cluster is healthy.
Is there an easy way to see if anything was upgraded through the orchestrator?
-jeremy
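One way to answer the "was anything upgraded" question is with the standard cephadm/ceph status commands. A hedged sketch; these are stock commands rather than anything from this thread, and they need a running cluster:

```shell
# Standard commands (not from the thread) for checking what a partial
# upgrade has touched. Requires a live cephadm-managed cluster.
ceph versions                # running version, grouped by daemon type (mon/mgr/osd/mds)
ceph orch ps                 # per-daemon view, including the container image in use
ceph orch upgrade status     # "services_complete" lists what the orchestrator finished
ceph log last cephadm        # recent cephadm log entries, including upgrade actions
```

If `ceph versions` shows a mix of 16.2.7 and 16.2.11 daemons, the earlier run got partway through; once everything reports 16.2.11, the upgrade is done.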
> On Monday, Feb 27, 2023 at 10:58 PM, Curt <lightspd@gmail.com> wrote:
> Did any of your cluster get a partial upgrade? What about ceph -W cephadm: does that
> return anything or just hang? Also, what about ceph health detail? You can always try
> ceph orch upgrade pause and then ceph orch upgrade resume; that might kick something
> loose, so to speak.
> On Tue, Feb 28, 2023, 10:39 Jeremy Hansen <jeremy@skidrow.la> wrote:
> > {
> > "target_image": "quay.io/ceph/ceph:v16.2.11",
> > "in_progress": true,
> > "services_complete": [],
> > "progress": "",
> > "message": ""
> > }
> >
> > Hasn’t changed in the past two hours.
> >
> > -jeremy
> >
> >
> >
> > > On Monday, Feb 27, 2023 at 10:22 PM, Curt <lightspd@gmail.com> wrote:
> > > What does ceph orch upgrade status return?
> > > On Tue, Feb 28, 2023, 10:16 Jeremy Hansen <jeremy@skidrow.la> wrote:
> > > > I’m trying to upgrade from 16.2.7 to 16.2.11. Reading the documentation, I cut and
> > > > pasted the orchestrator command to begin the upgrade, but I mistakenly pasted
> > > > directly from the docs, and it initiated an “upgrade” to 16.2.6. I stopped the
> > > > upgrade per the docs and reissued the command specifying 16.2.11, but now I see no
> > > > progress in ceph -s. The cluster is healthy, but it feels like the upgrade process
> > > > is just paused for some reason.
> > > >
> > > > Thanks!
> > > > -jeremy
> > > >
> > > >
> > > >
> > > > _______________________________________________
> > > > ceph-users mailing list -- ceph-users@ceph.io
> > > > To unsubscribe send an email to ceph-users-leave@ceph.io