Hi,
I would upgrade, configure the balancer correctly, then wait a bit for
it to smooth things out.
Afterwards you can reweight back to 1.0.
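
For example, something like this (osd.10 and osd.11 below are
placeholders for whichever OSDs you reweighted):

    # after the upgrade, run the balancer in upmap mode
    ceph balancer mode upmap
    ceph balancer on

    # once the data distribution looks even, undo the manual reweights
    ceph osd reweight 10 1.0
    ceph osd reweight 11 1.0

The REWEIGHT column of "ceph osd df tree" shows which OSDs are still
below 1.0.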
-- dan
On Mon, Mar 16, 2020 at 4:19 PM Thomas Schneider <74cmonty(a)gmail.com> wrote:
>
> Hi Dan,
>
> indeed I'm trying to balance the PGs.
>
> In order to keep the Ceph cluster operational I used OSD reweight, meaning
> some specific OSDs are now at reweight 0.8 and 0.9 respectively.
>
> Question:
> Can I upgrade to Ceph 14.2.8 w/o resetting the weights to 1.0?
> Or should I clean up the reweights first, then upgrade to 14.2.8 and
> enable the balancer last?
>
>
> Regards
> Thomas
>
> Am 16.03.2020 um 16:10 schrieb Dan van der Ster:
> > Hi Thomas,
> > I lost track of your issue. Are you just trying to balance the PGs?
> > 14.2.8 has big improvements -- check the release notes / blog post
> > about setting upmap_max_deviation down to 2 or 1.
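> >
> > For example (I believe the option is spelled upmap_max_deviation in
> > 14.2.8, but double-check against the release notes):
> >
> >     ceph config set mgr mgr/balancer/upmap_max_deviation 1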
> > -- Dan
> >
> > On Mon, Mar 16, 2020 at 4:00 PM Thomas Schneider <74cmonty(a)gmail.com> wrote:
> >> Hi Dan,
> >>
> >> I have opened this bug report for the balancer not working as expected:
> >> https://tracker.ceph.com/issues/43586
> >>
> >> Then I thought it could make sense to balance the cluster manually by
> >> means of moving PGs from a heavily loaded OSD to another.
> >>
> >> I found your slides "Luminous: pg upmap (dev)"
> >> <https://indico.cern.ch/event/669931/contributions/2742401/attachments/1533434/2401109/upmap.pdf>,
> >> but I didn't fully understand them.
> >>
> >> Could you please advise how to move PGs manually?
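> >>
> >> From the slides, the relevant command seems to be roughly the
> >> following (the pg id and OSD ids below are made-up examples):
> >>
> >>     ceph osd set-require-min-compat-client luminous
> >>     ceph osd pg-upmap-items 1.7 231 179
> >>
> >> i.e. remap PG 1.7 from osd.231 to osd.179 -- is that correct?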
> >>
> >> Regards
> >> Thomas
> >>
> >> Am 23.01.2020 um 16:05 schrieb Dan van der Ster:
> >>> Hi Frank,
> >>>
> >>> No, it is basically balancing the num_pgs per TB (per osd).
> >>>
> >>> Cheers, Dan
> >>>
> >>>
> >>> On Thu, Jan 23, 2020 at 3:53 PM Frank R <frankaritchie(a)gmail.com> wrote:
> >>>
> >>> Hi all,
> >>>
> >>> Does using the Upmap balancer require that all OSDs be the same
> >>> size (per device class)?
> >>>
> >>> thx
> >>> Frank
> >>> _______________________________________________
> >>> ceph-users mailing list -- ceph-users(a)ceph.io
> >>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
> >>>
> >>>
> >>
>
>