Sage responded to a thread yesterday about how to change crush device
classes without rebalancing (crushtool reclassify):
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/675QZ2JXXX4…
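If it helps, the workflow from the docs is roughly the following (the file
names are just placeholders here; the --compare step is there to confirm the
adjusted map would not move any data before you inject it):

ceph osd getcrushmap -o original
crushtool -i original --reclassify \
        --set-subtree-class default hdd \
        --reclassify-root default hdd \
        -o adjusted
crushtool -i original --compare adjusted
ceph osd setcrushmap -i adjusted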
Quote from Marc Roos <M.Roos(a)f1-outsourcing.eu>:
Some time ago on Luminous I also had to change the crush rules on an
all-HDD cluster to class hdd (to prepare for adding SSDs and SSD pools),
and PGs started migrating even though everything was already on HDDs.
Looks like this is still not fixed?
>
> -----Original Message-----
> From: Raymond Berg Hansen [mailto:raymondbh@gmail.com]
> Sent: Tuesday, October 1, 2019 14:32
> To: ceph-users(a)ceph.io
> Subject: [ceph-users] Re: Nautilus pg autoscale, data lost?
>
> You are absolutely right, I had made a crush rule for device class hdd,
> but I did not connect it with this problem. Now that I have put the pools
> back on the default crush rule, things seem to be fixing themselves.
> Have I done something wrong with this crush rule?
>
> # rules
> rule replicated_rule {
>         id 0
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step chooseleaf firstn 0 type host
>         step emit
> }
> rule replicated-hdd {
>         id 1
>         type replicated
>         min_size 1
>         max_size 10
>         step take default class hdd
>         step chooseleaf firstn 0 type datacenter
>         step emit
> }
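One thing that stands out in the replicated-hdd rule: it chooses leaves by
datacenter instead of host. Unless the crush map actually contains datacenter
buckets, that rule will not be able to place the replicas. Assuming the
intent is the same placement as the default rule but restricted to the hdd
class, a sketch of the rule would look like this:

rule replicated-hdd {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default class hdd
        step chooseleaf firstn 0 type host
        step emit
}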