Hi,
the following upmap run reports "no upmaps proposed":
root@ld3955:/home# osdmaptool om --upmap hdd-upmap.sh --upmap-pool=hdb_backup --upmap-deviation 0
osdmaptool: osdmap file 'om'
writing upmap command output to: hdd-upmap.sh
checking for upmap cleanups
upmap, max-count 100, max deviation 0
limiting to pools hdb_backup (11)
no upmaps proposed
root@ld3955:/home# ls -ltr | tail
-rw-r--r-- 1 root root 60758 Nov 5 08:27 osd_tree
-rw-r--r-- 1 root root 21686 Nov 15 14:12 compiled-crushmap-15-11-2019_14-12
-rw-r--r-- 1 root root 29996 Nov 15 14:13 decompiled-crushmap-15-11-2019_14-12
-rw-r--r-- 1 root root 30426 Nov 15 14:55 new-decompiled-crushmap-15-11-2019_14-12
-rw-r--r-- 1 root root 20381 Nov 15 14:55 new-compiled-crushmap-15-11-2019_14-12
-rw-r--r-- 1 root root 12507 Nov 18 14:57 disk-usage
-rw-r--r-- 1 root root 60626 Nov 18 14:58 osd-tree
-rw-r--r-- 1 root root 7380 Nov 22 16:46 ceph.tar.gz
-rw-r--r-- 1 root root 166421 Nov 30 22:53 om
-rw-r--r-- 1 root root 0 Nov 30 23:11 hdd-upmap.sh
In my opinion this shows that the evaluation step is not working,
because my OSDs are not balanced.
I would therefore conclude that a bug report should be opened for this issue.
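For what it's worth, here is a simplified sketch of what "--upmap-deviation 0" asks for: every OSD's PG count should end up within 0 PGs of the mean. The PG counts below are made-up illustrative numbers, not from this cluster, and the real balancer additionally weights by CRUSH weight, so this is only a rough model of the check:

```python
# Hypothetical per-OSD PG counts (illustrative only, not from this cluster).
pg_counts = {0: 120, 1: 95, 2: 110, 3: 101}

# Mean PGs per OSD across the pool.
mean = sum(pg_counts.values()) / len(pg_counts)

# Signed deviation of each OSD from the mean.
deviation = {osd: n - mean for osd, n in pg_counts.items()}

# The largest absolute deviation; with --upmap-deviation 0 the tool
# keeps proposing upmaps until this is (effectively) zero.
max_dev = max(abs(d) for d in deviation.values())
print(max_dev)
```

If the OSDs really are as unbalanced as "ceph osd df" suggests, this maximum deviation should be well above zero, which is why "no upmaps proposed" looks wrong here.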
Thomas
On 19.11.2019 at 10:22, Konstantin Shalygin wrote:
On 11/19/19 4:01 PM, Thomas Schneider wrote:
If Ceph is not capable of managing rebalancing automatically, how can I
proceed to rebalance the data manually?
Use offline upmap for your target pool:
ceph osd getmap -o om
osdmaptool om --upmap upmap.sh --upmap-pool=hdd_backup --upmap-deviation 0
bash upmap.sh
rm -f upmap.sh om
k