Personally, when adding drives like this, I set the noin flag (ceph osd set noin) and the
norebalance flag (ceph osd set norebalance). That keeps the cluster from starting data
moves until all the new drives are in place. Like you, we run smaller clusters; our
largest cluster only has 18 OSDs.
Don't forget to unset these flags afterward (ceph osd unset noin, ceph osd unset norebalance).
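For reference, the whole sequence looks roughly like this. It's a sketch, not a recipe:
the device path is a placeholder, and the ceph-volume step is just one way to create
OSDs -- use whatever tooling you normally deploy with:

```shell
# Pause automatic activation and rebalancing before touching hardware
ceph osd set noin
ceph osd set norebalance

# Add the new drive on each node; /dev/sdX is a placeholder for your device
# (ceph-volume shown as one example of OSD creation)
ceph-volume lvm create --data /dev/sdX

# Once every new OSD exists, let them all take data in a single rebalance
ceph osd unset noin
ceph osd unset norebalance

# Watch recovery progress
ceph -s
```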
There are also settings you can tune to control whether client traffic or recovery
traffic gets precedence while data is moving.
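For example (assuming a reasonably recent release -- on Quincy and later the mClock
scheduler profiles are the main knob, while the classic osd_max_backfills and
osd_recovery_max_active options apply to older releases):

```shell
# mClock scheduler (Quincy+): favor client I/O while the rebalance runs...
ceph config set osd osd_mclock_profile high_client_ops
# ...or favor recovery so the rebalance finishes faster
ceph config set osd osd_mclock_profile high_recovery_ops

# Classic throttles on older releases: limit concurrent backfill/recovery per OSD
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
```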
Thank you,
Dominic L. Hilsbos, MBA
Vice President - Information Technology
Perform Air International Inc.
DHilsbos(a)PerformAir.com
www.PerformAir.com
-----Original Message-----
From: Kai Börnert [mailto:kai.boernert@posteo.de]
Sent: Tuesday, June 15, 2021 8:20 AM
To: ceph-users(a)ceph.io
Subject: [ceph-users] Re: Strategy for add new osds
Hi,
as far as I understand it, you get no real benefit from doing them one by one: each
OSD you add can cause a lot of data to move to different OSDs, even though you just
rebalanced.
The algorithm that determines PG placement does not take the current or historic
placement into account, so any change can cause any amount of data to migrate.
Greetings,
Kai
On 6/15/21 5:06 PM, Jorge JP wrote:
Hello,
I have a ceph cluster with 5 nodes (1 HDD each). I want to add 5 more drives (HDDs),
one per node, to expand my cluster. What is the best strategy for this?
Should I add one drive, wait for the data to rebalance onto the new OSD, and then add
the next? Or should I add all 5 drives without waiting, and let Ceph rebalance the
data onto all the new OSDs at once?
Thank you.
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io