Apologies for not including the blog post explaining pool migration earlier; here it is: https://ceph.com/geen-categorie/ceph-pool-migration/

On 7/27/19 11:51 PM, Valentin Bajrami wrote:

Hello There,

Recently, I've been reading about how to migrate an existing EC pool which has the following profile:

crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=2
m=2
plugin=jerasure
technique=reed_sol_van
w=8

To a new EC pool with the following profile:

crush-device-class=
crush-failure-domain=rack
crush-root=default
jerasure-per-chunk-alignment=false
k=2
m=2
plugin=jerasure
technique=reed_sol_van
w=8
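
If I understand the docs correctly, that new profile would be created with something like the following (the profile name "ec22-rack" is just a placeholder I made up; the values match the target profile above):

ceph osd erasure-code-profile set ec22-rack \
    k=2 m=2 \
    plugin=jerasure technique=reed_sol_van \
    crush-failure-domain=rack crush-root=default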

So I've been reading the blog post linked above, but I'm not sure whether it will work with an EC pool. The section titled "The simple way" shows the steps, but there is a small comment beneath it which says, and I quote:

"But it does not work in all cases. For example with EC pools : “error copying pool testpool => newpool: (95) Operation not supported”."

My setup is as follows:

DC1 = 4 servers each with 1 OSD running

DC2 = 4 servers each with 1 OSD running

Since my current EC profile has crush-failure-domain set to 'host' (which seems to be the default), I want to change it to 'rack' so I can ensure that no two chunks are stored in the same rack.

The idea here is to place the 4 OSDs in DC1 under rack1 and the 4 OSDs in DC2 under rack2. I am aware I'd need to modify the CRUSH ruleset to achieve this; from this starting point, what's the best practice? My rough idea is sketched below.
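
What I have in mind, roughly, is something like this (rack and host names are made up; our actual hostnames differ):

# create the two rack buckets and hang them under the default root
ceph osd crush add-bucket rack1 rack
ceph osd crush add-bucket rack2 rack
ceph osd crush move rack1 root=default
ceph osd crush move rack2 root=default

# move each DC1 host under rack1 and each DC2 host under rack2
ceph osd crush move dc1-host1 rack=rack1
ceph osd crush move dc1-host2 rack=rack1
...
ceph osd crush move dc2-host1 rack=rack2
ceph osd crush move dc2-host2 rack=rack2

# create a rule from the new profile, so the failure domain becomes 'rack'
ceph osd crush rule create-erasure ec22-rack-rule ec22-rack

Whether the existing pool can then simply be switched to the new rule, or whether it still has to be migrated to a new pool, is exactly what I'm unsure about.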

Please see the diagram (thanks to lordcirth_ on the OFTC IRC network, #ceph) attached to this email.

Please let me know if you need additional information from my side.

-- 
Met vriendelijke groeten / Kind regards,

Valentin Bajrami
Target Holding 

_______________________________________________
Dev mailing list -- dev@ceph.io
To unsubscribe send an email to dev-leave@ceph.io
-- 
Met vriendelijke groeten / Kind regards,

Valentin Bajrami
Target Holding