Hi Kenneth,
I did a migration from a 2-pool to a 3-pool layout recently. The only way to do this within
Ceph at the moment seems to be to create a second CephFS with the new layout, rsync
everything over and then delete the old CephFS. I had enough spare capacity and room for
the extra PGs to do that.
Note that this requires downtime during the final rsync pass.
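For reference, a rough sketch of the steps I mean, assuming a replicated target layout and an existing EC pool to re-attach; all pool and file system names (cephfs_old, cephfs_new, cephfs_new_meta, cephfs_new_data, cephfs_ec_data) and the PG counts are placeholders for your own setup, and mount points are examples:

```shell
# Create a replicated metadata pool and a replicated default data pool
# (PG counts are placeholders -- size them for your cluster)
ceph osd pool create cephfs_new_meta 64
ceph osd pool create cephfs_new_data 256

# Allow a second file system and create it with the replicated default pool
ceph fs flag set enable_multiple true --yes-i-really-mean-it
ceph fs new cephfs_new cephfs_new_meta cephfs_new_data

# Attach the EC pool as an additional data pool and direct a directory
# to it via a file layout (overwrites must be enabled on the EC pool)
ceph osd pool set cephfs_ec_data allow_ec_overwrites true
ceph fs add_data_pool cephfs_new cephfs_ec_data
setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs_new/data

# Copy while the old fs is still live, then stop clients and do the
# final pass (this is the downtime window)
rsync -aHAX /mnt/cephfs_old/ /mnt/cephfs_new/data/
rsync -aHAX --delete /mnt/cephfs_old/ /mnt/cephfs_new/data/

# Remove the old file system once everything is verified
ceph fs fail cephfs_old
ceph fs rm cephfs_old --yes-i-really-mean-it
```

This is only an outline of the procedure, not something to paste verbatim; it needs a live cluster and careful verification between steps.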
Having done the migration, I don't really see any difference in performance or FS
stability. It looks like the extra replicated default data pool does not have a large
impact, and probably matters only in rare situations; I usually see no activity on it at
all. Since the 2-pool layout is not losing support, maybe migrating is not really that
important?
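In case anyone wants to check this on their own cluster, a quick sketch of how I look at activity on the default data pool; the pool name cephfs_data and the mount path are placeholders:

```shell
# Per-pool client I/O rates; the replicated default data pool should
# show little to no traffic if all file data lives in the EC pool
ceph osd pool stats cephfs_data

# Object counts and usage per pool
rados df

# Confirm which pool a directory's files actually land in
getfattr -n ceph.dir.layout.pool /mnt/cephfs/some_dir
```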
I haven't heard in which situations it may make a decisive difference, and the
recommendation is formulated rather weakly. Maybe there are future changes for which this
becomes important, but that would need to be answered by a developer.
If you can't afford the downtime and are happy with how it works, I wouldn't bother too
much. It would be nice to hear a bit more from someone with technical insight at the code
level.
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Kenneth Waegeman <kenneth.waegeman(a)ugent.be>
Sent: 07 May 2020 10:48:14
To: ceph-users(a)ceph.io
Subject: [ceph-users] Re: cephfs change/migrate default data pool
Someone an idea /experience if this is possible ? :)
On 29/04/2020 14:56, Kenneth Waegeman wrote:
Hi all,
I read in some release notes it is recommended to have your default
data pool replicated and use erasure coded pools as additional pools
through layouts. We have still a cephfs with +-1PB usage with a EC
default pool. Is there a way to change the default pool or some other
kind of migration without having to recreate the FS?
Thanks!
Kenneth
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io