On 05.04.21 at 21:27, Peter Woodman wrote:
yeah, but you don't want to have those reference objects in an EC pool,
that's iiuc been explicitly disallowed in newer versions, as it's a
performance suck. so leaving them in the replicated pool is good :)
I know, but that's quite workload-dependent. We actually fare quite well with
our existing EC-only data pool on HDDs, keeping only the metadata on a small
replicated NVMe pool. If most of your workloads are write-once, read-many,
and of those, most are streaming reads, this may be exactly down your alley ;-).
IIUC, newer versions disallow this only to clarify that it might not be what
you want performance-wise, but they still allow you to override this if you
know it is what you want.
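If I read the docs correctly, the override in question is the `--force` flag
on `fs new`; a minimal sketch, assuming that flag and with placeholder pool
and FS names:

```shell
# Placeholder names throughout; assumed commands based on my reading of the docs.
# An EC pool must allow overwrites before CephFS can use it for file data:
ceph osd pool set ec_data allow_ec_overwrites true

# Newer releases refuse an EC pool as the default data pool unless forced:
ceph fs new myfs meta_pool ec_data --force
```

(These are cluster admin commands, so treat them as a sketch to check against
your Ceph version's docs, not something to paste blindly.)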
So in our case, an EC-to-EC migration would indeed be what we'd look at (once
we add more and more servers). This could either be solved by "EC profile
migration" (if that were possible) or, more generally, by adding the
possibility to migrate the primary data pool of an existing FS.
But as I understand it, none of that is possible just yet, and the only "big
hammer" would be to create a new FS and copy things over.
CephFS mirroring (of snapshots) in Pacific may make this easier by reducing
any actual downtime for users, but any other solution would be much appreciated.
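For reference, the Pacific snapshot-mirroring setup looks roughly like the
following, if I'm reading the docs right; FS, peer, and path names are all
placeholders:

```shell
# Hedged sketch of CephFS snapshot mirroring in Pacific; verify against the
# docs for your release. All names below are placeholders.
ceph mgr module enable mirroring
ceph fs snapshot mirror enable myfs

# Register the remote cluster as a peer, then pick directories to mirror:
ceph fs snapshot mirror peer_add myfs client.mirror_remote@remote myfs
ceph fs snapshot mirror add myfs /some/path
```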
Cheers,
Oliver
On Mon, Apr 5, 2021 at 2:55 PM Oliver Freyermuth <freyermuth@physik.uni-bonn.de> wrote:
Hi,
that really looks like a useful tool, thanks for mentioning this on the list :-).
However, I'd also love to learn about a different way, as the documentation
states:

"You may notice that object counts in your primary data pool (the one passed
to fs new) continue to increase, even if files are being created in the pool
you added."

https://docs.ceph.com/en/latest/cephfs/file-layouts/
So I think while this will migrate the bulk of the data, it will never be a
full migration, given the way CephFS seems to be implemented.
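You can see those leftover objects directly, if I'm not mistaken; a small
sketch with a placeholder pool name:

```shell
# The primary data pool keeps one small "backtrace" object per file even when
# the file data lives in another pool; list a few of them (pool name is a
# placeholder):
rados -p cephfs_data ls | head

# Pool-level stats also show the object count of the primary pool growing:
ceph df
```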
Especially for growing EC clusters, it would be helpful to be able to migrate to a
different, more space-efficient EC profile as the number of hosts increases.
We're not at that point yet, but one day we surely will be. Right now, the
only "complete migration" approach seems to be to create a new FS and migrate
things over...
Am I right?
Cheers,
Oliver
On 05.04.21 at 19:22, Peter Woodman wrote:
hi, i made a tool to do this. it’s rough around the edges and has some known
bugs with symlinks as parent paths, but it checks all file layouts to see if
they match the directory layout they’re in, and if not, makes them so by
copying and replacing. so to ‘migrate’, set your directory layouts and then
run this tool to move everything to the right places.
i’m unaware of another way of doing this, so if there is one, someone tell me!
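For anyone following along, "set your directory layouts" presumably means
something like the following (mountpoint, path, and pool name are
placeholders):

```shell
# Point a directory's layout at the target data pool via the layout vxattr;
# names are placeholders:
setfattr -n ceph.dir.layout.pool -v ec_data /mnt/cephfs/some/dir

# Note: only files created afterwards pick up the new layout; existing files
# keep their old one, which is the gap such a tool has to fill.
getfattr -n ceph.dir.layout /mnt/cephfs/some/dir
```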
https://git.sr.ht/~pjjw/cephfs-layout-tool
On Sun, Apr 4, 2021 at 5:43 PM <ceph@fionera.de> wrote:
Hello everyone,
I currently have a CephFS running with about 60 TB of data. I created it with
a replicated pool as the default pool and an erasure-coded one as an
additional data pool, as described in the docs. Now I want to migrate the
data from the replicated pool to the new erasure-coded one. I couldn't find
any docs on this and was wondering if it's even possible currently.
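The setup described above roughly follows the docs' recipe; as a hedged
sketch with placeholder names:

```shell
# Placeholder pool/FS names; check the exact syntax against your release's docs.
ceph osd pool create cephfs_meta
ceph osd pool create cephfs_data              # replicated default data pool
ceph osd pool create cephfs_ec erasure        # additional EC data pool
ceph osd pool set cephfs_ec allow_ec_overwrites true

ceph fs new cephfs cephfs_meta cephfs_data
ceph fs add_data_pool cephfs cephfs_ec
```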
Thank you very much,
Fionera
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io