Hi Thoralf,
On 26.02.20 15:35, thoralf schulze wrote:
> recently, we've come across a lot of advice to only use replicated
> rados pools as default- (ie: root-) data pools for cephfs¹.
It should be possible to use an EC pool for CephFS data:
https://docs.ceph.com/docs/master/cephfs/createfs/#using-erasure-coded-pool…
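For reference, this is roughly how an EC pool is put to use for CephFS data per that page (pool, file system and directory names below are placeholders, not from the original mail); note that partial overwrites must be enabled on the EC pool first:

```shell
# Create an EC pool and allow partial overwrites (required for CephFS use).
ceph osd pool create cephfs_ec_data erasure
ceph osd pool set cephfs_ec_data allow_ec_overwrites true

# Attach it to an existing file system as an additional data pool ...
ceph fs add_data_pool cephfs cephfs_ec_data

# ... and direct new files in a directory to it via a file layout.
setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs/bulk-data
```

These commands only make sense against a running cluster; the point is that the EC pool ends up as an *additional* data pool, not the default one.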
> unfortunately, we either skipped or blatantly ignored this advice while
> creating our cephfs, so our default data pool is an erasure coded one
> with k=2 and m=4, which _should_ be fine availability-wise. could anyone
> elaborate on the impacts regarding the performance of the whole setup?
Apart from the CephFS issue, I think EC pools should always have k >= m,
as m is the number of parity chunks.
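To illustrate with a back-of-the-envelope check (not Ceph output): the raw-space multiplier of an EC profile is (k+m)/k, so k=2/m=4 costs as much raw space as 3-way replication while every object still touches six OSDs:

```shell
# Raw-space multiplier of an erasure-coded profile: (k+m)/k.
overhead() { awk -v k="$1" -v m="$2" 'BEGIN { printf "%.2f\n", (k+m)/k }'; }

overhead 2 4   # 3.00 - same raw cost as 3x replication
overhead 4 2   # 1.50 - the usual reason to choose EC at all
```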
> if a migration to a replicated pool is recommended: would a simple
>
>     ceph osd pool set $default_data crush_rule $something_replicated
>
> suffice, or would you recommend a more elaborate approach, something
> along the lines of taking the cephfs down, copying the contents of
> default_pool to default_new, renaming default_new to default_pool, and
> taking the cephfs up again?
You can only migrate by rsync'ing the data to a completely new CephFS.
An in-place migration is not possible, and a pool cannot be converted
from EC to replicated or vice versa.
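A rough outline of such a migration might look as follows (all pool, file system and mount-point names are placeholders; running a second file system needs the enable_multiple flag, and the exact mount options depend on your client version):

```shell
# Create replicated metadata and data pools for the new file system.
ceph osd pool create cephfs_new_meta 32
ceph osd pool create cephfs_new_data 128

# Allow a second file system in the cluster and create it.
ceph fs flag set enable_multiple true
ceph fs new cephfs_new cephfs_new_meta cephfs_new_data

# Mount both file systems and copy the data, preserving attributes.
mount -t ceph :/ /mnt/old -o mds_namespace=cephfs_old
mount -t ceph :/ /mnt/new -o mds_namespace=cephfs_new
rsync -aHAX /mnt/old/ /mnt/new/

# Only after verifying the copy: remove the old file system.
ceph fs rm cephfs_old --yes-i-really-mean-it
```

Expect the rsync step to take a long time on a large file system, and plan for a final incremental rsync during a downtime window so that nothing written in the meantime is lost.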
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin