Hi Konstantin,
I hope you, or anyone else, are still following this old thread.
Can this EC data pool be configured per pool, rather than per client? If we
follow that approach, we may see that the cinder client accesses both the
vms and volumes pools, with read and write permission. How can we handle this?
If we configure different clients for nova (vms) and cinder (volumes), I
think there will be a problem when there is cross-pool access, especially on
write. Say the nova client creates a volume on instance creation, for
booting from that volume. Any thoughts?
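To make the concern concrete, here is a sketch of the caps that would be
needed if both clients must reach a shared EC data pool (the client and pool
names are assumptions carried over from this thread, not a tested setup):

# Both OpenStack clients get RBD caps on the shared EC data pool, so a
# volume that nova creates for boot-from-volume can still write its data
# objects there.
ceph auth caps client.nova mon 'profile rbd' \
    osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=erasure_rbd_data'
ceph auth caps client.cinder mon 'profile rbd' \
    osd 'profile rbd pool=volumes, profile rbd pool=erasure_rbd_data'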
Best regards,
Date: Wed, 11 Jul 2018 11:16:27 +0700
From: Konstantin Shalygin <k0ste(a)k0ste.ru>
To: ceph-users(a)lists.ceph.com
Subject: Re: [ceph-users] Erasure coding RBD pool for OpenStack
Glance, Nova and Cinder
Message-ID: <069ac368-22b0-3d18-937b-70ce39287cb1(a)k0ste.ru>
Content-Type: text/plain; charset=utf-8; format=flowed
So if you want, two more questions for you:
- How do you handle your ceph.conf configuration (default data pool by
user) and its distribution? Manually, config management, openstack-ansible... ?
- Did you make comparisons or benchmarks between replicated pools and EC
pools on the same hardware / drives? I read that small writes are not
very performant with EC.
ceph.conf with the default data pool is only needed by Cinder at image
creation time; after that, a Luminous+ RBD client will find the
"data-pool" feature on the image and direct its data I/O to that pool.
# rbd info erasure_rbd_meta/volume-09ed44bf-7d16-453a-b712-a636a0d3d812   <----- meta pool!
rbd image 'volume-09ed44bf-7d16-453a-b712-a636a0d3d812':
        size 1500 GB in 384000 objects
        order 22 (4096 kB objects)
        data_pool: erasure_rbd_data             <----- our data pool
        block_name_prefix: rbd_data.6.a2720a1ec432bf
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool   <----- "data-pool" feature
        flags:
        create_timestamp: Sat Jan 27 20:24:04 2018
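For reference, a sketch of how such an image can be created by hand; RBD on
an EC pool also requires overwrites to be enabled (pool names as above,
"test-volume" is a hypothetical image name):

# One-time setup: RBD needs partial overwrites on the EC pool (Luminous+)
ceph osd pool set erasure_rbd_data allow_ec_overwrites true
# Metadata goes to the replicated pool, data objects to the EC pool
rbd create --size 1500G --data-pool erasure_rbd_data erasure_rbd_meta/test-volume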
k