I thought of that, but it doesn't make much sense. AFAICT min_size should
block IO when I lose 3 OSDs, but it shouldn't affect the amount of stored
data. Am I missing something?
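For reference, a minimal sketch of the math as I understand it (this assumes
the default EC min_size of k+1; a rough illustration, not our exact settings):

    # 6+3 erasure coding: each object is split into k data + m coding shards
    k, m = 6, 3
    min_size = k + 1                     # assumed default min_size for EC pools: 7
    osds_lost = 3
    shards_left = (k + m) - osds_lost    # 9 - 3 = 6
    io_blocked = shards_left < min_size  # 6 < 7 -> True: IO blocks
    raw_overhead = (k + m) / k           # on-disk usage stays 1.5x regardless of min_size
    print(io_blocked, raw_overhead)      # True 1.5

So losing 3 OSDs should stop IO, but every object still stores exactly k+m
shards either way.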
On Tue, Nov 26, 2019 at 6:04 AM Konstantin Shalygin <k0ste(a)k0ste.ru> wrote:
On 11/25/19 6:05 PM, Erdem Agaoglu wrote:
What I can't find is the 138,509 G difference between
ceph_cluster_total_used_bytes and ceph_pool_stored_raw. This is not static,
BTW; checking the same data historically shows we consistently have about
1.12x of what we expect. This turns our expected 1.5x EC overhead into a
1.68x overhead in reality. Does anyone have any idea why this is the case?
Maybe it's min_size related? Because you are right, 6+3 is 1.50, but 6+3
(+1) gives (6+3+1)/6 ~ 1.67, your calculated overhead.
k
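Putting the numbers side by side (a rough sketch; the 1.12x ratio is the
measured value from the thread above, the rest is plain 6+3 arithmetic):

    k, m = 6, 3
    expected_overhead = (k + m) / k      # 1.50 for a 6+3 EC pool
    observed_ratio = 1.12                # measured: used / expected raw, from above
    effective_overhead = expected_overhead * observed_ratio  # 1.50 * 1.12 = 1.68
    hypothesized = (k + m + 1) / k       # "6+3 (+1)": 10/6 ~ 1.67
    print(round(effective_overhead, 2), round(hypothesized, 2))  # 1.68 1.67

The closeness of 1.68 and 1.67 is what the (+1) hypothesis rests on.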
--
erdem agaoglu