---------- Forwarded message ---------
From: Amit Ghadge <amitg.b14(a)gmail.com>
Date: Sun, Mar 22, 2020 at 6:31 AM
Subject: [ceph-users] Maximum limit of lifecycle rule length
To: <ceph-users(a)ceph.io>
Hi All,
We set rgw_lc_max_rules to 10000, but we hit an issue when the lifecycle
XML body is larger than 1 MB: the request returns InvalidRange. The format is below:
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <Status>Enabled</Status><Prefix>test_1/</Prefix><Expiration><Days>1</Days></Expiration>
  </Rule>
  ...
</LifecycleConfiguration>
Any reason why Ceph does not allow an LC rule body larger than 1 MB?
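For scale, here is a quick back-of-the-envelope sketch (my own Python
illustration, not from the cluster; the exact tag layout is assumed from
the fragment above) showing why 10000 rules of this shape cross a 1 MB
body limit:

# Estimate the size of an LC configuration built from rules shaped like
# the fragment above (tag layout assumed for illustration).
RULE = ('<Rule><Status>Enabled</Status><Prefix>test_{n}/</Prefix>'
        '<Expiration><Days>1</Days></Expiration></Rule>')

def lc_body(num_rules):
    rules = ''.join(RULE.format(n=i) for i in range(num_rules))
    return ('<LifecycleConfiguration '
            'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
            + rules + '</LifecycleConfiguration>')

print(len(lc_body(10000)))  # ~1 MB: each rule alone is ~100 bytes of XML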
Thanks,
AmitG
Hi.
For a long time I was under the impression that clones are as efficient
in bluestore as snapshots.
But today I finally decided to test it and ... I discovered it was an
utterly wrong impression :) RBD copies the whole 4 MB object even when only
a small 4 KB block within it is modified in the child image. In my
all-NVMe cluster this leads to 40 (40!!!) random write iops (bs=4k
iodepth=1) in a fresh RBD clone, which is terrible.
Question of the day: is it possible to reimplement RBD clones using
"sparse objects"? As I understand it, support for sparse objects
themselves is already there. So maybe librbd could write only the
modified part to the child image on writes, and read the "holes" from the
parent on reads?
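To make the proposal concrete, a toy sketch (mine, with plain Python dicts
standing in for object maps; not librbd's actual data path) of whole-object
copy-up versus sparse extents:

OBJ = 4 << 20  # 4 MiB RBD object size

def write_copyup(child, parent, off, data):
    # Today: the first write into a cloned object copies the whole
    # 4 MiB parent object up into the child, however small the write.
    base = (off // OBJ) * OBJ
    if base not in child:
        child[base] = bytearray(parent.get(base, bytes(OBJ)))
    child[base][off - base:off - base + len(data)] = data

def write_sparse(child, off, data):
    # Proposed: store only the modified extent in the child object.
    base = (off // OBJ) * OBJ
    child.setdefault(base, {})[off - base] = bytes(data)

def read_sparse(child, parent, off, length):
    # Read the "hole" from the parent, then overlay the child's extents
    # (extents partially outside the window are skipped for brevity).
    base = (off // OBJ) * OBJ
    win = off - base
    buf = bytearray(parent.get(base, bytes(OBJ))[win:win + length])
    for eoff, edata in child.get(base, {}).items():
        if win <= eoff and eoff + len(edata) <= win + length:
            buf[eoff - win:eoff - win + len(edata)] = edata
    return bytes(buf)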
--
Vitaliy Filippov
A few months ago I created a feature request to lower the default Zstd compression level to 1 and make it configurable. This fixes the performance issue that prevents us from using Zstd for compression on our clusters (the current default in Ceph is compression level 5, while upstream Zstd defaults to 3):
https://tracker.ceph.com/issues/43377
There weren't any takers, so I created my own fix that I would like to see merged and backported to Nautilus and Octopus:
https://github.com/ceph/ceph/pull/33790
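If you want to see the effect locally, here's a quick sketch using the
python-zstandard package (my illustration; Ceph's compressor plugin is
C++, this only demonstrates the level 1 vs 5 throughput gap):

import time
import zstandard  # pip install zstandard

data = b'bluestore object payload ' * 200000  # ~5 MB of compressible input
for level in (1, 3, 5):
    cctx = zstandard.ZstdCompressor(level=level)
    t0 = time.perf_counter()
    out = cctx.compress(data)
    dt = (time.perf_counter() - t0) * 1000
    print('level %d: %d bytes in %.1f ms' % (level, len(out), dt))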
Let me know what you guys think and if there are any changes you'd like to see.
Thanks,
Bryan
Mark called out [0] a specific issue [1] in older versions of rocksdb that
may cause problems when running multiple concurrent compaction threads.
The luminous 12.2.13 rocksdb submodule branch rests somewhere between the
following rocksdb tags:
tags/v4.9 (960 commits ahead)
tags/v5.5.1 (97 commits behind)
The concurrency issue was fixed by a commit [2] that landed well after
v5.5.1:
git describe --contains 4420df4b0e15ae88911e960c4fbafbaf8450fcf7
v5.15.10~154
The fix is present in nautilus, but not in luminous.
Am I missing something that mitigates the risk of running concurrent
compactions in Luminous? Should this default be reverted back to 1?
[0] https://github.com/ceph/ceph/pull/29027#issue-297158998
[1] https://github.com/facebook/rocksdb/pull/3926
[2] https://github.com/facebook/rocksdb/commit/4420df4b0e15ae88911e960c4fbafbaf8450fcf7
Thanks,
Dan Hill
I've had this PR sitting around for a while:
https://github.com/ceph/ceph/pull/31885
It's bitrotted a bit, and I'll clean that up soon, but after looking
over cephadm, I wonder if it would make sense to also extend it to do
these actions on machines that are just intended to be kcephfs or krbd
clients.
We typically don't need to do a full-blown install on the clients, so
being able to install just the minimum packages needed and do a minimal
conf/keyring setup would be nice.
Does this make sense? I'll open a tracker if the principal cephadm devs
are OK with it.
Thanks,
--
Jeff Layton <jlayton(a)redhat.com>
Hi,
Recently I wrote some code to benchmark the performance of a Ceph monitor cluster against etcd and compare the results.
I installed a Ceph cluster (only three monitors; no OSDs, no mgr, no rgw, no mds) on my three VirtualBox machines, and installed an etcd cluster on the same three machines. The Ceph version is Luminous 12.2.8 and the etcd version is 3.3.11. Each VirtualBox machine has 10 GB of memory and 8 CPU cores.
The Ceph mon cluster was benchmarked using librados's mon_command API, sending config-key commands to the monitors: `config-key set` to write a key-value pair and `config-key get` to read one. Keys ranged over [0, 1024) and values were random 32-character hex strings. The bench tool is written in C++.
etcd was benchmarked using the etcd client v2 API (go.etcd.io/etcd/client); for reads I set the Quorum flag in the GetOptions to get linearizable consistency. Keys ranged over [0, 1024) and values were random 32-character hex strings. The bench tool is written in Go.
Both bench tools were run on the first VirtualBox machine.
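For reference, the monitor side of the write bench boils down to something
like this (a Python sketch using the rados bindings instead of the actual
C++ tool; the conffile path is an assumption):

import json
import os
import time
import rados  # python3-rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # path assumed
cluster.connect()

def config_key_set(key, val):
    # The same 'config-key set' command the C++ tool sends via mon_command.
    cmd = json.dumps({'prefix': 'config-key set', 'key': key, 'val': val})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    assert ret == 0, outs

lat_ms = []
for i in range(1024):
    val = os.urandom(16).hex()  # random 32-character hex value
    t0 = time.perf_counter()
    config_key_set('bench/%d' % i, val)
    lat_ms.append((time.perf_counter() - t0) * 1000)

total_s = sum(lat_ms) / 1000
print('qps ~ %.0f, avg latency %.1f ms'
      % (len(lat_ms) / total_s, sum(lat_ms) / len(lat_ms)))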
Here is the bench result:
-------------------------------------------------------------------------
|                          |    ceph monitors     |        etcd         |
|                          |  qps  latency        |  qps  latency       |
|                          |       (max#min#avg)  |       (max#min#avg) |
-------------------------------------------------------------------------
| single concurrent read   |  623  407#0#1        |  481  71#1#1        |
| single concurrent write  |  116  454#3#8        |  483  26#1#1        |
-------------------------------------------------------------------------
| 16 concurrent read       | 1110  1220#0#14      | 3322  23#1#4        |
| 16 concurrent write      |  293  440#6#54       | 3280  29#1#4        |
-------------------------------------------------------------------------
| 32 concurrent read       | 1176  1161#0#27      | 4006  58#1#7        |
| 32 concurrent write      |  332  754#8#97       | 4297  32#1#6        |
-------------------------------------------------------------------------
| 64 concurrent read       | 1160  1623#0#55      | 4954  156#2#12      |
| 64 concurrent write      |  336  1738#8#192     | 5013  92#3#12       |
-------------------------------------------------------------------------
As the results show:
1. For the Ceph monitors, reading is about 4 times faster than writing, probably because the monitors use leases so that every monitor can serve reads, while each write has to go through a Paxos round.
2. For the etcd cluster, there is no big difference between read and write performance.
3. Comparing the two, the Ceph monitors are much slower than etcd, especially at higher concurrency.
Best wishes,
Yao Zongyou
Apparently I'm not receiving emails from this list, even though I'm
definitely subscribed. Just wanted to test if this goes through or not
to confirm. Sorry for the noise.
Regards,
Tim
--
Tim Serong
Senior Clustering Engineer
SUSE
tserong(a)suse.com
I'm trying to create a user for testing purposes for RGW using this command:
`sudo radosgw-admin user create --uid="testuser" --display-name="First User"`
But I'm getting an error message (below is an image of the terminal):
[image: Screenshot from 2020-03-18 00-17-44.png]
rgw.conf file
[image: Screenshot from 2020-03-18 00-18-21.png]
ceph.conf file
OS: Ubuntu 18.04.
Please help me ;)