Marc Roos wrote:
> > In the past I've seen some good results (benchmarks &
> > latencies) for MySQL and PostgreSQL. However, I've always used
> > a 4MB object size. Maybe I can get much better
> > performance with a smaller object size. Haven't actually tried.
>
> Did you tune mysql / postgres for this setup? Did you have a default
> ceph rbd setup?
Yes, I had to tune some settings on PostgreSQL, especially:
synchronous_commit = off
I have default RBD settings.
Do you have any recommendation?
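A minimal sketch of how the object size can be set per image, in case a smaller size is worth testing (the pool and image names below are placeholders, not from this setup):

rbd create rbd/pgtest --size 100G --object-size 1M
rbd info rbd/pgtest | grep order    # confirm the resulting object size / order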
Thanks,
Gencer.
Hi, Ceph brain trust:
I'm still trying to wrap my head around some capacity planning for Ceph,
and I can't find a definitive answer to this question in the docs (at least
one that penetrates my mental haze)...
Does the OSD host count affect the total available pool size? My cluster
consists of three 12-bay Dell PowerEdge machines running reflashed PERCs to
make each SAS drive individually addressable. Each node is running 10 OSDs.
Is Ceph limiting the max available pool size because all of my OSDs are
being hosted on just three nodes? If I had 30 OSDs running across ten nodes
instead, a node failure would result in just three OSDs dropping out
instead of ten.
Is there any rationale to this thinking, or am I trying to manufacture a
solution to a problem I still don't understand?
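As a side note, a sketch of commands that show how the failure domain and per-host capacity feed into the numbers (the rule name replicated_rule is the default and may differ on this cluster):

ceph osd crush rule dump replicated_rule   # chooseleaf type "host" means replicas are spread across hosts
ceph osd df tree                           # per-host and per-OSD size and utilisation
ceph df                                    # MAX AVAIL per pool, already accounting for the replication factor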
Thanks,
Dallas
Hi
When I use haproxy with keep-alive mode to the RGWs, haproxy gives many
responses like this!
Is there any problem with keep-alive mode in RGW?
Using Nautilus 14.2.9 with the Beast frontend.
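For context, a rough sketch of the kind of haproxy backend configuration being described (addresses, ports, and timeouts are assumptions for illustration only):

defaults
    mode http
    option http-keep-alive
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    timeout http-keep-alive 10s

backend rgw
    balance roundrobin
    server rgw1 192.168.1.11:8080 check
    server rgw2 192.168.1.12:8080 check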
This problem may also be related to the below unsolved issue, which
specifically mentions 'unfound' objects. Sadly, there is probably
nothing in the report which will help with your troubleshooting.
https://tracker.ceph.com/issues/44286
C.
I enabled a certificate on my radosgw, but I think I am running into the
problem that the S3 clients are accessing the buckets as
bucket.rgw.domain.com, which fails against my cert for rgw.domain.com.
Is there any way to configure it so that only rgw.domain.com is used?
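One client-side workaround is to force path-style requests so that everything goes to rgw.domain.com directly; a sketch for s3cmd (~/.s3cfg), with the values assumed for illustration:

host_base = rgw.domain.com
host_bucket = rgw.domain.com    # no %(bucket)s placeholder, so requests stay path-style
use_https = True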
Good morning all,
I don't know if this has happened to anyone else, but recently a user ran out of their quota for the number of objects in a bucket. The only sign I could see in the logs (tail -f /var/log/ceph/ceph-rgw-dao-wkr-01.rgw0.log)
was the following:
2020-10-01T03:07:02.098+0000 7fd872bf6700 1 ====== req done req=0x7fd8cb13a8a0 op status=-2026 http_status=413 latency=17.304220706s ======
When I removed the user limits, the error went from 413 to 422 in the logs:
radosgw-admin quota set --quota-scope=bucket --uid=smithj --max-objects=-1
2020-10-15T11:58:41.661+0000 7f0c79b51700 1 ====== req done req=0x7f0cb7f5f8a0 op status=-2018 http_status=422 latency=0.001000013s ======
I have tried a lot of radosgw-admin commands, but so far the error persists. Do you have any ideas?
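For what it's worth, a sketch of commands for inspecting and clearing the quota state (reusing the uid from the command above; a debugging aid, not a known fix for the 422):

radosgw-admin user info --uid=smithj                  # shows the user_quota and bucket_quota blocks
radosgw-admin user stats --uid=smithj --sync-stats    # refresh and display the usage counters
radosgw-admin quota disable --quota-scope=bucket --uid=smithj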
Thx !
Sylvain
Hi,
We are trying to introduce SSD/NVMe OSDs, and to prevent data from moving off the current (HDD-based) OSDs while also having erasure-coded pools, we could not simply change the erasure coding profile or create a new one and apply it to the pool.
Reading this list and other forum posts, it was suggested to use crushtool --reclassify to insert device classes into the current CRUSH rules and load that CRUSH map manually.
Not having edited the CRUSH map in this or any other fashion before, I would very much appreciate it if someone could verify that I have done the reclassify correctly.
Thank you,
[root@cephyr-mon1 crushtest]# crushtool -i crush_comp.c --reclassify --reclassify-root default hdd -o crush_comp_corr.c
classify_root default (-1) as hdd
renumbering bucket -1 -> -29
renumbering bucket -27 -> -30
renumbering bucket -25 -> -31
renumbering bucket -23 -> -32
renumbering bucket -21 -> -33
renumbering bucket -19 -> -34
renumbering bucket -17 -> -35
renumbering bucket -15 -> -36
renumbering bucket -13 -> -37
renumbering bucket -11 -> -38
renumbering bucket -9 -> -39
renumbering bucket -7 -> -40
renumbering bucket -5 -> -41
renumbering bucket -3 -> -42
[root@cephyr-mon1 crushtest]# crushtool -i crush_comp.c --compare crush_comp_corr.c
rule 0 had 0/10240 mismatched mappings (0)
rule 6 had 0/10240 mismatched mappings (0)
rule 7 had 0/4096 mismatched mappings (0)
rule 8 had 0/4096 mismatched mappings (0)
rule 9 had 0/4096 mismatched mappings (0)
rule 10 had 0/4096 mismatched mappings (0)
rule 11 had 0/4096 mismatched mappings (0)
rule 12 had 0/4096 mismatched mappings (0)
rule 13 had 0/4096 mismatched mappings (0)
maps appear equivalent
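For reference, a sketch of the full round trip with the same file names (only run the setcrushmap step once the compare output looks right; the decompile step is just for inspection):

ceph osd getcrushmap -o crush_comp.c
crushtool -i crush_comp.c --reclassify --reclassify-root default hdd -o crush_comp_corr.c
crushtool -i crush_comp.c --compare crush_comp_corr.c
crushtool -d crush_comp_corr.c -o crush_comp_corr.txt    # decompile to read the reclassified rules
ceph osd setcrushmap -i crush_comp_corr.c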
Regards,
Mathias Lindberg
Tel: +46 (0)31 7723059
Mob: +46 (0)723 526107
Mathias Lindberg
mathlin(a)chalmers.se