Hi,
While doing some benchmarks I compared two identical Ceph clusters:
3x SuperMicro 1U
AMD Epyc 7302P 16C
256GB DDR
4x Samsung PM983 1.92TB
100Gbit networking
I tested this setup with v16.2.4 using fio:
bs=4k
qd=1
IOps: 695
That was very low, as I was expecting well over 1000 IOps.
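For completeness, the fio job matching those parameters would look roughly like this (the rbd ioengine and pool/image names are assumptions on my part; the exact command line wasn't included):

```shell
# Rough sketch of a QD=1, 4k random-write latency test against an RBD image.
# Pool "rbd" and image "bench" are placeholders.
fio --name=qd1-4k \
    --ioengine=rbd --pool=rbd --rbdname=bench \
    --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
    --direct=1 --time_based --runtime=60
```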
I checked with the second Ceph cluster which was still running v15.2.8,
the result: 1364 IOps.
I then upgraded from 15.2.8 to 15.2.13: 725 IOps
Looking at the differences between v15.2.8 and v15.2.13 in options.cc I
saw these changed defaults:
bluefs_buffered_io: false -> true
bluestore_cache_trim_max_skip_pinned: 1000 -> 64
The main difference seems to be 'bluefs_buffered_io', but in both cases
this was already explicitly set to 'true'.
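To rule out a config mismatch, the effective value can be checked both in the monitor config database and on a running OSD; a sketch (osd.0 is a placeholder for any OSD id):

```shell
# Value stored in the central config database:
ceph config get osd bluefs_buffered_io

# Value the running daemon is actually using (via its admin socket):
ceph daemon osd.0 config get bluefs_buffered_io
```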
So anything beyond 15.2.8 is right now giving me a much lower I/O
performance with Queue Depth = 1 and Block Size = 4k.
15.2.8: 1364 IOps
15.2.13: 725 IOps
16.2.4: 695 IOps
Has anybody else seen this as well? I'm trying to figure out where this
is going wrong.
Wido
Hello
My attempt to upgrade from Octopus to Pacific ran into
issues, and I currently have one 16.2.4 mon and two 15.2.12
mons. Is it safe to run the cluster like this, or should I
shut down the 16.2.4 mon until I figure out what to do next
with the upgrade?
Thanks,
Vlad
Hello
My upgrade from 15.2.12 to 16.2.4 is stuck because a mon
daemon failed to upgrade. 'systemctl status' of the mon showed
this error:
Error: open /sys/fs/cgroup/cpuacct,cpu/system.slice/...
It turns out there is no /sys/fs/cgroup/cpuacct,cpu
directory on my system. Instead, I have
/sys/fs/cgroup/cpu,cpuacct. Symlinking them appears to have
solved the immediate problem, but if I proceed with the
upgrade to 16.2.4, after reboot, all ceph daemons will
probably fail to start.
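For reference, the symlink workaround was roughly the following (requires root, and will not survive a reboot unless recreated, e.g. from a boot-time script):

```shell
# Point the cgroup path podman expects at the directory that actually
# exists on this CentOS 7 host.
ln -s /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpuacct,cpu
```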
Is this an issue with Ceph, podman (2.1.1), or the fact that
I am running CentOS 7?
Is it possible to upgrade from 15.2.12 to 16.2.4 on CentOS 7?
I thought that installing a version of podman that is
compatible with both would suffice, but apparently not...
Vlad
Hi,
Is there a way to check the omap sizes in the index pool? Ideally both the number of keys and the total size.
This one doesn't work: https://ceph.com/geen-categorie/get-omap-keyvalue-size/
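One approach that should still work is iterating over the index pool objects with rados; a rough sketch (the pool name is the default one and may differ in your zone):

```shell
# List the omap key count per object in the RGW index pool,
# largest first. Adjust the pool name for your setup.
pool=default.rgw.buckets.index
for obj in $(rados -p "$pool" ls); do
    count=$(rados -p "$pool" listomapkeys "$obj" | wc -l)
    echo "$count $obj"
done | sort -rn | head
```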
Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo(a)agoda.com<mailto:istvan.szabo@agoda.com>
---------------------------------------------------
I am running the Ceph ansible script to install ceph version Stable-6.0
(Pacific).
When running the sample yml file supplied by the GitHub repo, it runs
fine up until the "ceph-mon : check if monitor initial keyring already
exists" step, where it hangs for 30-40 minutes before failing.
From my understanding ceph ansible should be creating this keyring and
using it for communication between monitors, so does anyone know why the
playbook would have a hard time with this step?
Thanks in advance!
A DocuBetter Meeting will be held on 09 June 2021 at 1730 UTC.
This is the monthly DocuBetter Meeting that is more convenient for
European and North American Ceph contributors than the other meeting,
which is convenient for people in Australia and Asia (and which is very
rarely attended).
Topics:
- cephadm docs rewrite (ongoing)
- ceph.io copy rewrite and information architecture restructure (ongoing)
- rgw manual install procedure (prospective)
- new ceph-docs mailing list
Bring your docs complaints and requests to this meeting.
Meeting: https://bluejeans.com/908675367
Etherpad: https://pad.ceph.com/p/Ceph_Documentation
I'm happy to announce another release of the go-ceph API
bindings. This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.10.0
Changes in the release are detailed in the link above.
The bindings aim to play a similar role to the "pybind" python bindings in the
ceph tree but for the Go language. These API bindings require the use of cgo.
There are already a few consumers of this library in the wild, including the
ceph-csi project.
In addition to our regular release this week, we're also participating in this
June's "Ceph Month" event with the "go-ceph get together" Birds-of-a-Feather
session on Thursday June 10th at 10:10 Eastern time. It should be visible in
the Ceph Community calendar [1]. If you can't make the BoF, questions,
comments, bugs etc are best directed at our github issues
tracker or github discussions forum.
[1] - https://ceph.io/contribute/#community-calendar
--
John Mulligan
phlogistonjohn(a)asynchrono.us
jmulligan(a)redhat.com
_______________________________________________
Dev mailing list -- dev(a)ceph.io
To unsubscribe send an email to dev-leave(a)ceph.io
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io
Hi,
I am trying to create buckets through RGW in the following order:
- *bucket1* with *user1* with *access_key1* and *secret_key1*
- *bucket1* with *user2* with *access_key2* and *secret_key2*
When I try to create a second bucket1 with user2 I get *Error response
code BucketAlreadyExists.*
Why? Shouldn't bucket names be scoped per user? Is this by design, and is
there a particular reason it follows this concept?
Regards,
Rok
Hello.
I have a multisite RGW environment.
When I create a new bucket, it is immediately created on both the master
and the secondary zone.
If I don't want a bucket to sync, I have to stop the sync after creation.
Is there any global option like "do not sync automatically, only sync
when I enable it"?
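The per-bucket case of stopping sync after creation can be sketched like this (the bucket name is a placeholder):

```shell
# Disable multisite sync for a single bucket, then verify.
radosgw-admin bucket sync disable --bucket=mybucket
radosgw-admin bucket sync status --bucket=mybucket
```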