Hi everyone,
Our telemetry service is up and running again.
Thanks to Adam Kraitman and Dan Mick for restoring the service.
We thank you for your patience and appreciate your contribution to the
project!
Thanks,
Yaarit
On Tue, Jan 3, 2023 at 3:14 PM Yaarit Hatuka <yhatuka(a)redhat.com> wrote:
> Hi everyone,
>
> We are having some infrastructure issues with our telemetry backend, and
> we are working on fixing it.
> Thanks Jan Horacek for opening this issue
> <https://tracker.ceph.com/issues/58371> [1]. We will update once the
> service is back up.
> We are sorry for any inconvenience you may be experiencing, and appreciate
> your patience.
>
> Thanks,
> Yaarit
>
> [1] https://tracker.ceph.com/issues/58371
>
Hey all,
We will be having a Ceph science/research/big cluster call on Tuesday,
January 31st. If anyone wants to discuss something specific, they can add
it to the pad linked below. If you have questions or comments, you can
contact me.
This is an informal open call of community members, mostly from
HPC/HTC/research environments, where we discuss whatever is on our minds
regarding Ceph: updates, outages, features, maintenance, etc. There is no
set presenter, but I do attempt to keep the conversation lively.
Pad URL:
https://pad.ceph.com/p/Ceph_Science_User_Group_20230131
Ceph calendar event details:
January 31, 2023
15:00 UTC
4pm Central European
9am Central US
Description: Main pad for discussions:
https://pad.ceph.com/p/Ceph_Science_User_Group_Index
Meetings will be recorded and posted to the Ceph YouTube channel.
To join the meeting on a computer or mobile phone:
https://bluejeans.com/908675367?src=calendarLink
To join from a Red Hat Deskphone or Softphone, dial: 84336.
Connecting directly from a room system?
1.) Dial: 199.48.152.152 or bjn.vc
2.) Enter Meeting ID: 908675367
Just want to dial in on your phone?
1.) Dial one of the following numbers: 408-915-6466 (US)
See all numbers: https://www.redhat.com/en/conference-numbers
2.) Enter Meeting ID: 908675367
3.) Press #
Want to test your video connection? https://bluejeans.com/111
Kevin
--
Kevin Hrpcek
NASA VIIRS Atmosphere SIPS/TROPICS
Space Science & Engineering Center
University of Wisconsin-Madison
I can only confirm that. The file https://download.ceph.com/debian-pacific/pool/main/c/ceph/python3-rados_16.… is clearly missing from the Ceph download server, which makes it impossible to install the upgrade on Debian. And since the previous 16.2.10 package definition has been replaced with the latest one, it is currently impossible to install Pacific on a Debian system at all.
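For anyone who wants to verify this on their own system, a quick read-only check against the configured repository would be something like the following (assuming the debian-pacific repo is already set up in your apt sources):

# refresh the package index and show which versions apt can actually see
apt-get update
apt-cache policy python3-rados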
Dear all,
is there any information available about write amplification for CephFS? I
found quite some material on the write amplification of VMs using a
journaled file system on top of RBD, but nothing that relates to CephFS.
From my understanding, I would expect the following:
- for X-rep, data needs to be written to X OSDs (factor X)
- for k+m EC, data is written to k+m OSDs (factor (k+m)/k)
Am I correct? How much is added by metadata and the WAL? Am I missing anything?
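As a quick sanity check of those expectations (assuming 3-way replication and a 4+2 EC profile as examples, and ignoring metadata/WAL overhead for the moment), the nominal factors would be:

\[
\mathrm{WA}_{\mathrm{rep}} = X = 3 \;\Rightarrow\; 3\times,
\qquad
\mathrm{WA}_{\mathrm{EC}} = \frac{k+m}{k} = \frac{4+2}{4} = 1.5\times
\]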
Best wishes,
Manuel
Good day all,
I have an issue with a few OSDs (on two different nodes) that attempt to
start but fail / crash quite quickly. They are all LVM-backed disks.
I've tried upgrading the software and running health checks on the hardware
(nodes and disks), and there don't seem to be any issues there.
Recently I've had a few "other" disks physically fail in the cluster, and I
now have one PG down, which is blocking some I/O on CephFS.
I've added the output of the OSD's journalctl and the OSD log below in case
it helps identify anything obvious.
I also set debug_bluefs = 20, as I saw suggested in another post.
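For reference, something along these lines is how that level can be raised (osd.12 below is just a placeholder for the affected OSD's id):

# persist a higher BlueFS log level for a single OSD
ceph config set osd.12 debug_bluefs 20/20
# or apply it at runtime via the admin socket on the node running the daemon
ceph daemon osd.12 config set debug_bluefs 20/20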
I recently manually upgraded this node to 17.2.0 before the problem began,
and later to 17.2.5. The other OSDs on this node start / run fine.
The other node (15.2.17) also has a few OSDs that will not start and some
that run without issue.
Could anyone point me in the right direction to investigate and solve my
OSD issues?
https://pastebin.com/3PkCabdf
https://pastebin.com/BT9bnhSb
Production system mainly used for CephFS
OS: Ubuntu 20.04.5 LTS
Ceph versions: 15.2.17 - Octopus (one OSD node manually upgraded to 17.2.5
- Quincy)
Erasure-coded data pool (k=4, m=2) - the journals for each OSD are co-located
on each drive
Kind regards
Geoffrey Rhodes
Hello All,
In Ceph Quincy I am not able to find the rbd_mirror_journal_max_fetch_bytes
config option for rbd-mirror.
I configured a Ceph cluster of almost 400 TB and enabled rbd-mirror. In the
initial stage I was able to achieve almost 9 GB/s, but after the sync of all
the images completed, the rbd-mirror speed automatically dropped to between
4 and 5 Mbps.
On the primary cluster we are continuously writing 50 to 400 Mbps of data,
but the replication speed we get is only 4 to 5 Mbps, even though we have
10 Gbps of replication network bandwidth.
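For reference, the per-image replication state (and, for journal-based mirroring, the replay statistics) can be checked with something like the following; the pool and image names below are just placeholders:

# per-image mirroring state for a whole pool
rbd mirror pool status mypool --verbose
# or for a single image
rbd mirror image status mypool/myimage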
Note: I also tried to find the option rbd_mirror_journal_max_fetch_bytes,
but I am not able to find this option in the configuration. Also, when I try
to set it from the command line, it shows an error like:
command:
ceph config set client.rbd rbd_mirror_journal_max_fetch_bytes 33554432
error:
Error EINVAL: unrecognized config option 'rbd_mirror_journal_max_fetch_bytes'
cluster version
ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
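For what it's worth, a read-only way to double-check which options this release actually recognizes is something like:

# list every config option the cluster knows about that mentions rbd_mirror
ceph config ls | grep rbd_mirror
# show the description of the specific option, if this release still has it
ceph config help rbd_mirror_journal_max_fetch_bytes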
Please suggest an alternative way to configure this option, or how I can
improve the replication network speed.
Hello,
If buffered_io is enabled, is there a way to know exactly how much physical
memory each OSD is using?
What I've found is dump_mempools, whose last entries are the following, but
would these bytes be the real physical memory usage?
"total": {
"items": 60005205,
"bytes": 995781359
Also, which metric exposes this value? I haven't found any.
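The snippet above is from dump_mempools; one way to compare it against what the kernel actually accounts to the process is something like the following (osd.0 is a placeholder id):

# mempool accounting as reported by the daemon itself
ceph daemon osd.0 dump_mempools
# resident set size (RSS, in kB) of every ceph-osd process on this host
ps -C ceph-osd -o pid,rss,cmd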
Thank you