Hi,
In its EBS gp3 documentation, AWS says that small volumes cannot achieve the
best performance. I suspect this is a general characteristic of distributed
storage, including Ceph. Is the same true of Ceph block storage? I have read
many docs from the Ceph community, but I have never seen this limitation
mentioned for Ceph.
https://docs.aws.amazon.com/ebs/latest/userguide/general-purpose.html
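For concreteness, here is a small sketch of the size-to-performance coupling the linked AWS page describes (figures taken from that page as of this writing: a 3,000 IOPS baseline, a 500 IOPS-per-GiB provisioning ratio, and a 16,000 IOPS cap; this is AWS-specific arithmetic, not a statement about Ceph):

```shell
# Maximum provisionable gp3 IOPS for a given volume size:
# every volume gets the 3,000 IOPS baseline, provisioning above that
# is limited to 500 IOPS per GiB, and the hard cap is 16,000 IOPS.
max_iops() {
    size_gib=$1
    iops=$(( size_gib * 500 ))
    [ "$iops" -lt 3000 ] && iops=3000
    [ "$iops" -gt 16000 ] && iops=16000
    echo "$iops"
}

max_iops 8    # an 8 GiB volume tops out at 4,000 IOPS
max_iops 32   # 32 GiB is the smallest size that can reach the 16,000 cap
```

So on gp3 the small-volume limit comes from an explicit provisioning ratio; my question above is whether Ceph has an analogous size dependency.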
Regards,
--
Mitsumasa KONDO
Hello everyone!
We deployed a platform with Ceph Quincy, and now we need to give some old
CentOS 7 nodes access to it until 30/07/2024. I have found two approaches: the
first is to deploy NFS Ganesha and provide access through the NFS
protocol; the second is to use an older CephFS client, specifically the
Octopus client.
I would like to know whether there is a third option, and which one the
community would recommend.
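For reference, a sketch of what the first option looks like with cephadm (the cluster name "mynfs", the filesystem name "myfs", and the paths below are made-up placeholders, and flag spellings vary a little between releases, so please check `ceph nfs export create cephfs -h` on your version):

```shell
# Option 1: export CephFS over NFS via an NFS Ganesha service managed by cephadm.
ceph nfs cluster create mynfs "2 host1,host2"
ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /legacy --fsname myfs

# CentOS 7 clients then mount over plain NFS, with no Ceph client code involved:
mount -t nfs -o nfsvers=4.1 host1:/legacy /mnt/legacy
```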
Thanks in advance.
Regards!
--
Dario Graña
PIC (Port d'Informació Científica)
Campus UAB, Edificio D
E-08193 Bellaterra, Barcelona
http://www.pic.es
Avis - Aviso - Legal Notice: http://legal.ifae.es
Hi All,
I'm having a crazy time getting config items to stick on my MDS daemons.
I'm running Reef 18.2.1 on RHEL 9, and the daemons are running in
podman; I used cephadm to deploy them.
I can adjust config items at runtime, like so:
ceph tell mds.slugfs.pr-md-01.xdtppo config set mds_bal_interval -1
But for the life of me I cannot get that to stick when I restart the MDS
daemon.
I've tried adding this to /etc/ceph/ceph.conf in the host server:
[mds]
mds_bal_interval = -1
But that doesn't get picked up on daemon restart. I also added the same
config segment to /etc/ceph/ceph.conf *inside* the container, no dice,
still doesn't stick. I even tried adding it to
/var/lib/ceph/<uuid>/config/ceph.conf and it *still* doesn't stick
across daemon restarts.
Does anyone know how I can get MDS config items to stick across daemon
reboots when the daemon is running in podman under RHEL?
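In case it clarifies the question, here is the centralized config database route I have been looking at (as I understand it, cephadm-deployed containerized daemons read this MON-backed database at startup rather than a host-side ceph.conf, but I'd welcome confirmation):

```shell
# Persist the option in the cluster's configuration database (stored on the MONs);
# restarted daemons should pick it up without any ceph.conf edits.
ceph config set mds mds_bal_interval -1

# Verify what the daemon actually sees, and where the value is set:
ceph config get mds.slugfs.pr-md-01.xdtppo mds_bal_interval
ceph config dump | grep mds_bal_interval
```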
Thanks much!
-erich
Hi all,
Today we discussed:
2024/04/08
- [Zac] CQ#4 is going out this week -
https://pad.ceph.com/p/ceph_quarterly_2024_04
- Last chance to review!
- [Zac] IcePic Initiative - context-sensitive help - do we regard the
docs as a part of the online help?
- https://pad.ceph.com/p/2024_04_08_cephadm_context_sensitive_help
- docs.ceph.com should be main source of truth; can link to this or
reference it generally as "see docs.ceph.com"
- Squid RC status
- Blockers tracked in: https://pad.ceph.com/p/squid-upgrade-failures
- rgw: topic changes merged to main, but introduced some test failures;
account changes are blocked on topics
- Non-blocker for RC0
- centos 9 containerization (status unknown?)
- Non-blocker for RC0
- Follow up with Dan / Guillaume
- RADOS has one outstanding blocker awaiting QA
- Failing to register new account at Ceph tracker - error 404.
- Likely related to Redmine upgrade over the weekend
- Pacific EOL:
- Action item: in https://docs.ceph.com/en/latest/releases/, move to
"archived"
- 18.2.3
- one or two PRs from cephfs left
- Milestone: https://github.com/ceph/ceph/milestone/19
Thanks,
Laura
--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage <https://ceph.io>
Chicago, IL
lflores(a)ibm.com | lflores(a)redhat.com
M: +17087388804
Hi everyone,
On behalf of the Ceph Foundation Board, I would like to announce the
creation of, and cordially invite you to, the first of a recurring series
of meetings focused solely on gathering feedback from the users of
Ceph. The overarching goal of these meetings is to elicit feedback from the
users, companies, and organizations who use Ceph in their production
environments. You can find more details about the motivation behind this
effort in our user survey [1] that we highly encourage all of you to take.
This is an extension of the Ceph User Dev Meeting with concerted focus on
Performance (led by Vincent Hsu, IBM) and Orchestration/Deployment (led by
Matt Leonard, Bloomberg), to start off with. We would like to kick off this
series of meetings on March 21, 2024. The survey will be open until March
18, 2024.
Looking forward to hearing from you!
Thanks,
Neha
[1]
https://docs.google.com/forms/d/15aWxoG4wSQz7ziBaReVNYVv94jA0dSNQsDJGqmHCLM…
Hi everyone,
I’d like to extend a warm thank you to Mike Perez for his years of service
as community manager for Ceph. He is changing focuses now to engineering.
The Ceph Foundation board decided to use services from the Linux Foundation
to fulfill some community management responsibilities, rather than rely on
a single member organization employing a community manager. The Linux
Foundation will assist with Ceph Foundation membership and governance
matters.
Please welcome Noah Lehman (cc’d) as our social media and marketing point
person - for anything related to this area, including the Ceph YouTube
channel, please reach out to him.
Ceph days will continue to be organized and funded by organizations around
the world, with the help of the Ceph Ambassadors (
https://ceph.io/en/community/ambassadors/). Gaurav Sitlani (cc’d) will help
organize the ambassadors going forward.
For other matters, please contact council(a)ceph.io and we’ll direct the
matter to the appropriate people.
Thanks,
Neha Ojha, Dan van der Ster, Josh Durgin
Ceph Executive Council
We are happy to announce another release of the go-ceph API library. This is a
regular release following our every-two-months release cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.27.0
The library includes bindings that aim to play a similar role to the "pybind"
python bindings in the ceph tree but for the Go language. The library also
includes additional APIs that can be used to administer cephfs, rbd, rgw, and
other subsystems.
There are already a few consumers of this library in the wild, including the
ceph-csi project.
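For anyone new to the library, pulling the tagged release into an existing Go module is just the following (note that building requires cgo plus the librados/librbd development headers):

```shell
go get github.com/ceph/go-ceph@v0.27.0
```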
--
John Mulligan
phlogistonjohn(a)asynchrono.us
jmulligan(a)redhat.com
Hello! I've installed my 5-node Ceph cluster and then deployed an NFS service with the command:
ceph nfs cluster create nfshacluster 5 --ingress --virtual_ip 192.168.171.48/26 --ingress-mode haproxy-protocol
I don't fully understand how this is supposed to work, but when I stop the NFS daemon on even one of these nodes, I see that writes to the NFS shares stop (tested via vdbench).
As I understand it, this is wrong: IO handled by the stopped daemon should fail over to another NFS daemon without any impact on IO.
Can someone help me troubleshoot this issue, or explain how to build a full-fledged active-active HA NFS cluster for production use?
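To help frame the question, these are the commands I'm using to inspect the setup (the ingress service name below is my guess at what cephadm created for this cluster):

```shell
# Confirm the ingress (haproxy + keepalived) daemons and the virtual IP are up:
ceph nfs cluster info nfshacluster
ceph orch ps --service_name ingress.nfs.nfshacluster
ceph orch ps --daemon_type nfs

# Clients mount the virtual IP 192.168.171.48, not an individual node,
# so that haproxy can redirect traffic when a backend ganesha daemon dies.
```

As far as I understand, NFSv4 clients also go through a grace period on failover, so a brief IO stall would be expected; it's the permanent stop of writes that surprises me.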
Thanks!
Ruslan Nurabayev
Senior Engineer
IT Platform Sector
Backbone Network Development Division
Network Development Department
+77012119272
Ruslan.Nurabayev(a)kcell.kz
****************************************************************************************
This e-mail and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they are
addressed. If you are not the intended recipient you are hereby notified
that any dissemination, forwarding, copying or use of any of the
information is strictly prohibited, and the e-mail should immediately be
deleted.
KCELL makes no warranty as to the accuracy or completeness of any
information contained in this message and hereby excludes any liability
of any kind for the information contained therein or for the information
transmission, reception, storage or use of such in any way
whatsoever. The opinions expressed in this message belong to sender alone
and may not necessarily reflect the opinions of KCELL.
This e-mail has been scanned for all known computer viruses.
****************************************************************************************