Hi,
On Sun, Jul 4, 2021 at 0:17, changcheng.liu <changcheng.liu(a)aliyun.com> wrote:
>
> Hi all,
> I'm reading the Ceph user survey results: https://ceph.io/community/2021-ceph-user-survey-results.
> Do we have data on which type of AsyncMessenger is used: TCP, RDMA, or DPDK?
> What is the reason that RDMA and DPDK aren't often used?
> You can find both the report and raw data below.
>
> 2021 Ceph User Survey Results
> 2021 Ceph User Survey Raw Data
The two links above return 404. Could you fix them?
Thanks,
Satoru
>
> B.R.
> Jerry
>
>
Hi Reed,
To add to this comment by Weiwen:
On 28.05.21 13:03, 胡 玮文 wrote:
> Have you tried just starting multiple rsync processes simultaneously to transfer different directories? Distributed systems like Ceph often benefit from more parallelism.
When I migrated from XFS on iSCSI (legacy system, no Ceph) to CephFS a
few months ago, I used msrsync [1] and was quite happy with the speed.
For your use case, I would start with -p 12 but might experiment with up
to -p 24 (as you only have 6C/12T in your CPU). With many small files,
you also might want to increase -s from the default 1000.
Note that msrsync does not work with rsync's --delete flag. As I was
syncing a live system, I ended up with the workflow below (a consolidated command sketch follows the list):
- Initial sync with msrsync (something like ./msrsync -p 12 --progress
--stats --rsync "-aS --numeric-ids" ...)
- Second sync with msrsync (to sync changes during the first sync)
- Take old storage off-line for users / read-only
- Final rsync with --delete (i.e. rsync -aS --numeric-ids --delete ...)
- Mount cephfs at location of old storage, adjust /etc/exports with fsid
entries where necessary, turn system back on-line / read-write
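Putting the whole thing together, the sequence looked roughly like this (the source and destination paths below are only placeholders for your actual old storage and CephFS mount, so adjust as needed):
./msrsync -p 12 --progress --stats --rsync "-aS --numeric-ids" /srv/old-storage/ /mnt/cephfs/
./msrsync -p 12 --progress --stats --rsync "-aS --numeric-ids" /srv/old-storage/ /mnt/cephfs/
rsync -aS --numeric-ids --delete /srv/old-storage/ /mnt/cephfs/
(The second msrsync pass picks up changes made during the first one; the final plain rsync with --delete runs only once the old storage is read-only.)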
Cheers
Sebastian
[1] https://github.com/jbd/msrsync
Hi!
I installed fresh cluster 16.2.4
as described in https://docs.ceph.com/en/latest/cephadm/#cephadm
Everything works except for one thing: graphs only appear under Hosts -> Overall Performance, and only for CPU and network. Everywhere else the dashboard shows "no data".
What could I have done wrong?
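I assume the monitoring stack (prometheus, grafana, node-exporter) should have been deployed automatically by cephadm. Is checking "ceph orch ls" and "ceph orch ps" (to confirm those services are actually running) the right place to start, or does something else feed the other graphs?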
WBR,
Fyodor.
Hi,
I have this config:
https://jpst.it/2yBsD
What am I missing in the backend part to make it balance across the same server on different ports?
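What I'm aiming for, conceptually, is the same host listed twice with different ports, something like the sketch below (the host, ports, and names here are made up; the real config is in the link above):
backend rgw_backend
    balance roundrobin
    mode http
    server rgw1 10.0.0.1:8080 check
    server rgw2 10.0.0.1:8081 check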
Thank you
Hello
I am getting an error on one node in my cluster (other nodes
are fine) when trying to run "cephadm shell". Historically
this machine has been used as the primary Ceph management
host, so it would be nice if this could be fixed.
ceph-1 ~ # cephadm -v shell
container_init=False
Inferring fsid 79656e6e-21e2-4092-ac04-d536f25a435d
Inferring config
/var/lib/ceph/79656e6e-21e2-4092-ac04-d536f25a435d/mon.ceph-1/config
Running command: /usr/bin/podman images --filter
label=ceph=True --filter dangling=false --format
{{.Repository}}@{{.Digest}}
/usr/bin/podman: stdout
docker.io/ceph/daemon-base@sha256:0810dc7db854150bc48cf8fc079875e28b3138d070990a630b8fb7cec7cd2ced
/usr/bin/podman: stdout
docker.io/ceph/ceph@sha256:54e95ae1e11404157d7b329d0bef866ebbb214b195a009e87aae4eba9d282949
/usr/bin/podman: stdout
docker.io/ceph/ceph@sha256:16d37584df43bd6545d16e5aeba527de7d6ac3da3ca7b882384839d2d86acc7d
Using recent ceph image
docker.io/ceph/daemon-base@sha256:0810dc7db854150bc48cf8fc079875e28b3138d070990a630b8fb7cec7cd2ced
Running command: /usr/bin/podman run --rm --ipc=host
--net=host --entrypoint stat -e
CONTAINER_IMAGE=docker.io/ceph/daemon-base@sha256:0810dc7db854150bc48cf8fc079875e28b3138d070990a630b8fb7cec7cd2ced
-e NODE_NAME=ceph-1
docker.io/ceph/daemon-base@sha256:0810dc7db854150bc48cf8fc079875e28b3138d070990a630b8fb7cec7cd2ced
-c %u %g /var/lib/ceph
stat: stdout 167 167
Running command (timeout=None): /usr/bin/podman run --rm
--ipc=host --net=host --privileged --group-add=disk -it -e
LANG=C -e PS1=[ceph: \u@\h \W]\$ -e
CONTAINER_IMAGE=docker.io/ceph/daemon-base@sha256:0810dc7db854150bc48cf8fc079875e28b3138d070990a630b8fb7cec7cd2ced
-e NODE_NAME=ceph-1 -v
/var/run/ceph/79656e6e-21e2-4092-ac04-d536f25a435d:/var/run/ceph:z
-v
/var/log/ceph/79656e6e-21e2-4092-ac04-d536f25a435d:/var/log/ceph:z
-v
/var/lib/ceph/79656e6e-21e2-4092-ac04-d536f25a435d/crash:/var/lib/ceph/crash:z
-v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v
/run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v
/var/lib/ceph/79656e6e-21e2-4092-ac04-d536f25a435d/mon.ceph-1/config:/etc/ceph/ceph.conf:z
-v
/etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.keyring:z
-v
/var/lib/ceph/79656e6e-21e2-4092-ac04-d536f25a435d/home:/root
--entrypoint bash
docker.io/ceph/daemon-base@sha256:0810dc7db854150bc48cf8fc079875e28b3138d070990a630b8fb7cec7cd2ced
Error: error checking path
"/var/lib/ceph/79656e6e-21e2-4092-ac04-d536f25a435d/mon.ceph-1/config":
stat
/var/lib/ceph/79656e6e-21e2-4092-ac04-d536f25a435d/mon.ceph-1/config:
no such file or directory
The machine in question doesn't run a mon daemon (but it did
a long time ago), so I am not sure why "cephadm shell" on
this particular machine is looking for mon.ceph-1/config.
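In case it helps narrow things down: I assume cephadm shell can also be pointed at an explicit config and keyring instead of inferring them, something like
cephadm shell --fsid 79656e6e-21e2-4092-ac04-d536f25a435d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
(assuming a copy of the config exists at /etc/ceph/ceph.conf), but I haven't verified that this avoids the missing mon.ceph-1/config lookup.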
Can anybody help?
Thanks,
Vlad
Hi,
I've been looking for this for a long time. I have a lot of users, and when one user can take down the cluster I want to know which one, but there aren't any bucket stats that could help.
Does anyone know anything?
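(To frame what I'm after: something like per-user request or IO rates. As far as I understand, radosgw-admin usage show --uid=<user> only gives aggregated op counts, and only if the usage log is enabled, so it doesn't really tell you who is hammering the cluster right now.)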
Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo(a)agoda.com
---------------------------------------------------
Hi,
I want to remove everything object-storage related from my cluster and keep it only for RBD.
I've uninstalled the RGW services.
Removed the haproxy config related to that.
When I try to delete the realm, zone, and zonegroup it finishes, but after a couple of minutes something recreates another zonegroup. I can't figure out what.
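(For reference, the deletions themselves were done with something along the lines of radosgw-admin zone delete --rgw-zone=<zone> and radosgw-admin zonegroup delete --rgw-zonegroup=<zonegroup>, followed by removing the realm; the names here are placeholders.)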
What am I missing?
PS: The pools are still there; that will be the last step. I hope I'm not missing any necessary steps.
Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo(a)agoda.com
---------------------------------------------------
I have two rbd images.
Running fio on them, one performs very well and one performs poorly. I'd like to get more insight into why.
I know that
rbd info xx/yyy
gives me a small amount of information, but I can't seem to find a detailed info dump the way you can with things like
ceph daemon osd.9 config show
I can't even get useful Google hits for things like
"ceph find rbd image stripe-unit"
since all the pages seem to detail how to set the value, but not how to query it.
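(For example, I would have expected something like rbd info xx/yyy --format json --pretty-format, or rbd config image list xx/yyy for per-image config overrides, to expose it, but as far as I can tell rbd info only prints the stripe settings when the stripingv2 feature is enabled on the image.)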
Can anyone point me in the right direction?
--
Philip Brown | Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310 | Fax 714.918.1325
pbrown(a)medata.com | www.medata.com