Hi,
I am confused about a struct data type in the Ceph CRUSH source code.
The header file is here:
https://github.com/ceph/ceph/blob/master/src/crush/crush.h
As you can see below, there is a struct member named _weight_set_. From what
I have understood going through different CRUSH maps, this _weight_set_
should effectively be a 2D array, am I right?
struct crush_choose_arg {
  __s32 *ids;                          /*!< values to use instead of items */
  __u32 ids_size;                      /*!< size of the __ids__ array */
  struct crush_weight_set *weight_set; /*!< weight replacements for a given position */
  __u32 weight_set_positions;          /*!< size of the __weight_set__ array */
};
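For reference, the element type is defined in the same header; each entry of the weight_set array carries its own array of weights:

struct crush_weight_set {
  __u32 *weights; /*!< 16.16 fixed point weights in the same order as items */
  __u32 size;     /*!< size of the __weights__ array */
};

/* illustrative access only, variable names here are made up:
 * in effect a weight_set_positions x size matrix */
__u32 w = arg->weight_set[position].weights[item];

So indexing goes position first, then item, which is why I read it as effectively 2D even though it is not declared as a plain 2D array.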
BR
Bobby
Hi,
is there any way to map an image that has the journaling feature enabled, without disabling it first?
# rbd map image@snap
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported
by the kernel with "rbd feature disable image@snap journaling".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
# ceph -v
ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)
# uname -r
5.4.52-050452-generic
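The only workaround I am aware of would be rbd-nbd, which maps through librbd in userspace and so is not limited by the krbd feature set; something like this (untested on my side, image/snap names as above):

# rbd-nbd map image@snap
/dev/nbd0

But I would prefer to stay on the kernel client if there is a way.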
Thanks,
--
Herbert
Hi,
I was just checking on a few (13) IPv6-only Ceph clusters and I noticed
that they couldn't send their Telemetry data anymore:
telemetry.ceph.com has address 8.43.84.137
This server used to have dual-stack connectivity while it was still
hosted at OVH. It seems to have been moved to Red Hat, but lost IPv6
connectivity in the process.
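If the AAAA record was simply dropped in the move, a quick DNS check would confirm it:

# host -t AAAA telemetry.ceph.com
telemetry.ceph.com has no AAAA record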
How can we get this back?
Wido
Hi everyone,
Our production Ceph cluster (Hammer) has 3 monitors and 300+ OSDs. The monitor daemons run on hosts separate from the OSD daemons, and our OpenStack VMs run on RBD.
We now need to do maintenance on the monitor hosts, so each one has to be shut down briefly, one by one, always keeping two monitor hosts online.
What is the impact on the VMs while a monitor is down?
How can we reduce that impact?
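Since two of the three mons stay up, quorum should hold throughout. The rough procedure we have in mind is below; mon.a is just a placeholder, and Hammer uses sysvinit/upstart rather than systemd:

# confirm all three mons are in quorum before starting
ceph quorum_status
# stop the monitor on the host to be maintained
service ceph stop mon.a
# ... do the host maintenance, then bring it back ...
service ceph start mon.a
# wait for mon.a to rejoin the quorum before touching the next host
ceph quorum_status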
Dear Cephers,
On a Ceph node (Luminous 12.2.13), I have an HDD that smartctl indicates will fail soon but which is still operational, and I would like to replace it now. It is no fun to rebalance the data off, put a new disk in, and then rebalance again.
I am thinking of using ceph-objectstore-tool to copy the data from the failing HDD to a new HDD on the same Ceph node, keeping the same OSD ID. Is that possible, and how?
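What I have in mind is roughly the following, with the OSD stopped the whole time; OSD ID 12, PG 1.2f and the paths are placeholders, and on FileStore I would presumably have to add --journal-path as well:

# keep CRUSH from rebalancing while the OSD is down
ceph osd set noout
systemctl stop ceph-osd@12
# list the PGs on the failing disk, then export each one
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op list-pgs
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 1.2f --op export --file /mnt/scratch/1.2f.export
# after re-creating OSD 12 on the new disk, import the PGs back
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op import --file /mnt/scratch/1.2f.export
systemctl start ceph-osd@12
ceph osd unset noout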
thanks in advance for advice,
samuel
huxiaoyu(a)horebdata.cn
Hi everyone, we have a Ceph cluster for object storage only; the rgws are accessible from the internet, and everything is OK.
Now, one of our teams/clients requires that their data never be accessible from the internet.
In case of any security bug/breach/whatever, they want access to their data limited to the local network.
Before creating a second "private" cluster, is there a way to achieve this on our current "public" cluster?
Would a multi-zone setup without replication help me with that?
Public rgws for public access on the "pub_zone", and private rgws for private access on the "prv_zone"?
pubzone.rgw.buckets.data
prvzone.rgw.buckets.data
If the "public" rgws is hacked, without the access_key/secret_key of the private zone, is there any possibilities to access the private zone?
Does a multi-realms would help me to secure it more?
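For context, the separation I have in mind would be created roughly like this; the realm/zonegroup/zone names are placeholders, and no sync would ever be configured between the two zones:

radosgw-admin realm create --rgw-realm=private
radosgw-admin zonegroup create --rgw-zonegroup=prv_group --rgw-realm=private --master
radosgw-admin zone create --rgw-zonegroup=prv_group --rgw-zone=prv_zone --master
radosgw-admin period update --commit

The private rgw instances would then run with rgw_zone = prv_zone in their section of ceph.conf and listen only on the local network.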
Any input would be really appreciated.
I don't want to put too much energy into false security and/or security by obscurity,
so if these multi-site/multi-realm scenarios are useless from a security point of view, please tell me. :-)
Thanks!
JS
Hi everyone,
Our Ceph cluster has been stuck in syncing status for a long time after executing the radosgw-admin data sync init command.
-----
          realm dcd64504-c445-4810-9b83-851875443bcd (storage)
      zonegroup 313a345a-4886-4cb3-8d06-0fe3919d591a (mastergroup)
           zone 76fc5fe2-9f89-4419-b611-ab275000b358 (dc01)
  metadata sync no sync (zone is master)
      data sync source: cc4e8e55-988a-430e-b1df-4d88f0c81f4f (dc02)
                        syncing
                        full sync: 117/128 shards
                        full sync: 3 buckets to sync
                        incremental sync: 11/128 shards
                        data is behind on 117 shards
                        behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
-----
Can anyone give me a hint to let the synchronization job finish?
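Would something like the two commands below be the right way to check for stuck entries and nudge the sync along (source zone name as above), and is data sync run safe on a production zone?

radosgw-admin sync error list
radosgw-admin data sync run --source-zone=dc02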
Many thanks!
--
Nghia Viet Tran (Mr)
mgm technology partners Vietnam Co. Ltd
7 Phan Châu Trinh
Đà Nẵng, Vietnam
+84 935905659
nghia.viet.tran(a)mgm-tp.com
www.mgm-tp.com