Hi Folks
We are using Ceph as the storage backend on our 6-node Proxmox VM cluster. To monitor our systems we use Zabbix, and I would like to get some Ceph data into Zabbix so we get alarms when something goes wrong.
The Ceph mgr has a "zabbix" module that uses "zabbix_sender" to actively send data, but I cannot get the module working. It always responds with "failed to send data".
The network side seems to be fine:
root@vm-2:~# traceroute 192.168.15.253
traceroute to 192.168.15.253 (192.168.15.253), 30 hops max, 60 byte packets
1 192.168.15.253 (192.168.15.253) 0.411 ms 0.402 ms 0.393 ms
root@vm-2:~# nmap -p 10051 192.168.15.253
Starting Nmap 7.70 ( https://nmap.org ) at 2019-09-18 08:40 CEST
Nmap scan report for 192.168.15.253
Host is up (0.00026s latency).
PORT STATE SERVICE
10051/tcp open zabbix-trapper
MAC Address: BA:F5:30:EF:40:EF (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 0.61 seconds
root@vm-2:~# ceph zabbix config-show
{"zabbix_port": 10051, "zabbix_host": "192.168.15.253", "identifier": "VM-2", "zabbix_sender": "/usr/bin/zabbix_sender", "interval": 60}
root@vm-2:~#
But if I try "ceph zabbix send", I get "failed to send data to zabbix", and this shows up in the system journal:
Sep 18 08:41:13 vm-2 ceph-mgr[54445]: 2019-09-18 08:41:13.272 7fe360fe4700 -1 mgr.server reply reply (1) Operation not permitted
The log of ceph-mgr on that machine states:
2019-09-18 08:42:18.188 7fe359fd6700 0 mgr[zabbix] Exception when sending: /usr/bin/zabbix_sender exited non-zero: zabbix_sender [3253392]: DEBUG: answer [{"response":"success","info":"processed: 0; failed: 44; total: 44; seconds spent: 0.000179"}]
2019-09-18 08:43:18.217 7fe359fd6700 0 mgr[zabbix] Exception when sending: /usr/bin/zabbix_sender exited non-zero: zabbix_sender [3253629]: DEBUG: answer [{"response":"success","info":"processed: 0; failed: 44; total: 44; seconds spent: 0.000321"}]
I'm guessing this could have something to do with user rights, but I have no idea where to start tracking this down.
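Maybe it would also help to push a single value by hand and look at the server's verbose reply; something along these lines (identifier taken from the config above, "ceph.overall_status" is only my guess at one of the keys from the module's Zabbix template):

  /usr/bin/zabbix_sender -z 192.168.15.253 -p 10051 -s "VM-2" -k ceph.overall_status -o 0 -vv

That should at least show whether the Zabbix server accepts anything at all for the "VM-2" host.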
Maybe someone here has a hint?
If more information is needed, I will gladly provide it.
greetings
Ingo
This afternoon we ran into a problem on our test cluster, which is running Nautilus (14.2.2). It seems that any time PGs move on the cluster (from marking an OSD down, setting the primary-affinity to 0, or by using the balancer), a large number of the OSDs in the cluster peg the CPU cores they're running on for a while, which causes slow requests. From what I can tell, it appears to be related to slow peering caused by osd_pg_create() taking a long time.
This was seen on quite a few OSDs while waiting for peering to complete:
# ceph daemon osd.3 ops
{
"ops": [
{
"description": "osd_pg_create(e179061 287.7a:177739 287.9a:177739 287.e2:177739 287.e7:177739 287.f6:177739 287.187:177739 287.1aa:177739 287.216:177739 287.306:177739 287.3e6:177739)",
"initiated_at": "2019-08-27 14:34:46.556413",
"age": 318.25234538000001,
"duration": 318.25241895300002,
"type_data": {
"flag_point": "started",
"events": [
{
"time": "2019-08-27 14:34:46.556413",
"event": "initiated"
},
{
"time": "2019-08-27 14:34:46.556413",
"event": "header_read"
},
{
"time": "2019-08-27 14:34:46.556299",
"event": "throttled"
},
{
"time": "2019-08-27 14:34:46.556456",
"event": "all_read"
},
{
"time": "2019-08-27 14:35:12.456901",
"event": "dispatched"
},
{
"time": "2019-08-27 14:35:12.456903",
"event": "wait for new map"
},
{
"time": "2019-08-27 14:40:01.292346",
"event": "started"
}
]
}
},
...snip...
{
"description": "osd_pg_create(e179066 287.7a:177739 287.9a:177739 287.e2:177739 287.e7:177739 287.f6:177739 287.187:177739 287.1aa:177739 287.216:177739 287.306:177739 287.3e6:177739)",
"initiated_at": "2019-08-27 14:35:09.908567",
"age": 294.900191001,
"duration": 294.90068416899999,
"type_data": {
"flag_point": "delayed",
"events": [
{
"time": "2019-08-27 14:35:09.908567",
"event": "initiated"
},
{
"time": "2019-08-27 14:35:09.908567",
"event": "header_read"
},
{
"time": "2019-08-27 14:35:09.908520",
"event": "throttled"
},
{
"time": "2019-08-27 14:35:09.908617",
"event": "all_read"
},
{
"time": "2019-08-27 14:35:12.456921",
"event": "dispatched"
},
{
"time": "2019-08-27 14:35:12.456923",
"event": "wait for new map"
}
]
}
}
],
"num_ops": 6
}
That "wait for new map" message made us think something was getting hung up on the monitors, so we restarted them all without any luck.
I'll keep investigating, but so far my Google searches aren't pulling anything up, so I wanted to ask: is anyone else running into this?
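For reference, a quick way to count the pending osd_pg_create ops per OSD on a node should be something like this (assuming the default admin-socket paths):

  # count pending osd_pg_create ops on every local OSD
  for sock in /var/run/ceph/ceph-osd.*.asok; do
      echo -n "$sock: "
      ceph daemon "$sock" ops 2>/dev/null | grep -c osd_pg_create
  done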
Thanks,
Bryan
Hi,
Currently running Mimic 13.2.5.
We had reports this morning of timeouts and failures with PUT and GET
requests to our Ceph RGW cluster. I found these messages in the RGW
log:
RGWReshardLock::lock failed to acquire lock on
bucket_name:bucket_instance ret=-16
NOTICE: resharding operation on bucket index detected, blocking
block_while_resharding ERROR: bucket is still resharding, please retry
These were preceded by many of the following, which I think are normal/expected:
check_bucket_shards: resharding needed: stats.num_objects=6415879
shard max_objects=6400000
Our RGW cluster sits behind haproxy, which notified me approximately 90
seconds after the first 'resharding needed' message that no backends
were available. It appears this dynamic reshard process caused the
RGWs to lock up for a period of time. Roughly 2 minutes later the
reshard error messages stopped and operation returned to normal.
Looking back through previous RGW logs, I see a similar event from
about a week ago, on the same bucket. We have several buckets with
shard counts exceeding 1k (this one only has 128), and much larger
object counts, so clearly this isn't the first time dynamic resharding
has been invoked on this cluster.
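If it helps, I can also pull the reshard queue and bucket shard status with something like the following (the bucket name is a placeholder):

  radosgw-admin reshard list
  radosgw-admin reshard status --bucket <bucket_name>
  radosgw-admin bucket limit check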
Has anyone seen this? I expect it will come up again, and can turn up
debugging if that'll help. Thanks for any assistance!
Josh
On our test cluster, after upgrading to 14.2.5, I'm having problems with the mons pegging a CPU core while moving data around. I'm currently converting the OSDs from FileStore to BlueStore by marking the OSDs out on multiple nodes, destroying the OSDs, and then recreating them with ceph-volume lvm batch. This seems to get the ceph-mon process into a state where it pegs a CPU core on one of the mons:
1764450 ceph 20 0 4802412 2.1g 16980 S 100.0 28.1 4:54.72 ceph-mon
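For reference, the per-OSD conversion steps look roughly like this (the OSD id and device names below are just examples):

  ceph osd out 12
  systemctl stop ceph-osd@12
  ceph osd destroy 12 --yes-i-really-mean-it
  ceph-volume lvm zap --destroy /dev/sdb
  ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc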
Has anyone else run into this with 14.2.5 yet? I didn't see this problem while the cluster was running 14.2.4.
Thanks,
Bryan
Hi,
I am seeing an unusual slowdown using VMware with iSCSI gateways. I have two
iSCSI gateways with two RBD images. I found the following in the
logs:
Dec 24 09:00:26 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:26.040 969 [INFO]
alua_implicit_transition:562 rbd/pool1.vmware_iscsi1: Starting lock
acquisition operation.2019-12-24 09:00:26.040 969 [INFO]
alua_implicit_transition:557 rbd/pool1.vmware_iscsi1: Lock acquisition
operation is already in process.2019-12-24 09:00:26.973 969 [WARN]
tcmu_rbd_lock:744 rbd/pool1.vmware_iscsi1: Acquired exclusive lock.
Dec 24 09:00:26 ceph-iscsi2 tcmu-runner: tcmu_rbd_lock:744
rbd/pool1.vmware_iscsi1: Acquired exclusive lock.
Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:28.099 969 [WARN]
tcmu_notify_lock_lost:201 rbd/pool1.vmware_iscsi1: Async lock drop. Old
state 1
Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: tcmu_notify_lock_lost:201
rbd/pool1.vmware_iscsi1: Async lock drop. Old state 1
Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: alua_implicit_transition:562
rbd/pool1.vmware_iscsi1: Starting lock acquisition operation.
Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:28.824 969 [INFO]
alua_implicit_transition:562 rbd/pool1.vmware_iscsi1: Starting lock
acquisition operation.2019-12-24 09:00:28.990 969 [WARN] tcmu_rbd_lock:744
rbd/pool1.vmware_iscsi1: Acquired exclusive lock.
Dec 24 09:00:28 ceph-iscsi2 tcmu-runner: tcmu_rbd_lock:744
rbd/pool1.vmware_iscsi1: Acquired exclusive lock.
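To get a feel for how often the lock bounces between the gateways, I was thinking of simply counting the acquisitions on both nodes, something like this (the first hostname is a guess, only ceph-iscsi2 shows up in the log above):

  for gw in ceph-iscsi1 ceph-iscsi2; do
      echo "== $gw =="
      ssh "$gw" 'journalctl -u tcmu-runner --since "1 hour ago" | grep -c "Acquired exclusive lock"'
  done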
Can anyone help me, please?
Gesiel
Dear Cephalopodians,
running 13.2.6 on the source cluster and 14.2.5 on the rbd-mirror nodes and the target cluster,
I observe regular failures of the rbd-mirror processes.
By failures, I mean that traffic stops, but the daemons are still listed as active rbd-mirror daemons in
"ceph -s", and the daemons are still running. This coincides with a flood of the messages below in the mirror logs.
This happens "sometimes" when some OSDs go down and up in the target cluster (which happens each night, since the disks in that cluster
briefly go offline during "online" SMART self-tests - that's a problem in itself, but it's a cluster built from hardware that would otherwise have been trashed).
The rbd-mirror daemons keep running in any case, but synchronization stops. If not all rbd-mirror daemons have failed (we have three running, and it usually does not hit all of them),
the "surviving" one(s) do not seem to take over the images the failed daemons had locked.
Right now, I am eyeing a "quick solution" of regularly restarting the rbd-mirror daemons, but if there are any good ideas on which debug info I could collect
to get this analyzed and fixed, that would of course be appreciated :-).
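Concretely, that "quick solution" would be not much more than checking for blacklist entries and bouncing the daemons, roughly like this (the pool and instance names are placeholders):

  # which images have stopped replaying?
  rbd mirror pool status <pool> --verbose

  # did our rbd-mirror clients get blacklisted (run against the target cluster)?
  ceph osd blacklist ls

  # bounce the mirror daemon on the affected node
  systemctl restart ceph-rbd-mirror@<instance>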
Cheers,
Oliver
-----------------------------------------------
2019-12-24 02:08:51.379 7f31c530e700 -1 rbd::mirror::ImageReplayer: 0x559dcb968d00 [2/aabba863-89fd-4ea5-bb8c-0f417225d394] handle_process_entry_safe: failed to commit journal event: (108) Cannot send after transport endpoint shutdown
2019-12-24 02:08:51.379 7f31c530e700 -1 rbd::mirror::ImageReplayer: 0x559dcb968d00 [2/aabba863-89fd-4ea5-bb8c-0f417225d394] handle_replay_complete: replay encountered an error: (108) Cannot send after transport endpoint shutdown
...
2019-12-24 02:08:54.392 7f31c530e700 -1 rbd::mirror::ImageReplayer: 0x559dcb87bb00 [2/23699357-a611-4557-9d73-6ff5279da991] handle_process_entry_safe: failed to commit journal event: (125) Operation canceled
2019-12-24 02:08:54.392 7f31c530e700 -1 rbd::mirror::ImageReplayer: 0x559dcb87bb00 [2/23699357-a611-4557-9d73-6ff5279da991] handle_replay_complete: replay encountered an error: (125) Operation canceled
2019-12-24 02:08:55.707 7f31ea358700 -1 rbd::mirror::image_replayer::GetMirrorImageIdRequest: 0x559dce2e05b0 handle_get_image_id: failed to retrieve image id: (108) Cannot send after transport endpoint shutdown
2019-12-24 02:08:55.707 7f31ea358700 -1 rbd::mirror::image_replayer::GetMirrorImageIdRequest: 0x559dcf47ee70 handle_get_image_id: failed to retrieve image id: (108) Cannot send after transport endpoint shutdown
...
2019-12-24 02:08:55.716 7f31f5b6f700 -1 rbd::mirror::ImageReplayer: 0x559dcb997680 [2/f8218221-6608-4a2b-8831-84ca0c2cb418] operator(): start failed: (108) Cannot send after transport endpoint shutdown
2019-12-24 02:09:25.707 7f31f5b6f700 -1 rbd::mirror::InstanceReplayer: 0x559dcabd5b80 start_image_replayer: global_image_id=0577bd16-acc4-4e9a-81f0-c698a24f8771: blacklisted detected during image replay
2019-12-24 02:09:25.707 7f31f5b6f700 -1 rbd::mirror::InstanceReplayer: 0x559dcabd5b80 start_image_replayer: global_image_id=05bd4cca-a561-4a5c-ad83-9905ad5ce34e: blacklisted detected during image replay
2019-12-24 02:09:25.707 7f31f5b6f700 -1 rbd::mirror::InstanceReplayer: 0x559dcabd5b80 start_image_replayer: global_image_id=0e614ece-65b1-4b4a-99bd-44dd6235eb70: blacklisted detected during image replay
-----------------------------------------------
Hi,
I have a cephfs in production based on 2 pools (data+metadata).
Data is erasure coded with the profile:
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=3
m=2
plugin=jerasure
technique=reed_sol_van
w=8
Metadata is in replicated mode with 3 copies (size=3).
The crush rules are as follows:
[
{
"rule_id": 0,
"rule_name": "replicated_rule",
"ruleset": 0,
"type": 1,
"min_size": 1,
"max_size": 10,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 1,
"rule_name": "ec_data",
"ruleset": 1,
"type": 3,
"min_size": 3,
"max_size": 5,
"steps": [
{
"op": "set_chooseleaf_tries",
"num": 5
},
{
"op": "set_choose_tries",
"num": 100
},
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_indep",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
}
]
When we installed it, everything was in the same room, but now we have
split our cluster (6 servers, soon 8) across 2 rooms. Thus we updated
the crushmap by adding a room layer (with ceph osd crush add-bucket
room1 room, etc.) and moved all our servers in the tree to the correct
place (ceph osd crush move server1 room=room1, etc.).
Now we would like to change the rules to set the failure domain to room
instead of host (to be sure that, in case of a disaster in one of the rooms,
we will still have a copy in the other).
What is the best strategy to do this?
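For the replicated (metadata) pool, I imagine something like this would do it (untested, the pool name is just an example), but I am less sure how to handle the erasure-coded data pool:

  # new rule with room as the failure domain, then point the pool at it
  ceph osd crush rule create-replicated replicated_room default room
  ceph osd pool set cephfs_metadata crush_rule replicated_room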
F.
Hi All,
I have a query regarding objecter behaviour for homeless sessions. In
situations where all OSDs containing copies (let's say replication 3) of an
object are down, the objecter assigns a homeless session (OSD=-1) to the
client request. This request makes a radosgw thread hang indefinitely, as the
data can never be served while all required OSDs are down. With
multiple similar requests, all the radosgw threads get exhausted and
hang indefinitely waiting for the OSDs to come back up. This creates a complete
service outage, as no RGW threads are left to process valid
requests that could have been directed towards active PGs/OSDs.
I think the objecter or radosgw should have behaviour to terminate the
request and return early in the case of a homeless session. Let me know your
thoughts on this.
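A possible stop-gap might be the client-side RADOS op timeouts, so that the RGW threads at least get an error back instead of blocking forever; e.g. in the RGW client section of ceph.conf (the section name and values below are only examples, and I have not verified the side effects):

  [client.rgw.gateway1]
      rados_osd_op_timeout = 30    # seconds; 0 (the default) means wait forever
      rados_mon_op_timeout = 30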
Regards,
Biswajeet