I’ve inherited a couple of clusters with non-default (i.e., not “ceph”) internal names, and I want to rename them for the usual reasons.
I had previously developed a full list of steps, but I no longer have access to it.
Anyone done this recently? Want to be sure I’m not missing something.
* Nautilus, CentOS 7, RGW and RBD
* Rename OSD mountpoints with mount --move (rough sketch below)
* Rename systemd resources / mounts?
* Rename /var/lib/ceph/{mon,osd} directories
* Rename ceph*conf files on backend and client systems
* Rename keyrings (just the filenames?)
* Rename log files
* Adjust `ceph config` paths for admin socket, keyring, logs, mgr/mds/mon data, osd journal, rgw_data
* Restart daemons
* Ensure /var/run/ceph sockets are appropriately named
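Roughly what I have in mind for the mountpoint/conf/keyring steps, with
"prod" standing in for the old internal name (paths are illustrative,
not tested):

mount --move /var/lib/ceph/osd/prod-0 /var/lib/ceph/osd/ceph-0
mv /etc/ceph/prod.conf /etc/ceph/ceph.conf
mv /etc/ceph/prod.client.admin.keyring /etc/ceph/ceph.client.admin.keyring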
Thanks
— aad
Hi
Sorry for the repost, but I didn't get any response to my first post, so I
will try to rephrase it.
We have several ceph clusters running nautilus (coming from mimic). On one
of the clusters I got a health warning
HEALTH_WARN
1 large omap objects
* when I check on the rados gateway with 'radosgw-admin bucket limit check',
I see 2 buckets over the limit
* the config value for rgw_dynamic_resharding is true
* on 'radosgw-admin reshard stale-instances list' I get an error:
"Resharding disabled in a multisite env, stale instances unlikely from
resharding". This is, however, a single-site cluster.
I have more clusters that are set up exactly the same and that do automatic
index resharding just fine, and the above command does not give an error
there.
Can anyone help me tackle this problem? I've gone through the
documentation and some blogs but have not found a solution yet.
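As a fallback I could probably reshard the two buckets manually, something
like (bucket name and shard count are placeholders):

radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<new-shard-count>

but I would rather understand why the cluster thinks it is a multisite
environment.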
Any help is much appreciated
Thanks
Marcel
On all OSD nodes I'm using vm.min_free_kbytes = 4194304 (4GB). This was one of the first tunings on the cluster.
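We set it persistently via sysctl.d, e.g. (the file name is just our
convention):

# /etc/sysctl.d/99-ceph-osd.conf
vm.min_free_kbytes = 4194304

loaded with "sysctl --system" (or applied one-off with
"sysctl -w vm.min_free_kbytes=4194304").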
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Anthony D'Atri <anthony.datri(a)gmail.com>
Sent: 08 May 2020 10:17
To: Frank Schilder
Subject: Re: [ceph-users] Re: Data loss by adding 2OSD causing Long heartbeat ping times
Just for grins, what is your vm.min_free_kbytes setting?
Depending on the size and number of your OSDs and your workload, 2-4GB is a starting point.
>
> Hi XuYun and Martin,
>
> I checked that already. The OSDs in question have 8GB memory limit and the RAM of the servers is about 50% used. It could be memory fragmentation, which used to be a problem before bitmap allocator. However, my OSDs are configured to use bitmap, at least that is what they claim they are using.
>
> There might be a somewhat more fundamental issue, also related to my recent experience described in "Ceph meltdown, need help". The problems seem to have the same source, busy OSDs get behind with their internal cluster communication because (I suspect) client IO and admin IO (heartbeats, beacons, etc.) are handled in the same queue. If things get a bit busy, admin I/O slows down and avalanches happen.
>
> Also in the case here (long heartbeat) I first saw remapping and peering going on, then the heartbeats times of a few OSDs suddenly shot up. It is possible that some OSDs were already busy with client I/O. The additional peering seems to have the ability to add so much additional load to some OSDs that they start falling behind and getting marked out erroneously, with the consequence of even more peering, load, etc.
>
> I'm working on a new conversation "Cluster outage due to client IO" to have a clean focused thread. I need a bit more time to collect information though. For now, our cluster is up and running healthy.
>
> Best regards,
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> ________________________________________
> From: Martin Verges <martin.verges(a)croit.io>
> Sent: 07 May 2020 12:17:10
> To: XuYun
> Cc: Frank Schilder; ceph-users
> Subject: Re: [ceph-users] Re: Data loss by adding 2OSD causing Long heartbeat ping times
>
> Hello XuYun,
>
> In my experience, I would always disable swap; it won't do any good.
>
> --
> Martin Verges
> Managing director
>
> Mobile: +49 174 9335695
> E-Mail: martin.verges(a)croit.io
> Chat: https://t.me/MartinVerges
>
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
>
> Web: https://croit.io
> YouTube: https://goo.gl/PGE1Bx
>
>
> On Thu, 7 May 2020 at 12:07, XuYun <yunxu(a)me.com> wrote:
> We had some ping back/front problems after upgrading from filestore to bluestore. It turned out to be related to insufficient memory/swap usage.
>
>> On 6 May 2020, at 22:08, Frank Schilder <frans(a)dtu.dk> wrote:
>>
>> To answer some of my own questions:
>>
>> 1) Setting
>>
>> ceph osd set noout
>> ceph osd set nodown
>> ceph osd set norebalance
>>
>> before restart/re-deployment did not harm. I don't know if it helped, because I didn't retry the procedure that led to OSDs going down. See also point 3 below.
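>>
>> For completeness, the flags get cleared again afterwards with the
>> matching unset commands:
>>
>> ceph osd unset noout
>> ceph osd unset nodown
>> ceph osd unset norebalance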
>>
>> 2) A peculiarity of this specific deployment of 2 OSDs was, that it was a mix of OSD deployment and restart after a reboot. I'm working on getting this sorted and this is a different story. For anyone who might find him-/herself in a situation where some OSDs are temporarily down/out with PGs remapped and objects degraded for whatever reason while new OSDs come up, the way to have ceph rescan the down/out OSDs after they come up is to
>>
>> - "ceph osd crush move" the new OSDs temporarily to a location outside the crush sub tree covering any pools (I have such a parking space in the crush hierarchy for easy draining and parking disks)
>> - bring up the down/out OSDs
>> - at this point, the cluster will fall back to the original crush map that was in place when the OSDs went down/out
>> - the cluster will now find all shards that went orphan and health will be restored very quickly
>> - once the cluster is healthy, "ceph osd crush move" the new OSDs back to their desired location
>> - now you will see remapped PGs/misplaced objects, but no degraded objects
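>>
>> For illustration, the temporary move is something like this (the OSD id
>> and bucket names are examples from my own map; on some releases moving a
>> single OSD needs "ceph osd crush create-or-move" with a weight instead):
>>
>> ceph osd crush move osd.120 root=parking
>> # bring up the down/out OSDs and wait for health to recover
>> ceph osd crush move osd.120 host=ceph-05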
>>
>> 3) I still don't have an answer why long heartbeat ping times were observed. There seems to be a more serious issue and this will continue in its own thread "Cluster outage due to client IO" to be opened soon.
>>
>> Best regards,
>> =================
>> Frank Schilder
>> AIT Risø Campus
>> Bygning 109, rum S14
>>
>> ________________________________________
>> From: Frank Schilder <frans(a)dtu.dk>
>> Sent: 25 April 2020 15:34:25
>> To: ceph-users
>> Subject: [ceph-users] Data loss by adding 2OSD causing Long heartbeat ping times
>>
>> Dear all,
>>
>> Two days ago I added a few disks to a ceph cluster and ran into a problem I had never seen before when doing that. The entire cluster was deployed with mimic 13.2.2 and recently upgraded to 13.2.8. This is the first time I added OSDs under 13.2.8.
>>
>> I had a few hosts that I needed to add 1 or 2 OSDs to and I started with one that needed 1. Procedure was as usual:
>>
>> ceph osd set norebalance
>> deploy additional OSD
>>
>> The OSD came up and PGs started peering, so far so good. To my surprise, however, I started seeing health-warnings about slow ping times:
>>
>> Long heartbeat ping times on back interface seen, longest is 1171.910 msec
>> Long heartbeat ping times on front interface seen, longest is 1180.764 msec
>>
>> After peering it looked like it got better and I waited it out until the messages were gone. This took a really long time, at least 5-10 minutes.
>>
>> I went on to the next host and deployed 2 new OSDs this time. Same as above, but with much worse consequences. Apparently, the ping times exceeded a timeout for a very short moment and an OSD was marked out for ca. 2 seconds. Now all hell broke loose. I got health errors with the dreaded "backfill_toofull", undersized PGs and a large amount of degraded objects. I don't know what is causing what, but I ended up with data loss by just adding 2 disks.
>>
>> We have dedicated network hardware and each of the OSD hosts has 20GBit front and 40GBit back network capacity (LACP trunking). There are currently no more than 16 disks per server. The disks were added to an SSD pool. There was no traffic nor any other exceptional load on the system. I have ganglia resource monitoring on all nodes and cannot see a single curve going up. Network, CPU utilisation, load, everything below measurement accuracy. The hosts and network are quite overpowered and dimensioned to host many more OSDs (in future expansions).
>>
>> I have three questions, ordered by how urgently I need an answer:
>>
>> 1) I need to add more disks next week and need a workaround. Will something like this help avoid the heartbeat time-out:
>>
>> ceph osd set noout
>> ceph osd set nodown
>> ceph osd set norebalance
>>
>> 2) The "lost" shards of the degraded objects were obviously still on the cluster somewhere. Is there any way to force the cluster to rescan OSDs for the shards that went orphan during the incident?
>>
>> 3) This smells a bit like a bug that requires attention. I was probably just lucky that I only lost 1 shard per PG. Has something similar been reported before? Is this fixed in 13.2.10? Is it something new? Any settings that need to be looked at? If logs need to be collected, I can do so during my next attempt. However, I cannot risk the data integrity of a production cluster and will, therefore, probably not run the original procedure again.
>>
>> Many thanks for your help and best regards,
>> =================
>> Frank Schilder
>> AIT Risø Campus
>> Bygning 109, rum S14
I'm wondering if anyone still sees issues with ceph-mgr using CPU and
being unresponsive even in recent Nautilus releases. We upgraded our
largest cluster from Mimic to Nautilus (14.2.8) recently - it has about
3500 OSDs. Now ceph-mgr is constantly at 100-200% CPU (1-2 cores), and
becomes unresponsive after a few minutes. The finisher-Mgr queue length
grows (I've seen it at over 100k), similar to the symptoms many reported
with earlier Nautilus releases. This is what it looks like after an
hour of running:
"finisher-Mgr": {
"queue_len": 66078,
"complete_latency": {
"avgcount": 21,
"sum": 2098.408767721,
"avgtime": 99.924227034
}
},
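(That snippet is from the mgr admin socket, i.e. something like
"ceph daemon mgr.<id> perf dump"; the exact daemon name will vary.)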
We have a pretty vanilla manager config, only the balancer is enabled in
upmap mode. Here are the enabled modules:
"always_on_modules": [
"balancer",
"crash",
"devicehealth",
"orchestrator_cli",
"progress",
"rbd_support",
"status",
"volumes"
],
"enabled_modules": [
"restful"
],
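One thing I may try is disabling the only optional module to rule it out:

ceph mgr module disable restful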
Any ideas or outstanding issues in this area?
Andras
Hi,
Are there any more unit-test resources for the CRUSH algorithm besides
the test cases here:
https://github.com/ceph/ceph/tree/master/src/test/crush
Or would more unit testing of CRUSH, beyond these test cases, be
overkill?
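For context, beyond the unit tests I have mostly been exercising mappings
offline with crushtool, along the lines of (options from memory; crushtool
--help lists more test knobs):

ceph osd getcrushmap -o crush.bin
crushtool -i crush.bin --test --num-rep 3 --show-mappings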
BR
Bobby !
Hi everyone,
I have a question about the network setup. From the documentation, it’s recommended to have 2 NICs per host, as described in the picture below:
[Diagram]
In the picture, OSD hosts connect to the cluster network for replication and heartbeats between OSDs, so we definitely need 2 NICs there. But it seems there are no connections between the Ceph MONs and the cluster network. Can we install just 1 NIC on the Ceph MON hosts then?
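For reference, by "2 NICs" I mean the usual ceph.conf split (the subnets
here are just examples):

[global]
public_network = 192.168.1.0/24
cluster_network = 192.168.2.0/24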
I'd appreciate any comments!
Thank you!
--
Nghia Viet Tran (Mr)
Hi all,
I read in some release notes that it is recommended to have your default
data pool replicated and to use erasure-coded pools as additional pools
through layouts. We still have a CephFS with roughly 1 PB of usage and an
EC default pool.
Is there a way to change the default pool, or some other kind of
migration, without having to recreate the FS?
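(By layouts I mean the xattr mechanism, e.g. "setfattr -n
ceph.dir.layout.pool -v my_ec_pool /mnt/cephfs/some_dir"; the pool and
path are just examples.)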
Thanks!
Kenneth
I have a seemingly strange situation. I have three OSDs that I created with Ceph Octopus using the `ceph orch daemon add <host>:device` command. All three were added and everything was great. Then I rebooted the host. Now the daemons won’t start via Docker. When I attempt to run the `docker` command directly, it errors with:
root@balin:/var/lib/ceph/c3d06c94-bb66-4f84-bf78-470a2364b667/osd.12# /usr/bin/docker run --rm --net=host --privileged --group-add=disk --name ceph-c3d06c94-bb66-4f84-bf78-470a2364b667-osd.12 -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=balin -v /var/run/ceph/c3d06c94-bb66-4f84-bf78-470a2364b667:/var/run/ceph:z -v /var/log/ceph/c3d06c94-bb66-4f84-bf78-470a2364b667:/var/log/ceph:z -v /var/lib/ceph/c3d06c94-bb66-4f84-bf78-470a2364b667/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/c3d06c94-bb66-4f84-bf78-470a2364b667/osd.12:/var/lib/ceph/osd/ceph-12:z -v /var/lib/ceph/c3d06c94-bb66-4f84-bf78-470a2364b667/osd.12/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm --entrypoint /usr/bin/ceph-osd docker.io/ceph/ceph:v15 -n osd.12 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix="debug "
debug 2020-05-07T22:58:06.258+0000 7f622a161ec0 0 set uid:gid to 167:167 (ceph:ceph)
debug 2020-05-07T22:58:06.258+0000 7f622a161ec0 0 ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable), process ceph-osd, pid 1
debug 2020-05-07T22:58:06.258+0000 7f622a161ec0 0 pidfile_write: ignore empty --pid-file
debug 2020-05-07T22:58:06.258+0000 7f622a161ec0 -1 bluestore(/var/lib/ceph/osd/ceph-12/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-12/block: (13) Permission denied
debug 2020-05-07T22:58:06.258+0000 7f622a161ec0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-12: (2) No such file or directory
The OSDs are able to come back online if I run `ceph-volume lvm activate --all`. Everything from a usage point of view is fine, even after a reboot; however, I now have errors in the `ceph orch ps` list:
osd.12 balin error 27s ago - <unknown> docker.io/ceph/ceph:v15 <unknown> <unknown>
This is an Ubuntu 20.04 system, FWIW. I haven’t a clue where to go from here. While things are technically working since the OSDs are online and functioning, I’d really like to have them under the `ceph orch` management like the rest of the systems.
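If I understand cephadm correctly, the supported way to start the daemon
would be its per-daemon systemd unit rather than invoking docker by hand,
presumably something like:

systemctl start ceph-c3d06c94-bb66-4f84-bf78-470a2364b667@osd.12

though I assume that ends up running the same docker command shown above.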
~Sean
Hi,
===
NOTE: I do not see my thread on the ceph list for some reason. I don't know whether the list received my question, so sorry if this is a duplicate.
===
I just deployed a new cluster with cephadm instead of ceph-deploy. In the past, if I changed ceph.conf for tweaking, I was able to copy it and apply it to all servers, but I cannot find how to do this with the new cephadm tool. I made a few changes to ceph.conf, but Ceph is unaware of those changes. How can I apply them? I've deployed with docker. Thanks, Gencer.
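P.S. I suspect the answer involves the centralized config store, e.g.
"ceph config set <who> <option> <value>" per option, or
"ceph config assimilate-conf -i /etc/ceph/ceph.conf" to import an
existing file, but I'd like to confirm before relying on it.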
Hello,
Sorry if this has been asked before...
A few months ago I deployed a small Nautilus cluster using
ceph-ansible. The OSD nodes have multiple spinning drives and a PCI
NVMe. Now that the cluster has been stable for a while it's time to
start optimizing performance.
While I can tell that there is a part of the NVMe associated with each
OSD, I'm trying to verify which BlueStore components are using the NVMe
(WAL, DB, cache) and whether the configuration generated by
ceph-ansible (and my settings in osds.yml) is optimal for my hardware.
I've searched around a bit and, while I have found documentation on how
to configure, reconfigure, and repair a BlueStore OSD, I haven't found
anything on how to query the current configuration.
Could anybody point me to a command or link to documentation on this?
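(The closest I've found so far, in case it helps frame the question:
"ceph osd metadata <osd-id>" appears to report the bluefs db/wal device
nodes, and on the OSD node "ceph-bluestore-tool show-label --dev <device>"
dumps the device labels, but I'm not sure either covers cache settings.)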
Thanks.
-Dave
--
Dave Hall
Binghamton University