We are seeking information on configuring Ceph to work with Noobaa and
NextCloud.
Randy
--
Randy Morgan
CSR
Department of Chemistry/BioChemistry
Brigham Young University
randym(a)chem.byu.edu
Hi list,
We're wondering if Ceph Nautilus packages will be provided for Ubuntu
Focal Fossa (20.04)?
You might wonder why one would not just use Ubuntu Bionic (18.04)
instead of the latest LTS. Here is why: there is a glibc bug in Ubuntu
Bionic that *might* affect Open vSwitch (OVS) users [1].
We had quite a few issues with OVS deadlocks on hypervisors, and do not
want to risk experiencing the same issues on our Ceph cluster(s). I'm
not sure how many of you use OVS for bridging / bonding, but for those
who do, running Ceph (Nautilus / Octopus) on 20.04 would be preferred.
Gr. Stefan
[1]: https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1839592
Hi,
I've read that Ceph has some InfluxDB reporting capabilities inbuilt (https://docs.ceph.com/docs/master/mgr/influx/).
However, Telegraf, the metrics collection agent for InfluxDB, also has a Ceph plugin (https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ceph).
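For reference, this is roughly how the two approaches are wired up, as far as I understand the docs linked above (the hostname and credentials below are just placeholders):

# mgr influx module: the active mgr pushes stats to InfluxDB
ceph mgr module enable influx
ceph config set mgr mgr/influx/hostname influxdb.example.com
ceph config set mgr mgr/influx/database ceph
ceph config set mgr mgr/influx/username ceph
ceph config set mgr mgr/influx/password secret

# Telegraf ceph input: Telegraf pulls from the local admin sockets (telegraf.conf)
[[inputs.ceph]]
  socket_dir = "/var/run/ceph"
  gather_admin_socket_stats = true
  gather_cluster_stats = false

As far as I can tell, the first only needs the mgr to be able to reach InfluxDB, while the second needs Telegraf installed on every node you want admin-socket stats from.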
Just curious what people's thoughts on the two are, or what they are using in production?
Which is easier to deploy/maintain, have you found? Or more useful for alerting, or tracking performance gremlins?
Thanks,
Victor
Hi,
I was just checking on a few (13) IPv6-only Ceph clusters and I noticed
that they couldn't send their Telemetry data anymore:
telemetry.ceph.com has address 8.43.84.137
This server used to have Dual-Stack connectivity while it was still
hosted at OVH.
It seems to have moved to Red Hat, but lost IPv6 connectivity in the process.
How can we get this back?
Wido
Hello,
I have a Ceph cluster, version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)
4 nodes - each node 11 HDD, 1 SSD, 10Gbit network
The cluster was empty, a fresh install. We filled it with data (small objects) using RGW.
The cluster is now used for testing, so no client was using it during the admin operations mentioned below.
After a while (7TB of data / 40M objects uploaded) we decided to increase pg_num from 128 to 256 to spread the data better. To speed up this operation, I set
ceph config set mgr target_max_misplaced_ratio 1
so that the whole cluster rebalances as quickly as it can.
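For context, the split itself was done with the usual pool command, roughly like this (the pool name here is just an example, in our case the RGW data pool):

ceph osd pool set default.rgw.buckets.data pg_num 256

As far as I understand, on Nautilus the mgr then raises pg_num/pgp_num step by step, and target_max_misplaced_ratio controls how much data may be misplaced at once, which is why I set it to 1.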
I have 3 issues/questions below:
1)
I noticed that the manual increase from 128 to 256 caused approx. 6 OSDs to restart, with this logged:
heartbeat_map clear_timeout 'OSD::osd_op_tp thread 0x7f8c84b8b700' had suicide timed out after 150
After a while the OSDs were back, so I continued with my tests.
My question: was increasing the number of PGs with the maximal target_max_misplaced_ratio too much for those OSDs? Is it not recommended to do it this way? I had no problem with this increase before, but the cluster configuration was slightly different and it was running Luminous.
2)
The rebuild was still slow, so I increased the number of backfills
ceph tell osd.* injectargs "--osd-max-backfills 10"
and reduced the recovery sleep time
ceph tell osd.* injectargs "--osd-recovery-sleep-hdd 0.01"
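(For reference, as far as I know the Nautilus defaults are osd_max_backfills = 1 and osd_recovery_sleep_hdd = 0.1, so reverting to the defaults would be:

ceph tell osd.* injectargs "--osd-max-backfills 1"
ceph tell osd.* injectargs "--osd-recovery-sleep-hdd 0.1"

in case anyone wants to compare behaviour with and without the more aggressive settings.)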
and after a few hours I noticed that some of my OSDs were restarted during recovery; in the log I can see
...
2020-03-21 06:41:28.343 7fe1f8bee700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fe1da154700' had timed out after 15
2020-03-21 06:41:28.343 7fe1f8bee700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fe1da154700' had timed out after 15
2020-03-21 06:41:36.780 7fe1da154700  1 heartbeat_map clear_timeout 'OSD::osd_op_tp thread 0x7fe1da154700' had timed out after 15
2020-03-21 06:41:36.888 7fe1e7769700  0 log_channel(cluster) log [WRN] : Monitor daemon marked osd.7 down, but it is still running
2020-03-21 06:41:36.888 7fe1e7769700  0 log_channel(cluster) log [DBG] : map e3574 wrongly marked me down at e3573
2020-03-21 06:41:36.888 7fe1e7769700  1 osd.7 3574 start_waiting_for_healthy
I observed the network usage graphs, and network utilization was low during recovery (the 10Gbit links were not saturated).
So can a lot of IOPS on an OSD also cause its heartbeat operations to time out? I thought the OSD uses separate threads and that HDD timeouts do not influence heartbeats to other OSDs and the MONs. It looks like that is not true.
3)
After the OSD was wrongly marked down, I can see that the cluster has degraded objects. There were no degraded objects before that.
Degraded data redundancy: 251754/117225048 objects degraded (0.215%), 8 pgs degraded, 8 pgs undersized
Does that mean this OSD disconnection caused degraded data? How is that possible when no OSD was lost? The data should still be on that OSD, and after peering everything should be OK. With Luminous I had no such problem: after the OSD came back up, degraded objects were recovered/found within a few seconds and the cluster was healthy again.
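(The degraded/undersized state above is from the health output; for anyone following along, it can be inspected with e.g.

ceph health detail
ceph pg dump_stuck degraded

both of which should exist on Nautilus as far as I know.)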
Thank you very much for any additional info. I can perform additional tests you recommend, because the cluster is used for testing purposes now.
With regards
Jan Pekar
--
============
Ing. Jan Pekař
jan.pekar(a)imatic.cz
----
Imatic | Jagellonská 14 | Praha 3 | 130 00
http://www.imatic.cz | +420326555326
============
--
Has anyone ever tried using this feature? I've added it to the [global]
section of the ceph.conf on my POC cluster, but I'm not sure how to tell if
it's actually working. I did find a reference to this feature via Google, and
they had it in their [OSD] section. I've tried that too.
TIA
Adam
Dear all,
maybe someone can give me a pointer here. We are running OpenNebula with Ceph RBD as a back-end store. We have a pool of spinning disks to create large low-demand data disks, mainly for backups and other cold storage. Everything is fine when using Linux VMs. However, Windows VMs perform poorly; they are roughly a factor of 20 slower than a similarly created Linux VM.
If anyone has pointers what to look for, we would be very grateful.
The OpenNebula installation is more or less default. The current OS and libvirt versions we use are:
Centos 7.6 with stock kernel 3.10.0-1062.1.1.el7.x86_64
libvirt-client.x86_64 4.5.0-23.el7_7.1 @updates
qemu-kvm-ev.x86_64 10:2.12.0-33.1.el7 @centos-qemu-ev
Some benchmark results from good to worse workloads:
rbd bench --io-size 4M --io-total 4G --io-pattern seq --io-type write --io-threads 16 : 450MB/s
rbd bench --io-size 4M --io-total 4G --io-pattern seq --io-type write --io-threads 1 : 230MB/s
rbd bench --io-size 1M --io-total 4G --io-pattern seq --io-type write --io-threads 1 : 190MB/s
rbd bench --io-size 64K --io-total 4G --io-pattern seq --io-type write --io-threads 1 : 150MB/s
rbd bench --io-size 64K --io-total 1G --io-pattern rand --io-type write --io-threads 1 : 26MB/s
dd with conv=fdatasync gives an awesome 500MB/s inside a Linux VM for a sequential write of 4GB.
We copied a couple of large ISO files inside the Windows VM and for the first ca. 1 to 1.5G it performs as expected. Thereafter, however, write speed drops rapidly to ca. 25MB/s and does not recover. It is almost as if Windows translates large sequential writes to small random writes.
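For reference, the small-random-write pattern we suspect Windows degenerates into can be approximated with the same tool as the benchmarks above (the image spec, omitted in the commands above, is just a placeholder here):

rbd bench --io-size 4K --io-total 1G --io-pattern rand --io-type write --io-threads 1 <pool>/<image>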
If anyone has seen and solved this before, please let us know.
Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
As a follow-up to our recent memory problems with OSDs (with high pglog
values:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/LJPJZPBSQRJ…
), we also see high buffer_anon values. E.g. more than 4 GB, with "osd
memory target" set to 3 GB. Is there a way to restrict it?
As it is called "anon", I guess that it would first be necessary to find
out what exactly is behind this?
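(For anyone who wants to check their own OSDs: the mempool counters, including buffer_anon, can be dumped from the admin socket, e.g.

ceph daemon osd.0 dump_mempools

where osd.0 is just an example.)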
Well, maybe it is just as Wido said: with lots of small objects there will be several problems.
Cheers
Harry
Hi everyone:
There are two types of QoS in Ceph (one based on the token bucket algorithm, the other based on mClock).
Which one can I use in a Nautilus production environment? Thank you
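To be concrete, these are the two mechanisms I mean, as far as I understand them (the pool/image name and the limit value below are just placeholders):

# token bucket based QoS: implemented in librbd, e.g. per image
rbd config image set mypool/myimage rbd_qos_iops_limit 500
# mClock: selected via the OSD op queue (needs an OSD restart to take effect)
ceph config set osd osd_op_queue mclock_opclass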