Hi all,
bdev_enable_discard has been in ceph for several major releases now
but it is still off by default.
Has anyone tried it recently -- is it safe to use? And do you have perf
numbers from before and after enabling it?
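For reference, this is roughly how I would expect to flip it for a test (untested sketch; osd.12 is just a placeholder for a single test OSD, and the OSDs likely need a restart for the setting to take effect):

  # check the current value for the osd section
  ceph config get osd bdev_enable_discard
  # enable it for all OSDs...
  ceph config set osd bdev_enable_discard true
  # ...or only for a single test OSD first
  ceph config set osd.12 bdev_enable_discard true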
Cheers, Dan
All;
We run 2 Nautilus clusters, with RADOSGW replication (14.2.11 --> 14.2.16).
Initially our bucket grew very quickly, as I was loading old data into it, and we quickly ran into Large OMAP Object warnings.
I have since done a couple of manual reshards, which fixed the warning on the primary cluster. I have never been able to get rid of the issue on the cluster holding the replica.
A prior conversation on this list led me to this command:
radosgw-admin reshard stale-instances list --yes-i-really-mean-it
The results of which look like this:
[
"nextcloud-ra:f91aeff8-a365-47b4-a1c8-928cd66134e8.185262.1",
"nextcloud:f91aeff8-a365-47b4-a1c8-928cd66134e8.53761.6",
"nextcloud:f91aeff8-a365-47b4-a1c8-928cd66134e8.53761.2",
"nextcloud:f91aeff8-a365-47b4-a1c8-928cd66134e8.53761.5",
"nextcloud:f91aeff8-a365-47b4-a1c8-928cd66134e8.53761.4",
"nextcloud:f91aeff8-a365-47b4-a1c8-928cd66134e8.53761.3",
"nextcloud:f91aeff8-a365-47b4-a1c8-928cd66134e8.53761.1",
"3520ae821f974340afd018110c1065b8/OS Development:f91aeff8-a365-47b4-a1c8-928cd66134e8.4298264.1",
"10dfdfadb7374ea1ba37bee1435d87ad/volumebackups:f91aeff8-a365-47b4-a1c8-928cd66134e8.4298264.2",
"WorkOrder:f91aeff8-a365-47b4-a1c8-928cd66134e8.44130.1"
]
I find this particularly interesting, as the nextcloud-ra, <swift>/OS Development, <swift>/volumebackups, and WorkOrder buckets no longer exist.
When I run:
for obj in $(rados -p 300.rgw.buckets.index ls | grep f91aeff8-a365-47b4-a1c8-928cd66134e8.3512190.1); do printf "%-60s %7d\n" $obj $(rados -p 300.rgw.buckets.index listomapkeys $obj | wc -l); done
I get the expected 64 entries, with counts around 20000 +/- 1000.
Are the above-listed stale instances OK to delete? If so, how do I go about doing so?
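The obvious candidate seems to be the companion command below, but I have not run it yet, and I am not sure whether stale-instance cleanup is even advisable on a multisite/replicated setup, so please treat it as an untested sketch:

  radosgw-admin reshard stale-instances rm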
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
DHilsbos(a)PerformAir.com
www.PerformAir.com
Hello,
When bootstrapping a new Ceph Octopus cluster with "cephadm bootstrap", how can I tell cephadm bootstrap NOT to install the ceph-grafana container?
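I came across --skip-monitoring-stack, which looks like it would skip the whole monitoring stack (grafana, prometheus, alertmanager, node-exporter), but I have not tested it; as a sketch, with the monitor IP as a placeholder:

  cephadm bootstrap --mon-ip 192.0.2.10 --skip-monitoring-stack

and if the cluster is already bootstrapped, removing the deployed service afterwards should also be possible:

  ceph orch rm grafana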
Thank you very much in advance for your answer.
Best regards,
Mabi
Hi everyone,
In June 2021, we're hosting a month of Ceph presentations, lightning
talks, and unconference sessions such as BOFs. There is no
registration or cost to attend this event.
The CFP is now open until May 12th.
https://ceph.io/events/ceph-month-june-2021/cfp
Speakers will receive confirmation that their presentation is accepted
and further instructions for scheduling by May 16th.
The schedule will be available on May 19th.
Join the Ceph community as we discuss how Ceph, the massively
scalable, open-source, software-defined storage system, can radically
improve the economics and management of data storage for your
enterprise.
--
Mike Perez
Hi,
I'm currently testing some disaster scenarios.
When removing one OSD/monitor host, I see that a new quorum is built
without the missing host. The missing host is listed in the dashboard
under "Not In Quorum", so probably everything is as expected.
After restarting the host, I see that the OSDs come back online and
everything appears to be working; however, the quorum still consists of
only two monitors.
Looking at the services, I can see that the monitor daemon is somehow
stopped. Is this expected, and do I have to start it manually somehow, or
should it come back on its own? The whole cluster is deployed using
cephadm (the node was the initial bootstrap one, if that is important).
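For reference, this is roughly how I would check and start it by hand (the <fsid> and <hostname> parts are placeholders):

  # list the daemons the orchestrator knows about and their state
  ceph orch ps | grep mon
  # ask cephadm to start the stopped monitor daemon
  ceph orch daemon start mon.<hostname>
  # or, on the host itself, via the systemd unit cephadm created
  systemctl status ceph-<fsid>@mon.<hostname>.service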
Greetings, Kai
Hello,
what is the currently preferred method, in terms of stability and
performance, for exporting a CephFS directory with Samba?
- locally mount the CephFS directory and export it via Samba?
- using the "vfs_ceph" module of Samba?
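For the second option, a minimal vfs_ceph share definition would look roughly like the untested sketch below (share name, path, and the cephx user "samba" are placeholders):

  [cephfs]
      path = /exported/dir
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      # the path is not a local kernel mount, so kernel share modes must be off
      kernel share modes = no
      read only = no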
Best,
Martin
Backend: XFS for the filestore back-end.
In our testing, we found that performance decreases when cluster usage exceeds the default nearfull ratio (85%). Is this by design?
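For reference, a quick (untested) way to read the ratios currently in effect:

  # shows full_ratio, backfillfull_ratio and nearfull_ratio
  ceph osd dump | grep ratio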
Environment: Ceph Nautilus 14.2.8 Object Storage
Data nodes: 12 HDD OSD drives, each with 12 TB capacity, plus 2 SSD OSD drives for the RGW bucket index pool and RGW meta pool.
Custom configs (since we are dealing mostly with smaller-sized objects):
bluestore_min_alloc_size_ssd 4096
bluestore_min_alloc_size_hdd 4096
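For reference, an untested sketch of setting these through the monitor config database (keep in mind bluestore_min_alloc_size is fixed when an OSD is created, so it only affects OSDs created after the change):

  ceph config set osd bluestore_min_alloc_size_ssd 4096
  ceph config set osd bluestore_min_alloc_size_hdd 4096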
Observations from cosbench performance tests:

Stage               Op-Type   Op-Count    Byte-Count        Avg-ResTime
s7-read1KB 48W      read      2004202     2004202000        43.11
s13-read2KB 48W     read      2013906     4027812000        42.9
s19-read4KB 48W     read      2014701     8058804000        42.88
s25-read8KB 48W     read      2002337     16018696000       43.15
s31-read16KB 48W    read      1987785     31804560000       43.46
s37-read32KB 48W    read      1976190     63238080000       43.7
s43-read64KB 48W    read      1929183     123467712000      44.78
s49-read128KB 48W   read      9965032     1275524096000     8.67
s55-read256KB 48W   read      6505554     1665421824000     13.28
The response time improves drastically when the object size is greater than 64KB. What could be the reason?
Thanks,
Ronnie
I'm happy to announce another release of the go-ceph API
bindings. This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.9.0
Changes in the release are detailed in the link above.
The bindings aim to play a similar role to the "pybind" python bindings in the
ceph tree but for the Go language. These API bindings require the use of cgo.
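For anyone who wants to try the bindings in a module-based project, the usual go-get flow should work (untested sketch; the cgo build additionally needs the Ceph development headers/libraries installed):

  go get github.com/ceph/go-ceph@v0.9.0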
There are already a few consumers of this library in the wild, including the
ceph-csi project.
Specific questions, comments, bugs etc are best directed at our github issues
tracker or github discussions forum.
--
John Mulligan
phlogistonjohn(a)asynchrono.us
jmulligan(a)redhat.com