There is a general documentation meeting called the "DocuBetter Meeting",
and it is held every two weeks. The next DocuBetter Meeting will be on 09
Sep 2020 at 0830 PDT, and will run for thirty minutes. Everyone with a
documentation-related request or complaint is invited.
The meeting will be held here: https://bluejeans.com/908675367
This meeting will cover the reorganization of the Ceph website as well as
Zac's recently-developed workflow that aims to make it possible to move the
good ideas from stale or poorly-formed documentation PRs into the
documentation more quickly. It will also cover the inclusion of HACKING.rst
in the documentation, and the ongoing initiative to improve the Installation
Guide and the Developer Guide.
Send documentation-related requests and complaints to me by replying to
this email and CCing me at zac.dover(a)gmail.com.
The next DocuBetter meeting is scheduled for:
09 Sep 2020 0830 PDT
09 Sep 2020 1630 UTC
10 Sep 2020 0230 AEST
Etherpad: https://pad.ceph.com/p/Ceph_Documentation
Zac's docs whiteboard: https://pad.ceph.com/p/docs_whiteboard
Report Documentation Bugs: https://pad.ceph.com/p/Report_Documentation_Bugs
Meeting: https://bluejeans.com/908675367
Hi Folks,
The weekly performance meeting will be starting in about 25 minutes!
Today, we are going to discuss refactoring onodes in BlueStore to
reduce memory usage and CPU overhead. See you there!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Thanks,
Mark
Hi all,
Currently, there are two ways to create exports: the mgr/volume/nfs module
and the dashboard. Both use the same code [1][2], with modifications, to
create exports.
Recently, there was a meeting to discuss the integration of the dashboard
with the volume/nfs module, and a number of TODO items were identified.
Below is a brief description of the export creation workflow:
1) mgr/volume/nfs module [3]
* It was introduced in Octopus.
* It automates pool and cluster creation with the "ceph nfs cluster create"
  command.
* It currently uses 'cephadm' as the backend. In the future, 'rook' will also
  be supported.
* Default exports can be created with
  'ceph nfs export create cephfs <fsname> <clusterid> <binding>
  [--readonly] [--path=/path/in/cephfs]'.
  Otherwise, the `ceph nfs cluster config set <clusterid> -i <config_file>`
  command can be used to create user-defined exports and even modify the
  ganesha configuration (see the example session after this list).
* RGW exports are not supported [4]. We need someone to help with this.
* Exports can be listed, fetched, and deleted, but they cannot currently be
  modified [5].
* Only NFSv4 is supported. It provides better cache management and
  parallelism than previous versions, along with compound operations and
  lease-based locks.
2) Dashboard [6]
* The pool and NFS cluster need to be created explicitly.
* It also requires the
  "ceph dashboard set-ganesha-clusters-rados-pool-namespace <pool_name>[/<namespace>]"
  command to be run before exports can be created, and the following options
  need to be specified: cluster id, daemons, path, pseudo path, access type,
  squash, security label, protocols [3, 4], transport [UDP, TCP], CephFS user
  id, CephFS name.
* It supports both CephFS and RGW exports.
* Exports can be modified, listed, fetched, and deleted.
* Available since Nautilus.
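For illustration, here is a hypothetical session for both workflows. The
cluster id 'mynfs', filesystem name 'myfs', pool name 'nfs-ganesha', and the
paths below are made-up examples, and the exact arguments of 'ceph nfs
cluster create' and 'ceph osd pool create' may differ between releases:

  # mgr/volume/nfs module: the pool and ganesha daemons are created for you
  ceph nfs cluster create cephfs mynfs
  # default export of filesystem 'myfs', reachable at pseudo path '/cephfs'
  ceph nfs export create cephfs myfs mynfs /cephfs --readonly --path=/volumes/group1
  # or push a user-defined ganesha configuration for the whole cluster
  ceph nfs cluster config set mynfs -i user_defined.conf

  # dashboard: the pool and the ganesha cluster have to be created beforehand
  ceph osd pool create nfs-ganesha 64
  # tell the dashboard which pool (and namespace) holds the config objects
  ceph dashboard set-ganesha-clusters-rados-pool-namespace nfs-ganesha/mynfs
  # the exports themselves are then created through the dashboard UI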
We would like to create a common code base for both, and eventually move in a
direction where the dashboard may use the volume/nfs module for configuring
NFS clusters.
These are the issues we identified in our meeting:
* Difference in user workflow between volume/nfs and the dashboard.
* RGW exports need to be supported in the volume/nfs module.
* The dashboard does not want to depend on the orchestrator in the future for
  fetching the cluster pool and namespace.
* The dashboard creates one config object per daemon, containing the RADOS
  URLs of the export objects.
* In cephadm, all daemons within the cluster watch a single config object,
  which contains the RADOS URLs of the export objects (see the diagram
  below).
rados://$pool/$namespace/export-$i          rados://$pool/$namespace/userconf-nfs.$svc
         (export config)                               (user defined config)

+-----------+   +-----------+   +-----------+       +-----------+
|           |   |           |   |           |       |           |
| export-1  |   | export-2  |   | export-3  |       |  export   |
|           |   |           |   |           |       |           |
+-----+-----+   +-----+-----+   +-----+-----+       +-----+-----+
      ^               ^               ^                   ^
      |               |               |                   |
      +-------+-------+---------------+-------------------+
              | %url
              |
    +---------+---------+
    |                   |    rados://$pool/$namespace/conf-nfs.$svc
    |   conf-nfs.$svc   |    (common config)
    |                   |
    +---------+---------+
              ^
              | watch_url
        +-----+--------------+--------------------+
        |                    |                    |           RADOS
 -------------------------------------------------------------------
        |                    |                    |       CONTAINER
        | watch_url          | watch_url          | watch_url
        |                    |                    |
+-------+--------+   +-------+--------+   +-------+--------+
|                |   |                |   |                |  /etc/ganesha/ganesha.conf
|   nfs.$svc.a   |   |   nfs.$svc.b   |   |   nfs.$svc.c   |  (bootstrap config)
|                |   |                |   |                |
+----------------+   +----------------+   +----------------+
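To make the diagram concrete: the common config object essentially contains
%url directives pointing at the other objects, roughly like the following
(the exact contents are generated for you; the object names follow the
diagram):

  %url "rados://$pool/$namespace/export-1"
  %url "rados://$pool/$namespace/export-2"
  %url "rados://$pool/$namespace/export-3"
  %url "rados://$pool/$namespace/userconf-nfs.$svc"

Each daemon's bootstrap config in /etc/ganesha/ganesha.conf then points its
watch URL at rados://$pool/$namespace/conf-nfs.$svc, so a change to the
common config is picked up by all daemons in the cluster.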
In our next meeting, we’d like to decide on a way forward for reconciling
these issues.
[1] https://github.com/ceph/ceph/blob/master/src/pybind/mgr/volumes/fs/nfs.py
[2] https://github.com/ceph/ceph/blob/master/src/pybind/mgr/dashboard/services/…
[3] https://docs.ceph.com/docs/master/cephfs/fs-nfs-exports
[4] https://tracker.ceph.com/issues/47172
[5] https://tracker.ceph.com/issues/45746
[6] https://docs.ceph.com/docs/master/mgr/dashboard/#nfs-ganesha-management
Thanks,
Varsha
Hi Joao & Kefu,
I have a question about
https://github.com/ceph/ceph/commit/e62269c8929e414284ad0773c4a3c82e43735e4e
which was backported and released into v14.2.10.
My understanding is that the intention was to ignore the osd_epoch of
down osds, so that we can trim osdmaps up to the min of (a) the lowest
per-pool clean epoch and (b) the lowest clean epoch of all up osds.
(See [1] and [2] for motivation).
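For example (as I understand the intent): if a down+out osd is stuck
reporting an old epoch, say 1000, while every up osd reports a clean epoch
>= 2000, then the maps should still be trimmable up to roughly 2000 instead
of being pinned at 1000. (The epoch numbers here are made up for
illustration.)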
Before this commit, get_min_last_epoch_clean would loop over *all* osd
epochs and lower the floor if needed.
Now after the commit we only check the epochs of the *out* osds.
Isn't that logic inverted? Shouldn't we be looping over all the *in* osds? [3]
This commit has passed by many eyes already so I must be confused...
Please help :-/
(I ask because we already have evidence running 14.2.11 that maps are
still not trimmed when we mark out a broken osd -- we had to restart
the mon leader to provoke the trimming).
Thanks,
Dan
[1] https://tracker.ceph.com/issues/37875#note-6
[2] https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/6KSOLVLWR6HZOVUY7U…
[3]
@@ -2251,7 +2251,7 @@ epoch_t OSDMonitor::get_min_last_epoch_clean() const
// don't trim past the oldest reported osd epoch
for (auto [osd, epoch] : osd_epochs) {
if (epoch < floor &&
- osdmap.is_out(osd)) {
+ osdmap.is_in(osd)) {
floor = epoch;
}
}
Hi,
I want to know the number of connections in Ceph. I think the connections are
mainly OSD-to-OSD connections.
Is the following statement correct?
Each OSD is connected to every other OSD, and there may be more than one
connection between two OSDs.
If there is only one connection per OSD pair, the number of connections is
N(N-1)/2. If there are k connections per OSD pair, the number of connections
is kN(N-1)/2.
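For example, by this counting, with N = 10 OSDs and k = 2 connections per
pair, that would be 2 * 10 * 9 / 2 = 90 connections.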
Thanks for your help.
Best regards.
Congmin Yin