Hi all,
At the Ceph-Dashboard meetings we have discussed a couple of times how to
improve the PR review & merge process. Sometimes Pull Requests get stalled
for no clear reason: they have positive reviews, perhaps some
comments/suggestions, but no clear rejections. And, after a certain
point, the longer they remain open, the harder they seem to get merged (or
closed).
An idea taken from the Ceph-Ansible team would be to 'code' the set of rules
that a PR needs to meet in order to be merged, and to use some tool (e.g.
Mergify <https://mergify.io>) to enforce them. This would change the
contribution pipeline from a push model (PRs are not merged unless manual
action is taken) to a pull model (once the rules are met, PRs get merged
unless manual action is taken).
This PR <https://github.com/ceph/ceph/pull/29496> brings Mergify to
Dashboard, with the following set of rules (a configuration sketch follows
the list):
- PR base branch is 'master'
- PR is labeled 'dashboard'
- PR title starts with "mgr/dashboard: "
- All Jenkins checks pass (except arm64)
- 2 review approvals are required, with no changes requested and no
unaddressed comments. All requested reviewers must have issued a review
(perhaps too strict).
- As Dashboard's CODEOWNERS PR
<https://github.com/ceph/ceph/pull/29451> is already merged, at least
one of the approvals must come from a @ceph/dashboard team member.
- 'DNM' label not present
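For illustration, a minimal .mergify.yml sketch of those rules could look
like this (condition syntax as per Mergify's docs; the exact file in the
PR may differ, and the Jenkins status name below is a placeholder):

pull_request_rules:
  - name: automatic merge for Dashboard PRs
    conditions:
      - base=master
      - label=dashboard
      - label!=DNM
      - "title~=^mgr/dashboard: "
      - "#approved-reviews-by>=2"
      - "#changes-requested-reviews-by=0"
      - approved-reviews-by=@ceph/dashboard
      # placeholder Jenkins check name; arm64 intentionally left out
      - status-success=ceph-pr-default
    actions:
      merge:
        method: merge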
One downside identified is that Mergify does not allow specifying a message
for the merge commit (e.g. to add "Reviewed-by:" metadata). I'm preparing
a PR to the Mergify repo <https://github.com/Mergifyio/mergify-engine>,
but the GitHub
Reviews <https://developer.github.com/v3/pulls/reviews/> API does not
easily provide every reviewer's e-mail (nor does the Users API
<https://developer.github.com/v3/users/>), so it would need to be extracted
from the Events API <https://developer.github.com/v3/activity/events/>
(which is kind of tricky). And this use case seems to be specific to us,
right?
So, would it be ok to have PR merge commit messages with "Reviewed-By:
FirstName LastName <@github_login>" or even with no message at all?
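For reference, a rough Python sketch of the lookup I'm describing, using
only the Reviews and Users endpoints and falling back to the @github_login
form when no public e-mail is exposed (the function name and fallback
format are mine, not Mergify's):

import requests

API = "https://api.github.com"

def reviewed_by_lines(owner, repo, pr_number, token):
    """Build "Reviewed-by:" trailers for a PR's approving reviewers."""
    headers = {"Authorization": "token " + token}
    reviews = requests.get(
        "%s/repos/%s/%s/pulls/%d/reviews" % (API, owner, repo, pr_number),
        headers=headers).json()
    lines = []
    for review in reviews:
        if review["state"] != "APPROVED":
            continue
        login = review["user"]["login"]
        user = requests.get("%s/users/%s" % (API, login),
                            headers=headers).json()
        # The Users API only exposes a *public* e-mail, which is often
        # null; fall back to the @login form instead of mining the
        # Events API.
        email = user.get("email") or "@" + login
        name = user.get("name") or login
        lines.append("Reviewed-by: %s <%s>" % (name, email))
    return lines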
BTW, apart from the "merge" action, Mergify provides other actions
<https://doc.mergify.io/actions.html> that could be useful to automate
other daily manual steps (a sketch follows the list):
- Backport the PR to another branch
- Add a comment to the PR
- Add or remove labels (e.g. if the PR title matches "^mgr/dashboard: ",
add a "dashboard" label).
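E.g. a hedged sketch of the last two, in the same .mergify.yml format (the
backport trigger label here is hypothetical, just to show the shape):

pull_request_rules:
  - name: auto-label Dashboard PRs
    conditions:
      - "title~=^mgr/dashboard: "
    actions:
      label:
        add:
          - dashboard
  - name: backport to nautilus on demand
    conditions:
      - label=backport-nautilus   # hypothetical trigger label
    actions:
      backport:
        branches:
          - nautilus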
Any feedback welcome!
Kind Regards,
Ernesto Puerta
He / Him / His
Senior Software Engineer, Ceph
Red Hat <https://www.redhat.com/>
Hi guys,
As a distributed filesystem, CephFS shares the whole cluster's resources
(e.g. IOPS and throughput) among all of its clients. In some cases a few
clients can monopolize those resources, so QoS for CephFS is needed in
most deployments.
I have drafted two kinds of design, as follows:
1. All clients use the same QoS setting, as implemented in this PR
(PR: https://github.com/ceph/ceph/pull/29266). Since there may be
multiple mount points, limiting the total IO would also limit the number
of mount points, so in my implementation the total IOPS & BPS are not
limited: each client is throttled individually.
2. All clients share a single total QoS setting (not implemented). I see
two kinds of use cases in detail:
2.1 Set a total limit and cap every client at the average:
total_limit/clients_num.
2.2 Set a total limit and let the MDS decide each client's share based on
its historical IOPS & BPS.
Based on the token bucket algorithm, I implemented QoS for CephFS.
The basic idea is as follows:
Set the QoS info as one of the dir's xattrs;
All clients accessing the same dirs share the same QoS setting;
Similar to the quota config flow, when the MDS receives a QoS setting, it
broadcasts the message to all clients;
The limit can be changed online.
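To make the mechanism concrete, here is a minimal, illustrative token
bucket sketch in Python (the PR implements this in the C++ client; all
names below are mine):

import time
import threading

class TokenBucket:
    # 'limit' tokens are added per second, up to a capacity of 'burst',
    # mirroring the ceph.qos.limit.* / ceph.qos.burst.* settings.
    def __init__(self, limit, burst):
        self.limit = float(limit)
        self.burst = float(burst)
        self.tokens = self.burst
        self.stamp = time.monotonic()
        self.lock = threading.Lock()

    def get(self, need=1.0):
        # Block until 'need' tokens are available, then consume them.
        # NB: a single request needing more than 'burst' tokens loops
        # forever here -- that is the bps vs. block-size problem
        # mentioned below.
        while True:
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.burst, self.tokens +
                                  (now - self.stamp) * self.limit)
                self.stamp = now
                if self.tokens >= need:
                    self.tokens -= need
                    return
                wait = (need - self.tokens) / self.limit
            time.sleep(wait)

A client would then call, say, bucket.get(1) before each I/O for an IOPS
limit, or bucket.get(request_bytes) for a BPS limit.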
We configure QoS as follows; it supports the
{limit,burst} x {iops,bps,read_iops,read_bps,write_iops,write_bps}
settings. Some examples:
setfattr -n ceph.qos.limit.iops -v 200 /mnt/cephfs/testdirs/
setfattr -n ceph.qos.burst.read_bps -v 200 /mnt/cephfs/testdirs/
getfattr -n ceph.qos.limit.iops /mnt/cephfs/testdirs/
getfattr -n ceph.qos /mnt/cephfs/testdirs/
But there is also a big problem. For the bps settings
(bps/write_bps/read_bps), if the bps limit is lower than a request's block
size, the client will block until it has accumulated enough tokens (e.g. a
single 4 MiB write against a 1 MiB/s limit has to wait about 4 seconds
before it can proceed, and with very low limits the wait becomes
impractically long).
Any suggestions will be appreciated, thanks!
PR: https://github.com/ceph/ceph/pull/29266
I am also finishing the QoS implementation in the CephFS kernel client and
will post that code when it is ready.
Hi Ceph community. I'm Jacob, and I am currently surveying solutions for
CephFS offsite backup. I have read (from Cephalocon talks and the issue
tracker) that Ceph plans multi-site CephFS and CephFS snapshot mirroring
for the Octopus release. However, I am still wondering whether any
solution is available right now, and it would be really helpful if you
could help me answer these questions:
How can I get the diff between two CephFS snapshots? I know the rctime
xattr is available, but is there any quicker solution? (A sketch of the
approach I have in mind follows.)
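For context, the rctime approach I am considering, as a rough Python
sketch (ceph.dir.rctime is CephFS's recursive-ctime virtual xattr; the
exact string format may vary between versions, so the parsing here is a
guess):

import os

def changed_dirs(root, since):
    # Recursively collect directories whose recursive ctime is newer
    # than 'since' (a Unix timestamp in seconds), pruning unchanged
    # subtrees.
    result = []

    def walk(path):
        raw = os.getxattr(path, "ceph.dir.rctime").decode()
        seconds = int(raw.split(".")[0])
        if seconds <= since:
            return  # nothing below here changed; skip the subtree
        result.append(path)
        for entry in os.scandir(path):
            if entry.is_dir(follow_symlinks=False):
                walk(entry.path)

    walk(root)
    return result

Changed files inside the reported directories would still need an
mtime/ctime check, so this only narrows the search; hence my question
about something quicker.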
Is there any way to specify how many OSDs of a PG are required to commit
before a client write is acknowledged? For example, with pg1 => [osd.1,
osd.4, osd.7], when a client writes objA it would get the commit ack once
osd.1 and osd.4 have finished, and osd.7 would catch up later via Ceph's
recovery.
Thanks for taking the time to read this.
Best regards