Hi Zheng and Patrick,
During the last CDM, you mentioned that client dirty caps should be
forced to flush after taking snapshots and before doing rstat
propagation. Do you think that forcing the dirty-cap flush should be
enclosed in the rstat propagation operation, or should it be a separate
stand-alone operation? Thanks :-)
Please add all PRs that are ready for nautilus v14.2.2 ASAP and tag
them with the labels "nautilus-batch-1" and "needs-qa".
The current plan is to start QE validation next week.
Thx
YuriW
We're glad to announce the sixth bugfix release of the Mimic v13.2.x
long term stable release series. We recommend that all Mimic users
upgrade. We thank everyone for contributing towards this release.
Notable Changes
---------------
* Ceph v13.2.6 now packages python bindings for python3.6 instead of
python3.4, because EPEL7 recently switched from python3.4 to
python3.6 as the native python3. See the announcement[1]
for more details on the background of this change.
For a detailed changelog, please refer to the official blog post entry
at https://ceph.com/releases/v13-2-6-mimic-released/
[1]: https://lists.fedoraproject.org/archives/list/epel-announce@lists.fedorapro…
Getting Ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-13.2.6.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 7b695f835b03642f85998b2ae7b6dd093d9fbce4
--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
We have several virtual xattrs in cephfs which return various values as
strings. xattrs don't necessarily hold strings, however, so we need to
include the terminating NULL byte in the length that we return.
Furthermore, the getxattr manpage says that we should return -ERANGE if
the buffer is too small to hold the resulting value. Let's start doing
that here as well.
URL: https://bugzilla.redhat.com/show_bug.cgi?id=1717454
Reported-by: Tomas Petr <tpetr(a)redhat.com>
Signed-off-by: Jeff Layton <jlayton(a)kernel.org>
---
fs/ceph/xattr.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/ceph/xattr.c b/fs/ceph/xattr.c
index 6621d27e64f5..57f1bd83c21c 100644
--- a/fs/ceph/xattr.c
+++ b/fs/ceph/xattr.c
@@ -803,8 +803,14 @@ ssize_t __ceph_getxattr(struct inode *inode, const char *name, void *value,
 		if (err)
 			return err;
 		err = -ENODATA;
-		if (!(vxattr->exists_cb && !vxattr->exists_cb(ci)))
-			err = vxattr->getxattr_cb(ci, value, size);
+		if (!(vxattr->exists_cb && !vxattr->exists_cb(ci))) {
+			/* Make sure result will fit in buffer */
+			if (size > 0) {
+				if (size < vxattr->getxattr_cb(ci, NULL, 0) + 1)
+					return -ERANGE;
+			}
+			err = vxattr->getxattr_cb(ci, value, size) + 1;
+		}
 		return err;
 	}
--
2.21.0
Hi Folks,
Welcome back from Cephalocon! Perf meeting is on in ~15 minutes. I'm
sending this both to the old and the new ceph development lists, but in
the future these emails will only be sent to the new dev(a)ceph.io list so
please remember to register!
Today we will talk a bit about some of the discussion that happened at
cephalocon around the new community performance hardware, plans for
incerta, Jenkins performance testing, autotuning, and trocksdb.
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Thanks,
Mark
We follow upstream releases and run mostly Nautilus now, probably among
the most aggressive serious production users (i.e., tens of PBs and up).
I will vote for November for several reasons:
1. Q4 is the holiday season, when production rollouts are usually
blocked, especially storage-related changes. A November release gives
teams more time to prepare, test, and LnP-validate the new release, as
well as catch up with new features.
2. Q4/Q1 is usually the planning season. Having the upstream release
out and tested, so that the readiness of new features is known, greatly
helps when planning next year's features and offerings.
3. Users have the whole year to migrate their
provisioning/monitoring/deployment/remediation systems to the new
version, and enough time to fix and stabilize the surrounding systems
before the next holiday season.
A release in February or March puts Q4 right in the middle of the
cycle, where a lot of changes land at the last minute (month); in that
case, little can be tested or forecast based on the state of the art
in Q4.
-Xiaoxi
Linh Vu <vul(a)unimelb.edu.au> wrote on Thu, Jun 6, 2019 at 8:32 AM:
>
> I think 12 months cycle is much better from the cluster operations perspective. I also like March as a release month as well.
> ________________________________
> From: ceph-users <ceph-users-bounces(a)lists.ceph.com> on behalf of Sage Weil <sage(a)newdream.net>
> Sent: Thursday, 6 June 2019 1:57 AM
> To: ceph-users(a)ceph.com; ceph-devel(a)vger.kernel.org; dev(a)ceph.io
> Subject: [ceph-users] Changing the release cadence
>
> Hi everyone,
>
> Since luminous, we have had the following release cadence and policy:
> - release every 9 months
> - maintain backports for the last two releases
> - enable upgrades to move either 1 or 2 releases ahead
> (e.g., luminous -> mimic or nautilus; mimic -> nautilus or octopus; ...)
>
> This has mostly worked out well, except that the mimic release received
> less attention than we wanted due to the fact that multiple downstream
> Ceph products (from Red Hat and SUSE) decided to base their next release
> on nautilus. Even though upstream every release is an "LTS" release, as a
> practical matter mimic got less attention than luminous or nautilus.
>
> We've had several requests/proposals to shift to a 12 month cadence. This
> has several advantages:
>
> - Stable/conservative clusters only have to be upgraded every 2 years
> (instead of every 18 months)
> - Yearly releases are more likely to intersect with downstream
> distribution releases (e.g., Debian). In the past there have been
> problems where the Ceph releases included in consecutive releases of a
> distro weren't easily upgradeable.
> - Vendors that make downstream Ceph distributions/products tend to
> release yearly. Aligning with those vendors means they are more likely
> to productize *every* Ceph release. This will help make every Ceph
> release an "LTS" release (not just in name but also in terms of
> maintenance attention).
>
> So far the balance of opinion seems to favor a shift to a 12 month
> cycle[1], especially among developers, so it seems pretty likely we'll
> make that shift. (If you do have strong concerns about such a move, now
> is the time to raise them.)
>
> That brings us to an important decision: what time of year should we
> release? Once we pick the timing, we'll be releasing at that time *every
> year* for each release (barring another schedule shift, which we want to
> avoid), so let's choose carefully!
>
> A few options:
>
> - November: If we release Octopus 9 months from the Nautilus release
> (planned for Feb, released in Mar) then we'd target this November. We
> could shift to a 12 month cadence after that.
> - February: That's 12 months from the Nautilus target.
> - March: That's 12 months from when Nautilus was *actually* released.
>
> November is nice in the sense that we'd wrap things up before the
> holidays. It's less good in that users may not be inclined to install the
> new release when many developers will be less available in December.
>
> February kind of sucked in that the scramble to get the last few things
> done happened during the holidays. OTOH, we should be doing what we can
> to avoid such scrambles, so that might not be something we should factor
> in. March may be a bit more balanced: a solid 3 months before the release
> when people are productive, and 3 months after (before they disappear on
> holiday) to address any post-release issues.
>
> People tend to be somewhat less available over the summer months due to
> holidays etc, so an early or late summer release might also be less than
> ideal.
>
> Thoughts? If we can narrow it down to a few options maybe we could do a
> poll to gauge user preferences.
>
> Thanks!
> sage
>
>
> [1] https://protect-au.mimecast.com/s/N1l6CROAEns1RN1Zu9Jwts?domain=twitter.com
>
> _______________________________________________
> ceph-users mailing list
> ceph-users(a)lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi everyone,
We are splitting the ceph-devel(a)vger.kernel.org list into two:
- dev(a)ceph.io
This will be the new general purpose Ceph development discussion list.
We encourage all subscribers to the current ceph-devel(a)vger.kernel.org
to subscribe to this new list.
Subscribe to the new ceph-devel list now at:
https://lists.ceph.io/postorius/lists/dev.ceph.io/
(We were originally going to call this list ceph-devel(a)ceph.io, but are
going with dev(a)ceph.io instead to avoid the confusion of having two
'ceph-devel's, particularly when searching archives.)
- ceph-devel(a)vger.kernel.org
The current list will continue to exist, but its role will shift to
Linux kernel-related traffic, including kernel patches and discussion of
implementation details for the kernel client code.
At some point in the future, when all non-kernel discussion has shifted
to the new list, you might want to unsubscribe from the old list.
For the next week or two, please direct discussion at both lists. Once a
bit of time has passed and most active developers have subscribed to the
new list, we will focus discussion on the new list only.
We will send several more emails to the old list to remind people to
subscribe to the new list.
Why are we doing this?
1. The new list is mailman-based and managed by the Ceph community, which
means that when people have problems with subscribing, mails being lost, or
any other list-related problems, we can actually do something about it.
Currently we have no real ability to perform any management-related tasks
on the vger list.
2. The vger majordomo setup also has some frustrating features/limitations,
the most notable being that it only accepts plaintext email; anything
with MIME or HTML formatting is rejected. This confuses many users.
3. The kernel development and general Ceph development have slightly
different modes of collaboration. Kernel code review is based on email
patches to the list and reviewing via email, which can be noisy and
verbose for those not involved in kernel development. The Ceph userspace
code is handled via github pull requests, which capture both proposed
changes and code review.
Thanks!
+ dev(a)ceph.io
On Tue, Jun 4, 2019 at 1:59 PM 安安静静 <2741248158(a)qq.com> wrote:
>
> Hi chai,
>
> When I : yum install librados2-devel
> All librados2-devel versions are build by gcc 4.9 or older,
lyn, before starting the discussion, i'd like to have more context:
where was the librados2-devel package downloaded from,
and what is its version?
i guess you are referencing the librados2-devel packages from
http://download.ceph.com, built for CentOS 7. i am not sure
how you came to the conclusion that we build ceph using GCC 4.9;
actually, we are using GCC 8 for building Ceph in master.
>
> but most c++ projects use gcc 5.0 or higher,
good to know; a citation for that would help, though.
>
> this makes me have to rebuild all my other c++ sources with: -D_GLIBCXX_USE_CXX11_ABI=0
see http://docs.ceph.com/docs/master/dev/cxx/.
>
> this really confuses me,
>
> Can ceph provide a version of librados2-devel build by gcc5.0 or higher?
>
> best regards,
>
> lyn