Hi Myoungwon,
I was thinking about how a refcounted cas pool would interact with
snapshots and it occurred to me that dropping refs when an object is
deleted may break snapshotted versions of that object. If object A has
a ref to chunk X, is snapshotted, then A is deleted, we'll (currently)
drop the ref to X and remove it. That means that the snapshotted
version of A can no longer be read.
One way to get around that would be to mirror snaps from the source pool
to the chunk pool--this is how cache tiering works. The problem I see
there is that I'd hoped to allow multiple pools to share/consume the same
chunk pool, but each pool has its own snapid namespace.
Another would be to bake the refs more deeply into the source rados pool
so that the refs are only dropped after all clones also drop the ref.
That is harder to track, though, since I think you'd need to examine all
of the clones to know whether the ref is truly gone. Unless we embed
even more metadata in the SnapSet--something analogous to clone_overlap to
identify the chunks. That seems like it will bloat that structure,
though.
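To make the failure mode concrete, here's a toy model in Python (the
names are made up, this is not the actual OSD code path): the pool only
counts head references, so deleting A reclaims X even though A's
snapshot still maps to it.

    class ChunkPool:
        # Toy refcounted CAS chunk pool: a chunk is reclaimed as soon
        # as its refcount hits zero.
        def __init__(self):
            self.chunks = {}  # chunk id -> data
            self.refs = {}    # chunk id -> refcount

        def get_ref(self, cid, data=None):
            if cid not in self.chunks:
                self.chunks[cid] = data
                self.refs[cid] = 0
            self.refs[cid] += 1

        def put_ref(self, cid):
            self.refs[cid] -= 1
            if self.refs[cid] == 0:
                del self.chunks[cid]  # gone, snapshots notwithstanding
                del self.refs[cid]

    pool = ChunkPool()
    pool.get_ref('X', b'chunk data')  # head of A references chunk X
    snap_a = ['X']                    # A snapshotted; snap still maps to X
    pool.put_ref('X')                 # A deleted -> ref dropped
    print('X' in pool.chunks)         # False: the chunk snap_a needs is gone

A clone-aware put_ref() would have to check every clone in the SnapSet
before reclaiming, which is exactly the extra tracking described above.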
Other ideas?
sage
On Thu, Jun 27, 2019 at 8:58 PM nokia ceph <nokiacephusers(a)gmail.com> wrote:
>
> Hi Team,
>
> We have a requirement to create multiple copies of an object; currently we handle it on the client side by writing them as separate objects, which causes huge network traffic between the client and the cluster.
> Is there a possibility of cloning an object into multiple copies using the librados API?
> Please share the documentation details if it is feasible.
It may be possible to use an object class to accomplish what you want
to achieve, but the more we understand what you are trying to do, the
better the advice we can offer (at the moment your description sounds
like replication, which is already part of RADOS, as you know).
More on object classes from Cephalocon Barcelona in May this year:
https://www.youtube.com/watch?v=EVrP9MXiiuU
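For a concrete sketch of what the client side could look like with
python-rados: the object class name "copier" and its "clone_n" method
below are hypothetical (you would have to write and deploy such a class
on the OSDs yourself), but Ioctx.execute() is the real librados entry
point for invoking class methods.

    import rados

    # Connect using the usual config/keyring; pool name is an example.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('mypool')

    # Invoke the (hypothetical) class method on the OSD holding the
    # object. The copies are made cluster-side, so the object body
    # never has to round-trip through the client.
    ret, out = ioctx.execute('src-object', 'copier', 'clone_n', b'count=3')

    ioctx.close()
    cluster.shutdown()

The win is exactly the one you're after: the data path stays inside the
cluster instead of being written N times over the client link.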
>
> Thanks,
> Muthu
--
Cheers,
Brad
Here we go again! As usual the conference theme is intended to
inspire, not to restrict; talks on any topic in the world of free and
open source software, hardware, etc. are most welcome, and Ceph talks
definitely fit.
I've added this to https://pad.ceph.com/p/cfp-coordination as well.
-------- Forwarded Message --------
Subject: [lca-announce] linux.conf.au 2020 - Call for Sessions and
Miniconfs now open!
Date: Tue, 25 Jun 2019 21:19:43 +1000
From: linux.conf.au Announcements <lca-announce(a)lists.linux.org.au>
Reply-To: lca-announce(a)lists.linux.org.au
To: lca-announce(a)lists.linux.org.au
The linux.conf.au 2020 organising team is excited to announce that the
linux.conf.au 2020 Call for Sessions and Call for Miniconfs are now open!
These will stay open from now until Sunday 28 July Anywhere on Earth
(AoE) (https://en.wikipedia.org/wiki/Anywhere_on_Earth).
Our theme for linux.conf.au 2020 is "Who's Watching", focusing on
security, privacy and ethics.
As big data and IoT-connected devices become more pervasive, it's no
surprise that we're more concerned about privacy and security than ever
before.
We've set our sights on how open source could play a role in maximising
security and protecting our privacy in times of uncertainty.
With the concept of privacy continuing to blur, open source could be the
solution to give us '2020 vision'.
Call for Sessions
Would you like to talk in the main conference of linux.conf.au 2020?
The main conference runs from Wednesday to Friday, with multiple streams
catering for a wide range of interest areas.
We welcome you to submit a session
(https://linux.conf.au/programme/sessions/) proposal for either a talk
or tutorial now.
Call for Miniconfs
Miniconfs are dedicated day-long streams focusing on single topics,
creating a more immersive experience for delegates than a session.
Miniconfs are run on the first two days of the conference before the
main conference commences on Wednesday.
If you would like to organise a miniconf
(https://linux.conf.au/programme/miniconfs/) at linux.conf.au, we want
to hear from you.
Have we got you interested?
You can find out how to submit your session or miniconf proposals at
https://linux.conf.au/programme/proposals/.
If you have any other questions you can contact us via email at
contact(a)lca2020.linux.org.au.
We are looking forward to reading your submissions.
linux.conf.au 2020 Organising Team
---
Read this online at
https://lca2020.linux.org.au/news/call-for-sessions-miniconfs-now-open/
Hi everyone,
Since luminous, we have had the following release cadence and policy:
- release every 9 months
- maintain backports for the last two releases
- enable upgrades to move either 1 or 2 releases ahead
(e.g., luminous -> mimic or nautilus; mimic -> nautilus or octopus; ...)
This has mostly worked out well, except that the mimic release received
less attention than we wanted because multiple downstream
Ceph products (from Red Hat and SUSE) decided to base their next release
on nautilus. Even though upstream every release is an "LTS" release, as a
practical matter mimic got less attention than luminous or nautilus.
We've had several requests/proposals to shift to a 12 month cadence. This
has several advantages:
- Stable/conservative clusters only have to be upgraded every 2 years
(instead of every 18 months)
- Yearly releases are more likely to intersect with downstream
distribution releases (e.g., Debian). In the past there have been
problems where the Ceph releases included in consecutive releases of a
distro weren't easily upgradeable.
- Vendors that make downstream Ceph distributions/products tend to
release yearly. Aligning with those vendors means they are more likely
to productize *every* Ceph release. This will help make every Ceph
release an "LTS" release (not just in name but also in terms of
maintenance attention).
So far the balance of opinion seems to favor a shift to a 12 month
cycle[1], especially among developers, so it seems pretty likely we'll
make that shift. (If you do have strong concerns about such a move, now
is the time to raise them.)
That brings us to an important decision: what time of year should we
release? Once we pick the timing, we'll be releasing at that time *every
year* for each release (barring another schedule shift, which we want to
avoid), so let's choose carefully!
A few options:
- November: If we release Octopus 9 months from the Nautilus release
(planned for Feb, released in Mar) then we'd target this November. We
could shift to a 12-month cadence after that.
- February: That's 12 months from the Nautilus target.
- March: That's 12 months from when Nautilus was *actually* released.
November is nice in the sense that we'd wrap things up before the
holidays. It's less good in that users may not be inclined to install the
new release when many developers will be less available in December.
February kind of sucked in that the scramble to get the last few things
done happened during the holidays. OTOH, we should be doing what we can
to avoid such scrambles, so that might not be something we should factor
in. March may be a bit more balanced, with a solid 3 months of productive
time before the release and 3 months after it to address any post-release
issues before people disappear on holiday.
People tend to be somewhat less available over the summer months due to
holidays etc, so an early or late summer release might also be less than
ideal.
Thoughts? If we can narrow it down to a few options maybe we could do a
poll to gauge user preferences.
Thanks!
sage
[1] https://twitter.com/larsmb/status/1130010208971952129
in the root CMakeLists.txt ...
if(WITH_KRBD AND WITHOUT_RBD)
message(FATAL_ERROR "Cannot have WITH_KRBD with WITH_RBD.")
endif()
considering the options being tested, shouldn't that be
"Cannot have WITH_KRBD with WITHOUT_RBD."?
We want RBD enabled whenever kernel RBD is enabled, don't we?
Please correct me if my understanding is wrong.
--
Milind
Hi,
The ceph-iscsi project [1] has gained a lot of new features and
functionality along with the iSCSI management features that were added
to Ceph Dashboard in Nautilus. The Dashboard is tightly coupled to the
ceph-iscsi config version and needs to be updated in parallel, e.g. when
new functionality is added or existing behavior changes (e.g. here [2],
[3]).
Currently, all ceph-iscsi development is done in the "master" branch of
ceph-iscsi, while the dashboard is developed in the Ceph git repo and
thus is developed and maintained in multiple branches (e.g. "master" and
"nautilus").
This makes it challenging to keep these two components in sync and to
facilitate both maintaining a "stable" version while allowing new
features to be merged. To reduce this complexity and to better interlock
the testing and development of the dashboard and ceph-iscsi, I would
like to propose two possible solutions:
1) Merge the ceph-iscsi code base into the ceph git repository. This
way, the development of new features and maintenance would take place in
distinct branches, and in close synchronization between the dashboard
and the ceph-iscsi component. This might also be beneficial for creating
unit tests that test both components without having to assemble the
pieces from various places beforehand. It would also ensure that a
matching ceph-iscsi package is always built and released along with the
corresponding Ceph version, thus offloading the ceph-iscsi devs from the
build and release work. From a version numbering perspective, this
should not be a problem - the current ceph-iscsi packages are of version
"3.0", so they could easily be upgraded to the Ceph versioning scheme.
It would also help with keeping the documentation [4] in sync with the
functionality (the ceph-iscsi docs are actually part of the main Ceph
documentation in the git repo already). This approach is more work, as
it requires finding a way to merge the ceph-iscsi git repo into the ceph
git repo (ideally by preserving the history), and because the packaging
and build scripts need to be updated to build the respective DEB and RPM
packages.
2) As an alternative that requires less effort (but also has fewer
benefits), we should at least start using branches within the ceph-iscsi
git repo, so we can do bug fixing on a stable version separate from new
feature development that may introduce incompatible changes. I'd propose
to name the branches according to the Ceph release they support, e.g. by
creating a "nautilus" branch. But the packaging and releasing would
still be disconnected from the Ceph releases (and the work would
increase along with the branches we need to build and support).
I clearly would be in favor of the first proposal, but would need help
in getting this implemented.
Thoughts, concerns, objections?
Would anybody be willing and interested in making this happen?
Thanks!
Lenz
[1] https://github.com/ceph/ceph-iscsi/
[2] https://github.com/ceph/ceph-iscsi/pull/84/
[3] https://github.com/ceph/ceph/pull/28720
[4] http://docs.ceph.com/docs/master/rbd/iscsi-overview/
--
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg)
Hi everyone,
Tomorrow's Ceph Tech Talk will be an updated "Intro to Ceph" talk by Sage
Weil. This will be based on a newly refreshed set of slides and provide a
high-level introduction to the overall Ceph architecture, RGW, RBD, and
CephFS.
Our plan is to follow up later this summer with complementary deep-dive
talks on each of the major components: RGW, RBD, and CephFS to start.
You can join the talk live tomorrow June 27 at 1700 UTC (1PM ET) at
https://bluejeans.com/613110014/browser
As usual, the talk will be recorded and posted to the YouTube channel[1]
as well.
Thanks!
sage
[1] https://www.youtube.com/channel/UCno-Fry25FJ7B4RycCxOtfw