For developers submitting jobs using teuthology, we now have
recommendations on what priority level to use:
https://docs.ceph.com/docs/master/dev/developer_guide/#testing-priority
--
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
We're happy to announce the fourth bugfix release in the Octopus series.
In addition to a security fix in RGW, this release brings a range of fixes
across all components. We recommend that all Octopus users upgrade to this
release. For detailed release notes with links and the changelog, please
refer to the official blog entry at https://ceph.io/releases/v15-2-4-octopus-released
Notable Changes
---------------
* CVE-2020-10753: rgw: sanitize newlines in s3 CORSConfiguration's ExposeHeader
(William Bowling, Adam Mohammed, Casey Bodley)
* Cephadm: There were a lot of small usability improvements and bug fixes:
* Grafana, when deployed by Cephadm, now binds to all network interfaces.
* `cephadm check-host` now prints all detected problems at once.
* Cephadm now calls `ceph dashboard set-grafana-api-ssl-verify false`
when generating an SSL certificate for Grafana.
* The Alertmanager is now correctly pointed to the Ceph Dashboard.
* `cephadm adopt` now supports adopting an Alertmanager.
* `ceph orch ps` now supports filtering by service name.
* `ceph orch host ls` now marks hosts as offline if they are not
accessible.
* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
a service id of mynfs that will use the RADOS pool nfs-ganesha and namespace
nfs-ns::

    ceph orch apply nfs mynfs nfs-ganesha nfs-ns
* Cephadm: `ceph orch ls --export` now returns all service specifications in
a yaml representation that is consumable by `ceph orch apply`; see the
export/re-apply sketch after this list. In addition, the commands `orch ps`
and `orch ls` now support `--format yaml` and `--format json-pretty`.
* Cephadm: `ceph orch apply osd` supports a `--preview` flag that prints a preview of
the OSD specification before deploying OSDs. This makes it possible to
verify that the specification is correct before applying it; see the sketch
after this list.
* RGW: The `radosgw-admin` sub-commands dealing with orphans --
`radosgw-admin orphans find`, `radosgw-admin orphans finish`, and
`radosgw-admin orphans list-jobs` -- have been deprecated. They have
not been actively maintained and they store intermediate results on
the cluster, which could fill a nearly-full cluster. They have been
replaced by a tool, currently considered experimental,
`rgw-orphan-list`; see the invocation sketch after this list.
* RBD: The name of the rbd pool object that is used to store
rbd trash purge schedule is changed from "rbd_trash_trash_purge_schedule"
to "rbd_trash_purge_schedule". Users that have already started using
`rbd trash purge schedule` functionality and have per pool or namespace
schedules configured should copy "rbd_trash_trash_purge_schedule"
object to "rbd_trash_purge_schedule" before the upgrade and remove
"rbd_trash_purge_schedule" using the following commands in every RBD
pool and namespace where a trash purge schedule was previously
configured::
rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule
or use any other convenient way to restore the schedule after the
upgrade.
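As a quick sketch of the `ceph orch ls --export` change noted above (the file
name `specs.yaml` is only an illustration), the exported specifications can be
saved and fed back to the orchestrator::

    ceph orch ls --export > specs.yaml
    ceph orch apply -i specs.yaml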
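For the OSD `--preview` flag noted above, a hedged example, assuming an OSD
drive group specification stored in a hypothetical file `osd_spec.yml`::

    ceph orch apply osd -i osd_spec.yml --preview
    ceph orch apply osd -i osd_spec.yml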
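And for the experimental orphans tool noted above, a minimal invocation sketch,
assuming the RGW data pool has the default name `default.rgw.buckets.data`::

    rgw-orphan-list default.rgw.buckets.data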
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-15.2.4.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 7447c15c6ff58d7fce91843b705a268a1917325c
--
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway
Hi Ceph Developers,
The Ceph community is planning on participating in the upcoming round of
Outreachy (https://www.outreachy.org/).
Applicants will be applying for internships during the month of October, and
interns would work on their projects from December to March.
If you're interested in mentoring a project, please add your ideas to this
projects list:
https://pad.ceph.com/p/project-ideas
I will be visiting various standup meetings over the coming weeks to
discuss project ideas as well. If you have any questions, please reach out
to me.
Best,
Ali
Sorry for cross-posting. I sent this mail to ceph-maintainers two
months ago but have received no responses so far. After reading the
comments in https://github.com/ceph/ceph-deploy/pull/496, I think I
should check with ceph-devel as well, so I am forwarding this mail to
ceph-devel for more input.
---------- Forwarded message ---------
From: kefu chai <tchaikov(a)gmail.com>
Date: Thu, Jun 4, 2020 at 6:39 PM
Subject: is ceph-deploy still used?
To: <ceph-maintainers(a)ceph.io>
Cc: Neha Ojha <nojha(a)redhat.com>, Josh Durgin <jdurgin(a)redhat.com>,
Brad Hubbard <bhubbard(a)redhat.com>, James Page <james.page(a)ubuntu.com>
hi ceph maintainers,
when reviewing ceph-deploy PRs, I am wondering why we are still
maintaining this tool. IIUC, we are supposed to deploy Ceph using the
Ansible playbooks offered by ceph-ansible[0], and in the future we are
more likely to deploy a Ceph cluster using cephadm[1].
So the question is: are you still packaging / using ceph-deploy?
cheers,
--
[0] https://github.com/ceph/ceph-ansible
[1] https://ceph.io/ceph-management/introducing-cephadm/
--
Regards
Kefu Chai
Hi,
I want to know the number of connections in Ceph. I think the connections are mainly OSD-to-OSD connections.
Is the following statement correct?
Each OSD is connected to every other OSD, and there may be more than one connection between a pair of OSDs.
If there is only one connection per OSD pair, the number of connections is N(N-1)/2. If there are k connections per OSD pair, the number of connections is kN(N-1)/2.
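For example (just to sanity-check the formula with made-up numbers): with N = 10 OSDs in a full mesh and k = 2 connections per pair, that would be 2 * 10 * 9 / 2 = 90 connections.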
Thanks for your help.
Best regards.
Congmin Yin
Greetings, Ceph developers:
We are getting an FTBFS on master with glibc 2.32
https://tracker.ceph.com/issues/47187
Is anyone in a position to reproduce the issue and help find a fix for it?
Thanks in advance!
Nathan
--
Nathan Cutler
Software Engineer Distributed Storage
SUSE LINUX, s.r.o.
Tel.: +420 284 084 037
This proposal might be a bit thin on details, but I would love to get some
feedback, gauge the broader community's and developers' interest, and have
people poke holes in the current idea.
All comments welcome.
-Joao
MOTIVATION
----------
Even though we currently have at-rest encryption, ensuring data security on the
physical device, it is applied on a per-OSD basis and is too coarse-grained
to allow different entities/clients/tenants to have their data encrypted with
different keys.
The intent here is to allow different tenants to have their data encrypted at
rest, independently, and without necessarily relying on full osd encryption.
This way one could have anywhere from a handful to dozens or hundreds of
tenants with their data encrypted on disk, while not having to maintain full
at-rest encryption should the administrator consider it too cumbersome or
unnecessary.
While there are very good arguments for ensuring this encryption is performed
client-side, such that each client actively controls their own secrets, a
server-side approach has several other benefits that may outweigh a client-side
approach.
On the one hand,
* encrypting server-side means encrypting N times, depending on replication
size and scheme (e.g., three times with 3x replication);
* the secrets keyring will be centralized, likely in the monitor, much like
what we do for dmcrypt, even though the keys themselves would be stored
encrypted;
* on-the-wire data will still need to rely on msgr2 encryption; even though
one could argue that this will likely happen regardless of whether a client-
or server-side approach is being used.
But on the other,
1. encryption becomes transparent to the client, avoiding the effort of
implementing such schemes in client libraries and kernel drivers;
2. tighter control over the unit of data being encrypted: encrypting a
bluestore disk block rather than a whole object reduces the load;
3. older clients will be able to support encryption out of the box, given they
will have no idea their data is being encrypted, nor how that is happening.
CHOOSING NAMESPACES
--------------------
While investigating where and how per-tenant encryption could be implemented,
two other ideas were on the table:
1. on a per-client basis, relying on cephx entities, with an encryption key
per-client, or a shared key amongst several clients; this key would be kept
encrypted in the monitor's kv store with the entity's cephx key.
2. on a per-pool basis.
The first one would definitely be feasible, but potentially tricky to
implement just right, without too many exceptions or involvement of other
portions of the stack. E.g., dealing with metadata could become tricky. Then
again, there wasn't a single concern that could not be addressed, nor one
that looked like a showstopper.
As for 2., it would definitely be the easiest to implement: the pool is created
with an 'encrypted' flag on, the key is kept in the monitors, and the OSDs
encrypt any object belonging to that pool. The problem with this option,
however, is how coarse-grained it is. If we really wanted a per-tenant
approach, one would have to ensure one pool per tenant. That is not necessarily
a big deal if having a lot of potentially small pools is fine. This idea was
scrapped in favour of encrypting namespaces instead.
Given RADOS already has the concept of a namespace, it might just be the ideal
medium to implement such an approach, as we get the best of the two options
above: we get finer-grained access than a pool, but still with the same
capabilities of limiting access by entity through caps. We also get to have
multiple namespaces in a single pool should we choose to do so. All the
while, the concept is high-level enough that the effort of implementing the
actual encryption scheme might be confined to a select handful of places,
without the need for many (maybe any) special exceptions or corner cases.
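To illustrate the existing caps mechanism mentioned above (the client, pool and
namespace names here are made up for the sketch), a tenant's client can already
be confined to its own namespace with something like:

  ceph auth get-or-create client.tenant1 mon 'allow r' \
      osd 'allow rw pool=mypool namespace=tenant1'

An encrypted-namespaces scheme could reuse these same caps to control which
clients are allowed to touch which (encrypted) namespaces.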
APPROACH
---------
It is important to note that there are several implementation details,
especially on "how exactly this is going to happen", that have not been fully
figured out.
Essentially, the objective is to ensure that objects from a given namespace are
always encrypted or decrypted by bluestore when writing or reading the data. The
hope is that performing the encryption at this level will allow us to
1. perform the operation at the disk block size, ensuring that small or
partial writes will not require a rewrite of the whole object; the same goes
for reads;
2. avoid dealing with all the mechanics involving objects and other operations
over them, and focus solely on their data and metadata.
Secret distribution is expected to be done by the monitors, at the OSDs' request.
In an ideal world, the OSDs would know exactly which namespaces they might have
to encrypt/decrypt, based on the pools they currently hold, and request keys for
those beforehand, so that they don't have to ask the monitor for a key
when an operation arrives. This would not only require us to become a bit more
aware of namespaces; keeping these keys cached might also require the OSD to
keep them encrypted in memory. What to use for that is something that hasn't
been given much thought yet -- maybe we could get away with using the OSD's
cephx key.
As for the namespaces, in their current form we don't have much (any?)
information about them. Access to an object in a namespace is based on prior
knowledge of that namespace and the object's name. We currently don't have
statistics on namespaces, nor are we able to know whether an OSD keeps any
object belonging to a namespace _before_ an operation on such an object is
handled.
Even though it's not particularly _required_ to get more out of namespaces than
we currently have, it would definitely be ideal if we ended up with the ability
to 1) have statistics on namespaces, as that would be imperative if we're using
them for tenants; and 2) be able to cache ahead the keys for namespaces an OSD
might have to handle (read, namespaces living in a pool with PGs mapped to a
given OSD).
As you may know, the Sepia Long Running Cluster has been hitting
capacity limits over the past week or so. This has resulted in service
disruptions to teuthology runs, chacra.ceph.com,
docker-mirror.front.sepia.ceph.com, and quay.ceph.io.
We've been able to get by through more aggressive deletion and compression of
logs, but it's not ideal or sustainable.
Patrick has created a new erasure coded pool/filesystem that will allow
us to keep the same amount of logs but use less space. In order to have
teuthology workers start writing logs to that pool, we need to take an
outage.
At 0400 UTC 19AUG2020, I will instruct all teuthology workers to die
after their running jobs finish. At 1300 UTC, I will kill any jobs that
are still running. This gives the lab 9 hours to gracefully shut down.
At that point, we will switch the mountpoint on teuthology.front over to
the new EC pool and start storing new logs there.
At the same time, Patrick will start migrating logs on the existing/old
pool to the new pool. This means that logs from 7/20 through 8/19 will
be unavailable (you'll see 404s) via the Pulpito web UI and qa-proxy
URLs until they're migrated to the new EC pool.
Let me know if you have any questions/concerns.
Thanks,
--
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway