- 17.2.6
- gibba cluster has been upgraded
- saw known issue with slow backfilling, will use gibba cluster to
test fix for it but fix is not targeted for 17.2.6 release
- Still need to upgrade LRC
- Once LRC is upgraded and everything is approved, will release rc
build
- release notes PR is open
- cephalocon dev summit
- draft schedule: https://pad.ceph.com/p/cephalocon-dev-summit-2023
- want to do some planning for the next few years here since that
goes better in person
- Potentially some panel involving users
- cache tiering
- summary of last week's discussion for those who missed it
- plan is still to deprecate but not remove in reef
- some docs work at the minimum will be done for the deprecation,
possibly some added warnings if possible
- Enforcing 2FA on github to be in the Ceph org
- Not a lot of action following previous email about this
- Need to do something about the bots in the ceph org
- Specifically
- CephaloBot @ceph-jenkins
- Ceph Sepia lab robot @sepia-robot
- Red Hat Ceph Storage Jenkins bot @rhcs-jenkins
- Want infra team to look at this
- Have cc'd Dan Mick, Adam Kraitman, Zack Cerza on this email,
hoping one of them could look at what these bots do and the
impact if they are removed from the ceph org.
- Discussion on scale testing
- Currently, Gibba is being used for other things so we don't have
any cluster with large logical scale
- Might be able to get some scale or performance testing from
community members, but nothing certain
- Gibba could be brought back to large logical scale later if we have
features that need the scale testing
- Transitioning to Jitsi
- Bluejeans will be unavailable as of April 1st
- Discussed recording process for Jitsi meetings
- Mike Perez will make some new Jitsi meeting rooms for teams who
need it
- Upgrade suites
- Some of the upgrade suites start from tagged releases, whereas they
used to start from the tip of the stable branch
- wouldn't be too hard to have them use builds for the tip of the
branch, just need to go through and change them
- some of the tests are specifically peer-to-peer tests testing
upgrades from certain point releases to others, so those should not be
changed
Thanks,
- Adam King
Hi
https://ceph.io/en/news/blog/2022/s3select-intro/
Recently I published a blog post on s3-select (linked above).
The post discusses what it is and why it is needed.
The last paragraph discusses Trino (an analytic SQL engine) and its
integration with Ceph/s3select.
That integration is still in progress, and it is quite promising:
Trino not only provides comprehensive SQL support but also scalable
processing of SQL statements.
We would be glad to hear your ideas and comments.
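For anyone who wants to experiment, here is a minimal sketch of what an
s3select query looks like from a client, assuming the boto3 SDK. The
bucket, key, and CSV settings are purely illustrative; a real client
would be created with boto3.client('s3', endpoint_url=...) pointed at an
RGW with s3select enabled.

```python
def run_s3select(client, bucket, key, sql):
    """Run an S3 Select query and return the concatenated result records.

    `client` is assumed to be an S3 client (e.g. from boto3) exposing
    the standard select_object_content API.
    """
    resp = client.select_object_content(
        Bucket=bucket,
        Key=key,
        ExpressionType='SQL',
        Expression=sql,
        InputSerialization={'CSV': {'FileHeaderInfo': 'USE'}},
        OutputSerialization={'CSV': {}},
    )
    rows = []
    for event in resp['Payload']:  # the response is an event stream
        if 'Records' in event:
            rows.append(event['Records']['Payload'].decode())
    return ''.join(rows)
```

The key point is that filtering and projection happen server-side, so
only the matching records travel back to the client.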
Gal.
Hi,
I want to:
1) copy a snapshot to an image,
2) avoid copying snapshots,
3) have no dependency after the copy,
4) keep everything in image format 2.
In that case, is rbd cp the same as rbd clone + rbd flatten?
I ran some tests and it seems so, but I want to confirm in case I'm
missing anything.
Also, cp seems a bit faster than clone + flatten; is that true?
Thanks!
Tony
in today's refactoring call, we discussed the topic of async
reads/writes for file-based backends, using zipper's optional_yield
argument
we can start by trying out asio's new asio::stream_file[1] and
asio::random_access_file[2] classes based on io_uring. both classes
can be constructed with an existing file descriptor using the
'native_handle_type' overload
an example read function:
  // read into the given buffer, returning the number of bytes read;
  // throws on error
  size_t read_some(asio::stream_file& file, std::span<char> buffer,
                   optional_yield y)
  {
    if (y) {
      return file.async_read_some(asio::buffer(buffer.data(), buffer.size()),
                                  y.get_yield_context());
    } else {
      return file.read_some(asio::buffer(buffer.data(), buffer.size()));
    }
  }
the synchronous case probably won't be that simple, since we won't
have an asio::io_context to construct the asio::stream_file with. we
might just fall back to the read system call there
[1] https://www.boost.org/doc/libs/1_79_0/doc/html/boost_asio/reference/stream_…
[2] https://www.boost.org/doc/libs/1_79_0/doc/html/boost_asio/reference/random_…
The following describes cephx auth extension(s). Note that there is no
implementation yet, and your comments are welcome.
* Introduction
The cephx authentication and authorization is a system in which the
ceph principals and ceph clients are able to create secure sessions to
[other] ceph principals while validating the identity of their peers.
In this auth process, the auth authority (served by the ceph mons)
keeps a shared secret database that allows it to provide tickets to
access the different services. It is a multi-phase process that was
inspired by the kerberos design: when authenticating, the
authenticating entities (“clients”, though these can also be other ceph
principals such as osd, mds, etc.) first establish their identities
with the mons by leveraging a shared secret mechanism (while the mons
are able to affirm their own identity), and are provided with a ticket
to access the services that they request access to. The services are
able to validate the tickets. This allows them to affirm the client
identity, and also to get information about the clients’ permissions,
such as what resources they’re allowed to access. There is a mechanism
that allows the different principals to have separate secret keys, and
a shared rotating key (with other principals of the same type), so
that clients can use the same ticket when accessing all similar
principals.
The issue at hand stems from the fact that the ceph mons that act as
the auth authority need to keep the auth information about all the
clients in the system. The mons keep this information in their Paxos
state. This can potentially be a performance issue, a scalability
problem, and moreover, it can complicate certain operational designs.
For example, in order for a system to allow multiple different clients
to access rbd images, it needs to have admin permissions to create new
cephx clients.
* Initial Idea
The high-level proposal is to allow cephx clients to be able to
generate keys and self-signed tickets that would allow access to a
subset of their own resources (by a sub-client). Generation of these
tickets will not require admin privileges as they will not require the
creation of new ceph clients. The tickets will include authorization
information (such as what resources the sub-client is allowed to use),
and the secret key (sub-key). The tickets will be encrypted using the
client secret key.
The sub-client will be provided with the new key that was generated by
the client, and with the ticket. At this point the sub-client will be
able to initiate a regular cephx authentication process, with one
modification: the ticket will be sent with its auth request(s). The
mon will unpack the ticket, validate it, and use the sub-key for the
rest of its auth process with the sub-client. The mon will check that
the ticket provided by the sub-client gives permission to resources in
a level that the client is allowed to use.
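For illustration, here is a rough Python sketch of the flow described
above. None of this is the actual cephx wire format: the HMAC seal
stands in for encrypting the ticket with the client secret key, and all
names (make_sub_ticket, mon_validate) are hypothetical.

```python
import base64
import hashlib
import hmac
import json
import os
import time

def make_sub_ticket(client_key: bytes, caps: dict, ttl: int = 3600) -> dict:
    """Client side: generate a sub-key and a self-signed ticket granting
    `caps` (a subset of the client's own resources) for `ttl` seconds."""
    sub_key = base64.b64encode(os.urandom(32)).decode()
    payload = json.dumps({'caps': caps, 'sub_key': sub_key,
                          'expires': time.time() + ttl}).encode()
    # Stand-in for sealing the ticket with the client key; the mon,
    # which knows client_key, can verify it without new state.
    seal = hmac.new(client_key, payload, hashlib.sha256).hexdigest()
    return {'payload': payload.decode(), 'seal': seal, 'sub_key': sub_key}

def mon_validate(client_key: bytes, ticket: dict) -> dict:
    """Mon side: unpack the ticket, check the seal and expiry, and
    return the caps and sub-key to use for the rest of the auth."""
    payload = ticket['payload'].encode()
    expect = hmac.new(client_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expect, ticket['seal']):
        raise PermissionError('bad ticket seal')
    info = json.loads(payload)
    if info['expires'] < time.time():
        raise PermissionError('ticket expired')
    return info
```

The point of the sketch is that the mon validates the ticket purely from
the client key it already holds; no new entry in the Paxos-backed auth
database is needed for the sub-client.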
* Sub-Key Revocation (discussion)
As it is, the proposal does not provide a solution for sub-key/ticket
revocation. Revoking the original client key will effectively remove
the sub-key, however, it will also revoke all the other sub-keys that
this client created which can be a problem.
One solution is to set an expiration timestamp on the tickets, so that
the mons would not allow the authentication after that timestamp. The
idea can be extended by having the mons keep a blacklist of the
revoked sub-keys (until their expiration), so that immediate
revocation would be possible. The thinking is that sub-key revocation
is not a frequent operation, so that such a blacklist would not be a
problem to maintain. The bigger issue with this solution is that it
requires the original client to keep creating sub-keys in order for
the sub-client to be able to continue and access the system. This
might work for certain use cases, but not for all.
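The blacklist idea could look something like the following (an
illustrative sketch, not a design; the class and method names are made
up). Entries are pruned once the underlying ticket would have expired
anyway, which is why the list stays small:

```python
import time

class RevocationList:
    """Mon-side blacklist of revoked sub-key tickets, kept only until
    the tickets' own expiration timestamps."""

    def __init__(self):
        self._revoked = {}  # ticket_id -> expiration timestamp

    def revoke(self, ticket_id: str, expires: float) -> None:
        self._revoked[ticket_id] = expires

    def is_revoked(self, ticket_id: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        # drop entries whose tickets have expired on their own
        self._revoked = {t: e for t, e in self._revoked.items() if e > now}
        return ticket_id in self._revoked
```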
* Extension to initial proposal
The sub-client has a key to access a (potentially external) entity
that provides it with the sub-key (+ticket). These are generated
dynamically and can expire. The sub-client needs to fetch a new
sub-key once the old one expires.
The entity that distributes the sub-key has access to the client key,
and can keep a database of all the existing sub-clients. It can map
between the sub-client and its owning client, and can revoke
sub-client keys if needed.
In a ceph internal implementation, the ceph manager could serve as the
ticket granting service. In this case, the sub-client will be set with
a non-expiring sub-key (tgt) that will allow it to access the ceph
manager with specific caps for generating temporary tickets for
accessing the client resources. The sub-client will go through the mon
authentication process, will get a ticket to access the manager, will
authenticate against the manager, and will request a new sub-ticket.
The manager could validate the sub-client. The
sub-client tgt could include validation authority information that
would give the manager the ability to potentially access external key
management services for the validation process (e.g., check that tgt
wasn’t revoked).
A different implementation can rely on an external key management
system that would be able to generate the temporary sub-key for
accessing the resources. The client will be configured with the
required key to access the external system.
Any thoughts?
Yehuda
ceph-api-nightly and ceph-dashboard-cephadm-e2e-nightly are both
configured to send email to ceph-qa(a)ceph.io when the build fails, and
again when it starts working after having failed.
Is anyone monitoring that email for such notifications? Should they
continue? Should they go somewhere less catchall, or is that alias an
appropriate place?
Hello All, I am new to Ceph Community.
I am trying to build Ceph. While installing the dependencies using the
install-deps.sh script, it fails with: *Failed to build wheels for
collected packages: bs4, termcolor, parse, wrapt.*
I have tried manually installing them using pip install --no-cache-dir bs4
termcolor parse wrapt, but it doesn't work.
I have also tried some alternatives, without success.
I am attaching a screenshot and an error log file (the last paragraph
shows the build error).
Please let me know if any of you knows how to fix this.
Thanks
Mishal
For developers submitting jobs using teuthology, we now have
recommendations on what priority level to use:
https://docs.ceph.com/docs/master/dev/developer_guide/#testing-priority
--
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
Hello,
- bluejeans -> jitsi meet
- all bluejeans meeting rooms (main community room where events
like CDM are held, various standup rooms, NVMeOF room, etc) will
stop working at the end of March
- current plan is to move to https://meet.jit.si/, which, like
bluejeans and unlike google meet, doesn't require logging in to
join the meeting (and also open source!)
- getting recordings out is more involved (requires dropbox?) but
should be still manageable
- Mike will create new meeting rooms and announce on the lists
- enable ubuntu jammy for reef
- https://github.com/ceph/ceph/pull/49443
- https://pulpito.ceph.com/?sha1=63c2dce869c8f63c396d3b6505a21c44088ff500
- overall looks good, Ilya and Venky will schedule reruns and report
in the PR
- a custom teuthology branch was used but it's not specific to the PR
- enable centos 9 stream for reef
- blocked on python dependency bundling changes
(https://github.com/ceph/ceph/pull/47501)
- github 2FA for ceph organization
- still a lot of non-2FA accounts (both members and outside
collaborators) despite the push to enable 2FA
- also some number of inactive accounts
- ceph organization will require 2FA on April 1, non-2FA accounts
will be automatically removed from the organization by github on
that date
- Ilya will send an announcement to the dev list
- nightlies were disabled some time ago, let's enable them again?
- for major suites - no, for the rest - yes
- between integration branches, ad-hoc tests and Yuri's (bi-)weekly
baseline runs they were getting very little attention
Thanks,
Ilya
It looks like the problem is that the ceph-exporter stuff is in the quincy
branch (including the cephadm binary) but not in any quincy image yet, since
17.2.6 isn't out (our CI tests against images built from the latest quincy
branch, so it hasn't hit this issue). So even pulling quay.io/ceph/ceph:v17,
which is the latest quincy build, doesn't pull an image that supports the
ceph-exporter, resulting in this error.
Technically, the exact command you used should start working again once
17.2.6 is released, but there should definitely be some error handling here
for those using recent versions of the binary from github to bootstrap
with older quincy images. I'll make a small patch to address it. Thanks for
the heads up.
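As a sketch of the kind of guard that could help here (the function name
and the hard-coded version threshold are illustrative, not the actual
patch): bootstrap could check the version string reported by the pulled
image before asking the mgr to deploy ceph-exporter.

```python
import re

def supports_ceph_exporter(ceph_version: str) -> bool:
    """Return True if a 'ceph version X.Y.Z ...' string (as printed
    during bootstrap) is at least 17.2.6, the first quincy point
    release assumed here to ship ceph-exporter support."""
    m = re.search(r'ceph version (\d+)\.(\d+)\.(\d+)', ceph_version)
    if not m:
        return False  # unparseable version: assume no support
    return tuple(int(x) for x in m.groups()) >= (17, 2, 6)
```

With something like this, the 17.2.5 image from the log above would
simply skip the ceph-exporter deployment instead of failing bootstrap.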
On Wed, Mar 15, 2023 at 6:04 AM Tobias Fischer <tobias.fischer(a)clyso.com>
wrote:
> Hi Adam,
>
> I noticed a bug using cephadm quincy:
>
> when using cephadm from
> https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm as written
> on https://docs.ceph.com/en/quincy/cephadm/install/ I get following
> error:
>
> root@ceph-01:~# cephadm bootstrap --ssh-user cephadm --ssh-public-key
> /home/cephadm/.ssh/cephadm.pub --ssh-private-key
> /home/cephadm/.ssh/cephadm --mon-ip 10.82.71.11
> Verifying ssh connectivity ...
> Adding key to cephadm@localhost authorized_keys...
> key already in cephadm@localhost authorized_keys...
> Verifying podman|docker is present...
> Verifying lvm2 is present...
> Verifying time synchronization is in place...
> Unit chrony.service is enabled and running
> Repeating the final host check...
> podman (/usr/bin/podman) version 4.3.1 is present
> systemctl is present
> lvcreate is present
> Unit chrony.service is enabled and running
> Host looks OK
> Cluster fsid: e460fb6a-c315-11ed-8abc-02000a52470b
> Verifying IP 10.82.71.11 port 3300 ...
> Verifying IP 10.82.71.11 port 6789 ...
> Mon IP `10.82.71.11` is in CIDR network `10.82.71.0/24`
> Mon IP `10.82.71.11` is in CIDR network `10.82.71.0/24`
> Internal network (--cluster-network) has not been provided, OSD
> replication will default to the public_network
> Pulling container image quay.io/ceph/ceph:v17...
> Ceph version: ceph version 17.2.5
> (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
> Extracting ceph user uid/gid from container image...
> Creating initial keys...
> Creating initial monmap...
> Creating mon...
> Waiting for mon to start...
> Waiting for mon...
> mon is available
> Assimilating anything we can from ceph.conf...
> Generating new minimal ceph.conf...
> Restarting the monitor...
> Setting mon public_network to 10.82.71.0/24
> Wrote config to /etc/ceph/ceph.conf
> Wrote keyring to /etc/ceph/ceph.client.admin.keyring
> Creating mgr...
> Verifying port 9283 ...
> Waiting for mgr to start...
> Waiting for mgr...
> mgr not available, waiting (1/15)...
> mgr not available, waiting (2/15)...
> mgr is available
> Enabling cephadm module...
> Waiting for the mgr to restart...
> Waiting for mgr epoch 5...
> mgr epoch 5 is available
> Setting orchestrator backend to cephadm...
> Using provided ssh keys...
> Adding key to cephadm@localhost authorized_keys...
> key already in cephadm@localhost authorized_keys...
> Adding host ceph-01...
> Deploying mon service with default placement...
> Deploying mgr service with default placement...
> Deploying crash service with default placement...
> Deploying ceph-exporter service with default placement...
> Non-zero exit code 22 from /usr/bin/podman run --rm --ipc=host
> --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e
> CONTAINER_IMAGE=quay.io/ceph/ceph:v17 -e NODE_NAME=ceph-01 -e
> CEPH_USE_RANDOM_NONCE=1 -v
> /var/log/ceph/e460fb6a-c315-11ed-8abc-02000a52470b:/var/log/ceph:z -v
> /tmp/ceph-tmpdj4jpaw8:/etc/ceph/ceph.client.admin.keyring:z -v
> /tmp/ceph-tmpqrstws24:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v17 orch
> apply ceph-exporter
> /usr/bin/ceph: stderr Error EINVAL: Usage:
> /usr/bin/ceph: stderr ceph orch apply -i <yaml spec> [--dry-run]
> /usr/bin/ceph: stderr ceph orch apply <service_type>
> [--placement=<placement_string>] [--unmanaged]
> /usr/bin/ceph: stderr
> Traceback (most recent call last):
> File "/usr/local/bin/cephadm", line 9653, in <module>
> main()
> File "/usr/local/bin/cephadm", line 9641, in main
> r = ctx.func(ctx)
> ^^^^^^^^^^^^^
> File "/usr/local/bin/cephadm", line 2205, in _default_image
> return func(ctx)
> ^^^^^^^^^
> File "/usr/local/bin/cephadm", line 5774, in command_bootstrap
> prepare_ssh(ctx, cli, wait_for_mgr_restart)
> File "/usr/local/bin/cephadm", line 5275, in prepare_ssh
> cli(['orch', 'apply', t])
> File "/usr/local/bin/cephadm", line 5714, in cli
> ).run(timeout=timeout, verbosity=verbosity)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/usr/local/bin/cephadm", line 4144, in run
> out, _, _ = call_throws(self.ctx, self.run_cmd(),
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/usr/local/bin/cephadm", line 1853, in call_throws
> raise RuntimeError('Failed command: %s' % ' '.join(command))
> RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host
> --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e
> CONTAINER_IMAGE=quay.io/ceph/ceph:v17 -e NODE_NAME=ceph-01 -e
> CEPH_USE_RANDOM_NONCE=1 -v
> /var/log/ceph/e460fb6a-c315-11ed-8abc-02000a52470b:/var/log/ceph:z -v
> /tmp/ceph-tmpdj4jpaw8:/etc/ceph/ceph.client.admin.keyring:z -v
> /tmp/ceph-tmpqrstws24:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v17 orch
> apply ceph-exporter
>
> If I skip the monitoring stack with "--skip-monitoring-stack"
> boostraping works as expected.
>
> Using cephadm from
> https://github.com/ceph/ceph/raw/v17.2.5/src/cephadm/cephadm ( "v17.2.5"
> instead of "quincy" ) also works as expected.
>
> If you have any questions please get in touch. thanks!
>
> BR
>
> Tobi
>
> --
> Mit freundlichen Grüßen
> Tobias Fischer
> Head of Ceph
>
> Clyso GmbH
> p: +49 89 21552391 12
> a: Loristraße 8 | 80335 München | Germany
> w: https://clyso.com | e: tobias.fischer(a)clyso.com
>
> We are hiring: https://www.clyso.com/jobs/
> ---
> Geschäftsführer: Dipl. Inf. (FH) Joachim Kraftmayer
> Unternehmenssitz: Utting am Ammersee
> Handelsregister beim Amtsgericht: Augsburg
> Handelsregister-Nummer: HRB 25866
> USt. ID-Nr.: DE275430677
>
>