Hi Marc,
On 9/16/20 7:30 PM, Marc Roos wrote:
> - In the future you will not be able to read the docs if you have an
> adblocker(?)
Can you please elaborate on that? What are you referring to? I'm not
aware of any advertisements on RTD, but I admittedly have several
ad-blockers enabled by default anyway.
Lenz
--
SUSE Software Solutions Germany GmbH - Maxfeldstr. 5 - 90409 Nuernberg
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)
Hi,
$ curl https://docs.ceph.com
curl: (51) SSL: certificate subject name (ssl403572.cloudflaressl.com)
does not match target host name 'docs.ceph.com'
$ curl -v -v https://docs.ceph.com
* Rebuilt URL to: https://docs.ceph.com/
* Trying 104.17.32.82...
* Connected to docs.ceph.com (104.17.32.82) port 443 (#0)
* found 127 certificates in /etc/ssl/certs/ca-certificates.crt
* found 512 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_ECDSA_AES_128_GCM_SHA256
* server certificate verification OK
* server certificate status verification SKIPPED
* SSL: certificate subject name (ssl403572.cloudflaressl.com) does not
match target host name 'docs.ceph.com'
* Closing connection 0
curl: (51) SSL: certificate subject name (ssl403572.cloudflaressl.com)
does not match target host name 'docs.ceph.com'
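For what it's worth, the same verification failure can be reproduced offline with a throwaway self-signed certificate, which shows it is a plain hostname mismatch rather than a trust problem (the /tmp paths and the generated certificate below are for illustration only, not the real Cloudflare cert):

```shell
# Create a throwaway self-signed certificate for the subject name seen
# in the error above (illustration only):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=ssl403572.cloudflaressl.com" \
    -keyout /tmp/key.pem -out /tmp/cert.pem 2>/dev/null

# Apply the same hostname check curl performs; the first line should
# report that docs.ceph.com does NOT match, the second that the
# certificate's own name does:
openssl x509 -in /tmp/cert.pem -noout -checkhost docs.ceph.com
openssl x509 -in /tmp/cert.pem -noout -checkhost ssl403572.cloudflaressl.com
```

One common cause of being served a *.cloudflaressl.com certificate for an unrelated host is a client that does not send SNI, so retrying with a current curl (or `openssl s_client -servername docs.ceph.com`) may be worth a try.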
Not sure whether this is a problem on the Ceph side or on the hosting
side, but it should be fixed.
Regards,
Burkhard
On Wed, Sep 16, 2020 at 11:08 AM Marc Roos <M.Roos(a)f1-outsourcing.eu> wrote:
>
>
>
> - In the future you will not be able to read the docs if you have an
> adblocker(?)
Not aware of anything of this sort.
>
>
>
> -----Original Message-----
> To: dev; ceph-users
> Cc: Kefu Chai
> Subject: [ceph-users] Migration to ceph.readthedocs.io underway
>
> [...]
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io To unsubscribe send an
> email to ceph-users-leave(a)ceph.io
>
>
I wonder if this new system allows me to choose between Ceph versions. I see
the v:latest selector in the bottom right corner, but it seems to be the only
choice so far.
On Wed, Sep 16, 2020 at 12:31 PM Marc Roos <M.Roos(a)f1-outsourcing.eu> wrote:
>
>
> - In the future you will not be able to read the docs if you have an
> adblocker(?)
>
>
>
> -----Original Message-----
> To: dev; ceph-users
> Cc: Kefu Chai
> Subject: [ceph-users] Migration to ceph.readthedocs.io underway
>
> [...]
Hi everyone,
We are in the process of migrating from docs.ceph.com to
ceph.readthedocs.io. We enabled it in
https://github.com/ceph/ceph/pull/34499 and will now be using it by
default.
Why?
- The search feature in ceph.readthedocs.io is much better than
docs.ceph.com and allows you to search multiple strings.
- RTD provides an in-built version switching feature which we plan to
use in future.
What does it mean to you?
- Some broken links are expected during this migration. Things like
ceph API documentation need special handling (example:
https://docs.ceph.com/en/latest/rados/api/) and are expected to be
broken temporarily.
- Much better Ceph documentation experience once the migration is done.
Thanks for your patience!
Cheers,
Neha
This is the fifth backport release of the Ceph Octopus stable release
series. This release brings a range of fixes across all components. We
recommend that all Octopus users upgrade to this release.
Notable Changes
---------------
* CephFS: Automatic static subtree partitioning policies may now be configured
using the new distributed and random ephemeral pinning extended attributes on
directories. See the documentation for more information:
https://docs.ceph.com/docs/master/cephfs/multimds/
* Monitors now have a config option `mon_osd_warn_num_repaired`, 10 by default.
  If any OSD has repaired more than this many I/O errors in stored data, an
  `OSD_TOO_MANY_REPAIRS` health warning is generated.
* Now, when the noscrub and/or nodeep-scrub flags are set globally or per pool,
  scheduled scrubs of the disabled type will be aborted. All user-initiated
  scrubs are NOT interrupted.
* Fix an issue with osdmaps not being trimmed in a healthy cluster (
issue#47297, pr#36981)
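For reference, the operational knobs mentioned in the notes above can be exercised from the CLI roughly as follows. This is a sketch against a running cluster; "mypool" is a placeholder pool name, and the exact syntax should be verified against the documentation for your release:

```shell
# Raise the repair threshold behind the new OSD_TOO_MANY_REPAIRS
# warning from its default of 10, then read it back:
ceph config set mon mon_osd_warn_num_repaired 20
ceph config get mon mon_osd_warn_num_repaired

# Disable scheduled scrubs globally ...
ceph osd set noscrub
ceph osd set nodeep-scrub

# ... or for a single pool ("mypool" is a placeholder):
ceph osd pool set mypool noscrub 1
ceph osd pool set mypool nodeep-scrub 1

# Re-enable when done:
ceph osd unset noscrub
ceph osd unset nodeep-scrub
```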
For the detailed changelog please refer to the blog entry at
https://ceph.io/releases/v15-2-5-octopus-released/
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-15.2.5.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 2c93eff00150f0cc5f106a559557a58d3d7b6f1f
--
Abhishek Lekshmanan
SUSE Software Solutions Germany GmbH
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)
Hi all,
I am working on bug #47443[1]. To verify the fix[2], I need to run the
test "tasks.mgr.test_progress.TestProgress.test_osd_cannot_recover"[3]
locally but I am not aware of the correct way since I have never run
the mgr tests before.
I tried "python3 ../qa/tasks/vstart_runner.py
tasks.mgr.test_progress.TestProgress.test_osd_cannot_recover
--kclient" but the execution aborted with an AssertionError at this
line[4]. Here's the full traceback[5].
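For context, the full sequence I have been using is roughly the following, run from the build directory of a compiled source tree. The daemon counts are my own guess from the developer guide and may well be part of the problem:

```shell
# from ceph/build in a compiled source tree:
# bring up a fresh vstart cluster first; the mgr tests assume one exists,
# and some of them expect more than one mgr daemon
MON=1 MGR=2 OSD=3 MDS=1 ../src/vstart.sh -n -d

# then run the single test against that cluster
python3 ../qa/tasks/vstart_runner.py --kclient \
    tasks.mgr.test_progress.TestProgress.test_osd_cannot_recover
```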
Besides running the test with the fix, I also tried running this test
on a separate repo, whose master branch was old enough not to have the
commits from PR #32581[6] (which caused the bug), but I got the same
traceback[5].
So I think I might not be triggering the tests the right way. I looked
at the docs, specifically the developer's guide and the MGR section,
but I couldn't find anything related.
Thanks,
- Rishabh
[1] https://tracker.ceph.com/issues/47447
[2] https://github.com/ceph/ceph/pull/37159
[3] https://github.com/ceph/ceph/blob/master/qa/tasks/mgr/test_progress.py#L218
[4] https://github.com/ceph/ceph/blob/master/qa/tasks/ceph_manager.py#L1930
[5] https://paste.centos.org/view/094dec74
[6] https://github.com/ceph/ceph/pull/32581
Hi,
I have come across an old thread (2017) on the topic of rbd-nbd performance.
Here: https://www.spinics.net/lists/ceph-devel/msg36645.html
It says they tried adding multi-connection support to rbd-nbd with the
newest nbd driver, so that the nbd driver can create multiple I/O queues,
each associated with one socket connection to talk to rbd-nbd for request
sending and response receiving.
I need to work on a similar use case. My question is: does the current
rbd-nbd tool have multi-connection support by default, so that I can
create multiple I/O queues in the nbd driver?
Thanks
Bobby