Thanks a lot for the information! We will consider it and get back to you if we have
any questions. Thanks again.
From: Martin Verges <martin.verges(a)croit.io>
Sent: Thursday, November 28, 2019 9:09 PM
To: Sang, Oliver <oliver.sang(a)intel.com>
Cc: Sage Weil <sweil(a)redhat.com>; ceph-announce(a)ceph.io; ceph-users(a)ceph.io;
dev(a)ceph.io; Li, Philip <philip.li(a)intel.com>
Subject: Re: [ceph-users] Re: v13.2.7 mimic released
Hello,
we (croit GmbH) are a founding member of the Ceph foundation and we build the packages
from the official git repository to ship it with our own solution.
However, we are not Ceph itself and so this is not an official mirror.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges@croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
On Thu, 28 Nov 2019 at 04:24, Sang, Oliver <oliver.sang@intel.com> wrote:
Thanks a lot for the information!
What is the relationship of this mirror to the official Ceph website?
Basically, we want to use an official release and are hesitant to use a third-party
build.
From: Martin Verges <martin.verges@croit.io>
Sent: Wednesday, November 27, 2019 9:58 PM
To: Sang, Oliver <oliver.sang@intel.com>
Cc: Sage Weil <sweil@redhat.com>; ceph-announce@ceph.io; ceph-users@ceph.io;
dev@ceph.io
Subject: Re: [ceph-users] Re: v13.2.7 mimic released
Hello,
As far as I know, Mimic and Nautilus are still not available on Debian. Unfortunately,
we do not provide Mimic on our mirror for Debian 10 Buster. But if you want to migrate
to Nautilus, feel free to use our public mirrors described at
https://croit.io/2019/07/07/2019-07-07-debian-mirror.
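For reference, pointing apt at such a mirror is typically a one-line sources entry. The path below is only an illustration of the usual Debian repository layout; the authoritative repository line and signing key are in the blog post linked above.

```
# /etc/apt/sources.list.d/ceph.list -- hypothetical entry; see the blog
# post linked above for the authoritative repository line and signing key
deb https://mirror.croit.io/debian-nautilus/ buster main
```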
On Wed, 27 Nov 2019 at 14:38, Sang, Oliver <oliver.sang@intel.com> wrote:
Can this version be installed on Debian 10?
If not, is there a plan for Mimic to support Debian 10?
-----Original Message-----
From: Sage Weil <sweil@redhat.com>
Sent: Monday, November 25, 2019 10:50 PM
To: ceph-announce@ceph.io; ceph-users@ceph.io; dev@ceph.io
Subject: [ceph-users] v13.2.7 mimic released
This is the seventh bugfix release of the Mimic v13.2.x long-term stable release series.
We recommend that all Mimic users upgrade.
For the full release notes, see
https://ceph.io/releases/v13-2-7-mimic-released/
Notable Changes
MDS:
- Cache trimming is now throttled. Dropping the MDS cache via the “ceph tell
mds.<foo> cache drop” command or large reductions in the cache size will no longer
cause service unavailability.
- Cap recall behavior has been significantly improved: the MDS no longer attempts to
recall too many caps at once, which previously led to instability. MDSs with a large
cache (64 GB+) should be more stable.
- The MDS now provides a config option “mds_max_caps_per_client” (default:
1M) to limit the number of caps a client session may hold. Long-running client sessions
with a large number of caps have been a source of MDS instability when all of these
caps need to be processed during certain session events. It is recommended not to
increase this value unnecessarily.
- The “mds_recall_state_timeout” config parameter has been removed. Late client recall
warnings are now generated based on the number of caps the MDS has recalled which have not
been released. The new config parameters “mds_recall_warning_threshold” (default: 32K) and
“mds_recall_warning_decay_rate” (default: 60s) control when this warning is generated.
- The “cache drop” admin socket command has been removed. The “ceph tell mds.X cache
drop” command remains.
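If these limits need pinning on a cluster, they map onto plain ceph.conf options. A minimal sketch using the default values stated above (1M caps = 1048576, 32K = 32768); the values are illustrative, not a tuning recommendation:

```
# ceph.conf fragment (sketch): the MDS cap options described above, at
# their stated defaults. Raising mds_max_caps_per_client is discouraged.
[mds]
mds_max_caps_per_client = 1048576
mds_recall_warning_threshold = 32768
mds_recall_warning_decay_rate = 60
```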
OSD:
- A health warning is now generated if the average OSD heartbeat ping time exceeds a
configurable threshold for any of the computed intervals.
The OSD computes 1-minute, 5-minute, and 15-minute intervals with average, minimum, and
maximum values. The new configuration option “mon_warn_on_slow_ping_ratio” specifies a
percentage of “osd_heartbeat_grace” to determine the threshold; a value of zero disables
the warning. The new configuration option “mon_warn_on_slow_ping_time”, specified in
milliseconds, overrides the computed value, causing a warning when OSD heartbeat pings
take longer than the specified amount. A new admin command, “ceph daemon mgr.#
dump_osd_network [threshold]”, lists all connections whose average ping time over any of
the three intervals exceeds the specified threshold (or the value determined by the
config options). A new admin command, “ceph daemon osd.# dump_osd_network [threshold]”,
does the same but includes only heartbeats initiated by the specified OSD.
- The default value of the
“osd_deep_scrub_large_omap_object_key_threshold” parameter has been lowered so that
objects with a large number of omap keys are detected more easily.
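To make the ratio-based threshold concrete, here is a small sketch (not cluster code) that computes the warning cutoff from the two options. It assumes the stock defaults of osd_heartbeat_grace = 20 seconds and mon_warn_on_slow_ping_ratio = 0.05; both defaults are assumptions and should be checked against your cluster's configuration.

```shell
# Sketch: derive the slow-ping warning cutoff in milliseconds from the
# ratio option. The defaults below are assumed, not read from a cluster.
osd_heartbeat_grace=20            # seconds
mon_warn_on_slow_ping_ratio=0.05  # fraction of the grace period
# cutoff (ms) = grace (s) * ratio * 1000
threshold_ms=$(awk -v g="$osd_heartbeat_grace" -v r="$mon_warn_on_slow_ping_ratio" \
  'BEGIN { printf "%d", g * r * 1000 }')
echo "slow-ping warning cutoff: ${threshold_ms} ms"
# A nonzero mon_warn_on_slow_ping_time (in ms) overrides this computed value.
```

With these assumed defaults the cutoff comes out at 1000 ms, i.e. a sustained average heartbeat ping above one second would raise the warning.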
RGW:
- radosgw-admin introduces two subcommands for managing expire-stale objects that might
be left behind after a bucket reshard in earlier versions of RGW. One subcommand lists
such objects and the other deletes them. Read the troubleshooting section of the dynamic
resharding docs for details.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io