Hello,
as far as I know, Mimic and Nautilus are still not available on Debian.
Unfortunately we do not provide Mimic on our mirror for Debian 10 (Buster).
But if you want to migrate to Nautilus, feel free to use our public mirrors
described at
https://croit.io/2019/07/07/2019-07-07-debian-mirror.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges(a)croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
On Wed, 27 Nov 2019 at 14:38, Sang, Oliver <oliver.sang(a)intel.com> wrote:
can this version be installed on Debian 10?
If not, is there a plan for Mimic to support Debian 10?
-----Original Message-----
From: Sage Weil <sweil(a)redhat.com>
Sent: Monday, November 25, 2019 10:50 PM
To: ceph-announce(a)ceph.io; ceph-users(a)ceph.io; dev(a)ceph.io
Subject: [ceph-users] v13.2.7 mimic released
This is the seventh bugfix release of the Mimic v13.2.x long term stable
release series. We recommend all Mimic users upgrade.
For the full release notes, see
https://ceph.io/releases/v13-2-7-mimic-released/
Notable Changes
MDS:
- Cache trimming is now throttled. Dropping the MDS cache via the “ceph
tell mds.<foo> cache drop” command or large reductions in the cache size
will no longer cause service unavailability.
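As an illustrative sketch (the MDS name “a” and the 30-second timeout are
assumptions, not from the release notes), the throttled drop is invoked via
the tell interface:

```shell
# Ask the MDS named "a" to drop its cache; trimming is now throttled,
# so a large drop should no longer make the MDS unresponsive.
ceph tell mds.a cache drop

# An optional timeout (seconds) bounds how long the MDS waits for
# clients to release caps before it stops trying.
ceph tell mds.a cache drop 30
```

Both forms require access to a running cluster with an admin keyring.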
- Cap recall behavior has been significantly improved so that the MDS no
longer attempts to recall too many caps at once, which previously led to
instability. MDSs with a large cache (64GB+) should be more stable.
- MDS now provides a config option “mds_max_caps_per_client” (default:
1M) to limit the number of caps a client session may hold. Long-running
client sessions with a large number of caps have been a source of
instability in the MDS when all of these caps need to be processed during
certain session events. It is recommended not to increase this value
unnecessarily.
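A minimal sketch of inspecting and adjusting the new limit with the
centralized config store (the raised value of 2M is a hypothetical example,
not a recommendation):

```shell
# Show the current per-client cap limit (default: 1M = 1048576).
ceph config get mds mds_max_caps_per_client

# Raise it only if you have measured a real need; very large values
# are exactly the instability source this option guards against.
ceph config set mds mds_max_caps_per_client 2097152
```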
- The “mds_recall_state_timeout” config parameter has been removed. Late
client recall warnings are now generated based on the number of caps the
MDS has recalled which have not been released. The new config parameters
“mds_recall_warning_threshold” (default: 32K) and
“mds_recall_warning_decay_rate” (default: 60s) set the threshold for this
warning.
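If you need to tune the warning rather than disable it, a sketch using the
defaults stated above as a starting point (values are the documented
defaults, restated here explicitly):

```shell
# Warn after this many recalled-but-unreleased caps (default 32K).
ceph config set mds mds_recall_warning_threshold 32768

# Decay rate, in seconds, applied to the recall counter (default 60 s).
ceph config set mds mds_recall_warning_decay_rate 60
```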
- The “cache drop” admin socket command has been removed. The “ceph tell
mds.X cache drop” command remains.
OSD:
- A health warning is now generated if the average osd heartbeat ping
time exceeds a configurable threshold for any of the intervals computed.
The OSD computes 1 minute, 5 minute and 15 minute intervals with average,
minimum and maximum values. New configuration option
“mon_warn_on_slow_ping_ratio” specifies a percentage of
“osd_heartbeat_grace” to determine the threshold. A value of zero disables
the warning. A new configuration option “mon_warn_on_slow_ping_time”,
specified in milliseconds, overrides the computed value, causing a warning
when OSD heartbeat pings take longer than the specified amount. A new admin
command “ceph daemon mgr.# dump_osd_network [threshold]” lists all
connections with an average ping time, over any of the 3 intervals, longer
than the specified threshold or the value determined by the config options.
A new admin command “ceph daemon osd.# dump_osd_network [threshold]” does
the same but includes only heartbeats initiated by the specified OSD.
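To sketch how the effective warning threshold falls out of the config
options (assuming the stock defaults mon_warn_on_slow_ping_ratio = 0.05 and
osd_heartbeat_grace = 20 s; verify both on your cluster, as they are
assumptions here):

```shell
# Effective threshold in ms = ratio * grace = 0.05 * 20 s = 1000 ms.
awk 'BEGIN { printf "%d\n", 0.05 * 20 * 1000 }'   # prints 1000

# List connections whose average ping in any interval exceeds 1000 ms
# (needs a live cluster; replace "#" with your mgr daemon id):
# ceph daemon mgr.# dump_osd_network 1000
```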
- The default value of the
“osd_deep_scrub_large_omap_object_key_threshold” parameter has been
lowered so that objects with a large number of omap keys are detected more
easily.
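To see what your cluster actually uses after the upgrade (the release notes
do not state the new number here, so check rather than assume):

```shell
# Print the effective threshold for the large-omap deep-scrub warning.
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
```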
RGW:
- radosgw-admin introduces two subcommands for managing expire-stale
objects that might be left behind after a bucket reshard in earlier
versions of RGW. One subcommand lists such objects and the other deletes
them. See the troubleshooting section of the dynamic resharding docs for
details.
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io