On 4/5/20 1:16 PM, Marc Roos wrote:
No, I didn't get an answer to this.
Yes, that was my thought as well, but recently there has been an issue here
with an upgrade to Octopus, where OSDs are converted automatically and
consume huge amounts of memory while doing so. Furthermore, if you have a
cluster with hundreds of OSDs, it is not really acceptable to have to
recreate them all.
The upgrade to Octopus is not related to this.
The allocation size of BlueStore is set during mkfs of BlueStore. So yes,
you will need to re-create the OSDs if you want the new value applied.
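For reference, a quick way to check what a running OSD currently has configured (osd.0 is just an example ID; note this shows the configuration value, not the allocation size that was baked into the on-disk format at mkfs time):

```shell
# Show the configured SSD min alloc size for a running OSD (osd.0 as an example).
# Caveat: this reports the current config value; the size actually in effect
# was fixed when the OSD was created with mkfs.
ceph daemon osd.0 config get bluestore_min_alloc_size_ssd
```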
That takes time:
- Mark out
- Wait for HEALTH_OK
- Re-format OSD
- Mark in
- Repeat
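The loop above could be sketched roughly like this for a single OSD (the OSD id 12 and the device path are placeholders; the exact re-format step depends on how the OSD was deployed, here assumed to be ceph-volume lvm):

```shell
#!/bin/sh
# Sketch of recreating one OSD so BlueStore picks up a new min_alloc_size.
# OSD id 12 and /dev/sdX are placeholders; adapt to your deployment.
OSD=12

ceph osd out "$OSD"                      # mark out, let data drain off it
while ! ceph health | grep -q HEALTH_OK; do
    sleep 60                             # wait for recovery to finish
done

systemctl stop "ceph-osd@$OSD"           # stop the daemon
# Re-format: destroy and re-create with the same id
ceph osd destroy "$OSD" --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdX --destroy
ceph-volume lvm create --osd-id "$OSD" --data /dev/sdX

ceph osd in "$OSD"                       # mark in again; repeat for the next OSD
```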
As time goes on the developers improve things in ways that can't always be
applied automatically, so at some point you will have to do this.
Wido
>
>
>
> -----Original Message-----
> From: Brent Kennedy [mailto:bkennedy@cfl.rr.com]
> Sent: 05 April 2020 04:26
> To: Marc Roos; 'abhishek'; 'ceph-users'
> Subject: RE: [ceph-users] Re: v14.2.8 Nautilus released
>
> Did you get an answer for this? My original thought when I read it was
> that the OSD would need to be recreated (as you noted).
>
> -Brent
>
> -----Original Message-----
> From: Marc Roos <M.Roos(a)f1-outsourcing.eu>
> Sent: Tuesday, March 3, 2020 10:58 AM
> To: abhishek <abhishek(a)suse.com>; ceph-users <ceph-users(a)ceph.io>
> Subject: [ceph-users] Re: v14.2.8 Nautilus released
>
>
> This bluestore_min_alloc_size_ssd=4K, do I need to recreate these OSDs?
>
> Or does this magically change? What % performance increase can be
> expected?
>
>
> -----Original Message-----
> To: ceph-announce(a)ceph.io; ceph-users(a)ceph.io; dev(a)ceph.io;
> ceph-devel(a)vger.kernel.org
> Subject: [ceph-users] v14.2.8 Nautilus released
>
>
> This is the eighth update to the Ceph Nautilus release series. This
> release fixes issues across a range of subsystems. We recommend that all
> users upgrade to this release. Please note the following important
> changes in this release; as always the full changelog is posted at:
>
> https://ceph.io/releases/v14-2-8-nautilus-released
>
> Notable Changes
> ---------------
>
> * The default value of `bluestore_min_alloc_size_ssd` has been changed
> to 4K to improve performance across all workloads.
>
> * The following OSD memory config options related to BlueStore cache
> autotuning can now be configured at runtime:
>
> - osd_memory_base (default: 768 MB)
> - osd_memory_cache_min (default: 128 MB)
> - osd_memory_expected_fragmentation (default: 0.15)
> - osd_memory_target (default: 4 GB)
>
> The above options can be set with::
>
> ceph config set osd <option> <value>
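> As an illustration, lowering the memory target at runtime and reading it back could look like this (the 6 GiB figure is purely an example value):

```shell
# Set a new OSD memory target at runtime and confirm it took effect.
# 6442450944 bytes = 6 GiB, an example value only.
ceph config set osd osd_memory_target 6442450944
ceph config get osd osd_memory_target
```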
>
> * The MGR now accepts `profile rbd` and `profile rbd-read-only` user
> caps.
> These caps can be used to provide users access to MGR-based RBD
> functionality
> such as `rbd perf image iostat` and `rbd perf image iotop`.
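> As an illustration, a read-only monitoring user could be created with caps along these lines (the client name `client.rbd-monitor` and the pool name `rbd` are hypothetical):

```shell
# Create a user that can run the MGR-based RBD commands read-only.
# "client.rbd-monitor" and pool "rbd" are example names.
ceph auth get-or-create client.rbd-monitor \
    mon 'profile rbd-read-only' \
    mgr 'profile rbd-read-only' \
    osd 'profile rbd-read-only pool=rbd'
```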
>
> * The configuration value `osd_calc_pg_upmaps_max_stddev` used for upmap
> balancing has been removed. Instead use the mgr balancer config
> `upmap_max_deviation`, which is now an integer number of PGs of
> deviation from the target PGs per OSD. This can be set with a command
> like `ceph config set mgr mgr/balancer/upmap_max_deviation 2`. The
> default `upmap_max_deviation` is 1. There are situations where crush
> rules would not allow a pool to ever have completely balanced PGs. For
> example, if crush requires 1 replica on each of 3 racks, but there are
> fewer OSDs in 1 of the racks. In those cases, the configuration value
> can be increased.
>
> * RGW: a mismatch between the bucket notification documentation and the
> actual message format was fixed. This means that any endpoints receiving
> bucket notifications will now receive the same notifications inside a
> JSON array named 'Records'. Note that this does not affect pulling
> bucket notifications from a subscription in a 'pubsub' zone, as these
> are already wrapped inside that array.
>
> * CephFS: forward scrub with multiple active MDS daemons is now
> rejected. Scrub is currently only permitted on a file system with a
> single rank. Reduce the ranks to one via `ceph fs set <fs_name>
> max_mds 1`.
>
> * Ceph now refuses to create a file system with a default EC data pool.
> For further explanation, see:
>
> https://docs.ceph.com/docs/nautilus/cephfs/createfs/#creating-pools
>
> * Ceph will now issue a health warning if a RADOS pool has a `pg_num`
> value that is not a power of two. This can be fixed by adjusting
> the pool to a nearby power of two::
>
> ceph osd pool set <pool-name> pg_num <new-pg-num>
>
> Alternatively, the warning can be silenced with::
>
> ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false
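> To pick the nearby power of two for a given `pg_num`, a small shell calculation like this works (pg=200 is an arbitrary example value):

```shell
# Round a pg_num to the nearest power of two (200 is an arbitrary example).
pg=200
lower=1
while [ $((lower * 2)) -le "$pg" ]; do
    lower=$((lower * 2))                 # largest power of two <= pg
done
upper=$((lower * 2))                     # smallest power of two > pg
if [ $((pg - lower)) -le $((upper - pg)) ]; then
    target=$lower                        # pg is closer to the lower power
else
    target=$upper                        # pg is closer to the upper power
fi
echo "$target"                           # prints 256 for pg=200
```

> The result would then be applied with `ceph osd pool set <pool-name> pg_num <new-pg-num>` as shown above.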
>
> Getting Ceph
> ------------
>
> * Git at git://github.com/ceph/ceph.git
> * Tarball at http://download.ceph.com/tarballs/ceph-14.2.8.tar.gz
> * For packages, see http://docs.ceph.com/docs/master/install/get-packages/
> * Release git sha1: 2d095e947a02261ce61424021bb43bd3022d35cb
>
> --
> Abhishek Lekshmanan
> SUSE Software Solutions Germany GmbH
> GF: Felix Imendörffer HRB 21284 (AG Nürnberg)
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io