Hello Jared,
you could use croit to manage the OSDs and drop that work entirely. You would
just need to restart the hosts one by one over the PXE network to migrate.
Besides that, you can freshly install Ubuntu from within the running
environment using debootstrap and then just restart the host. However, that
is quite tricky and not recommended for inexperienced users.
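To make the debootstrap approach concrete, here is a rough, hedged sketch of the idea; the device name, mount point, and release are placeholder assumptions, and real installs need more configuration (fstab, network, users) than shown:

```shell
# Hypothetical sketch: bootstrap Ubuntu 18.04 (bionic) into a spare
# partition from the running system, then chroot in to make it bootable.
# /dev/sda3 is an assumed spare root partition -- adjust for your layout.
apt-get install -y debootstrap
mkfs.ext4 /dev/sda3
mount /dev/sda3 /mnt
debootstrap bionic /mnt http://archive.ubuntu.com/ubuntu

# Bind-mount pseudo filesystems so the chroot can install a kernel/grub
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
chroot /mnt apt-get install -y linux-image-generic grub-pc
# ...then configure fstab/networking inside the chroot, install grub to
# the disk, and reboot into the new root.
```

This only illustrates the mechanism; getting the bootloader and network config right on the first reboot is exactly the tricky part mentioned above.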
If you don't touch the Ceph disks at all, the services will come up again
without any changes needed on your side. Still, sometimes it's better to
clean up some old mess and do it the way you currently do.
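For LVM-based OSDs, bringing them back after a reinstall is mostly a matter of restoring the cluster config and letting ceph-volume rediscover the disks. A minimal sketch, assuming you copy ceph.conf and the bootstrap-osd keyring from a monitor host (paths and hostname are illustrative):

```shell
# On the freshly installed host: install packages, restore config and
# keyring, then let ceph-volume scan LVM tags and start the OSD units.
apt-get install -y ceph-osd
scp mon-host:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
scp mon-host:/var/lib/ceph/bootstrap-osd/ceph.keyring \
    /var/lib/ceph/bootstrap-osd/ceph.keyring
ceph-volume lvm activate --all   # reads OSD metadata from LVM tags
```

Since the OSD data and metadata live on the untouched disks, nothing has to be re-registered with the cluster; the OSDs simply boot and rejoin.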
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges(a)croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
On Fri, 19 Jun 2020 at 16:29, shubjero <shubjero(a)gmail.com> wrote:
Hi all,
I have a 39-node, 1404-spinning-disk Ceph Mimic cluster across 6 racks,
for a total of 9.1 PiB raw, about 40% utilized. These storage nodes
started their life on Ubuntu 14.04 and were in-place upgraded to 16.04
two years ago. However, I have started a project to do a fresh install
of Ubuntu 18.04 on each OSD node to keep things fresh and well supported.
I am reaching out to see what others might suggest to get these hosts
updated quicker than my current approach.
Current strategy:
1. Pick 3 nodes and drain them by lowering their crush weights
2. Fresh install 18.04 using an automation tool (MAAS) plus some Ansible
playbooks to set up the server
3. Purge the node's worth of OSDs (this causes data to be 'misplaced'
due to the rack weight changing)
4. Run ceph-volume lvm batch for the OSD node
5. Move the OSDs into the desired hosts in the crush map (large
rebalance to fill back up)
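For reference, the drain in step 1 is typically a loop over the host's OSDs, walking their crush weights down and waiting for backfill; the OSD IDs below are placeholders:

```shell
# Drain every OSD on the host by setting its crush weight to 0,
# then wait for backfill to finish before reinstalling the OS.
for osd in 100 101 102; do          # placeholder OSD IDs for this host
  ceph osd crush reweight osd.$osd 0
done
ceph -s                             # watch until PGs are active+clean
```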
If anyone has suggestions on a quicker way to do this, I am all ears.
I am wondering whether it's necessary to drain/fill OSD nodes at all, or
whether this can be done with just a fresh install that doesn't touch
the OSDs. However, I don't know how to perform a fresh installation and
then tell Ceph that I have OSDs with data on them and somehow
re-register them with the cluster. Or is there a better order of
operations for draining/filling that avoids causing a large number of
objects to be misplaced by manipulating the crush map?
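One common way to avoid the misplaced-object churn entirely is to fence off recovery while a host is briefly down for reinstall, rather than draining it; a hedged sketch, assuming the reinstall window is short enough to tolerate reduced redundancy:

```shell
# Before taking the host down: stop Ceph from reacting to its absence.
ceph osd set noout        # don't mark the down OSDs out
ceph osd set norebalance  # don't start shuffling data around

# ... reinstall the OS, restore config, re-activate the OSDs ...

# Once the OSDs have rejoined and peered, clear the flags.
ceph osd unset norebalance
ceph osd unset noout
```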
That being said, our cluster is a bit older, and the majority of our
BlueStore OSDs are provisioned with the 'simple' method, using a small
metadata partition and the remainder as a raw partition, whereas the
suggested way now seems to be the LVM layout with tmpfs.
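For what it's worth, 'simple'-style partition-based OSDs can usually be re-registered on a freshly installed host with ceph-volume's simple subcommands; a sketch, where the data partition path is a placeholder:

```shell
# Scan an existing partition-based OSD; this writes the discovered
# metadata to a JSON file under /etc/ceph/osd/.
ceph-volume simple scan /dev/sdb1    # placeholder data partition

# Activate everything found by the scan and enable systemd units so
# the OSDs come up on boot.
ceph-volume simple activate --all
```

This would let a reinstalled host adopt its untouched 'simple' OSDs without purging and recreating them.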
Anyways, I'm all ears and appreciate any feedback.
Jared Baker
Ontario Institute for Cancer Research
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io