Hi Bob,
The way I work around that issue is by checking whether the file for each hard
disk already exists; if it doesn't, I create it. Take a look here:
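Roughly, the pattern looks like this (a minimal sketch only — the disk paths,
sizes, node name, and the controller name "SATA Controller" are illustrative,
not the exact values from my setup):

```ruby
# Illustrative Vagrantfile excerpt: create each extra disk only if its
# backing file does not already exist, so repeated vagrant up/halt
# cycles do not try to re-create media that VirtualBox already knows.
Vagrant.configure("2") do |config|
  config.vm.define "osd1" do |node|
    node.vm.provider "virtualbox" do |vb|
      (0..1).each do |i|
        disk = File.join(File.dirname(__FILE__), "osd1-disk#{i}.vdi")
        unless File.exist?(disk)
          # Only runs on the first `vagrant up`; afterwards the file
          # exists and the createmedium step is skipped.
          vb.customize ["createmedium", "disk", "--filename", disk,
                        "--size", 10 * 1024] # size is in MB -> 10 GB
        end
        # Attach the disk to the SATA controller supplied by the base
        # box rather than adding a second controller.
        vb.customize ["storageattach", :id,
                      "--storagectl", "SATA Controller",
                      "--port", i + 1, "--device", 0,
                      "--type", "hdd", "--medium", disk]
      end
    end
  end
end
```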
That allows me to do vagrant up/halt multiple times without any problem.
Thanks!
On Fri, Feb 14, 2020 at 10:42 AM Bob Wassell <bob(a)softiron.com> wrote:
I’ve found that having more than one drive controller in Vagrant is
problematic, although I agree it would be the logical choice for more than
one OSD per node. That said, have you all run into the problems I’ve seen
with more than one HDD controller? Namely, `vagrant up` fails after a
machine has been provisioned and halted. In my experience, once you
provision, you can no longer use Vagrant for up/halt and have to fall back
to VirtualBox commands instead. VirtualBox does not check whether the drive
controller already exists and gets confused. Is that consistent with what
you’ve found? I’ve spent some time on this issue, so I’m curious whether
you’ve found any “magic grits” to solve it. The issue occurs on both *nix
and Windows, by the way. (It is certainly a VirtualBox issue, not Vagrant’s
fault, in my opinion.)
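To be concrete, the failure mode I mean comes from a setup along these lines
(a hedged sketch — the controller name and port count are made up for
illustration):

```ruby
# Illustrative Vagrantfile excerpt showing the problematic shape:
# `storagectl --add` is not idempotent, so it succeeds on the first
# provision but errors on the next `vagrant up`, because VirtualBox
# refuses to add a controller with a name that already exists.
node.vm.provider "virtualbox" do |vb|
  vb.customize ["storagectl", :id, "--name", "OSD Controller",
                "--add", "sata", "--portcount", 4]
end
```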
Thank you in advance for your thoughts; anything that jump-starts newcomers
past a Ceph installation is a great thing!
Bob
*Bob Wassell*
Solutions Architect | SoftIron <http://www.softiron.com>
+1 610 505 9861
bob(a)softiron.com
On Feb 14, 2020, at 11:50 AM, Ignacio Ocampo <nafiux(a)gmail.com> wrote:
Hi all,
A group of friends and I are documenting a hands-on Ceph workshop for
learning purposes: https://github.com/Nafiux/ceph-workshop
The idea is to provide step-by-step visibility into common scenarios, from
basic usage to disaster and recovery.
We will hold a workshop next weekend, and some of the ideas for learning
are:
- Configure and learn how to consume Block Storage Devices
- Configure and learn how to consume File Systems Storage
- Configure and learn how to consume Object Storage
- Simulate a disaster and recovery event by killing a node and setting
up a new one
- Simulate a node migration
Any ideas or feedback are welcome: the way we've decided to install Ceph
(ceph-ansible), the way we're configuring the cluster, suggestions on basic
day-to-day operations we should learn, etc.
Thanks for your support!
--
Ignacio Ocampo
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io