It works well for me; I've been running a couple of clusters for 1-2 years in which all OSD
hosts (~200) have no system disks and instead netboot via PXE.
No NFS server is involved: each host loads the same system image (a Debian Live squashfs)
into memory on boot and runs independently from there on out. It takes some trickery to
configure the hosts and bring the OSDs up on boot (Puppet in my case), though that might
get easier with the containerized approach in Ceph 15+.
Best,
Eric
On 21 Mar 2020, at 14:18, huxiaoyu(a)horebdata.cn wrote:
Hi, Marc,
Indeed, PXE boot makes a lot of sense in a large cluster, cutting down the OS deployment
and management burden, but only if there is no single point of failure...
best regards,
samuel
huxiaoyu(a)horebdata.cn
From: Marc Roos
Date: 2020-03-21 14:13
To: ceph-users; huxiaoyu; martin.verges
Subject: RE: [ceph-users] Questions on Ceph cluster without OS disks
I would say it is not a 'proven technology', otherwise you would see widespread
implementation and adoption of this method. However, if you really need the physical disk
space, it is a solution. I would also have questions about building an extra redundant
environment to serve remote booting just to free up an OS disk slot. Maybe this makes more
sense in really big environments.
-----Original Message-----
From: huxiaoyu(a)horebdata.cn [mailto:huxiaoyu@horebdata.cn]
Sent: 21 March 2020 13:54
To: Martin Verges; ceph-users
Subject: [ceph-users] Questions on Ceph cluster without OS disks
Hello, Martin,
I notice that Croit advocates running Ceph clusters without OS disks, using PXE boot
instead.
Do you use an NFS server to serve the root file system for each node, e.g. to host
configuration files, users and passwords, log files, etc.?
My question is: would the NFS server be a single point of failure? If the NFS server goes
down, or the network experiences an outage, the Ceph nodes may not be able to write to
their local file systems, possibly leading to a service outage.
How do you deal with the above potential issues in production? I am a
bit worried...
best regards,
samuel
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io