Hi all,
as part of the Ceph Foundation, we're considering re-launching the Ceph
website and migrating it away from a dated WordPress installation to Jekyll,
backed by Git et al. (Either hosted on our own infrastructure or even on
GitHub Pages.)
This would involve building/customizing a Jekyll theme, providing
feedback on the site structure proposal and usability, migrating content
(where appropriate) from the existing site, and working with the Ceph
infra team on getting it hosted/deployed.
Some help with improving the design would be welcome.
Content creation isn't necessarily part of the requirements, but working
with stakeholders on filling in blanks is; and if we could get someone
savvy with Ceph who wants to fill in a few pages, that's a plus!
After the launch, we should be mostly self-sufficient again for
day-to-day tasks.
If that's the kind of contract work you or a friend is interested in,
please reach out to me.
(The Foundation hasn't yet approved the budget; we're still trying to
get a feel for the funding required. But I'd be fairly optimistic.)
Regards,
Lars
--
SUSE Software Solutions Germany GmbH, MD: Felix Imendörffer, HRB 36809 (AG Nürnberg)
"Architects should open possibilities and not determine everything." (Ueli Zbinden)
>
> 1. Should I use a RAID controller and create, for example, a RAID 5 with all disks on each OSD server? Or should I pass all disks through to Ceph OSDs?
>
> If your OSD servers have HDDs, buy a good RAID controller with a battery-backed write cache and configure it using multiple RAID-0 volumes (1 physical disk per volume). That way, reads and writes will be accelerated by the cache on the HBA.
I’ve lived this scenario and hated it: multiple firmware and manufacturing issues; batteries/supercaps that can fail and need to be monitored; bugs causing staged data to be lost before it was written to disk; another bug that required replacing the card if there was preserved cache for a failed drive, because it would refuse to boot; difficulties in drive monitoring; an HBA monitoring utility that would lock the HBA or peg the CPU; the list goes on.
For the additional cost of the RoC, cache RAM, a supercap to (fingers crossed) protect the cache, and all the additional monitoring and hands-on work … you might find that SATA SSDs on a JBOD HBA are no more expensive.
> 3. If I have an OSD cluster with 3 physical nodes, do I need 5 physical MONs?
> No, 3 MONs are enough.
If you have good remote hands and spares. If your cluster is on a different continent and the colo hands can’t find their own butts … it’s nice to survive a double failure.
ymmv
Hello everyone,
we want to build a production Ceph storage system in our datacenter in May this year, together with OpenStack and UCS. I have tested a lot in my Ceph test environment, and I have some general questions.
What's recommended?
1. Should I use a RAID controller and create, for example, a RAID 5 with all disks on each OSD server? Or should I pass all disks through to Ceph OSDs?
2. If I have an OSD cluster with 2 physical nodes, do I need 3 physical MONs?
3. If I have an OSD cluster with 3 physical nodes, do I need 5 physical MONs?
4. Where should I install the MGRs? On the OSD nodes or on the MON nodes?
5. Where should I install the RGWs? On the OSD nodes, on the MON nodes, or on 1 or 2 separate machines?
In my test lab I created 3 OSD VMs with the MGR installed, 5 MON VMs, and 1 VM as RGW -> is this correct?
Thanks in advance
hfreidhof
Hello,
How frequently do RBD device names get reused? For instance, when I map a
volume on a client and it gets mapped to /dev/rbd0, and it is later unmapped,
does a subsequent map reuse this name right away?
I ask because, in our use case, we try to unmap a volume and we are thinking
about adding some retries in case the unmap fails for any reason. But I am
concerned about race conditions such as the following:
1. Thread 1 calls unmap, but the call times out and returns; in the
background, however, the unmap request does go through and the device gets
removed.
2. Thread 1 retries the unmap based on the device name.
If, between steps 1 and 2, another thread maps another volume and it gets
assigned the same device name right after the previous unmap succeeded, then
in step 2 we will be trying to unmap a device that doesn't belong to the
previous map.
So I want to know how frequently device names get reused, and whether there
is a way to keep assigning new names until they wrap around after a maximum
limit.
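For illustration, here is roughly the shape of retry we are considering, written so that it never references the device name at all. This is only a rough sketch: the function name, the pool/image spec, the timeout and the showmapped check are made up for illustration, and it assumes an rbd CLI that accepts an image spec (pool/image) instead of a device path for unmap.

    import subprocess

    def unmap_with_retry(image_spec, retries=3, timeout=30):
        # Retry "rbd unmap" by image spec (e.g. "rbd/vol1") instead of
        # /dev/rbdX, so a retry can never hit a device name that was
        # reused by another thread's map in the meantime.
        for _ in range(retries):
            try:
                subprocess.run(["rbd", "unmap", image_spec],
                               check=True, timeout=timeout)
                return True
            except subprocess.TimeoutExpired:
                # The unmap may still complete in the background; loop
                # and let the next attempt (or the check below) decide.
                continue
            except subprocess.CalledProcessError:
                # Could mean "already unmapped" after a timed-out call
                # that actually went through; verify before giving up.
                mapped = subprocess.run(["rbd", "showmapped"],
                                        capture_output=True,
                                        text=True).stdout
                if image_spec.split("/")[-1] not in mapped:
                    return True
        return False

Does something along these lines make sense, or is retrying by device name safe after all?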
Thanks,
Shridhar
Dear all,
we are in the process of migrating a Ceph file system from a 2-pool layout (rep meta + EC data) to the recently recommended 3-pool layout (rep meta, rep primary data, EC data). As part of this, we need to migrate any ceph xattrs set on files and directories. As these are no longer discoverable, how would one go about this?
Special cases:
How to migrate quota settings?
How to migrate dir- and file-layouts?
Ideally, at least quota attributes should be transferable on the fly with tools like rsync.
If automatic migration is not possible, is there at least an efficient way to *find* everything with special ceph attributes?
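For reference, this is the kind of brute-force probing I had in mind if querying the known virtual xattrs by name is the only option. It is only a sketch, and only for directories: the attribute names are the documented ceph.quota.* and ceph.dir.layout ones (files would need ceph.file.layout instead), the paths and the handling of unset attributes are assumptions, and whether an unset quota raises an error or returns "0" may depend on the version.

    import os

    # Ceph's virtual xattrs don't show up in listxattr(), so "finding"
    # them seems to require probing known names on every directory.
    PROBE_ATTRS = ["ceph.quota.max_bytes", "ceph.quota.max_files",
                   "ceph.dir.layout"]

    def find_special_dirs(root):
        hits = {}
        for dirpath, dirnames, filenames in os.walk(root):
            for name in PROBE_ATTRS:
                try:
                    value = os.getxattr(dirpath, name)
                except OSError:
                    continue  # attribute not set (or not readable) here
                if value not in (b"", b"0"):
                    hits.setdefault(dirpath, {})[name] = value
        return hits

    # Quotas could then be re-applied on the new tree, e.g.:
    # for path, attrs in find_special_dirs("/mnt/cephfs-old").items():
    #     new_path = path.replace("/mnt/cephfs-old", "/mnt/cephfs-new", 1)
    #     for name, value in attrs.items():
    #         if name.startswith("ceph.quota."):
    #             os.setxattr(new_path, name, value)

Is there anything smarter than walking the whole tree like this?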
Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
I have successfully installed a ceph storage cluster with a radosgw ceph object gateway daemon.
I have been able to create an S3 bucket using a Python boto3 script and a radosgw user's key and secret.
I cannot find much documentation concerning the use of buckets to store objects, and since this is a new concept for me, I was wondering if anyone with experience could help me out.
I am hoping to use a frontend to manage ceph that requires s3 compatibility.
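For reference, this is roughly how far I have gotten with boto3, plus the object put/get calls I am guessing at; the endpoint, credentials, bucket and key names are placeholders for my own values, and the put_object/get_object/list_objects_v2 part is what I would like someone to confirm is the right approach against radosgw.

    import boto3

    # Placeholders: endpoint, access/secret key, bucket and object names.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:7480",
        aws_access_key_id="MY_ACCESS_KEY",
        aws_secret_access_key="MY_SECRET_KEY",
    )

    s3.create_bucket(Bucket="my-bucket")  # this part already works for me

    # Storing and reading back an object:
    s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello ceph")
    data = s3.get_object(Bucket="my-bucket", Key="hello.txt")["Body"].read()
    print(data)

    # Listing what is in the bucket:
    for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
        print(obj["Key"], obj["Size"])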
Hi,
Is there a way to override the default dashboard container image hardcoded in cephadm [1] ?
For the Ceph container image it is possible to change the default value [2] either:
- via cephadm cli and CEPHADM_IMAGE environment variable
- via cephadm cli and the --image parameter
- via the container_image orchestrator parameter (i.e. ceph config set global container_image xxx/yyy:zzz)
But I'm not able to find anything similar for the dashboard stack (alertmanager, grafana, node-exporter and prometheus).
Any ideas?
Regards,
Dimitri
[1] https://github.com/ceph/ceph/blob/master/src/cephadm/cephadm#L114-L160
[2] https://github.com/ceph/ceph/blob/master/src/cephadm/cephadm#L3