Please disregard this question; I figured it out.
Tony
> -----Original Message-----
> From: Tony Liu <tonyliu0592(a)hotmail.com>
> Sent: Thursday, August 27, 2020 1:55 PM
> To: ceph-users(a)ceph.io
> Subject: [ceph-users] [cephadm] Deploy Ceph in a closed environment
>
> Hi,
>
> I'd like to deploy Ceph in a closed environment (no connectivity to
> public). I will build repository and registry to hold required packages
> and container images. How do I specify the private registry when running
> "cephadm bootstrap"? The same question for adding OSD.
>
> Thanks!
> Tony
>
Hi,
I'd like to deploy Ceph in a closed environment (no connectivity
to public). I will build repository and registry to hold required
packages and container images. How do I specify the private
registry when running "cephadm bootstrap"? The same question for
adding OSD.
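For the record, newer cephadm releases expose registry flags at bootstrap;
a minimal sketch, assuming a hypothetical mirror at registry.local:5000:

    cephadm --image registry.local:5000/ceph/ceph:v15 bootstrap \
        --mon-ip 10.0.0.11 \
        --registry-url registry.local:5000 \
        --registry-username mirror-user \
        --registry-password mirror-pass

For OSDs, once the cluster has the registry login, something like
"ceph orch daemon add osd host1:/dev/sdb" should pull the same image.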
Thanks!
Tony
Am I the only one who thinks it is unnecessary to dump these keys with
every command (ls and get)? Either remove the keys from the output of
"auth ls" and "auth get", or remove the commands "auth print_key",
"auth print-key" and "auth get-key".
Hello,
Looking for a bit of guidance / approach to upgrading from Nautilus to
Octopus considering CentOS and Ceph-Ansible.
We're presently running a Nautilus cluster (all nodes / daemons 14.2.11 as
of this post).
- There are 4 monitor-hosts with mon, mgr, and dashboard functions
consolidated;
- 4 RGW hosts
- 4 OSD hosts, with 10 OSDs each. This is planned to scale to 7 nodes
with additional OSDs and capacity (we are considering doing this as part
of the upgrade process);
- Currently using ceph-ansible (however, maintaining scripts / configs
between playbook versions takes real effort; although a great framework,
it is not ideal in our case);
- All hosts run CentOS 7.x;
- dm-crypt in use on LVM OSDs (via ceph-ansible);
- Deployment IS NOT containerized.
Octopus support on CentOS 7 is limited due to Python dependencies; as a
result we want to move to CentOS 8 or Ubuntu 20.04. The other outlier is
CentOS native kernel support for LSI2008 (e.g. 9211) HBAs, which some of
our OSD nodes use.
Irrespective of OS considerations above, the upgrade will be to an OS that
fully supports Octopus.
We'd like to make use of the Ceph orchestrator for ongoing cluster management.
Here's an upgrade path scenario that is being considered. At a high-level:
1. Deploy a new monitor on CentOS 8. This may be Nautilus via the
established ceph-ansible playbook.
2. Upgrade the new monitor to Octopus via a dnf / ceph package upgrade.
3. Decommission the individual monitor hosts (currently on CentOS 7) and
redeploy them on CentOS 8 via ceph orchestrator from the new monitor node;
4. Repeat until all monitors are on the new OS + Octopus (all deployed via
Ceph Orchestrator);
5. Add additional OSD nodes / drives / capacity via orchestrator on
Octopus;
6. Upgrade existing OSD hosts by keeping OSDs intact and reinstalling a
new OS (CentOS 8 or Ubuntu 20.04);
7. Deploy Ceph Octopus on the reinstalled nodes via orchestrator;
8. Reactivate / rescan the intact OSDs on each newly redeployed node (i.e.
"ceph-volume lvm activate --all"; see the sketch after this list);
9. Rinse / repeat for remaining Nautilus nodes.
10. Manually upgrade RGW packages on gateway nodes.
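For steps 7-8, a minimal sketch of re-adopting an intact OSD host
(hostname hypothetical; assumes ceph.conf and the relevant keyrings are
restored on the reinstalled node first):

    # from a node with orchestrator access: bring the host under management
    ceph orch host add osd-node-1

    # on the reinstalled node: scan LVM tags and start all surviving OSDs
    ceph-volume lvm activate --all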
Thank you.
The official documentation says you should allocate 4% of the slow
device's space for block.db.
But the main problem is that BlueStore uses RocksDB, and RocksDB puts a
file on the fast device only if it thinks the whole level will fit there.
For RocksDB, L1 is about 300M, L2 is about 3G, L3 is near 30G, and L4 is
about 300G.
For instance, RocksDB puts L2 files on block.db only if at least 3G is
available there.
As a result, 30G is an acceptable value.
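A back-of-the-envelope check, assuming the stock RocksDB defaults
(max_bytes_for_level_base = 256M, level multiplier 10):

    L1 + L2 + L3 ≈ 0.25G + 2.5G + 25G ≈ 28G

so a ~30G block.db holds everything through L3, while anything between
~30G and the next step (~300G) gains nothing until L4 also fits. A
matching ceph-volume sketch (device and VG/LV names hypothetical; when
only --block.db is given, the WAL is colocated on the DB device):

    ceph-volume lvm create --data /dev/sdc --block.db ssd_vg/db_sdc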
On Tue, Aug 25, 2020 at 10:49 AM, Tony Liu <tonyliu0592(a)hotmail.com> wrote:
> > -----Original Message-----
> > From: Anthony D'Atri <anthony.datri(a)gmail.com>
> > Sent: Monday, August 24, 2020 7:30 PM
> > To: Tony Liu <tonyliu0592(a)hotmail.com>
> > Subject: Re: [ceph-users] Re: Add OSD with primary on HDD, WAL and DB on
> > SSD
> >
> > Why such small HDDs? Kinda not worth the drive bays and power, instead
> > of the complexity of putting WAL+DB on a shared SSD, might you have been
> > able to just buy SSDs and not split? ymmv.
>
> 2TB is for testing, it will bump up to 10TB for production.
>
> > The limit is a function of the way the DB levels work, it’s not
> > intentional.
> >
> > WAL by default takes a fixed size, like 512 MB or something.
> >
> > 64 GB is a reasonable size, it accommodates the WAL and allows space for
> > DB compaction without overflowing.
>
> For each 10TB HDD, what's the recommended DB device size for both
> DB and WAL? The doc recommends 1% - 4%, meaning 100GB - 400GB for
> each 10TB HDD. But given the WAL data size and DB data size, I am
> not sure if that 100GB - 400GB will be used efficiently.
>
> > With this commit the situation should be improved, though you don’t
> > mention what release you’re running
> >
> > https://github.com/ceph/ceph/pull/29687
>
> I am using ceph version 15.2.4 octopus (stable).
>
> Thanks!
> Tony
>
> > >>> I don't need to create
> > >>> WAL device, just primary on HDD and DB on SSD, and WAL will be using
> > >>> DB device cause it's faster. Is that correct?
> > >>
> > >> Yes.
> > >>
> > >>
> > >> But be aware that the DB sizes are limited to 3GB, 30GB and 300GB.
> > >> Anything less than those sizes will have a lot of unutilised space,
> > >> e.g. a 20GB device will only utilise 3GB.
> > >
> > > I have 1 480GB SSD and 7 2TB HDDs. 7 LVs are created on SSD, each is
> > > about 64GB, for 7 OSDs.
> > >
> > > Since it's shared by DB and WAL, DB will take 30GB and WAL will take
> > > the rest 34GB. Is that correct?
> > >
> > > Is that size of DB and WAL good for 2TB HDD (block store and object
> > > store cases)?
> > >
> > > Could you share a bit more about the intention of such limit?
> > >
> > >
> > > Thanks!
> > > Tony
Hi everyone,
Join us August 27th at 17:00 UTC to hear Pritha Srivastava present on
this month's Ceph Tech Talk: Secure Token Service in the Rados Gateway.
Calendar invite and archive can be found here:
https://ceph.io/ceph-tech-talks/
If you're interested or know someone who can present September 24th, or
October 22nd please let me know!
--
Mike Perez
He/Him
Ceph Community Manager
Red Hat Los Angeles <https://www.redhat.com>
thingee(a)redhat.com
M: 1-951-572-2633 IM: IRC Freenode/OFTC: thingee
494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
@Thingee <https://twitter.com/thingee>
Hi
we found a very ugly issue with "rados df".
I have several clusters, all running Ceph Nautilus (14.2.11), with
replicated pools of replica size 4.
On the older clusters, "rados df" shows the net used space (before
replication) in the used column. On our new cluster, "rados df" shows the
gross used space (including all replicas) in the used column.
The older clusters were upgraded from Luminous (and earlier) and use
FileStore; the new cluster was initially deployed with Nautilus and
BlueStore.
Why are the outputs different? Is this related to Nautilus or to
BlueStore? These values are significant for our reporting, and now I am
running into these discrepancies.
What commands/metrics can I use to get more reliable values? Maybe "ceph
df detail"?
Manuel
I think I remember reading somewhere that every radosgw instance is
required to run with its own client id. Is this still necessary? Or can I
run multiple instances of radosgw with the same client id?
So I could have something like
rgw: 2 daemons active (rgw1, rgw1, rgw1)
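If it helps, the conventional setup gives each instance its own name; a
minimal ceph.conf sketch (hostnames hypothetical):

    [client.rgw.host1]
        rgw frontends = beast port=7480

    [client.rgw.host2]
        rgw frontends = beast port=7480

Each daemon then registers under its own name, so the status line would
read e.g. (rgw.host1, rgw.host2) rather than the same id repeated.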