Hi everyone,
The Ceph Month June schedule is now available:
https://pad.ceph.com/p/ceph-month-june-2021
We have great sessions ranging from component updates, performance best
practices, and Ceph on different architectures to BoF sessions for getting
more involved with working groups in the community, and more! You may also
leave open discussion topics for the listed talks, which we'll get to in
each Q/A portion.
I will provide the video stream link on this thread and the etherpad once
it's available. You can also add the Ceph community calendar, which lists
the Ceph Month sessions prefixed with "Ceph Month", to get local timezone
conversions.
https://calendar.google.com/calendar/embed?src=9ts9c7lt7u1vic2ijvvqqlfpo0%4…
Thank you to our speakers for taking the time to share with us all the
latest best practices and usage with Ceph!
--
Mike Perez
On Mon, May 3, 2021 at 12:27 PM Magnus Harlander <magnus(a)harlan.de> wrote:
>
> Am 03.05.21 um 12:25 schrieb Ilya Dryomov:
>
> ceph osd setmaxosd 10
>
> Bingo! Mount works again.
>
> Veeeery strange things are going on here (-:
>
> Thanx a lot for now!! If I can help to track it down, please let me know.
Good to know it helped! I'll think about this some more and probably
plan to patch the kernel client to be less stringent and not choke on
this sort of misconfiguration.
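(For anyone who hits the same symptom: the current value can be checked before
changing it. A minimal sketch, assuming the usual admin keyring is in place:

ceph osd getmaxosd              # prints the current max_osd and map epoch
ceph osd dump | grep max_osd    # same value, as recorded in the OSD map
ceph osd setmaxosd 10           # the fix suggested above

max_osd normally just needs to be large enough to cover the highest OSD id in
the map.)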
Thanks,
Ilya
Hello.
I have a multisite RGW environment.
When I create a new bucket, the bucket is immediately created on both the
master and the secondary.
If I don't want to sync a bucket, I have to stop sync after creation.
Is there a global option along the lines of "do not sync automatically, only
start syncing when I choose to"?
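(A sketch of what I believe are the usual options, with the bucket name purely
illustrative; as far as I know there is no single "create buckets unsynced"
switch, so sync is either disabled per bucket after creation or governed by a
multisite sync policy:

radosgw-admin bucket sync disable --bucket=mybucket   # stop replicating this bucket
radosgw-admin bucket sync status --bucket=mybucket    # confirm the state
radosgw-admin bucket sync enable --bucket=mybucket    # resume later if wanted

Recent releases also support zonegroup/bucket sync policies (radosgw-admin sync
group ...) where the default can be set to "allowed", so buckets only replicate
once sync is explicitly enabled; whether that is available depends on the exact
version.)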
Hello Samuel. Thanks for the answer.
Yes, the Intel S4510 series is a good choice, but it's expensive.
I have 21 servers and the data distribution is quite good.
I don't think I'll lose data on a power loss. All the VMs use the same
image and the rest is not critical.
In this case I'm not sure I should spend extra money on PLP.
Actually I like the Samsung 870 EVO. It's cheap and I think 300 TBW will be
enough for 5-10 years.
Do you know of any better SSD in the same price range as the 870 EVO?
Samsung 870 evo (500GB) = 5 Years or 300 TBW - $64.99
Samsung 860 pro (512GB) = 5 Years or 600 TBW - $99
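(Rough arithmetic for comparison: 300 TBW over 5 years is about 164 GB of
writes per day, i.e. roughly 0.33 drive writes per day (DWPD) for a 500 GB
drive; the 860 Pro's 600 TBW works out to about 0.64 DWPD. Enterprise drives
such as the S4510 are typically rated on the order of 1 DWPD or more.)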
>
> I would recommend Intel S4510 series, which has power loss protection (PLP).
>
> If you do not care about PLP, lower-cost Samsung 870EVO and Crucial MX500 should also be OK (with separate DB/WAL on enterprise SSD with PLP)
>
> Samuel
>
> ________________________________
> huxiaoyu(a)horebdata.cn
>
>
> From: by morphin
> Date: 2021-05-30 02:48
> To: Anthony D'Atri
> CC: Ceph Users
> Subject: [ceph-users] Re: SSD recommendations for RBD and VM's
> Hello Anthony.
>
> I use QEMU and I don't need much capacity.
> I have 1000 VMs and usually they're clones of the same RBD image. The
> image is 30 GB.
> Right now I have 7 TB of stored data; with 3x replication that's about 20 TB.
> It's mostly read intensive. Usage is stable and does not grow.
> So I need I/O more than capacity. That's why I'm looking at 256-512 GB SSDs.
> I think 480-512 GB is the sweet spot for $/GB right now, so 60 pcs of 512 GB
> will be enough. Actually 120 pcs of 256 GB would be better, but the price
> goes up.
> I have Dell R720/R740 servers and I use SATA Intel DC S3700 drives for
> journals. I have 40 pcs of 100 GB; I'm going to make them OSDs as well.
> 7 years on and the DC S3700 still rocks. Not even one of them has died.
> The SSD must be low price with a high TBW lifespan. The rest is not important.
>
>
> Anthony D'Atri <anthony.datri(a)gmail.com> wrote on Sun, 30 May 2021 at 02:26:
> >
> > The choice depends on scale, your choice of chassis / form factor, budget, workload and needs.
> >
> > The sizes you list seem awfully small. Tell us more about your use-case. OpenStack? Proxmox? QEMU? VMware? Converged? Dedicated ?
> > —aad
> >
> >
> > > On May 29, 2021, at 2:10 PM, by morphin <morphinwithyou(a)gmail.com> wrote:
> > >
> > > Hello.
> > >
> > > I have a virtualization environment and I'm looking for new SSDs to replace HDDs.
> > > What are the best performance/price SSDs on the market right now?
> > > I'm looking at 1 TB, 512 GB, 480 GB, 256 GB, and 240 GB.
> > >
> > > Is there an SSD recommendation list for Ceph?
I'm trying to figure out a CRUSH rule that will spread data out across my cluster as much as possible, but not more than 2 chunks per host.
If I use the default rule with an osd failure domain like this:
step take default
step choose indep 0 type osd
step emit
I get clustering of 3-4 chunks on some of the hosts:
# for pg in $(ceph pg ls-by-pool cephfs_data_ec62 -f json | jq -r '.pg_stats[].pgid'); do
> echo $pg
> for osd in $(ceph pg map $pg -f json | jq -r '.up[]'); do
> ceph osd find $osd | jq -r '.host'
> done | sort | uniq -c | sort -n -k1
> done
8.0
1 harrahs
3 paris
4 aladdin
8.1
1 aladdin
1 excalibur
2 mandalaybay
4 paris
8.2
1 harrahs
2 aladdin
2 mirage
3 paris
...
However, if I change the rule to use:
step take default
step choose indep 0 type host
step chooseleaf indep 2 type osd
step emit
I get the data spread across 4 hosts with 2 chunks per host:
# for pg in $(ceph pg ls-by-pool cephfs_data_ec62 -f json | jq -r '.pg_stats[].pgid'); do
> echo $pg
> for osd in $(ceph pg map $pg -f json | jq -r '.up[]'); do
> ceph osd find $osd | jq -r '.host'
> done | sort | uniq -c | sort -n -k1
> done
8.0
2 aladdin
2 harrahs
2 mandalaybay
2 paris
8.1
2 aladdin
2 harrahs
2 mandalaybay
2 paris
8.2
2 harrahs
2 mandalaybay
2 mirage
2 paris
...
Is it possible to get the data to spread out over more hosts? I plan on expanding the cluster in the near future and would like to see more hosts get 1 chunk instead of 2.
Also, before you recommend adding two more hosts and switching to a host-based failure domain, the cluster is on a variety of hardware with between 2-6 drives per host and drives that are 4TB-12TB in size (it's part of my home lab).
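(For reference, a candidate rule can be tested offline before it touches the
live cluster; a minimal sketch, with the rule id and file names as placeholders:

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt        # edit the rule in crush.txt
crushtool -c crush.txt -o crush-new.bin
crushtool -i crush-new.bin --test --rule 1 --num-rep 8 --show-mappings
crushtool -i crush-new.bin --test --rule 1 --num-rep 8 --show-utilization

--show-mappings prints the OSDs each PG would map to, so the per-host chunk
counts can be verified before running 'ceph osd setcrushmap -i crush-new.bin'.)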
Thanks,
Bryan
Hi all, I have observed that the MDS Cache Configuration has 18 parameters:
mds_cache_memory_limit
mds_cache_reservation
mds_health_cache_threshold
mds_cache_trim_threshold
mds_cache_trim_decay_rate
mds_recall_max_caps
mds_recall_max_decay_threshold
mds_recall_max_decay_rate
mds_recall_global_max_decay_threshold
mds_recall_warning_threshold
mds_recall_warning_decay_rate
mds_session_cap_acquisition_throttle
mds_session_cap_acquisition_decay_rate
mds_session_max_caps_throttle_ratio
mds_cap_acquisition_throttle_retry_request_timeout
mds_session_cache_liveness_magnitude
mds_session_cache_liveness_decay_rate
mds_max_caps_per_client
I find the Ceph documentation in this section a bit cryptic and I have
tried to find some resources that talk about how to tune these
parameters, but without success.
Does anyone have experience adjusting these parameters according to the
characteristics of the Ceph cluster itself, the hardware, and the MDS
workload?
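(Not an answer to the tuning question itself, but for anyone starting to
experiment: all of these can be inspected and changed at runtime through the
config subsystem, and mds_cache_memory_limit is the one most commonly adjusted.
A minimal sketch, with the values purely illustrative:

ceph config help mds_cache_memory_limit                  # description and default
ceph config get mds mds_cache_memory_limit               # current value
ceph config set mds mds_cache_memory_limit 8589934592    # e.g. 8 GiB for all MDS daemons
ceph daemon mds.<name> cache status                      # run on the MDS host; shows actual cache usage

Most of the recall/decay options only become relevant when clients hold very
large numbers of caps, so the cache status output is usually the first thing to
look at.)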
Regards!
--
*******************************************************
Andrés Rojas Guerrero
Unidad Sistemas Linux
Area Arquitectura Tecnológica
Secretaría General Adjunta de Informática
Consejo Superior de Investigaciones Científicas (CSIC)
Pinar 19
28006 - Madrid
Tel: +34 915680059 -- Ext. 990059
email: a.rojas(a)csic.es
ID comunicate.csic.es: @50852720l:matrix.csic.es
*******************************************************
Peter,
We're seeing the same issues as you are. We have 2 new hosts with Intel(R)
Xeon(R) Gold 6248R CPUs @ 3.00GHz (48 cores), 384 GB RAM, and 60x 10 TB SED
drives, and we have tried both 15.2.13 and 16.2.4.
Cephadm does NOT properly deploy and activate OSDs on Ubuntu 20.04.2 with
Docker.
This seems to be a bug in cephadm and a product regression, as we have 4
nearly identical nodes on CentOS running Nautilus (240x 10 TB SED drives)
and had no problems.
FWIW we had no luck yet with one-by-one OSD daemon additions through ceph
orch either. We also reproduced the issue easily in a virtual lab using
small virtual disks on a single ceph VM with 1 mon.
We are now looking into whether we can get past this with a manual buildout.
If you, or anyone, has hit the same stumbling block and gotten past it, I
would really appreciate some guidance.
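(For anyone debugging the same thing, a general sketch of where cephadm's view
of events can be found; daemon names are placeholders:

ceph log last 100 info cephadm      # recent cephadm module activity
ceph orch device ls --wide          # how cephadm classifies each disk
ceph orch ls osd                    # state of the OSD service specs
ceph orch ps                        # whether any OSD daemons were created at all
cephadm logs --name osd.<id>        # per-daemon container logs on the host
)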
Thanks,
Marco
On Thu, May 27, 2021 at 2:23 PM Peter Childs <pchilds(a)bcs.org> wrote:
> In the end it looks like I might be able to get the node up to about 30
> OSDs before it stops creating any more.
> Or rather, it formats the disks but freezes up when starting the daemons.
> Or more it formats the disks but freezes up starting the daemons.
>
> I suspect I'm missing something I can tune to get it working better.
>
> If I could see any error messages that might help, but I'm yet to spot
> anything.
>
> Peter.
>
> On Wed, 26 May 2021, 10:57 Eugen Block, <eblock(a)nde.ag> wrote:
>
> > > If I add the osd daemons one at a time with
> > >
> > > ceph orch daemon add osd drywood12:/dev/sda
> > >
> > > It does actually work,
> >
> > Great!
> >
> > > I suspect what's happening is that when my rule for creating OSDs runs
> > > and creates them all at once, it ties up the orchestrator and overloads
> > > cephadm, and it can't cope.
> >
> > It's possible, I guess.
> >
> > > I suspect what I might need to do, at least to work around the issue, is
> > > set "limit:" and raise it until it stops working.
> >
> > It's worth a try, yes, although the docs state you should try to avoid
> > it. It's possible that it doesn't work properly; in that case create a
> > bug report. ;-)
> >
> > > I did work out how to get ceph-volume to nearly work manually.
> > >
> > > cephadm shell
> > > ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
> > > ceph-volume lvm create --data /dev/sda --dmcrypt
> > >
> > > but given I've now got "add osd" to work, I suspect I just need to
> > > fine-tune my OSD creation rules so they do not try to create too many
> > > OSDs on the same node at the same time.
> >
> > I agree, no need to do it manually if there is an automated way,
> > especially if you're trying to bring up dozens of OSDs.
> >
> >
> > Zitat von Peter Childs <pchilds(a)bcs.org>:
> >
> > > After a bit of messing around, I managed to get it somewhat working.
> > >
> > > If I add the osd daemons one at a time with
> > >
> > > ceph orch daemon add osd drywood12:/dev/sda
> > >
> > > It does actually work,
> > >
> > > I suspect what's happening is that when my rule for creating OSDs runs
> > > and creates them all at once, it ties up the orchestrator and overloads
> > > cephadm, and it can't cope.
> > >
> > > service_type: osd
> > > service_name: osd.drywood-disks
> > > placement:
> > >   host_pattern: 'drywood*'
> > > spec:
> > >   data_devices:
> > >     size: "7TB:"
> > >   objectstore: bluestore
> > >
> > > I suspect what I might need to do, at least to work around the issue, is
> > > set "limit:" and raise it until it stops working.
> > >
> > > I did work out how to get ceph-volume to nearly work manually.
> > >
> > > cephadm shell
> > > ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
> > > ceph-volume lvm create --data /dev/sda --dmcrypt
> > >
> > > but given I've now got "add osd" to work, I suspect I just need to
> > > fine-tune my OSD creation rules so they do not try to create too many
> > > OSDs on the same node at the same time.
> > >
> > >
> > >
> > > On Wed, 26 May 2021 at 08:25, Eugen Block <eblock(a)nde.ag> wrote:
> > >
> > >> Hi,
> > >>
> > >> I believe your current issue is due to a missing keyring for
> > >> client.bootstrap-osd on the OSD node. But even after fixing that
> > >> you probably still won't be able to deploy an OSD manually with
> > >> ceph-volume, because 'ceph-volume activate' is not supported with
> > >> cephadm [1]. I just tried that in a virtual environment; it fails when
> > >> activating the systemd unit:
> > >>
> > >> ---snip---
> > >> [2021-05-26 06:47:16,677][ceph_volume.process][INFO ] Running command: /usr/bin/systemctl enable ceph-volume@lvm-8-1a8fc8ae-8f4c-4f91-b044-d5636bb52456
> > >> [2021-05-26 06:47:16,692][ceph_volume.process][INFO ] stderr Failed to connect to bus: No such file or directory
> > >> [2021-05-26 06:47:16,693][ceph_volume.devices.lvm.create][ERROR ] lvm activate was unable to complete, while creating the OSD
> > >> Traceback (most recent call last):
> > >>   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 32, in create
> > >>     Activate([]).activate(args)
> > >>   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
> > >>     return func(*a, **kw)
> > >>   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 294, in activate
> > >>     activate_bluestore(lvs, args.no_systemd)
> > >>   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 214, in activate_bluestore
> > >>     systemctl.enable_volume(osd_id, osd_fsid, 'lvm')
> > >>   File "/usr/lib/python3.6/site-packages/ceph_volume/systemd/systemctl.py", line 82, in enable_volume
> > >>     return enable(volume_unit % (device_type, id_, fsid))
> > >>   File "/usr/lib/python3.6/site-packages/ceph_volume/systemd/systemctl.py", line 22, in enable
> > >>     process.run(['systemctl', 'enable', unit])
> > >>   File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 153, in run
> > >>     raise RuntimeError(msg)
> > >> RuntimeError: command returned non-zero exit status: 1
> > >> [2021-05-26 06:47:16,694][ceph_volume.devices.lvm.create][INFO ] will rollback OSD ID creation
> > >> [2021-05-26 06:47:16,697][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.8 --yes-i-really-mean-it
> > >> [2021-05-26 06:47:17,597][ceph_volume.process][INFO ] stderr purged osd.8
> > >> ---snip---
> > >>
> > >> There's a workaround described in [2], but that's not really an option
> > >> for dozens of OSDs. I think your best approach is to get cephadm to
> > >> activate the OSDs for you.
> > >> You wrote you didn't find any helpful error messages, but did cephadm
> > >> even try to deploy OSDs? What does your osd spec file look like? Did
> > >> you explicitly run 'ceph orch apply osd -i specfile.yml'? This should
> > >> trigger cephadm and you should see at least some output like this:
> > >>
> > >> Mai 26 08:21:48 pacific1 conmon[31446]: 2021-05-26T06:21:48.466+0000 7effc15ff700 0 log_channel(cephadm) log [INF] : Applying service osd.ssd-hdd-mix on host pacific2...
> > >> Mai 26 08:21:49 pacific1 conmon[31009]: cephadm 2021-05-26T06:21:48.469611+0000 mgr.pacific1.whndiw (mgr.14166) 1646 : cephadm [INF] Applying service osd.ssd-hdd-mix on host pacific2...
> > >>
> > >> Regards,
> > >> Eugen
> > >>
> > >> [1] https://tracker.ceph.com/issues/49159
> > >> [2] https://tracker.ceph.com/issues/46691
> > >>
> > >>
> > >> Zitat von Peter Childs <pchilds(a)bcs.org>:
> > >>
> > >> > Not sure what I'm doing wrong; I suspect it's the way I'm running
> > >> > ceph-volume.
> > >> >
> > >> > root@drywood12:~# cephadm ceph-volume lvm create --data /dev/sda --dmcrypt
> > >> > Inferring fsid 1518c8e0-bbe4-11eb-9772-001e67dc85ea
> > >> > Using recent ceph image ceph/ceph@sha256:54e95ae1e11404157d7b329d0bef866ebbb214b195a009e87aae4eba9d282949
> > >> > /usr/bin/docker: Running command: /usr/bin/ceph-authtool --gen-print-key
> > >> > /usr/bin/docker: Running command: /usr/bin/ceph-authtool --gen-print-key
> > >> > /usr/bin/docker: --> RuntimeError: No valid ceph configuration file was loaded.
> > >> > Traceback (most recent call last):
> > >> >   File "/usr/sbin/cephadm", line 8029, in <module>
> > >> >     main()
> > >> >   File "/usr/sbin/cephadm", line 8017, in main
> > >> >     r = ctx.func(ctx)
> > >> >   File "/usr/sbin/cephadm", line 1678, in _infer_fsid
> > >> >     return func(ctx)
> > >> >   File "/usr/sbin/cephadm", line 1738, in _infer_image
> > >> >     return func(ctx)
> > >> >   File "/usr/sbin/cephadm", line 4514, in command_ceph_volume
> > >> >     out, err, code = call_throws(ctx, c.run_cmd(), verbosity=verbosity)
> > >> >   File "/usr/sbin/cephadm", line 1464, in call_throws
> > >> >     raise RuntimeError('Failed command: %s' % ' '.join(command))
> > >> > RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=ceph/ceph@sha256:54e95ae1e11404157d7b329d0t
> > >> >
> > >> > root@drywood12:~# cephadm shell
> > >> > Inferring fsid 1518c8e0-bbe4-11eb-9772-001e67dc85ea
> > >> > Inferring config /var/lib/ceph/1518c8e0-bbe4-11eb-9772-001e67dc85ea/mon.drywood12/config
> > >> > Using recent ceph image ceph/ceph@sha256:54e95ae1e11404157d7b329d0bef866ebbb214b195a009e87aae4eba9d282949
> > >> > root@drywood12:/# ceph-volume lvm create --data /dev/sda --dmcrypt
> > >> > Running command: /usr/bin/ceph-authtool --gen-print-key
> > >> > Running command: /usr/bin/ceph-authtool --gen-print-key
> > >> > Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 70054a5c-c176-463a-a0ac-b44c5db0987c
> > >> > stderr: 2021-05-25T07:46:18.188+0000 7fdef8f0d700 -1 auth: unable to find a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
> > >> > stderr: 2021-05-25T07:46:18.188+0000 7fdef8f0d700 -1 AuthRegistry(0x7fdef405b378) no keyring found at /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
> > >> > stderr: 2021-05-25T07:46:18.188+0000 7fdef8f0d700 -1 auth: unable to find a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
> > >> > stderr: 2021-05-25T07:46:18.188+0000 7fdef8f0d700 -1 AuthRegistry(0x7fdef405ef20) no keyring found at /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
> > >> > stderr: 2021-05-25T07:46:18.188+0000 7fdef8f0d700 -1 auth: unable to find a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
> > >> > stderr: 2021-05-25T07:46:18.188+0000 7fdef8f0d700 -1 AuthRegistry(0x7fdef8f0bea0) no keyring found at /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
> > >> > stderr: 2021-05-25T07:46:18.188+0000 7fdef2d9d700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
> > >> > stderr: 2021-05-25T07:46:18.188+0000 7fdef259c700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
> > >> > stderr: 2021-05-25T07:46:18.188+0000 7fdef1d9b700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
> > >> > stderr: 2021-05-25T07:46:18.188+0000 7fdef8f0d700 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
> > >> > stderr: [errno 13] RADOS permission denied (error connecting to the cluster)
> > >> > --> RuntimeError: Unable to create a new OSD id
> > >> > root@drywood12:/# lsblk /dev/sda
> > >> > NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> > >> > sda    8:0    0  7.3T  0 disk
> > >> >
> > >> > As far as I can see cephadm gets a little further than this, as the
> > >> > disks have LVM volumes on them; it's just that the OSD daemons are not
> > >> > created or started.
> > >> > So maybe I'm invoking ceph-volume incorrectly.
> > >> >
> > >> >
> > >> > On Tue, 25 May 2021 at 06:57, Peter Childs <pchilds(a)bcs.org> wrote:
> > >> >
> > >> >>
> > >> >>
> > >> >> On Mon, 24 May 2021, 21:08 Marc, <Marc(a)f1-outsourcing.eu> wrote:
> > >> >>
> > >> >>> >
> > >> >>> > I'm attempting to use cephadm and Pacific, currently on debian
> > >> >>> > buster, mostly because centos7 ain't supported any more and centos8
> > >> >>> > ain't supported by some of my hardware.
> > >> >>>
> > >> >>> Who says centos7 is not supported any more? Afaik centos7/el7 is
> > >> >>> being supported until its EOL in 2024. By then maybe a good
> > >> >>> alternative for el8/stream will have surfaced.
> > >> >>>
> > >> >>
> > >> >> Not supported by Ceph Pacific; it's our OS of choice otherwise.
> > >> >>
> > >> >> My testing says the versions of podman, docker and python3 that are
> > >> >> available do not work with Pacific.
> > >> >>
> > >> >> Given I've needed to upgrade docker on buster, can we please have a
> > >> >> list of versions that work with cephadm, and maybe even have cephadm
> > >> >> say "no, please upgrade" unless you're running the right version or
> > >> >> better.
> > >> >>
> > >> >>
> > >> >>
> > >> >>> > Anyway I have a few nodes with 59x 7.2TB disks, but for some reason
> > >> >>> > the osd daemons don't start; the disks get formatted and the osds
> > >> >>> > are created, but the daemons never come up.
> > >> >>>
> > >> >>> what if you try with
> > >> >>> ceph-volume lvm create --data /dev/sdi --dmcrypt ?
> > >> >>>
> > >> >>
> > >> >> I'll have a go.
> > >> >>
> > >> >>
> > >> >>> > They are probably the wrong spec for ceph (48GB of memory and
> > >> >>> > only 4 cores)
> > >> >>>
> > >> >>> You can always start with just configuring a few disks per node.
> > >> >>> That should always work.
> > >> >>>
> > >> >>
> > >> >> That was my thought too.
> > >> >>
> > >> >> Thanks
> > >> >>
> > >> >> Peter
> > >> >>
> > >> >>
> > >> >>> > but I was expecting them to start and be either dirt slow or crash
> > >> >>> > later; anyway I've got up to 30 of them, so I was hoping to get at
> > >> >>> > least 6PB of raw storage out of them.
> > >> >>> >
> > >> >>> > As yet I've not spotted any helpful error messages.
> > >> >>> >
Hi,
Is there any way to clean up large omap objects in the index pool?
A PG deep scrub didn't help.
I know how to clean them up in the log pool, but I have no idea how to do it
in the index pool :/
It's an octopus deployment 15.2.10.
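(In case it helps: large omap warnings on the RGW index pool usually mean one
or more bucket indexes have outgrown their shard count, so the cleanup is
resharding rather than deleting anything. A minimal sketch, with the bucket
name and shard count purely illustrative; on 15.2.10 dynamic resharding
normally handles this on its own unless it is disabled or the zone is part of
a multisite setup:

radosgw-admin bucket limit check                  # find buckets over the per-shard threshold
radosgw-admin reshard add --bucket=mybucket --num-shards=101
radosgw-admin reshard process
radosgw-admin reshard stale-instances list        # leftover index objects from old reshards
radosgw-admin reshard stale-instances rm

The large-omap warning itself clears once the affected PGs are deep-scrubbed
again after the reshard.)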
Thank you