Hi Sage,
Just read the news about the cancellation of Cephalocon 2020, although the
site still shows the status quo. Double-checking that we can proceed with
the cancellation of logistics for South Korea.
Thanks
Romit
On Tue, Feb 4, 2020 at 11:02 PM <ceph-users-request(a)ceph.io> wrote:
> Send ceph-users mailing list submissions to
> ceph-users(a)ceph.io
>
> To subscribe or unsubscribe via email, send a message with subject or
> body 'help' to
> ceph-users-request(a)ceph.io
>
> You can reach the person managing the list at
> ceph-users-owner(a)ceph.io
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of ceph-users digest..."
>
> Today's Topics:
>
> 1. Re: More OMAP Issues (Paul Emmerich)
> 2. Re: More OMAP Issues (DHilsbos(a)performair.com)
> 3. Re: Bluestore cache parameter precedence (Igor Fedotov)
> 4. Re: Understanding Bluestore performance characteristics
> (vitalif(a)yourcmc.ru)
> 5. Cephalocon Seoul is canceled (Sage Weil)
> 6. Re: Bluestore cache parameter precedence (Boris Epstein)
> 7. Bucket rename with (EDH - Manuel Rios)
>
>
> ----------------------------------------------------------------------
>
> Date: Tue, 4 Feb 2020 17:51:40 +0100
> From: Paul Emmerich <paul.emmerich(a)croit.io>
> Subject: [ceph-users] Re: More OMAP Issues
> To: DHilsbos(a)performair.com
> Cc: ceph-users <ceph-users(a)ceph.io>
> Message-ID:
> <
> CAD9yTbEp1BrAagzWzkaAQ-aCq-4ghyEwVJSDdyJzL-So52whBA(a)mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Are you running a multi-site setup?
> In this case it's best to set the default shard count to a large enough
> number *before* enabling multi-site.
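>
> (For illustration: that default is controlled by a config option along
> these lines; the value 101 is just an example, not a recommendation.)
>
> [client.rgw]
> rgw_override_bucket_index_max_shards = 101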
>
> If you didn't do this: well... I think the only way is still to
> completely re-sync the second site...
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Tue, Feb 4, 2020 at 5:23 PM <DHilsbos(a)performair.com> wrote:
> >
> > All;
> >
> > We're back to having large OMAP object warnings regarding our RGW
> > index pool.
> >
> > This cluster is now in production, so I can't simply dump the buckets /
> > pools and hope everything works out.
> >
> > I did some additional research on this issue, and it looks like I need
> > to (re)shard the bucket (index?). I found information that suggests that,
> > for older versions of Ceph, buckets couldn't be sharded after creation[1].
> > Other information suggests that Nautilus (which we are running) can
> > re-shard dynamically, but not when multi-site replication is configured[2].
> >
> > This suggests that a "manual" resharding of a Nautilus cluster should be
> > possible, but I can't find the commands to do it. Has anyone done this?
> > Does anyone have the commands to do it? I can schedule downtime for the
> > cluster and take the RADOSGW instance(s) and dependent user services
> > offline.
> >
> > [1]: https://ceph.io/geen-categorie/radosgw-big-index/
> > [2]: https://docs.ceph.com/docs/master/radosgw/dynamicresharding/
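> >
> > (For reference, the basic manual command appears to be the following;
> > the multi-site caveats in [2] presumably still apply. A sketch,
> > untested; the bucket name and shard count are placeholders:)
> >
> > radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<N>
> > radosgw-admin reshard status --bucket=<bucket>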
> >
> > Thank you,
> >
> > Dominic L. Hilsbos, MBA
> > Director - Information Technology
> > Perform Air International Inc.
> > DHilsbos(a)PerformAir.com
> > www.PerformAir.com
> >
> >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users(a)ceph.io
> > To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
> ------------------------------
>
> Date: Tue, 4 Feb 2020 17:04:24 +0000
> From: <DHilsbos(a)performair.com>
> Subject: [ceph-users] Re: More OMAP Issues
> To: <ceph-users(a)ceph.io>
> Cc: <paul.emmerich(a)croit.io>
> Message-ID:
> <0670B960225633449A24709C291A525243605D57(a)COM01.performair.local>
> Content-Type: text/plain; charset="utf-8"
>
> Paul;
>
> Yes, we are running a multi-site setup.
>
> Re-sync would be acceptable at this point, as we only have 4 TiB in use
> right now.
>
> Tearing down and reconfiguring the second site would also be acceptable,
> except that I've never been able to cleanly remove a zone from a zone
> group. The only way I've found to remove a zone completely is to tear down
> the entire RADOSGW configuration (delete .rgw.root pool from both clusters).
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director – Information Technology
> Perform Air International Inc.
> DHilsbos(a)PerformAir.com
> www.PerformAir.com
>
>
>
> -----Original Message-----
> From: Paul Emmerich [mailto:paul.emmerich@croit.io]
> Sent: Tuesday, February 04, 2020 9:52 AM
> To: Dominic Hilsbos
> Cc: ceph-users
> Subject: Re: [ceph-users] More OMAP Issues
>
> Are you running a multi-site setup?
> In this case it's best to set the default shard count to a large enough
> number *before* enabling multi-site.
>
> If you didn't do this: well... I think the only way is still to
> completely re-sync the second site...
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Tue, Feb 4, 2020 at 5:23 PM <DHilsbos(a)performair.com> wrote:
> >
> > All;
> >
> > We're back to having large OMAP object warnings regarding our RGW
> > index pool.
> >
> > This cluster is now in production, so I can't simply dump the buckets /
> > pools and hope everything works out.
> >
> > I did some additional research on this issue, and it looks like I need
> > to (re)shard the bucket (index?). I found information that suggests that,
> > for older versions of Ceph, buckets couldn't be sharded after creation[1].
> > Other information suggests that Nautilus (which we are running) can
> > re-shard dynamically, but not when multi-site replication is configured[2].
> >
> > This suggests that a "manual" resharding of a Nautilus cluster should be
> > possible, but I can't find the commands to do it. Has anyone done this?
> > Does anyone have the commands to do it? I can schedule downtime for the
> > cluster and take the RADOSGW instance(s) and dependent user services
> > offline.
> >
> > [1]: https://ceph.io/geen-categorie/radosgw-big-index/
> > [2]: https://docs.ceph.com/docs/master/radosgw/dynamicresharding/
> >
> > Thank you,
> >
> > Dominic L. Hilsbos, MBA
> > Director - Information Technology
> > Perform Air International Inc.
> > DHilsbos(a)PerformAir.com
> > www.PerformAir.com
> >
> >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users(a)ceph.io
> > To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
> ------------------------------
>
> Date: Tue, 4 Feb 2020 20:10:20 +0300
> From: Igor Fedotov <ifedotov(a)suse.de>
> Subject: [ceph-users] Re: Bluestore cache parameter precedence
> To: Boris Epstein <borepstein(a)gmail.com>, ceph-users(a)ceph.io
> Message-ID: <0cb36a39-7dba-01b5-5383-dc1116f459a4(a)suse.de>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Hi Boris,
>
> General settings (unless they are set to zero) override the disk-specific
> settings.
>
> I.e. bluestore_cache_size overrides both bluestore_cache_size_hdd and
> bluestore_cache_size_ssd.
>
> Here is the code snippet, in case you know C++:
>
>   if (cct->_conf->bluestore_cache_size) {
>     cache_size = cct->_conf->bluestore_cache_size;
>   } else {
>     // choose global cache size based on backend type
>     if (_use_rotational_settings()) {
>       cache_size = cct->_conf->bluestore_cache_size_hdd;
>     } else {
>       cache_size = cct->_conf->bluestore_cache_size_ssd;
>     }
>   }
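>
> So, for example, a hypothetical ceph.conf fragment like this would make
> the hdd/ssd values irrelevant (the values are illustrative only):
>
> [osd]
> bluestore_cache_size = 2147483648      # non-zero: wins on every OSD
> bluestore_cache_size_hdd = 1073741824  # ignored while the above is set
> bluestore_cache_size_ssd = 3221225472  # ignored while the above is set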
>
> Thanks,
>
> Igor
>
> On 2/4/2020 2:14 PM, Boris Epstein wrote:
> > Hello list,
> >
> > As stated in this document:
> >
> >
> https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
> >
> > there are multiple parameters defining cache limits for BlueStore. You have
> > bluestore_cache_size (presumably controlling the cache size),
> > bluestore_cache_size_hdd (presumably doing the same for HDD storage only)
> > and bluestore_cache_size_ssd (presumably being the equivalent for SSD). My
> > question is: does bluestore_cache_size override the disk-specific
> > parameters, or do I need to set the disk-specific (or, rather,
> > storage-type-specific) ones separately if I want to keep them at a
> > certain value?
> >
> > Thanks in advance.
> >
> > Boris.
> > _______________________________________________
> > ceph-users mailing list -- ceph-users(a)ceph.io
> > To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
> ------------------------------
>
> Date: Tue, 04 Feb 2020 20:22:30 +0300
> From: vitalif(a)yourcmc.ru
> Subject: [ceph-users] Re: Understanding Bluestore performance
> characteristics
> To: Bradley Kite <bradley.kite(a)gmail.com>
> Cc: ceph-users(a)ceph.io
> Message-ID: <c381a59989a4f4f6760d061e745a281a(a)yourcmc.ru>
> Content-Type: text/plain; charset=US-ASCII; format=flowed
>
> The SSD (block.db) partition contains object metadata in RocksDB, so it
> probably loads the metadata before modifying objects (if it's not in
> cache yet). It also sometimes performs compaction, which likewise results
> in disk reads and writes. There are other things going on that I'm not
> completely aware of. There's the RBD object map... Maybe there are some
> locks that come into action when you parallelize writes...
>
> There's a config option to enable RocksDB performance counters. You can
> have a look into it.
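>
> (If it helps, a sketch of how that might look; osd.0 is a placeholder,
> and the option may require an OSD restart to take effect:)
>
> ceph config set osd rocksdb_perf true
> ceph daemon osd.0 perf dump   # then inspect the rocksdb section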
>
> However, if you're just trying to understand why RBD isn't super fast,
> then I don't think these reads are the cause...
>
> > Hi Vitaliy
> >
> > Yes - I tried this and I can still see a number of reads (~110 iops,
> > 440KB/sec) on the SSD, so it is significantly better, but the result
> > is still puzzling - I'm trying to understand what is causing the
> > reads. The problem is amplified with numjobs >= 2 but it looks like it
> > is still there with just 1.
> >
> > Like some caching parameter is not correct, and the same blocks are
> > being read over and over when doing a write?
> >
> > Could anyone advise on the best way for me to investigate further?
> >
> > I've tried strace (with -k) and 'perf record' but neither produce any
> > useful stack traces to help understand what's going on.
> >
> > Regards
> > --
> > Brad
>
> ------------------------------
>
> Date: Tue, 4 Feb 2020 17:24:37 +0000 (UTC)
> From: Sage Weil <sage(a)newdream.net>
> Subject: [ceph-users] Cephalocon Seoul is canceled
> To: ceph-announce(a)ceph.io, ceph-users(a)ceph.io, dev(a)ceph.io,
> ceph-devel(a)vger.kernel.org
> Message-ID: <alpine.DEB.2.21.2002041649050.21136(a)piezo.novalocal>
> Content-Type: text/plain; charset=US-ASCII
>
> Hi everyone,
>
> We are sorry to announce that, due to the recent coronavirus outbreak, we
> are canceling Cephalocon for March 3-5 in Seoul.
>
> More details will follow about how to best handle cancellation of hotel
> reservations and so forth. Registrations will of course be
> refunded--expect an email with details in the next day or two.
>
> We are still looking into whether it makes sense to reschedule the event
> for later in the year.
>
> Thank you to everyone who has helped to plan this event, submitted talks,
> and agreed to sponsor. It makes us sad to cancel, but the safety of
> our community is of the utmost importance, and it was looking increasingly
> unlikely that we could make this event a success.
>
> Stay tuned...
>
> ------------------------------
>
> Date: Tue, 4 Feb 2020 12:29:13 -0500
> From: Boris Epstein <borepstein(a)gmail.com>
> Subject: [ceph-users] Re: Bluestore cache parameter precedence
> To: Igor Fedotov <ifedotov(a)suse.de>
> Cc: ceph-users(a)ceph.io
> Message-ID:
> <CADeF1XHrPzTq1+8S_WG=ZH=SVNAbdLaY=
> FR7UaLGHn3O_yWLnw(a)mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi Igor,
>
> Thanks!
>
> I think the code needs to be corrected: the choice criterion for which
> setting to use when
>
> cct->_conf->bluestore_cache_size == 0
>
> should be as follows:
>
> 1) See what kind of storage you have.
>
> 2) Select the type-appropriate setting.
>
> Is this code publicly editable? I'll be happy to correct that.
>
> Regards,
>
> Boris.
>
> On Tue, Feb 4, 2020 at 12:10 PM Igor Fedotov <ifedotov(a)suse.de> wrote:
>
> > Hi Boris,
> >
> > General settings (unless they are set to zero) override the disk-specific
> > settings.
> >
> > I.e. bluestore_cache_size overrides both bluestore_cache_size_hdd and
> > bluestore_cache_size_ssd.
> >
> > Here is the code snippet, in case you know C++:
> >
> >   if (cct->_conf->bluestore_cache_size) {
> >     cache_size = cct->_conf->bluestore_cache_size;
> >   } else {
> >     // choose global cache size based on backend type
> >     if (_use_rotational_settings()) {
> >       cache_size = cct->_conf->bluestore_cache_size_hdd;
> >     } else {
> >       cache_size = cct->_conf->bluestore_cache_size_ssd;
> >     }
> >   }
> >
> > Thanks,
> >
> > Igor
> >
> > On 2/4/2020 2:14 PM, Boris Epstein wrote:
> > > Hello list,
> > >
> > > As stated in this document:
> > >
> > >
> >
> https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
> > >
> > > there are multiple parameters defining cache limits for BlueStore. You have
> > > bluestore_cache_size (presumably controlling the cache size),
> > > bluestore_cache_size_hdd (presumably doing the same for HDD storage only)
> > > and bluestore_cache_size_ssd (presumably being the equivalent for SSD). My
> > > question is: does bluestore_cache_size override the disk-specific
> > > parameters, or do I need to set the disk-specific (or, rather,
> > > storage-type-specific) ones separately if I want to keep them at a
> > > certain value?
> > >
> > > Thanks in advance.
> > >
> > > Boris.
> > > _______________________________________________
> > > ceph-users mailing list -- ceph-users(a)ceph.io
> > > To unsubscribe send an email to ceph-users-leave(a)ceph.io
> >
>
> ------------------------------
>
> Date: Tue, 4 Feb 2020 17:29:55 +0000
> From: EDH - Manuel Rios <mriosfer(a)easydatahost.com>
> Subject: [ceph-users] Bucket rename with
> To: "ceph-users(a)ceph.io" <ceph-users(a)ceph.io>
> Message-ID: <HE1P195MB02521946493264331A2CE2A3B0030(a)HE1P195MB0252.EUR
> P195.PROD.OUTLOOK.COM>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi
>
> A customer asked us about what seems like a simple problem: they want to
> rename a bucket.
>
> Checking the Nautilus documentation, it looks like this is not possible
> yet, but I checked the master documentation and a CLI command can
> apparently accomplish this:
>
> $ radosgw-admin bucket link --bucket=foo --bucket-new-name=bar --uid=johnny
>
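> (If the command is available on your build, a quick verification might
> look like this, using the names above:)
>
> $ radosgw-admin bucket stats --bucket=bar
> $ radosgw-admin bucket list --uid=johnny
>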
> Will this be backported to Nautilus? Or is it still just for
> developer/master users?
>
> https://docs.ceph.com/docs/master/man/8/radosgw-admin/
>
> Regards
> Manuel
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
>
> ------------------------------
>
> End of ceph-users Digest, Vol 85, Issue 17
> ******************************************
>
Hello,
I'm a beginner with Ceph. I set up some Ceph clusters in Google Cloud.
Cluster1 has three nodes and each node has three disks. Cluster2 has three
nodes and each node has two disks. Cluster3 has five nodes and each node
has five disks. Disk speed shown by `dd if=/dev/zero of=here bs=1G count=1
oflag=direct` is 117MB/s. The network is 10Gbps.
However, I found something strange:
1. The write performance of all clusters drops dramatically after a few
minutes. I created a pool named "scbench" with replicated size 1 (I know it
is not safe but I want the highest write speed). The write performance
(shown by rados bench -p scbench 1000 write) before and after the drop are:
cluster1: 297MB/s 94.5MB/s
cluster2: 304MB/s 67.4MB/s
cluster3: 494MB/s 267.6MB/s
It looks like the performance before the drop is nodes_num * 100MB/s, and
the performance after the drop is about osds_num * 10MB/s. I have no idea
why there is such a drop and why the performances before the drop are
linear with nodes_num.
2. The write performance of object storage (shown by swift-bench -c 64 -s
4096000 -n 100000 -g 0 swift.conf) is much lower than that of the storage
cluster (shown by rados bench -p scbench 1000 write). I have set the
replicated size of "default.rgw.buckets.data" and
"default.rgw.buckets.index" to 1.
The speed of cluster1 object storage is 117MB/s (before the drop) and
26MB/s (after the drop), and the speed of cluster3 object storage is
118MB/s (the drop does not happen).
Is it normal that the object storage write performance is worse than the
rados write performance? If not, how can I solve the problem?
Thanks!
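For reference, the pool setup described in point 1 above corresponds
roughly to the following (the PG count is an assumption, not from the
original message):

ceph osd pool create scbench 64 64
ceph osd pool set scbench size 1
rados bench -p scbench 1000 write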
Hi,
I finally got my Samsung PM983 [1] to use as a journal for about 6 drives,
plus drive cache, replacing a consumer SSD (Kingston SV300).
But I can't for the life of me figure out how to move an existing journal
to this NVMe on my Nautilus cluster.
# Created a new big partition on the NVME
sgdisk --new=1:2048:+33GiB --change-name="1:ceph block.db" --typecode="1:30cd0809-c2b2-499c-8879-2d6b78529876" --mbrtogpt /dev/nvme0n1
partprobe
sgdisk -p /dev/nvme0n1
# The below assumes there is already a partition+ fs on the nvme?
ceph-bluestore-tool bluefs-bdev-migrate –dev-target /dev/nvme0n1p1 -devs-source /var/lib/ceph/osd/ceph-1/block.db
- too many positional options have been specified on the command line
ceph-bluestore-tool bluefs-bdev-migrate -–path /var/lib/ceph/osd/ceph-1/block.db –-dev-target /dev/nvme0n1p1
- too many positional options have been specified on the command line
# Or should I create a new block device? if yes, will WAL come along ? And how do I remove the SSD journal partition (the old)
ceph-bluestore-tool bluefs-bdev-new-db -–path /var/lib/ceph/osd/ceph-1/block.db –-dev-target /dev/nvme0n1p1
The documentation is not very clear on what migration does, nor does it
seem to share my notion of a DEVICE (/dev/sda is a device for me).
Thanks in advance,
Alex
----
[1] - Performance stats: https://docs.google.com/spreadsheets/d/1LXupjEUnNdf011QNr24pkAiDBphzpz5_MwM…
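The "too many positional options" errors above are consistent with en
dashes having crept into the commands in place of plain double hyphens;
ceph-bluestore-tool also expects --path to point at the OSD data
directory rather than at the block.db symlink. A sketch of what the
invocations might look like with ASCII dashes (untested; stop the OSD
first and verify the options against your release's man page):

systemctl stop ceph-osd@1

# Attach a brand-new block.db device to an OSD that has none yet:
ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-1 \
    --dev-target /dev/nvme0n1p1

# Or migrate existing BlueFS data (e.g. the old block.db) to the new device:
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-1 \
    --devs-source /var/lib/ceph/osd/ceph-1/block.db \
    --dev-target /dev/nvme0n1p1

systemctl start ceph-osd@1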
Hi,
I have a rather small cephfs cluster with 3 machines right now, all of
them sharing MDS, MON, MGR and OSD roles.
I had to move all machines to a new physical location and,
unfortunately, I had to move all of them at the same time.
They are already up again, but Ceph isn't accessible: all PGs are stuck
peering, and OSDs keep going down and coming back up.
Here is some info about my cluster:
-------------------------------------------
# ceph -s
  cluster:
    id:     e348b63c-d239-4a15-a2ce-32f29a00431c
    health: HEALTH_WARN
            1 filesystem is degraded
            1 MDSs report slow metadata IOs
            2 osds down
            1 host (2 osds) down
            Reduced data availability: 324 pgs inactive, 324 pgs peering
            7 daemons have recently crashed
            10 slow ops, oldest one blocked for 206 sec, mon.a2-df has slow ops

  services:
    mon: 3 daemons, quorum a2-df,a3-df,a1-df (age 47m)
    mgr: a2-df(active, since 82m), standbys: a3-df, a1-df
    mds: cephfs:1/1 {0=a2-df=up:replay} 2 up:standby
    osd: 6 osds: 4 up (since 5s), 6 in (since 47m)
    rgw: 1 daemon active (a2-df)

  data:
    pools:   7 pools, 324 pgs
    objects: 850.25k objects, 744 GiB
    usage:   2.3 TiB used, 14 TiB / 16 TiB avail
    pgs:     100.000% pgs not active
             324 peering
-------------------------------------------
-------------------------------------------
# ceph osd df tree
ID  CLASS     WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME
 -1           16.37366         -   16 TiB  2.3 TiB  2.3 TiB  1.1 GiB  8.1 GiB   14 TiB  13.83  1.00    -          root default
-10           16.37366         -   16 TiB  2.3 TiB  2.3 TiB  1.1 GiB  8.1 GiB   14 TiB  13.83  1.00    -            datacenter df
 -3            5.45799         -  5.5 TiB  773 GiB  770 GiB  382 MiB  2.7 GiB  4.7 TiB  13.83  1.00    -              host a1-df
  3  hdd-slow   3.63899   1.00000  3.6 TiB  1.1 GiB   90 MiB      0 B    1 GiB  3.6 TiB   0.03  0.00    0    down        osd.3
  0  hdd        1.81898   1.00000  1.8 TiB  772 GiB  770 GiB  382 MiB  1.7 GiB  1.1 TiB  41.43  3.00    0    down        osd.0
 -5            5.45799         -  5.5 TiB  773 GiB  770 GiB  370 MiB  2.7 GiB  4.7 TiB  13.83  1.00    -              host a2-df
  4  hdd-slow   3.63899   1.00000  3.6 TiB  1.1 GiB   90 MiB      0 B    1 GiB  3.6 TiB   0.03  0.00  100      up        osd.4
  1  hdd        1.81898   1.00000  1.8 TiB  772 GiB  770 GiB  370 MiB  1.7 GiB  1.1 TiB  41.42  3.00  224      up        osd.1
 -7            5.45767         -  5.5 TiB  773 GiB  770 GiB  387 MiB  2.7 GiB  4.7 TiB  13.83  1.00    -              host a3-df
  5  hdd-slow   3.63869   1.00000  3.6 TiB  1.1 GiB   90 MiB      0 B    1 GiB  3.6 TiB   0.03  0.00  100      up        osd.5
  2  hdd        1.81898   1.00000  1.8 TiB  772 GiB  770 GiB  387 MiB  1.7 GiB  1.1 TiB  41.43  3.00  224      up        osd.2
                          TOTAL     16 TiB  2.3 TiB  2.3 TiB  1.1 GiB  8.1 GiB   14 TiB  13.83
MIN/MAX VAR: 0.00/3.00  STDDEV: 21.82
-------------------------------------------
At this exact moment both OSDs from server a1-df were down, but that's
changing. Sometimes I have only one OSD down, but most of the time I
have 2. And exactly which ones are down keeps changing.
What should I do to get my cluster back up? Just wait?
Regards,
Rodrigo Severo
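OSDs flapping like this right after a physical move often point at a
network problem between the hosts (heartbeats failing on the cluster
network while everything else looks fine). A few diagnostic commands that
might narrow it down (a sketch; the address in the last line is a
placeholder):

# Show why PGs are inactive and which OSDs are missing heartbeats
ceph health detail

# On each OSD host, check whether OSDs believe they were wrongly marked down
grep -i "wrongly marked me down" /var/log/ceph/ceph-osd.*.log

# Verify the network settings still match the new physical location
grep -E "public.network|cluster.network" /etc/ceph/ceph.conf

# Test connectivity between OSD hosts on the cluster network
ping <cluster-network IP of a2-df>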
Hello Everyone,
I would like to understand if this output is right:
# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED    %RAW USED
    85.1TiB     43.7TiB     41.4TiB     48.68
POOLS:
    NAME        ID    USED        %USED    MAX AVAIL    OBJECTS
    volumes     13    13.8TiB     64.21    7.68TiB      3620495
I only have one pool, called 'volumes', which is using 13.8TiB. We have a
replica count of 3, so it's actually using 41.4TiB, and that matches the
RAW USED; up to this point everything is fine. But then the GLOBAL section
says the AVAIL space is 43.7TiB and the %RAW USED is only 48.68%.
So if I use the 7.68TiB of MAX AVAIL and the pool goes up to 100% of
usage, that would not add up to the total space of the cluster, right? I
mean, where are those 43.7TiB of AVAIL space?
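For reference, a rough reconciliation of those numbers (a sketch; MAX
AVAIL is derived from the most-full OSD and the configured full ratio,
not from the cluster-wide average):

# 43.7 TiB raw AVAIL at 3x replication would naively allow
#   43.7 / 3 = ~14.6 TiB of new pool data,
# but ceph df reports MAX AVAIL = 7.68 TiB: MAX AVAIL extrapolates
# from the most-full OSD (data is rarely perfectly balanced) and
# reserves headroom for the full ratio, so
# MAX AVAIL * replicas <= raw AVAIL is expected.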
I'm using Luminous 12.2.12 release.
Sorry if it's a silly question or if it has been answered before.
Thanks in advance,
Best regards,