Send ceph-users mailing list submissions to
ceph-users(a)ceph.io
To subscribe or unsubscribe via email, send a message with subject or
body 'help' to
ceph-users-request(a)ceph.io
You can reach the person managing the list at
ceph-users-owner(a)ceph.io
When replying, please edit your Subject line so it is more specific
than "Re: Contents of ceph-users digest..."
Today's Topics:
   1. Re: subtrees have overcommitted (target_size_bytes / target_size_ratio) (Lars Täuber)
   2. After delete 8.5M Objects in a bucket still 500K left (EDH - Manuel Rios Fernandez)
   3. Re: Static website hosting with RGW (Casey Bodley)
----------------------------------------------------------------------
Date: Mon, 28 Oct 2019 11:24:54 +0100
From: Lars Täuber <taeuber(a)bbaw.de>
Subject: [ceph-users] Re: subtrees have overcommitted
(target_size_bytes / target_size_ratio)
To: ceph-users <ceph-users(a)ceph.io>
Message-ID: <20191028112454.0362fe66(a)bbaw.de>
Content-Type: text/plain; charset=UTF-8
Is there a way to get rid of these warnings with the autoscaler activated,
besides adding new OSDs?
So far I couldn't get a satisfactory answer to the question of why this all
happens.
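(One way that should make the warning go away, assuming the pools' target
settings are what trigger it, is to lower or clear them; Nathan suggests
further down in this thread that setting the ratio to 0 clears it:

    # clear the target ratio on the data pool (0 removes the setting)
    ceph osd pool set cephfs_data target_size_ratio 0
    # or turn the autoscaler off for the pool, which also silences the warning
    ceph osd pool set cephfs_data pg_autoscale_mode off
)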
ceph osd pool autoscale-status :
 POOL          SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
 cephfs_data   122.2T               1.5   165.4T        1.1085  0.8500        1.0   1024                on
versus
ceph df :
RAW STORAGE:
    CLASS     SIZE        AVAIL      USED        RAW USED     %RAW USED
    hdd       165 TiB     41 TiB     124 TiB     124 TiB          74.95
POOLS:
    POOL            ID     STORED     OBJECTS     USED        %USED     MAX AVAIL
    cephfs_data      1     75 TiB      49.31M     122 TiB     87.16        12 TiB
It seems that the overcommitment is wrongly calculated: USED in "ceph df"
equals SIZE in "autoscale-status", so isn't the RATE already taken into
account in the SIZE?
Could someone please explain the numbers to me?
Thanks!
Lars
Fri, 25 Oct 2019 07:42:58 +0200
Lars Täuber <taeuber(a)bbaw.de> ==> Nathan Fish <lordcirth(a)gmail.com> :
Hi Nathan,
Thu, 24 Oct 2019 10:59:55 -0400
Nathan Fish <lordcirth(a)gmail.com> ==> Lars Täuber <taeuber(a)bbaw.de> :
Ah, I see! The BIAS reflects the number of placement groups it should
create. Since cephfs metadata pools are usually very small, but have
many objects and high IO, the autoscaler gives them 4x the number of
placement groups that it would normally give for that amount of data.
ah ok, I understand.
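(The 4x mentioned here corresponds to the pool's pg_autoscale_bias
property; a minimal sketch of inspecting and setting it, assuming a
Nautilus-era cluster:

    # the BIAS column of autoscale-status shows the current value
    ceph osd pool autoscale-status
    # set the bias explicitly, e.g. 4x for a small, IO-heavy metadata pool
    ceph osd pool set cephfs_metadata pg_autoscale_bias 4
)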
So, your cephfs_data is set to a ratio of 0.9, and cephfs_metadata to
0.3? Are the two pools using entirely different device classes, so
they are not sharing space?
Yes, the metadata is on SSDs and the data on HDDs.
Anyway, I see that your overcommit is only "1.031x". So if you set
cephfs_data to 0.85, it should go away.
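(The suggested change would be applied per pool, e.g.:

    ceph osd pool set cephfs_data target_size_ratio 0.85
)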
This is not the case. I set the target_ratio to 0.7 and get this:
 POOL             SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
 cephfs_metadata  15736M               3.0   2454G         0.0188  0.3000        4.0   256                 on
 cephfs_data      122.2T               1.5   165.4T        1.1085  0.7000        1.0   1024                on
The RATIO seems to have nothing to do with the TARGET RATIO, only with the
SIZE and the RAW CAPACITY. Because the pool is still getting more data, the
SIZE increases and therefore the RATIO increases.
The RATIO seems to be calculated by this formula:
RATIO = SIZE * RATE / RAW_CAPACITY
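Plugging in the numbers from above supports this:
122.2T * 1.5 / 165.4T = 183.3 / 165.4 ≈ 1.108, which matches the reported
RATIO of 1.1085 up to the rounding of SIZE.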
This is what I don't understand. The data in the cephfs_data pool seems to
need more space than the raw capacity of the cluster provides, hence the
situation is called "overcommitment".
But why is this only the case when the autoscaler is active?
Thanks
Lars
>
> On Thu, Oct 24, 2019 at 10:09 AM Lars Täuber <taeuber(a)bbaw.de> wrote:
> >
> > Thanks Nathan for your answer,
> >
> > but I set the Target Ratio to 0.9. It is the cephfs_data pool that
> > makes the trouble.
> >
> > The 4.0 is the BIAS from the cephfs_metadata pool. This "BIAS" is
> > not explained on the page linked below. So I don't know its meaning.
> >
> > How can a pool be overcommitted when it is the only pool on a set of
> > OSDs?
> >
> > Best regards,
> > Lars
> >
> > Thu, 24 Oct 2019 09:39:51 -0400
> > Nathan Fish <lordcirth(a)gmail.com> ==> Lars Täuber <taeuber(a)bbaw.de> :
> > > The formatting is mangled on my phone, but if I am reading it
> > > correctly, you have set Target Ratio to 4.0. This means you have
> > > told the balancer that this pool will occupy 4x the space of your
> > > whole cluster, and to optimize accordingly. This is naturally a
> > > problem. Setting it to 0 will clear the setting and allow the
> > > autobalancer to work.
> > >
> > > On Thu., Oct. 24, 2019, 5:18 a.m. Lars Täuber, <taeuber(a)bbaw.de> wrote:
> > >
> > > > This question is answered here:
> > > > https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/
> > > >
> > > > But it tells me that there is more data stored in the pool than
> > > > the raw capacity provides (taking the replication factor RATE into
> > > > account), hence the RATIO being above 1.0.
> > > >
> > > > How come this is the case? Is data stored outside of the pool?
> > > > How come this is only the case when the autoscaler is active?
> > > >
> > > > Thanks
> > > > Lars
> > > >
> > > >
> > > > Thu, 24 Oct 2019 10:36:52 +0200
> > > > Lars Täuber <taeuber(a)bbaw.de> ==> ceph-users(a)ceph.io :
> > > > > My question requires too complex an answer.
> > > > > So let me ask a simple question:
> > > > >
> > > > > What does the SIZE of "osd pool autoscale-status" tell/mean/come from?
> > > > >
> > > > > Thanks
> > > > > Lars
> > > > >
> > > > >
> > > > > Wed, 23 Oct 2019 14:28:10 +0200
> > > > > Lars Täuber <taeuber(a)bbaw.de> ==> ceph-users(a)ceph.io :
> > > > > > Hello everybody!
> > > > > >
> > > > > > What does this mean?
> > > > > >
> > > > > >     health: HEALTH_WARN
> > > > > >             1 subtrees have overcommitted pool target_size_bytes
> > > > > >             1 subtrees have overcommitted pool target_size_ratio
> > > > > >
> > > > > > and what does it have to do with the autoscaler?
> > > > > > When I deactivate the autoscaler the warning goes away.
> > > > > >
> > > > > >
> > > > > > $ ceph osd pool autoscale-status
> > > > > >  POOL             SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
> > > > > >  cephfs_metadata  15106M               3.0   2454G         0.0180  0.3000        4.0   256                 on
> > > > > >  cephfs_data      113.6T               1.5   165.4T        1.0306  0.9000        1.0   512                 on
> > > > > >
> > > > > >
> > > > > > $ ceph health detail
> > > > > > HEALTH_WARN 1 subtrees have overcommitted pool target_size_bytes; 1 subtrees have overcommitted pool target_size_ratio
> > > > > > POOL_TARGET_SIZE_BYTES_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_bytes
> > > > > >     Pools ['cephfs_data'] overcommit available storage by 1.031x due to target_size_bytes 0 on pools []
> > > > > > POOL_TARGET_SIZE_RATIO_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_ratio
> > > > > >     Pools ['cephfs_data'] overcommit available storage by 1.031x due to target_size_ratio 0.900 on pools ['cephfs_data']
> > > > > >
> > > > > >
> > > > > > Thanks
> > > > > > Lars
> > > > > > _______________________________________________
> > > > > > ceph-users mailing list -- ceph-users(a)ceph.io
> > > > > > To unsubscribe send an email to ceph-users-leave(a)ceph.io
------------------------------
Date: Mon, 28 Oct 2019 14:18:01 +0100
From: "EDH - Manuel Rios Fernandez" <mriosfer(a)easydatahost.com>
Subject: [ceph-users] After delete 8.5M Objects in a bucket still 500K
left
To: <ceph-users(a)ceph.io>
Message-ID: <02a201d58d92$1fe85880$5fb90980$(a)easydatahost.com>
Content-Type: text/plain; charset="us-ascii"
Hi Cephers!
We started deleting a bucket several days ago. Total size: 47 TB / 8.5M
objects.
Now the CLI "bucket rm" appears stuck, and the console keeps printing these
messages:
[root@ceph-rgw03 ~]# 2019-10-28 13:55:43.880 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 1000 incomplete multipart uploads
2019-10-28 13:56:24.021 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 2000 incomplete multipart uploads
2019-10-28 13:57:04.726 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 3000 incomplete multipart uploads
2019-10-28 13:57:45.424 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 4000 incomplete multipart uploads
2019-10-28 13:58:25.905 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 5000 incomplete multipart uploads
2019-10-28 13:59:06.898 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 6000 incomplete multipart uploads
2019-10-28 13:59:47.829 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 7000 incomplete multipart uploads
2019-10-28 14:00:42.102 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 8000 incomplete multipart uploads
2019-10-28 14:01:23.829 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 9000 incomplete multipart uploads
2019-10-28 14:02:06.028 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 10000 incomplete multipart uploads
2019-10-28 14:02:48.648 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 11000 incomplete multipart uploads
2019-10-28 14:03:29.807 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 12000 incomplete multipart uploads
2019-10-28 14:04:11.180 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 13000 incomplete multipart uploads
2019-10-28 14:04:52.396 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 14000 incomplete multipart uploads
2019-10-28 14:05:33.050 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 15000 incomplete multipart uploads
2019-10-28 14:06:13.652 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 16000 incomplete multipart uploads
2019-10-28 14:06:54.806 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 17000 incomplete multipart uploads
2019-10-28 14:07:35.867 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 18000 incomplete multipart uploads
2019-10-28 14:08:16.886 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 19000 incomplete multipart uploads
2019-10-28 14:08:57.711 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 20000 incomplete multipart uploads
2019-10-28 14:09:38.032 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 21000 incomplete multipart uploads
2019-10-28 14:10:18.377 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 22000 incomplete multipart uploads
2019-10-28 14:10:58.833 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 23000 incomplete multipart uploads
2019-10-28 14:11:39.078 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 24000 incomplete multipart uploads
2019-10-28 14:12:24.731 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 25000 incomplete multipart uploads
2019-10-28 14:13:12.176 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 26000 incomplete multipart uploads
Bucket stats show 500K objects left. It looks like "bucket rm" is trying to
abort all incomplete multipart uploads, but this operation is not reflected
in the bucket stats as removed objects.
Should we just wait until the remaining 500K are gone, or is it a bug?
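(For what it's worth, incomplete multipart uploads can also be listed and
aborted from the client side; a rough sketch with s3cmd, where bucket and
object names are placeholders:

    # list open multipart uploads in the bucket
    s3cmd multipart s3://mybucket
    # abort one upload by object URI and upload id
    s3cmd abortmp s3://mybucket/objectname UPLOAD_ID
)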
Regards
Manuel
------------------------------
Date: Mon, 28 Oct 2019 10:48:44 -0400
From: Casey Bodley <cbodley(a)redhat.com>
Subject: [ceph-users] Re: Static website hosting with RGW
To: ceph-users(a)ceph.io
Message-ID: <20834361-445e-1ee5-433b-dd4792f90608(a)redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 10/24/19 8:38 PM, Oliver Freyermuth wrote:
> Dear Cephers,
>
> I have a question concerning static websites with RGW.
> To my understanding, it is best to run >=1 RGW client for "classic" S3
> and in addition operate >=1 RGW client for website serving (potentially
> with HAProxy or its friends in front) to prevent a mix-up of requests
> via the different protocols.
> I'd prefer to avoid "*.example.com" entries in DNS if possible.
> So my current setup has these settings for the "web" RGW client:
>   rgw_enable_static_website = true
>   rgw_enable_apis = s3website
>   rgw_dns_s3website_name = some_value_unused_when_A_records_are_used_pointing_to_the_IP_but_it_needs_to_be_set
> and I create simple A records for each website pointing to the IP of
> this "web" RGW node.
> I can easily upload content for those websites to the other RGW
> instances which are serving S3, so S3 and s3website APIs are cleanly
> separated in separate instances.
> However, one issue remains: How do I run
>   s3cmd ws-create
> on each website-bucket once?
> I can't do that against the "classic" S3-serving RGW nodes. This will
> give me a 405 (not allowed), since they do not have
> rgw_enable_static_website enabled.
> I also cannot run it against the "web S3" nodes, since they do not have
> the S3 API enabled. Of course I could enable that, but then the RGW
> node can't cleanly disentangle S3 and website requests since I use A
> records.
> Does somebody have a good idea on how to solve this issue?
> Setting "rgw_enable_static_website = true" on the S3-serving RGW nodes
> would solve it, but does that have any bad side-effects on their S3
> operation?
Enabling static website on the gateway serving the S3 api does look like
the right solution. As far as I can tell, it's only used to control
whether the S3 ops for PutBucketWebsite, GetBucketWebsite, and
DeleteBucketWebsite are exposed.
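A sketch of what that could look like in ceph.conf for the S3-serving
gateways, using only the options already mentioned in this thread (the
section name is illustrative):

    [client.rgw.s3-node]
    rgw_enable_apis = s3
    rgw_enable_static_website = true

With that in place, the one-time setup per bucket (e.g. "s3cmd ws-create
s3://my-website-bucket") should work against the S3 endpoint.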
> Also, if there's an expert on this: Exposing a bucket under a tenant as
> a static website is not possible since the colon (:) can't be encoded
> in DNS, right?
>
> In case somebody also wants to set something like this up, here are the
> best docs I could find:
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html-s…
>
> Cheers,
> Oliver
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io
------------------------------
Subject: Digest Footer
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io
------------------------------
End of ceph-users Digest, Vol 81, Issue 79
******************************************