Please unsubscribe me from all email addresses.
On Tue, 29 Oct 2019 at 7:12 am, <ceph-users-request@ceph.io> wrote:

Send ceph-users mailing list submissions to
        ceph-users@ceph.io
To subscribe or unsubscribe via email, send a message with subject or
body 'help' to
        ceph-users-request@ceph.io
You can reach the person managing the list at
        ceph-users-owner@ceph.io
When replying, please edit your Subject line so it is more specific
than "Re: Contents of ceph-users digest..."
Today's Topics:
1. Help (Sumit Gaur)
----------------------------------------------------------------------
Date: Tue, 29 Oct 2019 07:06:17 +1100
From: Sumit Gaur <sumitkgaur@gmail.com>
Subject: [ceph-users] Help
To: ceph-users@ceph.io
On Tue, 29 Oct 2019 at 1:50 am, <ceph-users-request@ceph.io> wrote:
Today's Topics:
   1. Re: subtrees have overcommitted (target_size_bytes / target_size_ratio) (Lars Täuber)
   2. After delete 8.5M Objects in a bucket still 500K left (EDH - Manuel Rios Fernandez)
   3. Re: Static website hosting with RGW (Casey Bodley)
----------------------------------------------------------------------
Date: Mon, 28 Oct 2019 11:24:54 +0100
From: Lars Täuber <taeuber@bbaw.de>
Subject: [ceph-users] Re: subtrees have overcommitted (target_size_bytes / target_size_ratio)
To: ceph-users <ceph-users@ceph.io>
Is there a way to get rid of these warnings with the autoscaler activated,
besides adding new OSDs?

So far I haven't gotten a satisfactory answer to the question of why this
happens at all.
ceph osd pool autoscale-status :

 POOL          SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
 cephfs_data   122.2T               1.5   165.4T        1.1085  0.8500        1.0   1024                on

versus

ceph df :

RAW STORAGE:
    CLASS  SIZE     AVAIL   USED     RAW USED  %RAW USED
    hdd    165 TiB  41 TiB  124 TiB  124 TiB   74.95

POOLS:
    POOL         ID  STORED  OBJECTS  USED     %USED  MAX AVAIL
    cephfs_data  1   75 TiB  49.31M   122 TiB  87.16  12 TiB
It seems that the overcommitment is calculated incorrectly. Isn't the RATE
already used to calculate the SIZE?

It seems USED(df) = SIZE(autoscale-status).
Isn't the RATE already taken into account here?

Could someone please explain the numbers to me?

Thanks!
Lars
Fri, 25 Oct 2019 07:42:58 +0200
Lars Täuber <taeuber@bbaw.de> ==> Nathan Fish <lordcirth@gmail.com> :
> Hi Nathan,
>
> Thu, 24 Oct 2019 10:59:55 -0400
> Nathan Fish <lordcirth@gmail.com> ==> Lars Täuber <taeuber@bbaw.de> :
> > Ah, I see! The BIAS reflects the number of placement groups it should
> > create. Since cephfs metadata pools are usually very small, but have
> > many objects and high IO, the autoscaler gives them 4x the number of
> > placement groups that it would normally give for that amount of data.
>
> Ah, OK, I understand.
>
> > So, your cephfs_data is set to a ratio of 0.9, and cephfs_metadata to
> > 0.3? Are the two pools using entirely different device classes, so
> > they are not sharing space?
>
> Yes, the metadata is on SSDs and the data on HDDs.
>
> > Anyway, I see that your overcommit is only "1.031x". So if you set
> > cephfs_data to 0.85, it should go away.
>
> This is not the case. I set the target_ratio to 0.7 and get this:
>
>  POOL             SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
>  cephfs_metadata  15736M               3.0   2454G         0.0188  0.3000        4.0   256                 on
>  cephfs_data      122.2T               1.5   165.4T        1.1085  0.7000        1.0   1024                on
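(For reference, the target ratio and the bias discussed in this thread are
ordinary per-pool settings; a sketch of the commands, assuming the stock
Nautilus CLI and the pool names above:

$ ceph osd pool set cephfs_data target_size_ratio 0.7
$ ceph osd pool set cephfs_data target_size_ratio 0        # 0 clears the target ratio, as Nathan suggests further down
$ ceph osd pool set cephfs_metadata pg_autoscale_bias 4.0  # the 4x bias can also be set explicitly

followed by re-checking "ceph osd pool autoscale-status".)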
>
> The ratio seems to have nothing to do with the target_ratio, but with the
> SIZE and the RAW_CAPACITY.
> Because the pool is still getting more data, the SIZE increases and
> therefore the RATIO increases.
> The RATIO seems to be calculated by this formula:
> RATIO = SIZE * RATE / RAW_CAPACITY
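(A quick check with the numbers from the table above: 122.2 * 1.5 / 165.4 is
about 1.108, consistent with the reported RATIO of 1.1085 once rounding is
taken into account. So the RATE is indeed multiplied onto a SIZE that, per
the ceph df comparison earlier, already equals the replicated USED figure.)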
> This is what I don't understand. The data in the cephfs_data pool seems to
> need more space than the raw capacity of the cluster provides. Hence the
> situation is called "overcommitment".
>
> But why is this only the case when the autoscaler is active?
>
> Thanks
> Lars
>
> > On Thu, Oct 24, 2019 at 10:09 AM Lars Täuber <taeuber@bbaw.de> wrote:
> >
> > > Thanks Nathan for your answer,
> > >
> > > but I set the Target Ratio to 0.9. It is the cephfs_data pool that
> > > makes the trouble.
> > >
> > > The 4.0 is the BIAS from the cephfs_metadata pool. This "BIAS" is not
> > > explained on the page linked below, so I don't know its meaning.
> > >
> > > How can a pool be overcommitted when it is the only pool on a set of
> > > OSDs?
> > >
> > > Best regards,
> > > Lars
> > >
> > > Thu, 24 Oct 2019 09:39:51 -0400
> > > Nathan Fish <lordcirth@gmail.com> ==> Lars Täuber <taeuber@bbaw.de> :
> > > > The formatting is mangled on my phone, but if I am reading it correctly,
> > > > you have set Target Ratio to 4.0. This means you have told the balancer
> > > > that this pool will occupy 4x the space of your whole cluster, and to
> > > > optimize accordingly. This is naturally a problem. Setting it to 0 will
> > > > clear the setting and allow the autobalancer to work.
> > > >
> > > > On Thu., Oct. 24, 2019, 5:18 a.m. Lars Täuber, <taeuber@bbaw.de> wrote:
> > > >
> > > > > This question is answered here:
> > > > > https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/
> > > > >
> > > > > But it tells me that there is more data stored in the pool than the raw
> > > > > capacity provides (taking the replication factor RATE into account), hence
> > > > > the RATIO being above 1.0.
> > > > >
> > > > > How come this is the case? Is data stored outside of the pool?
> > > > > How come this is only the case when the autoscaler is active?
> > > > >
> > > > > Thanks
> > > > > Lars
> > > > >
> > > > > Thu, 24 Oct 2019 10:36:52 +0200
> > > > > Lars Täuber <taeuber@bbaw.de> ==> ceph-users@ceph.io :
> > > > > > My question requires too complex an answer.
> > > > > > So let me ask a simple question:
> > > > > >
> > > > > > What does the SIZE of "osd pool autoscale-status" tell/mean/come from?
> > > > > >
> > > > > > Thanks
> > > > > > Lars
> > > > > >
> > > > > > Wed, 23 Oct 2019 14:28:10 +0200
> > > > > > Lars Täuber <taeuber@bbaw.de> ==> ceph-users@ceph.io :
> > > > > > > Hello everybody!
> > > > > > >
> > > > > > > What does this mean?
> > > > > > >
> > > > > > >     health: HEALTH_WARN
> > > > > > >         1 subtrees have overcommitted pool target_size_bytes
> > > > > > >         1 subtrees have overcommitted pool target_size_ratio
> > > > > > >
> > > > > > > and what does it have to do with the autoscaler?
> > > > > > > When I deactivate the autoscaler the warning goes away.
> > > > > > >
> > > > > > > $ ceph osd pool autoscale-status
> > > > > > >  POOL             SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
> > > > > > >  cephfs_metadata  15106M               3.0   2454G         0.0180  0.3000        4.0   256                 on
> > > > > > >  cephfs_data      113.6T               1.5   165.4T        1.0306  0.9000        1.0   512                 on
> > > > > > >
> > > > > > > $ ceph health detail
> > > > > > > HEALTH_WARN 1 subtrees have overcommitted pool target_size_bytes; 1 subtrees have overcommitted pool target_size_ratio
> > > > > > > POOL_TARGET_SIZE_BYTES_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_bytes
> > > > > > >     Pools ['cephfs_data'] overcommit available storage by 1.031x due to target_size_bytes 0 on pools []
> > > > > > > POOL_TARGET_SIZE_RATIO_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_ratio
> > > > > > >     Pools ['cephfs_data'] overcommit available storage by 1.031x due to target_size_ratio 0.900 on pools ['cephfs_data']
> > > > > > >
> > > > > > > Thanks
> > > > > > > Lars
> > > > > > > _______________________________________________
> > > > > > > ceph-users mailing list -- ceph-users@ceph.io
> > > > > > > To unsubscribe send an email to ceph-users-leave@ceph.io
------------------------------
Date: Mon, 28 Oct 2019 14:18:01 +0100
From: "EDH - Manuel Rios Fernandez" <mriosfer@easydatahost.com>
Subject: [ceph-users] After delete 8.5M Objects in a bucket still 500K left
To: <ceph-users@ceph.io>
Hi Cephers!

We started deleting a bucket several days ago. Total size: 47 TB / 8.5M
objects.

Now we see the CLI bucket rm stuck, and the console drops these messages:
[root@ceph-rgw03 ~]# 2019-10-28 13:55:43.880 7f0dd92c9700 0
abort_bucket_multiparts WARNING : aborted 1000 incomplete multipart uploads
2019-10-28 13:56:24.021 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 2000 incomplete multipart uploads
2019-10-28 13:57:04.726 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 3000 incomplete multipart uploads
2019-10-28 13:57:45.424 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 4000 incomplete multipart uploads
2019-10-28 13:58:25.905 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 5000 incomplete multipart uploads
2019-10-28 13:59:06.898 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 6000 incomplete multipart uploads
2019-10-28 13:59:47.829 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 7000 incomplete multipart uploads
2019-10-28 14:00:42.102 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 8000 incomplete multipart uploads
2019-10-28 14:01:23.829 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 9000 incomplete multipart uploads
2019-10-28 14:02:06.028 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 10000 incomplete multipart uploads
2019-10-28 14:02:48.648 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 11000 incomplete multipart uploads
2019-10-28 14:03:29.807 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 12000 incomplete multipart uploads
2019-10-28 14:04:11.180 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 13000 incomplete multipart uploads
2019-10-28 14:04:52.396 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 14000 incomplete multipart uploads
2019-10-28 14:05:33.050 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 15000 incomplete multipart uploads
2019-10-28 14:06:13.652 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 16000 incomplete multipart uploads
2019-10-28 14:06:54.806 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 17000 incomplete multipart uploads
2019-10-28 14:07:35.867 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 18000 incomplete multipart uploads
2019-10-28 14:08:16.886 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 19000 incomplete multipart uploads
2019-10-28 14:08:57.711 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 20000 incomplete multipart uploads
2019-10-28 14:09:38.032 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 21000 incomplete multipart uploads
2019-10-28 14:10:18.377 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 22000 incomplete multipart uploads
2019-10-28 14:10:58.833 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 23000 incomplete multipart uploads
2019-10-28 14:11:39.078 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 24000 incomplete multipart uploads
2019-10-28 14:12:24.731 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 25000 incomplete multipart uploads
2019-10-28 14:13:12.176 7f0dd92c9700 0 abort_bucket_multiparts WARNING :
aborted 26000 incomplete multipart uploads
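(A side note on pacing: the timestamps above show roughly 1000 aborts every
40-47 seconds, i.e. on the order of 20-25 incomplete uploads aborted per
second, so a large multipart backlog can keep a single bucket rm busy for
hours.)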
Bucket stats still show 500K objects left. It looks like bucket rm is trying
to abort all the incomplete multipart uploads first, but this operation is
not reflected in the bucket stats as objects being removed.

Should we just wait for it to work through the remaining 500K, or is this a
bug?

Regards,
Manuel
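(For anyone hitting the same behaviour, a sketch of how to watch the
progress from another shell; the bucket name is illustrative and assumes the
stock radosgw-admin and s3cmd tooling:

$ radosgw-admin bucket stats --bucket=mybucket   # num_objects / size as RGW sees them
$ s3cmd multipart s3://mybucket                  # list the incomplete multipart uploads being aborted

radosgw-admin also offers a forced variant, "bucket rm --bucket=mybucket
--purge-objects --bypass-gc", though whether that applies here depends on
where the rm is actually stuck.)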
------------------------------

Date: Mon, 28 Oct 2019 10:48:44 -0400
From: Casey Bodley <cbodley@redhat.com>
Subject: [ceph-users] Re: Static website hosting with RGW
To: ceph-users@ceph.io
On 10/24/19 8:38 PM, Oliver Freyermuth wrote:
> Dear Cephers,
>
> I have a question concerning static websites with RGW.
> To my understanding, it is best to run >=1 RGW client for "classic" S3
> and in addition operate >=1 RGW client for website serving
> (potentially with HAProxy or its friends in front) to prevent a mix-up
> of requests via the different protocols.
>
> I'd prefer to avoid "*.example.com" entries in DNS if possible.
> So my current setup has these settings for the "web" RGW client:
>   rgw_enable_static_website = true
>   rgw_enable_apis = s3website
>   rgw_dns_s3website_name = some_value_unused_when_A_records_are_used_pointing_to_the_IP_but_it_needs_to_be_set
> and I create simple A records for each website pointing to the IP of
> this "web" RGW node.
>
> I can easily upload content for those websites to the other RGW
> instances which are serving S3,
> so S3 and s3website APIs are cleanly separated into separate instances.
>
> However, one issue remains: how do I run
>   s3cmd ws-create
> on each website bucket once?
> I can't do that against the "classic" S3-serving RGW nodes. This will
> give me a 405 (not allowed),
> since they do not have rgw_enable_static_website enabled.
> I also cannot run it against the "web S3" nodes, since they do not have
> the S3 API enabled.
> Of course I could enable that, but then the RGW node can't cleanly
> disentangle S3 and website requests, since I use A records.
>
> Does somebody have a good idea on how to solve this issue?
> Setting "rgw_enable_static_website = true" on the S3-serving RGW nodes
> would solve it, but does that have any bad side-effects on their S3
> operation?

Enabling static website on the gateway serving the S3 api does look like
the right solution. As far as I can tell, it's only used to control
whether the S3 ops for PutBucketWebsite, GetBucketWebsite, and
DeleteBucketWebsite are exposed.
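To make the setup concrete, here is a minimal sketch of the two-instance
split under discussion (section names, hostnames, and the bucket name are
illustrative; the option names are the ones quoted above):

[client.rgw.s3]                      # "classic" S3 endpoint
rgw_enable_apis = s3, admin
rgw_enable_static_website = true     # per Casey: only exposes Put/Get/DeleteBucketWebsite
rgw_dns_name = s3.example.com

[client.rgw.web]                     # website-serving endpoint
rgw_enable_apis = s3website
rgw_enable_static_website = true
rgw_dns_s3website_name = web.example.com

With that in place, the one-time step Oliver asks about should work against
the S3 endpoint, e.g. "s3cmd ws-create s3://my-website-bucket", followed by
the usual "s3cmd put" uploads.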
>
> > Also, if there's an expert on this: Exposing a
bucket under a tenant as
> static website is not possible since the colon (:) can't be encoded in
DN=
S,
> right?
>
>
> > In case somebody also wants to set something like this
up, here are the
> best docs I could find:
> >
https://gist.github.com/robbat2/ec0a66eed28e5f0e1ef7018e9c77910c
> > and of course:
>
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html=
-single/object_gateway_guide_for_red_hat_enterprise_linux/index#configuring=
<https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html=-single/object_gateway_guide_for_red_hat_enterprise_linux/index#configuring=>
_gateways_for_static_web_hosting
>
>
> > Cheers,
> > Oliver
>
>
> > _______________________________________________
> > ceph-users mailing list -- ceph-users(a)ceph.io
> > To unsubscribe send an email to ceph-users-leave(a)ceph.io
------------------------------

Subject: Digest Footer

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io

------------------------------

End of ceph-users Digest, Vol 81, Issue 79
******************************************