Hi Ernesto,
Could you take a look at:
http://cephdev.digiware.nl:8180/jenkins/job/ceph-master/3652/console
At the end it shows a weird error that goes away when I revert #28696.
And it looks like the open function really is trying to open a very
awkward filename?
Thanx,
--WjW
Hi Chunmei,
I am reviewing your change at
https://github.com/ceph/ceph/compare/master...liu-chunmei:ceph_seastar_alie….
It looks good in general. I think the simplest way to co-locate the
different versions of alien-common, ceph-common, and crimson-common is
to introduce different namespaces, because we need to have
alien-common and crimson-common in the same binary, and all three
versions in the same repository.
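A minimal sketch of what I have in mind follows; the class and namespace
names are only illustrative, not the ones in your branch. The shared code
is compiled once per flavor, each copy wrapped in its own namespace, so
both flavors can live in one binary without ODR clashes:

  #include <atomic>
  #include <cstdint>

  namespace ceph::alienized {
  // classic flavor: shared between posix threads, so it needs atomics/locks
  struct PerfCounter {
    std::atomic<uint64_t> value{0};
    void inc() { value.fetch_add(1, std::memory_order_relaxed); }
  };
  } // namespace ceph::alienized

  namespace ceph::crimson {
  // crimson flavor: shard-local state, plain members are enough
  struct PerfCounter {
    uint64_t value = 0;
    void inc() { ++value; }
  };
  } // namespace ceph::crimson

  // each consumer then aliases the flavor it is built for, e.g.
  //   using PerfCounter = ceph::crimson::PerfCounter;    // crimson code
  //   using PerfCounter = ceph::alienized::PerfCounter;  // alienized code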
But this divergence concerns me, as it introduces yet another
condition in the shared infrastructure of our code base, and in the
long run this #ifdef won't go away if we go this way. So I need to at
least give it a try. What is "it"? Porting rocksdb to seastar. Seastar
offers seastar::thread, which makes it relatively simple to wrap the
blocking calls with ucontext, and rocksdb offers an abstraction
machinery allowing one to port it to a new platform; and seastar is a
"platform" to some degree, I'd say.
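To make the direction concrete, here is a very rough sketch, not working
code from any branch: rocksdb allows a platform to plug in its own Env and
file classes, and seastar::async() runs a lambda inside a seastar::thread,
where futures can be waited on with .get0() without blocking the reactor.
The class name below is made up, and error handling, file lifetime, and
buffer alignment are all glossed over; it would also need to run under a
seastar reactor (app_template):

  #include <cstring>
  #include <rocksdb/env.h>
  #include <rocksdb/slice.h>
  #include <seastar/core/file.hh>
  #include <seastar/core/seastar.hh>
  #include <seastar/core/thread.hh>

  class SeastarSequentialFile : public rocksdb::SequentialFile {
    seastar::file f_;
    uint64_t pos_ = 0;
   public:
    explicit SeastarSequentialFile(seastar::file f) : f_(std::move(f)) {}
    // rocksdb calls Read() synchronously; as long as the caller runs inside
    // a seastar::thread (e.g. launched via seastar::async()), the .get0()
    // below just switches stacks instead of blocking the reactor.
    rocksdb::Status Read(size_t n, rocksdb::Slice* result, char* scratch) override {
      auto buf = f_.dma_read_exactly<char>(pos_, n).get0();
      pos_ += buf.size();
      std::memcpy(scratch, buf.get(), buf.size());
      *result = rocksdb::Slice(scratch, buf.size());
      return rocksdb::Status::OK();
    }
    rocksdb::Status Skip(uint64_t n) override {
      pos_ += n;
      return rocksdb::Status::OK();
    }
  };

  // usage sketch: read the first 4 KiB of a file "rocksdb-style"
  seastar::future<> demo(seastar::sstring path) {
    return seastar::async([path] {
      auto f = seastar::open_file_dma(path, seastar::open_flags::ro).get0();
      SeastarSequentialFile sf(std::move(f));
      char scratch[4096];
      rocksdb::Slice out;
      auto st = sf.Read(sizeof(scratch), &out, scratch);
      (void)st;
    });
  }

Whether the rest of rocksdb (its background threads, mutexes, and so on)
can be made to cooperate with the reactor is exactly what I want to find
out.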
I will update you guys with my progress and findings.
--
Regards
Kefu Chai
I originally sent this to the old ceph-devel mailing list, so I apologize if you get it twice...
We've run into this issue on the first two clusters after upgrading them to Nautilus (14.2.2).
When marking a single OSD back into the cluster, some PGs will switch to the active+remapped+backfill_wait+backfill_toofull state for a while, and then the state goes away after some of the other PGs finish backfilling. This is rather odd because all the data on the cluster could fit on a single drive, but we have over 100 of them:
# ceph -s
  cluster:
    id:     XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
    health: HEALTH_ERR
            Degraded data redundancy (low space): 1 pg backfill_toofull

  services:
    mon: 3 daemons, quorum a1cephmon002,a1cephmon003,a1cephmon004 (age 21h)
    mgr: a1cephmon002(active, since 21h), standbys: a1cephmon003, a1cephmon004
    mds: cephfs:2 {0=a1cephmon002=up:active,1=a1cephmon003=up:active} 1 up:standby
    osd: 143 osds: 142 up, 142 in; 106 remapped pgs
    rgw: 11 daemons active (radosgw.a1cephrgw008, radosgw.a1cephrgw009, radosgw.a1cephrgw010, radosgw.a1cephrgw011, radosgw.a1tcephrgw002, radosgw.a1tcephrgw003, radosgw.a1tcephrgw004, radosgw.a1tcephrgw005, radosgw.a1tcephrgw006, radosgw.a1tcephrgw007, radosgw.a1tcephrgw008)

  data:
    pools:   19 pools, 5264 pgs
    objects: 1.45M objects, 148 GiB
    usage:   658 GiB used, 436 TiB / 437 TiB avail
    pgs:     44484/4351770 objects misplaced (1.022%)
             5158 active+clean
             104  active+remapped+backfill_wait
             1    active+remapped+backfilling
             1    active+remapped+backfill_wait+backfill_toofull

  io:
    client: 19 MiB/s rd, 13 MiB/s wr, 431 op/s rd, 509 op/s wr
I searched the archives, but most of the other reports involved much fuller clusters, where this state could sometimes be valid. This bug report seems similar, but the fix was just to make it a warning instead of an error:
https://tracker.ceph.com/issues/39555
So I've created a new tracker ticket to troubleshoot this issue:
https://tracker.ceph.com/issues/4125
Let me know what you guys think,
Bryan
+dev@ceph
On Thu, Aug 15, 2019 at 10:42 PM Paul Emmerich <paul.emmerich(a)croit.io> wrote:
>
> We've also seen this bug several times since Mimic, it seems to happen
> whenever a backfill target goes down. Always resolves itself but is
> still annoying.
>
> The original fix making this a warning instead of an error
> unfortunately doesn't help on Nautilus because we often have clusters
> that would be HEALTH_OK without this bug on Nautilus (i.e., some PGs
> in remapped+backfill*) but they will show up as HEALTH_WARN with this
> fix (and HEALTH_ERR without it).
>
>
>
> Paul
>
>
>
> On Wed, Aug 14, 2019 at 11:44 PM Bryan Stillwell <bstillwell(a)godaddy.com> wrote:
> > [...]
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
--
Cheers,
Brad