Hello all,
For the last couple of weeks I've been doing Coverity scans and
posting them alongside the results from the other static analysis I
run each week [1]. I wanted to bring everyone up to speed on the
current status and why I'm doing it this way.
Quite some time ago now I took over the task of running the Coverity
scans, but in January last year they stopped working with the version
that Coverity shipped, and they have not worked since then with any
publicly available version. This coincided with our move to C++11, and
my investigation led me to believe that the version of Coverity
current at the time lacked support for c++1*. My company does have a
subscription, however, and that gives us access to several versions
which are not publicly available. I tested with various versions,
found one that works, and that is what I'm currently using to do the
scans. Unfortunately Coverity's website (dashboard) does not support
uploading results gathered with any version other than the one they
ship publicly, so we are stuck with the HTML results I generate and
host. I will be creating bug reports for the errors seen when scanning
with the publicly available version (which is a *later* version than
the working one), but I have no idea how much priority they will be
given. Previous attempts to contact support have fallen on deaf ears,
so I'm not overly optimistic, but we shall see. It has taken
considerable time and effort to get to this point, but various
individuals have stated the importance of having these results
available, so it has been, and remains, a priority for me, as do the
other weekly scans I perform.
[1] http://people.redhat.com/bhubbard/
--
Cheers,
Brad
Hi Folks,
The perf meeting is on in ~20 minutes! Update on the bluefs_alloc_size
and bluestore_min_alloc_size work and testing. Sam is at the Flash
Memory Summit, so he will present his latency analysis work at a later
time.
Please feel free to add your own topic if you'd like!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Thanks,
Mark
Hello ,
This is Peter from the China FANXI company, which has cooperated with
Tiffany, Cartier, Pomellato, and VanCleef for many years.
We design new jewelry displays and packaging. It is a new application.
You can use it to manage your entire shop display more effectively.
A catalogue will be sent if needed!
Best regards
Peter
Whatsapp:+86 15727655534
Wechat:zsp444528275
Dear Madam/Sir:
Glad to hear that you're in the market for photographic equipment. Our
factory has specialized in tripods for 19 years, with good quality and
very competitive prices.
Also we have our own professional designers to meet any of your
requirements.
Why choose us?
- Quality first
- On-time shipment
- No extra cost
Hope to have your feedback soon.
Best regards
Candy
GuangZhou Qingzhuang Photographic Equipment Co. Ltd
Products: tripods, monopods, shoulder pads, sliders, background
stands, light stands, LED lights
Hi everyone,
When I started setting up a test cluster, I created a cephfs_data pool,
since I thought I was going to use the replicated type for the pool.
This turned out not to be the ideal choice in terms of usable disk
space, so I decided to create another, erasure-coded pool with
k=2;m=2, which I did. Please see the attached log file. I also attached
the output of the pools from the dashboard.
The problem is this: whenever an OSD needs to be brought down, the
cluster rebalances and enters degraded mode. The rebalance takes a
while since all the PGs need to be redistributed. Now I could use
something like:
    ceph osd set noout && ceph osd set norebalance
However, I want the cluster to recover much faster, but the
cephfs_data pool, which has a lot of PGs that I cannot reduce (not
possible in Mimic), is keeping the cluster busy. This pool isn't used
for storing any data, and I am not sure if I can remove it without
affecting the th-ec22 EC pool.
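For context, the full maintenance sequence I have in mind would look
roughly like this (just a sketch; osd.3 and the systemd unit name are
example assumptions for my setup):

    # Prevent CRUSH from marking OSDs out and from rebalancing
    ceph osd set noout
    ceph osd set norebalance

    # Stop the OSD for maintenance (osd.3 is just an example id)
    systemctl stop ceph-osd@3

    # ... perform the maintenance work, then bring the OSD back ...
    systemctl start ceph-osd@3

    # Clear the flags once the OSD is up and in again
    ceph osd unset norebalance
    ceph osd unset noout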
Does anyone have any thoughts on this?
In case I provided insufficient information, please let me know; I'd
gladly share more details with you.
--
Met vriendelijke groeten, Kind regards
Valentin Bajrami
Target Holding
1630 UTC / 12:30 ET / 9:30 PT
https://bluejeans.com/908675367
Agenda:
- [Josh] Unified OSD admin commands. You can talk to daemons with 'ceph
daemon <name> ...' and 'ceph tell <name> ...', but the command sets are
annoyingly disjoint.
- [Junior] progress module progress. Lots of work has been going into
making good progress events (progress bars for interesting cluster
events). Status update and discussion of next steps.
- [Adam] RADOS client refactor. An effort to refactor librados,
partly/primarily to support boost::asio for RGW, is nearing completion.
Status update, review of goals, overview of design, performance, etc.
Sorry for the late notice on this one! Hope to see you all in a few
hours.
sage
Hi Kefu,
Can you take a look at:
http://cephdev.digiware.nl:8180/jenkins/job/ceph-master/3614/consoleFull
and see if you can figure out why it fails?
It does so on and off when run from Jenkins.
If I use ctest from the command line it just completes with no errors.
Or maybe some extra debugging is possible in the class, given some of
the names in the backtrace...
Thanx,
--WjW