Hello everyone,
Can anyone review my PR https://github.com/ceph/ceph/pull/35860? I think it
is ready to merge: it passes all the test cases, and I have made the changes
suggested by previous reviewers.
Thank you.
Hi Folks,
The weekly performance meeting is on right now. :) Today we will give an
update on the bufferlist append changes, and Adam Kupczyk will talk about his
compression testing work. Thanks!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Thanks,
Mark
Hello everyone,
My PR was passing successfully until today. When I tried to push a commit,
all the tests passed but make check failed with the error
"190 - unittest_rbd_mirror (Failed)". Is this related to
https://tracker.ceph.com/issues/46669 ?
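For reference, this is how I re-run just that test locally (a sketch, assuming a standard cmake build tree; `ctest -R` selects tests whose names match the pattern):
```
cd build
ctest -R unittest_rbd_mirror --output-on-failure
```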
Hi,
I am trying to use Ceph's dmclock to see how it works for QoS control. In particular, I want to set "osd_op_queue" to "mclock_client" in order to configure different (r, w, l) parameters for each client. The Ceph version I use is Nautilus 14.2.9.
I noticed that the "OSD Config Reference" section of the Ceph documentation states that "the mClock based ClientQueue (mclock_client) also incorporates the client identifier in order to promote fairness between clients", so I believe librados can support per-client configuration now. How can I set up the Ceph configuration so that different clients get different (r, w, l) values via such a "client identifier"? Thanks.
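For concreteness, the closest I have come is the ceph.conf fragment below (a sketch; the option names are my reading of the Nautilus OSD config reference, and the values are placeholders). As far as I can tell, these set (r, w, l) for the client-op class as a whole rather than per client, which is exactly my question:
```
[osd]
osd_op_queue = mclock_client
# Placeholder values; these appear to apply to the whole
# client-op class, not to an individual client:
osd_op_queue_mclock_client_op_res = 1000.0   # r: reservation
osd_op_queue_mclock_client_op_wgt = 500.0    # w: weight
osd_op_queue_mclock_client_op_lim = 100.0    # l: limit
```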
Best,
Zhenbo Qiao
On Sat, Aug 1, 2020 at 5:50 AM Marc Roos <M.Roos(a)f1-outsourcing.eu> wrote:
> I can understand the benefits of having a CO; I am still testing with
> Mesos. However, what is the benefit of having Ceph daemons running in a CO
> environment?
As I said in my original post, I use this for testing CephFS.
There's no reason why you can't also use Ceph as a storage system in
the cloud though. It just may not be as cost effective as the native
cloud storage offering (like S3).
> Except for your mds, mgr and radosgw, your osd daemons are
> bound to the hardware / disks they are running on. It is not like if
> osd.121 goes down, you can start it on some random node.
Why not?
--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
Hi,
In the *mapper.c* file of Ceph CRUSH, I am trying to understand the
definition of the Linux macro `S64_MIN` used in the `else` branch of the
following code, i.e. `draw = S64_MIN`.
Which exact decimal value does `S64_MIN` stand for here?
```
for (i = 0; i < bucket->h.size; i++) {
	__s64 draw;

	if (weights[i]) {
		u = hash(bucket->h.hash, x, ids[i], r);
		u &= 0xffff;
		/* crush_ln() maps [0, 0xffff] to [0, 0xffffffffffff];
		 * subtracting 0x1000000000000 makes ln negative */
		ln = crush_ln(u) - 0x1000000000000ll;
		/* divide by the 16.16 fixed-point weight; a larger weight
		 * gives a larger (less negative) draw */
		draw = div64_s64(ln, weights[i]);
	} else {
		/* zero-weight item: S64_MIN guarantees it never wins */
		draw = S64_MIN;
		/* #define S64_MAX ((s64)(U64_MAX >> 1)) */
		/* #define S64_MIN ((s64)(-S64_MAX - 1)) */
	}

	if (i == 0 || draw > high_draw) {
		high = i;
		high_draw = draw;
	}
}
return bucket->h.items[high];
}
```
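To check the value myself, I put together this minimal user-space sketch (it reproduces the macro definitions quoted above and assumes a C99 compiler; nothing here is Ceph code):
```
#include <stdio.h>
#include <stdint.h>

typedef int64_t s64;
typedef uint64_t u64;

#define U64_MAX ((u64)~0ULL)
#define S64_MAX ((s64)(U64_MAX >> 1))
#define S64_MIN ((s64)(-S64_MAX - 1))

int main(void)
{
	/* prints -9223372036854775808, i.e. -2^63 */
	printf("%lld\n", (long long)S64_MIN);
	return 0;
}
```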
Thanks
Bobby!
Hello all,
There's a framework [0] I've been working on for a while to deploy
Ceph in the cloud. For now, this is done through the Linode, LLC cloud
provider [1].
Primarily I use this for testing CephFS performance/behavior. For
example, it's fairly simple to create a decent-sized test cluster with
150GB MDSs and 16x 16GB OSDs for a humble 1.7TB of usable storage, then
provision 100+ client nodes for executing workflows against
the Ceph cluster. This has been very useful for isolating aberrant
behaviors that are only uncovered at scale. All of the code/playbooks
I use for this purpose are also in the ceph-linode repository.
Early versions of this project used ceph-ansible, but I have updated
the code to use the new cephadm deployment technology [2] from the Ceph
Octopus release. The Ansible playbook that deploys the cluster [3] is
delightfully simple.
Anyway, I thought it might be useful to the broader community to have an
option to try out cephadm on a throwaway cluster for pennies per hour
through a VPS provider. Getting a small cluster started should take
less than 10 minutes by following the README (roughly the flow sketched
after the links below). I hope some folks out there find this useful
for their own testing. Feedback is welcome.
[0] https://github.com/batrick/ceph-linode
[1] https://www.linode.com/
[2] https://docs.ceph.com/docs/master/cephadm/
[3] https://github.com/batrick/ceph-linode/blob/master/cephadm.yml
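In rough strokes, that flow looks like this (a sketch only; the README [0] is authoritative, and I am glossing over the Linode API token and cluster-layout configuration it walks you through):
```
git clone https://github.com/batrick/ceph-linode
cd ceph-linode
# ... configure your Linode API token and cluster layout per the README ...
ansible-playbook cephadm.yml   # the cephadm playbook from [3]
```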
Full disclosure: I have no relationship with Linode except as a customer.
--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D