Hi,
I am using the rados bench tool. Currently I am using this tool on a
development cluster brought up with the vstart.sh script. It is working
fine and I am interested in benchmarking the cluster. However, I am
struggling to achieve good bandwidth (MB/sec). My target throughput is
at least 50 MB/sec, but mostly I am achieving around 15-20 MB/sec, which
is very poor.
I am quite sure I am missing something: either I have to change how I
build the cluster through vstart.sh, or I am not fully utilizing the
rados bench tool, or maybe both.
Some of the shell commands I have been using to build the cluster are
below:
MDS=0 RGW=1 ../src/vstart.sh -d -l -n --bluestore
MDS=0 RGW=1 MON=1 OSD=4 ../src/vstart.sh -d -l -n --bluestore
While using the rados bench tool I have been trying different block sizes:
4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K. I have also been changing the
-t parameter to increase the number of concurrent IOs.
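For reference, a typical invocation from my runs looks like this (the
pool name and the numbers are just examples):

rados bench -p testpool 60 write -b 65536 -t 32 --no-cleanup
rados bench -p testpool 60 seq -t 32

Here -b sets the object size for the write test, -t the number of
concurrent operations, and --no-cleanup keeps the objects around so the
seq read test has something to read back.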
Looking forward to your help.
Bobby
I'm happy to announce another release of the go-ceph API
bindings. This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.8.0
Changes in the release are detailed in the link above.
The bindings aim to play a similar role to the "pybind" python bindings in the
ceph tree but for the Go language. These API bindings require the use of cgo.
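If you want to try the bindings out, the usual way to pull this release
into a project with Go modules is:

go get github.com/ceph/go-ceph@v0.8.0

Since cgo is used, the librados/librbd development headers need to be
installed; the exact package names depend on your distribution.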
There are already a few consumers of this library in the wild, including the
ceph-csi project.
Specific questions, comments, bugs etc are best directed at our github issues
tracker.
--
John Mulligan
phlogistonjohn(a)asynchrono.us
jmulligan(a)redhat.com
A gperftools update is now available for EPEL8 in order to fix an issue with
ceph reported on IBM architectures:
https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2021-dd6932436d
Please test and leave feedback in Bodhi if you can.
HTH,
--
Yaakov Selkowitz
Senior Software Engineer - Platform Enablement
Red Hat, Inc.
Adding dev to comment.
With 15.2.8, when applying the OSD service spec, db_devices is gone.
Here is the service spec file.
==========================================
service_type: osd
service_id: osd-spec
placement:
  hosts:
  - ceph-osd-1
spec:
  objectstore: bluestore
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
==========================================
Here is the logging from the mon. The message containing "Tony" was added
by me in the mgr to confirm. The audit from the mon shows db_devices is
gone.
Is there anything in the mon that filters it out based on host info?
How can I trace it?
==========================================
audit 2021-02-07T00:45:38.106171+0000 mgr.ceph-control-1.nxjnzz (mgr.24142551) 4020 : audit [DBG] from='client.24184218 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "target": ["mon-mgr", ""]}]: dispatch
cephadm 2021-02-07T00:45:38.108546+0000 mgr.ceph-control-1.nxjnzz (mgr.24142551) 4021 : cephadm [INF] Marking host: ceph-osd-1 for OSDSpec preview refresh.
cephadm 2021-02-07T00:45:38.108798+0000 mgr.ceph-control-1.nxjnzz (mgr.24142551) 4022 : cephadm [INF] Saving service osd.osd-spec spec with placement ceph-osd-1
cephadm 2021-02-07T00:45:38.108893+0000 mgr.ceph-control-1.nxjnzz (mgr.24142551) 4023 : cephadm [INF] Tony: spec: <bound method ServiceSpec.to_json of DriveGroupSpec(name=osd-spec->placement=PlacementSpec(hosts=[HostPlacementSpec(hostname='ceph-osd-1', network='', name='')]), service_id='osd-spec', service_type='osd', data_devices=DeviceSelection(rotational=1, all=False), db_devices=DeviceSelection(rotational=0, all=False), osd_id_claims={}, unmanaged=False, filter_logic='AND', preview_only=False)>
audit 2021-02-07T00:45:38.109782+0000 mon.ceph-control-3 (mon.2) 25 : audit [INF] from='mgr.24142551 10.6.50.30:0/2838166251' entity='mgr.ceph-control-1.nxjnzz' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\": \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\": [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\": \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}}}"}]: dispatch
audit 2021-02-07T00:45:38.110133+0000 mon.ceph-control-1 (mon.0) 107 : audit [INF] from='mgr.24142551 ' entity='mgr.ceph-control-1.nxjnzz' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\": \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\": [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\": \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}}}"}]: dispatch
audit 2021-02-07T00:45:38.152756+0000 mon.ceph-control-1 (mon.0) 108 : audit [INF] from='mgr.24142551 ' entity='mgr.ceph-control-1.nxjnzz' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\": \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\": [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\": \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}}}"}]': finished
==========================================
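For reference, the stored spec can be read back from the config-key store
directly (key name taken from the audit entries above):
ceph config-key get mgr/cephadm/spec.osd.osd-spec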
Thanks!
Tony
> -----Original Message-----
> From: Jens Hyllegaard (Soft Design A/S) <jens.hyllegaard(a)softdesign.dk>
> Sent: Thursday, February 4, 2021 6:31 AM
> To: ceph-users(a)ceph.io
> Subject: [ceph-users] Re: db_devices doesn't show up in exported osd
> service spec
>
> Hi.
>
> I have the same situation. Running 15.2.8, I created a specification that
> looked just like it, with rotational in the data_devices and
> non-rotational in the db_devices.
>
> The first use applied fine. Afterwards it only uses the HDD, and not the
> SSD.
> Also, is there a way to remove an unused OSD service?
> I managed to create osd.all-available-devices when I tried to stop the
> autocreation of OSDs, using ceph orch apply osd --all-available-devices
> --unmanaged=true
>
> I created the original OSD using the web interface.
>
> Regards
>
> Jens
> -----Original Message-----
> From: Eugen Block <eblock(a)nde.ag>
> Sent: 3 February 2021 11:40
> To: Tony Liu <tonyliu0592(a)hotmail.com>
> Cc: ceph-users(a)ceph.io
> Subject: [ceph-users] Re: db_devices doesn't show up in exported osd
> service spec
>
> How do you manage the db sizes of your SSDs? Is that managed
> automatically by ceph-volume? You could try adding another config option
> and see what it does, maybe block_db_size?
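> Something along these lines in the spec, for example (4G is just a
> placeholder, not a recommendation):
>
> spec:
>   block_db_size: 4G
>   db_devices:
>     rotational: 0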
>
>
> Quoting Tony Liu <tonyliu0592(a)hotmail.com>:
>
> > All mon, mgr, crash and osd daemons are upgraded to 15.2.8. It actually
> > fixed another issue (no devices listed after adding a host).
> > But this issue remains.
> > ```
> > # cat osd-spec.yaml
> > service_type: osd
> > service_id: osd-spec
> > placement:
> >   host_pattern: ceph-osd-[1-3]
> > data_devices:
> >   rotational: 1
> > db_devices:
> >   rotational: 0
> >
> > # ceph orch apply osd -i osd-spec.yaml
> > Scheduled osd.osd-spec update...
> >
> > # ceph orch ls --service_name osd.osd-spec --export
> > service_type: osd
> > service_id: osd-spec
> > service_name: osd.osd-spec
> > placement:
> >   host_pattern: ceph-osd-[1-3]
> > spec:
> >   data_devices:
> >     rotational: 1
> >   filter_logic: AND
> >   objectstore: bluestore
> > ```
> > db_devices still doesn't show up.
> > Still scratching my head...
> >
> >
> > Thanks!
> > Tony
> >> -----Original Message-----
> >> From: Eugen Block <eblock(a)nde.ag>
> >> Sent: Tuesday, February 2, 2021 2:20 AM
> >> To: ceph-users(a)ceph.io
> >> Subject: [ceph-users] Re: db_devices doesn't show up in exported osd
> >> service spec
> >>
> >> Hi,
> >>
> >> I would recommend updating (again); here's my output from a 15.2.8
> >> test cluster:
> >>
> >>
> >> host1:~ # ceph orch ls --service_name osd.default --export
> >> service_type: osd
> >> service_id: default
> >> service_name: osd.default
> >> placement:
> >>   hosts:
> >>   - host4
> >>   - host3
> >>   - host1
> >>   - host2
> >> spec:
> >>   block_db_size: 4G
> >>   data_devices:
> >>     rotational: 1
> >>     size: '20G:'
> >>   db_devices:
> >>     size: '10G:'
> >>   filter_logic: AND
> >>   objectstore: bluestore
> >>
> >>
> >> Regards,
> >> Eugen
> >>
> >>
> >> Quoting Tony Liu <tonyliu0592(a)hotmail.com>:
> >>
> >> > Hi,
> >> >
> >> > When building the cluster with Octopus 15.2.5 initially, here is the
> >> > OSD service spec file that was applied.
> >> > ```
> >> > service_type: osd
> >> > service_id: osd-spec
> >> > placement:
> >> >   host_pattern: ceph-osd-[1-3]
> >> > data_devices:
> >> >   rotational: 1
> >> > db_devices:
> >> >   rotational: 0
> >> > ```
> >> > After applying it, all HDDs were added and the DB of each HDD was
> >> > created on the SSD.
> >> >
> >> > Here is the export of OSD service spec.
> >> > ```
> >> > # ceph orch ls --service_name osd.osd-spec --export
> >> > service_type: osd
> >> > service_id: osd-spec
> >> > service_name: osd.osd-spec
> >> > placement:
> >> >   host_pattern: ceph-osd-[1-3]
> >> > spec:
> >> >   data_devices:
> >> >     rotational: 1
> >> >   filter_logic: AND
> >> >   objectstore: bluestore
> >> > ```
> >> > Why doesn't db_devices show up there?
> >> >
> >> > When I replaced a disk recently, after the new disk was installed
> >> > and zapped, the OSD was automatically re-created, but the DB was
> >> > created on the HDD, not the SSD. I assume this is because of that
> >> > missing db_devices?
> >> >
> >> > I tried updating the service spec: same result, db_devices doesn't
> >> > show up when exporting it.
> >> >
> >> > Is this some known issue or something I am missing?
> >> >
> >> >
> >> > Thanks!
> >> > Tony
Discussions have begun on enabling RBD volumes to be accessed as NVMe devices from hosts via a SmartNIC and NVMe over Fabrics (or just NVMe-oF from hosts without SmartNICs).
This will accommodate a variety of NVMe-oF gateway deployment scenarios, optionally including ADNN (an NVMe-oF extension for distributed storage targets). See the October 2020 CDM (https://youtu.be/2NFqigol6Ss?t=5473, at 1:31:00) for an overview of ADNN.
There is a weekly meeting (Tuesdays, 9:00 AM Pacific) on this topic open to all Ceph developers. See https://pad.ceph.com/p/rbd_nvmeof.
--- Scott
I built a new cluster from scratch and everything works fine there.
Could anyone help me find out what is stuck here?
Another issue, possibly with the same cause: devices don't show up
after adding a host.
Any details about the workflow would be helpful too, like how the
mon gets the devices when a host is added: is it pushed by something
(the mgr?) or pulled by the mon?
Thanks!
Tony
> -----Original Message-----
> From: Tony Liu <tonyliu0592(a)hotmail.com>
> Sent: Sunday, February 7, 2021 5:32 PM
> To: ceph-users <ceph-users(a)ceph.io>
> Subject: [ceph-users] Re: Device is not available after zap
>
> I checked pvscan, vgscan, lvscan and "ceph-volume lvm list" on the OSD
> node; the zapped device doesn't show up anywhere.
> Am I missing anything?
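> For reference, these are the exact checks, run on the OSD node (none of
> them listed the zapped device):
>
> pvscan
> vgscan
> lvscan
> ceph-volume lvm list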
>
> Thanks!
> Tony
> ________________________________________
> From: Tony Liu <tonyliu0592(a)hotmail.com>
> Sent: February 7, 2021 05:27 PM
> To: ceph-users
> Subject: [ceph-users] Device is not available after zap
>
> Hi,
>
> With v15.2.8, after zapping a device on an OSD node, it's still not
> available. The reason given is "locked, LVM detected". If I reboot the
> whole OSD node, then the device becomes available. There must be
> something not being cleaned up. Any clues?
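> For reference, the zap was done through the orchestrator, along these
> lines (host and device path are examples):
>
> ceph orch device zap ceph-osd-1 /dev/sdb --force
>
> My guess is a leftover device-mapper entry that only the reboot clears;
> "dmsetup ls" on the OSD node should show whether something is still
> holding the device.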
>
> Thanks!
> Tony
Hey-
Docker semi-recently started throttling pulls from Docker Hub. I
upgraded our account to a paid plan thinking that would eliminate or
raise the throttles, but I was wrong--it's the users doing the pulling
who need to be authenticated and/or pay to do lots of pulls. So, I
downgraded back to the free plan.
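If you're hitting the pull limits yourself, authenticating before you
pull raises them (any Docker Hub account should do):

docker login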
Coincidentally, last week, they scaled the free plan way back to only 3
team members, so I had to remove several people. If you need access, just
let me (or dsavineau or leseb) know!
sage
Hello folks, this past summer Shraddha Agrawal implemented a new way for
teuthology to run tests - a single process, teuthology-dispatcher,
locking and then running jobs, rather than a bunch of workers competing
for locks [0].
Since there's a single dispatcher for each queue, jobs are run in strict
priority order. This also enables a couple of improvements to the test
experience:
1) jobs may require more nodes - since only one job is locking at a
time, they cannot be starved of available nodes
2) dead jobs will have full logs - jobs that hit the max_job_time (12
hours in sepia) will have full ceph logs and coredumps collected as
usual - this should help quite a bit with stabilizing pacific
For more details, check out the PR [1].
This is now running all the queues in the sepia lab - let us know if
you run into any bugs!
And thanks to Shraddha for her hard work on this!
Josh
[0] https://ceph.io/gsoc-2020/#teuthology-scheduling%20Improvements
[1] https://github.com/ceph/teuthology/pull/1546
Hi
I'm new to ceph and would like to know if ceph can be integrated to use a cloud storage repository, such as Azure Blob Storage or S3, as a backend; in particular, as a gateway that proxies requests to the underlying storage providers to store and retrieve objects. This would let client applications use an S3-compatible API to communicate natively with both on-prem and cloud storage environments.
Thanks.