It's displaying sdb (which I assume you want used as a DB device) as
unavailable. What does "pvs" output look like on that "ceph-osd-1"
host?
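(For reference, a minimal sketch of how to check the LVM state on the host;
the VG/LV names are whatever ceph-volume created there, not known from this
thread:)
```
# on ceph-osd-1 (inside "cephadm shell" if the tooling is containerized):
pvs                # physical volumes, with free/used space per PV
vgs                # volume groups; the "Insufficient space (<5GB) on vgs"
                   # rejection is based on these totals
lvs -o +devices    # logical volumes and the devices backing them
```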
Perhaps it is full. I see the other email you sent regarding replacement; I
suspect the pre-existing LV from your previous OSD is not being re-used. You
may need to delete it, after which the service specification should re-create
it along with the OSD. If I remember correctly, when I had to replace a
failed OSD I stopped the automatic application of the service spec (ceph orch
rm osd.servicespec), removed the OSD, nuked the LV on the DB device in
question, put in the new drive, then re-enabled the service spec (ceph orch
apply osd -i), and the OSD + DB/WAL were created appropriately. I don't
remember the exact sequence, and it may depend on the Ceph version. I'm also
unsure whether "orch osd rm <svc_id(s)> --replace [--force]" preserves the
db/wal mapping; that might be worth looking at in the future.
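For what it's worth, a rough sketch of that sequence from memory; the OSD id
and VG/LV names are placeholders, and it's worth verifying against the
cephadm docs for your release before running any of it:
```
ceph orch rm osd.osd-spec              # stop the spec from being re-applied
ceph orch osd rm <osd_id>              # drain and remove the failed OSD
# on the OSD host: remove the stale DB LV so a new one can be created
lvremove <vg_name>/<db_lv_name>
# swap in the new drive, then re-enable the spec:
ceph orch apply osd -i osd-spec.yaml   # OSD + DB/WAL get re-created
```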
On Wed, Feb 10, 2021 at 2:22 PM Tony Liu <tonyliu0592@hotmail.com> wrote:
Hi David,
The requested info is below.
# ceph orch device ls ceph-osd-1
HOST        PATH      TYPE  SIZE   DEVICE_ID                           MODEL            VENDOR   ROTATIONAL  AVAIL  REJECT REASONS
ceph-osd-1  /dev/sdd  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VL2G       DL2400MM0159     SEAGATE  1           True
ceph-osd-1  /dev/sda  hdd   1117G  SEAGATE_ST1200MM0099_WFK4NNDY       ST1200MM0099     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sdb  ssd   447G   ATA_MZ7KH480HAHQ0D3_S5CNNA0N305738  MZ7KH480HAHQ0D3  ATA      0           False  LVM detected, locked
ceph-osd-1  /dev/sdc  hdd   2235G  SEAGATE_DL2400MM0159_WBM2WNSE       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sde  hdd   2235G  SEAGATE_DL2400MM0159_WBM2WP2S       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sdf  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VK99       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sdg  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VJBT       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sdh  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VMFK       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
# cat osd-spec.yaml
service_type: osd
service_id: osd-spec
placement:
  hosts:
    - ceph-osd-1
spec:
  objectstore: bluestore
  #block_db_size: 32212254720
  block_db_size: 64424509440
  data_devices:
    #rotational: 1
    paths:
      - /dev/sdd
  db_devices:
    #rotational: 0
    size: ":1T"
    #unmanaged: true
# ceph orch apply osd -i osd-spec.yaml --dry-run
+---------+----------+------------+----------+----+-----+
|SERVICE |NAME |HOST |DATA |DB |WAL |
+---------+----------+------------+----------+----+-----+
|osd |osd-spec |ceph-osd-1 |/dev/sdd |- |- |
+---------+----------+------------+----------+----+-----+
Thanks!
Tony
________________________________________
From: David Orman <ormandj@corenode.com>
Sent: February 10, 2021 11:02 AM
To: Tony Liu
Cc: Jens Hyllegaard (Soft Design A/S); ceph-users@ceph.io
Subject: Re: [ceph-users] Re: db_devices doesn't show up in exported osd
service spec
What's "ceph orch device ls" look like, and please show us your
specification that you've used.
Jens was correct, his example is how we worked-around this problem,
pending patch/new release.
On Wed, Feb 10, 2021 at 12:05 AM Tony Liu <tonyliu0592@hotmail.com> wrote:
With db_devices.size, db_devices shows up from "orch ls --export",
but no DB device/LV is created for the OSD. Any clues?
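(One way to confirm whether a DB LV was actually attached, assuming a
containerized host; ceph-volume prints a [db] section per OSD when one
exists:)
```
# on the OSD host:
cephadm shell -- ceph-volume lvm list
# each OSD should list both a [block] and a [db] device;
# a missing [db] section means no DB LV was created for it
```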
Thanks!
Tony
________________________________________
From: Jens Hyllegaard (Soft Design A/S) <jens.hyllegaard@softdesign.dk>
Sent: February 9, 2021 01:16 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: db_devices doesn't show up in exported osd
service spec
Hi Tony.
I assume they used a size constraint instead of rotational. So if all your
SSDs are 1 TB or less, and all HDDs are larger than that, you could use:
spec:
  objectstore: bluestore
  data_devices:
    rotational: true
  filter_logic: AND
  db_devices:
    size: ':1TB'
It was usable in my test environment and seems to work.
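(A spec like this can be sanity-checked before applying it, using the same
dry run shown earlier in the thread:)
```
ceph orch apply osd -i osd-spec.yaml --dry-run
# the preview table should show a DB entry per data device
```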
Regards
Jens
-----Original Message-----
From: Tony Liu <tonyliu0592@hotmail.com>
Sent: 9 February 2021 02:09
To: David Orman <ormandj@corenode.com>
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: db_devices doesn't show up in exported osd
service spec
Hi David,
Could you show me an example of an OSD service spec YAML that works around
it by specifying size?
Thanks!
Tony
________________________________________
From: David Orman <ormandj@corenode.com>
Sent: February 8, 2021 04:06 PM
To: Tony Liu
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: db_devices doesn't show up in exported osd
service spec
Adding ceph-users:
We ran into this same issue, and we used a size specification as a
workaround for now.
Bug and patch:
https://tracker.ceph.com/issues/49014
https://github.com/ceph/ceph/pull/39083
Backport to Octopus:
https://github.com/ceph/ceph/pull/39171
On Sat, Feb 6, 2021 at 7:05 PM Tony Liu <tonyliu0592@hotmail.com> wrote:
Add dev to comment.
With 15.2.8, when applying the OSD service spec, db_devices is gone.
Here is the service spec file.
==========================================
service_type: osd
service_id: osd-spec
placement:
  hosts:
    - ceph-osd-1
spec:
  objectstore: bluestore
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
==========================================
Here is the logging from the mon. The message with "Tony" was added by me
in the mgr to confirm. The audit from the mon shows db_devices is gone.
Is there anything in the mon that filters it out based on host info?
How can I trace it?
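(One way to see what actually got stored, using the config-key name that
appears in the audit log below:)
```
ceph config-key get mgr/cephadm/spec.osd.osd-spec | python3 -m json.tool
# if db_devices is already missing here, the field was dropped before the
# spec was persisted, i.e. in the mgr rather than in the mon
```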
==========================================
audit 2021-02-07T00:45:38.106171+0000 mgr.ceph-control-1.nxjnzz
(mgr.24142551) 4020 : audit [DBG] from='client.24184218 -'
entity='client.admin' cmd=[{"prefix": "orch apply osd", "target":
["mon-mgr", ""]}]: dispatch

cephadm 2021-02-07T00:45:38.108546+0000 mgr.ceph-control-1.nxjnzz
(mgr.24142551) 4021 : cephadm [INF] Marking host: ceph-osd-1 for OSDSpec
preview refresh.

cephadm 2021-02-07T00:45:38.108798+0000 mgr.ceph-control-1.nxjnzz
(mgr.24142551) 4022 : cephadm [INF] Saving service osd.osd-spec spec with
placement ceph-osd-1

cephadm 2021-02-07T00:45:38.108893+0000 mgr.ceph-control-1.nxjnzz
(mgr.24142551) 4023 : cephadm [INF] Tony: spec: <bound method
ServiceSpec.to_json of DriveGroupSpec(name=osd-spec->placement=
PlacementSpec(hosts=[HostPlacementSpec(hostname='ceph-osd-1', network='',
name='')]), service_id='osd-spec', service_type='osd',
data_devices=DeviceSelection(rotational=1, all=False),
db_devices=DeviceSelection(rotational=0, all=False), osd_id_claims={},
unmanaged=False, filter_logic='AND', preview_only=False)>

audit 2021-02-07T00:45:38.109782+0000 mon.ceph-control-3 (mon.2) 25 :
audit [INF] from='mgr.24142551 10.6.50.30:0/2838166251'
entity='mgr.ceph-control-1.nxjnzz' cmd=[{"prefix":"config-key
set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\":
\"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\":
[\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\":
\"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\":
{\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\":
\"bluestore\"}}}"}]: dispatch

audit 2021-02-07T00:45:38.110133+0000 mon.ceph-control-1 (mon.0) 107 :
audit [INF] from='mgr.24142551 ' entity='mgr.ceph-control-1.nxjnzz'
cmd=[{"prefix":"config-key
set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\":
\"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\":
[\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\":
\"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\":
{\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\":
\"bluestore\"}}}"}]: dispatch

audit 2021-02-07T00:45:38.152756+0000 mon.ceph-control-1 (mon.0) 108 :
audit [INF] from='mgr.24142551 ' entity='mgr.ceph-control-1.nxjnzz'
cmd='[{"prefix":"config-key
set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\":
\"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\":
[\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\":
\"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\":
{\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\":
\"bluestore\"}}}"}]': finished
==========================================
Thanks!
Tony
-----Original Message-----
From: Jens Hyllegaard (Soft Design A/S) <jens.hyllegaard@softdesign.dk>
Sent: Thursday, February 4, 2021 6:31 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: db_devices doesn't show up in exported osd
service spec
Hi.
I have the same situation. Running 15.2.8, I created a specification that
looked just like it, with rotational in the data and non-rotational in
the db.
The first use applied fine. Afterwards it only uses the HDD, not the SSD.
Also, is there a way to remove an unused OSD service? I managed to create
osd.all-available-devices when I tried to stop the auto-creation of OSDs,
using: ceph orch apply osd --all-available-devices --unmanaged=true
I created the original OSD using the web interface.
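(Removing the leftover service spec itself, not the OSD daemons, should be
possible with the same command David mentions above, e.g.:)
```
ceph orch rm osd.all-available-devices   # removes the spec; existing OSDs stay
```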
Regards
Jens
-----Original Message-----
From: Eugen Block <eblock@nde.ag<mailto:eblock@nde.ag><mailto:
eblock@nde.ag<mailto:eblock@nde.ag>>>
Sent: 3. februar 2021 11:40
To: Tony Liu <tonyliu0592@hotmail.com<mailto:tonyliu0592@hotmail.com
<mailto:tonyliu0592@hotmail.com<mailto:tonyliu0592@hotmail.com>>>
Cc: ceph-users@ceph.io<mailto:ceph-users@ceph.io><mailto:
ceph-users@ceph.io<mailto:ceph-users@ceph.io>>
Subject: [ceph-users] Re: db_devices doesn't
show up in exported osd
service spec
How do you manage the db_sizes of your SSDs? Is that managed
automatically by ceph-volume? You could try adding another config option
and see what it does; maybe try block_db_size?
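(For example, a fixed per-OSD DB size in the spec might look like this;
the 4G value is only illustrative:)
```
spec:
  block_db_size: 4G
  db_devices:
    rotational: 0
```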
Quoting Tony Liu <tonyliu0592@hotmail.com>:
> All mon, mgr, crash and osd are upgraded to 15.2.8. It actually
> fixed another issue (no device listed after adding host).
> But this issue remains.
> ```
> # cat osd-spec.yaml
> service_type: osd
> service_id: osd-spec
> placement:
>   host_pattern: ceph-osd-[1-3]
> data_devices:
>   rotational: 1
> db_devices:
>   rotational: 0
>
> # ceph orch apply osd -i osd-spec.yaml Scheduled osd.osd-spec
> update...
>
> # ceph orch ls --service_name osd.osd-spec --export
> service_type: osd
> service_id: osd-spec
> service_name: osd.osd-spec
> placement:
>   host_pattern: ceph-osd-[1-3]
> spec:
>   data_devices:
>     rotational: 1
>   filter_logic: AND
>   objectstore: bluestore
> ```
> db_devices still doesn't show up.
> Keep scratching my head...
>
>
> Thanks!
> Tony
>> -----Original Message-----
>> From: Eugen Block <eblock@nde.ag>
>> Sent: Tuesday, February 2, 2021 2:20 AM
>> To: ceph-users@ceph.io
>> Subject: [ceph-users] Re: db_devices doesn't show up in exported
>> osd service spec
>>
>> Hi,
>>
>> I would recommend updating (again); here's my output from a 15.2.8
>> test cluster:
>>
>>
>> host1:~ # ceph orch ls --service_name osd.default --export
>> service_type: osd
>> service_id: default
>> service_name: osd.default
>> placement:
>>   hosts:
>>     - host4
>>     - host3
>>     - host1
>>     - host2
>> spec:
>>   block_db_size: 4G
>>   data_devices:
>>     rotational: 1
>>     size: '20G:'
>>   db_devices:
>>     size: '10G:'
>>   filter_logic: AND
>>   objectstore: bluestore
>>
>>
>> Regards,
>> Eugen
>>
>>
>> Quoting Tony Liu <tonyliu0592@hotmail.com>:
>>
>> > Hi,
>> >
>> > When building the cluster on Octopus 15.2.5 initially, here is the
>> > OSD service spec file that was applied.
>> > ```
>> > service_type: osd
>> > service_id: osd-spec
>> > placement:
>> >   host_pattern: ceph-osd-[1-3]
>> > data_devices:
>> >   rotational: 1
>> > db_devices:
>> >   rotational: 0
>> > ```
>> > After applying it, all HDDs were added and the DB for each HDD was
>> > created on the SSD.
>> >
>> > Here is the export of the OSD service spec.
>> > ```
>> > # ceph orch ls --service_name osd.osd-spec --export
>> > service_type: osd
>> > service_id: osd-spec
>> > service_name: osd.osd-spec
>> > placement:
>> >   host_pattern: ceph-osd-[1-3]
>> > spec:
>> >   data_devices:
>> >     rotational: 1
>> >   filter_logic: AND
>> >   objectstore: bluestore
>> > ```
>> > Why doesn't db_devices show up there?
>> >
>> > When I replaced a disk recently, once the new disk was installed
>> > and zapped, the OSD was automatically re-created, but the DB was
>> > created on the HDD, not the SSD. I assume this is because of that
>> > missing db_devices?
>> >
>> > I tried to update the service spec, with the same result: db_devices
>> > doesn't show up when I export it.
>> >
>> > Is this a known issue, or am I missing something?
>> >
>> >
>> > Thanks!
>> > Tony
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io