Thank you both for your responses. So this leads me to my next question:

ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>

What is <root> and <failure-domain> in this case?
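
From the docs, my guess (and it is just a guess) is that <root> is the CRUSH bucket the rule starts descending from, usually "default", and <failure-domain> is the bucket type that replicas are spread across. So a typical invocation might look like:

ceph osd crush rule create-replicated replicated-hdd default host hdd

where "replicated-hdd" is just a placeholder name I made up, "default" is the root bucket, "host" is the failure domain, and "hdd" restricts the rule to that device class. Is that right?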

It also looks like this is responsible for "rack awareness"-type attributes, which is something I'd like to utilize:

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 zone
type 10 region
type 11 root
This is something I will eventually take advantage of as well.
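
For example, if I understand this correctly, I could eventually group hosts into racks and spread replicas across them with something like this (the bucket and host names here are just placeholders):

ceph osd crush add-bucket rack1 rack          # create a rack bucket
ceph osd crush move host01 rack=rack1         # move a host under that rack
ceph osd crush rule create-replicated replicated-racks default rack

i.e. create rack buckets, move hosts under them, then create a rule whose failure domain is the rack.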

Thank you!
-jeremy


On May 28, 2021, at 12:03 AM, Janne Johansson <icepic.dz@gmail.com> wrote:

Create a crush rule that only chooses non-ssd drives, then
ceph osd pool set <perf-pool-name> crush_rule YourNewRuleName
and its data will move over to the non-SSD OSDs.
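
As a minimal sketch (assuming replicas spread per host, and assuming the pool in question is cephadm's device_health_metrics pool; substitute your actual pool name):

ceph osd crush rule create-replicated only-hdd default host hdd
ceph osd pool set device_health_metrics crush_rule only-hdd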

On Fri 28 May 2021 at 02:18, Jeremy Hansen <jeremy@skidrow.la> wrote:


I’m very new to Ceph, so if this question makes no sense, I apologize. I’m continuing to study, but I thought an answer to this question would help me understand Ceph a bit more.

Using cephadm, I set up a cluster. Cephadm automatically creates a pool for Ceph metrics. It looks like one of my SSD OSDs was allocated to that pool’s PG. I’d like to understand how to remap this PG so it’s not using the SSD OSDs.

ceph pg map 1.0
osdmap e205 pg 1.0 (1.0) -> up [28,33,10] acting [28,33,10]

OSD 28 is the SSD.

Is this possible?  Does this make any sense?  I’d like to reserve the SSDs for their own pool.
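
For what it's worth, this is how I identified which OSDs are the SSDs:

ceph osd crush class ls          # list device classes, e.g. hdd and ssd
ceph osd crush class ls-osd ssd  # list the OSD ids in the ssd class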

Thank you!
-jeremy



--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io