Hi,
from my observation, Ceph uses ~512 bytes per inode.
So what matters for sizing is not the TiB stored in your EC pool, but the number of inodes (files).
(Again, this is concluded from observation on Ceph 18.2.1, and the fact that it's
called "inodes"; I have not checked the code.)
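Under that (observed, unverified) ~512 bytes/inode assumption, a back-of-the-envelope estimate for the backtrace pool is just a multiplication; a minimal sketch:

```python
# Sketch only: BYTES_PER_INODE is an observed value from Ceph 18.2.1,
# not something verified against the Ceph source code.
BYTES_PER_INODE = 512

def estimated_backtrace_bytes(num_inodes: int) -> int:
    """Rough size of the inode backtrace ("default" data) pool."""
    return num_inodes * BYTES_PER_INODE

# e.g. ~112.23 million inodes, as in the cluster output below:
print(estimated_backtrace_bytes(112_230_000))  # ~57 GB
```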
Example from my cluster, which has "data_ec" as an EC 4+2 pool and "data"
being the "default" pool for inode backtrace information (from
https://docs.ceph.com/en/reef/cephfs/createfs/#creating-pools), and nothing else:
POOL      ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
.mgr       1    1  203 MiB       26  609 MiB  90.00      5 GiB
data       2   32      0 B  112.23M      0 B      0     61 TiB
data_ec    3  168  124 TiB  115.30M  186 TiB  50.53    121 TiB
metadata   4  128   63 GiB   32.87k  189 GiB  90.00      5 GiB
The odd thing here is that the 112 M inodes are reported as 0 B STORED.
This messes up PG autoscaling; I filed an issue about that here:
https://tracker.ceph.com/issues/65199
I would probably create a pool just for this inode backtrace information, simply so that
you can see it separately, and easily migrate it to different storage if you change your mind.
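A sketch of what that separation could look like at filesystem creation time (pool names, PG counts, and the EC profile name are placeholders, not a tested recipe); the first data pool passed to "fs new" becomes the default pool that holds the backtraces:

```shell
# Sketch only -- names, PG counts, and EC profile are placeholders.
ceph osd pool create cephfs_metadata 32
# Small replicated pool that will hold only the inode backtraces:
ceph osd pool create cephfs_backtrace 32
# EC pool for the actual file data (EC overwrites are required for CephFS):
ceph osd pool create cephfs_data_ec 128 erasure my_ec_profile
ceph osd pool set cephfs_data_ec allow_ec_overwrites true
# First data pool becomes the default (backtrace) pool:
ceph fs new myfs cephfs_metadata cephfs_backtrace
ceph fs add_data_pool myfs cephfs_data_ec
# Direct file data to the EC pool via a layout on the root directory:
setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/myfs
```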
I do not understand your "what would be a good size?" question: if you create multiple
pools that use SSDs, your SSD OSDs will be used automatically, and the remaining space
is shared across all of your SSD pools anyway -- you do not have to provision
"separate" SSDs to make another SSD pool.
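For example (a sketch; the rule and pool names are placeholders), a single CRUSH rule restricted to the "ssd" device class lets any number of pools share the same SSD capacity:

```shell
# Sketch only -- rule and pool names are placeholders.
# One replicated rule targeting the ssd device class:
ceph osd crush rule create-replicated ssd_only default host ssd
# Every pool using this rule lands on the SSD OSDs; free space is shared:
ceph osd pool create fast_pool_a 32 replicated ssd_only
ceph osd pool create fast_pool_b 32 replicated ssd_only
```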
See also my related question:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/VKVENC3VP3L…