Hello Gesiel,
Some iSCSI settings are stored in an object, and this object lives in
the rbd pool. Hence the rbd pool is required.
Your LUNs are mapped to {pool}/{rbdimage}. You should treat these as
you would treat pools and RBD images in general.
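As a rough sketch of that mapping in gwcli (the pool name, image name,
target IQN and size below are examples, not requirements; only the rbd
metadata pool is mandatory):

```shell
# Create an RBD image in a pool of your choosing and export it as a LUN.
gwcli
/> cd /disks
# Image lands in the pool you name here; "datastore1" is an example pool.
/disks> create pool=datastore1 image=vmfs_ds01 size=2T
# Then attach the disk to an initiator under the target, e.g.:
# /iscsi-targets/<target-iqn>/hosts/<initiator-iqn>> disk add datastore1/vmfs_ds01
```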
In smallish deployments I try to keep it simple: one pool per device
class, with LUNs as big as possible while still allowing one LUN at a
time to be put into maintenance mode in the datastore cluster within
vSphere, in case we need to re-format it as part of a VMFS upgrade.
Remember to set the path selection policy (PSP) and the iSCSI recovery
timeout properly.
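For reference, a sketch of those two settings on an ESXi host via
esxcli (the device identifier and adapter name are placeholders you
would substitute with your own; values are the commonly recommended
ones, so verify against the ceph-iscsi initiator documentation):

```shell
# Set the PSP to round-robin for the Ceph-backed LUN.
# Replace naa.<device-id> with the actual device identifier.
esxcli storage nmp device set --device naa.<device-id> --psp VMW_PSP_RR

# Lower the iSCSI recovery timeout (25s is the value the ceph-iscsi
# docs suggest). Replace vmhba64 with your software iSCSI adapter.
esxcli iscsi adapter param set --adapter vmhba64 --key RecoveryTimeout --value 25
```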
From the LUN level and up, you just treat the storage as any other
iSCSI storage connected to vSphere.
The iSCSI gateway consumes RAM per exported LUN... I can't remember
the default settings, but we are talking single-digit GB with tens of
LUNs exported, so it's fairly lightweight.
/Heðin
On frí, 2019-09-20 at 15:52 -0300, Gesiel Galvão Bernardes wrote:
Hi,
I'm testing Ceph with VMware, using the Ceph iSCSI gateway. I'm
reading the documentation* and have doubts about some points:
- If I understood correctly, in general terms, each VMFS datastore in
VMware will correspond to one RBD image (consequently, one RBD image
will possibly hold many VMware disks). Is that correct?
- The documentation says: "gwcli requires a pool with the name rbd,
so it can store metadata like the iSCSI configuration". But part 4 of
"Configuration" says: "Add a RBD image with the name disk_1 in the
pool rbd". In that step, is the use of the "rbd" pool just an example,
so I could use any pool to store the image, or must the pool be "rbd"?
In short: does gwcli require the "rbd" pool only for metadata, letting
me use any pool for images, or must I use the "rbd" pool for both
images and metadata?
- How much memory does ceph-iscsi use? What is a good amount of RAM?
Regards
Gesiel
*
https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io