Thank you, everyone, for the explanations.

Still on this subject: I created a host and attached a disk. For a second host to use the same shared iSCSI storage, do I just need to add the disk to the second client?
I tried this:
> disk add pool1/vmware_iscsi1
Warning: 'pool1/vmware_iscsi1' mapped to 1 other client(s)
ok

Is the warning a problem, or is this the correct way to do it?
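
For context, the full sequence for the second host was roughly the
following (the IQNs here are placeholders from my test setup, and the
gwcli paths may differ slightly between ceph-iscsi versions):

> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts
> create iqn.1994-05.com.redhat:esx-host2
> cd iqn.1994-05.com.redhat:esx-host2
> disk add pool1/vmware_iscsi1
Warning: 'pool1/vmware_iscsi1' mapped to 1 other client(s)
ok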

Regards
Gesiel


On Fri, Sep 20, 2019 at 20:13, Mike Christie <mchristi@redhat.com> wrote:
On 09/20/2019 01:52 PM, Gesiel Galvão Bernardes wrote:
> Hi,
> I'm testing Ceph with VMware using the ceph-iscsi gateway. I've been
> reading the documentation* and have doubts about some points:
>
> - If I understand correctly, in general terms, each VMFS datastore in
> VMware will map to one RBD image (consequently, within a single RBD
> image I may have many VMware virtual disks). Is that correct?
>
> - The documentation says: "gwcli requires a pool with the name rbd, so
> it can store metadata like the iSCSI configuration". Part 4 of
> "Configuration" says: "Add a RBD image with the name disk_1 in the pool
> rbd". Is the "rbd" pool there just an example, meaning I could use any
> pool to store images, or must the pool be "rbd"?
> In short: does gwcli require the "rbd" pool only for metadata, so I can
> use any pool for images, or must I use the "rbd" pool for both image
> storage and metadata?
>
> - How much memory does ceph-iscsi use? What is a good amount of RAM?
>

The major sources of memory use are:

1. In RHEL 7.5 kernels and older, we allocate max_data_area_mb of kernel
memory per device. The default value is 8 (MB), the memory is allocated
when the device is created, and you can use gwcli to configure it. In
newer kernels, there is a shared pool of memory, and each device can use
up to max_data_area_mb worth of it. The per-device default is the same,
and you can change it with gwcli. The total pool limit is 2 GB. There is
a sysfs file:

/sys/module/target_core_user/parameters/global_max_data_area_mb

that can be used to change it (see the example after this list).

2. Each device uses about 20 MB of memory in userspace. This is not
configurable. A rough sizing calculation is also sketched below.
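
For example, on newer kernels you can check and raise the global pool
limit like this (4096 here is only an illustration):

cat /sys/module/target_core_user/parameters/global_max_data_area_mb
echo 4096 > /sys/module/target_core_user/parameters/global_max_data_area_mb

To change the per-device value, you can use gwcli's reconfigure command
on the disk. The exact syntax can vary between ceph-iscsi versions, so
check gwcli's help output; something along these lines:

> cd /disks/pool1/vmware_iscsi1
> reconfigure max_data_area_mb 32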
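
Putting those numbers together, a gateway exporting, say, 10 RBD-backed
LUNs with the defaults would need roughly (the LUN count is only an
example):

  kernel data areas:  10 x  8 MB =  80 MB (bounded by the 2 GB global pool)
  userspace:          10 x 20 MB = 200 MB
  total:              ~280 MB for the iSCSI layer, on top of the OS and
                      the Ceph client memory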