Thank you everyone for the explanations.
Still on this subject: I created a host and attached a disk. For a
second host, to use the shared iSCSI storage, do I just need to add the
same disk to it?
I tried this:
disk add pool1/vmware_iscsi1
'pool1/vmware_iscsi1' mapped to 1 other client(s)
Is that a problem, or is it correct?
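The step in question, sketched as a gwcli session (the target and initiator IQNs below are placeholders, not from this thread; the path layout follows gwcli's usual /iscsi-targets/.../hosts tree):

```shell
# gwcli session sketch -- target/initiator IQNs are placeholders.
# Mapping the same RBD image to a second initiator is how a shared
# VMFS datastore is exposed to more than one ESXi host, so the
# "mapped to N other client(s)" message is expected here.
/> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts
/hosts> cd iqn.1998-01.com.vmware:esxi2
/hosts/iqn...esxi2> disk add pool1/vmware_iscsi1
'pool1/vmware_iscsi1' mapped to 1 other client(s)
```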
On Fri, Sep 20, 2019 at 8:13 PM, Mike Christie <mchristi(a)redhat.com
On 09/20/2019 01:52 PM, Gesiel Galvão Bernardes wrote:
I'm testing Ceph with VMware, using the ceph-iscsi gateway. I am
reading the documentation* and have doubts about some points:
- If I understood correctly, in general terms, each VMFS datastore in
VMware will correspond to one RBD image (and consequently a single RBD
image may hold many VMware disks). Is that correct?
- The documentation says: "gwcli requires a pool with the name rbd, so
it can store metadata like the iSCSI configuration". In part 4 of
"Configuration", it says: "Add a RBD image with the name disk_1 in
rbd". In this part, is the use of the "rbd" pool just an example, so I
could use any pool to store images, or must the pool be "rbd"?
Summarizing: does gwcli require the "rbd" pool only for metadata, so I
could use any pool for images, or must I use the "rbd" pool for both
image storage and metadata?
- How much memory does ceph-iscsi use? What is a good amount of RAM?
The major memory uses are:
1. In RHEL 7.5 kernels and older we allocate max_data_area_mb of kernel
memory per device. The default value for that is 8. You can use gwcli to
configure it. It is allocated when the device is created. In newer
kernels, there is a pool of memory, and each device can use up to
max_data_area_mb worth of it. The per-device default is the same, and
you can change it with gwcli. The total pool limit is 2 GB. There is a
sysfs file that can be used to change it.
2. Each device uses about 20 MB of memory in userspace. This is not
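The figures above can be combined into a rough sizing sketch. This is my own illustrative function, not part of any ceph-iscsi tool; the constants come from this thread (max_data_area_mb defaults to 8 MB of kernel memory per device, roughly 20 MB of userspace memory per device, and a 2 GB shared kernel pool cap on newer kernels):

```python
# Rough per-gateway memory estimate for ceph-iscsi, based on the
# figures discussed in this thread. Name and shape are illustrative.

def gateway_memory_mb(num_devices: int,
                      max_data_area_mb: int = 8,    # per device, gwcli-tunable
                      userspace_mb: int = 20,       # approx. per-device userspace use
                      kernel_pool_cap_mb: int = 2048) -> int:
    """Return an approximate total memory footprint in MB."""
    # Older kernels: max_data_area_mb is allocated per device at creation.
    # Newer kernels: devices draw from a shared pool capped at ~2 GB,
    # so the kernel-side total is bounded by that cap.
    kernel_mb = min(num_devices * max_data_area_mb, kernel_pool_cap_mb)
    user_mb = num_devices * userspace_mb
    return kernel_mb + user_mb

# Example: 10 devices -> 10*8 MB kernel + 10*20 MB userspace = 280 MB
print(gateway_memory_mb(10))
```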