Hi,
> Yes, if I set "show_image_direct_url" to false, creation of volumes
> from images works fine.
> But creation takes much longer, because the data moves out of and
> back into the Ceph cluster instead of using the snapshot and
> copy-on-write approach.
> All documentation recommends setting "show_image_direct_url" to true
> with Ceph storage, even though it exposes image locations.
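
(For reference, the option in question lives in glance-api.conf; a
minimal sketch, using the section name from the glance docs:

    [DEFAULT]
    show_image_direct_url = true

With it set to true, cinder can see the RBD location of the image and
clone it directly instead of downloading and re-uploading the data.)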
Hm, that's not good. I think this should be raised again with the
OpenStack community; it sounds like it's still not properly resolved.
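
By the way, one quick way to check whether a volume was actually
COW-cloned (a sketch; the pool name "volumes" is the usual default and
may differ in your deployment, the UUID is the one from your log):

    rbd -p volumes info volume-56fbb645-2c34-477d-9a59-beec78f4fd3f

A COW clone shows a "parent:" line pointing at the glance image
snapshot; a full copy has no parent.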
Zitat von "Tecnologia Charne.Net" <tecno(a)charne.net>:
> Thanks, Eugen, for your quick answer!
>
> Yes, if I set "show_image_direct_url" to false, creation of volumes
> from images works fine.
> But creation takes much longer, because the data moves out of and
> back into the Ceph cluster instead of using the snapshot and
> copy-on-write approach.
> All documentation recommends setting "show_image_direct_url" to true
> with Ceph storage, even though it exposes image locations.
>
> Thanks again!
>
>
> On 28/4/21 at 03:41, Eugen Block wrote:
>> Hi,
>>
>> the glance option "show_image_direct_url" has been marked as
>> deprecated for quite some time because it's a security issue, but
>> without it the interaction between glance and ceph didn't work very
>> well; I can't quite remember what the side effects were. It seems
>> that they have now actually tried to get rid of it, and it seems to
>> work for you if you set it to false, right? Do you see any other
>> side effects when you set it to false?
>>
>> Regards,
>> Eugen
>>
>>
>> Zitat von "Tecnología CHARNE.NET" <tecno(a)charne.net>:
>>
>>> Hello!
>>>
>>> I'm working with Openstack Wallaby (1 controller, 2 compute nodes)
>>> connected to Ceph Pacific cluster in a devel environment.
>>>
>>> With Openstack Victoria and Ceph Pacific (before last Friday's
>>> update) everything was running like a charm.
>>>
>>> Then, I upgraded Openstack to Wallaby and Ceph to version 16.2.1.
>>> (Because of auth_allow_insecure_global_id_reclaim I had to upgrade
>>> many clients... but that's another story...)
>>>
>>> After upgrade, when I try to create a volume from image,
>>>
>>> openstack volume create --image \
>>>     f1df058d-be99-4401-82d9-4af9410744bc debian10_volume1 --size 5
>>>
>>> with "show_image_direct_url = True", I get "No valid backend" in
>>> /var/log/cinder/cinder-scheduler.log
>>>
>>> 2021-04-26 20:35:24.957 41348 ERROR
>>> cinder.scheduler.flows.create_volume
>>> [req-651937e5-148f-409c-8296-33f200892e48
>>> c048e887df994f9cb978554008556546 f02ae99c34cf44fd8ab3b1fd1b3be964
>>> - - -] Failed to run task
>>> cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create:
>>> No valid backend was found. Exceeded max scheduling attempts 3 for
>>> resource 56fbb645-2c34-477d-9a59-beec78f4fd3f:
>>> cinder.exception.NoValidBackend: No valid backend was found.
>>> Exceeded max scheduling attempts 3 for resource
>>> 56fbb645-2c34-477d-9a59-beec78f4fd3f
>>>
>>> and
>>>
>>> 2021-04-26 20:35:24.968 41347 ERROR oslo_messaging.rpc.server
>>> [req-651937e5-148f-409c-8296-33f200892e48
>>> c048e887df994f9cb978554008556546 f02ae99c34cf44fd8ab3b1fd1b3be964
>>> - - -] Exception during message handling:
>>> rbd.InvalidArgument: [errno 22] RBD invalid argument (error
>>> creating clone)
>>>
>>> in /var/log/cinder/cinder-volume.log
>>>
>>>
>>> If I set "show_image_direct_url = False", volume creation from
>>> image works fine.
>>>
>>>
>>> I have spent the last four days googling and reading lots of docs,
>>> old and new ones, with no luck...
>>>
>>> Does anybody have a clue, (please)?
>>>
>>> Thanks in advance!
>>>
>>>
>>> Javier.-
>>> _______________________________________________
>>> ceph-users mailing list -- ceph-users(a)ceph.io
>>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>>
>>
>