[ovirt-users] problem importing ova vm
Jiří Sléžka
jiri.slezka at slu.cz
Wed Feb 21 17:10:27 UTC 2018
On 02/21/2018 05:35 PM, Arik Hadas wrote:
>
>
> On Wed, Feb 21, 2018 at 6:03 PM, Jiří Sléžka <jiri.slezka at slu.cz> wrote:
>
> On 02/21/2018 03:43 PM, Jiří Sléžka wrote:
> > On 02/20/2018 11:09 PM, Arik Hadas wrote:
> >>
> >>
> >> On Tue, Feb 20, 2018 at 6:37 PM, Jiří Sléžka <jiri.slezka at slu.cz> wrote:
> >>
> >> On 02/20/2018 03:48 PM, Arik Hadas wrote:
> >> >
> >> >
> >> > On Tue, Feb 20, 2018 at 3:49 PM, Jiří Sléžka <jiri.slezka at slu.cz> wrote:
> >> >
> >> > Hi Arik,
> >> >
> >> > On 02/20/2018 01:22 PM, Arik Hadas wrote:
> >> > >
> >> > >
> >> > > On Tue, Feb 20, 2018 at 2:03 PM, Jiří Sléžka <jiri.slezka at slu.cz> wrote:
> >> > >
> >> > > Hi,
> >> > >
> >> > >
> >> > > Hi Jiří,
> >> > >
> >> > >
> >> > >
> >> > >     I would like to try to import some ova files into our oVirt
> >> > >     instance [1] [2] but I am facing problems.
> >> > >
> >> > >     I have downloaded all ova images onto one of the hosts
> >> > >     (ovirt01) into directory /ova
> >> > >
> >> > > ll /ova/
> >> > > total 6532872
> >> > >     -rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf
> >> > >     -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova
> >> > >     -rw-r--r--. 1 vdsm kvm  846736896 Feb 16 16:22 HAAS-hpjdwpd.ova
> >> > >     -rw-r--r--. 1 vdsm kvm  891043328 Feb 16 16:23 HAAS-hptelnetd.ova
> >> > >     -rw-r--r--. 1 vdsm kvm  908222464 Feb 16 16:23 HAAS-hpuchotcp.ova
> >> > >     -rw-r--r--. 1 vdsm kvm  880643072 Feb 16 16:24 HAAS-hpuchoudp.ova
> >> > >     -rw-r--r--. 1 vdsm kvm  890833920 Feb 16 16:24 HAAS-hpuchoweb.ova
> >> > >
> >> > >     Then I tried to import them - from host ovirt01 and directory /ova -
> >> > >     but the spinner spins infinitely and nothing happens.
> >> > >
> >> > >
> >> > > And does it work when you provide a path to the actual ova file,
> >> > > i.e., /ova/HAAS-hpdio.ova, rather than to the directory?
> >> >
> >> > this time it ends with a "Failed to load VM configuration from OVA
> >> > file: /ova/HAAS-hpdio.ova" error.
> >> >
> >> >
> >> > Note that the logic that is applied to a specified folder is "try
> >> > fetching an 'ova folder' out of the destination folder" rather than
> >> > "list all the ova files inside the specified folder". It seems that you
> >> > expected the former output since there are no disks in that folder, right?
> >>
> >> yes, it would be more user-friendly to list all ova files and then
> >> select which one to import (like listing all VMs in the VMware import)
> >>
> >> Maybe the description of the path field in the manager should be
> >> "Path to ova file" instead of "Path" :-)
> >>
> >>
> >> Sorry, I obviously meant 'latter' rather than 'former' before.
> >> Yeah, I agree that would be better, at least until listing the OVA files
> >> in the folder is implemented (that was the original plan, btw) - could
> >> you please file a bug?
> >
> > yes, sure
> >
> >
> >> > > I cannot see anything relevant in the vdsm log of host ovirt01.
> >> > >
> >> > > In the engine.log of our standalone ovirt manager there is just this
> >> > > relevant line
> >> > >
> >> > >     2018-02-20 12:35:04,289+01 INFO
> >> > >     [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default
> >> > >     task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible
> >> > >     command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin
> >> > >     [/usr/bin/ansible-playbook,
> >> > >     --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa,
> >> > >     --inventory=/tmp/ansible-inventory8237874608161160784,
> >> > >     --extra-vars=ovirt_query_ova_path=/ova,
> >> > >     /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile:
> >> > >     /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log]
> >> > >
> >> > > also there are two ansible processes which are still running (and
> >> > > make heavy load on the system - load 9+ and growing; it looks like it
> >> > > eats all the memory and the system starts swapping)
> >> > >
> >> > >     ovirt    32087  3.3  0.0 332252   5980 ?  Sl   12:35   0:41
> >> > >     /usr/bin/python2 /usr/bin/ansible-playbook
> >> > >     --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa
> >> > >     --inventory=/tmp/ansible-inventory8237874608161160784
> >> > >     --extra-vars=ovirt_query_ova_path=/ova
> >> > >     /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
> >> > >     ovirt    32099 57.5 78.9 15972880 11215312 ?  R    12:35  11:52
> >> > >     /usr/bin/python2 /usr/bin/ansible-playbook
> >> > >     --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa
> >> > >     --inventory=/tmp/ansible-inventory8237874608161160784
> >> > >     --extra-vars=ovirt_query_ova_path=/ova
> >> > >     /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
> >> > >
> >> > > playbook looks like
> >> > >
> >> > > - hosts: all
> >> > > remote_user: root
> >> > > gather_facts: no
> >> > >
> >> > > roles:
> >> > > - ovirt-ova-query
> >> > >
> >> > > and it looks like it only runs query_ova.py but on all hosts?
> >> > >
> >> > >
> >> > > No, the engine provides ansible the host to run on when it executes
> >> > > the playbook.
> >> > > It would only be executed on the selected host.
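
As a side note, ansible-playbook has a --list-hosts option that only prints
the hosts a run would target, without executing anything. So - assuming the
temporary inventory file from the engine log above still exists - one could
verify that only the selected host is matched:

  /usr/bin/ansible-playbook \
    --inventory=/tmp/ansible-inventory8237874608161160784 \
    --list-hosts \
    /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
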
> >> > >
> >> > >
> >> > >
> >> > > How does this work? ...or should it work?
> >> > >
> >> > >
> >> > > It should, especially the part of querying the OVA, which is
> >> > > supposed to be really quick.
> >> > > Can you please share the engine log and
> >> > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log ?
> >> >
> >> > engine log is here:
> >> >
> >> > https://pastebin.com/nWWM3UUq
> >> >
> >> >
> >> > Thanks.
> >> > Alright, so now the configuration is fetched but its processing fails.
> >> > We fixed many issues in this area recently, but it appears that
> >> > something is wrong with the actual size of the disk within the ovf file
> >> > that resides inside this ova file.
> >> > Can you please share the ovf file that resides inside /ova/HAAS-hpdio.ova?
> >>
> >> file HAAS-hpdio.ova
> >> HAAS-hpdio.ova: POSIX tar archive (GNU)
> >>
> >> [root at ovirt01 backup]# tar xvf HAAS-hpdio.ova
> >> HAAS-hpdio.ovf
> >> HAAS-hpdio-disk001.vmdk
> >>
> >> file HAAS-hpdio.ovf is here:
> >>
> >> https://pastebin.com/80qAU0wB
> >>
> >>
> >> Thanks again.
> >> So that seems to be a VM that was exported from VirtualBox, right?
> >> They don't do anything that violates the OVF specification, but they do
> >> some uncommon things that we don't anticipate:
> >
> > yes, it is most likely an ova from VirtualBox
> >
> >> First, they don't specify the actual size of the disk, and the current
> >> code in oVirt relies on that property.
> >> There is a workaround for this though: you can extract the OVA file, edit
> >> its OVF configuration - adding ovf:populatedSize="X" (and change
> >> ovf:capacity as I'll describe next) to the Disk element inside the
> >> DiskSection - and pack the OVA again (tar cvf <ova_file> <ovf_file>
> >> <disk_file>), where X is either:
> >> 1. the actual size of the vmdk file + some buffer (iirc, we used to take
> >> 15% of extra space for the conversion)
> >> 2. if you're using file storage, or you don't mind consuming more
> >> storage space on your block storage, simply set X to the virtual size of
> >> the disk (in bytes) as indicated by the ovf:capacity field, e.g.,
> >> ovf:populatedSize="21474836480" in the case of HAAS-hpdio.ova.
> >>
> >> Second, the virtual size (indicated by ovf:capacity) is specified in
> >> bytes. The specification says that the default unit of allocation shall
> >> be bytes, but practically every OVA file that I've ever seen specified it
> >> in GB, and the current code in oVirt kind of assumes that this is the
> >> case without checking the ovf:capacityAllocationUnits attribute that
> >> could indicate the real unit of allocation [1].
> >> Anyway, long story short, the virtual size of the disk should currently
> >> be specified in GB, e.g., ovf:populatedSize="20" in the case of
> >> HAAS-hpdio.ova.
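
For anyone following along, a minimal sketch of that extract/edit/repack
workaround - the scratch directory is just my own choice, and the attribute
values to put into the Disk element are the ones discussed above (and
corrected further below):

  mkdir /tmp/hpdio && cd /tmp/hpdio
  tar xvf /ova/HAAS-hpdio.ova     # extracts HAAS-hpdio.ovf and HAAS-hpdio-disk001.vmdk
  vi HAAS-hpdio.ovf               # edit the <Disk> element in the DiskSection
  # repack; the .ovf descriptor should be the first member of the archive
  tar cvf /ova/HAAS-hpdio_new.ova HAAS-hpdio.ovf HAAS-hpdio-disk001.vmdk
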
> >
> > wow, thanks for this excellent explanation. I have changed this in the
> > ovf file
> >
> > ...
> > <Disk ovf:capacity="20" ovf:diskId="vmdisk2" ovf:populatedSize="20" ...
> > ...
> >
> > then I was able to import this modified ova file (HAAS-hpdio_new.ova).
> > The interesting thing is that the VM was shown in the VM list for a while
> > (with state down with lock and status initializing). After a while this
> > VM disappeared :-o
> >
> > I am going to test it again and collect some logs...
>
> there are interesting logs in /var/log/vdsm/import/ on the host used for
> the import
>
> http://mirror.slu.cz/tmp/ovirt-import.tar.bz2
>
> the first of them describes the situation where I chose thick provisioning,
> the second the situation with thin provisioning
>
> the interesting part is, I believe:
>
> libguestfs: command: run: qemu-img
> libguestfs: command: run: \ create
> libguestfs: command: run: \ -f qcow2
> libguestfs: command: run: \ -o preallocation=off,compat=0.10
> libguestfs: command: run: \
> /rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec
> libguestfs: command: run: \ 21474836480
> Formatting
> '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec',
> fmt=qcow2 size=21474836480 compat=0.10 encryption=off cluster_size=65536
> preallocation=off lazy_refcounts=off refcount_bits=16
> libguestfs: trace: vdsm_disk_create: disk_create = 0
> qemu-img 'convert' '-p' '-n' '-f' 'qcow2' '-O' 'qcow2'
> '/var/tmp/v2vovl2dccbd.qcow2'
> '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec'
> qemu-img: error while writing sector 1000960: No space left on device
>
> virt-v2v: error: qemu-img command failed, see earlier errors
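
To compare the source disk's sizes with what got allocated on the storage
domain, qemu-img info on the extracted vmdk (path assumed from the earlier
tar extraction) reports both the virtual size and the on-disk size:

  qemu-img info HAAS-hpdio-disk001.vmdk
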
>
>
>
> Sorry again, I made a mistake in:
> "Anyway, long story short, the virtual size of the disk should currently
> be specified in GB, e.g., ovf:populatedSize="20" in the case of
> HAAS-hpdio.ova."
> I should have written ovf:capacity="20".
> So if you wish the actual size of the disk to be 20 GB (which means the
> disk is preallocated), the disk element should be set with:
> <Disk ovf:capacity="20" ovf:diskId="vmdisk2"
> ovf:populatedSize="21474836480" ...
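
For reference, that populatedSize is just the 20 GiB capacity expressed in
bytes:

  $ echo $((20 * 1024 * 1024 * 1024))
  21474836480
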
now I have this in the ovf file
<Disk ovf:capacity="20" ovf:diskId="vmdisk2"
ovf:populatedSize="21474836480"...
but the import fails again, this time faster. It looks like the
SPM cannot create the disk image
log from the SPM host...
2018-02-21 18:02:03,599+0100 INFO (jsonrpc/1) [vdsm.api] START
createVolume(sdUUID=u'69f6b3e7-d754-44cf-a665-9d7128260401',
spUUID=u'00000002-0002-0002-0002-0000000002b9',
imgUUID=u'0a5c4ecb-2c04-4f96-858a-4f74915d5caa', size=u'20',
volFormat=4, preallocate=2, diskType=u'DATA',
volUUID=u'bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0',
desc=u'{"DiskAlias":"HAAS-hpdio-disk001.vmdk","DiskDescription":""}',
srcImgUUID=u'00000000-0000-0000-0000-000000000000',
srcVolUUID=u'00000000-0000-0000-0000-000000000000',
initialSize=u'21474836480') from=::ffff:193.84.206.172,53154,
flow_id=e27cd35a-dc4e-4e72-a3ef-aa5b67c2bdab,
task_id=e7598aa1-420a-4612-9ee8-03012b1277d9 (api:46)
2018-02-21 18:02:03,603+0100 INFO (jsonrpc/1) [IOProcessClient]
Starting client ioprocess-3931 (__init__:330)
2018-02-21 18:02:03,638+0100 INFO (ioprocess/56120) [IOProcess]
Starting ioprocess (__init__:452)
2018-02-21 18:02:03,661+0100 INFO (jsonrpc/1) [vdsm.api] FINISH
createVolume return=None from=::ffff:193.84.206.172,53154,
flow_id=e27cd35a-dc4e-4e72-a3ef-aa5b67c2bdab,
task_id=e7598aa1-420a-4612-9ee8-03012b1277d9 (api:52)
2018-02-21 18:02:03,692+0100 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer]
RPC call Volume.create succeeded in 0.09 seconds (__init__:573)
2018-02-21 18:02:03,694+0100 INFO (tasks/1)
[storage.ThreadPool.WorkerThread] START task
e7598aa1-420a-4612-9ee8-03012b1277d9 (cmd=<bound method Task.commit of
<vdsm.storage.task.Task instance at 0x3faa050>>, args=None) (threadPool:208)
2018-02-21 18:02:03,995+0100 INFO (tasks/1) [storage.StorageDomain]
Create placeholder
/rhev/data-center/mnt/blockSD/69f6b3e7-d754-44cf-a665-9d7128260401/images/0a5c4ecb-2c04-4f96-858a-4f74915d5caa
for image's volumes (sd:1244)
2018-02-21 18:02:04,016+0100 INFO (tasks/1) [storage.Volume] Creating
volume bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0 (volume:1151)
2018-02-21 18:02:04,060+0100 ERROR (tasks/1) [storage.Volume] The
requested initial 21474836480 is bigger than the max size 134217728
(blockVolume:345)
2018-02-21 18:02:04,060+0100 ERROR (tasks/1) [storage.Volume] Failed to
create volume
/rhev/data-center/mnt/blockSD/69f6b3e7-d754-44cf-a665-9d7128260401/images/0a5c4ecb-2c04-4f96-858a-4f74915d5caa/bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0:
Invalid parameter: 'initial size=41943040' (volume:1175)
2018-02-21 18:02:04,061+0100 ERROR (tasks/1) [storage.Volume] Unexpected
error (volume:1215)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
1172, in create
initialSize=initialSize)
File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py",
line 501, in _create
size, initialSize)
File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py",
line 545, in calculate_volume_alloc_size
preallocate, capacity, initial_size)
File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py",
line 347, in calculate_volume_alloc_size
initial_size)
InvalidParameterException: Invalid parameter: 'initial size=41943040'
2018-02-21 18:02:04,062+0100 ERROR (tasks/1) [storage.TaskManager.Task]
(Task='e7598aa1-420a-4612-9ee8-03012b1277d9') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line
882, in _run
return fn(*args, **kargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line
336, in run
return self.cmd(*self.argslist, **self.argsdict)
File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py",
line 79, in wrapper
return method(self, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1936,
in createVolume
initialSize=initialSize)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 801,
in createVolume
initialSize=initialSize)
File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
1217, in create
(volUUID, e))
VolumeCreationError: Error creating a new volume: (u"Volume creation
bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0 failed: Invalid parameter: 'initial
size=41943040'",)
there are no new logs in the import folder on the host used for the import...
>
>
>
> >
> >> That should do it. If not, please share the OVA file and I will examine
> >> it in my environment.
> >
> > original file is at
> >
> > https://haas.cesnet.cz/downloads/release-01/HAAS-hpdio.ova
> >
> >>
> >>
> >> [1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/OvfOvaReader.java#L220
> >>
> >>
> >>
> >> > file
> >> > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log
> >> > in fact does not exist (nor does the folder /var/log/ovirt-engine/ova/)
> >> >
> >> >
> >> > This issue is also resolved in 4.2.2.
> >> > In the meantime, please create the /var/log/ovirt-engine/ova/ folder
> >> > manually and make sure its permissions match the ones of the other
> >> > folders in /var/log/ovirt-engine.
> >>
> >> ok, done. After another try there is this log file
> >>
> >>
> >> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220173005-ovirt01.net.slu.cz.log
> >>
> >> https://pastebin.com/M5J44qur
> >>
> >>
> >> Is it the log of the execution of the ansible playbook that was provided
> >> with a path to the /ova folder?
> >> I'm interested in that in order to see how come its execution
> >> never completed.
> >
> > well, I don't think so, it is the log from the import with the full path
> > to the ova file
> >
> >
> >
> >>
> >>
> >>
> >>
> >> > Cheers,
> >> >
> >> > Jiri Slezka
> >> >
> >> > >
> >> > >
> >> > >
> >> > > I am using latest 4.2.1.7-1.el7.centos version
> >> > >
> >> > > Cheers,
> >> > > Jiri Slezka
> >> > >
> >> > >
> >> > > [1] https://haas.cesnet.cz/#!index.md - Cesnet HAAS
> >> > > [2] https://haas.cesnet.cz/downloads/release-01/ - Image repository
> >> > >
> >> > >