[ovirt-users] disk not bootable

Fernando Fuentes ffuentes at darktcp.net
Mon Jul 18 20:16:54 UTC 2016


Oops... forgot the link:
 
http://pastebin.com/LereJgyw
 
The requested information is in the pastebin.
 
Regards,
 
 
--
Fernando Fuentes
ffuentes at txweather.org
http://www.txweather.org
 
 
 
On Mon, Jul 18, 2016, at 03:16 PM, Fernando Fuentes wrote:
> Nir,
>
> After some playing around with pvscan I was able to get all of the
> needed information.
>
> Please see:
>
>
> --
> Fernando Fuentes
> ffuentes at txweather.org
> http://www.txweather.org
>
>
>
> On Mon, Jul 18, 2016, at 02:30 PM, Nir Soffer wrote:
>> On Mon, Jul 18, 2016 at 6:48 PM, Fernando Fuentes
>> <ffuentes at darktcp.net> wrote:
>> > Nir,
>> >
>> > As requested:
>> >
>> > [root@gamma ~]# lsblk
>> > NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
>> > sda                                                             8:0    0   557G  0 disk
>> > ├─sda1                                                          8:1    0   500M  0 part  /boot
>> > └─sda2                                                          8:2    0 556.5G  0 part
>> >   ├─vg_gamma-lv_root (dm-0)                                   253:0    0    50G  0 lvm   /
>> >   ├─vg_gamma-lv_swap (dm-1)                                   253:1    0     4G  0 lvm   [SWAP]
>> >   └─vg_gamma-lv_home (dm-2)                                   253:2    0 502.4G  0 lvm   /home
>> > sr0                                                            11:0    1  1024M  0 rom
>> > sdb                                                             8:16   0     2T  0 disk
>> > └─36589cfc000000881b9b93c2623780840 (dm-4)                    253:4    0     2T  0 mpath
>> > sdc                                                             8:32   0     2T  0 disk
>> > └─36589cfc00000050564002c7e51978316 (dm-3)                    253:3    0     2T  0 mpath
>> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-metadata (dm-5)  253:5    0   512M  0 lvm
>> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-outbox (dm-6)    253:6    0   128M  0 lvm
>> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-leases (dm-7)    253:7    0     2G  0 lvm
>> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-ids (dm-8)       253:8    0   128M  0 lvm
>> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-inbox (dm-9)     253:9    0   128M  0 lvm
>> >   └─3ccb7b67--8067--4315--9656--d68ba10975ba-master (dm-10)   253:10   0     1G  0 lvm
>> > sdd                                                             8:48   0     4T  0 disk
>> > └─36589cfc00000059ccab70662b71c47ef (dm-11)                   253:11   0     4T  0 mpath
>> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-metadata (dm-12) 253:12   0   512M  0 lvm
>> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-ids (dm-13)      253:13   0   128M  0 lvm
>> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-leases (dm-14)   253:14   0     2G  0 lvm
>> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-outbox (dm-15)   253:15   0   128M  0 lvm
>> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-inbox (dm-16)    253:16   0   128M  0 lvm
>> >   └─4861322b--352f--41c6--890a--5cbf1c2c1f01-master (dm-17)   253:17   0     1G  0 lvm
>>
>> So you have 2 storage domains:
>>
>> - 3ccb7b67-8067-4315-9656-d68ba10975ba
>> - 4861322b-352f-41c6-890a-5cbf1c2c1f01
>>
>> But most likely neither of them is active right now.
>>
>> Can you share the output of:
>>
>>     iscsiadm -m session
>>
>> On a system connected to iscsi storage you will see something like:
>>
>> # iscsiadm -m session
>> tcp: [5] 10.35.0.99:3260,1 iqn.2003-01.org.dumbo.target1 (non-flash)
>>
>> The special lvs (ids, leases, ...) should be active, and you should
>> also see regular disk lvs (used for vm disks and their snapshots).
>>
>> Here is an example from a machine connected to an active iscsi domain:
>>
>> # lvs
>>   LV                                   VG                                   Attr       LSize
>>   27c4c795-bca4-4d7b-9b40-cda9098790f5 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   1.00g
>>   35be1f52-5b28-4c90-957a-710dbbb8f13f 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   8.00g
>>   36d9b41b-4b01-4fc2-8e93-ccf79af0f766 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 128.00m
>>   4fda3b44-27a5-4ce4-b8c3-66744aa9937b 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   2.12g
>>   c2e78f72-d499-44f0-91f5-9930a599dc87 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 128.00m
>>   d49919b4-30fc-440f-9b21-3367ddfdf396 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   4.00g
>>   f3b10280-43ed-4772-b122-18c92e098171 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   1.00g
>>   f409cc48-8248-4239-a4ea-66b0b1084416 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   1.00g
>>   ids                                  5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-ao---- 128.00m
>>   inbox                                5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 128.00m
>>   leases                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a-----   2.00g
>>   master                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a-----   1.00g
>>   metadata                             5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 512.00m
>>   outbox                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 128.00m
>>
>>
>>
>> > [root@gamma ~]#
>> >
>> > Regards,
>> >
>> > --
>> > Fernando Fuentes
>> > ffuentes at txweather.org
>> > http://www.txweather.org
>> >
>> > On Mon, Jul 18, 2016, at 07:43 AM, Nir Soffer wrote:
>> >> Can you share output of lsblk on this host?
>> >>
>> >> On Mon, Jul 18, 2016 at 3:52 AM, Fernando Fuentes
>> >> <ffuentes at darktcp.net>
>> >> wrote:
>> >> > Nir,
>> >> >
>> >> > That's odd. gamma is my iscsi host, it's in up state, and it has
>> >> > active VMs.
>> >> > What am I missing?
>> >> >
>> >> > Regards,
>> >> >
>> >> > --
>> >> > Fernando Fuentes
>> >> > ffuentes at txweather.org
>> >> > http://www.txweather.org
>> >> >
>> >> > On Sun, Jul 17, 2016, at 07:24 PM, Nir Soffer wrote:
>> >> >> On Sun, Jul 17, 2016 at 1:24 AM, Fernando Fuentes
>> >> >> <ffuentes at darktcp.net>
>> >> >> wrote:
>> >> >> > Nir,
>> >> >> >
>> >> >> > Ok, I got the uuid but I am getting the same results as
>> >> >> > before. Nothing comes up.
>> >> >> >
>> >> >> > [root@gamma ~]# pvscan --cache
>> >> >> > [root@gamma ~]# lvs -o vg_name,lv_name,tags | grep 3b7d9349-9eb1-42f8-9e04-7bbb97c02b98
>> >> >> > [root@gamma ~]#
>> >> >> >
>> >> >> > Without the grep, all I get is:
>> >> >> >
>> >> >> > [root@gamma ~]# lvs -o vg_name,lv_name,tags
>> >> >> >   VG       LV      LV Tags
>> >> >> >   vg_gamma lv_home
>> >> >> >   vg_gamma lv_root
>> >> >> >   vg_gamma lv_swap
>> >> >>
>> >> >> You are not connected to the iscsi storage domain.
>> >> >>
>> >> >> Please try this from a host in up state in engine.
>> >> >>
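>> >> >> To check whether this host has any iscsi session at all, and to
>> >> >> log in manually if it should have one, something like this should
>> >> >> work (the target IQN and portal below are placeholders, not your
>> >> >> real target):
>> >> >>
>> >> >>     iscsiadm -m session
>> >> >>     iscsiadm -m node -T iqn.2016-07.example:target -p 10.0.0.1:3260 --login
>> >> >>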
>> >> >> Nir
>> >> >>
>> >> >> >
>> >> >> > On the other hand, fdisk shows a bunch of disks; here are a
>> >> >> > few examples:
>> >> >> >
>> >> >> > Disk /dev/mapper/36589cfc00000050564002c7e51978316: 2199.0 GB, 2199023255552 bytes
>> >> >> > 255 heads, 63 sectors/track, 267349 cylinders
>> >> >> > Units = cylinders of 16065 * 512 = 8225280 bytes
>> >> >> > Sector size (logical/physical): 512 bytes / 32768 bytes
>> >> >> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
>> >> >> > Disk identifier: 0x00000000
>> >> >> >
>> >> >> > Disk /dev/mapper/36589cfc000000881b9b93c2623780840: 2199.0 GB, 2199023255552 bytes
>> >> >> > 255 heads, 63 sectors/track, 267349 cylinders
>> >> >> > Units = cylinders of 16065 * 512 = 8225280 bytes
>> >> >> > Sector size (logical/physical): 512 bytes / 32768 bytes
>> >> >> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
>> >> >> > Disk identifier: 0x00000000
>> >> >> >
>> >> >> > Disk /dev/mapper/3ccb7b67--8067--4315--9656--d68ba10975ba-metadata: 536 MB, 536870912 bytes
>> >> >> > 255 heads, 63 sectors/track, 65 cylinders
>> >> >> > Units = cylinders of 16065 * 512 = 8225280 bytes
>> >> >> > Sector size (logical/physical): 512 bytes / 32768 bytes
>> >> >> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
>> >> >> > Disk identifier: 0x00000000
>> >> >> >
>> >> >> > Disk /dev/mapper/4861322b--352f--41c6--890a--5cbf1c2c1f01-master: 1073 MB, 1073741824 bytes
>> >> >> > 255 heads, 63 sectors/track, 130 cylinders
>> >> >> > Units = cylinders of 16065 * 512 = 8225280 bytes
>> >> >> > Sector size (logical/physical): 512 bytes / 32768 bytes
>> >> >> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
>> >> >> > Disk identifier: 0x00000000
>> >> >> >
>> >> >> > Regards,
>> >> >> >
>> >> >> > --
>> >> >> > Fernando Fuentes
>> >> >> > ffuentes at txweather.org
>> >> >> > http://www.txweather.org
>> >> >> >
>> >> >> > On Sat, Jul 16, 2016, at 04:25 PM, Fernando Fuentes wrote:
>> >> >> >> Nir,
>> >> >> >>
>> >> >> >> Ok, I'll look for it here in a few.
>> >> >> >> Thanks for your reply and help!
>> >> >> >>
>> >> >> >> --
>> >> >> >> Fernando Fuentes
>> >> >> >> ffuentes at txweather.org
>> >> >> >> http://www.txweather.org
>> >> >> >>
>> >> >> >> On Sat, Jul 16, 2016, at 04:16 PM, Nir Soffer wrote:
>> >> >> >> > On Fri, Jul 15, 2016 at 3:50 PM, Fernando Fuentes
>> >> >> >> > <ffuentes at darktcp.net>
>> >> >> >> > wrote:
>> >> >> >> > > Nir,
>> >> >> >> > >
>> >> >> >> > > I tried to follow your steps but I can't seem to find
>> >> >> >> > > the ID of the template.
>> >> >> >> >
>> >> >> >> > The image-uuid of the template is displayed in the Disks
>> >> >> >> > tab in engine.
>> >> >> >> >
>> >> >> >> > To find the volume-uuid on block storage, you can do:
>> >> >> >> >
>> >> >> >> > pvscan --cache
>> >> >> >> > lvs -o vg_name,lv_name,tags | grep image-uuid
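>> >> >> >> >
>> >> >> >> > A matching line should look roughly like this (the uuids
>> >> >> >> > below are made up): the vg name is the storage domain uuid,
>> >> >> >> > the lv name is the volume-uuid, and the lv tags carry the
>> >> >> >> > image-uuid with an IU_ prefix:
>> >> >> >> >
>> >> >> >> >   4861322b-352f-41c6-890a-5cbf1c2c1f01 f409cc48-8248-4239-a4ea-66b0b1084416 IU_<image-uuid>,MD_7,PU_00000000-0000-0000-0000-000000000000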
>> >> >> >> >
>> >> >> >> > >
>> >> >> >> > > Regards,
>> >> >> >> > >
>> >> >> >> > > --
>> >> >> >> > > Fernando Fuentes
>> >> >> >> > > ffuentes at txweather.org
>> >> >> >> > > http://www.txweather.org
>> >> >> >> > >
>> >> >> >> > > On Sun, Jul 10, 2016, at 02:15 PM, Nir Soffer wrote:
>> >> >> >> > >> On Thu, Jul 7, 2016 at 7:46 PM, Melissa Mesler
>> >> >> >> > >> <melissa at justmelly.com>
>> >> >> >> > >> wrote:
>> >> >> >> > >> > All, I did a test for Fernando in our ovirt
>> >> >> >> > >> > environment. I created a vm
>> >> >> >> > >> > called win7melly in the nfs domain. I then migrated
>> >> >> >> > >> > it to the iscsi
>> >> >> >> > >> > domain. It booted without any issue. So it has to be
>> >> >> >> > >> > something with the
>> >> >> >> > >> > templates. I have attached the vdsm log for the host
>> >> >> >> > >> > the vm resides on.
>> >> >> >> > >>
>> >> >> >> > >> The log shows a working vm, so it does not help much.
>> >> >> >> > >>
>> >> >> >> > >> I think that the template you copied from the nfs domain
>> >> >> >> > >> to the block domain is corrupted, or the volume metadata
>> >> >> >> > >> is incorrect.
>> >> >> >> > >>
>> >> >> >> > >> If I understand this correctly, this started when
>> >> >> >> > >> Fernando could not copy the vm disk to the block storage,
>> >> >> >> > >> and I guess the issue was that the template was missing
>> >> >> >> > >> on that storage domain. I assume that he copied the
>> >> >> >> > >> template to the block storage domain by opening the
>> >> >> >> > >> templates tab, selecting the template, and choosing copy
>> >> >> >> > >> from the menu.
>> >> >> >> > >>
>> >> >> >> > >> Let's compare the template on both the nfs and block
>> >> >> >> > >> storage domains.
>> >> >> >> > >>
>> >> >> >> > >> 1. Find the template on the nfs storage domain, using
>> >> >> >> > >>    the image uuid in engine.
>> >> >> >> > >>
>> >> >> >> > >> It should be at
>> >> >> >> > >>
>> >> >> >> > >>     /rhev/data-center/mnt/server:_path/domain-uuid/images/image-uuid/volume-uuid
>> >> >> >> > >>
>> >> >> >> > >> 2. Please share the output of:
>> >> >> >> > >>
>> >> >> >> > >>     cat /path/to/volume.meta
>> >> >> >> > >>     qemu-img info /path/to/volume
>> >> >> >> > >>     qemu-img check /path/to/volume
>> >> >> >> > >>
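>> >> >> >> > >> For reference, the .meta file is plain key=value text; it
>> >> >> >> > >> should look roughly like this (the values below are made
>> >> >> >> > >> up, and only a few of the keys are shown):
>> >> >> >> > >>
>> >> >> >> > >>     DOMAIN=<domain-uuid>
>> >> >> >> > >>     IMAGE=<image-uuid>
>> >> >> >> > >>     FORMAT=COW
>> >> >> >> > >>     VOLTYPE=SHARED
>> >> >> >> > >>     LEGALITY=LEGAL
>> >> >> >> > >>     SIZE=83886080
>> >> >> >> > >>     EOF
>> >> >> >> > >>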
>> >> >> >> > >> 3. Find the template on the block storage domain
>> >> >> >> > >>
>> >> >> >> > >> You should have an lv using the same volume uuid and
>> >> >> >> > >> the image-uuid
>> >> >> >> > >> should be in the lv tag.
>> >> >> >> > >>
>> >> >> >> > >> Find it using:
>> >> >> >> > >>
>> >> >> >> > >>     lvs -o vg_name,lv_name,tags | grep volume-uuid
>> >> >> >> > >>
>> >> >> >> > >> 4. Activate the lv
>> >> >> >> > >>
>> >> >> >> > >>     lvchange -ay vg_name/lv_name
>> >> >> >> > >>
>> >> >> >> > >> 5. Share the output of:
>> >> >> >> > >>
>> >> >> >> > >>     qemu-img info /dev/vg_name/lv_name
>> >> >> >> > >>     qemu-img check /dev/vg_name/lv_name
>> >> >> >> > >>
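>> >> >> >> > >> On a healthy volume, qemu-img check should end with "No
>> >> >> >> > >> errors were found on the image.", and qemu-img info
>> >> >> >> > >> should report the expected format and virtual size,
>> >> >> >> > >> roughly like this (the numbers here are made up):
>> >> >> >> > >>
>> >> >> >> > >>     image: /dev/vg_name/lv_name
>> >> >> >> > >>     file format: qcow2
>> >> >> >> > >>     virtual size: 40G (42949672960 bytes)
>> >> >> >> > >>     disk size: 0
>> >> >> >> > >>     cluster_size: 65536
>> >> >> >> > >>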
>> >> >> >> > >> 6. Deactivate the lv
>> >> >> >> > >>
>> >> >> >> > >>     lvchange -an vg_name/lv_name
>> >> >> >> > >>
>> >> >> >> > >> 7. Find the lv metadata
>> >> >> >> > >>
>> >> >> >> > >> The metadata is stored in /dev/vg_name/metadata. To find
>> >> >> >> > >> the correct block, find the tag named MD_N in the lv tags
>> >> >> >> > >> you found in step 3.
>> >> >> >> > >>
>> >> >> >> > >> The block we need is located at offset N from the start
>> >> >> >> > >> of the volume.
>> >> >> >> > >>
>> >> >> >> > >> 8. Share the output of:
>> >> >> >> > >>
>> >> >> >> > >>     dd if=/dev/vg_name/metadata bs=512 skip=N count=1 iflag=direct
>> >> >> >> > >>
>> >> >> >> > >> The output of this command should show the image-uuid.
>> >> >> >> > >>
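>> >> >> >> > >> For example, if the lv tags from step 3 included MD_42,
>> >> >> >> > >> the command would be (the slot number 42 here is
>> >> >> >> > >> hypothetical):
>> >> >> >> > >>
>> >> >> >> > >>     dd if=/dev/vg_name/metadata bs=512 skip=42 count=1 iflag=direct
>> >> >> >> > >>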
>> >> >> >> > >> Nir
>> >> >> >> > >>
>> >> >> >> > >> >
>> >> >> >> > >> > - MeLLy
>> >> >> >> > >> >
>> >> >> >> > >> > On Mon, Jul 4, 2016, at 11:52 PM, Fernando Fuentes
>> >> >> >> > >> > wrote:
>> >> >> >> > >> >> Nir,
>> >> >> >> > >> >>
>> >> >> >> > >> >> That's exactly how I did it, Nir.
>> >> >> >> > >> >> I will test tomorrow with a new Windows VM and
>> >> >> >> > >> >> report back.
>> >> >> >> > >> >>
>> >> >> >> > >> >> Regards,
>> >> >> >> > >> >>
>> >> >> >> > >> >> --
>> >> >> >> > >> >> Fernando Fuentes
>> >> >> >> > >> >> ffuentes at txweather.org
>> >> >> >> > >> >> http://www.txweather.org
>> >> >> >> > >> >>
>> >> >> >> > >> >> On Mon, Jul 4, 2016, at 10:48 AM, Nir Soffer wrote:
>> >> >> >> > >> >> > On Mon, Jul 4, 2016 at 6:43 PM, Francesco Romani
>> >> >> >> > >> >> > <fromani at redhat.com>
>> >> >> >> > >> >> > wrote:
>> >> >> >> > >> >> > > ----- Original Message -----
>> >> >> >> > >> >> > >> From: "Nir Soffer" <nsoffer at redhat.com>
>> >> >> >> > >> >> > >> To: "Fernando Fuentes" <ffuentes at darktcp.net>
>> >> >> >> > >> >> > >> Cc: "Francesco Romani" <fromani at redhat.com>,
>> >> >> >> > >> >> > >> "users" <users at ovirt.org>
>> >> >> >> > >> >> > >> Sent: Saturday, July 2, 2016 11:18:01 AM
>> >> >> >> > >> >> > >> Subject: Re: [ovirt-users] disk not bootable
>> >> >> >> > >> >> > >>
>> >> >> >> > >> >> > >> On Sat, Jul 2, 2016 at 1:33 AM, Fernando
>> >> >> >> > >> >> > >> Fuentes <ffuentes at darktcp.net>
>> >> >> >> > >> >> > >> wrote:
>> >> >> >> > >> >> > >> > Nir,
>> >> >> >> > >> >> > >> >
>> >> >> >> > >> >> > >> > Ok, I ran another test: I moved this one from
>> >> >> >> > >> >> > >> > the NFS domain to iSCSI and it stopped working;
>> >> >> >> > >> >> > >> > then I moved it back and it is still unable to
>> >> >> >> > >> >> > >> > run... The Windows VM is saying "no available
>> >> >> >> > >> >> > >> > boot disk".
>> >> >> >> > >> >> > >> > VM: Win7-Test
>> >> >> >> > >> >> > >> > Host: Zeta
>> >> >> >> > >> >> > >> > Info as requested:
>> >> >> >> > >> >> > >> > http://pastebin.com/1fSi3auz
>> >> >> >> > >> >> > >>
>> >> >> >> > >> >> > >> We need a working xml to compare to.
>> >> >> >> > >> >> > >
>> >> >> >> > >> >> > > [snip expected changes]
>> >> >> >> > >> >> > >
>> >> >> >> > >> >> > >
>> >> >> >> > >> >> > >>   <entry name="manufacturer">oVirt</entry>
>> >> >> >> > >> >> > >>   <entry name="product">oVirt Node</entry>
>> >> >> >> > >> >> > >>   <entry name="version">6-
>> >> >> >> > >> >> > >>   5.el6.centos.11.2</entry>
>> >> >> >> > >> >> > >> - <entry name="serial">C938F077-55E2-3E50-A694-
>> >> >> >> > >> >> > >>   9FCB7661FD89</entry>
>> >> >> >> > >> >> > >> + <entry name="serial">735C7A01-1F16-3CF0-AF8C-
>> >> >> >> > >> >> > >>   A99823E95AC0</entry>
>> >> >> >> > >> >> > >>
>> >> >> >> > >> >> > >> Not expected - maybe this is confusing Windows?
>> >> >> >> > >> >> > >>
>> >> >> >> > >> >> > >> Francesco, why has the vm serial changed after
>> >> >> >> > >> >> > >> moving disks from one storage domain to another?
>> >> >> >> > >> >> > >
>> >> >> >> > >> >> > > We put in serial either
>> >> >> >> > >> >> > > 1. the UUID Engine sends to us
>> >> >> >> > >> >> > > 2. the host UUID as returned by our getHostUUID
>> >> >> >> > >> >> > >    utility function
>> >> >> >> > >> >> > >
>> >> >> >> > >> >> > > the latter is unlikely to change, even after
>> >> >> >> > >> >> > > this disk move.
>> >> >> >> > >> >> >
>> >> >> >> > >> >> > Fernando, can you describe exactly how you moved
>> >> >> >> > >> >> > the disk?
>> >> >> >> > >> >> >
>> >> >> >> > >> >> > I assume that you selected the vm in the virtual
>> >> >> >> > >> >> > machines tab, then selected disks from the sub tab,
>> >> >> >> > >> >> > then selected move, and selected the target storage
>> >> >> >> > >> >> > domain.
>> >> >> >> > >> >> >
>> >> >> >> > >> >> > Also, can you reproduce this with a new vm? (Create
>> >> >> >> > >> >> > a vm with a disk on nfs, stop the vm, move the disk
>> >> >> >> > >> >> > to iscsi, start the vm.)
>> >> >> >> > >> >> >
>> >> >> >> > >> >> > > So the first suspect in line is Engine.
>> >> >> >> > >> >> > >
>> >> >> >> > >> >> > > Arik, do you know if Engine is indeed supposed
>> >> >> >> > >> >> > > to change the UUID in this flow?
>> >> >> >> > >> >> > > That seems very surprising.
>> >> >> >> > >> >> > >
>> >> >> >> > >> >> > > Thanks and bests,
>> >> >> >> > >> >> > >
>> >> >> >> > >> >> > > --
>> >> >> >> > >> >> > > Francesco Romani
>> >> >> >> > >> >> > > RedHat Engineering Virtualization R & D
>> >> >> >> > >> >> > > Phone: 8261328
>> >> >> >> > >> >> > > IRC: fromani
>
 