Nir,
I found something very interesting.
When exporting the same VMs and templates and then importing them into a
new cluster running oVirt 4.0 that has iSCSI mounts ONLY, they import
and run just fine. I noticed that the disks imported as Blank templates
and they do not seem to have any dependency on the templates. Is this
normal behavior?
Regards,
--
Fernando Fuentes
ffuentes(a)txweather.org
Nir,
I have been busy and have not been able to replicate your request.
I will as soon as I get a chance.
Thanks again for the help.
Regards,
--
Fernando Fuentes
ffuentes(a)txweather.org
http://www.txweather.org
On Tue, Jul 19, 2016, at 08:54 AM, Fernando Fuentes wrote:
> Nir,
>
> Thanks for all the help!
> I am on it and will reply with the requested info asap.
>
> Regards,
>
> --
> Fernando Fuentes
> ffuentes(a)txweather.org
> http://www.txweather.org
>
> On Tue, Jul 19, 2016, at 07:16 AM, Nir Soffer wrote:
> > On Mon, Jul 18, 2016 at 11:16 PM, Fernando Fuentes <ffuentes(a)darktcp.net> wrote:
> > > Ops... forgot the link:
> > >
> > > http://pastebin.com/LereJgyw
> > >
> > > The requested info is in the pastebin.
> >
> > So the issue is clear now: the template on NFS is using raw format, and
> > on block storage, qcow2 format:
> >
> > NFS:
> >
> > [root@zeta ~]# cat /rhev/data-center/mnt/172.30.10.5\:_opt_libvirtd_images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-
> > ...
> > FORMAT=RAW
> > ...
> >
> > [root@alpha ~]# qemu-img info /opt/libvirtd/images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-363338ee0c0e
> > image: /opt/libvirtd/images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-363338ee0c0e
> > ...
> > file format: raw
> > ...
> >
> > iSCSI:
> >
> > [root@zeta ~]# qemu-img info
> > /dev/0ef17024-0eae-4590-8eea-6ec8494fe223/25b6b0fe-d416-458f-b89f-363338ee0c0e
> > image:
> > /dev/0ef17024-0eae-4590-8eea-6ec8494fe223/25b6b0fe-d416-458f-b89f-363338ee0c0e
> > ...
> > file format: qcow2
> > ...
> >
> > [root@zeta ~]# dd
> > if=/dev/0ef17024-0eae-4590-8eea-6ec8494fe223/metadata bs=512 skip=4
> > count=1 iflag=direct
> > ...
> > FORMAT=COW
> > ...
> >
> > This format conversion is expected, as we don't support raw/sparse on
> > block storage.
> >
> > It looks like the vm is started with the template disk as "raw" format,
> > which is expected to fail when the format is actually "qcow2". The guest
> > will see the qcow2 headers instead of the actual data.
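The mismatch described above can be checked mechanically. Below is a minimal sketch; the helper names are made up for illustration (they are not oVirt or vdsm tools). It maps the FORMAT value stored in oVirt volume metadata (RAW/COW) to the format name qemu-img reports (raw/qcow2) and flags a disagreement:

```shell
# Sketch only: helper names are hypothetical, not part of oVirt or vdsm.
# Map the FORMAT value from oVirt volume metadata (RAW/COW) to the
# format name qemu-img reports (raw/qcow2).
meta_to_qemu() {
    case "$1" in
        RAW) echo raw ;;
        COW) echo qcow2 ;;
        *)   echo unknown ;;
    esac
}

# Compare the metadata FORMAT ($1) with the format qemu-img reports ($2).
# A mismatch is exactly the situation above: the guest would see qcow2
# headers where it expects its boot sector.
check_volume_format() {
    expected=$(meta_to_qemu "$1")
    if [ "$expected" = "$2" ]; then
        echo "ok: metadata $1 matches qemu-img $2"
    else
        echo "MISMATCH: metadata $1 implies $expected but qemu-img reports $2"
        return 1
    fi
}
```

For a real volume, the second argument would come from the "file format:" line of `qemu-img info`, as shown in the outputs above.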
> >
> > The next step to debug this is:
> >
> > 1. Copy a disk using this template to the block storage domain
> > 2. Create a new vm using this disk
> > 3. Start the vm
> >
> > Does it start? If not, attach engine and vdsm logs from this timeframe.
> >
> > If this works, you can try:
> >
> > 1. Move vm disk from NFS to block storage
> > 2. Start the vm
> >
> > Again, if it does not work, attach engine and vdsm logs.
> >
> > Nir
> >
> > >
> > > Regards,
> > >
> > >
> > > --
> > > Fernando Fuentes
> > > ffuentes(a)txweather.org
> > > http://www.txweather.org
> > >
> > >
> > >
> > > On Mon, Jul 18, 2016, at 03:16 PM, Fernando Fuentes wrote:
> > >
> > > Nir,
> > >
> > > After some playing around with pvscan I was able to get all of the
> > > needed information.
> > >
> > > Please see:
> > >
> > >
> > > --
> > > Fernando Fuentes
> > > ffuentes(a)txweather.org
> > > http://www.txweather.org
> > >
> > >
> > >
> > > On Mon, Jul 18, 2016, at 02:30 PM, Nir Soffer wrote:
> > >
> > > On Mon, Jul 18, 2016 at 6:48 PM, Fernando Fuentes <ffuentes(a)darktcp.net> wrote:
> > >> Nir,
> > >>
> > >> As requested:
> > >>
> > >> [root@gamma ~]# lsblk
> > >> NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> > >> sda                                                             8:0    0   557G  0 disk
> > >> ├─sda1                                                          8:1    0   500M  0 part  /boot
> > >> └─sda2                                                          8:2    0 556.5G  0 part
> > >>   ├─vg_gamma-lv_root (dm-0)                                   253:0    0    50G  0 lvm   /
> > >>   ├─vg_gamma-lv_swap (dm-1)                                   253:1    0     4G  0 lvm   [SWAP]
> > >>   └─vg_gamma-lv_home (dm-2)                                   253:2    0 502.4G  0 lvm   /home
> > >> sr0                                                            11:0    1  1024M  0 rom
> > >> sdb                                                             8:16   0     2T  0 disk
> > >> └─36589cfc000000881b9b93c2623780840 (dm-4)                    253:4    0     2T  0 mpath
> > >> sdc                                                             8:32   0     2T  0 disk
> > >> └─36589cfc00000050564002c7e51978316 (dm-3)                    253:3    0     2T  0 mpath
> > >>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-metadata (dm-5)  253:5    0   512M  0 lvm
> > >>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-outbox (dm-6)    253:6    0   128M  0 lvm
> > >>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-leases (dm-7)    253:7    0     2G  0 lvm
> > >>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-ids (dm-8)       253:8    0   128M  0 lvm
> > >>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-inbox (dm-9)     253:9    0   128M  0 lvm
> > >>   └─3ccb7b67--8067--4315--9656--d68ba10975ba-master (dm-10)   253:10   0     1G  0 lvm
> > >> sdd                                                             8:48   0     4T  0 disk
> > >> └─36589cfc00000059ccab70662b71c47ef (dm-11)                   253:11   0     4T  0 mpath
> > >>   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-metadata (dm-12) 253:12   0   512M  0 lvm
> > >>   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-ids (dm-13)      253:13   0   128M  0 lvm
> > >>   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-leases (dm-14)   253:14   0     2G  0 lvm
> > >>   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-outbox (dm-15)   253:15   0   128M  0 lvm
> > >>   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-inbox (dm-16)    253:16   0   128M  0 lvm
> > >>   └─4861322b--352f--41c6--890a--5cbf1c2c1f01-master (dm-17)   253:17   0     1G  0 lvm
> > >
> > > So you have 2 storage domains:
> > >
> > > - 3ccb7b67-8067-4315-9656-d68ba10975ba
> > > - 4861322b-352f-41c6-890a-5cbf1c2c1f01
> > >
> > > But most likely both of them are not active now.
> > >
> > > Can you share the output of:
> > >
> > > iscsiadm -m session
> > >
> > > On a system connected to iscsi storage you will see something like:
> > >
> > > # iscsiadm -m session
> > > tcp: [5] 10.35.0.99:3260,1 iqn.2003-01.org.dumbo.target1 (non-flash)
> > >
> > > The special lvs (ids, leases, ...) should be active, and you should also
> > > see regular disk lvs (used for snapshots of vm disks).
> > >
> > > Here is an example from a machine connected to an active iscsi domain:
> > >
> > > # lvs
> > >   LV                                   VG                                   Attr       LSize
> > >   27c4c795-bca4-4d7b-9b40-cda9098790f5 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 1.00g
> > >   35be1f52-5b28-4c90-957a-710dbbb8f13f 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 8.00g
> > >   36d9b41b-4b01-4fc2-8e93-ccf79af0f766 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 128.00m
> > >   4fda3b44-27a5-4ce4-b8c3-66744aa9937b 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 2.12g
> > >   c2e78f72-d499-44f0-91f5-9930a599dc87 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 128.00m
> > >   d49919b4-30fc-440f-9b21-3367ddfdf396 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 4.00g
> > >   f3b10280-43ed-4772-b122-18c92e098171 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 1.00g
> > >   f409cc48-8248-4239-a4ea-66b0b1084416 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 1.00g
> > >   ids                                  5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-ao---- 128.00m
> > >   inbox                                5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 128.00m
> > >   leases                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 2.00g
> > >   master                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 1.00g
> > >   metadata                             5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 512.00m
> > >   outbox                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 128.00m
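The connectivity precondition Nir describes (no iSCSI session means no domain lvs) can be checked in a script. This is a sketch only; `iscsi_connected` is a made-up helper name, not an oVirt tool, and it relies on `iscsiadm -m session` exiting non-zero when there are no active sessions:

```shell
# Sketch only: iscsi_connected is a hypothetical helper, not an oVirt tool.
# Report whether this host has any active iSCSI sessions, which is a
# precondition for the storage domain's lvs being visible at all.
iscsi_connected() {
    # iscsiadm prints an error and exits non-zero when no sessions exist
    iscsiadm -m session >/dev/null 2>&1
}

if iscsi_connected; then
    echo "host has iscsi sessions; the domain lvs should be visible"
else
    echo "no iscsi sessions; run the commands from a host in up state"
fi
```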
> > >
> > >
> > >
> > >> [root@gamma ~]#
> > >>
> > >> Regards,
> > >>
> > >> --
> > >> Fernando Fuentes
> > >> ffuentes(a)txweather.org
> > >> http://www.txweather.org
> > >>
> > >> On Mon, Jul 18, 2016, at 07:43 AM, Nir Soffer wrote:
> > >>> Can you share output of lsblk on this host?
> > >>>
> > >>> On Mon, Jul 18, 2016 at 3:52 AM, Fernando Fuentes <ffuentes(a)darktcp.net> wrote:
> > >>> > Nir,
> > >>> >
> > >>> > That's odd. gamma is my iscsi host, it's in up state and it has
> > >>> > active VMs.
> > >>> > What am I missing?
> > >>> >
> > >>> > Regards,
> > >>> >
> > >>> > --
> > >>> > Fernando Fuentes
> > >>> > ffuentes(a)txweather.org
> > >>> > http://www.txweather.org
> > >>> >
> > >>> > On Sun, Jul 17, 2016, at 07:24 PM, Nir Soffer wrote:
> > >>> >> On Sun, Jul 17, 2016 at 1:24 AM, Fernando Fuentes
> > >>> >> <ffuentes(a)darktcp.net>
> > >>> >> wrote:
> > >>> >> > Nir,
> > >>> >> >
> > >>> >> > Ok I got the uuid but I am getting the same results as before.
> > >>> >> > Nothing comes up.
> > >>> >> >
> > >>> >> > [root@gamma ~]# pvscan --cache
> > >>> >> > [root@gamma ~]# lvs -o vg_name,lv_name,tags | grep
> > >>> >> > 3b7d9349-9eb1-42f8-9e04-7bbb97c02b98
> > >>> >> > [root@gamma ~]#
> > >>> >> >
> > >>> >> > without the grep all I get is:
> > >>> >> >
> > >>> >> > [root@gamma ~]# lvs -o vg_name,lv_name,tags
> > >>> >> > VG LV LV Tags
> > >>> >> > vg_gamma lv_home
> > >>> >> > vg_gamma lv_root
> > >>> >> > vg_gamma lv_swap
> > >>> >>
> > >>> >> You are not connected to the iscsi storage domain.
> > >>> >>
> > >>> >> Please try this from a host in up state in engine.
> > >>> >>
> > >>> >> Nir
> > >>> >>
> > >>> >> >
> > >>> >> > On the other hand an fdisk shows a bunch of disks, and here is one
> > >>> >> > example:
> > >>> >> >
> > >>> >> > Disk /dev/mapper/36589cfc00000050564002c7e51978316: 2199.0 GB, 2199023255552 bytes
> > >>> >> > 255 heads, 63 sectors/track, 267349 cylinders
> > >>> >> > Units = cylinders of 16065 * 512 = 8225280 bytes
> > >>> >> > Sector size (logical/physical): 512 bytes / 32768 bytes
> > >>> >> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> > >>> >> > Disk identifier: 0x00000000
> > >>> >> >
> > >>> >> > Disk /dev/mapper/36589cfc000000881b9b93c2623780840: 2199.0 GB, 2199023255552 bytes
> > >>> >> > 255 heads, 63 sectors/track, 267349 cylinders
> > >>> >> > Units = cylinders of 16065 * 512 = 8225280 bytes
> > >>> >> > Sector size (logical/physical): 512 bytes / 32768 bytes
> > >>> >> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> > >>> >> > Disk identifier: 0x00000000
> > >>> >> >
> > >>> >> > Disk /dev/mapper/3ccb7b67--8067--4315--9656--d68ba10975ba-metadata: 536 MB, 536870912 bytes
> > >>> >> > 255 heads, 63 sectors/track, 65 cylinders
> > >>> >> > Units = cylinders of 16065 * 512 = 8225280 bytes
> > >>> >> > Sector size (logical/physical): 512 bytes / 32768 bytes
> > >>> >> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> > >>> >> > Disk identifier: 0x00000000
> > >>> >> >
> > >>> >> > Disk /dev/mapper/4861322b--352f--41c6--890a--5cbf1c2c1f01-master: 1073 MB, 1073741824 bytes
> > >>> >> > 255 heads, 63 sectors/track, 130 cylinders
> > >>> >> > Units = cylinders of 16065 * 512 = 8225280 bytes
> > >>> >> > Sector size (logical/physical): 512 bytes / 32768 bytes
> > >>> >> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> > >>> >> > Disk identifier: 0x00000000
> > >>> >> >
> > >>> >> > Regards,
> > >>> >> >
> > >>> >> > --
> > >>> >> > Fernando Fuentes
> > >>> >> > ffuentes(a)txweather.org
> > >>> >> > http://www.txweather.org
> > >>> >> >
> > >>> >> > On Sat, Jul 16, 2016, at 04:25 PM, Fernando Fuentes wrote:
> > >>> >> >> Nir,
> > >>> >> >>
> > >>> >> >> Ok ill look for it here in a few.
> > >>> >> >> Thanks for your reply and help!
> > >>> >> >>
> > >>> >> >> --
> > >>> >> >> Fernando Fuentes
> > >>> >> >> ffuentes(a)txweather.org
> > >>> >> >> http://www.txweather.org
> > >>> >> >>
> > >>> >> >> On Sat, Jul 16, 2016, at 04:16 PM, Nir Soffer wrote:
> > >>> >> >> > On Fri, Jul 15, 2016 at 3:50 PM, Fernando Fuentes <ffuentes(a)darktcp.net> wrote:
> > >>> >> >> > > Nir,
> > >>> >> >> > >
> > >>> >> >> > > I tried to follow your steps but I can't seem to find the
> > >>> >> >> > > ID of the template.
> > >>> >> >> >
> > >>> >> >> > The image-uuid of the template is displayed in the Disks tab
> > >>> >> >> > in engine.
> > >>> >> >> >
> > >>> >> >> > To find the volume-uuid on block storage, you can do:
> > >>> >> >> >
> > >>> >> >> > pvscan --cache
> > >>> >> >> > lvs -o vg_name,lv_name,tags | grep image-uuid
> > >>> >> >> >
> > >>> >> >> > >
> > >>> >> >> > > Regards,
> > >>> >> >> > >
> > >>> >> >> > > --
> > >>> >> >> > > Fernando Fuentes
> > >>> >> >> > > ffuentes(a)txweather.org
> > >>> >> >> > > http://www.txweather.org
> > >>> >> >> > >
> > >>> >> >> > > On Sun, Jul 10, 2016, at 02:15 PM, Nir Soffer wrote:
> > >>> >> >> > >> On Thu, Jul 7, 2016 at 7:46 PM, Melissa Mesler <melissa(a)justmelly.com> wrote:
> > >>> >> >> > >> > All, I did a test for Fernando in our ovirt environment. I
> > >>> >> >> > >> > created a vm called win7melly in the nfs domain. I then
> > >>> >> >> > >> > migrated it to the iscsi domain. It booted without any issue.
> > >>> >> >> > >> > So it has to be something with the templates. I have attached
> > >>> >> >> > >> > the vdsm log for the host the vm resides on.
> > >>> >> >> > >>
> > >>> >> >> > >> The log shows a working vm, so it does not help much.
> > >>> >> >> > >>
> > >>> >> >> > >> I think that the template you copied from the nfs domain to
> > >>> >> >> > >> the block domain is corrupted, or the volume metadata are
> > >>> >> >> > >> incorrect.
> > >>> >> >> > >>
> > >>> >> >> > >> If I understand this correctly, this started when Fernando
> > >>> >> >> > >> could not copy the vm disk to the block storage, and I guess
> > >>> >> >> > >> the issue was that the template was missing on that storage
> > >>> >> >> > >> domain. I assume that he copied the template to the block
> > >>> >> >> > >> storage domain by opening the templates tab, selecting the
> > >>> >> >> > >> template, and choosing copy from the menu.
> > >>> >> >> > >>
> > >>> >> >> > >> Let's compare the template on both the nfs and block storage
> > >>> >> >> > >> domains.
> > >>> >> >> > >>
> > >>> >> >> > >> 1. Find the template on the nfs storage domain, using the
> > >>> >> >> > >> image uuid in engine.
> > >>> >> >> > >>
> > >>> >> >> > >> It should be at
> > >>> >> >> > >>
> > >>> >> >> > >>
> > >>> >> >> > >>
> > >>> >> >> > >> /rhev/data-center/mnt/server:_path/domain-uuid/images/image-uuid/volume-uuid
> > >>> >> >> > >>
> > >>> >> >> > >> 2. Please share the output of:
> > >>> >> >> > >>
> > >>> >> >> > >> cat /path/to/volume.meta
> > >>> >> >> > >> qemu-img info /path/to/volume
> > >>> >> >> > >> qemu-img check /path/to/volume
> > >>> >> >> > >>
> > >>> >> >> > >> 4. Find the template on the block storage domain
> > >>> >> >> > >>
> > >>> >> >> > >> You should have an lv using the same volume uuid, and the
> > >>> >> >> > >> image-uuid should be in the lv tags.
> > >>> >> >> > >>
> > >>> >> >> > >> Find it using:
> > >>> >> >> > >>
> > >>> >> >> > >> lvs -o vg_name,lv_name,tags | grep volume-uuid
> > >>> >> >> > >>
> > >>> >> >> > >> 5. Activate the lv
> > >>> >> >> > >>
> > >>> >> >> > >> lvchange -ay vg_name/lv_name
> > >>> >> >> > >>
> > >>> >> >> > >> 6. Share the output of
> > >>> >> >> > >>
> > >>> >> >> > >> qemu-img info /dev/vg_name/lv_name
> > >>> >> >> > >> qemu-img check /dev/vg_name/lv_name
> > >>> >> >> > >>
> > >>> >> >> > >> 7. Deactivate the lv
> > >>> >> >> > >>
> > >>> >> >> > >> lvchange -an vg_name/lv_name
> > >>> >> >> > >>
> > >>> >> >> > >> 8. Find the lv metadata
> > >>> >> >> > >>
> > >>> >> >> > >> The metadata is stored in /dev/vg_name/metadata. To find the
> > >>> >> >> > >> correct block, find the tag named MD_N in the lv tags you
> > >>> >> >> > >> found in step 4.
> > >>> >> >> > >>
> > >>> >> >> > >> The block we need is located at offset N from start of volume.
> > >>> >> >> > >>
> > >>> >> >> > >> 9. Share the output of:
> > >>> >> >> > >>
> > >>> >> >> > >> dd if=/dev/vg_name/metadata bs=512 skip=N count=1 iflag=direct
> > >>> >> >> > >>
> > >>> >> >> > >> The output of this command should show the image-uuid.
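The MD_N lookup in steps 8-9 can be scripted. A minimal sketch, with made-up helper names (not vdsm tools) and assuming the lv tags are the comma-separated string reported by `lvs -o tags` (e.g. "IU_<image-uuid>,MD_27,PU_<parent-uuid>"):

```shell
# Sketch only: md_slot and md_read_cmd are hypothetical helpers, not vdsm tools.
# Given the comma-separated lv tag string, print the metadata slot number N
# from the MD_N tag.
md_slot() {
    echo "$1" | tr ',' '\n' | sed -n 's/^MD_//p'
}

# Print the dd command that reads that metadata block from the vg's
# metadata lv, matching the dd invocation shown in step 9.
md_read_cmd() {
    vg=$1
    tags=$2
    n=$(md_slot "$tags")
    echo "dd if=/dev/$vg/metadata bs=512 skip=$n count=1 iflag=direct"
}
```

For example, `md_read_cmd myvg "IU_abc,MD_4,PU_0"` prints a dd command with `skip=4`.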
> > >>> >> >> > >>
> > >>> >> >> > >> Nir
> > >>> >> >> > >>
> > >>> >> >> > >> >
> > >>> >> >> > >> > - MeLLy
> > >>> >> >> > >> >
> > >>> >> >> > >> > On Mon, Jul 4, 2016, at 11:52 PM, Fernando Fuentes wrote:
> > >>> >> >> > >> >> Nir,
> > >>> >> >> > >> >>
> > >>> >> >> > >> >> That's exactly how I did it, Nir.
> > >>> >> >> > >> >> I will test tomorrow with a new Windows VM and report back.
> > >>> >> >> > >> >>
> > >>> >> >> > >> >> Regards,
> > >>> >> >> > >> >>
> > >>> >> >> > >> >> --
> > >>> >> >> > >> >> Fernando Fuentes
> > >>> >> >> > >> >> ffuentes(a)txweather.org
> > >>> >> >> > >> >> http://www.txweather.org
> > >>> >> >> > >> >>
> > >>> >> >> > >> >> On Mon, Jul 4, 2016, at 10:48 AM, Nir Soffer wrote:
> > >>> >> >> > >> >> > On Mon, Jul 4, 2016 at 6:43 PM, Francesco Romani <fromani(a)redhat.com> wrote:
> > >>> >> >> > >> >> > > ----- Original Message -----
> > >>> >> >> > >> >> > >> From: "Nir Soffer" <nsoffer(a)redhat.com>
> > >>> >> >> > >> >> > >> To: "Fernando Fuentes" <ffuentes(a)darktcp.net>
> > >>> >> >> > >> >> > >> Cc: "Francesco Romani" <fromani(a)redhat.com>, "users" <users(a)ovirt.org>
> > >>> >> >> > >> >> > >> Sent: Saturday, July 2, 2016 11:18:01 AM
> > >>> >> >> > >> >> > >> Subject: Re: [ovirt-users] disk not bootable
> > >>> >> >> > >> >> > >>
> > >>> >> >> > >> >> > >> On Sat, Jul 2, 2016 at 1:33 AM, Fernando Fuentes <ffuentes(a)darktcp.net> wrote:
> > >>> >> >> > >> >> > >> > Nir,
> > >>> >> >> > >> >> > >> >
> > >>> >> >> > >> >> > >> > Ok I ran another test: this one I moved from the
> > >>> >> >> > >> >> > >> > NFS domain to iSCSI and it stopped working, then I
> > >>> >> >> > >> >> > >> > moved it back and am still unable to run it... The
> > >>> >> >> > >> >> > >> > Windows VM is saying "no available boot disk"
> > >>> >> >> > >> >> > >> > VM: Win7-Test
> > >>> >> >> > >> >> > >> > Host: Zeta
> > >>> >> >> > >> >> > >> > Info as requested: http://pastebin.com/1fSi3auz
> > >>> >> >> > >> >> > >>
> > >>> >> >> > >> >> > >> We need a working xml to compare to.
> > >>> >> >> > >> >> > >
> > >>> >> >> > >> >> > > [snip expected changes]
> > >>> >> >> > >> >> > >
> > >>> >> >> > >> >> > >
> > >>> >> >> > >> >> > >> <entry name="manufacturer">oVirt</entry>
> > >>> >> >> > >> >> > >> <entry name="product">oVirt Node</entry>
> > >>> >> >> > >> >> > >> <entry name="version">6-5.el6.centos.11.2</entry>
> > >>> >> >> > >> >> > >> - <entry name="serial">C938F077-55E2-3E50-A694-9FCB7661FD89</entry>
> > >>> >> >> > >> >> > >> + <entry name="serial">735C7A01-1F16-3CF0-AF8C-A99823E95AC0</entry>
> > >>> >> >> > >> >> > >>
> > >>> >> >> > >> >> > >> Not expected - maybe this is confusing windows?
> > >>> >> >> > >> >> > >>
> > >>> >> >> > >> >> > >> Francesco, why has the vm serial changed after moving
> > >>> >> >> > >> >> > >> disks from one storage domain to another?
> > >>> >> >> > >> >> > >
> > >>> >> >> > >> >> > > We put in serial either
> > >>> >> >> > >> >> > > 1. the UUID Engine sends to us
> > >>> >> >> > >> >> > > 2. the host UUID as returned by our getHostUUID utility function
> > >>> >> >> > >> >> > >
> > >>> >> >> > >> >> > > the latter is unlikely to change, even after this disk move.
> > >>> >> >> > >> >> >
> > >>> >> >> > >> >> > Fernando, can you describe exactly how you moved the disk?
> > >>> >> >> > >> >> >
> > >>> >> >> > >> >> > I assume that you selected the vm in the virtual machines
> > >>> >> >> > >> >> > tab, then selected disks from the sub tab, then selected
> > >>> >> >> > >> >> > move, and selected the target storage domain.
> > >>> >> >> > >> >> >
> > >>> >> >> > >> >> > Also, can you reproduce this with a new vm? (create vm
> > >>> >> >> > >> >> > with disk on nfs, stop vm, move disk to iscsi, start vm).
> > >>> >> >> > >> >> >
> > >>> >> >> > >> >> > > So the first suspect in line is Engine
> > >>> >> >> > >> >> > >
> > >>> >> >> > >> >> > > Arik, do you know if Engine is indeed supposed to
> > >>> >> >> > >> >> > > change the UUID in this flow?
> > >>> >> >> > >> >> > > That seems very surprising.
> > >>> >> >> > >> >> > >
> > >>> >> >> > >> >> > > Thanks and bests,
> > >>> >> >> > >> >> > >
> > >>> >> >> > >> >> > > --
> > >>> >> >> > >> >> > > Francesco Romani
> > >>> >> >> > >> >> > > RedHat Engineering Virtualization R & D
> > >>> >> >> > >> >> > > Phone: 8261328
> > >>> >> >> > >> >> > > IRC: fromani
> > >>> >> >> > >> >> _______________________________________________
> > >>> >> >> > >> >> Users mailing list
> > >>> >> >> > >> >> Users(a)ovirt.org
> > >>> >> >> > >> >> http://lists.ovirt.org/mailman/listinfo/users
> > >>> >> >> > >> >
> > >>> >> >> > >> >
> > >
> > >
> > >
> > >