Nir,
As requested:
[root@gamma ~]# lsblk
NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                             8:0    0   557G  0 disk
├─sda1                                                          8:1    0   500M  0 part  /boot
└─sda2                                                          8:2    0 556.5G  0 part
  ├─vg_gamma-lv_root (dm-0)                                   253:0    0    50G  0 lvm   /
  ├─vg_gamma-lv_swap (dm-1)                                   253:1    0     4G  0 lvm   [SWAP]
  └─vg_gamma-lv_home (dm-2)                                   253:2    0 502.4G  0 lvm   /home
sr0                                                            11:0    1  1024M  0 rom
sdb                                                             8:16   0     2T  0 disk
└─36589cfc000000881b9b93c2623780840 (dm-4)                    253:4    0     2T  0 mpath
sdc                                                             8:32   0     2T  0 disk
└─36589cfc00000050564002c7e51978316 (dm-3)                    253:3    0     2T  0 mpath
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-metadata (dm-5)  253:5    0   512M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-outbox (dm-6)    253:6    0   128M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-leases (dm-7)    253:7    0     2G  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-ids (dm-8)       253:8    0   128M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-inbox (dm-9)     253:9    0   128M  0 lvm
  └─3ccb7b67--8067--4315--9656--d68ba10975ba-master (dm-10)   253:10   0     1G  0 lvm
sdd                                                             8:48   0     4T  0 disk
└─36589cfc00000059ccab70662b71c47ef (dm-11)                   253:11   0     4T  0 mpath
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-metadata (dm-12) 253:12   0   512M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-ids (dm-13)      253:13   0   128M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-leases (dm-14)   253:14   0     2G  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-outbox (dm-15)   253:15   0   128M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-inbox (dm-16)    253:16   0   128M  0 lvm
  └─4861322b--352f--41c6--890a--5cbf1c2c1f01-master (dm-17)   253:17   0     1G  0 lvm
[root@gamma ~]#
Regards,
--
Fernando Fuentes
ffuentes(a)txweather.org
Can you share the output of lsblk on this host?
On Mon, Jul 18, 2016 at 3:52 AM, Fernando Fuentes <ffuentes(a)darktcp.net> wrote:
> Nir,
>
> That's odd. gamma is my iSCSI host, it's in the up state, and it has active
> VMs.
> What am I missing?
>
> Regards,
>
> --
> Fernando Fuentes
> ffuentes(a)txweather.org
>
> http://www.txweather.org
>
> On Sun, Jul 17, 2016, at 07:24 PM, Nir Soffer wrote:
>> On Sun, Jul 17, 2016 at 1:24 AM, Fernando Fuentes <ffuentes(a)darktcp.net>
>> wrote:
>> > Nir,
>> >
>> > Ok I got the uuid but I am getting the same results as before.
>> > Nothing comes up.
>> >
>> > [root@gamma ~]# pvscan --cache
>> > [root@gamma ~]# lvs -o vg_name,lv_name,tags | grep
>> > 3b7d9349-9eb1-42f8-9e04-7bbb97c02b98
>> > [root@gamma ~]#
>> >
>> > without the grep all I get is:
>> >
>> > [root@gamma ~]# lvs -o vg_name,lv_name,tags
>> > VG LV LV Tags
>> > vg_gamma lv_home
>> > vg_gamma lv_root
>> > vg_gamma lv_swap
>>
>> You are not connected to the iscsi storage domain.
>>
>> Please try this from a host in the Up state in engine.
>>
>> Nir
>>
>> >
>> > On the other hand, fdisk shows a bunch of disks, and here is one
>> > example:
>> >
>> > Disk /dev/mapper/36589cfc00000050564002c7e51978316: 2199.0 GB, 2199023255552 bytes
>> > 255 heads, 63 sectors/track, 267349 cylinders
>> > Units = cylinders of 16065 * 512 = 8225280 bytes
>> > Sector size (logical/physical): 512 bytes / 32768 bytes
>> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
>> > Disk identifier: 0x00000000
>> >
>> >
>> > Disk /dev/mapper/36589cfc000000881b9b93c2623780840: 2199.0 GB, 2199023255552 bytes
>> > 255 heads, 63 sectors/track, 267349 cylinders
>> > Units = cylinders of 16065 * 512 = 8225280 bytes
>> > Sector size (logical/physical): 512 bytes / 32768 bytes
>> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
>> > Disk identifier: 0x00000000
>> >
>> >
>> > Disk /dev/mapper/3ccb7b67--8067--4315--9656--d68ba10975ba-metadata: 536 MB, 536870912 bytes
>> > 255 heads, 63 sectors/track, 65 cylinders
>> > Units = cylinders of 16065 * 512 = 8225280 bytes
>> > Sector size (logical/physical): 512 bytes / 32768 bytes
>> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
>> > Disk identifier: 0x00000000
>> >
>> > Disk /dev/mapper/4861322b--352f--41c6--890a--5cbf1c2c1f01-master: 1073 MB, 1073741824 bytes
>> > 255 heads, 63 sectors/track, 130 cylinders
>> > Units = cylinders of 16065 * 512 = 8225280 bytes
>> > Sector size (logical/physical): 512 bytes / 32768 bytes
>> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
>> > Disk identifier: 0x00000000
>> >
>> > Regards,
>> >
>> > --
>> > Fernando Fuentes
>> > ffuentes(a)txweather.org
>> >
>> > http://www.txweather.org
>> >
>> > On Sat, Jul 16, 2016, at 04:25 PM, Fernando Fuentes wrote:
>> >> Nir,
>> >>
>> >> Ok, I'll look for it here in a few.
>> >> Thanks for your reply and help!
>> >>
>> >> --
>> >> Fernando Fuentes
>> >> ffuentes(a)txweather.org
>> >>
>> >> http://www.txweather.org
>> >>
>> >> On Sat, Jul 16, 2016, at 04:16 PM, Nir Soffer wrote:
>> >> > On Fri, Jul 15, 2016 at 3:50 PM, Fernando Fuentes <ffuentes(a)darktcp.net> wrote:
>> >> > > Nir,
>> >> > >
>> >> > > I tried to follow your steps, but I can't seem to find the ID of the
>> >> > > template.
>> >> >
>> >> > The image-uuid of the template is displayed in the Disks tab in engine.
>> >> >
>> >> > To find the volume-uuid on block storage, you can do:
>> >> >
>> >> > pvscan --cache
>> >> > lvs -o vg_name,lv_name,tags | grep image-uuid
>> >> >
>> >> > >
>> >> > > Regards,
>> >> > >
>> >> > > --
>> >> > > Fernando Fuentes
>> >> > > ffuentes(a)txweather.org
>> >> > >
>> >> > > http://www.txweather.org
>> >> > >
>> >> > > On Sun, Jul 10, 2016, at 02:15 PM, Nir Soffer wrote:
>> >> > >> On Thu, Jul 7, 2016 at 7:46 PM, Melissa Mesler <melissa(a)justmelly.com> wrote:
>> >> > >> > All, I did a test for Fernando in our ovirt environment. I created a vm
>> >> > >> > called win7melly in the nfs domain. I then migrated it to the iscsi
>> >> > >> > domain. It booted without any issue. So it has to be something with the
>> >> > >> > templates. I have attached the vdsm log for the host the vm resides on.
>> >> > >>
>> >> > >> The log shows a working vm, so it does not help much.
>> >> > >>
>> >> > >> I think that the template you copied from the nfs domain to the block
>> >> > >> domain is corrupted, or the volume metadata are incorrect.
>> >> > >>
>> >> > >> If I understand this correctly, this started when Fernando could not copy
>> >> > >> the vm disk to the block storage, and I guess the issue was that the
>> >> > >> template was missing on that storage domain. I assume that he copied the
>> >> > >> template to the block storage domain by opening the templates tab,
>> >> > >> selecting the template, and choosing copy from the menu.
>> >> > >>
>> >> > >> Let's compare the template on both the nfs and block storage domains.
>> >> > >>
>> >> > >> 1. Find the template on the nfs storage domain, using the image uuid in
>> >> > >> engine.
>> >> > >>
>> >> > >> It should be at
>> >> > >>
>> >> > >>
>> >> > >> /rhev/data-center/mnt/server:_path/domain-uuid/images/image-uuid/volume-uuid
>> >> > >>
>> >> > >> 2. Please share the output of:
>> >> > >>
>> >> > >> cat /path/to/volume.meta
>> >> > >> qemu-img info /path/to/volume
>> >> > >> qemu-img check /path/to/volume
>> >> > >>
>> >> > >> 4. Find the template on the block storage domain
>> >> > >>
>> >> > >> You should have an lv using the same volume uuid, and the image-uuid
>> >> > >> should be in the lv tags.
>> >> > >>
>> >> > >> Find it using:
>> >> > >>
>> >> > >> lvs -o vg_name,lv_name,tags | grep volume-uuid
>> >> > >>
>> >> > >> 5. Activate the lv
>> >> > >>
>> >> > >> lvchange -ay vg_name/lv_name
>> >> > >>
>> >> > >> 6. Share the output of
>> >> > >>
>> >> > >> qemu-img info /dev/vg_name/lv_name
>> >> > >> qemu-img check /dev/vg_name/lv_name
>> >> > >>
>> >> > >> 7. Deactivate the lv
>> >> > >>
>> >> > >> lvchange -an vg_name/lv_name
>> >> > >>
>> >> > >> 8. Find the lv metadata
>> >> > >>
>> >> > >> The metadata is stored in /dev/vg_name/metadata. To find the correct
>> >> > >> block, find the tag named MD_N in the lv tags you found in step 4.
>> >> > >>
>> >> > >> The block we need is located at offset N from the start of the volume.
>> >> > >>
>> >> > >> 9. Share the output of:
>> >> > >>
>> >> > >> dd if=/dev/vg_name/metadata bs=512 skip=N count=1 iflag=direct
>> >> > >>
>> >> > >> The output of this command should show the image-uuid.
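>> >> > >>
>> >> > >> For example, a hypothetical run (the slot number 42 below is only
>> >> > >> illustrative, standing in for whatever MD_N tag step 4 actually shows):
>> >> > >>
>> >> > >> lvs -o lv_name,tags vg_name | grep volume-uuid   # tags end with ...,MD_42
>> >> > >> dd if=/dev/vg_name/metadata bs=512 skip=42 count=1 iflag=direct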
>> >> > >>
>> >> > >> Nir
>> >> > >>
>> >> > >> >
>> >> > >> > - MeLLy
>> >> > >> >
>> >> > >> > On Mon, Jul 4, 2016, at 11:52 PM, Fernando Fuentes wrote:
>> >> > >> >> Nir,
>> >> > >> >>
>> >> > >> >> That's exactly how I did it Nir.
>> >> > >> >> I will test tomorrow with a new Windows VM and report back.
>> >> > >> >>
>> >> > >> >> Regards,
>> >> > >> >>
>> >> > >> >> --
>> >> > >> >> Fernando Fuentes
>> >> > >> >> ffuentes(a)txweather.org
>> >> > >> >>
>> >> > >> >> http://www.txweather.org
>> >> > >> >>
>> >> > >> >> On Mon, Jul 4, 2016, at 10:48 AM, Nir Soffer wrote:
>> >> > >> >> > On Mon, Jul 4, 2016 at 6:43 PM, Francesco Romani <fromani(a)redhat.com> wrote:
>> >> > >> >> > > ----- Original Message -----
>> >> > >> >> > >> From: "Nir Soffer" <nsoffer(a)redhat.com>
>> >> > >> >> > >> To: "Fernando Fuentes" <ffuentes(a)darktcp.net>
>> >> > >> >> > >> Cc: "Francesco Romani" <fromani(a)redhat.com>, "users" <users(a)ovirt.org>
>> >> > >> >> > >> Sent: Saturday, July 2, 2016 11:18:01 AM
>> >> > >> >> > >> Subject: Re: [ovirt-users] disk not bootable
>> >> > >> >> > >>
>> >> > >> >> > >> On Sat, Jul 2, 2016 at 1:33 AM, Fernando Fuentes <ffuentes(a)darktcp.net> wrote:
>> >> > >> >> > >> > Nir,
>> >> > >> >> > >> >
>> >> > >> >> > >> > Ok, I ran another test. This one I moved from the NFS domain to iSCSI
>> >> > >> >> > >> > and it stopped working; then I moved it back and it is still unable to
>> >> > >> >> > >> > run... The Windows VM is saying "no available boot disk".
>> >> > >> >> > >> > VM: Win7-Test
>> >> > >> >> > >> > Host: Zeta
>> >> > >> >> > >> > Info as requested: http://pastebin.com/1fSi3auz
>> >> > >> >> > >>
>> >> > >> >> > >> We need a working xml to compare to.
>> >> > >> >> > >
>> >> > >> >> > > [snip expected changes]
>> >> > >> >> > >
>> >> > >> >> > >
>> >> > >> >> > >>   <entry name="manufacturer">oVirt</entry>
>> >> > >> >> > >>   <entry name="product">oVirt Node</entry>
>> >> > >> >> > >>   <entry name="version">6-5.el6.centos.11.2</entry>
>> >> > >> >> > >> - <entry name="serial">C938F077-55E2-3E50-A694-9FCB7661FD89</entry>
>> >> > >> >> > >> + <entry name="serial">735C7A01-1F16-3CF0-AF8C-A99823E95AC0</entry>
>> >> > >> >> > >>
>> >> > >> >> > >> Not expected - maybe this is confusing Windows?
>> >> > >> >> > >>
>> >> > >> >> > >> Francesco, why has the vm serial changed after moving disks from one
>> >> > >> >> > >> storage domain to another?
>> >> > >> >> > >
>> >> > >> >> > > We put in serial either:
>> >> > >> >> > > 1. the UUID Engine sends to us, or
>> >> > >> >> > > 2. the host UUID as returned by our getHostUUID utility function.
>> >> > >> >> > >
>> >> > >> >> > > The latter is unlikely to change, even after this disk move.
>> >> > >> >> >
>> >> > >> >> > Fernando, can you describe exactly how you moved the disk?
>> >> > >> >> >
>> >> > >> >> > I assume that you selected the vm in the virtual machines tab, then
>> >> > >> >> > selected disks from the sub tab, then selected move, and selected the
>> >> > >> >> > target storage domain.
>> >> > >> >> >
>> >> > >> >> > Also, can you reproduce this with a new vm? (Create a vm with its disk
>> >> > >> >> > on nfs, stop the vm, move the disk to iscsi, start the vm.)
>> >> > >> >> >
>> >> > >> >> > > So the first suspect in line is Engine.
>> >> > >> >> > >
>> >> > >> >> > > Arik, do you know if Engine is indeed supposed to change the UUID in
>> >> > >> >> > > this flow? That seems very surprising.
>> >> > >> >> > >
>> >> > >> >> > > Thanks and bests,
>> >> > >> >> > >
>> >> > >> >> > > --
>> >> > >> >> > > Francesco Romani
>> >> > >> >> > > RedHat Engineering Virtualization R & D
>> >> > >> >> > > Phone: 8261328
>> >> > >> >> > > IRC: fromani