
Team,

After I successfully copied my template from one storage domain to another, I was able to move my VM disks from my NFS domain to my iSCSI domain. My Linux VMs, which are based on no template (Blank), moved just fine and boot just fine. On the other hand, once moved, my Windows VMs (template-based) can't boot, complaining that there is no bootable disk available. What is going on?

oVirt 3.6.6
Hosts: CentOS 6.6 x86_64
iSCSI domain on TrueNAS, attached via oVirt.

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

Fernando,

One of the VM's disks has to have the bootable flag set. See http://www.ovirt.org/images/wiki/Add_Virtual_Disk.png?1454370862

Pavel,

Thanks for your reply, but that's not the problem. The disk is active and is set to bootable. The problem presents itself only when I move Windows VM disks; all my Linux VM disks move across storage domains just fine. As a matter of fact, I moved the disk back to the original NFS domain and it started working. Could this be a bug?

Regards,

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

On Fri, Jul 1, 2016 at 7:58 PM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
> Thanks for your reply, but that's not the problem. The disk is active and is set to bootable. The problem presents itself only when I move Windows VM disks; all my Linux VM disks move across storage domains just fine.
> As a matter of fact, I moved the disk back to the original NFS domain and it started working. Could this be a bug?
Can you share the vdsm logs showing startup of the VM when the disk is on the NFS storage domain (VM starts) and on the iSCSI storage domain (VM fails)? The most interesting part of the logs is the libvirt XML describing the VM.

Nir

Nir,

Thank you for your reply.

Attached are the vdsm logs of the hosts that powered on the VM in this example. The VM name is Win7-QA5. vdsm_alpha.log is from the host that started the VM after it was moved to the iSCSI domain; vdsm_zeta.log is from the host that started the VM after it was moved back to the NFS domain. I included everything for troubleshooting purposes.

Here is the engine log that covers it as well: http://pastebin.com/3cHBNMcg

Hope this helps.

Regards,

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

Fernando,

These logs do not contain the libvirt XML. I want to see the part that starts like this:

    Thread-17656::INFO::2016-07-01 23:17:33,845::vm::2032::virt.vm::(_run) vmId=`f66da2b0-0f71-490b-a57e-eceb9be9cade`::<?xml version="1.0" encoding="utf-8"?>
    <domain type="kvm" xmlns:ovirt="http://ovirt.org/vm/tune/1.0">
        <name>test-iscsi</name>
        <uuid>f66da2b0-0f71-490b-a57e-eceb9be9cade</uuid>
        ...

The most important part is the devices section showing the disks:

    <disk device="disk" snapshot="no" type="block">
        <address bus="0" controller="0" target="0" type="drive" unit="0"/>
        <source dev="/rhev/data-center/f9374c0e-ae24-4bc1-a596-f61d5f05bc5f/5f35b5c0-17d7-4475-9125-e97f1cdb06f9/images/8f61b697-b3e4-4f63-8951-bdf9b6b39192/f409cc48-8248-4239-a4ea-66b0b1084416"/>
        <target bus="scsi" dev="sda"/>
        <serial>8f61b697-b3e4-4f63-8951-bdf9b6b39192</serial>
        <boot order="1"/>
        <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
    </disk>

Please try to find the logs (probably already rotated to vdsm.log.N.xz) containing the XML for this VM when running on NFS and on iSCSI.

Nir
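(A minimal way to locate that section, assuming the default vdsm log location; the VM name is the one from this thread:)

    # current log
    grep -n 'Win7-QA5' /var/log/vdsm/vdsm.log
    # rotated, compressed logs
    xzgrep -l 'Win7-QA5' /var/log/vdsm/vdsm.log.*.xz

Once the right file is found, the full <domain> XML follows the (_run) line for the VM.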

Nir,

I apologize. I'll look for it ASAP.

Regards,

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

Nir,

I ran another test: I moved a VM from the NFS domain to iSCSI and it stopped working; then I moved it back and it is still unable to run. The Windows VM says "no available boot disk".

VM: Win7-Test
Host: Zeta
Info as requested: http://pastebin.com/1fSi3auz

Here is an example of one that I migrated from the NFS domain to the iSCSI domain, where it stopped working:

VM: Win7-QA5
Host: Alpha
Info as requested: http://pastebin.com/BXqLFqZt

The same VM, moved back to the NFS domain, boots just fine:

VM: Win7-QA5
Host: Zeta
Info as requested: http://pastebin.com/7BA97CsC

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

On Sat, Jul 2, 2016 at 1:33 AM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
> I ran another test: I moved a VM from the NFS domain to iSCSI and it stopped working; then I moved it back and it is still unable to run. The Windows VM says "no available boot disk".
> VM: Win7-Test
> Host: Zeta
> Info as requested: http://pastebin.com/1fSi3auz
We need a working xml to compare to.
> Here is an example of one that I migrated from the NFS domain to the iSCSI domain, where it stopped working:
> VM: Win7-QA5
> Host: Alpha
> Info as requested: http://pastebin.com/BXqLFqZt

> The same VM, moved back to the NFS domain, boots just fine:
> VM: Win7-QA5
> Host: Zeta
> Info as requested: http://pastebin.com/7BA97CsC
Diffing the two XMLs shows:

    $ diff -u win-qa5-iscsi.txt win-qa5-nfs.txt
    --- win-qa5-iscsi.txt	2016-07-02 12:10:17.094202558 +0300
    +++ win-qa5-nfs.txt	2016-07-02 12:11:06.729375231 +0300
    @@ -60,13 +60,13 @@
             <readonly/>
             <serial/>
         </disk>
    -    <disk device="disk" snapshot="no" type="block">
    +    <disk device="disk" snapshot="no" type="file">

Expected: one VM is using a block-based disk and the other a file-based disk.

         <address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/>
    -    <source dev="/rhev/data-center/mnt/blockSD/4861322b-352f-41c6-890a-5cbf1c2c1f01/images/25ebd3ac-fe5a-4cbb-9f71-c6b4325231d8/763a9a47-6b29-420a-83c9-c8a887bd15df"/>
    +    <source file="/rhev/data-center/mnt/172.30.10.5:_opt_libvirtd_images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/25ebd3ac-fe5a-4cbb-9f71-c6b4325231d8/763a9a47-6b29-420a-83c9-c8a887bd15df"/>

Expected: file and block disks are mounted on different paths.

         <target bus="virtio" dev="vda"/>
         <serial>25ebd3ac-fe5a-4cbb-9f71-c6b4325231d8</serial>
         <boot order="1"/>
    -    <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
    +    <driver cache="none" error_policy="stop" io="threads" name="qemu" type="qcow2"/>

Expected: we use different io modes for file and block storage.

         </disk>
     </devices>
     <os>
    @@ -78,7 +78,7 @@
         <entry name="manufacturer">oVirt</entry>
         <entry name="product">oVirt Node</entry>
         <entry name="version">6-5.el6.centos.11.2</entry>
    -    <entry name="serial">C938F077-55E2-3E50-A694-9FCB7661FD89</entry>
    +    <entry name="serial">735C7A01-1F16-3CF0-AF8C-A99823E95AC0</entry>

Not expected - maybe this is confusing Windows?

Francesco, why has the VM serial changed after moving disks from one storage domain to another?

         <entry name="uuid">46e6d52c-2567-4f23-9a07-66f64990663a</entry>
         </system>
     </sysinfo>
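(For reference, the serial in the <sysinfo> section is what the guest sees as its SMBIOS serial number; assuming a standard Windows guest, it can be checked from inside the VM with:

    wmic bios get serialnumber

so a change there is indeed visible to Windows.)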

Nir,

I will try to get you a working XML. Any updates on your question from the team? TIA!

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

----- Original Message -----
From: "Nir Soffer" <nsoffer@redhat.com>
To: "Fernando Fuentes" <ffuentes@darktcp.net>
Cc: "Francesco Romani" <fromani@redhat.com>, "users" <users@ovirt.org>
Sent: Saturday, July 2, 2016 11:18:01 AM
Subject: Re: [ovirt-users] disk not bootable
> Not expected - maybe this is confusing Windows?
> Francesco, why has the VM serial changed after moving disks from one storage domain to another?
We put in serial either:
1. the UUID Engine sends to us, or
2. the host UUID as returned by our getHostUUID utility function.

The latter is unlikely to change, even after this disk move. So the first suspect in line is Engine.

Arik, do you know if Engine is indeed supposed to change the UUID in this flow? That seems very surprising.

Thanks and bests,

-- Francesco Romani RedHat Engineering Virtualization R & D Phone: 8261328 IRC: fromani
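(A quick way to check the host side, assuming standard oVirt hosts; the path below is the usual default, and the second command reads the hardware UUID from SMBIOS:)

    cat /etc/vdsm/vdsm.id        # persistent host id used by vdsm, if present
    dmidecode -s system-uuid     # hardware UUID from SMBIOS

Comparing these values across runs shows whether the serial came from the host or from Engine.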

On Mon, Jul 4, 2016 at 6:43 PM, Francesco Romani <fromani@redhat.com> wrote:
> We put in serial either: 1. the UUID Engine sends to us, or 2. the host UUID as returned by our getHostUUID utility function.
> The latter is unlikely to change, even after this disk move.
Fernando, can you describe exactly how you moved the disk? I assume that you selected the VM in the Virtual Machines tab, selected Disks from the sub-tab, selected Move, and selected the target storage domain.

Also, can you reproduce this with a new VM? (Create a VM with a disk on NFS, stop the VM, move the disk to iSCSI, start the VM.)

Nir,

That's exactly how I did it. I will test tomorrow with a new Windows VM and report back.

Regards,

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

All,

I did a test for Fernando in our oVirt environment. I created a VM called win7melly in the NFS domain and then migrated it to the iSCSI domain. It booted without any issue, so it has to be something with the templates. I have attached the vdsm log for the host the VM resides on.

- MeLLy

On Thu, Jul 7, 2016 at 7:46 PM, Melissa Mesler <melissa@justmelly.com> wrote:
> I did a test for Fernando in our oVirt environment. I created a VM called win7melly in the NFS domain and then migrated it to the iSCSI domain. It booted without any issue, so it has to be something with the templates. I have attached the vdsm log for the host the VM resides on.
The log shows a working VM, so it does not help much.

I think that the template you copied from the NFS domain to the block domain is corrupted, or the volume metadata are incorrect.

If I understand this correctly, this started when Fernando could not copy the VM disk to the block storage, and I guess the issue was that the template was missing on that storage domain. I assume that he copied the template to the block storage domain by opening the Templates tab, selecting the template, and choosing Copy from the menu.

Let's compare the template on both the NFS and the block storage domain.

1. Find the template on the NFS storage domain, using the image uuid shown in engine. It should be at:

    /rhev/data-center/mnt/server:_path/domain-uuid/images/image-uuid/volume-uuid

2. Please share the output of:

    cat /path/to/volume.meta
    qemu-img info /path/to/volume
    qemu-img check /path/to/volume

3. Find the template on the block storage domain. You should have an lv using the same volume uuid, with the image-uuid in the lv tags. Find it using:

    lvs -o vg_name,lv_name,tags | grep volume-uuid

4. Activate the lv:

    lvchange -ay vg_name/lv_name

5. Share the output of:

    qemu-img info /dev/vg_name/lv_name
    qemu-img check /dev/vg_name/lv_name

6. Deactivate the lv:

    lvchange -an vg_name/lv_name

7. Find the lv metadata. The metadata is stored in /dev/vg_name/metadata. To find the correct block, find the tag named MD_N in the lv tags you found in step 3. The block we need is located at offset N from the start of the volume.

8. Share the output of:

    dd if=/dev/vg_name/metadata bs=512 skip=N count=1 iflag=direct

The output of this command should show the image-uuid.

Nir
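(Beyond the metadata, the actual template content on the two domains can also be compared; a minimal sketch, with placeholder paths matching the steps above, assuming qemu-img on the host and enough free space in /tmp:)

    # NFS copy -> raw image (placeholder path from step 1)
    qemu-img convert -O raw /rhev/data-center/mnt/server:_path/domain-uuid/images/image-uuid/volume-uuid /tmp/nfs-template.raw

    # block copy -> raw image (lv from step 3, activated as in step 4)
    lvchange -ay vg_name/lv_name
    qemu-img convert -O raw /dev/vg_name/lv_name /tmp/block-template.raw
    lvchange -an vg_name/lv_name

    # identical checksums mean the copied content itself is intact
    md5sum /tmp/nfs-template.raw /tmp/block-template.raw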

Nir,

I am on it and will reply with the requested info ASAP.

Regards,

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

Nir,

I tried to follow your steps, but I can't seem to find the ID of the template.

Regards,

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

On Fri, Jul 15, 2016 at 3:50 PM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
> I tried to follow your steps, but I can't seem to find the ID of the template.
The image-uuid of the template is displayed in the Disks tab in engine.

To find the volume-uuid on block storage, you can do:

    pvscan --cache
    lvs -o vg_name,lv_name,tags | grep image-uuid
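(For orientation: on a vdsm block domain the lv tags encode the image uuid (IU_...), the parent volume (PU_...), and the metadata slot (MD_N). Hypothetical output - the uuids and slot number below are placeholders for illustration:)

    # lvs -o vg_name,lv_name,tags | grep image-uuid
    domain-uuid  volume-uuid  IU_image-uuid,MD_42,PU_00000000-0000-0000-0000-000000000000

Here the MD_42 tag would make N=42 in the dd command from the earlier steps.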
Regards,
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Sun, Jul 10, 2016, at 02:15 PM, Nir Soffer wrote:
On Thu, Jul 7, 2016 at 7:46 PM, Melissa Mesler <melissa@justmelly.com> wrote:
All, I did a test for Fernando in our ovirt environment. I created a vm called win7melly in the nfs domain. I then migrated it to the iscsi domain. It booted without any issue. So it has to be something with the templates. I have attached the vdsm log for the host the vm resides on.
The log show a working vm, so it does not help much.
I think that the template you copied from the nfs domain to the block domain is corrupted, or the volume metadata are incorrect.
If I understand this correctly, this started when Fernando could not copy the vm disk to the block storage, and I guess the issue was that the template was missing on that storage domain. I assume that he copied the template to the block storage domain by opening the templates tab, selecting the template, and choosing copy from the menu.
Lets compare the template on both nfs and block storage domain.
1. Find the template on the nfs storage domain, using the image uuid in engine.
It should be at
/rhev/data-center/mnt/server:_path/domain-uuid/images/image-uuid/volume-uuid
2. Please share the output of:
cat /path/to/volume.meta qemu-img info /path/to/volume qemu-img check /path/to/volume
4. Find the template on the block storage domain
You should have an lv using the same volume uuid and the image-uuid should be in the lv tag.
Find it using:
lvs -o vg_name,lv_name,tags | grep volume-uuid
5. Activate the lv
lvchange -ay vg_name/lv_name
6. Share the output of
qemu-img info /dev/vg_name/lv_name qemu-img check /dev/vg_name/lv_name
7. Deactivate the lv
lvchange -an vg_name/lv_name
8. Find the lv metadata
The metadata is stored in /dev/vg_name/metadata. To find the correct block, find the tag named MD_N in the lv tags you found in step 4
The block we need is located at offset N from start of volume.
9. Share the output of:
dd if=/dev/vg_name/metadata bs=512 skip=N count=1 iflag=direct
The output of this command should show the image-uuid.
Nir
- MeLLy
On Mon, Jul 4, 2016, at 11:52 PM, Fernando Fuentes wrote:
Nir,
That's exactly how I did it Nir. I will test tomorrow with a new Windows VM and report back.
Regards,
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Mon, Jul 4, 2016, at 10:48 AM, Nir Soffer wrote:
On Mon, Jul 4, 2016 at 6:43 PM, Francesco Romani <fromani@redhat.com> wrote:
----- Original Message ----- > From: "Nir Soffer" <nsoffer@redhat.com> > To: "Fernando Fuentes" <ffuentes@darktcp.net> > Cc: "Francesco Romani" <fromani@redhat.com>, "users" <users@ovirt.org> > Sent: Saturday, July 2, 2016 11:18:01 AM > Subject: Re: [ovirt-users] disk not bootable > > On Sat, Jul 2, 2016 at 1:33 AM, Fernando Fuentes <ffuentes@darktcp.net> > wrote: > > Nir, > > > > Ok I ran another test and this one I moved from NFS domain to iSCSI and > > stop working than I moved it back and still unable to run... Windows VM > > is saying "no available boot disk" > > VM: Win7-Test > > Host: Zeta > > Info as requested: http://pastebin.com/1fSi3auz > > We need a working xml to compare to.
[snip expected changes]
> <entry name="manufacturer">oVirt</entry> > <entry name="product">oVirt Node</entry> > <entry name="version">6-5.el6.centos.11.2</entry> > - <entry name="serial">C938F077-55E2-3E50-A694-9FCB7661FD89</entry> > + <entry name="serial">735C7A01-1F16-3CF0-AF8C-A99823E95AC0</entry> > > Not expected - maybe this is confusing windows? > > Francesco, why vm serial has changed after moving disks from one storage > domain > to another?
We put in serial either:
1. the UUID Engine sends to us
2. the host UUID as returned by our getHostUUID utility function
the latter is unlikely to change, even after this disk move.
Fernando, can you describe exactly how you moved the disk?
I assume that you selected the vm in the virtual machines tab, then selected disks from the sub tab, then selected move, and selected the target storage domain.
Also, can you reproduce this with a new vm? (create a vm with a disk on nfs, stop the vm, move the disk to iscsi, start the vm).
So the first suspect in line is Engine
Arik, do you know if Engine is indeed supposed to change the UUID in this flow? That seems very surprising.
Thanks and bests,
-- Francesco Romani RedHat Engineering Virtualization R & D Phone: 8261328 IRC: fromani

Nir,

Ok I'll look for it here in a few. Thanks for your reply and help!

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

On Sat, Jul 16, 2016, at 04:16 PM, Nir Soffer wrote:
On Fri, Jul 15, 2016 at 3:50 PM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
Nir,
I tried to follow your steps but I can't seem to find the ID of the template.
The image-uuid of the template is displayed in the Disks tab in engine.
To find the volume-uuid on block storage, you can do:
pvscan --cache
lvs -o vg_name,lv_name,tags | grep image-uuid
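For orientation, a made-up sample of what a hit looks like (all UUIDs below are placeholders, not Fernando's): when the host can see the block domain, the grep should return the domain VG, the lv named by the volume-uuid, and a tag carrying the image-uuid, which in 3.x vdsm is an IU_ prefix, if memory serves:

$ lvs -o vg_name,lv_name,tags | grep <image-uuid>
  <domain-uuid> <volume-uuid> IU_<image-uuid>,MD_7,PU_00000000-0000-0000-0000-000000000000

An empty result typically means the host is not seeing the domain's VG at all.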

Nir,

Ok I got the uuid but I am getting the same results as before. Nothing comes up.

[root@gamma ~]# pvscan --cache
[root@gamma ~]# lvs -o vg_name,lv_name,tags | grep 3b7d9349-9eb1-42f8-9e04-7bbb97c02b98
[root@gamma ~]#

Without the grep all I get is:

[root@gamma ~]# lvs -o vg_name,lv_name,tags
  VG       LV      LV Tags
  vg_gamma lv_home
  vg_gamma lv_root
  vg_gamma lv_swap

On the other hand fdisk shows a bunch of disks; here is one example:

Disk /dev/mapper/36589cfc00000050564002c7e51978316: 2199.0 GB, 2199023255552 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 32768 bytes
I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/36589cfc000000881b9b93c2623780840: 2199.0 GB, 2199023255552 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 32768 bytes
I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/3ccb7b67--8067--4315--9656--d68ba10975ba-metadata: 536 MB, 536870912 bytes
255 heads, 63 sectors/track, 65 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 32768 bytes
I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/4861322b--352f--41c6--890a--5cbf1c2c1f01-master: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 32768 bytes
I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
Disk identifier: 0x00000000

Regards,

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

On Sun, Jul 17, 2016 at 1:24 AM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
> Nir,
>
> Ok I got the uuid but I am getting the same results as before. Nothing comes up.
>
> [root@gamma ~]# pvscan --cache
> [root@gamma ~]# lvs -o vg_name,lv_name,tags | grep 3b7d9349-9eb1-42f8-9e04-7bbb97c02b98
> [root@gamma ~]#
>
> Without the grep all I get is:
>
> [root@gamma ~]# lvs -o vg_name,lv_name,tags
>   VG       LV      LV Tags
>   vg_gamma lv_home
>   vg_gamma lv_root
>   vg_gamma lv_swap

You are not connected to the iscsi storage domain.

Please try this from a host in up state in engine.

Nir
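Before rerunning the lvs on gamma, a quick connectivity checklist may help. This is a generic sketch using standard iscsi/multipath/lvm tools only; nothing here is oVirt-specific:

iscsiadm -m session        # should list the TrueNAS target; prints an error if there is no session
multipath -ll              # the domain's LUNs should show up as mpath devices
pvscan --cache             # refresh the lvm cache once the session is confirmed
pvs -o pv_name,vg_name     # each mpath device should appear as a PV in a domain VG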

Nir,

That's odd. gamma is my iscsi host, it's in up state, and it has active VMs. What am I missing?

Regards,

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

Can you share output of lsblk on this host?

Nir,

As requested:

[root@gamma ~]# lsblk
NAME                                                        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                           8:0    0   557G  0 disk
├─sda1                                                        8:1    0   500M  0 part  /boot
└─sda2                                                        8:2    0 556.5G  0 part
  ├─vg_gamma-lv_root (dm-0)                                 253:0    0    50G  0 lvm   /
  ├─vg_gamma-lv_swap (dm-1)                                 253:1    0     4G  0 lvm   [SWAP]
  └─vg_gamma-lv_home (dm-2)                                 253:2    0 502.4G  0 lvm   /home
sr0                                                          11:0    1  1024M  0 rom
sdb                                                           8:16   0     2T  0 disk
└─36589cfc000000881b9b93c2623780840 (dm-4)                  253:4    0     2T  0 mpath
sdc                                                           8:32   0     2T  0 disk
└─36589cfc00000050564002c7e51978316 (dm-3)                  253:3    0     2T  0 mpath
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-metadata (dm-5) 253:5   0   512M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-outbox (dm-6)   253:6   0   128M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-leases (dm-7)   253:7   0     2G  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-ids (dm-8)      253:8   0   128M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-inbox (dm-9)    253:9   0   128M  0 lvm
  └─3ccb7b67--8067--4315--9656--d68ba10975ba-master (dm-10)  253:10  0     1G  0 lvm
sdd                                                           8:48   0     4T  0 disk
└─36589cfc00000059ccab70662b71c47ef (dm-11)                 253:11   0     4T  0 mpath
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-metadata (dm-12) 253:12 0   512M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-ids (dm-13)     253:13  0   128M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-leases (dm-14)  253:14  0     2G  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-outbox (dm-15)  253:15  0   128M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-inbox (dm-16)   253:16  0   128M  0 lvm
  └─4861322b--352f--41c6--890a--5cbf1c2c1f01-master (dm-17)  253:17  0     1G  0 lvm
[root@gamma ~]#

Regards,

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

> >> >> > storage domain.
> >> >> >
> >> >> > Also, can you reproduce this with a new vm? (create vm with disk on nfs,
> >> >> > stop vm, move disk to iscsi, start vm).
> >> >> >
> >> >> > > So the first suspect in line is Engine
> >> >> > >
> >> >> > > Arik, do you know if Engine is indeed supposed to change the UUID in this flow?
> >> >> > > That seems very surprising.
> >> >> > >
> >> >> > > Thanks and bests,
> >> >> > >
> >> >> > > --
> >> >> > > Francesco Romani
> >> >> > > RedHat Engineering Virtualization R & D
> >> >> > > Phone: 8261328
> >> >> > > IRC: fromani

Nir,

After some playing around with pvscan I was able to get all of the needed information.

Please see:

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

On Mon, Jul 18, 2016, at 02:30 PM, Nir Soffer wrote:
On Mon, Jul 18, 2016 at 6:48 PM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
Nir,
As requested:
[root@gamma ~]# lsblk
NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                             8:0    0   557G  0 disk
├─sda1                                                          8:1    0   500M  0 part  /boot
└─sda2                                                          8:2    0 556.5G  0 part
  ├─vg_gamma-lv_root (dm-0)                                   253:0    0    50G  0 lvm   /
  ├─vg_gamma-lv_swap (dm-1)                                   253:1    0     4G  0 lvm   [SWAP]
  └─vg_gamma-lv_home (dm-2)                                   253:2    0 502.4G  0 lvm   /home
sr0                                                            11:0    1  1024M  0 rom
sdb                                                             8:16   0     2T  0 disk
└─36589cfc000000881b9b93c2623780840 (dm-4)                    253:4    0     2T  0 mpath
sdc                                                             8:32   0     2T  0 disk
└─36589cfc00000050564002c7e51978316 (dm-3)                    253:3    0     2T  0 mpath
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-metadata (dm-5)  253:5    0   512M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-outbox (dm-6)    253:6    0   128M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-leases (dm-7)    253:7    0     2G  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-ids (dm-8)       253:8    0   128M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-inbox (dm-9)     253:9    0   128M  0 lvm
  └─3ccb7b67--8067--4315--9656--d68ba10975ba-master (dm-10)   253:10   0     1G  0 lvm
sdd                                                             8:48   0     4T  0 disk
└─36589cfc00000059ccab70662b71c47ef (dm-11)                   253:11   0     4T  0 mpath
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-metadata (dm-12) 253:12   0   512M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-ids (dm-13)      253:13   0   128M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-leases (dm-14)   253:14   0     2G  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-outbox (dm-15)   253:15   0   128M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-inbox (dm-16)    253:16   0   128M  0 lvm
  └─4861322b--352f--41c6--890a--5cbf1c2c1f01-master (dm-17)   253:17   0     1G  0 lvm
So you have 2 storage domains:
- 3ccb7b67-8067-4315-9656-d68ba10975ba
- 4861322b-352f-41c6-890a-5cbf1c2c1f01
But most likely both of them are not active now.
Can you share the output of:
iscsiadm -m session
On a system connected to iscsi storage you will see something like:
# iscsiadm -m session tcp: [5] 10.35.0.99:3260,1 iqn.2003-01.org.dumbo.target1 (non-flash)
The special lvs (ids, leases, ...) should be active, and you should also see the regular disk lvs (used for vm disks and their snapshots).
Here is an example from machine connected to active iscsi domain:
# lvs
  LV                                   VG                                   Attr       LSize
  27c4c795-bca4-4d7b-9b40-cda9098790f5 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   1.00g
  35be1f52-5b28-4c90-957a-710dbbb8f13f 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   8.00g
  36d9b41b-4b01-4fc2-8e93-ccf79af0f766 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 128.00m
  4fda3b44-27a5-4ce4-b8c3-66744aa9937b 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   2.12g
  c2e78f72-d499-44f0-91f5-9930a599dc87 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 128.00m
  d49919b4-30fc-440f-9b21-3367ddfdf396 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   4.00g
  f3b10280-43ed-4772-b122-18c92e098171 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   1.00g
  f409cc48-8248-4239-a4ea-66b0b1084416 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   1.00g
  ids                                  5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-ao---- 128.00m
  inbox                                5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 128.00m
  leases                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a-----   2.00g
  master                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a-----   1.00g
  metadata                             5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 512.00m
  outbox                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 128.00m
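As a side note on the lvs example above: whether an lv is active can be read from the fifth character of the Attr column ('a' means active, as in -wi-ao----). A minimal sketch for inspecting and manually activating the special lvs, assuming the domain VG is named after its UUID as in the lsblk output above:

  # show activation state for the domain VG
  lvs -o vg_name,lv_name,lv_attr 3ccb7b67-8067-4315-9656-d68ba10975ba

  # activate/deactivate one lv by hand; vdsm normally does this when the
  # domain is activated, so this is for inspection only
  lvchange -ay 3ccb7b67-8067-4315-9656-d68ba10975ba/ids
  lvchange -an 3ccb7b67-8067-4315-9656-d68ba10975ba/ids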
[root@gamma ~]#
Regards,
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Mon, Jul 18, 2016, at 07:43 AM, Nir Soffer wrote:
Can you share output of lsblk on this host?
On Mon, Jul 18, 2016 at 3:52 AM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
Nir,
That's odd. gamma is my iscsi host, it's in up state and it has active VMs. What am I missing?
Regards,
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Sun, Jul 17, 2016, at 07:24 PM, Nir Soffer wrote:
On Sun, Jul 17, 2016 at 1:24 AM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
Nir,
Ok I got the uuid but I am getting the same results as before. Nothing comes up.
[root@gamma ~]# pvscan --cache
[root@gamma ~]# lvs -o vg_name,lv_name,tags | grep 3b7d9349-9eb1-42f8-9e04-7bbb97c02b98
[root@gamma ~]#
without the grep all I get is:
[root@gamma ~]# lvs -o vg_name,lv_name,tags
  VG       LV      LV Tags
  vg_gamma lv_home
  vg_gamma lv_root
  vg_gamma lv_swap
You are not connected to the iscsi storage domain.
Please try this from a host in up state in engine.
Nir
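If a host that should be connected shows no iscsi session, a manual check with iscsiadm can confirm whether the target is reachable at all. A sketch with placeholder portal and target values; on an oVirt host vdsm manages the iscsi logins itself, so this is for debugging only, not a substitute for activating the domain in engine:

  # no output here means no iscsi session at all
  iscsiadm -m session

  # discover targets on the portal and log in (both values are placeholders)
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
  iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:target0 -p 192.168.1.10:3260 -l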
On the other hand, an fdisk shows a bunch of disks; here are a few examples:
Disk /dev/mapper/36589cfc00000050564002c7e51978316: 2199.0 GB, 2199023255552 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 32768 bytes
I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/36589cfc000000881b9b93c2623780840: 2199.0 GB, 2199023255552 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 32768 bytes
I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/3ccb7b67--8067--4315--9656--d68ba10975ba-metadata: 536 MB, 536870912 bytes
255 heads, 63 sectors/track, 65 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 32768 bytes
I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/4861322b--352f--41c6--890a--5cbf1c2c1f01-master: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 32768 bytes
I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
Disk identifier: 0x00000000
Regards,
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Sat, Jul 16, 2016, at 04:25 PM, Fernando Fuentes wrote:
> Nir,
>
> Ok I'll look for it here in a few.
> Thanks for your reply and help!
>
> --
> Fernando Fuentes
> ffuentes@txweather.org
> http://www.txweather.org
>
> On Sat, Jul 16, 2016, at 04:16 PM, Nir Soffer wrote:
> > On Fri, Jul 15, 2016 at 3:50 PM, Fernando Fuentes <ffuentes@darktcp.net>
> > wrote:
> > > Nir,
> > >
> > > I try to follow your steps but I can't seem to find the ID of the
> > > template.
> >
> > The image-uuid of the template is displayed in the Disks tab in engine.
> >
> > To find the volume-uuid on block storage, you can do:
> >
> > pvscan --cache
> > lvs -o vg_name,lv_name,tags | grep image-uuid
> >
> > [snip quoted thread]
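A note on the pvscan --cache step suggested above, since "playing around with pvscan" is what finally made the domain VGs visible on gamma: pvscan --cache makes LVM rescan devices and refresh its cached view, so it is worth running before concluding that the lvs are missing. A minimal check sequence, reusing the template image-uuid from earlier in the thread:

  # rescan devices and refresh LVM's cache, then list PVs and their VGs
  pvscan --cache
  pvs -o pv_name,vg_name

  # the template volume should now show up with the image-uuid in its tags
  lvs -o vg_name,lv_name,tags | grep 3b7d9349-9eb1-42f8-9e04-7bbb97c02b98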

Oops... forgot the link: http://pastebin.com/LereJgyw

The requested info is in the pastebin.

Regards,

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

On Mon, Jul 18, 2016, at 03:16 PM, Fernando Fuentes wrote:
[snip quoted thread]

On Mon, Jul 18, 2016 at 11:16 PM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
Oops... forgot the link:

The requested info is in the pastebin.
So the issue is clear now: the template on NFS is using raw format, and on block storage, qcow format.

NFS:

[root@zeta ~]# cat /rhev/data-center/mnt/172.30.10.5\:_opt_libvirtd_images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-
...
FORMAT=RAW
...

[root@alpha ~]# qemu-img info /opt/libvirtd/images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-363338ee0c0e
image: /opt/libvirtd/images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-363338ee0c0e
...
file format: raw
...

iSCSI:

[root@zeta ~]# qemu-img info /dev/0ef17024-0eae-4590-8eea-6ec8494fe223/25b6b0fe-d416-458f-b89f-363338ee0c0e
image: /dev/0ef17024-0eae-4590-8eea-6ec8494fe223/25b6b0fe-d416-458f-b89f-363338ee0c0e
...
file format: qcow2
...

[root@zeta ~]# dd if=/dev/0ef17024-0eae-4590-8eea-6ec8494fe223/metadata bs=512 skip=4 count=1 iflag=direct
...
FORMAT=COW
...

This format conversion is expected, as we don't support raw/sparse on block storage.

It looks like the vm is started with the template disk as "raw" format, which is expected to fail when the format is actually "qcow2". The guest will see the qcow headers instead of the actual data.

The next step to debug this is:

1. Copy a disk using this template to the block storage domain
2. Create a new vm using this disk
3. Start the vm

Does it start? If not, attach engine and vdsm logs from this timeframe.

If this works, you can try:

1. Move the vm disk from NFS to block storage
2. Start the vm

Again, if it does not work, attach engine and vdsm logs.

Nir
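To check what format a volume actually has versus what the storage metadata records for it, it is enough to reuse the commands from earlier in the thread. A sketch with placeholder names (vg_name is the domain UUID, volume-uuid the volume to inspect, and N the MD_N offset from the lv tags):

  lvchange -ay vg_name/volume-uuid

  # what the data actually is (qcow2 in the broken case above)
  qemu-img info /dev/vg_name/volume-uuid

  # what vdsm's volume metadata claims (FORMAT=COW, i.e. qcow, above)
  dd if=/dev/vg_name/metadata bs=512 skip=N count=1 iflag=direct | grep FORMAT

  lvchange -an vg_name/volume-uuid

If the vm's libvirt xml then declares this disk as raw while qemu-img reports qcow2, the guest reads qcow2 headers where it expects a boot sector, which matches the "no bootable disk" symptom.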
[snip quoted thread]

Nir,

Thanks for all the help! I am on it and will reply with the requested info ASAP.

Regards,

-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org

On Tue, Jul 19, 2016, at 07:16 AM, Nir Soffer wrote:
On Mon, Jul 18, 2016 at 11:16 PM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
Ops... forgot the link:
The requested infor is in the pastebin.
So the issue is clear now, the template on NFS is using raw format, and on block stoage, qcow format:
NFS:
root@zeta ~]# cat /rhev/data-center/mnt/172.30.10.5\:_opt_libvirtd_images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f- ... FORMAT=RAW ...
[root@alpha ~]# qemu-img info /opt/libvirtd/images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-363338ee0c0e image: /opt/libvirtd/images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-363338ee0c0e ... file format: raw ...
iSCSI:
[root@zeta ~]# qemu-img info /dev/0ef17024-0eae-4590-8eea-6ec8494fe223/25b6b0fe-d416-458f-b89f-363338ee0c0e image: /dev/0ef17024-0eae-4590-8eea-6ec8494fe223/25b6b0fe-d416-458f-b89f-363338ee0c0e ... file format: qcow2 ...
[root@zeta ~]# dd if=/dev/0ef17024-0eae-4590-8eea-6ec8494fe223/metadata bs=512 skip=4 count=1 iflag=direct ... FORMAT=COW ...
This format conversion is expected, as we don't support raw/sparse on block storage.
It looks like the vm is started with the template disk as "raw" format, which is expected to fail when the format is actually "qcow2". The guest will see the qcow headers instead of the actual data.
The next step to debug this is:
1. Copy a disk using this template to the block storage domain 2. Create a new vm using this disk 3. Start the vm
Does it start? if not, attach engine and vdsm logs from this timefame.
If this works, you can try:
1. Move vm disk from NFS to block storage 2. Start the vm
Again, it it does not work, add engine and vdsm logs.
Nir
Regards,
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Mon, Jul 18, 2016, at 03:16 PM, Fernando Fuentes wrote:
Nir,
After some playing around with pvscan I was able to get all of the need it information.
Please see:
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Mon, Jul 18, 2016, at 02:30 PM, Nir Soffer wrote:
On Mon, Jul 18, 2016 at 6:48 PM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
Nir,
As requested:
[root@gamma ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 557G 0 disk ├─sda1 8:1 0 500M 0 part /boot └─sda2 8:2 0 556.5G 0 part ├─vg_gamma-lv_root (dm-0) 253:0 0 50G 0 lvm / ├─vg_gamma-lv_swap (dm-1) 253:1 0 4G 0 lvm [SWAP] └─vg_gamma-lv_home (dm-2) 253:2 0 502.4G 0 lvm /home sr0 11:0 1 1024M 0 rom sdb 8:16 0 2T 0 disk └─36589cfc000000881b9b93c2623780840 (dm-4) 253:4 0 2T 0 mpath sdc 8:32 0 2T 0 disk └─36589cfc00000050564002c7e51978316 (dm-3) 253:3 0 2T 0 mpath ├─3ccb7b67--8067--4315--9656--d68ba10975ba-metadata (dm-5) 253:5 0 512M 0 lvm ├─3ccb7b67--8067--4315--9656--d68ba10975ba-outbox (dm-6) 253:6 0 128M 0 lvm ├─3ccb7b67--8067--4315--9656--d68ba10975ba-leases (dm-7) 253:7 0 2G 0 lvm ├─3ccb7b67--8067--4315--9656--d68ba10975ba-ids (dm-8) 253:8 0 128M 0 lvm ├─3ccb7b67--8067--4315--9656--d68ba10975ba-inbox (dm-9) 253:9 0 128M 0 lvm └─3ccb7b67--8067--4315--9656--d68ba10975ba-master (dm-10) 253:10 0 1G 0 lvm sdd 8:48 0 4T 0 disk └─36589cfc00000059ccab70662b71c47ef (dm-11) 253:11 0 4T 0 mpath ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-metadata (dm-12) 253:12 0 512M 0 lvm ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-ids (dm-13) 253:13 0 128M 0 lvm ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-leases (dm-14) 253:14 0 2G 0 lvm ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-outbox (dm-15) 253:15 0 128M 0 lvm ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-inbox (dm-16) 253:16 0 128M 0 lvm └─4861322b--352f--41c6--890a--5cbf1c2c1f01-master (dm-17) 253:17 0 1G 0 lvm
So you have 2 storage domains:
- 3ccb7b67-8067-4315-9656-d68ba10975ba - 4861322b-352f-41c6-890a-5cbf1c2c1f01
But most likely both of them are not active now.
Can you share the output of:
iscsiadm -m session
On a system connected to iscsi storage you will see something like:
# iscsiadm -m session tcp: [5] 10.35.0.99:3260,1 iqn.2003-01.org.dumbo.target1 (non-flash)
The special lvs (ids, leases, ...) should be active, and you should see also regular disks lvs (used for snapshot for vm disks).
Here is an example from machine connected to active iscsi domain:
# lvs LV VG Attr LSize 27c4c795-bca4-4d7b-9b40-cda9098790f5 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 1.00g 35be1f52-5b28-4c90-957a-710dbbb8f13f 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 8.00g 36d9b41b-4b01-4fc2-8e93-ccf79af0f766 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 128.00m 4fda3b44-27a5-4ce4-b8c3-66744aa9937b 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 2.12g c2e78f72-d499-44f0-91f5-9930a599dc87 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 128.00m d49919b4-30fc-440f-9b21-3367ddfdf396 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 4.00g f3b10280-43ed-4772-b122-18c92e098171 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 1.00g f409cc48-8248-4239-a4ea-66b0b1084416 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 1.00g ids 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-ao---- 128.00m inbox 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 128.00m leases 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 2.00g master 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 1.00g metadata 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 512.00m outbox 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 128.00m
[root@gamma ~]#
Regards,
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Mon, Jul 18, 2016, at 07:43 AM, Nir Soffer wrote:
Can you share output of lsblk on this host?
On Mon, Jul 18, 2016 at 3:52 AM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
Nir,
That's odd. gamma is my iscsi host, its in up state and it has active VM's. What am I missing?
Regards,
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Sun, Jul 17, 2016, at 07:24 PM, Nir Soffer wrote:
On Sun, Jul 17, 2016 at 1:24 AM, Fernando Fuentes <ffuentes@darktcp.net> wrote: > Nir, > > Ok I got the uuid but I am getting the same results as before. > Nothing comes up. > > [root@gamma ~]# pvscan --cache > [root@gamma ~]# lvs -o vg_name,lv_name,tags | grep > 3b7d9349-9eb1-42f8-9e04-7bbb97c02b98 > [root@gamma ~]# > > without the grep all I get is: > > [root@gamma ~]# lvs -o vg_name,lv_name,tags > VG LV LV Tags > vg_gamma lv_home > vg_gamma lv_root > vg_gamma lv_swap
You are not connected to the iscsi storage domain.
Please try this from a host in up state in engine.
Nir
> > On the other hand an fdisk shows a bunch of disks and here is one > example: > > Disk /dev/mapper/36589cfc00000050564002c7e51978316: 2199.0 GB, > 2199023255552 bytes > 255 heads, 63 sectors/track, 267349 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes > Sector size (logical/physical): 512 bytes / 32768 bytes > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes > Disk identifier: 0x00000000 > > > Disk /dev/mapper/36589cfc000000881b9b93c2623780840: 2199.0 GB, > 2199023255552 bytes > 255 heads, 63 sectors/track, 267349 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes > Sector size (logical/physical): 512 bytes / 32768 bytes > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes > Disk identifier: 0x00000000 > > > Disk /dev/mapper/3ccb7b67--8067--4315--9656--d68ba10975ba-metadata: > 536 > MB, 536870912 bytes > 255 heads, 63 sectors/track, 65 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes > Sector size (logical/physical): 512 bytes / 32768 bytes > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes > Disk identifier: 0x00000000 > > Disk /dev/mapper/4861322b--352f--41c6--890a--5cbf1c2c1f01-master: > 1073 > MB, 1073741824 bytes > 255 heads, 63 sectors/track, 130 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes > Sector size (logical/physical): 512 bytes / 32768 bytes > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes > Disk identifier: 0x00000000 > > Regards, > > -- > Fernando Fuentes > ffuentes@txweather.org > http://www.txweather.org > > On Sat, Jul 16, 2016, at 04:25 PM, Fernando Fuentes wrote: >> Nir, >> >> Ok ill look for it here in a few. >> Thanks for your reply and help! >> >> -- >> Fernando Fuentes >> ffuentes@txweather.org >> http://www.txweather.org >> >> On Sat, Jul 16, 2016, at 04:16 PM, Nir Soffer wrote: >> > On Fri, Jul 15, 2016 at 3:50 PM, Fernando Fuentes >> > <ffuentes@darktcp.net> >> > wrote: >> > > Nir, >> > > >> > > I try to follow your steps but I cant seem to find the ID of >> > > the >> > > template. >> > >> > The image-uuid of the template is displayed in the Disks tab in >> > engine. >> > >> > To find the volume-uuid on block storage, you can do: >> > >> > pvscan --cache >> > lvs -o vg_name,lv_name,tags | grep image-uuid >> > >> > > >> > > Regards, >> > > >> > > -- >> > > Fernando Fuentes >> > > ffuentes@txweather.org >> > > http://www.txweather.org >> > > >> > > On Sun, Jul 10, 2016, at 02:15 PM, Nir Soffer wrote: >> > >> On Thu, Jul 7, 2016 at 7:46 PM, Melissa Mesler >> > >> <melissa@justmelly.com> >> > >> wrote: >> > >> > All, I did a test for Fernando in our ovirt environment. I >> > >> > created a vm >> > >> > called win7melly in the nfs domain. I then migrated it to >> > >> > the iscsi >> > >> > domain. It booted without any issue. So it has to be >> > >> > something with the >> > >> > templates. I have attached the vdsm log for the host the vm >> > >> > resides on. >> > >> >> > >> The log show a working vm, so it does not help much. >> > >> >> > >> I think that the template you copied from the nfs domain to >> > >> the block >> > >> domain is >> > >> corrupted, or the volume metadata are incorrect. >> > >> >> > >> If I understand this correctly, this started when Fernando >> > >> could not copy >> > >> the vm >> > >> disk to the block storage, and I guess the issue was that the >> > >> template >> > >> was missing >> > >> on that storage domain. 
I assume that he copied the template >> > >> to the >> > >> block storage >> > >> domain by opening the templates tab, selecting the template, >> > >> and choosing >> > >> copy >> > >> from the menu. >> > >> >> > >> Lets compare the template on both nfs and block storage >> > >> domain. >> > >> >> > >> 1. Find the template on the nfs storage domain, using the >> > >> image uuid in >> > >> engine. >> > >> >> > >> It should be at >> > >> >> > >> >> > >> /rhev/data-center/mnt/server:_path/domain-uuid/images/image-uuid/volume-uuid >> > >> >> > >> 2. Please share the output of: >> > >> >> > >> cat /path/to/volume.meta >> > >> qemu-img info /path/to/volume >> > >> qemu-img check /path/to/volume >> > >> >> > >> 4. Find the template on the block storage domain >> > >> >> > >> You should have an lv using the same volume uuid and the >> > >> image-uuid >> > >> should be in the lv tag. >> > >> >> > >> Find it using: >> > >> >> > >> lvs -o vg_name,lv_name,tags | grep volume-uuid >> > >> >> > >> 5. Activate the lv >> > >> >> > >> lvchange -ay vg_name/lv_name >> > >> >> > >> 6. Share the output of >> > >> >> > >> qemu-img info /dev/vg_name/lv_name >> > >> qemu-img check /dev/vg_name/lv_name >> > >> >> > >> 7. Deactivate the lv >> > >> >> > >> lvchange -an vg_name/lv_name >> > >> >> > >> 8. Find the lv metadata >> > >> >> > >> The metadata is stored in /dev/vg_name/metadata. To find the >> > >> correct >> > >> block, >> > >> find the tag named MD_N in the lv tags you found in step 4 >> > >> >> > >> The block we need is located at offset N from start of volume. >> > >> >> > >> 9. Share the output of: >> > >> >> > >> dd if=/dev/vg_name/metadata bs=512 skip=N count=1 >> > >> iflag=direct >> > >> >> > >> The output of this command should show the image-uuid. >> > >> >> > >> Nir >> > >> >> > >> > >> > >> > - MeLLy >> > >> > >> > >> > On Mon, Jul 4, 2016, at 11:52 PM, Fernando Fuentes wrote: >> > >> >> Nir, >> > >> >> >> > >> >> That's exactly how I did it Nir. >> > >> >> I will test tomorrow with a new Windows VM and report back. >> > >> >> >> > >> >> Regards, >> > >> >> >> > >> >> -- >> > >> >> Fernando Fuentes >> > >> >> ffuentes@txweather.org >> > >> >> http://www.txweather.org >> > >> >> >> > >> >> On Mon, Jul 4, 2016, at 10:48 AM, Nir Soffer wrote: >> > >> >> > On Mon, Jul 4, 2016 at 6:43 PM, Francesco Romani >> > >> >> > <fromani@redhat.com> >> > >> >> > wrote: >> > >> >> > > ----- Original Message ----- >> > >> >> > >> From: "Nir Soffer" <nsoffer@redhat.com> >> > >> >> > >> To: "Fernando Fuentes" <ffuentes@darktcp.net> >> > >> >> > >> Cc: "Francesco Romani" <fromani@redhat.com>, "users" >> > >> >> > >> <users@ovirt.org> >> > >> >> > >> Sent: Saturday, July 2, 2016 11:18:01 AM >> > >> >> > >> Subject: Re: [ovirt-users] disk not bootable >> > >> >> > >> >> > >> >> > >> On Sat, Jul 2, 2016 at 1:33 AM, Fernando Fuentes >> > >> >> > >> <ffuentes@darktcp.net> >> > >> >> > >> wrote: >> > >> >> > >> > Nir, >> > >> >> > >> > >> > >> >> > >> > Ok I ran another test and this one I moved from NFS >> > >> >> > >> > domain to iSCSI and >> > >> >> > >> > stop working than I moved it back and still unable >> > >> >> > >> > to run... Windows VM >> > >> >> > >> > is saying "no available boot disk" >> > >> >> > >> > VM: Win7-Test >> > >> >> > >> > Host: Zeta >> > >> >> > >> > Info as requested: http://pastebin.com/1fSi3auz >> > >> >> > >> >> > >> >> > >> We need a working xml to compare to. 
>> > >> >> > > >> > >> >> > > [snip expected changes] >> > >> >> > > >> > >> >> > > >> > >> >> > >> <entry name="manufacturer">oVirt</entry> >> > >> >> > >> <entry name="product">oVirt Node</entry> >> > >> >> > >> <entry name="version">6-5.el6.centos.11.2</entry> >> > >> >> > >> - <entry >> > >> >> > >> name="serial">C938F077-55E2-3E50-A694-9FCB7661FD89</entry> >> > >> >> > >> + <entry >> > >> >> > >> name="serial">735C7A01-1F16-3CF0-AF8C-A99823E95AC0</entry> >> > >> >> > >> >> > >> >> > >> Not expected - maybe this is confusing windows? >> > >> >> > >> >> > >> >> > >> Francesco, why vm serial has changed after moving >> > >> >> > >> disks from one storage >> > >> >> > >> domain >> > >> >> > >> to another? >> > >> >> > > >> > >> >> > > We put in serial either >> > >> >> > > 1. the UUID Engine send to us >> > >> >> > > 2. the host UUID as returned by our getHostUUID utility >> > >> >> > > function >> > >> >> > > >> > >> >> > > the latter is unlikely to change, even after this disk >> > >> >> > > move. >> > >> >> > >> > >> >> > Fernando, can you describe exactly how you moved the >> > >> >> > disk? >> > >> >> > >> > >> >> > I assume that you selected the vm in the virtual machines >> > >> >> > tab, then >> > >> >> > selected >> > >> >> > disks from the sub tab, then selected move, and selected >> > >> >> > the target >> > >> >> > storage domain. >> > >> >> > >> > >> >> > Also, can you reproduce this with a new vm? (create vm >> > >> >> > with disk nfs, >> > >> >> > stop vm, >> > >> >> > move disk to iscsi, start vm). >> > >> >> > >> > >> >> > > So the first suspect in line is Engine >> > >> >> > > >> > >> >> > > Arik, do you know if Engine is indeed supposed to >> > >> >> > > change the UUID in this flow? >> > >> >> > > That seems very surprising. >> > >> >> > > >> > >> >> > > Thanks and bests, >> > >> >> > > >> > >> >> > > -- >> > >> >> > > Francesco Romani >> > >> >> > > RedHat Engineering Virtualization R & D >> > >> >> > > Phone: 8261328 >> > >> >> > > IRC: fromani >> > >> >> _______________________________________________ >> > >> >> Users mailing list >> > >> >> Users@ovirt.org >> > >> >> http://lists.ovirt.org/mailman/listinfo/users >> > >> > >> > >> > _______________________________________________ >> > >> > Users mailing list >> > >> > Users@ovirt.org >> > >> > http://lists.ovirt.org/mailman/listinfo/users >> > >> > >> _______________________________________________ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > _______________________________________________ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users
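Step 8 in Nir's list above (finding the lv metadata block) is the one that usually trips people up, so here is a hedged sketch of the flow; the tag value MD_42 is illustrative, not taken from this thread:

# lvs --noheadings -o lv_tags vg_name/lv_name
(among the tags there is one of the form MD_N, for example MD_42)
# dd if=/dev/vg_name/metadata bs=512 skip=42 count=1 iflag=direct

The 512-byte block at offset N holds that volume's metadata, including the IMAGE and FORMAT keys that should match what engine believes about the disk.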

Nir, I've been busy and have not been able to get to your request yet; I will as soon as I get a chance. Thanks again for the help. Regards, -- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org On Tue, Jul 19, 2016, at 08:54 AM, Fernando Fuentes wrote:
Nir,
Thanks for all the help! I am on it and will reply with the requested info asap.
Regards,
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Tue, Jul 19, 2016, at 07:16 AM, Nir Soffer wrote:
On Mon, Jul 18, 2016 at 11:16 PM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
Oops... forgot the link:
The requested info is in the pastebin.
So the issue is clear now: the template on NFS is using raw format, and on block storage, qcow format:
NFS:
[root@zeta ~]# cat /rhev/data-center/mnt/172.30.10.5\:_opt_libvirtd_images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f- ... FORMAT=RAW ...
[root@alpha ~]# qemu-img info /opt/libvirtd/images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-363338ee0c0e image: /opt/libvirtd/images/ecfaf7ac-5459-4c83-bd97-2bb448e38526/images/3b7d9349-9eb1-42f8-9e04-7bbb97c02b98/25b6b0fe-d416-458f-b89f-363338ee0c0e ... file format: raw ...
iSCSI:
[root@zeta ~]# qemu-img info /dev/0ef17024-0eae-4590-8eea-6ec8494fe223/25b6b0fe-d416-458f-b89f-363338ee0c0e image: /dev/0ef17024-0eae-4590-8eea-6ec8494fe223/25b6b0fe-d416-458f-b89f-363338ee0c0e ... file format: qcow2 ...
[root@zeta ~]# dd if=/dev/0ef17024-0eae-4590-8eea-6ec8494fe223/metadata bs=512 skip=4 count=1 iflag=direct ... FORMAT=COW ...
This format conversion is expected, as we don't support raw/sparse on block storage: there is no filesystem to provide sparseness, so sparse volumes on a block domain are created as qcow2.
It looks like the vm is started with the template disk as "raw" format, which is expected to fail when the format is actually "qcow2". The guest will see the qcow headers instead of the actual data.
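A quick low-level way to confirm such a mismatch from the host (a hedged sketch; substitute the real vg and lv names) is to look at the first bytes of the volume, since qcow2 images start with the magic string "QFI\xfb" (hex 51 46 49 fb) while a raw Windows disk normally starts with an MBR boot sector:

# lvchange -ay vg_name/lv_name
# dd if=/dev/vg_name/lv_name bs=512 count=1 iflag=direct 2>/dev/null | hexdump -C | head -1
00000000  51 46 49 fb ...        (a qcow2 header where the guest expects a boot sector)
# lvchange -an vg_name/lv_name

If the engine hands this volume to qemu as raw while the header says qcow2, the guest reads the qcow2 header as its boot sector and reports exactly this kind of "no bootable disk" error.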
The next step to debug this is:
1. Copy a disk using this template to the block storage domain
2. Create a new vm using this disk
3. Start the vm
Does it start? If not, attach engine and vdsm logs from this timeframe.
If this works, you can try:
1. Move the vm disk from NFS to block storage
2. Start the vm
Again, if it does not work, add engine and vdsm logs.
Nir
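For reference - an assumption about default install locations, not something stated in this thread - the logs Nir keeps asking for normally live at /var/log/ovirt-engine/engine.log on the engine machine and /var/log/vdsm/vdsm.log on the host, and grepping for the vm name around the failed start is usually enough to find the relevant section:

# grep -i win7-test /var/log/vdsm/vdsm.log | tail -50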
Regards,
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Mon, Jul 18, 2016, at 03:16 PM, Fernando Fuentes wrote:
Nir,
After some playing around with pvscan I was able to get all of the needed information.
Please see:
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Mon, Jul 18, 2016, at 02:30 PM, Nir Soffer wrote:
On Mon, Jul 18, 2016 at 6:48 PM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
Nir,
As requested:
[root@gamma ~]# lsblk
NAME                                                         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                            8:0    0   557G  0 disk
├─sda1                                                         8:1    0   500M  0 part  /boot
└─sda2                                                         8:2    0 556.5G  0 part
  ├─vg_gamma-lv_root (dm-0)                                  253:0    0    50G  0 lvm   /
  ├─vg_gamma-lv_swap (dm-1)                                  253:1    0     4G  0 lvm   [SWAP]
  └─vg_gamma-lv_home (dm-2)                                  253:2    0 502.4G  0 lvm   /home
sr0                                                           11:0    1  1024M  0 rom
sdb                                                            8:16   0     2T  0 disk
└─36589cfc000000881b9b93c2623780840 (dm-4)                   253:4    0     2T  0 mpath
sdc                                                            8:32   0     2T  0 disk
└─36589cfc00000050564002c7e51978316 (dm-3)                   253:3    0     2T  0 mpath
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-metadata (dm-5) 253:5    0   512M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-outbox (dm-6)   253:6    0   128M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-leases (dm-7)   253:7    0     2G  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-ids (dm-8)      253:8    0   128M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-inbox (dm-9)    253:9    0   128M  0 lvm
  └─3ccb7b67--8067--4315--9656--d68ba10975ba-master (dm-10)  253:10   0     1G  0 lvm
sdd                                                            8:48   0     4T  0 disk
└─36589cfc00000059ccab70662b71c47ef (dm-11)                  253:11   0     4T  0 mpath
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-metadata (dm-12) 253:12  0   512M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-ids (dm-13)     253:13   0   128M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-leases (dm-14)  253:14   0     2G  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-outbox (dm-15)  253:15   0   128M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-inbox (dm-16)   253:16   0   128M  0 lvm
  └─4861322b--352f--41c6--890a--5cbf1c2c1f01-master (dm-17)  253:17   0     1G  0 lvm
So you have 2 storage domains:
- 3ccb7b67-8067-4315-9656-d68ba10975ba - 4861322b-352f-41c6-890a-5cbf1c2c1f01
But most likely neither of them is active now.
Can you share the output of:
iscsiadm -m session
On a system connected to iscsi storage you will see something like:
# iscsiadm -m session
tcp: [5] 10.35.0.99:3260,1 iqn.2003-01.org.dumbo.target1 (non-flash)
The special lvs (ids, leases, ...) should be active, and you should see also regular disks lvs (used for snapshot for vm disks).
Here is an example from machine connected to active iscsi domain:
# lvs
  LV                                   VG                                   Attr       LSize
  27c4c795-bca4-4d7b-9b40-cda9098790f5 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   1.00g
  35be1f52-5b28-4c90-957a-710dbbb8f13f 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   8.00g
  36d9b41b-4b01-4fc2-8e93-ccf79af0f766 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 128.00m
  4fda3b44-27a5-4ce4-b8c3-66744aa9937b 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   2.12g
  c2e78f72-d499-44f0-91f5-9930a599dc87 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi------- 128.00m
  d49919b4-30fc-440f-9b21-3367ddfdf396 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   4.00g
  f3b10280-43ed-4772-b122-18c92e098171 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   1.00g
  f409cc48-8248-4239-a4ea-66b0b1084416 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-------   1.00g
  ids                                  5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-ao---- 128.00m
  inbox                                5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 128.00m
  leases                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a-----   2.00g
  master                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a-----   1.00g
  metadata                             5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 512.00m
  outbox                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a----- 128.00m
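A related check worth knowing (a hedged aside, not from the thread itself): on block storage domains the VG name is the storage domain UUID, so once the host is logged in to the target you can confirm LVM sees the domain with:

# pvs -o pv_name,vg_name
# vgs -o vg_name,vg_size,lv_count

If the 3ccb7b67-... and 4861322b-... VGs do not show up there, the host is not seeing the LUNs at all.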
Regards,
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Mon, Jul 18, 2016, at 07:43 AM, Nir Soffer wrote:
Can you share output of lsblk on this host?
On Mon, Jul 18, 2016 at 3:52 AM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
Nir,
That's odd. gamma is my iscsi host, it's in up state and it has active VMs. What am I missing?
Regards,
-- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org
On Sun, Jul 17, 2016, at 07:24 PM, Nir Soffer wrote: > [snip -- Nir's Jul 17 reply, quoted in full at the top of this thread]

Nir, I found something very interesting. When exporting the same vms and templates and then importing them to a new cluster running ovirt 4.0 that has iSCSI mounts ONLY, they import and run just fine. I noticed that the disks imported as Blank templates and they do not seem to have any dependency on the templates. Is this normal behavior? Regards, -- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org On Thu, Jul 28, 2016, at 09:01 AM, Fernando Fuentes wrote:
[snip -- the earlier thread, quoted in full above]

On 04 Jul 2016, at 17:43, Francesco Romani <fromani@redhat.com> wrote:
----- Original Message -----
From: "Nir Soffer" <nsoffer@redhat.com> To: "Fernando Fuentes" <ffuentes@darktcp.net> Cc: "Francesco Romani" <fromani@redhat.com>, "users" <users@ovirt.org> Sent: Saturday, July 2, 2016 11:18:01 AM Subject: Re: [ovirt-users] disk not bootable
On Sat, Jul 2, 2016 at 1:33 AM, Fernando Fuentes <ffuentes@darktcp.net> wrote:
Nir,
Ok I ran another test: this one I moved from the NFS domain to iSCSI and it stopped working, then I moved it back and it is still unable to run... The Windows VM is saying "no available boot disk" VM: Win7-Test Host: Zeta Info as requested: http://pastebin.com/1fSi3auz
We need a working xml to compare to.
[snip expected changes]
<entry name="manufacturer">oVirt</entry>
<entry name="product">oVirt Node</entry>
<entry name="version">6-5.el6.centos.11.2</entry>
- <entry name="serial">C938F077-55E2-3E50-A694-9FCB7661FD89</entry>
+ <entry name="serial">735C7A01-1F16-3CF0-AF8C-A99823E95AC0</entry>
Not expected - maybe this is confusing windows?
Nope, only specific licensing software seems to check this
Francesco, why vm serial has changed after moving disks from one storage domain to another?
We put in serial either
1. the UUID Engine sends to us
2. the host UUID as returned by our getHostUUID utility function
the latter is unlikely to change, even after this disk move.
That depends on which host the VM is started on. It's the host-derived ID, so it's often different every time you run a VM. So I suppose here it just means the VM was launched on a different host. It shouldn't be significant to the boot issue.
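To check what the guest actually receives in that SMBIOS field (a hedged aside; assumes the standard tooling is present inside the guest):

wmic bios get serialnumber            (from a Windows command prompt)
# dmidecode -s system-serial-number   (from a Linux guest, as root)

Either value should match the <entry name="serial"> element in the domain XML above.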
So the first suspect in line is Engine
Arik, do you know if Engine is indeed supposed to change the UUID in this flow? That seems very surprising.
Thanks and bests,
-- Francesco Romani RedHat Engineering Virtualization R & D Phone: 8261328 IRC: fromani _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Michal, given what you said, then it has to be template related. We were able to create a new vm from scratch and move from the nfs domain to the iscsi domain. So where do we go from here? On Fri, Jul 8, 2016, at 04:24 AM, Michal Skrivanek wrote:
[snip -- Michal's message of Jul 8, quoted in full above]

Team, Any more ideas? :( -- Fernando Fuentes ffuentes@txweather.org http://www.txweather.org On Fri, Jul 8, 2016, at 09:36 AM, Melissa Mesler wrote:
Michal, given what you said, then it has to be template related. We were able to create a new vm from scratch and move from the nfs domain to the iscsi domain. So where do we go from here?
On Fri, Jul 8, 2016, at 04:24 AM, Michal Skrivanek wrote:
[snip -- Michal's message of Jul 8, quoted in full above]
participants (6)
- Fernando Fuentes
- Francesco Romani
- Melissa Mesler
- Michal Skrivanek
- Nir Soffer
- Pavel Gashev