Re: [ovirt-users] vm freezes when using yum update


On Mon, Jul 3, 2017 at 6:49 AM, M Mahboubian <m_mahboubian@yahoo.com> wrote:
Hi Yaniv,
Thanks for your reply. Apologies for my late reply we had a long holiday here.
To answer you:
Yes, the guest VM becomes completely frozen and non-responsive as soon as its disk has any activity, for example when we shut down or do a yum update.
Versions of all the components involved - guest OS, host OS (qemu-kvm version), how do you run the VM (vdsm log would be helpful here), exact storage specification (1Gb or 10Gb link? What is the NFS version? What is it hosted on? etc.) Y.
Some facts about our environment:
1) Previously, this environment was using XEN with raw disks, and we changed it to oVirt (oVirt was able to read the VMs' disks without any conversion).
Interesting - what interface are they using? Is that raw or raw sparse? How did you perform the conversion? (or no conversion - just copied the disks over?)
2) The issue we are facing is not happening for any of the existing VMs.
*3) This issue only happens for new VMs.*
New VMs from blank, or from a template (as a snapshot over the previous VMs) ?
4) Guest (kernel v3.10) and host (kernel v4.1) OSes are both CentOS 7 minimal installations.
Kernel 4.1? From where?
*5) NFS version 4* and using oVirt 4.1.
6) *The network speed is 1 Gb.*
That might be very slow (but should not cause such an issue, unless severely overloaded).
7) The output of rpm -qa | grep qemu-kvm shows:
*qemu-kvm-common-ev-2.6.0-28.el7_3.6.1.x86_64*
*qemu-kvm-tools-ev-2.6.0-28.el7_3.6.1.x86_64*
*qemu-kvm-ev-2.6.0-28.el7_3.6.1.x86_64*
That's good - that's almost the latest-greatest.
8) The storage is on a SAN device, which is connected to the NFS server via Fibre Channel.
So, for example, it also froze during shutdown, showing something like this in the Events section:
*VM ILMU_WEB has been paused due to storage I/O problem.*
We might need to get libvirt debug logs (and perhaps journal output of the host). Y.
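For reference, libvirt debug logging is usually enabled on the host by raising the log filters in /etc/libvirt/libvirtd.conf and restarting libvirtd; the filter set below is only a common starting point, not something oVirt specifically requires:

```
# /etc/libvirt/libvirtd.conf -- example debug settings (restart libvirtd afterwards)
log_filters="1:qemu 1:libvirt 3:object 3:json 3:event 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd-debug.log"
```

The host journal around the incident can then be captured with something like: journalctl -u vdsmd -u libvirtd --since "2017-07-03 09:45" --until "2017-07-03 09:55"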
More information:
VDSM log at the time of this issue (The issue happened at Jul 3, 2017 9:50:43 AM):
2017-07-03 09:50:37,113+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-07-03 09:50:37,897+0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmIoTunePolicies succeeded in 0.02 seconds (__init__:515)
2017-07-03 09:50:42,510+0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
*2017-07-03 09:50:43,548+0800 INFO (jsonrpc/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 09:50:43,548+0800 INFO (jsonrpc/3) [dispatcher] Run and protect: repoStats, Return response: {u'e01186c1-7e44-4808-b551-4722f0f8e84b': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000144822', 'lastCheck': '8.9', 'valid': True}, u'721b5233-b0ba-4722-8a7d-ba2a372190a0': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000327909', 'lastCheck': '8.9', 'valid': True}, u'94775bd3-3244-45b4-8a06-37eff8856afa': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000256425', 'lastCheck': '8.9', 'valid': True}, u'731bb771-5b73-4b5c-ac46-56499df97721': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000238159', 'lastCheck': '8.9', 'valid': True}, u'f620781f-93d4-4410-8697-eb41045cacd6': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00022004', 'lastCheck': '8.9', 'valid': True}, u'a1a7d0a4-e3b6-4bd5-862b-96e70dae3f29': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000298581', 'lastCheck': '8.8', 'valid': True}} (logUtils:54)*
2017-07-03 09:50:43,563+0800 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:515)
2017-07-03 09:50:46,737+0800 INFO (periodic/3) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'721b5233-b0ba-4722-8a7d-ba2a372190a0', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'3c26476e-1dae-44d7-9208-531b91ae5ae1', volUUID=u'a7e789fb-6646-4d0a-9b51-f5ab8242c8d5', options=None) (logUtils:51)
2017-07-03 09:50:46,738+0800 INFO (periodic/0) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'f620781f-93d4-4410-8697-eb41045cacd6', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'2158fdae-54e1-413d-a844-73da5d1bb4ca', volUUID=u'6ee0b0eb-0bba-4e18-9c00-c1539b632e8a', options=None) (logUtils:51)
2017-07-03 09:50:46,740+0800 INFO (periodic/2) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'f620781f-93d4-4410-8697-eb41045cacd6', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'a967016d-a56b-41e8-b7a2-57903cbd2825', volUUID=u'784514cb-2b33-431c-b193-045f23c596d8', options=None) (logUtils:51)
2017-07-03 09:50:46,741+0800 INFO (periodic/1) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'721b5233-b0ba-4722-8a7d-ba2a372190a0', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'bb35c163-f068-4f08-a1c2-28c4cb1b76d9', volUUID=u'fce7e0a0-7411-4d8c-b72c-2f46c4b4db1e', options=None) (logUtils:51)
2017-07-03 09:50:46,743+0800 INFO (periodic/0) [dispatcher] Run and protect: getVolumeSize, Return response: {'truesize': '6361276416', 'apparentsize': '107374182400'} (logUtils:54)
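As an aside, the getVolumeSize response above already shows how thin-provisioned the new VM's disk is; a quick sanity check (values copied from the log, the helper name is my own):

```python
def allocation_ratio(truesize: int, apparentsize: int) -> float:
    """Fraction of the virtual disk actually allocated on storage."""
    return truesize / apparentsize

# Values from the getVolumeSize response in the log above.
truesize = 6361276416        # bytes actually allocated (~5.9 GiB)
apparentsize = 107374182400  # virtual disk size: 100 GiB

print(f"{allocation_ratio(truesize, apparentsize):.1%}")  # prints 5.9%
```

In other words the 100 GiB disk is sparse, with only a few GiB written so far, which is expected for a freshly installed VM.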
......
*2017-07-03 09:52:16,941+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 error eio (vm:4112)
2017-07-03 09:52:16,941+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError (vm:4997)
2017-07-03 09:52:16,942+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onSuspend (vm:4997)
2017-07-03 09:52:16,942+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 error eio (vm:4112)
2017-07-03 09:52:16,943+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError (vm:4997)
2017-07-03 09:52:16,943+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 error eio (vm:4112)
2017-07-03 09:52:16,944+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError*
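For anyone digging through a long vdsm.log for these events, a small regex can extract the vmId, failing device, and error code; this is just a convenience sketch, not part of vdsm itself:

```python
import re

# Matches the "abnormal vm stop" events emitted by vdsm's virt.vm logger.
PATTERN = re.compile(
    r"^(?P<ts>\S+ \S+) INFO .*vmId='(?P<vmid>[0-9a-f-]+)'\) "
    r"abnormal vm stop device (?P<dev>\S+) error (?P<err>\w+)"
)

sample = (
    "2017-07-03 09:52:16,941+0800 INFO (libvirt/events) [virt.vm] "
    "(vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop "
    "device scsi0-0-0-0 error eio (vm:4112)"
)

m = PATTERN.match(sample)
if m:
    # prints: c84f519e-398d-40a3-85b2-b7e53f3d7f67 scsi0-0-0-0 eio
    print(m.group("vmid"), m.group("dev"), m.group("err"))
```

The "error eio" here means the host got an I/O error from the storage layer, which is why QEMU paused the guest.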
On Thursday, June 22, 2017, 2:48 PM, Yaniv Kaul <ykaul@redhat.com> wrote:
On Thu, Jun 22, 2017 at 5:07 AM, M Mahboubian <m_mahboubian@yahoo.com> wrote:
Dear all, I would appreciate it if anybody could help with the issue I am facing.
In our environment we have 2 hosts, 1 NFS server, and 1 oVirt engine server. The NFS server provides storage to the VMs on the hosts.
I can create new VMs and install an OS, but once I do something like yum update, the VM freezes. I can reproduce this every single time I run yum update.
Is it paused, or completely frozen?
What information/log files should I provide you to troubleshoot this?
Versions of all the components involved - guest OS, host OS (qemu-kvm version), how do you run the VM (vdsm log would be helpful here), exact storage specification (1Gb or 10Gb link? What is the NFS version? What is it hosted on? etc.) Y.
Regards
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Hi Yaniv,

Thank you for your reply.

| Interesting - what interface are they using?
| Is that raw or raw sparse? How did you perform the conversion? (or no conversion - just copied the disks over?)

The VM disks are on the SAN storage; in order to use oVirt we just pointed the oVirt VMs at them. This is how we did it, precisely: first we created the VMs in oVirt with disks of the same size as the existing disks. Then we deleted these disks, which had been generated by oVirt, and renamed our existing disks to match the deleted ones naming-wise. Finally we started the oVirt VMs; they were able to run, and these VMs are always fine, without any issue.

The new VMs which have the problem are created from scratch (no template). One thing, though: all these new VMs are installed from a CentOS 7 ISO. We have not tried any other flavor of Linux.

Kernel 4.1 is actually from the Oracle Linux repository, since we needed OCFS2 support. So after installing oVirt we updated the kernel to the Oracle Linux kernel 4.1, since that kernel supports OCFS2.

| We might need to get libvirt debug logs (and perhaps journal output of the host).

I'll get this information and post it here.

Regards

On Monday, July 3, 2017 3:01 PM, Yaniv Kaul <ykaul@redhat.com> wrote:

On Mon, Jul 3, 2017 at 6:49 AM, M Mahboubian <m_mahboubian@yahoo.com> wrote:
Hi Yaniv,

Thanks for your reply. Apologies for my late reply; we had a long holiday here.

To answer you:

Yes, the guest VM becomes completely frozen and non-responsive as soon as its disk has any activity, for example when we shut down or do a yum update.

Versions of all the components involved - guest OS, host OS (qemu-kvm version), how do you run the VM (vdsm log would be helpful here), exact storage specification (1Gb or 10Gb link? What is the NFS version?
What is it hosted on? etc.) Y.

Some facts about our environment:

1) Previously, this environment was using XEN with raw disks, and we changed it to oVirt (oVirt was able to read the VMs' disks without any conversion).

Interesting - what interface are they using? Is that raw or raw sparse? How did you perform the conversion? (or no conversion - just copied the disks over?)

2) The issue we are facing is not happening for any of the existing VMs.
3) This issue only happens for new VMs.

New VMs from blank, or from a template (as a snapshot over the previous VMs)?

4) Guest (kernel v3.10) and host (kernel v4.1) OSes are both CentOS 7 minimal installations.

Kernel 4.1? From where?

5) NFS version 4, and using oVirt 4.1
6) The network speed is 1 Gb.

That might be very slow (but should not cause such an issue, unless severely overloaded).

7) The output of rpm -qa | grep qemu-kvm shows:
      qemu-kvm-common-ev-2.6.0-28.el7_3.6.1.x86_64
      qemu-kvm-tools-ev-2.6.0-28.el7_3.6.1.x86_64
      qemu-kvm-ev-2.6.0-28.el7_3.6.1.x86_64

That's good - that's almost the latest-greatest.

8) The storage is from a SAN device which is connected to the NFS server using fibre channel.

So for example during shutdown it also froze, and shows something like this in the event section:

VM ILMU_WEB has been paused due to storage I/O problem.
We might need to get libvirt debug logs (and perhaps journal output of the host). Y.

More information:

VDSM log at the time of this issue (the issue happened at Jul 3, 2017 9:50:43 AM):

2017-07-03 09:50:37,113+0800 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-07-03 09:50:37,897+0800 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmIoTunePolicies succeeded in 0.02 seconds (__init__:515)
2017-07-03 09:50:42,510+0800 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-07-03 09:50:43,548+0800 INFO  (jsonrpc/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 09:50:43,548+0800 INFO  (jsonrpc/3) [dispatcher] Run and protect: repoStats, Return response: {u'e01186c1-7e44-4808-b551-4722f0f8e84b': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000144822', 'lastCheck': '8.9', 'valid': True}, u'721b5233-b0ba-4722-8a7d-ba2a372190a0': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000327909', 'lastCheck': '8.9', 'valid': True}, u'94775bd3-3244-45b4-8a06-37eff8856afa': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000256425', 'lastCheck': '8.9', 'valid': True}, u'731bb771-5b73-4b5c-ac46-56499df97721': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000238159', 'lastCheck': '8.9', 'valid': True}, u'f620781f-93d4-4410-8697-eb41045cacd6': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00022004', 'lastCheck': '8.9', 'valid': True}, u'a1a7d0a4-e3b6-4bd5-862b-96e70dae3f29': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000298581', 'lastCheck': '8.8', 'valid': True}} (logUtils:54)
2017-07-03 09:50:43,563+0800 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:515)
2017-07-03 09:50:46,737+0800 INFO  (periodic/3) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'721b5233-b0ba-4722-8a7d-ba2a372190a0', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'3c26476e-1dae-44d7-9208-531b91ae5ae1', volUUID=u'a7e789fb-6646-4d0a-9b51-f5ab8242c8d5', options=None) (logUtils:51)
2017-07-03 09:50:46,738+0800 INFO  (periodic/0) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'f620781f-93d4-4410-8697-eb41045cacd6', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'2158fdae-54e1-413d-a844-73da5d1bb4ca', volUUID=u'6ee0b0eb-0bba-4e18-9c00-c1539b632e8a', options=None) (logUtils:51)
2017-07-03 09:50:46,740+0800 INFO  (periodic/2) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'f620781f-93d4-4410-8697-eb41045cacd6', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'a967016d-a56b-41e8-b7a2-57903cbd2825', volUUID=u'784514cb-2b33-431c-b193-045f23c596d8', options=None) (logUtils:51)
2017-07-03 09:50:46,741+0800 INFO  (periodic/1) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'721b5233-b0ba-4722-8a7d-ba2a372190a0', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'bb35c163-f068-4f08-a1c2-28c4cb1b76d9', volUUID=u'fce7e0a0-7411-4d8c-b72c-2f46c4b4db1e', options=None) (logUtils:51)
2017-07-03 09:50:46,743+0800 INFO  (periodic/0) [dispatcher] Run and protect: getVolumeSize, Return response: {'truesize': '6361276416', 'apparentsize': '107374182400'} (logUtils:54)
...
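Reading the getVolumeSize response above: 'apparentsize' is the virtual disk size and 'truesize' the bytes actually allocated. The sketch below assumes those field meanings, which are inferred from the log rather than stated in it:

```python
# Values copied from the getVolumeSize response in the log above.
resp = {"truesize": "6361276416", "apparentsize": "107374182400"}

true_b = int(resp["truesize"])          # bytes actually allocated on storage
apparent_b = int(resp["apparentsize"])  # virtual (provisioned) disk size

print("virtual size: %.0f GiB" % (apparent_b / 1024.0**3))
print("allocated:    %.2f GiB (%.1f%% of virtual)"
      % (true_b / 1024.0**3, 100.0 * true_b / apparent_b))
```

So this volume is a thinly allocated 100 GiB disk with only about 6 GiB actually written, which matches a freshly installed VM.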
2017-07-03 09:52:16,941+0800 INFO  (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 error eio (vm:4112)
2017-07-03 09:52:16,941+0800 INFO  (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError (vm:4997)
2017-07-03 09:52:16,942+0800 INFO  (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onSuspend (vm:4997)
2017-07-03 09:52:16,942+0800 INFO  (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 error eio (vm:4112)
2017-07-03 09:52:16,943+0800 INFO  (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError (vm:4997)
2017-07-03 09:52:16,943+0800 INFO  (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 error eio (vm:4112)
2017-07-03 09:52:16,944+0800 INFO  (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError
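For pulling events like these out of a larger vdsm.log, a small scan along these lines may help; this is a sketch whose regex is inferred from the log format shown above, with sample lines copied (trimmed) from the excerpt:

```python
import re
from collections import Counter

# Two lines copied (trimmed) from the vdsm.log excerpt above.
sample = (
    "2017-07-03 09:52:16,941+0800 INFO (libvirt/events) [virt.vm] "
    "(vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop "
    "device scsi0-0-0-0 error eio (vm:4112)\n"
    "2017-07-03 09:52:16,941+0800 INFO (libvirt/events) [virt.vm] "
    "(vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: "
    "onIOError (vm:4997)\n"
)

# Match "abnormal vm stop" events: capture the VM id, device, and error.
pat = re.compile(r"vmId='([0-9a-f-]+)'\) abnormal vm stop device (\S+) error (\w+)")

stops = Counter()
for vm_id, dev, err in pat.findall(sample):
    stops[(vm_id, dev, err)] += 1

for (vm_id, dev, err), n in stops.items():
    print("vm %s: device %s reported %s %d time(s)" % (vm_id, dev, err, n))
```

Run against the full log, this groups the EIO pauses per VM and per device, which makes it easier to see whether only the new CentOS 7 VMs are affected.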
<br clear=3D"none"> <br clear=3D"none"></div></blockquote></div><div class=3D"yiv5808088584m_-6= 379656960604602434yiv6639019017yqt6606837936" id=3D"yiv5808088584m_-6379656= 960604602434yiv6639019017yqtfd02968"><br clear=3D"none"></div></div></div><= /div></div><blockquote></blockquote></blockquote></div></div><div class=3D"= yiv5808088584m_-6379656960604602434yiv6639019017yqt2677119843" id=3D"yiv580= 8088584m_-6379656960604602434yiv6639019017yqtfd23176"> </div></div></div></div></div></blockquote></div></div><div class=3D"yiv580= 8088584yqt9794303116" id=3D"yiv5808088584yqtfd09260"><br clear=3D"none"></d= iv></div></div></div></div><br><br></div> </div> </div> </div></div></bod= y></html> ------=_Part_3181214_2007753566.1499068839683--

On Mon, Jul 3, 2017 at 11:00 AM, M Mahboubian <m_mahboubian@yahoo.com> wrote:
Hi Yaniv,
Thank you for your reply.
| Interesting - what interface are they using? | Is that raw or raw sparse? How did you perform the conversion? (or no conversion - just copied the disks over?)
The VM disks are on the SAN storage; to use oVirt we just pointed the oVirt VMs at them. This is precisely how we did it: first we created the VMs in oVirt with disks of the same size as the existing disks. Then we deleted those oVirt-generated disks and renamed our existing disks to match the deleted ones name-wise. Finally we started the oVirt VMs; they were able to run, and these VMs are always fine, without any issue.
The new VMs which have the problem are created from scratch (no template). One thing, though: all these new VMs are installed from a CentOS 7 ISO. We have not tried any other flavor of Linux.
The 4.1 kernel is actually from the Oracle Linux repository, since we needed OCFS2 support. So after installing oVirt we updated the kernel to the Oracle Linux 4.1 kernel, because that kernel supports OCFS2.
Would it be possible to test with the regular CentOS kernel? Just to ensure it's not the kernel causing this? Y.
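(As a side note on how such a test is usually done on CentOS 7: the stock kernel can be booted without removing the Oracle one by switching the default grub entry. A minimal sketch, assuming the grubby tool that ships with CentOS 7; the kernel version string below is only an example, not taken from this thread:)

```shell
# Show the current default kernel and everything installed in /boot
grubby --default-kernel
ls /boot/vmlinuz-*

# Make the stock CentOS 3.10.0 kernel the default boot entry
# (replace the version with whatever `ls /boot/vmlinuz-*` actually shows)
grubby --set-default /boot/vmlinuz-3.10.0-514.el7.x86_64

# Reboot the host into it; the Oracle kernel stays installed, so
# reverting is just another --set-default plus a reboot
reboot
```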
| We might need to get libvirt debug logs (and perhaps journal output of the host).
I'll get this information and post here.
Regards
On Monday, July 3, 2017 3:01 PM, Yaniv Kaul <ykaul@redhat.com> wrote:
On Mon, Jul 3, 2017 at 6:49 AM, M Mahboubian <m_mahboubian@yahoo.com> wrote:
Hi Yaniv,
Thanks for your reply. Apologies for my late reply we had a long holiday here.
To answer you:
Yes, the guest VM becomes completely frozen and non-responsive as soon as its disk has any activity, for example when we shut down or do a yum update.
Versions of all the components involved - guest OS, host OS (qemu-kvm version), how do you run the VM (vdsm log would be helpful here), exact storage specification (1Gb or 10Gb link? What is the NFS version? What is it hosted on? etc.) Y.
Some facts about our environment:
1) Previously, this environment was using Xen with raw disks, and we changed it to oVirt (oVirt was able to read the VMs' disks without any conversion).
Interesting - what interface are they using? Is that raw or raw sparse? How did you perform the conversion? (or no conversion - just copied the disks over?)
2) The issue we are facing is not happening for any of the existing VMs. *3) This issue only happens for new VMs.*
New VMs from blank, or from a template (as a snapshot over the previous VMs) ?
4) Guest (kernel v3.10) and host (kernel v4.1) OSes are both CentOS 7 minimal installations.
Kernel 4.1? From where?
*5) NFS version 4* and using oVirt 4.1. 6) *The network speed is 1 Gb.*
That might be very slow (but should not cause such an issue, unless severely overloaded).
7) The output of rpm -qa | grep qemu-kvm shows: *qemu-kvm-common-ev-2.6.0-28.el7_3.6.1.x86_64* *qemu-kvm-tools-ev-2.6.0-28.el7_3.6.1.x86_64* *qemu-kvm-ev-2.6.0-28.el7_3.6.1.x86_64*
That's good - that's almost the latest-greatest.
*8)* The storage is on a SAN device which is connected to the NFS server using Fibre Channel.
So, for example, during shutdown it also froze, showing something like this in the events section:
*VM ILMU_WEB has been paused due to storage I/O problem.*
We might need to get libvirt debug logs (and perhaps journal output of the host). Y.
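(For reference, a sketch of how libvirt debug logs and the host journal are typically collected on a CentOS 7 / oVirt node — the exact log_filters value is an assumption; adjust it to what the investigation needs:)

```shell
# Turn on libvirt debug logging; these two settings go in
# /etc/libvirt/libvirtd.conf on the host running the VM
cat >> /etc/libvirt/libvirtd.conf <<'EOF'
log_filters="1:qemu 1:libvirt 3:object 3:json 3:event"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
EOF
systemctl restart libvirtd

# Reproduce the freeze, then grab the debug log and the host journal
# covering the window around the pause
journalctl --since "09:45" --until "09:55" > host-journal.txt
tail -n 500 /var/log/libvirt/libvirtd.log > libvirtd-debug.txt
```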
More information:
VDSM log at the time of this issue (The issue happened at Jul 3, 2017 9:50:43 AM):
2017-07-03 09:50:37,113+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-07-03 09:50:37,897+0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmIoTunePolicies succeeded in 0.02 seconds (__init__:515)
2017-07-03 09:50:42,510+0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-07-03 09:50:43,548+0800 INFO (jsonrpc/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 09:50:43,548+0800 INFO (jsonrpc/3) [dispatcher] Run and protect: repoStats, Return response: {u'e01186c1-7e44-4808-b551-4722f0f8e84b': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000144822', 'lastCheck': '8.9', 'valid': True}, u'721b5233-b0ba-4722-8a7d-ba2a372190a0': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000327909', 'lastCheck': '8.9', 'valid': True}, u'94775bd3-3244-45b4-8a06-37eff8856afa': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000256425', 'lastCheck': '8.9', 'valid': True}, u'731bb771-5b73-4b5c-ac46-56499df97721': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000238159', 'lastCheck': '8.9', 'valid': True}, u'f620781f-93d4-4410-8697-eb41045cacd6': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00022004', 'lastCheck': '8.9', 'valid': True}, u'a1a7d0a4-e3b6-4bd5-862b-96e70dae3f29': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000298581', 'lastCheck': '8.8', 'valid': True}} (logUtils:54)
2017-07-03 09:50:43,563+0800 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:515)
2017-07-03 09:50:46,737+0800 INFO (periodic/3) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'721b5233-b0ba-4722-8a7d-ba2a372190a0', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'3c26476e-1dae-44d7-9208-531b91ae5ae1', volUUID=u'a7e789fb-6646-4d0a-9b51-f5ab8242c8d5', options=None) (logUtils:51)
2017-07-03 09:50:46,738+0800 INFO (periodic/0) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'f620781f-93d4-4410-8697-eb41045cacd6', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'2158fdae-54e1-413d-a844-73da5d1bb4ca', volUUID=u'6ee0b0eb-0bba-4e18-9c00-c1539b632e8a', options=None) (logUtils:51)
2017-07-03 09:50:46,740+0800 INFO (periodic/2) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'f620781f-93d4-4410-8697-eb41045cacd6', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'a967016d-a56b-41e8-b7a2-57903cbd2825', volUUID=u'784514cb-2b33-431c-b193-045f23c596d8', options=None) (logUtils:51)
2017-07-03 09:50:46,741+0800 INFO (periodic/1) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'721b5233-b0ba-4722-8a7d-ba2a372190a0', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'bb35c163-f068-4f08-a1c2-28c4cb1b76d9', volUUID=u'fce7e0a0-7411-4d8c-b72c-2f46c4b4db1e', options=None) (logUtils:51)
2017-07-03 09:50:46,743+0800 INFO (periodic/0) [dispatcher] Run and protect: getVolumeSize, Return response: {'truesize': '6361276416', 'apparentsize': '107374182400'} (logUtils:54)
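(Aside: the 'delay' values in the repoStats response above are vdsm's per-storage-domain read latencies in seconds, so sub-millisecond figures like these look healthy at that instant. To watch the NFS mount itself while reproducing the freeze, something along these lines could help — a sketch, assuming the nfs-utils tools are installed on the host:)

```shell
# NFS mount options actually negotiated (version, rsize/wsize, timeo)
nfsstat -m

# Per-operation NFS RPC latency, refreshed every 5 seconds; leave this
# running in a second terminal while doing the yum update in the guest
nfsiostat 5

# Raw per-mount counters, including retransmission events
cat /proc/self/mountstats
```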
......
......
2017-07-03 09:52:16,941+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 error eio (vm:4112)
2017-07-03 09:52:16,941+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError (vm:4997)
2017-07-03 09:52:16,942+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onSuspend (vm:4997)
2017-07-03 09:52:16,942+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 error eio (vm:4112)
2017-07-03 09:52:16,943+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError (vm:4997)
2017-07-03 09:52:16,943+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 error eio (vm:4112)
2017-07-03 09:52:16,944+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError
On Thursday, June 22, 2017, 2:48 PM, Yaniv Kaul <ykaul@redhat.com> wrote:
On Thu, Jun 22, 2017 at 5:07 AM, M Mahboubian <m_mahboubian@yahoo.com> wrote:
Dear all, I would appreciate it if anybody could help with the issue I am facing.
In our environment we have 2 hosts, 1 NFS server, and 1 oVirt engine server. The NFS server provides storage to the VMs on the hosts.
I can create new VMs and install an OS, but once I do something like yum update, the VM freezes. I can reproduce this every single time I do a yum update.
Is it paused, or completely frozen?
What information/log files should I provide you to troubleshoot this?
Versions of all the components involved - guest OS, host OS (qemu-kvm version), how do you run the VM (vdsm log would be helpful here), exact storage specification (1Gb or 10Gb link? What is the NFS version? What is it hosted on? etc.) Y.
Regards
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

------=_Part_1782636_158770879.1499670432485 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Hi Yanis, I just wanted to share something with you regarding the issue I am facing. Basically, I found out If during creating a new VM I choose IDE for disk in= terface instead of VirtIO everything will work perfectly without any issue. Regarding your suggestion to switch to CentOS kernel, Unfortunately, we are= not able to do it because of the fact that we can not effort to have downt= ime for existing running VMs. With Best RegardsM.Mahboubian On Monday, July 3, 2017, 4:49 PM, Yaniv Kaul <ykaul@redhat.com> wrote: On Mon, Jul 3, 2017 at 11:00 AM, M Mahboubian <m_mahboubian@yahoo.com> wrot= e: Hi Yanis, Thank you for your reply.=C2=A0 | Interesting - what interface are they using?| Is that raw or raw sparse? = How did you perform the conversion? (or no conversion - just copied the dis= ks over?) The VM disks are in the SAN storage in order to use oVirt we just pointed t= hem to the oVirt VMs. This is how we did it precisely:First we created the = VMs in oVirt with disks which are the same size as the existing disks. then= we deleted these disks which was generated by oVirt and renamed our existi= ng disks to match the deleted ones naming-wise. Finally we started the oVir= t VMs and they were able to run and these VMs are always ok without any iss= ue. The new VMs which have problem are from scratch (no template). One thing th= ough, all these new VMs are created based on an CentOS 7 ISO. We have not t= ried any other flavor of Linux. The kernel 4.1 is actually from Oracle Linux repository since we needed to = have OCFS2 support. So after installing oVirt we updated the kernel to Orac= le Linux kernel 4.1 since that kernel supports OCFS2. Would it be possible to test with the regular CentOS kernel? 
Just to ensure= it's not the kernel causing this?Y.=C2=A0 | We might need to get libvirt debug logs (and perhaps journal output of th= e host). I'll get this information and post here. Regards =20 On Monday, July 3, 2017 3:01 PM, Yaniv Kaul <ykaul@redhat.com> wrote: =20 =20 On Mon, Jul 3, 2017 at 6:49 AM, M Mahboubian <m_mahboubian@yahoo.com> wrote= : Hi Yaniv, Thanks for your reply. Apologies for my late reply we had a long holiday he= re.=C2=A0 To answer you: Yes the =C2=A0guest VM become completely frozen and non responsive as soon = as its disk has any activity for example when we shutdown or do a yum updat= e.=C2=A0 Versions of all the components involved - guest OS, host OS (qemu-kvm versi= on), how do you run the VM (vdsm log would be helpful here), exact storage = specification (1Gb or 10Gb link? What is the NFS version? What is it hosted= on? etc.)=C2=A0Y. Some facts about our environment: 1) Previously, this environment was using XEN using raw disk and we change = it to Ovirt (Ovirt were able to read the VMs's disks without any conversion= .)=C2=A0 Interesting - what interface are they using?Is that raw or raw sparse? How = did you perform the conversion? (or no conversion - just copied the disks o= ver?)=C2=A0 2) The issue we are facing is not happening for any of the existing VMs.=C2= =A03) This issue only happens for new VMs. New VMs from blank, or from a template (as a snapshot over the previous VMs= ) ?=C2=A0 4) Guest (kernel v3.10) and host(kernel v4.1) OSes are both CentOS 7 minima= l installation.=C2=A0 Kernel 4.1? From where?=C2=A0 5) NFS version 4 and Using Ovirt 4.16)=C2=A0The network speed is 1 GB. That might be very slow (but should not cause such an issue, unless severel= y overloaded.=C2=A0 7)=C2=A0The output for rpm -qa | grep qemu-kvm shows:=C2=A0 =C2=A0 =C2=A0qe= mu-kvm-common-ev-2.6.0-28. e17_3.6.1.x86_64 =C2=A0 =C2=A0 =C2=A0qemu-kvm-tools-ev-2.6.0-28. e17_3.6.1.x86_64=C2=A0 =C2= =A0 =C2=A0qemu-kvm-ev-2.6.0-28.e17_3.6. 
1.x86_64 That's good - that's almost the latest-greatest.=C2=A0 8)=C2=A0The storage is from a SAN device which is connected to the NFS serv= er using fiber channel. So for example during shutdown also it froze and shows something like this = in event section: VM ILMU_WEB has been paused due to storage I/O problem. We might need to get libvirt debug logs (and perhaps journal output of the = host).Y.=C2=A0 More information: VDSM log at the time of this issue (The issue happened at Jul 3, 2017 9:50:= 43 AM): 2017-07-03 09:50:37,113+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC c= all Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)2017-07-03 0= 9:50:37,897+0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.get= AllVmIoTunePolicies succeeded in 0.02 seconds (__init__:515)2017-07-03 09:5= 0:42,510+0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAll= VmStats succeeded in 0.00 seconds (__init__:515)2017-07-03 09:50:43,548+080= 0 INFO (jsonrpc/3) [dispatcher] Run and protect: repoStats(options=3DNone) = (logUtils:51)2017-07-03 09:50:43,548+0800 INFO (jsonrpc/3) [dispatcher] Run= and protect: repoStats, Return response: {u'e01186c1-7e44-4808-b551- 4722f= 0f8e84b': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'dela= y': '0.000144822', 'lastCheck': '8.9', 'valid': True}, u'721b5233-b0ba-4722= -8a7d- ba2a372190a0': {'code': 0, 'actual': True, 'version': 4, 'acquired':= True, 'delay': '0.000327909', 'lastCheck': '8.9', 'valid': True}, u'94775b= d3-3244-45b4-8a06- 37eff8856afa': {'code': 0, 'actual': True, 'version': 4,= 'acquired': True, 'delay': '0.000256425', 'lastCheck': '8.9', 'valid': Tru= e}, u'731bb771-5b73-4b5c-ac46- 56499df97721': {'code': 0, 'actual': True, '= version': 0, 'acquired': True, 'delay': '0.000238159', 'lastCheck': '8.9', = 'valid': True}, u'f620781f-93d4-4410-8697- eb41045cacd6': {'code': 0, 'actu= al': True, 'version': 4, 'acquired': True, 'delay': '0.00022004', 'lastChec= k': '8.9', 'valid': True}, 
u'a1a7d0a4-e3b6-4bd5-862b- 96e70dae3f29': {'code= ': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000298581= ', 'lastCheck': '8.8', 'valid': True}} (logUtils:54)2017-07-03 09:50:43,563= +0800 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succe= eded in 0.01 seconds (__init__:515)2017-07-03 09:50:46,737+0800 INFO (perio= dic/3) [dispatcher] Run and protect: getVolumeSize(sdUUID=3Du' 721b5233-b0b= a-4722-8a7d- ba2a372190a0', spUUID=3Du'b04ca6e4-2660-4eaa- acdb-c1dae4e21f2= d', imgUUID=3Du'3c26476e-1dae-44d7- 9208-531b91ae5ae1', volUUID=3Du'a7e789f= b-6646-4d0a- 9b51-f5ab8242c8d5', options=3DNone) (logUtils:51)2017-07-03 09= :50:46,738+0800 INFO (periodic/0) [dispatcher] Run and protect: getVolumeSi= ze(sdUUID=3Du' f620781f-93d4-4410-8697- eb41045cacd6', spUUID=3Du'b04ca6e4-= 2660-4eaa- acdb-c1dae4e21f2d', imgUUID=3Du'2158fdae-54e1-413d- a844-73da5d1= bb4ca', volUUID=3Du'6ee0b0eb-0bba-4e18- 9c00-c1539b632e8a', options=3DNone)= (logUtils:51)2017-07-03 09:50:46,740+0800 INFO (periodic/2) [dispatcher] R= un and protect: getVolumeSize(sdUUID=3Du' f620781f-93d4-4410-8697- eb41045c= acd6', spUUID=3Du'b04ca6e4-2660-4eaa- acdb-c1dae4e21f2d', imgUUID=3Du'a9670= 16d-a56b-41e8- b7a2-57903cbd2825', volUUID=3Du'784514cb-2b33-431c- b193-045= f23c596d8', options=3DNone) (logUtils:51)2017-07-03 09:50:46,741+0800 INFO = (periodic/1) [dispatcher] Run and protect: getVolumeSize(sdUUID=3Du' 721b52= 33-b0ba-4722-8a7d- ba2a372190a0', spUUID=3Du'b04ca6e4-2660-4eaa- acdb-c1dae= 4e21f2d', imgUUID=3Du'bb35c163-f068-4f08- a1c2-28c4cb1b76d9', volUUID=3Du'f= ce7e0a0-7411-4d8c- b72c-2f46c4b4db1e', options=3DNone) (logUtils:51)2017-07= -03 09:50:46,743+0800 INFO (periodic/0) [dispatcher] Run and protect: getVo= lumeSize, Return response: {'truesize': '6361276416', 'apparentsize': '1073= 74182400'} (logUtils:54) ............ 
2017-07-03 09:52:16,941+0800 INFO (libvirt/events) [virt.vm] (vmId=3D'c84f= 519e-398d-40a3- 85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 err= or eio (vm:4112) 2017-07-03 09:52:16,941+0800 INFO (libvirt/events) [virt.vm] (vmId=3D'c84f= 519e-398d-40a3- 85b2-b7e53f3d7f67') CPU stopped: onIOError (vm:4997) 2017-07-03 09:52:16,942+0800 INFO (libvirt/events) [virt.vm] (vmId=3D'c84f= 519e-398d-40a3- 85b2-b7e53f3d7f67') CPU stopped: onSuspend (vm:4997) 2017-07-03 09:52:16,942+0800 INFO (libvirt/events) [virt.vm] (vmId=3D'c84f= 519e-398d-40a3- 85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 err= or eio (vm:4112) 2017-07-03 09:52:16,943+0800 INFO (libvirt/events) [virt.vm] (vmId=3D'c84f= 519e-398d-40a3- 85b2-b7e53f3d7f67') CPU stopped: onIOError (vm:4997) 2017-07-03 09:52:16,943+0800 INFO (libvirt/events) [virt.vm] (vmId=3D'c84f= 519e-398d-40a3- 85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 err= or eio (vm:4112) 2017-07-03 09:52:16,944+0800 INFO (libvirt/events) [virt.vm] (vmId=3D'c84f= 519e-398d-40a3- 85b2-b7e53f3d7f67') CPU stopped: onIOError On Thursday, June 22, 2017, 2:48 PM, Yaniv Kaul <ykaul@redhat.com> wrote: On Thu, Jun 22, 2017 at 5:07 AM, M Mahboubian <m_mahboubian@yahoo.com> wrot= e: Dear all,I appreciate if anybody could possibly help with the issue I am fa= cing. In our environment we have 2 hosts 1 NFS server and 1 ovirt engine server. = The NFS server provides storage to the VMs in the hosts. I can create new VMs and install os but once i do something like yum update= the VM freezes. I can reproduce this every single time I do yum update. Is it paused, or completely frozen?=C2=A0 what information/log files should I provide you to trubleshoot this? Versions of all the components involved - guest OS, host OS (qemu-kvm versi= on), how do you run the VM (vdsm log would be helpful here), exact storage = specification (1Gb or 10Gb link? What is the NFS version? What is it hosted= on? etc.)=C2=A0Y. 
>> Regards

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi Yaniv,

I just wanted to share something with you regarding the issue I am facing.

Basically, I found out that if I choose IDE for the disk interface instead of VirtIO while creating a new VM, everything works perfectly without any issue.

Regarding your suggestion to switch to the CentOS kernel: unfortunately we are not able to do that, because we cannot afford downtime for the existing running VMs.

With best regards
M. Mahboubian

On Monday, July 3, 2017, 4:49 PM, Yaniv Kaul <ykaul@redhat.com> wrote:

> On Mon, Jul 3, 2017 at 11:00 AM, M Mahboubian <m_mahboubian@yahoo.com> wrote:
>
>> Hi Yaniv,
>>
>> [...] the oVirt VMs. This is how we did it precisely: first we created the VMs in oVirt with disks of the same size as the existing disks. Then we deleted the disks generated by oVirt and renamed our existing disks to match the deleted ones naming-wise. Finally we started the oVirt VMs; they were able to run, and these VMs have always been fine, without any issue.
>>
>> The new VMs which have the problem were created from scratch (no template). One thing, though: all of these new VMs are based on a CentOS 7 ISO. We have not tried any other flavor of Linux.
>>
>> The 4.1 kernel is actually from the Oracle Linux repository, since we needed OCFS2 support. So after installing oVirt we updated the kernel to the Oracle Linux 4.1 kernel, because that kernel supports OCFS2.
>
> Would it be possible to test with the regular CentOS kernel? Just to ensure it's not the kernel causing this?
> Y.
>
>> | We might need to get libvirt debug logs (and perhaps journal output of the host).
>>
>> I'll get this information and post it here.
>>
>> Regards
>>
>> On Monday, July 3, 2017 3:01 PM, Yaniv Kaul <ykaul@redhat.com> wrote:
>>
>>> On Mon, Jul 3, 2017 at 6:49 AM, M Mahboubian <m_mahboubian@yahoo.com> wrote:
>>>
>>>> Hi Yaniv,
>>>>
>>>> The guest VM becomes completely frozen and non-responsive as soon as its disk has any activity, for example when we shut down or do a yum update.
>>>>
>>>>> Versions of all the components involved - guest OS, host OS (qemu-kvm version), how do you run the VM (vdsm log would be helpful here), exact storage specification (1Gb or 10Gb link? What is the NFS version? What is it hosted on? etc.)
>>>>> Y.
>>>>
>>>> Some facts about our environment:
>>>>
>>>> 1) Previously this environment was using Xen with raw disks, and we changed it to oVirt (oVirt was able to read the VMs' disks without any conversion).
>>>
>>> Interesting - what interface are they using? Is that raw or raw sparse? How did you perform the conversion? (or no conversion - just copied the disks over?)
>>>
>>>> 2) The issue we are facing is not happening for any of the existing VMs.
>>>> 3) This issue only happens for new VMs.
>>>> [...]
>>>> qemu-kvm-common-ev-2.6.0-28.el7_3.6.1.x86_64
>>>> qemu-kvm-tools-ev-2.6.0-28.el7_3.6.1.x86_64
>>>> qemu-kvm-ev-2.6.0-28.el7_3.6.1.x86_64
>>>
>>> That's good - that's almost the latest-greatest.
>>>
>>>> 8) The storage is from a SAN device which is connected to the NFS server using Fibre Channel.
>>>>
>>>> So for example it also froze during shutdown, and the event section shows something like this:
>>>>
>>>> VM ILMU_WEB has been paused due to storage I/O problem.
>>>
>>> We might need to get libvirt debug logs (and perhaps journal output of the host).
>>> Y.
>>>
>>>> More information:
>>>>
>>>> VDSM log at the time of this issue (the issue happened at Jul 3, 2017 9:50:43 AM):
>>>>
>>>> [same repoStats/getVolumeSize excerpt as quoted at the top of this thread]
>>>>
>>>> [...]
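Yaniv's request for "libvirt debug logs (and perhaps journal output of the host)" usually maps to two settings in /etc/libvirt/libvirtd.conf plus a journal dump. A minimal sketch (the filter set below is the commonly recommended one from the libvirt logging documentation; it is written to a temp file here rather than the live config, and the follow-up commands are shown only as comments):

```shell
# Sketch: libvirtd.conf lines typically added to capture libvirt debug
# logs. Written to a temp file so this stays harmless; on the host you
# would append them to /etc/libvirt/libvirtd.conf and then run:
#   systemctl restart libvirtd
#   journalctl -u libvirtd -u vdsmd --since "2017-07-03 09:45" > host-journal.txt
conf="$(mktemp)"
cat > "$conf" <<'EOF'
log_filters="1:qemu 1:libvirt 3:object 3:json 3:event 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
EOF
cat "$conf"
```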
participants (2)
- M Mahboubian
- Yaniv Kaul