<div dir="ltr"><div>Hi<br><br></div>I add a similar issue on fedora18, I fixed it :<br><div><div> yum update --enablerepo=updates-testing systemd-197-1.fc18.2<br></div><div>Do know if it can help.<br><br></div><div>Kevin<br>
Kevin

2013/6/14 Cuongds <cuongds.hut@gmail.com>:
<div class="HOEnZb"><div class="h5">Nicolas Ecarnot <nicolas@...> writes:<br>
<br>
><br>
> Le 20/04/2013 22:55, Itamar Heim a écrit :<br>
> > On 03/27/2013 10:38 AM, Nicolas Ecarnot wrote:<br>
> >> Le 26/03/2013 12:17, Nicolas Ecarnot a écrit :<br>
> >>> Le 25/03/2013 12:10, Nicolas Ecarnot a écrit :<br>
> >>>> Le 24/03/2013 09:53, Dafna Ron a écrit :<br>
> >>>>> is the vm preallocated or thin provision disk type?<br>
> >>>><br>
> >>>> This VM has 3 disks :<br>
> >>>> - first disk to host the windows system : Thin provision<br>
> >>>> - second disk to store some data : Preallocated<br>
> >>>> - third disk to store some more data : Thin provision<br>
> >>>><br>
> >>>> I'm realizing that amongst the 15 VMs, only this one and another one<br>
> >>>> that is stopped are using preallocated disks.<br>
> >>>> I'm regularly migrating some VMs (and stopping and starting and<br>
playing<br>
> >>>> with them) with no issue, and they all are using thin provisioned<br>
> >>>> disks!<br>
> >>>><br>
> >>>> Could this be a common factor of the problem?<br>
> >>>><br>
> >>>>><br>
> >>>>> also, can you please attach engine, vdsm, libvirt and the vm's qemu<br>
> >>>>> logs?<br>
> >>>><br>
> >>>> Relevant logs :<br>
> >>>><br>
> >>>> ############<br>
> >>>><br>
> >>>> Ok, I'm in the process of collecting the logs and posting them in a<br>
> >>>> useable manner.<br>
> >>>><br>
> >>>> More to come.<br>
> >>><br>
> >>> Ok, once again, I ran a test and observed the relevant logs.<br>
> >>> I tried to isolate the time frames, but it may be long for vdsm.log<br>
> >>><br>
> >>> Here they are :<br>
> >>> * /var/log/libvirt/qemu/serv-chk-adm3.log<br>
> >>> <a href="http://pastebin.com/JVKMSmxD" target="_blank">http://pastebin.com/JVKMSmxD</a><br>
> >>> * /var/log/libvirtd.log<br>
> >>> <a href="http://pastebin.com/sWGDCqNh" target="_blank">http://pastebin.com/sWGDCqNh</a><br>
> >>> * /var/log/vdsm/vdsm.log (the BIG one)<br>
> >>> <a href="http://pastebin.com/bevTEhym" target="_blank">http://pastebin.com/bevTEhym</a><br>
> >>><br>
> >>> What I can add to help you help me is that:
> >>> - I saw that all my VMs appear as tainted. I did not know what that
> >>> meant (but RTFMed since), and this does not appear to disturb the
> >>> other VMs.
> >>> - Many VMs, including the problematic one, have been imported via
> >>> ovirt-v2v with no such issue.
> >>> - This particular VM was also imported, but the starting point was a
> >>> vmdk or ova single file.
> >>> - Two additional data disks were added.
> >>> - As I said, this is the only running VM stored as preallocated.
> >>>
> >>> Regards,
> >>>
> >>
> >> One suggestion: I see no obvious errors in the log files. Could this
> >> paused state happen due to a VM's kernel panic?
> >>
> >
> > is this still relevant?
>
> It is!
> Further investigation by my colleague showed the following facts:
> - This VM has 3 disks. Only one of those disks is responsible for the
> problem.
> - On this disk, my coworker has found only 3 files (database files) that
> he can do nothing with without triggering the freeze.
> - He tried to cat them into /dev/null, and this leads to the freeze.
> - He tried to copy them onto another disk -> freeze!
>
> We see absolutely no evidence of a kernel panic.
> Rather, this seems to be related to a network bottleneck between the
> node and the iSCSI SAN, leaving oVirt unable to sustain sufficient
> bandwidth and freezing the VM.
>
> Since then, we moved to another solution, but for the sake of open source
> debugging, we did keep the faulty VM for your eyes only :)
>
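A rough way to sanity-check the node-to-SAN bandwidth hypothesis quoted above is to read the backing device directly on the host. This is only a sketch; the device path is a placeholder for whatever LV actually backs the problematic disk:

 # Read 1 GiB straight from the backing LV, bypassing the page cache,
 # and report the achieved throughput
 dd if=/dev/<vg>/<lv_of_problem_disk> of=/dev/null bs=1M count=1024 iflag=direct
 # In a second shell, watch per-device utilisation while the read runs
 iostat -xm 2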
Hi, does anyone have an answer? I am hitting the same issue. I created a new
VM and it cannot start on the oVirt node: it stays in the "Waiting for launch"
state for a long time and never comes up.
Here is the vdsm log:
VM Channels Listener::DEBUG::2013-06-14 18:32:32,504::vmChannels::104::vds::(_handle_unconnected) Trying to connect fileno 37.
VM Channels Listener::DEBUG::2013-06-14 18:32:32,504::guestIF::95::vm.Vm::(_connect) vmId=`ffd60b1c-9a3c-4853-88aa-7973f9756c96`::Attempting connection to /var/lib/libvirt/qemu/channels/4000570-01.com.redhat.rhevm.vdsm
VM Channels Listener::DEBUG::2013-06-14 18:32:33,505::vmChannels::104::vds::(_handle_unconnected) Trying to connect fileno 32.
VM Channels Listener::DEBUG::2013-06-14 18:32:33,505::guestIF::95::vm.Vm::(_connect) vmId=`187f61c9-d81f-491a-b5f0-4798ec6c8342`::Attempting connection to /var/lib/libvirt/qemu/channels/4000565-01.com.redhat.rhevm.vdsm
VM Channels Listener::DEBUG::2013-06-14 18:32:33,505::vmChannels::104::vds::(_handle_unconnected) Trying to connect fileno 33.
VM Channels Listener::DEBUG::2013-06-14 18:32:33,505::guestIF::95::vm.Vm::(_connect) vmId=`6c3074ae-c752-4622-94e7-a4ca09b252f7`::Attempting connection to /var/lib/libvirt/qemu/channels/4000563-02.com.redhat.rhevm.vdsm
VM Channels Listener::DEBUG::2013-06-14 18:32:33,506::vmChannels::104::vds::(_handle_unconnected) Trying to connect fileno 35.
VM Channels Listener::DEBUG::2013-06-14 18:32:33,506::guestIF::95::vm.Vm::(_connect) vmId=`20f144cd-f027-4710-a433-dcdc62eec554`::Attempting connection to /var/lib/libvirt/qemu/channels/4000568-01.com.redhat.rhevm.vdsm
VM Channels Listener::DEBUG::2013-06-14 18:32:33,506::vmChannels::104::vds::(_handle_unconnected) Trying to connect fileno 37.
VM Channels Listener::DEBUG::2013-06-14 18:32:33,506::guestIF::95::vm.Vm::(_connect) vmId=`ffd60b1c-9a3c-4853-88aa-7973f9756c96`::Attempting connection to /var/lib/libvirt/qemu/channels/4000570-01.com.redhat.rhevm.vdsm
VM Channels Listener::DEBUG::2013-06-14 18:32:34,507::vmChannels::104::vds::(_handle_unconnected) Trying to connect fileno 32.
VM Channels Listener::DEBUG::2013-06-14 18:32:34,508::guestIF::95::vm.Vm::(_connect) vmId=`187f61c9-d81f-491a-b5f0-4798ec6c8342`::Attempting connection to /var/lib/libvirt/qemu/channels/4000565-01.com.redhat.rhevm.vdsm
VM Channels Listener::DEBUG::2013-06-14 18:32:34,508::vmChannels::104::vds::(_handle_unconnected) Trying to connect fileno 33.
VM Channels Listener::DEBUG::2013-06-14 18:32:34,508::guestIF::95::vm.Vm::(_connect) vmId=`6c3074ae-c752-4622-94e7-a4ca09b252f7`::Attempting connection to /var/lib/libvirt/qemu/channels/4000563-02.com.redhat.rhevm.vdsm
VM Channels Listener::DEBUG::2013-06-14 18:32:34,508::vmChannels::104::vds::(_handle_unconnected) Trying to connect fileno 35.
VM Channels Listener::DEBUG::2013-06-14 18:32:34,509::guestIF::95::vm.Vm::(_connect) vmId=`20f144cd-f027-4710-a433-dcdc62eec554`::Attempting connection to /var/lib/libvirt/qemu/channels/4000568-01.com.redhat.rhevm.vdsm
VM Channels Listener::DEBUG::2013-06-14 18:32:34,509::vmChannels::104::vds::(_handle_unconnected) Trying to connect fileno 37.
VM Channels Listener::DEBUG::2013-06-14 18:32:34,509::guestIF::95::vm.Vm::(_connect) vmId=`ffd60b1c-9a3c-4853-88aa-7973f9756c96`::Attempting connection to /var/lib/libvirt/q
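The log above shows the VM Channels Listener repeatedly retrying the guest-agent channel sockets. A quick, hedged check is to verify that the sockets exist on the host and that the agent is running in the guest; the socket path is taken from the log, the guest service name is an assumption:

 # On the host: list the channel sockets vdsm keeps trying to connect to
 ls -l /var/lib/libvirt/qemu/channels/
 # Inside the guest: check that the oVirt guest agent is installed and running
 systemctl status ovirt-guest-agent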
<div class="HOEnZb"><div class="h5"><br>
<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
--
Kevin Mazière
Responsable Infrastructure
Alter Way – Hosting
1 rue Royal - 227 Bureaux de la Colline
92213 Saint-Cloud Cedex
Tél : +33 (0)1 41 16 38 41
Mob : +33 (0)7 62 55 57 05
http://www.alterway.fr