[Users] Problem running virtual machines

Good evening,
I am running oVirt 3.
I am not sure if the source of the problem is oVirt or KVM/qemu, but the machines have been running really slowly for some days now. There are some warnings regarding iSCSI storage latency in the oVirt portal, but the behaviour is also present with NFS storage.
There are no visible errors in either engine.log or vdsm.log. /var/log/messages on the node shows, over and over:
/usr/sbin/irqbalance: Load average increasing, re-enabling all cpus for irq balancing
and, after a while, I/O waits on the node rise over 50%. This finally leads to a disconnection of the SPM node and a non-operational state.
Has anyone any ideas why this happens?
Regards,
Jose Garcia

Apologies, version is 3.1.0-0.1.20120620git6ef9f8.fc17
On 07/26/2012 05:18 PM, jose garcia wrote:
Good evening,
I am running oVirt 3.
I am not sure if the source of the problem is oVirt or KVM/qemu, but the machines have been running really slowly for some days now. There are some warnings regarding iSCSI storage latency in the oVirt portal, but the behaviour is also present with NFS storage.
There are no visible errors in either engine.log or vdsm.log. /var/log/messages on the node shows, over and over:
/usr/sbin/irqbalance: Load average increasing, re-enabling all cpus for irq balancing
and, after a while, I/O waits on the node rise over 50%. This finally leads to a disconnection of the SPM node and a non-operational state.
Has anyone any ideas why this happens?
Regards,
Jose Garcia

On 07/26/2012 07:38 PM, jose garcia wrote:
Apologies, version is 3.1.0-0.1.20120620git6ef9f8.fc17
On 07/26/2012 05:18 PM, jose garcia wrote:
Good evening,
I am running oVirt 3.
I am not sure if the source of the problem is oVirt or KVM/qemu, but the machines have been running really slowly for some days now. There are some warnings regarding iSCSI storage latency in the oVirt portal, but the behaviour is also present with NFS storage.
There are no visible errors in either engine.log or vdsm.log. /var/log/messages on the node shows, over and over:
/usr/sbin/irqbalance: Load average increasing, re-enabling all cpus for irq balancing
what type of storage? what networking from host to storage?
and, after a while, I/O waits on the node rise over 50%. This finally leads to a disconnection of the SPM node and a non-operational state.
Has anyone any ideas why this happens?
Regards,
Jose Garcia

On 07/27/2012 09:36 AM, Itamar Heim wrote:
On 07/26/2012 07:38 PM, jose garcia wrote:
Apologies, version is 3.1.0-0.1.20120620git6ef9f8.fc17
On 07/26/2012 05:18 PM, jose garcia wrote:
Good evening,
I am running oVirt 3.
I am not sure if the source of the problem is oVirt or KVM/qemu, but the machines have been running really slowly for some days now. There are some warnings regarding iSCSI storage latency in the oVirt portal, but the behaviour is also present with NFS storage.
There are no visible errors in either engine.log or vdsm.log. /var/log/messages on the node shows, over and over:
/usr/sbin/irqbalance: Load average increasing, re-enabling all cpus for irq balancing
what type of storage? what networking from host to storage?
Good morning,
I have two nodes for an iSCSI datacenter and another one for an NFS datacenter. The machines run slowly whichever datacenter or node is used (all run Fedora 17, recently updated).
Networking is basic: only the ovirtmgmt bridge and one network interface are used per node.
What is strange is that the VMs seem to run without any problem, they just take more time to perform any activity. That, of course, causes problems for ovirt-engine, and the delays tend to set a node in a non-operational state, especially when installing.
and, after a while, I/O waits on the node rise over 50%. This finally leads to a disconnection of the SPM node and a non-operational state.
Has anyone any ideas why this happens?
Regards,
Jose Garcia

On 07/27/2012 12:15 PM, jose garcia wrote:
On 07/27/2012 09:36 AM, Itamar Heim wrote:
On 07/26/2012 07:38 PM, jose garcia wrote:
Apologies, version is 3.1.0-0.1.20120620git6ef9f8.fc17
On 07/26/2012 05:18 PM, jose garcia wrote:
Good evening,
I am running oVirt 3.
I am not sure if the source of the problem is oVirt or KVM/qemu, but the machines have been running really slowly for some days now. There are some warnings regarding iSCSI storage latency in the oVirt portal, but the behaviour is also present with NFS storage.
There are no visible errors in either engine.log or vdsm.log. /var/log/messages on the node shows, over and over:
/usr/sbin/irqbalance: Load average increasing, re-enabling all cpus for irq balancing
what type of storage? what networking from host to storage?
Good morning,
I have two nodes for an iSCSI datacenter and another one for an NFS datacenter. The machines run slowly whichever datacenter or node is used (all run Fedora 17, recently updated).
Networking is basic: only the ovirtmgmt bridge and one network interface are used per node.
What is strange is that the VMs seem to run without any problem, they just take more time to perform any activity. That, of course, causes problems for ovirt-engine, and the delays tend to set a node in a non-operational state, especially when installing.
VMs running slowly doesn't affect ovirt-engine. Slow storage affects the SPM, which in turn affects ovirt-engine. What is the network bandwidth between the nodes and the storage? What is its utilization? How many spindles are on the storage server? What type of storage server is it?
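A minimal sketch of a host-side check for that, assuming a large file is available on the storage domain mount (vdsm typically mounts file domains under /rhev/data-center/mnt/ on oVirt 3.1 hosts); the page cache will flatter repeated runs, so treat the figure as indicative only:
#!/usr/bin/env python
# Rough sequential-read throughput check from the host's point of view.
# Pass the path of any large file on the storage domain mount as the
# first argument (for file domains, vdsm mounts under /rhev/data-center/mnt/).
import sys
import time

def read_throughput(path, chunk=1024 * 1024):
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    elapsed = time.time() - start
    return total / elapsed / (1024.0 * 1024.0)  # MiB/s

if __name__ == "__main__":
    print("%.1f MiB/s" % read_throughput(sys.argv[1]))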
and, after a while, I/O waits on the node rise over 50%. This finally leads to a disconnection of the SPM node and a non-operational state.
Has anyone any ideas why this happens?
Regards,
Jose Garcia

On 07/27/2012 11:51 AM, Itamar Heim wrote:
On 07/27/2012 12:15 PM, jose garcia wrote:
On 07/27/2012 09:36 AM, Itamar Heim wrote:
On 07/26/2012 07:38 PM, jose garcia wrote:
Apologies, version is 3.1.0-0.1.20120620git6ef9f8.fc17
On 07/26/2012 05:18 PM, jose garcia wrote:
Good evening,
I am running oVirt 3.
I am not sure if the source of the problem is oVirt or KVM/qemu, but the machines have been running really slowly for some days now. There are some warnings regarding iSCSI storage latency in the oVirt portal, but the behaviour is also present with NFS storage.
There are no visible errors in either engine.log or vdsm.log. /var/log/messages on the node shows, over and over:
/usr/sbin/irqbalance: Load average increasing, re-enabling all cpus for irq balancing
what type of storage? what networking from host to storage?
Good morning,
I have two nodes for an iSCSI datacenter and another one for an NFS datacenter. The machines run slowly whichever datacenter or node is used (all run Fedora 17, recently updated).
Networking is basic: only the ovirtmgmt bridge and one network interface are used per node.
What is strange is that the VMs seem to run without any problem, they just take more time to perform any activity. That, of course, causes problems for ovirt-engine, and the delays tend to set a node in a non-operational state, especially when installing.
VMs running slowly doesn't affect ovirt-engine. Slow storage affects the SPM, which in turn affects ovirt-engine. What is the network bandwidth between the nodes and the storage? What is its utilization? How many spindles are on the storage server? What type of storage server is it?
Yes, you are right. Utilization and r_await are high. I will perform some tests in order to determine what has changed. There are no network problems I am aware of. If it turns out to be something related to the OS, I will let you know. Thank you.
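For reference, a minimal sketch of one such test: sample /proc/diskstats twice and derive the same r_await and utilization figures that iostat -x reports (the device name "sda" and the 5-second interval are assumptions; adjust them to the disks backing the storage):
#!/usr/bin/env python
# Derive average read wait (r_await) and utilization for one block device
# from two samples of /proc/diskstats, roughly what iostat -x reports.
import time

def read_diskstats(device):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                # fields[3]: reads completed, fields[6]: ms spent reading,
                # fields[12]: ms spent doing I/O (drives the %util figure)
                return int(fields[3]), int(fields[6]), int(fields[12])
    raise ValueError("device %s not found in /proc/diskstats" % device)

def sample(device="sda", interval=5.0):
    reads1, read_ms1, io_ms1 = read_diskstats(device)
    time.sleep(interval)
    reads2, read_ms2, io_ms2 = read_diskstats(device)
    reads = reads2 - reads1
    r_await = (read_ms2 - read_ms1) / float(reads) if reads else 0.0
    util = 100.0 * (io_ms2 - io_ms1) / (interval * 1000.0)
    print("%s: r_await %.1f ms, util %.1f%%" % (device, r_await, util))

if __name__ == "__main__":
    sample()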
and, after a while, I/O waits on the node rise over 50%. This finally leads to a disconnection of the SPM node and a non-operational state.
Has anyone any ideas why this happens?
Regards,
Jose Garcia
participants (2): Itamar Heim, jose garcia