Re: [ovirt-users] Some VMs in status "not responding" in oVirt interface


On 15.09.2015 21:31, Markus Stockhausen wrote:
Hi Christian,
I am thinking of a package similar to this:
qemu-debuginfo.x86_64 2:2.1.3-10.fc21
That allows gdb to show information about backtrace symbols. See comment 12 of https://bugzilla.redhat.com/show_bug.cgi?id=1262251. It makes error search much simpler, especially if qemu hangs.
Markus, thanks for the BZ. I think I do see the same issue. Actually, my VM (puppetmaster) is currently the only one with a live snapshot, and it does a lot of I/O. Christian, maybe this BZ 1262251 is also applicable? I'll go ahead and delete the live snapshot. If I see this issue again I will submit the trace to your BZ.
Markus
**********************************
From: Christian Hailer [christian@hailer.eu]
Sent: Tuesday, 15 September 2015 21:24
To: Markus Stockhausen; 'Daniel Helgenberger'
Cc: ydary@redhat.com; users@ovirt.org
Subject: AW: [ovirt-users] Some VMs in status "not responding" in oVirt interface
Hi Markus,
gdb is available on CentOS 7, but what do you mean by qemu-debug? I installed qemu-kvm-tools; maybe this is the CentOS equivalent?
qemu-kvm-tools.x86_64 : KVM debugging and diagnostics tools
qemu-kvm-tools-ev.x86_64 : KVM debugging and diagnostics tools
qemu-kvm-tools-rhev.x86_64 : KVM debugging and diagnostics tools
Regards, Christian
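A minimal sketch of pulling in actual qemu debug symbols on CentOS 7 with yum-utils, assuming the matching debuginfo repository is enabled (the qemu-kvm-tools packages above ship diagnostic utilities, not the debug symbols gdb needs):

    # install yum-utils for debuginfo-install, then fetch the debug symbols
    # for the qemu build that is actually installed (base, -ev or -rhev)
    yum -y install yum-utils
    debuginfo-install -y qemu-kvm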
From: Markus Stockhausen [mailto:stockhausen@collogia.de]
Sent: Tuesday, 15 September 2015 20:40
To: Daniel Helgenberger <daniel.helgenberger@m-box.de>
Cc: Christian Hailer <christian@hailer.eu>; ydary@redhat.com; users@ovirt.org
Subject: Re: [ovirt-users] Some VMs in status "not responding" in oVirt interface
Do you have a chance to install qemu-debug? If yes, I would try a backtrace:
gdb -p <qemu-pid>
# bt
Markus
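For reference, a non-interactive way to capture such a backtrace, as a sketch: it assumes a single qemu-kvm process on the host; with several VMs running, substitute the PID of the affected one.

    # attach to the hung qemu-kvm process and dump the backtraces of all threads
    gdb -p "$(pidof qemu-kvm)" -batch -ex 'thread apply all bt' > /tmp/qemu-backtrace.txt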
On 15.09.2015 at 4:15 PM, Daniel Helgenberger <daniel.helgenberger@m-box.de> wrote:
Hello,
I do not want to hijack the thread but maybe my issue is related?
It might have started with ovirt 3.5.3; but I cannot tell for sure.
For me, one VM (foreman) is affected; this is the second time in 14 days. I can confirm this as I also lose any network connection to the VM and the ability to connect to a console.
Also, the only thing which 'fixes' the issue right now is 'kill -9 <pid of the qemu-kvm process>'.
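A minimal sketch of that last-resort workaround, assuming the VM name (here: foreman) appears on the qemu-kvm command line:

    # find the qemu-kvm process that runs the affected VM
    pid=$(pgrep -f 'qemu-kvm.*foreman')
    # force-kill it; oVirt should then report the VM as Down so it can be restarted
    kill -9 "$pid"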
As far as I can tell the VM became unresponsive at around Sep 15 12:30:01; engine logged this at 12:34. Nothing obvious in VDSM logs (see
attached).
Below the engine.log part.
Versions:
ovirt-engine-3.5.4.2-1.el7.centos.noarch
vdsm-4.16.26-0.el7.centos
libvirt-1.2.8-16.el7_1.3
engine.log (12:00 - 13:00):
2015-09-15 12:03:47,949 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-56) [264d502a] HA
reservation status for cluster Default is OK
2015-09-15 12:08:02,708 INFO [org.ovirt.engine.core.bll.OvfDataUpdater] (DefaultQuartzScheduler_Worker-89) [2e7bf56e] Attempting to update
VMs/Templates Ovf.
2015-09-15 12:08:02,709 INFO [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-89)
[5e9f4ba6] Running command: ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected : ID:
00000002-0002-0002-0002-000000000088 Type: l
2015-09-15 12:08:02,780 INFO [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-89)
[5e9f4ba6] Lock freed to object EngineLock [exclusiveLocks= key: 00000002-0002-0002-0002-000000000088 value: OVF_UPDATE
2015-09-15 12:08:47,997 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-21) [3fc854a2] HA
reservation status for cluster Default is OK
2015-09-15 12:13:06,998 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] (org.ovirt.thread.pool-8-thread-48)
[50221cdc] START, GetFileStatsVDSCommand( storagePoolId = 00000002-0002-0002-0002-000000000088, ignoreFailoverLimit = false), log id: 1503968
2015-09-15 12:13:07,137 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] (org.ovirt.thread.pool-8-thread-48)
[50221cdc] FINISH, GetFileStatsVDSCommand, return: {pfSense-2.0-RELEASE-i386.iso={status=0, ctime=1432286887.0, size=115709952},
Fedora-15-i686-Live8
2015-09-15 12:13:07,178 INFO [org.ovirt.engine.core.bll.IsoDomainListSyncronizer] (org.ovirt.thread.pool-8-thread-48) [50221cdc] Finished
automatic refresh process for ISO file type with success, for storage domain id 84dcb2fc-fb63-442f-aa77-3e84dc7d5a72.
2015-09-15 12:13:48,043 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-87) [4fa1bb16] HA
reservation status for cluster Default is OK
2015-09-15 12:18:48,088 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-44) [6345e698] HA
reservation status for cluster Default is OK
2015-09-15 12:23:48,137 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-13) HA reservation
status for cluster Default is OK
2015-09-15 12:28:48,183 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-76) [154c91d5] HA
reservation status for cluster Default is OK
2015-09-15 12:33:48,229 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-36) [27c73ac6] HA
reservation status for cluster Default is OK
2015-09-15 12:34:49,432 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-41) [5f2a4b68] VM
foreman 8b57ff1d-2800-48ad-b267-fd8e9e2f6fb2 moved from Up --> NotResponding
2015-09-15 12:34:49,578 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-41)
[5f2a4b68] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM foreman is not responding.
2015-09-15 12:38:48,273 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-10) [7a800766] HA
reservation status for cluster Default is OK
2015-09-15 12:43:48,320 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-42) [440f1c40] HA
reservation status for cluster Default is OK
2015-09-15 12:48:48,366 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-70) HA reservation
status for cluster Default is OK
2015-09-15 12:53:48,412 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-12) [50221cdc] HA
reservation status for cluster Default is OK
2015-09-15 12:58:48,459 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-3) HA reservation
status for cluster Default is OK
On 29.08.2015 22:48, Christian Hailer wrote:
Hello,
last Wednesday I wanted to update my oVirt 3.5 hypervisor. It is a single Centos
7 server, so I started by suspending the VMs in order to set the oVirt engine
host to maintenance mode. During the process of suspending the VMs the server
crashed, kernel panic…
After restarting the server I installed the updates via yum and restarted the
server again. Afterwards, all the VMs could be started again. Some hours later
my monitoring system registered some unresponsive hosts, I had a look in the
oVirt interface, 3 of the VMs were in the state “not responding”, marked by a
question mark.
I tried to shut down the VMs, but oVirt wasn’t able to do so. I tried to reset
the status in the database with the sql statement
update vm_dynamic set status = 0 where vm_guid = (select vm_guid from vm_static
where vm_name = 'MYVMNAME');
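A sketch of running the statement above against the engine database, assuming the default database name 'engine' and local access as the postgres user:

    # on the oVirt engine host; stop the engine first or its monitoring may
    # overwrite the value again on the next refresh
    systemctl stop ovirt-engine
    su - postgres -c "psql engine -c \"update vm_dynamic set status = 0 \
      where vm_guid = (select vm_guid from vm_static where vm_name = 'MYVMNAME');\""
    systemctl start ovirt-engine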
but that didn’t help, either. Only rebooting the whole hypervisor helped…
afterwards everything worked again. But only for a few hours, then one of the
VMs entered the “not responding” state again… again only a reboot helped.
Yesterday it happened again:
2015-08-28 17:44:22,664 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-60) [4ef90b12] VM DC
0f3d1f06-e516-48ce-aa6f-7273c33d3491 moved from Up --> NotResponding
2015-08-28 17:44:22,692 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-60) [4ef90b12] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM DC is not responding.
Does anybody know what I can do? Where should I have a look? Hints are greatly
appreciated!
Thanks,
Christian
-- Daniel Helgenberger m box bewegtbild GmbH P: +49/30/2408781-22 F: +49/30/2408781-10 ACKERSTR. 19 D-10115 BERLIN www.m-box.de www.monkeymen.tv Geschäftsführer: Martin Retschitzegger / Michaela Göllner Handeslregister: Amtsgericht Charlottenburg / HRB 112767

Hi, just to get it straight: most of my VMs had one or more existing snapshots. Do you think this is a problem currently? If I understand it correctly, the BZ of Markus concerns only a short period of time while removing a snapshot, but my VMs stopped responding in the middle of the night without any interaction... I deleted all the snapshots, just in case :) My system has been running fine for nearly three days now; I'm not quite sure, but I think it helped that I changed the HDD and NIC of the Windows 2012 VMs to VirtIO devices...
Best regards, Christian
-----Original Message-----
From: Daniel Helgenberger [mailto:daniel.helgenberger@m-box.de]
Sent: Tuesday, 15 September 2015 22:24
To: Markus Stockhausen <stockhausen@collogia.de>; Christian Hailer <christian@hailer.eu>
Cc: ydary@redhat.com; users@ovirt.org
Subject: Re: AW: [ovirt-users] Some VMs in status "not responding" in oVirt interface
On 15.09.2015 21:31, Markus Stockhausen wrote:
Hi Christian,
I am thinking of a package similar to this:
qemu-debuginfo.x86_64 2:2.1.3-10.fc21
That allows gdb to show information about backtrace symbols. See comment 12 of https://bugzilla.redhat.com/show_bug.cgi?id=1262251. It makes error search much simpler, especially if qemu hangs.
Markus, thanks for the BZ. I think I do see the same issue. Actually, my VM (puppetmaster) is currently the only one with a live snapshot, and it does a lot of I/O. Christian, maybe this BZ 1262251 is also applicable? I'll go ahead and delete the live snapshot. If I see this issue again I will submit the trace to your BZ.
Markus
**********************************
From: Christian Hailer [christian@hailer.eu]
Sent: Tuesday, 15 September 2015 21:24
To: Markus Stockhausen; 'Daniel Helgenberger'
Cc: ydary@redhat.com; users@ovirt.org
Subject: AW: [ovirt-users] Some VMs in status "not responding" in oVirt interface
Hi Markus,
gdb is available on CentOS 7, but what do you mean by qemu-debug? I installed qemu-kvm-tools; maybe this is the CentOS equivalent?
qemu-kvm-tools.x86_64 : KVM debugging and diagnostics tools
qemu-kvm-tools-ev.x86_64 : KVM debugging and diagnostics tools
qemu-kvm-tools-rhev.x86_64 : KVM debugging and diagnostics tools
Regards, Christian
From: Markus Stockhausen [mailto:stockhausen@collogia.de]
Sent: Tuesday, 15 September 2015 20:40
To: Daniel Helgenberger <daniel.helgenberger@m-box.de>
Cc: Christian Hailer <christian@hailer.eu>; ydary@redhat.com; users@ovirt.org
Subject: Re: [ovirt-users] Some VMs in status "not responding" in oVirt interface
Do you have a chance to install qemu-debug? If yes, I would try a backtrace:
gdb -p <qemu-pid>
# bt
Markus
On 15.09.2015 at 4:15 PM, Daniel Helgenberger <daniel.helgenberger@m-box.de> wrote:
Hello,
I do not want to hijack the thread but maybe my issue is related?
It might have started with ovirt 3.5.3; but I cannot tell for sure.
For me, one VM (foreman) is affected; this is the second time in 14 days. I can confirm this as I also lose any network connection to the VM and the ability to connect to a console.
Also, the only thing which 'fixes' the issue right now is 'kill -9 <pid of the qemu-kvm process>'.
As far as I can tell the VM became unresponsive at around Sep 15 12:30:01; engine logged this at 12:34. Nothing obvious in VDSM logs (see
attached).
Below the engine.log part.
Versions:
ovirt-engine-3.5.4.2-1.el7.centos.noarch
vdsm-4.16.26-0.el7.centos
libvirt-1.2.8-16.el7_1.3
engine.log (12:00 - 13:00):
2015-09-15 12:03:47,949 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-56) [264d502a] HA
reservation status for cluster Default is OK
2015-09-15 12:08:02,708 INFO [org.ovirt.engine.core.bll.OvfDataUpdater] (DefaultQuartzScheduler_Worker-89) [2e7bf56e] Attempting to update
VMs/Templates Ovf.
2015-09-15 12:08:02,709 INFO [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-89)
[5e9f4ba6] Running command: ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected : ID:
00000002-0002-0002-0002-000000000088 Type: l
2015-09-15 12:08:02,780 INFO [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-89)
[5e9f4ba6] Lock freed to object EngineLock [exclusiveLocks= key: 00000002-0002-0002-0002-000000000088 value: OVF_UPDATE
2015-09-15 12:08:47,997 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-21) [3fc854a2] HA
reservation status for cluster Default is OK
2015-09-15 12:13:06,998 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] (org.ovirt.thread.pool-8-thread-48)
[50221cdc] START, GetFileStatsVDSCommand( storagePoolId = 00000002-0002-0002-0002-000000000088, ignoreFailoverLimit = false), log id: 1503968
2015-09-15 12:13:07,137 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] (org.ovirt.thread.pool-8-thread-48)
[50221cdc] FINISH, GetFileStatsVDSCommand, return: {pfSense-2.0-RELEASE-i386.iso={status=0, ctime=1432286887.0, size=115709952},
Fedora-15-i686-Live8
2015-09-15 12:13:07,178 INFO [org.ovirt.engine.core.bll.IsoDomainListSyncronizer] (org.ovirt.thread.pool-8-thread-48) [50221cdc] Finished
automatic refresh process for ISO file type with success, for storage domain id 84dcb2fc-fb63-442f-aa77-3e84dc7d5a72.
2015-09-15 12:13:48,043 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-87) [4fa1bb16] HA
reservation status for cluster Default is OK
2015-09-15 12:18:48,088 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-44) [6345e698] HA
reservation status for cluster Default is OK
2015-09-15 12:23:48,137 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-13) HA reservation
status for cluster Default is OK
2015-09-15 12:28:48,183 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-76) [154c91d5] HA
reservation status for cluster Default is OK
2015-09-15 12:33:48,229 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-36) [27c73ac6] HA
reservation status for cluster Default is OK
2015-09-15 12:34:49,432 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-41) [5f2a4b68] VM
foreman 8b57ff1d-2800-48ad-b267-fd8e9e2f6fb2 moved from Up --> NotResponding
2015-09-15 12:34:49,578 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-41)
[5f2a4b68] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM foreman is not responding.
2015-09-15 12:38:48,273 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-10) [7a800766] HA
reservation status for cluster Default is OK
2015-09-15 12:43:48,320 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-42) [440f1c40] HA
reservation status for cluster Default is OK
2015-09-15 12:48:48,366 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-70) HA reservation
status for cluster Default is OK
2015-09-15 12:53:48,412 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-12) [50221cdc] HA
reservation status for cluster Default is OK
2015-09-15 12:58:48,459 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-3) HA reservation
status for cluster Default is OK
On 29.08.2015 22:48, Christian Hailer wrote:
Hello,
last Wednesday I wanted to update my oVirt 3.5 hypervisor. It is a single Centos
7 server, so I started by suspending the VMs in order to set the oVirt engine
host to maintenance mode. During the process of suspending the VMs the server
crashed, kernel panic…
After restarting the server I installed the updates via yum and restarted the
server again. Afterwards, all the VMs could be started again. Some hours later
my monitoring system registered some unresponsive hosts, I had a look in the
oVirt interface, 3 of the VMs were in the state “not responding”, marked by a
question mark.
I tried to shut down the VMs, but oVirt wasn’t able to do so. I tried to reset
the status in the database with the sql statement
update vm_dynamic set status = 0 where vm_guid = (select vm_guid from vm_static
where vm_name = 'MYVMNAME');
but that didn’t help, either. Only rebooting the whole hypervisor helped…
afterwards everything worked again. But only for a few hours, then one of the
VMs entered the “not responding” state again… again only a reboot helped.
Yesterday it happened again:
2015-08-28 17:44:22,664 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-60) [4ef90b12] VM DC
0f3d1f06-e516-48ce-aa6f-7273c33d3491 moved from Up --> NotResponding
2015-08-28 17:44:22,692 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-60) [4ef90b12] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM DC is not responding.
Does anybody know what I can do? Where should I have a look? Hints are greatly
appreciated!
Thanks,
Christian
-- Daniel Helgenberger m box bewegtbild GmbH P: +49/30/2408781-22 F: +49/30/2408781-10 ACKERSTR. 19 D-10115 BERLIN www.m-box.de www.monkeymen.tv Geschäftsführer: Martin Retschitzegger / Michaela Göllner Handeslregister: Amtsgericht Charlottenburg / HRB 112767

Hi Christian,
you are right. These are different errors. In the past we had VM hangs or crashes from time to time.
So we implemented 2 features for mitigation. That helped to analyze a lot of problems.
1) Install debug packages to get backtraces in case of stalled qemu processes.
2) Set up the hosts to generate coredumps for qemu:
a) /etc/security/limits.conf
* soft core unlimited
b) /usr/lib/systemd/system/libvirtd.service, section Service
LimitCORE=infinity
c) create a new "before vdsm start" hook to disable the full qemu memory dump (see the sketch below):
libvirtpid=`ps -ef | grep libvirt | grep listen | awk '{ print $2 }'`
echo 0 > /proc/$libvirtpid/coredump_filter
Best regards.
Markus
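A sketch of the hook script from step 2c; the exact hook location depends on the vdsm version, so treat its placement as an assumption:

    #!/bin/bash
    # clear libvirtd's coredump filter; qemu processes spawned afterwards inherit it,
    # so their core dumps will not include the full guest memory
    libvirtpid=$(ps -ef | grep libvirt | grep listen | grep -v grep | awk '{ print $2 }')
    if [ -n "$libvirtpid" ]; then
        echo 0 > "/proc/$libvirtpid/coredump_filter"
    fi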

On 17.09.2015 07:39, Christian Hailer wrote:
Hi,
just to get it straight: most of my VMs had one or more existing snapshots. Do you think this is a problem currently? If I understand it correctly, the BZ of Markus concerns only a short period of time while removing a snapshot, but my VMs stopped responding in the middle of the night without any interaction... I deleted all the snapshots, just in case :) My system has been running fine for nearly three days now; I'm not quite sure, but I think it helped that I changed the HDD and NIC of the Windows 2012 VMs to VirtIO devices...
In my case these are Linux guests. So far I only had one VM with a live snapshot; the one showing the issue. This points in the direction of qemu-rhev; it might be independent of the guest OS. For testing I created a new guest just for this purpose; it has a live snapshot. Maybe you could do the same? To dig deeper into the issue: my storage is NFS3-backed.
Best regards, Christian
-----Original Message-----
From: Daniel Helgenberger [mailto:daniel.helgenberger@m-box.de]
Sent: Tuesday, 15 September 2015 22:24
To: Markus Stockhausen <stockhausen@collogia.de>; Christian Hailer <christian@hailer.eu>
Cc: ydary@redhat.com; users@ovirt.org
Subject: Re: AW: [ovirt-users] Some VMs in status "not responding" in oVirt interface
On 15.09.2015 21:31, Markus Stockhausen wrote:
Hi Christian,
I am thinking of a package similar to this:
qemu-debuginfo.x86_64 2:2.1.3-10.fc21
That allows gdb to show information about backtrace symbols. See comment 12 of https://bugzilla.redhat.com/show_bug.cgi?id=1262251. It makes error search much simpler, especially if qemu hangs.
Markus, thanks for the BZ. I think I do see the same issue. Actually, my VM (puppetmaster) is currently the only one with a live snapshot, and it does a lot of I/O.
Christian, maybe this BZ 1262251 is also applicable?
I'll go ahead and delete the live snapshot. If I see this issue again I will submit the trace to your BZ.
Markus
**********************************
From: Christian Hailer [christian@hailer.eu]
Sent: Tuesday, 15 September 2015 21:24
To: Markus Stockhausen; 'Daniel Helgenberger'
Cc: ydary@redhat.com; users@ovirt.org
Subject: AW: [ovirt-users] Some VMs in status "not responding" in oVirt interface
Hi Markus,
gdb is available on CentOS 7, but what do you mean by qemu-debug? I installed qemu-kvm-tools; maybe this is the CentOS equivalent?
qemu-kvm-tools.x86_64 : KVM debugging and diagnostics tools
qemu-kvm-tools-ev.x86_64 : KVM debugging and diagnostics tools
qemu-kvm-tools-rhev.x86_64 : KVM debugging and diagnostics tools
Regards, Christian
From: Markus Stockhausen [mailto:stockhausen@collogia.de]
Sent: Tuesday, 15 September 2015 20:40
To: Daniel Helgenberger <daniel.helgenberger@m-box.de>
Cc: Christian Hailer <christian@hailer.eu>; ydary@redhat.com; users@ovirt.org
Subject: Re: [ovirt-users] Some VMs in status "not responding" in oVirt interface
Do you have a chance to install qemu-debug? If yes, I would try a backtrace:
gdb -p <qemu-pid>
# bt
Markus
On 15.09.2015 at 4:15 PM, Daniel Helgenberger <daniel.helgenberger@m-box.de> wrote:
Hello,
I do not want to hijack the thread but maybe my issue is related?
It might have started with ovirt 3.5.3; but I cannot tell for sure.
For me, one VM (foreman) is affected; this is the second time in 14 days. I can confirm this as I also lose any network connection to the VM and the ability to connect to a console.
Also, the only thing which 'fixes' the issue right now is 'kill -9 <pid of the qemu-kvm process>'.
As far as I can tell the VM became unresponsive at around Sep 15 12:30:01; engine logged this at 12:34. Nothing obvious in VDSM logs (see
attached).
Below the engine.log part.
Versions:
ovirt-engine-3.5.4.2-1.el7.centos.noarch
vdsm-4.16.26-0.el7.centos
libvirt-1.2.8-16.el7_1.3
engine.log (12:00 - 13:00):
2015-09-15 12:03:47,949 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-56) [264d502a] HA
reservation status for cluster Default is OK
2015-09-15 12:08:02,708 INFO [org.ovirt.engine.core.bll.OvfDataUpdater] (DefaultQuartzScheduler_Worker-89) [2e7bf56e] Attempting to update
VMs/Templates Ovf.
2015-09-15 12:08:02,709 INFO [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-89)
[5e9f4ba6] Running command: ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected : ID:
00000002-0002-0002-0002-000000000088 Type: l
2015-09-15 12:08:02,780 INFO [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-89)
[5e9f4ba6] Lock freed to object EngineLock [exclusiveLocks= key: 00000002-0002-0002-0002-000000000088 value: OVF_UPDATE
2015-09-15 12:08:47,997 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-21) [3fc854a2] HA
reservation status for cluster Default is OK
2015-09-15 12:13:06,998 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] (org.ovirt.thread.pool-8-thread-48)
[50221cdc] START, GetFileStatsVDSCommand( storagePoolId = 00000002-0002-0002-0002-000000000088, ignoreFailoverLimit = false), log id: 1503968
2015-09-15 12:13:07,137 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] (org.ovirt.thread.pool-8-thread-48)
[50221cdc] FINISH, GetFileStatsVDSCommand, return: {pfSense-2.0-RELEASE-i386.iso={status=0, ctime=1432286887.0, size=115709952},
Fedora-15-i686-Live8
2015-09-15 12:13:07,178 INFO [org.ovirt.engine.core.bll.IsoDomainListSyncronizer] (org.ovirt.thread.pool-8-thread-48) [50221cdc] Finished
automatic refresh process for ISO file type with success, for storage domain id 84dcb2fc-fb63-442f-aa77-3e84dc7d5a72.
2015-09-15 12:13:48,043 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-87) [4fa1bb16] HA
reservation status for cluster Default is OK
2015-09-15 12:18:48,088 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-44) [6345e698] HA
reservation status for cluster Default is OK
2015-09-15 12:23:48,137 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-13) HA reservation
status for cluster Default is OK
2015-09-15 12:28:48,183 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-76) [154c91d5] HA
reservation status for cluster Default is OK
2015-09-15 12:33:48,229 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-36) [27c73ac6] HA
reservation status for cluster Default is OK
2015-09-15 12:34:49,432 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-41) [5f2a4b68] VM
foreman 8b57ff1d-2800-48ad-b267-fd8e9e2f6fb2 moved from Up --> NotResponding
2015-09-15 12:34:49,578 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-41)
[5f2a4b68] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM foreman is not responding.
2015-09-15 12:38:48,273 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-10) [7a800766] HA
reservation status for cluster Default is OK
2015-09-15 12:43:48,320 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-42) [440f1c40] HA
reservation status for cluster Default is OK
2015-09-15 12:48:48,366 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-70) HA reservation
status for cluster Default is OK
2015-09-15 12:53:48,412 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-12) [50221cdc] HA
reservation status for cluster Default is OK
2015-09-15 12:58:48,459 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-3) HA reservation
status for cluster Default is OK
On 29.08.2015 22:48, Christian Hailer wrote:
Hello,
last Wednesday I wanted to update my oVirt 3.5 hypervisor. It is a single Centos
7 server, so I started by suspending the VMs in order to set the oVirt engine
host to maintenance mode. During the process of suspending the VMs the server
crashed, kernel panic…
After restarting the server I installed the updates via yum and restarted the
server again. Afterwards, all the VMs could be started again. Some hours later
my monitoring system registered some unresponsive hosts, I had a look in the
oVirt interface, 3 of the VMs were in the state “not responding”, marked by a
question mark.
I tried to shut down the VMs, but oVirt wasn’t able to do so. I tried to reset
the status in the database with the sql statement
update vm_dynamic set status = 0 where vm_guid = (select vm_guid from vm_static
where vm_name = 'MYVMNAME');
but that didn’t help, either. Only rebooting the whole hypervisor helped…
afterwards everything worked again. But only for a few hours, then one of the
VMs entered the “not responding” state again… again only a reboot helped.
Yesterday it happened again:
2015-08-28 17:44:22,664 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-60) [4ef90b12] VM DC
0f3d1f06-e516-48ce-aa6f-7273c33d3491 moved from Up --> NotResponding
2015-08-28 17:44:22,692 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-60) [4ef90b12] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM DC is not responding.
Does anybody know what I can do? Where should I have a look? Hints are greatly
appreciated!
Thanks,
Christian
-- Daniel Helgenberger m box bewegtbild GmbH
P: +49/30/2408781-22 F: +49/30/2408781-10
ACKERSTR. 19 D-10115 BERLIN
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner Handeslregister: Amtsgericht Charlottenburg / HRB 112767
-- Daniel Helgenberger m box bewegtbild GmbH P: +49/30/2408781-22 F: +49/30/2408781-10 ACKERSTR. 19 D-10115 BERLIN www.m-box.de www.monkeymen.tv Geschäftsführer: Martin Retschitzegger / Michaela Göllner Handeslregister: Amtsgericht Charlottenburg / HRB 112767

Hello Christian, just a quick round-up: do you still see the issue? It stopped for me after removing the live snapshots.
On 17.09.2015 07:39, Christian Hailer wrote:
Hi,
just to get it straight: most of my VMs had one or more existing snapshots. Do you think this is a problem currently? If I understand it correctly, the BZ of Markus concerns only a short period of time while removing a snapshot, but my VMs stopped responding in the middle of the night without any interaction... I deleted all the snapshots, just in case :) My system has been running fine for nearly three days now; I'm not quite sure, but I think it helped that I changed the HDD and NIC of the Windows 2012 VMs to VirtIO devices...
Best regards, Christian
-----Original Message-----
From: Daniel Helgenberger [mailto:daniel.helgenberger@m-box.de]
Sent: Tuesday, 15 September 2015 22:24
To: Markus Stockhausen <stockhausen@collogia.de>; Christian Hailer <christian@hailer.eu>
Cc: ydary@redhat.com; users@ovirt.org
Subject: Re: AW: [ovirt-users] Some VMs in status "not responding" in oVirt interface
On 15.09.2015 21:31, Markus Stockhausen wrote:
Hi Christian,
I am thinking of a package similar to this:
qemu-debuginfo.x86_64 2:2.1.3-10.fc21
That allows gdb to show information about backtrace symbols. See comment 12 of https://bugzilla.redhat.com/show_bug.cgi?id=1262251. It makes error search much simpler, especially if qemu hangs.
Markus, thanks for the BZ. I think I do see the same issue. Actually, my VM (puppetmaster) is currently the only one with a live snapshot, and it does a lot of I/O.
Christian, maybe this BZ 1262251 is also applicable?
I'll go ahead and delete the live snapshot. If I see this issue again I will submit the trace to your BZ.
Markus
**********************************
From: Christian Hailer [christian@hailer.eu]
Sent: Tuesday, 15 September 2015 21:24
To: Markus Stockhausen; 'Daniel Helgenberger'
Cc: ydary@redhat.com; users@ovirt.org
Subject: AW: [ovirt-users] Some VMs in status "not responding" in oVirt interface
Hi Markus,
gdb is available on CentOS 7, but what do you mean by qemu-debug? I installed qemu-kvm-tools; maybe this is the CentOS equivalent?
qemu-kvm-tools.x86_64 : KVM debugging and diagnostics tools
qemu-kvm-tools-ev.x86_64 : KVM debugging and diagnostics tools
qemu-kvm-tools-rhev.x86_64 : KVM debugging and diagnostics tools
Regards, Christian
From: Markus Stockhausen [mailto:stockhausen@collogia.de]
Sent: Tuesday, 15 September 2015 20:40
To: Daniel Helgenberger <daniel.helgenberger@m-box.de>
Cc: Christian Hailer <christian@hailer.eu>; ydary@redhat.com; users@ovirt.org
Subject: Re: [ovirt-users] Some VMs in status "not responding" in oVirt interface
Do you have a chance to install qemu-debug? If yes, I would try a backtrace:
gdb -p <qemu-pid>
# bt
Markus
On 15.09.2015 at 4:15 PM, Daniel Helgenberger <daniel.helgenberger@m-box.de> wrote:
Hello,
I do not want to hijack the thread but maybe my issue is related?
It might have started with ovirt 3.5.3; but I cannot tell for sure.
For me, one VM (foreman) is affected; this is the second time in 14 days. I can confirm this as I also lose any network connection to the VM and the ability to connect to a console.
Also, the only thing which 'fixes' the issue right now is 'kill -9 <pid of the qemu-kvm process>'.
As far as I can tell the VM became unresponsive at around Sep 15 12:30:01; engine logged this at 12:34. Nothing obvious in VDSM logs (see
attached).
Below the engine.log part.
Versions:
ovirt-engine-3.5.4.2-1.el7.centos.noarch
vdsm-4.16.26-0.el7.centos
libvirt-1.2.8-16.el7_1.3
engine.log (12:00 - 13:00):
2015-09-15 12:03:47,949 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-56) [264d502a] HA
reservation status for cluster Default is OK
2015-09-15 12:08:02,708 INFO [org.ovirt.engine.core.bll.OvfDataUpdater] (DefaultQuartzScheduler_Worker-89) [2e7bf56e] Attempting to update
VMs/Templates Ovf.
2015-09-15 12:08:02,709 INFO [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-89)
[5e9f4ba6] Running command: ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected : ID:
00000002-0002-0002-0002-000000000088 Type: l
2015-09-15 12:08:02,780 INFO [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-89)
[5e9f4ba6] Lock freed to object EngineLock [exclusiveLocks= key: 00000002-0002-0002-0002-000000000088 value: OVF_UPDATE
2015-09-15 12:08:47,997 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-21) [3fc854a2] HA
reservation status for cluster Default is OK
2015-09-15 12:13:06,998 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] (org.ovirt.thread.pool-8-thread-48)
[50221cdc] START, GetFileStatsVDSCommand( storagePoolId = 00000002-0002-0002-0002-000000000088, ignoreFailoverLimit = false), log id: 1503968
2015-09-15 12:13:07,137 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] (org.ovirt.thread.pool-8-thread-48)
[50221cdc] FINISH, GetFileStatsVDSCommand, return: {pfSense-2.0-RELEASE-i386.iso={status=0, ctime=1432286887.0, size=115709952},
Fedora-15-i686-Live8
2015-09-15 12:13:07,178 INFO [org.ovirt.engine.core.bll.IsoDomainListSyncronizer] (org.ovirt.thread.pool-8-thread-48) [50221cdc] Finished
automatic refresh process for ISO file type with success, for storage domain id 84dcb2fc-fb63-442f-aa77-3e84dc7d5a72.
2015-09-15 12:13:48,043 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-87) [4fa1bb16] HA
reservation status for cluster Default is OK
2015-09-15 12:18:48,088 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-44) [6345e698] HA
reservation status for cluster Default is OK
2015-09-15 12:23:48,137 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-13) HA reservation
status for cluster Default is OK
2015-09-15 12:28:48,183 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-76) [154c91d5] HA
reservation status for cluster Default is OK
2015-09-15 12:33:48,229 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-36) [27c73ac6] HA
reservation status for cluster Default is OK
2015-09-15 12:34:49,432 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-41) [5f2a4b68] VM
foreman 8b57ff1d-2800-48ad-b267-fd8e9e2f6fb2 moved from Up --> NotResponding
2015-09-15 12:34:49,578 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-41)
[5f2a4b68] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM foreman is not responding.
2015-09-15 12:38:48,273 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-10) [7a800766] HA
reservation status for cluster Default is OK
2015-09-15 12:43:48,320 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-42) [440f1c40] HA
reservation status for cluster Default is OK
2015-09-15 12:48:48,366 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-70) HA reservation
status for cluster Default is OK
2015-09-15 12:53:48,412 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-12) [50221cdc] HA
reservation status for cluster Default is OK
2015-09-15 12:58:48,459 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-3) HA reservation
status for cluster Default is OK
On 29.08.2015 22:48, Christian Hailer wrote:
Hello,
last Wednesday I wanted to update my oVirt 3.5 hypervisor. It is a single Centos
7 server, so I started by suspending the VMs in order to set the oVirt engine
host to maintenance mode. During the process of suspending the VMs the server
crashed, kernel panic…
After restarting the server I installed the updates via yum and restarted the
server again. Afterwards, all the VMs could be started again. Some hours later
my monitoring system registered some unresponsive hosts, I had a look in the
oVirt interface, 3 of the VMs were in the state “not responding”, marked by a
question mark.
I tried to shut down the VMs, but oVirt wasn’t able to do so. I tried to reset
the status in the database with the sql statement
update vm_dynamic set status = 0 where vm_guid = (select vm_guid from vm_static
where vm_name = 'MYVMNAME');
but that didn’t help, either. Only rebooting the whole hypervisor helped…
afterwards everything worked again. But only for a few hours, then one of the
VMs entered the “not responding” state again… again only a reboot helped.
Yesterday it happened again:
2015-08-28 17:44:22,664 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-60) [4ef90b12] VM DC
0f3d1f06-e516-48ce-aa6f-7273c33d3491 moved from Up --> NotResponding
2015-08-28 17:44:22,692 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-60) [4ef90b12] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM DC is not responding.
Does anybody know what I can do? Where should I have a look? Hints are greatly
appreciated!
Thanks,
Christian
-- Daniel Helgenberger m box bewegtbild GmbH
P: +49/30/2408781-22 F: +49/30/2408781-10
ACKERSTR. 19 D-10115 BERLIN
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner Handeslregister: Amtsgericht Charlottenburg / HRB 112767
-- Daniel Helgenberger m box bewegtbild GmbH P: +49/30/2408781-22 F: +49/30/2408781-10 ACKERSTR. 19 D-10115 BERLIN www.m-box.de www.monkeymen.tv Geschäftsführer: Martin Retschitzegger / Michaela Göllner Handeslregister: Amtsgericht Charlottenburg / HRB 112767
participants (3)
- Christian Hailer
- Daniel Helgenberger
- Markus Stockhausen