
Hi,

I have an oVirt 3.1 build, with 3 hosts and a few virtual machines set up that I am using for testing. The storage is Gluster, set up as a distributed volume across the 3 hosts. I can migrate Linux guests across my 3 hosts, but I cannot migrate Windows guests: I get "Migration failed due to Error: Fatal error during migration" (event ID 65). Is there something additional that needs to be done to Windows guests for them to support live migration?

Thanks,

Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingersoll@mjritsolutions.com
http://www.mjritsolutions.com
<meta name=3D"Generator" content=3D"Microsoft Word 15 (filtered medium)"> <!--[if !mso]><style>v\:* {behavior:url(#default#VML);} o\:* {behavior:url(#default#VML);} w\:* {behavior:url(#default#VML);} .shape {behavior:url(#default#VML);} </style><![endif]--><style><!-- /* Font Definitions */ @font-face {font-family:"Cambria Math"; panose-1:2 4 5 3 5 4 6 3 2 4;} @font-face {font-family:Calibri; panose-1:2 15 5 2 2 2 4 3 2 4;} /* Style Definitions */ p.MsoNormal, li.MsoNormal, div.MsoNormal {margin:0in; margin-bottom:.0001pt; font-size:11.0pt; font-family:"Calibri","sans-serif";} a:link, span.MsoHyperlink {mso-style-priority:99; color:#0563C1; text-decoration:underline;} a:visited, span.MsoHyperlinkFollowed {mso-style-priority:99; color:#954F72; text-decoration:underline;} span.EmailStyle17 {mso-style-type:personal-compose; font-family:"Calibri","sans-serif"; color:windowtext;} .MsoChpDefault {mso-style-type:export-only; font-family:"Calibri","sans-serif";} @page WordSection1 {size:8.5in 11.0in; margin:1.0in 1.0in 1.0in 1.0in;} div.WordSection1 {page:WordSection1;} --></style><!--[if gte mso 9]><xml> <o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" /> </xml><![endif]--><!--[if gte mso 9]><xml> <o:shapelayout v:ext=3D"edit"> <o:idmap v:ext=3D"edit" data=3D"1" /> </o:shapelayout></xml><![endif]--> </head> <body lang=3D"EN-US" link=3D"#0563C1" vlink=3D"#954F72"> <div class=3D"WordSection1"> <p class=3D"MsoNormal">Hi,<o:p></o:p></p> <p class=3D"MsoNormal"> &nbs= p; I have an ovirt 3.1 build. I h= ave 3 hosts and a few virtual machines setup that I am using for testing.&n= bsp; I am using gluster storage setup as a distribution between the 3 hosts= . I can migrate linux guests across my 3 hosts, but I cannot migrate windows hosts. I get “Migration failed du= e to Error: Fatal error during migration. The event id is 65. I= s there something additional that needs to be done to windows guests for th= em to support live migration?<o:p></o:p></p> <p class=3D"MsoNormal"><o:p> </o:p></p> <p class=3D"MsoNormal">Thanks, <o:p></o:p></p> <p class=3D"MsoNormal"><o:p> </o:p></p> <p class=3D"MsoNormal"><b><span style=3D"font-size:9.0pt;font-family:"= Arial","sans-serif";color:black">Rick Ingersoll<o:p></o:p></= span></b></p> <p class=3D"MsoNormal"><b><span style=3D"font-size:9.0pt;font-family:"= Arial","sans-serif";color:#262626">IT Consultant<o:p></o:p><= /span></b></p> <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif"">(919) 234-5100 main<o:p></o:p></span></p> <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif"">(919) 234-5101 direct<o:p></o:p></span></p=
<p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif"">(919) 757-5605 mobile<o:p></o:p></span></p=
<p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif"">(919) 747-7409 fax<o:p></o:p></span></p> <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif""><a href=3D"mailto:rick.ingersoll@mjritsolu= tions.com"><span style=3D"color:blue">rick.ingersoll@mjritsolutions.com</sp= an></a><o:p></o:p></span></p> <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif""><a href=3D"http://www.mjritsolutions.com/"=
<span style=3D"color:blue">http://www.mjritsolutions.com</span></a><o:p></= o:p></span></p> <p class=3D"MsoNormal"><img border=3D"0" width=3D"200" height=3D"42" id=3D"= Picture_x0020_1" src=3D"cid:image001.jpg@01CE75E9.B5CA8020" alt=3D"mjritsol= utions_logo_signature"><o:p></o:p></p> <p class=3D"MsoNormal"><o:p> </o:p></p> </div> </body> </html>
--_000_af731bceb043497d9dd496522e1ab3bcBL2PR08MB196namprd08pro_-- --_004_af731bceb043497d9dd496522e1ab3bcBL2PR08MB196namprd08pro_ Content-Type: image/jpeg; name="image001.jpg" Content-Description: image001.jpg Content-Disposition: inline; filename="image001.jpg"; size=2146; creation-date="Mon, 01 Jul 2013 03:29:49 GMT"; modification-date="Mon, 01 Jul 2013 03:29:49 GMT" Content-ID: <image001.jpg@01CE75E9.B5CA8020> Content-Transfer-Encoding: base64 /9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAoHBwkHBgoJCAkLCwoMDxkQDw4ODx4WFxIZJCAmJSMg IyIoLTkwKCo2KyIjMkQyNjs9QEBAJjBGS0U+Sjk/QD3/2wBDAQsLCw8NDx0QEB09KSMpPT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT3/wAARCAAqAMgDASIA AhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQA AAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3 ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWm p6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEA AwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSEx BhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElK U1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3 uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD2aiii gAooooAKKKSgBaSisTxpLJD4N1aSGR45FtmKujYIOOxoYm7K5t155p+t+KNS8VX8QuooNJtLp4tx gXc+D91T/M9q4Pw5DrWszebLqmoR2aH5nFw+XP8AdHP5mvbtGtoRpFr+6T/VjkjJ+ue5qE+bUzjJ 1NdjRpaKKs1CiiigAooooAKKKKACiiigAooooAKKKKACiiigBKK4fXrvU7TxpLc6czyraWaTS2u4 4lj3EMAOme/4VS8V+I59c0mWbR5ZItNtvLMswJVpZGIwg/3c5PvQB6LRVOy1S1vrq6treXfNaEJM NpG0kdM9+navM0u5n0LTfOmvJEfVpUkWGRvMdcD5Rzn6UAesV5z44+Iuo+F/ERsLS1tJIvJSTdKG zk59CPSq7yQ3Oo21v4fi1xL+G5QzLcSthI887gWPtVm/giufjOI54o5U/swna6hhnnsamd7aGdS9 tDnf+F0az/z46d+T/wDxVXdP8cap40zpN7bWUNjdkQSvFu34PXbk9feqfw91W7v9XsNPurKwfTwG i3NapvYqhI+bqTxyavaVdTahq+mSTraoEuQwENqkZ6kYyOcVEbvqZQ5pbs7eHwdaW8KQwzSJGg2q qqMAVuWtuLW1jgViwjXaCepritfs76TxPdSS2WqXtqY4xCLK68sIcfNkZ9apQ6o+hJqr28OqWl4l mJEt7+USpjeBvHOc81qdJ6RRXK3q+ILPRJtQ/tuF/KgM2z7EozgZxnNSaBrl5qWtyQXDJ5IsILgK q4w7jJ59KAOmorD1HU7m28VaRYxsot7lJTKCuSSoyOe1M1TUb5PFFjptrOsUdzbSsWMYbDDof/rU Ab9FcgkniB/Ekmk/2zCNlqLjzfsa85bGMZqDVvEeq6dNq0CTxO9qtqkbmIfecgM2P6UAdtRXJ64f EGi6Nc6h/bMM32dd3lmzVd3IGM7veq6+LdQtNVvpLq28/SoGjV3iX57fcgbJHdefwoA7SiuLvPE2 oTNqzaTKtxDCbbyGii8zaj/fbA5bHpSWWv3v9qWcc2qStHNMIytxpbQhs9g2etAHaUtV74kWFwQS CImwR9K4PSNd1ez0Tw6lggvHuEuHlikbLShG7MehxnFAHolFcm/jBb260z+zpAFlaZbmCRcOhWMs FI6jkVHps3iLVPDkeqprFtE0kTSCJrVQoIzwWzwOOtAHYUVXsHmewt2ujGZzGpkMZypbHOPaigCJ dKt11l9TG/7Q8IgIz8u0HPT1qC98O6ffaQ2mmLybVn3lYfl5zu/nWpRQBz934M0+6v57xZ763lnY NIILgoGPrinweD9MtoLKKLzwLO4NzGTJks57t61u0UAZWo+HrPUb6C+YywXkH3Z4H2MR6H1Hsa4b xPHqlr8SG1HT42CpZLEZGhLg5zkADvXp1FJq5Mo3PH7C3vdLltZbKKVJbX7ha3LA5Ug5H41Z0Oyn g1OwVoZiEmXLGMjvXq9FMdkjAvvB9hfahNe+fewTT4MnkTlAxAwDj6UyLwRpaJdLI93ObmLyXeac swXOcA9uRXRUUDOefwbbywNDJqervEy7SjXhKkemPSnyeELI3QuILm/tXEKQ/wCjzlMoowBW9RQB gyeErWYQNJe6k08DM0c5uT5ihgARu9OKlsvDNtZ6lHftc3tzcRoURrmcvtB64rZooApLpVuusvqg L/aHhEB+b5doOenrmqd94W0/UJb6Sfzt16IxJtfGNn3SvpWzRQBz0/g22uoWhudS1eaF+GjkvGKs PQitSy0m2sJ7qWENm6KmQMcj5V2jA+gq7RQBgf8ACGaUn2j7Os9r58iSnyJSmxlzgrjp1PFOHhS2 M8Mtxe6ldeQ4kRJ7ksoYdDj8a3aKAMfQdFbTdEayuJHcyM7NlyxUMfuhj1wO9Fj4YsdPOneQZv8A iXrIsO588P8Aez61sUUAZVx4b0651eLU3h23UYZSyHG8EEfMO/BNUF8EWSWhtUv9VS2KlfJW7Oza eox6V0lFAGXaeHrOxv4buAzK8NstqiGQlAg6cevvRWpRQB//2Q== --_004_af731bceb043497d9dd496522e1ab3bcBL2PR08MB196namprd08pro_--

I'm really struggling with this problem. I have the virtio 1.59 drivers running on the Windows guests. What else would I need to set to get migration of Windows guests working?

Rick Ingersoll
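For reference, which device models such a guest is actually running on can be checked from the source host with something like the following (a sketch; "MyWindowsVM" is a placeholder for the real VM name):

    # Read-only dump of the running domain's definition, filtered to device models.
    virsh -r dumpxml MyWindowsVM | grep -iE 'virtio|<model|<driver'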
<meta name=3D"Generator" content=3D"Microsoft Word 15 (filtered medium)"> <!--[if !mso]><style>v\:* {behavior:url(#default#VML);} o\:* {behavior:url(#default#VML);} w\:* {behavior:url(#default#VML);} .shape {behavior:url(#default#VML);} </style><![endif]--><style><!-- /* Font Definitions */ @font-face {font-family:"Cambria Math"; panose-1:2 4 5 3 5 4 6 3 2 4;} @font-face {font-family:Calibri; panose-1:2 15 5 2 2 2 4 3 2 4;} /* Style Definitions */ p.MsoNormal, li.MsoNormal, div.MsoNormal {margin:0in; margin-bottom:.0001pt; font-size:11.0pt; font-family:"Calibri","sans-serif";} a:link, span.MsoHyperlink {mso-style-priority:99; color:#0563C1; text-decoration:underline;} a:visited, span.MsoHyperlinkFollowed {mso-style-priority:99; color:#954F72; text-decoration:underline;} span.EmailStyle17 {mso-style-type:personal; font-family:"Calibri","sans-serif"; color:windowtext;} span.EmailStyle18 {mso-style-type:personal-reply; font-family:"Calibri","sans-serif"; color:#1F497D;} .MsoChpDefault {mso-style-type:export-only; font-size:10.0pt;} @page WordSection1 {size:8.5in 11.0in; margin:1.0in 1.0in 1.0in 1.0in;} div.WordSection1 {page:WordSection1;} --></style><!--[if gte mso 9]><xml> <o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" /> </xml><![endif]--><!--[if gte mso 9]><xml> <o:shapelayout v:ext=3D"edit"> <o:idmap v:ext=3D"edit" data=3D"1" /> </o:shapelayout></xml><![endif]--> </head> <body lang=3D"EN-US" link=3D"#0563C1" vlink=3D"#954F72"> <div class=3D"WordSection1"> <p class=3D"MsoNormal"><a name=3D"_MailEndCompose"><span style=3D"color:#1F= 497D">I’m really struggling with this problem. I have the virti= o 1.59 drivers running on the Windows guests. What else would I need = to set to get migration of Window guests working?<o:p></o:p></span></a></p> <p class=3D"MsoNormal"><span style=3D"color:#1F497D"><o:p> </o:p></spa= n></p> <p class=3D"MsoNormal"><span style=3D"color:#1F497D"><o:p> </o:p></spa= n></p> <div> <p class=3D"MsoNormal"><b><span style=3D"font-size:9.0pt;font-family:"= Arial","sans-serif";color:black">Rick Ingersoll<o:p></o:p></= span></b></p> <p class=3D"MsoNormal"><b><span style=3D"font-size:9.0pt;font-family:"= Arial","sans-serif";color:#262626">IT Consultant<o:p></o:p><= /span></b></p> <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif";color:#1F497D">(919) 234-5100 main<o:p></o:= p></span></p> <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif";color:#1F497D">(919) 234-5101 direct<o:p></= o:p></span></p> <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif";color:#1F497D">(919) 757-5605 mobile<o:p></= o:p></span></p> <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif";color:#1F497D">(919) 747-7409 fax<o:p></o:p=
</span></p> <p class=3D"MsoNormal"><a href=3D"mailto:rick.ingersoll@mjritsolutions.com"= <span style=3D"font-size:9.0pt;font-family:"Arial","sans-se= rif";color:blue">rick.ingersoll@mjritsolutions.com</span></a><span sty= le=3D"font-size:9.0pt;font-family:"Arial","sans-serif";=
color:#1F497D"><o:p></o:p></span></p> <p class=3D"MsoNormal"><a href=3D"http://www.mjritsolutions.com/"><span sty= le=3D"font-size:9.0pt;font-family:"Arial","sans-serif";= color:blue">http://www.mjritsolutions.com</span></a><span style=3D"font-siz= e:9.0pt;font-family:"Arial","sans-serif";color:#1F497D"= ><o:p></o:p></span></p> <p class=3D"MsoNormal"><span style=3D"color:#1F497D"><img border=3D"0" widt= h=3D"200" height=3D"42" id=3D"_x0000_i1026" src=3D"cid:image002.jpg@01CE766= B.65A36B70" alt=3D"mjritsolutions_logo_signature"><o:p></o:p></span></p> </div> <p class=3D"MsoNormal"><span style=3D"color:#1F497D"><o:p> </o:p></spa= n></p> <div> <div style=3D"border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in = 0in 0in"> <p class=3D"MsoNormal"><b>From:</b> users-bounces@ovirt.org [mailto:users-b= ounces@ovirt.org] <b>On Behalf Of </b>Rick Ingersoll<br> <b>Sent:</b> Sunday, June 30, 2013 11:30 PM<br> <b>To:</b> users@ovirt.org<br> <b>Subject:</b> [Users] Migration of Windows<o:p></o:p></p> </div> </div> <p class=3D"MsoNormal"><o:p> </o:p></p> <p class=3D"MsoNormal">Hi,<o:p></o:p></p> <p class=3D"MsoNormal"> &nbs= p; I have an ovirt 3.1 build. I h= ave 3 hosts and a few virtual machines setup that I am using for testing.&n= bsp; I am using gluster storage setup as a distribution between the 3 hosts= . I can migrate linux guests across my 3 hosts, but I cannot migrate windows hosts. I get “Migration failed du= e to Error: Fatal error during migration. The event id is 65. I= s there something additional that needs to be done to windows guests for th= em to support live migration?<o:p></o:p></p> <p class=3D"MsoNormal"><o:p> </o:p></p> <p class=3D"MsoNormal">Thanks, <o:p></o:p></p> <p class=3D"MsoNormal"><o:p> </o:p></p> <p class=3D"MsoNormal"><b><span style=3D"font-size:9.0pt;font-family:"= Arial","sans-serif";color:black">Rick Ingersoll<o:p></o:p></= span></b></p> <p class=3D"MsoNormal"><b><span style=3D"font-size:9.0pt;font-family:"= Arial","sans-serif";color:#262626">IT Consultant<o:p></o:p><= /span></b></p> <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif"">(919) 234-5100 main<o:p></o:p></span></p> <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif"">(919) 234-5101 direct<o:p></o:p></span></p= > <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif"">(919) 757-5605 mobile<o:p></o:p></span></p= > <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif"">(919) 747-7409 fax<o:p></o:p></span></p> <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif""><a href=3D"mailto:rick.ingersoll@mjritsolu= tions.com"><span style=3D"color:blue">rick.ingersoll@mjritsolutions.com</sp= an></a><o:p></o:p></span></p> <p class=3D"MsoNormal"><span style=3D"font-size:9.0pt;font-family:"Ari= al","sans-serif""><a href=3D"http://www.mjritsolutions.com/"= ><span style=3D"color:blue">http://www.mjritsolutions.com</span></a><o:p></= o:p></span></p> <p class=3D"MsoNormal"><img border=3D"0" width=3D"200" height=3D"42" id=3D"= Picture_x0020_1" src=3D"cid:image003.jpg@01CE766B.65A36B70" alt=3D"mjritsol= utions_logo_signature"><o:p></o:p></p> <p class=3D"MsoNormal"><o:p> </o:p></p> </div> </body> </html>
--_000_b06c83da7934412bab15d0635666b1aeBL2PR08MB196namprd08pro_-- --_005_b06c83da7934412bab15d0635666b1aeBL2PR08MB196namprd08pro_ Content-Type: image/jpeg; name="image002.jpg" Content-Description: image002.jpg Content-Disposition: inline; filename="image002.jpg"; size=2146; creation-date="Mon, 01 Jul 2013 18:58:08 GMT"; modification-date="Mon, 01 Jul 2013 18:58:08 GMT" Content-ID: <image002.jpg@01CE766B.65A36B70> Content-Transfer-Encoding: base64 /9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAoHBwkHBgoJCAkLCwoMDxkQDw4ODx4WFxIZJCAmJSMg IyIoLTkwKCo2KyIjMkQyNjs9QEBAJjBGS0U+Sjk/QD3/2wBDAQsLCw8NDx0QEB09KSMpPT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT3/wAARCAAqAMgDASIA AhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQA AAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3 ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWm p6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEA AwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSEx BhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElK U1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3 uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD2aiii gAooooAKKKSgBaSisTxpLJD4N1aSGR45FtmKujYIOOxoYm7K5t155p+t+KNS8VX8QuooNJtLp4tx gXc+D91T/M9q4Pw5DrWszebLqmoR2aH5nFw+XP8AdHP5mvbtGtoRpFr+6T/VjkjJ+ue5qE+bUzjJ 1NdjRpaKKs1CiiigAooooAKKKKACiiigAooooAKKKKACiiigBKK4fXrvU7TxpLc6czyraWaTS2u4 4lj3EMAOme/4VS8V+I59c0mWbR5ZItNtvLMswJVpZGIwg/3c5PvQB6LRVOy1S1vrq6treXfNaEJM NpG0kdM9+navM0u5n0LTfOmvJEfVpUkWGRvMdcD5Rzn6UAesV5z44+Iuo+F/ERsLS1tJIvJSTdKG zk59CPSq7yQ3Oo21v4fi1xL+G5QzLcSthI887gWPtVm/giufjOI54o5U/swna6hhnnsamd7aGdS9 tDnf+F0az/z46d+T/wDxVXdP8cap40zpN7bWUNjdkQSvFu34PXbk9feqfw91W7v9XsNPurKwfTwG i3NapvYqhI+bqTxyavaVdTahq+mSTraoEuQwENqkZ6kYyOcVEbvqZQ5pbs7eHwdaW8KQwzSJGg2q qqMAVuWtuLW1jgViwjXaCepritfs76TxPdSS2WqXtqY4xCLK68sIcfNkZ9apQ6o+hJqr28OqWl4l mJEt7+USpjeBvHOc81qdJ6RRXK3q+ILPRJtQ/tuF/KgM2z7EozgZxnNSaBrl5qWtyQXDJ5IsILgK q4w7jJ59KAOmorD1HU7m28VaRYxsot7lJTKCuSSoyOe1M1TUb5PFFjptrOsUdzbSsWMYbDDof/rU Ab9FcgkniB/Ekmk/2zCNlqLjzfsa85bGMZqDVvEeq6dNq0CTxO9qtqkbmIfecgM2P6UAdtRXJ64f EGi6Nc6h/bMM32dd3lmzVd3IGM7veq6+LdQtNVvpLq28/SoGjV3iX57fcgbJHdefwoA7SiuLvPE2 oTNqzaTKtxDCbbyGii8zaj/fbA5bHpSWWv3v9qWcc2qStHNMIytxpbQhs9g2etAHaUtV74kWFwQS CImwR9K4PSNd1ez0Tw6lggvHuEuHlikbLShG7MehxnFAHolFcm/jBb260z+zpAFlaZbmCRcOhWMs FI6jkVHps3iLVPDkeqprFtE0kTSCJrVQoIzwWzwOOtAHYUVXsHmewt2ujGZzGpkMZypbHOPaigCJ dKt11l9TG/7Q8IgIz8u0HPT1qC98O6ffaQ2mmLybVn3lYfl5zu/nWpRQBz934M0+6v57xZ763lnY NIILgoGPrinweD9MtoLKKLzwLO4NzGTJks57t61u0UAZWo+HrPUb6C+YywXkH3Z4H2MR6H1Hsa4b xPHqlr8SG1HT42CpZLEZGhLg5zkADvXp1FJq5Mo3PH7C3vdLltZbKKVJbX7ha3LA5Ug5H41Z0Oyn g1OwVoZiEmXLGMjvXq9FMdkjAvvB9hfahNe+fewTT4MnkTlAxAwDj6UyLwRpaJdLI93ObmLyXeac swXOcA9uRXRUUDOefwbbywNDJqervEy7SjXhKkemPSnyeELI3QuILm/tXEKQ/wCjzlMoowBW9RQB gyeErWYQNJe6k08DM0c5uT5ihgARu9OKlsvDNtZ6lHftc3tzcRoURrmcvtB64rZooApLpVuusvqg L/aHhEB+b5doOenrmqd94W0/UJb6Sfzt16IxJtfGNn3SvpWzRQBz0/g22uoWhudS1eaF+GjkvGKs PQitSy0m2sJ7qWENm6KmQMcj5V2jA+gq7RQBgf8ACGaUn2j7Os9r58iSnyJSmxlzgrjp1PFOHhS2 M8Mtxe6ldeQ4kRJ7ksoYdDj8a3aKAMfQdFbTdEayuJHcyM7NlyxUMfuhj1wO9Fj4YsdPOneQZv8A iXrIsO588P8Aez61sUUAZVx4b0651eLU3h23UYZSyHG8EEfMO/BNUF8EWSWhtUv9VS2KlfJW7Oza eox6V0lFAGXaeHrOxv4buAzK8NstqiGQlAg6cevvRWpRQB//2Q== --_005_b06c83da7934412bab15d0635666b1aeBL2PR08MB196namprd08pro_ Content-Type: image/jpeg; name="image003.jpg" Content-Description: image003.jpg Content-Disposition: inline; filename="image003.jpg"; 
size=2146; creation-date="Mon, 01 Jul 2013 18:58:09 GMT"; modification-date="Mon, 01 Jul 2013 18:58:09 GMT" Content-ID: <image003.jpg@01CE766B.65A36B70> Content-Transfer-Encoding: base64 /9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAoHBwkHBgoJCAkLCwoMDxkQDw4ODx4WFxIZJCAmJSMg IyIoLTkwKCo2KyIjMkQyNjs9QEBAJjBGS0U+Sjk/QD3/2wBDAQsLCw8NDx0QEB09KSMpPT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT3/wAARCAAqAMgDASIA AhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQA AAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3 ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWm p6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEA AwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSEx BhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElK U1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3 uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD2aiii gAooooAKKKSgBaSisTxpLJD4N1aSGR45FtmKujYIOOxoYm7K5t155p+t+KNS8VX8QuooNJtLp4tx gXc+D91T/M9q4Pw5DrWszebLqmoR2aH5nFw+XP8AdHP5mvbtGtoRpFr+6T/VjkjJ+ue5qE+bUzjJ 1NdjRpaKKs1CiiigAooooAKKKKACiiigAooooAKKKKACiiigBKK4fXrvU7TxpLc6czyraWaTS2u4 4lj3EMAOme/4VS8V+I59c0mWbR5ZItNtvLMswJVpZGIwg/3c5PvQB6LRVOy1S1vrq6treXfNaEJM NpG0kdM9+navM0u5n0LTfOmvJEfVpUkWGRvMdcD5Rzn6UAesV5z44+Iuo+F/ERsLS1tJIvJSTdKG zk59CPSq7yQ3Oo21v4fi1xL+G5QzLcSthI887gWPtVm/giufjOI54o5U/swna6hhnnsamd7aGdS9 tDnf+F0az/z46d+T/wDxVXdP8cap40zpN7bWUNjdkQSvFu34PXbk9feqfw91W7v9XsNPurKwfTwG i3NapvYqhI+bqTxyavaVdTahq+mSTraoEuQwENqkZ6kYyOcVEbvqZQ5pbs7eHwdaW8KQwzSJGg2q qqMAVuWtuLW1jgViwjXaCepritfs76TxPdSS2WqXtqY4xCLK68sIcfNkZ9apQ6o+hJqr28OqWl4l mJEt7+USpjeBvHOc81qdJ6RRXK3q+ILPRJtQ/tuF/KgM2z7EozgZxnNSaBrl5qWtyQXDJ5IsILgK q4w7jJ59KAOmorD1HU7m28VaRYxsot7lJTKCuSSoyOe1M1TUb5PFFjptrOsUdzbSsWMYbDDof/rU Ab9FcgkniB/Ekmk/2zCNlqLjzfsa85bGMZqDVvEeq6dNq0CTxO9qtqkbmIfecgM2P6UAdtRXJ64f EGi6Nc6h/bMM32dd3lmzVd3IGM7veq6+LdQtNVvpLq28/SoGjV3iX57fcgbJHdefwoA7SiuLvPE2 oTNqzaTKtxDCbbyGii8zaj/fbA5bHpSWWv3v9qWcc2qStHNMIytxpbQhs9g2etAHaUtV74kWFwQS CImwR9K4PSNd1ez0Tw6lggvHuEuHlikbLShG7MehxnFAHolFcm/jBb260z+zpAFlaZbmCRcOhWMs FI6jkVHps3iLVPDkeqprFtE0kTSCJrVQoIzwWzwOOtAHYUVXsHmewt2ujGZzGpkMZypbHOPaigCJ dKt11l9TG/7Q8IgIz8u0HPT1qC98O6ffaQ2mmLybVn3lYfl5zu/nWpRQBz934M0+6v57xZ763lnY NIILgoGPrinweD9MtoLKKLzwLO4NzGTJks57t61u0UAZWo+HrPUb6C+YywXkH3Z4H2MR6H1Hsa4b xPHqlr8SG1HT42CpZLEZGhLg5zkADvXp1FJq5Mo3PH7C3vdLltZbKKVJbX7ha3LA5Ug5H41Z0Oyn g1OwVoZiEmXLGMjvXq9FMdkjAvvB9hfahNe+fewTT4MnkTlAxAwDj6UyLwRpaJdLI93ObmLyXeac swXOcA9uRXRUUDOefwbbywNDJqervEy7SjXhKkemPSnyeELI3QuILm/tXEKQ/wCjzlMoowBW9RQB gyeErWYQNJe6k08DM0c5uT5ihgARu9OKlsvDNtZ6lHftc3tzcRoURrmcvtB64rZooApLpVuusvqg L/aHhEB+b5doOenrmqd94W0/UJb6Sfzt16IxJtfGNn3SvpWzRQBz0/g22uoWhudS1eaF+GjkvGKs PQitSy0m2sJ7qWENm6KmQMcj5V2jA+gq7RQBgf8ACGaUn2j7Os9r58iSnyJSmxlzgrjp1PFOHhS2 M8Mtxe6ldeQ4kRJ7ksoYdDj8a3aKAMfQdFbTdEayuJHcyM7NlyxUMfuhj1wO9Fj4YsdPOneQZv8A iXrIsO588P8Aez61sUUAZVx4b0651eLU3h23UYZSyHG8EEfMO/BNUF8EWSWhtUv9VS2KlfJW7Oza eox6V0lFAGXaeHrOxv4buAzK8NstqiGQlAg6cevvRWpRQB//2Q== --_005_b06c83da7934412bab15d0635666b1aeBL2PR08MB196namprd08pro_--

Hey Rick,

Can you provide /var/log/ovirt-engine/engine.log?

Tim Hildred, RHCE, RHCVA
Content Author II - Engineering Content Services, Red Hat, Inc.
Brisbane, Australia
Email: thildred@redhat.com
Internal: 8588287
Mobile: +61 4 666 25242
IRC: thildred
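If the whole file is too large to attach, something like this on the engine host would capture just the migration-related entries (a sketch assuming the default log location):

    # Pull the most recent migration-related engine log entries.
    grep -i migrat /var/log/ovirt-engine/engine.log | tail -n 200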
From: "Rick Ingersoll" <rick.ingersoll@mjritsolutions.com> To: "Rick Ingersoll" <rick.ingersoll@mjritsolutions.com>, users@ovirt.org Sent: Tuesday, July 2, 2013 4:58:09 AM Subject: Re: [Users] Migration of Windows
I’m really struggling with this problem. I have the virtio 1.59 drivers running on the Windows guests. What else would I need to set to get migration of Window guests working?
Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingersoll@mjritsolutions.com
From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of Rick Ingersoll Sent: Sunday, June 30, 2013 11:30 PM To: users@ovirt.org Subject: [Users] Migration of Windows
Hi,
I have an ovirt 3.1 build. I have 3 hosts and a few virtual machines setup that I am using for testing. I am using gluster storage setup as a distribution between the 3 hosts. I can migrate linux guests across my 3 hosts, but I cannot migrate windows hosts. I get “Migration failed due to Error: Fatal error during migration. The event id is 65. Is there something additional that needs to be done to windows guests for them to support live migration?
Thanks,
Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingersoll@mjritsolutions.com
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Here's the log output from running the migration:

2013-07-02 12:08:27,352 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (pool-3-thread-46) [1a2280ef] Running command: MigrateVmCommand internal: false. Entities affected : ID: 25160f61-26f5-40ee-b245-e42c01048586 Type: VM
2013-07-02 12:08:27,553 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-3-thread-46) [1a2280ef] START, MigrateVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586, srcHost=10.28.0.14, dstVdsId=0a25070a-e136-11e2-9b8c-00145e6b5bf5, dstHost=10.28.0.13:54321, migrationMethod=ONLINE), log id: 3cb2f3e1
2013-07-02 12:08:27,580 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-3-thread-46) [1a2280ef] VdsBroker::migrate::Entered (vm_guid=25160f61-26f5-40ee-b245-e42c01048586, srcHost=10.28.0.14, dstHost=10.28.0.13:54321, method=online
2013-07-02 12:08:27,583 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-3-thread-46) [1a2280ef] START, MigrateBrokerVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586, srcHost=10.28.0.14, dstVdsId=0a25070a-e136-11e2-9b8c-00145e6b5bf5, dstHost=10.28.0.13:54321, migrationMethod=ONLINE), log id: 6849a01
2013-07-02 12:08:27,592 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-3-thread-46) [1a2280ef] FINISH, MigrateBrokerVDSCommand, log id: 6849a01
2013-07-02 12:08:27,653 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-3-thread-46) [1a2280ef] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3cb2f3e1
2013-07-02 12:08:28,296 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-67) [6d5bc1a3] vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt2 ignoring it in the refresh until migration is done
2013-07-02 12:08:31,296 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-4) vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt2 ignoring it in the refresh until migration is done
2013-07-02 12:08:33,341 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-49) vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt2 ignoring it in the refresh until migration is done
2013-07-02 12:08:35,386 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-42) vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt2 ignoring it in the refresh until migration is done
2013-07-02 12:08:37,432 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-56) [36e2ebd7] vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt2 ignoring it in the refresh until migration is done
2013-07-02 12:08:40,234 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-69) VM MJR-DC2 25160f61-26f5-40ee-b245-e42c01048586 moved from MigratingFrom --> Up
2013-07-02 12:08:40,235 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-69) adding VM 25160f61-26f5-40ee-b245-e42c01048586 to re-run list
2013-07-02 12:08:40,852 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-69) Rerun vm 25160f61-26f5-40ee-b245-e42c01048586. Called from vds ovirt3
2013-07-02 12:08:40,907 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-46) START, MigrateStatusVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586), log id: 71f2dbdb
2013-07-02 12:08:40,913 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Failed in MigrateStatusVDS method
2013-07-02 12:08:40,914 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Error code migrateErr and error message VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration
2013-07-02 12:08:40,915 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Command org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc mStatus Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc mCode 12 mMessage Fatal error during migration
2013-07-02 12:08:40,918 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) HostName = ovirt3
2013-07-02 12:08:40,918 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-46) Command MigrateStatusVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration
2013-07-02 12:08:40,920 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-46) FINISH, MigrateStatusVDSCommand, log id: 71f2dbdb
2013-07-02 12:08:41,552 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (pool-3-thread-46) Running command: MigrateVmCommand internal: false. Entities affected : ID: 25160f61-26f5-40ee-b245-e42c01048586 Type: VM
2013-07-02 12:08:41,688 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-3-thread-46) START, MigrateVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586, srcHost=10.28.0.14, dstVdsId=02ee1572-e135-11e2-b77e-00145e6b5bf5, dstHost=10.28.0.12:54321, migrationMethod=ONLINE), log id: 20d18034
2013-07-02 12:08:41,714 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-3-thread-46) VdsBroker::migrate::Entered (vm_guid=25160f61-26f5-40ee-b245-e42c01048586, srcHost=10.28.0.14, dstHost=10.28.0.12:54321, method=online
2013-07-02 12:08:41,716 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-3-thread-46) START, MigrateBrokerVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586, srcHost=10.28.0.14, dstVdsId=02ee1572-e135-11e2-b77e-00145e6b5bf5, dstHost=10.28.0.12:54321, migrationMethod=ONLINE), log id: 207042b1
2013-07-02 12:08:41,723 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-3-thread-46) FINISH, MigrateBrokerVDSCommand, log id: 207042b1
2013-07-02 12:08:41,753 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-3-thread-46) FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 20d18034
2013-07-02 12:08:43,530 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-52) vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt1 ignoring it in the refresh until migration is done
2013-07-02 12:08:45,595 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-31) vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt1 ignoring it in the refresh until migration is done
2013-07-02 12:08:47,641 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-89) [1fd2878] vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt1 ignoring it in the refresh until migration is done
2013-07-02 12:08:49,723 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-77) [1280cd02] vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt1 ignoring it in the refresh until migration is done
2013-07-02 12:08:52,398 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-83) vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt1 ignoring it in the refresh until migration is done
2013-07-02 12:08:54,298 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-76) VM MJR-DC2 25160f61-26f5-40ee-b245-e42c01048586 moved from MigratingFrom --> Up
2013-07-02 12:08:54,299 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-76) adding VM 25160f61-26f5-40ee-b245-e42c01048586 to re-run list
2013-07-02 12:08:54,352 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-76) Rerun vm 25160f61-26f5-40ee-b245-e42c01048586. Called from vds ovirt3
2013-07-02 12:08:54,480 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-46) START, MigrateStatusVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586), log id: 595d458f
2013-07-02 12:08:54,485 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Failed in MigrateStatusVDS method
2013-07-02 12:08:54,486 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Error code migrateErr and error message VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration
2013-07-02 12:08:54,488 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Command org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc mStatus Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc mCode 12 mMessage Fatal error during migration
2013-07-02 12:08:54,490 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) HostName = ovirt3
2013-07-02 12:08:54,491 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-46) Command MigrateStatusVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration
2013-07-02 12:08:54,492 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-46) FINISH, MigrateStatusVDSCommand, log id: 595d458f
2013-07-02 12:08:55,141 INFO [org.ovirt.engine.core.bll.VdsSelector] (pool-3-thread-46) VDS ovirt1 02ee1572-e135-11e2-b77e-00145e6b5bf5 have failed running this VM in the current selection cycle VDS ovirt2 0a25070a-e136-11e2-9b8c-00145e6b5bf5 have failed running this VM in the current selection cycle
2013-07-02 12:08:55,142 WARN [org.ovirt.engine.core.bll.MigrateVmCommand] (pool-3-thread-46) CanDoAction of action MigrateVm failed. Reasons:ACTION_TYPE_FAILED_VDS_VM_CLUSTER,VAR__ACTION__MIGRATE,VAR__TYPE__VM

Rick Ingersoll
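Note that the engine log only records the generic "Fatal error during migration" (mCode 12); the underlying cause is normally logged on the source host. A rough way to pull it out there (log paths assume a default vdsm/libvirt install; MJR-DC2 is the VM name taken from the log above):

    # On the source host (10.28.0.14):
    tail -n 50 /var/log/libvirt/qemu/MJR-DC2.log          # qemu/libvirt's view of the failed migration
    grep -iE 'migrat|error' /var/log/vdsm/vdsm.log | tail -n 100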
From: "Rick Ingersoll" <rick.ingersoll@mjritsolutions.com> To: "Rick Ingersoll" <rick.ingersoll@mjritsolutions.com>, users@ovirt.org Sent: Tuesday, July 2, 2013 4:58:09 AM Subject: Re: [Users] Migration of Windows
I’m really struggling with this problem. I have the virtio 1.59 drivers running on the Windows guests. What else would I need to set to get migration of Window guests working?
Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingersoll@mjritsolutions.com
From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of Rick Ingersoll Sent: Sunday, June 30, 2013 11:30 PM To: users@ovirt.org Subject: [Users] Migration of Windows
Hi,
I have an ovirt 3.1 build. I have 3 hosts and a few virtual machines setup that I am using for testing. I am using gluster storage setup as a distribution between the 3 hosts. I can migrate linux guests across my 3 hosts, but I cannot migrate windows hosts. I get “Migration failed due to Error: Fatal error during migration. The event id is 65. Is there something additional that needs to be done to windows guests for them to support live migration?
Thanks,
Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingersoll@mjritsolutions.com
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Hi all,

I tried to make some changes to /etc/libvirt/libvirtd.conf on ovirt-node (the latest one). However, after a reboot the old configuration keeps coming back.

I tried "persist /etc/libvirt/libvirtd.conf"; the file is already persisted. OK, so I unpersisted the file, verified the change, and persisted the file once more. Still, after a reboot the old file keeps coming back.

How can I make a persistent change to /etc/libvirt/libvirtd.conf on ovirt-node (ovirt-node-iso-2.6.1-20120228.fc18.iso)?

Thanks!

Winfried

Hey Winfried,

A quick guess is that vdsm might be overwriting your config when it gets started. But there is currently no way to disable a service on ovirt-node.

Greetings,
Fabian
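One way to test that guess (a rough sketch; the service name assumes the Fedora 18 based node image):

    # Hash the file, restart vdsm, and hash it again.
    md5sum /etc/libvirt/libvirtd.conf
    systemctl restart vdsmd
    md5sum /etc/libvirt/libvirtd.conf

If the checksum changes after the restart, vdsm is rewriting the file rather than the persistence layer losing it.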

Are there any other logs I can pull that might point to the issue?

Rick Ingersoll

Yes, please attach the relevant vdsm log from the source host (10.28.0.14).
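Something like this on 10.28.0.14 would capture the window around the failed attempts seen in engine.log (a sketch; vdsm logs to /var/log/vdsm/vdsm.log by default):

    # Grab every vdsm log line from the minute the migrations failed in.
    # (Continuation lines of multi-line tracebacks will be dropped.)
    grep '2013-07-02 12:08' /var/log/vdsm/vdsm.log > vdsm-migration.log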
25160f61-26f5-40ee-b245-e42c01048586 moved from MigratingFrom --> Up 2013-07-02 12:08:54,299 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-76) adding VM 25160f61-26f5-40ee-b245-e42c01048586 to re-run list 2013-07-02 12:08:54,352 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-76) Rerun vm 25160f61-26f5-40ee-b245-e42c01048586. Called from vds ovirt3 2013-07-02 12:08:54,480 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-46) START, MigrateStatusVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586), log id: 595d458f 2013-07-02 12:08:54,485 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Failed in MigrateStatusVDS method 2013-07-02 12:08:54,486 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Error code migrateErr and error message VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration 2013-07-02 12:08:54,488 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Command org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc mStatus Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc mCode 12 mMessage Fatal error during migration
2013-07-02 12:08:54,490 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) HostName = ovirt3 2013-07-02 12:08:54,491 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-46) Command MigrateStatusVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration 2013-07-02 12:08:54,492 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-46) FINISH, MigrateStatusVDSCommand, log id: 595d458f 2013-07-02 12:08:55,141 INFO [org.ovirt.engine.core.bll.VdsSelector] (pool-3-thread-46) VDS ovirt1 02ee1572-e135-11e2-b77e-00145e6b5bf5 have failed running this VM in the current selection cycle VDS ovirt2 0a25070a-e136-11e2-9b8c-00145e6b5bf5 have failed running this VM in the current selection cycle 2013-07-02 12:08:55,142 WARN [org.ovirt.engine.core.bll.MigrateVmCommand] (pool-3-thread-46) CanDoAction of action MigrateVm failed. Reasons:ACTION_TYPE_FAILED_VDS_VM_CLUSTER,VAR__ACTION__MIGRATE,VAR__TYPE__VM
Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingersoll@mjritsolutions.com
http://www.mjritsolutions.com
-----Original Message-----
From: Tim Hildred [mailto:thildred@redhat.com]
Sent: Tuesday, July 02, 2013 9:52 AM
To: Rick Ingersoll
Cc: users@ovirt.org
Subject: Re: [Users] Migration of Windows
Hey Rick,
Can you provide /var/log/ovirt-engine/engine.log?
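If the full file is too big to attach, a trimmed slice is usually enough. A minimal sketch of one way to grab it, assuming the default engine log path (the excerpt file name is just illustrative):

  # keep only migration-related engine entries, most recent last
  grep -i 'migrat' /var/log/ovirt-engine/engine.log | tail -n 200 > engine-migration-excerpt.log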
Tim Hildred, RHCE, RHCVA
Content Author II - Engineering Content Services, Red Hat, Inc.
Brisbane, Australia
Email: thildred@redhat.com
Internal: 8588287
Mobile: +61 4 666 25242
IRC: thildred
----- Original Message -----
From: "Rick Ingersoll" <rick.ingersoll@mjritsolutions.com> To: "Rick Ingersoll" <rick.ingersoll@mjritsolutions.com>, users@ovirt.org Sent: Tuesday, July 2, 2013 4:58:09 AM Subject: Re: [Users] Migration of Windows
I’m really struggling with this problem. I have the virtio 1.59 drivers running on the Windows guests. What else do I need to set to get migration of Windows guests working?
Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingersoll@mjritsolutions.com
From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of Rick Ingersoll
Sent: Sunday, June 30, 2013 11:30 PM
To: users@ovirt.org
Subject: [Users] Migration of Windows
Hi,
I have an oVirt 3.1 build. I have 3 hosts and a few virtual machines set up that I am using for testing. I am using Gluster storage set up as a distributed volume across the 3 hosts. I can migrate Linux guests across my 3 hosts, but I cannot migrate Windows guests. I get “Migration failed due to Error: Fatal error during migration.” The event ID is 65. Is there something additional that needs to be done to Windows guests for them to support live migration?
Thanks,
Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingersoll@mjritsolutions.com
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Here you go. Thanks!! Thread-94::DEBUG::2013-07-03 12:00:31,087::task::957::TaskManager.Task::(_decref) Task=`973e6800-21f8-46c9-8c0c-0892f8511ede`::ref 0 aborting False libvirtEventLoop::DEBUG::2013-07-03 12:00:31,341::libvirtvm::3079::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::event Resumed detail 0 opaque None libvirtEventLoop::DEBUG::2013-07-03 12:00:31,403::libvirtvm::3079::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::event Resumed detail 1 opaque None Thread-33729::DEBUG::2013-07-03 12:00:31,425::libvirtvm::380::vm.Vm::(cancel) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::canceling migration downtime thread Thread-33729::DEBUG::2013-07-03 12:00:31,426::libvirtvm::439::vm.Vm::(stop) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::stopping migration monitor thread Thread-33730::DEBUG::2013-07-03 12:00:31,426::libvirtvm::377::vm.Vm::(run) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::migration downtime thread exiting Thread-33729::ERROR::2013-07-03 12:00:31,427::vm::198::vm.Vm::(_recover) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::Domain not found: no domain with matching name 'MJR-DC2' Thread-33729::ERROR::2013-07-03 12:00:31,493::vm::286::vm.Vm::(run) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::Failed to migrate Traceback (most recent call last): File "/usr/share/vdsm/vm.py", line 271, in run self._startUnderlyingMigration() File "/usr/share/vdsm/libvirtvm.py", line 505, in _startUnderlyingMigration None, maxBandwidth) File "/usr/share/vdsm/libvirtvm.py", line 541, in f ret = attr(*args, **kwargs) File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper ret = f(*args, **kwargs) File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1178, in migrateToURI2 if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self) libvirtError: Domain not found: no domain with matching name 'MJR-DC2' Thread-33739::DEBUG::2013-07-03 12:00:31,552::BindingXMLRPC::913::vds::(wrapper) client [10.28.0.11]::call vmGetStats with ('25160f61-26f5-40ee-b245-e42c01048586',) {} Thread-33739::DEBUG::2013-07-03 12:00:31,552::BindingXMLRPC::920::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '3045', 'displayIp': '0', 'displayPort': u'5900', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': 28799L, 'hash': '-2094380594998128553', 'balloonInfo': {'balloon_max': 4194304, 'balloon_cur': 4194304}, 'pauseCode': 'NOERR', 'clientIp': '', 'kvmEnable': 'true', 'network': {u'vnet0': {'macAddr': '00:1a:4a:1c:00:72', 'rxDropped': '0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmId': '25160f61-26f5-40ee-b245-e42c01048586', 'displayType': 'vnc', 'cpuUser': '-0.90', 'disks': {u'vda': {'readLatency': '3563772', 'apparentsize': '64424509440', 'writeLatency': '778826', 'imageID': '60a8c009-6ca5-4142-9e5a-8ac938e4d253', 'flushLatency': '0', 'readRate': '1773.17', 'truesize': '11490885632', 'writeRate': '5779.85'}}, 'monitorResponse': '0', 'statsAge': '0.47', 'elapsedTime': '59584', 'vmType': 'kvm', 'cpuSys': '24.43', 'appsList': [], 'guestIPs': ''}]} VM Channels Listener::DEBUG::2013-07-03 12:00:32,266::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 57. 
Thread-101::DEBUG::2013-07-03 12:00:36,087::task::568::TaskManager.Task::(_updateState) Task=`8de313be-6577-475a-b832-a84393ae2e4a`::moving from state init -> state preparing Thread-101::INFO::2013-07-03 12:00:36,087::logUtils::41::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='00830513-bb50-4101-a2c1-9855bf30a552', spUUID='4c6b0fe1-47b1-4478-b344-1d8a2131e147', imgUUID='f1640a3e-431d-40dc-ba1f-d5c98bbdd8fa', volUUID='a8e391b9-0e3a-4de4-a22d-e0eed532590b', options=None) Thread-101::DEBUG::2013-07-03 12:00:36,093::fileVolume::561::Storage.Volume::(validateVolumePath) validate path for a8e391b9-0e3a-4de4-a22d-e0eed532590b Thread-101::INFO::2013-07-03 12:00:36,095::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '3957784576', 'apparentsize': '42949672960'} Thread-101::DEBUG::2013-07-03 12:00:36,095::task::1151::TaskManager.Task::(prepare) Task=`8de313be-6577-475a-b832-a84393ae2e4a`::finished: {'truesize': '3957784576', 'apparentsize': '42949672960'} Thread-101::DEBUG::2013-07-03 12:00:36,095::task::568::TaskManager.Task::(_updateState) Task=`8de313be-6577-475a-b832-a84393ae2e4a`::moving from state preparing -> state finished Thread-101::DEBUG::2013-07-03 12:00:36,095::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-101::DEBUG::2013-07-03 12:00:36,095::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-101::DEBUG::2013-07-03 12:00:36,095::task::957::TaskManager.Task::(_decref) Task=`8de313be-6577-475a-b832-a84393ae2e4a`::ref 0 aborting False Thread-33742::DEBUG::2013-07-03 12:00:37,791::task::568::TaskManager.Task::(_updateState) Task=`f2574575-5d29-4edb-bdbb-e03dc24338a1`::moving from state init -> state preparing Thread-33742::INFO::2013-07-03 12:00:37,791::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-33742::INFO::2013-07-03 12:00:37,791::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.0108690261841', 'lastCheck': '7.1', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00719094276428', 'lastCheck': '1.3', 'code': 0, 'valid': True}} Thread-33742::DEBUG::2013-07-03 12:00:37,791::task::1151::TaskManager.Task::(prepare) Task=`f2574575-5d29-4edb-bdbb-e03dc24338a1`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.0108690261841', 'lastCheck': '7.1', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00719094276428', 'lastCheck': '1.3', 'code': 0, 'valid': True}} Thread-33742::DEBUG::2013-07-03 12:00:37,791::task::568::TaskManager.Task::(_updateState) Task=`f2574575-5d29-4edb-bdbb-e03dc24338a1`::moving from state preparing -> state finished Thread-33742::DEBUG::2013-07-03 12:00:37,792::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-33742::DEBUG::2013-07-03 12:00:37,792::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-33742::DEBUG::2013-07-03 12:00:37,792::task::957::TaskManager.Task::(_decref) Task=`f2574575-5d29-4edb-bdbb-e03dc24338a1`::ref 0 aborting False Thread-33748::DEBUG::2013-07-03 12:00:48,939::task::568::TaskManager.Task::(_updateState) Task=`0a363edb-397a-48a6-bdb2-82a9b63a61ef`::moving from state init -> state preparing Thread-33748::INFO::2013-07-03 12:00:48,940::logUtils::41::dispatcher::(wrapper) Run and protect: 
repoStats(options=None) Thread-33748::INFO::2013-07-03 12:00:48,940::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00747418403625', 'lastCheck': '8.3', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00742316246033', 'lastCheck': '2.4', 'code': 0, 'valid': True}} Thread-33748::DEBUG::2013-07-03 12:00:48,940::task::1151::TaskManager.Task::(prepare) Task=`0a363edb-397a-48a6-bdb2-82a9b63a61ef`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00747418403625', 'lastCheck': '8.3', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00742316246033', 'lastCheck': '2.4', 'code': 0, 'valid': True}} Thread-33748::DEBUG::2013-07-03 12:00:48,940::task::568::TaskManager.Task::(_updateState) Task=`0a363edb-397a-48a6-bdb2-82a9b63a61ef`::moving from state preparing -> state finished Thread-33748::DEBUG::2013-07-03 12:00:48,940::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-33748::DEBUG::2013-07-03 12:00:48,940::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-33748::DEBUG::2013-07-03 12:00:48,940::task::957::TaskManager.Task::(_decref) Task=`0a363edb-397a-48a6-bdb2-82a9b63a61ef`::ref 0 aborting False VM Channels Listener::DEBUG::2013-07-03 12:00:59,294::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 17. Thread-33754::DEBUG::2013-07-03 12:00:59,945::task::568::TaskManager.Task::(_updateState) Task=`c6ee4af0-c55c-4672-b42d-e18200040ee0`::moving from state init -> state preparing Thread-33754::INFO::2013-07-03 12:00:59,945::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-33754::INFO::2013-07-03 12:00:59,946::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00747799873352', 'lastCheck': '9.3', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00735116004944', 'lastCheck': '3.4', 'code': 0, 'valid': True}} Thread-33754::DEBUG::2013-07-03 12:00:59,946::task::1151::TaskManager.Task::(prepare) Task=`c6ee4af0-c55c-4672-b42d-e18200040ee0`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00747799873352', 'lastCheck': '9.3', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00735116004944', 'lastCheck': '3.4', 'code': 0, 'valid': True}} Thread-33754::DEBUG::2013-07-03 12:00:59,946::task::568::TaskManager.Task::(_updateState) Task=`c6ee4af0-c55c-4672-b42d-e18200040ee0`::moving from state preparing -> state finished Thread-33754::DEBUG::2013-07-03 12:00:59,946::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-33754::DEBUG::2013-07-03 12:00:59,946::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-33754::DEBUG::2013-07-03 12:00:59,946::task::957::TaskManager.Task::(_decref) Task=`c6ee4af0-c55c-4672-b42d-e18200040ee0`::ref 0 aborting False VM Channels Listener::DEBUG::2013-07-03 12:01:02,298::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 57. 
Thread-33760::DEBUG::2013-07-03 12:01:11,044::task::568::TaskManager.Task::(_updateState) Task=`400df6e7-80ac-46a7-8058-a2d8b00048d5`::moving from state init -> state preparing Thread-33760::INFO::2013-07-03 12:01:11,045::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-33760::INFO::2013-07-03 12:01:11,045::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00796604156494', 'lastCheck': '0.3', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00802111625671', 'lastCheck': '4.5', 'code': 0, 'valid': True}} Thread-33760::DEBUG::2013-07-03 12:01:11,045::task::1151::TaskManager.Task::(prepare) Task=`400df6e7-80ac-46a7-8058-a2d8b00048d5`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00796604156494', 'lastCheck': '0.3', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00802111625671', 'lastCheck': '4.5', 'code': 0, 'valid': True}} Thread-33760::DEBUG::2013-07-03 12:01:11,046::task::568::TaskManager.Task::(_updateState) Task=`400df6e7-80ac-46a7-8058-a2d8b00048d5`::moving from state preparing -> state finished Thread-33760::DEBUG::2013-07-03 12:01:11,046::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-33760::DEBUG::2013-07-03 12:01:11,046::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-33760::DEBUG::2013-07-03 12:01:11,046::task::957::TaskManager.Task::(_decref) Task=`400df6e7-80ac-46a7-8058-a2d8b00048d5`::ref 0 aborting False Thread-33766::DEBUG::2013-07-03 12:01:22,314::task::568::TaskManager.Task::(_updateState) Task=`9e946b59-144a-480e-9acb-e61fb3b55d94`::moving from state init -> state preparing Thread-33766::INFO::2013-07-03 12:01:22,315::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-33766::INFO::2013-07-03 12:01:22,315::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00790905952454', 'lastCheck': '1.6', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.0083429813385', 'lastCheck': '5.8', 'code': 0, 'valid': True}} Thread-33766::DEBUG::2013-07-03 12:01:22,315::task::1151::TaskManager.Task::(prepare) Task=`9e946b59-144a-480e-9acb-e61fb3b55d94`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00790905952454', 'lastCheck': '1.6', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.0083429813385', 'lastCheck': '5.8', 'code': 0, 'valid': True}} Thread-33766::DEBUG::2013-07-03 12:01:22,315::task::568::TaskManager.Task::(_updateState) Task=`9e946b59-144a-480e-9acb-e61fb3b55d94`::moving from state preparing -> state finished Thread-33766::DEBUG::2013-07-03 12:01:22,315::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-33766::DEBUG::2013-07-03 12:01:22,315::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-33766::DEBUG::2013-07-03 12:01:22,316::task::957::TaskManager.Task::(_decref) Task=`9e946b59-144a-480e-9acb-e61fb3b55d94`::ref 0 aborting False VM Channels Listener::DEBUG::2013-07-03 12:01:29,327::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 17. 
Thread-94::DEBUG::2013-07-03 12:01:31,137::task::568::TaskManager.Task::(_updateState) Task=`a564043d-a694-4e2d-90e0-cc765b470f13`::moving from state init -> state preparing Thread-94::INFO::2013-07-03 12:01:31,137::logUtils::41::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='00830513-bb50-4101-a2c1-9855bf30a552', spUUID='4c6b0fe1-47b1-4478-b344-1d8a2131e147', imgUUID='60a8c009-6ca5-4142-9e5a-8ac938e4d253', volUUID='799087e1-1048-42fe-b345-8ed98985127c', options=None) Thread-94::DEBUG::2013-07-03 12:01:31,141::fileVolume::561::Storage.Volume::(validateVolumePath) validate path for 799087e1-1048-42fe-b345-8ed98985127c Thread-94::INFO::2013-07-03 12:01:31,143::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '11490885632', 'apparentsize': '64424509440'} Thread-94::DEBUG::2013-07-03 12:01:31,143::task::1151::TaskManager.Task::(prepare) Task=`a564043d-a694-4e2d-90e0-cc765b470f13`::finished: {'truesize': '11490885632', 'apparentsize': '64424509440'} Thread-94::DEBUG::2013-07-03 12:01:31,143::task::568::TaskManager.Task::(_updateState) Task=`a564043d-a694-4e2d-90e0-cc765b470f13`::moving from state preparing -> state finished Thread-94::DEBUG::2013-07-03 12:01:31,143::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-94::DEBUG::2013-07-03 12:01:31,144::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-94::DEBUG::2013-07-03 12:01:31,144::task::957::TaskManager.Task::(_decref) Task=`a564043d-a694-4e2d-90e0-cc765b470f13`::ref 0 aborting False VM Channels Listener::DEBUG::2013-07-03 12:01:32,331::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 57. Thread-33772::DEBUG::2013-07-03 12:01:33,339::task::568::TaskManager.Task::(_updateState) Task=`4ffa6c04-4e45-4cd4-ba42-26ed5bd7642d`::moving from state init -> state preparing Thread-33772::INFO::2013-07-03 12:01:33,339::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-33772::INFO::2013-07-03 12:01:33,339::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00766611099243', 'lastCheck': '2.6', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.0073778629303', 'lastCheck': '6.8', 'code': 0, 'valid': True}} Thread-33772::DEBUG::2013-07-03 12:01:33,339::task::1151::TaskManager.Task::(prepare) Task=`4ffa6c04-4e45-4cd4-ba42-26ed5bd7642d`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00766611099243', 'lastCheck': '2.6', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.0073778629303', 'lastCheck': '6.8', 'code': 0, 'valid': True}} Thread-33772::DEBUG::2013-07-03 12:01:33,339::task::568::TaskManager.Task::(_updateState) Task=`4ffa6c04-4e45-4cd4-ba42-26ed5bd7642d`::moving from state preparing -> state finished Thread-33772::DEBUG::2013-07-03 12:01:33,340::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-33772::DEBUG::2013-07-03 12:01:33,340::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-33772::DEBUG::2013-07-03 12:01:33,340::task::957::TaskManager.Task::(_decref) Task=`4ffa6c04-4e45-4cd4-ba42-26ed5bd7642d`::ref 0 aborting False Thread-101::DEBUG::2013-07-03 12:01:36,148::task::568::TaskManager.Task::(_updateState) Task=`b99be371-2feb-43ec-907a-b41a21c0eed0`::moving from state init -> state preparing 
Thread-101::INFO::2013-07-03 12:01:36,149::logUtils::41::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='00830513-bb50-4101-a2c1-9855bf30a552', spUUID='4c6b0fe1-47b1-4478-b344-1d8a2131e147', imgUUID='f1640a3e-431d-40dc-ba1f-d5c98bbdd8fa', volUUID='a8e391b9-0e3a-4de4-a22d-e0eed532590b', options=None) Thread-101::DEBUG::2013-07-03 12:01:36,154::fileVolume::561::Storage.Volume::(validateVolumePath) validate path for a8e391b9-0e3a-4de4-a22d-e0eed532590b Thread-101::INFO::2013-07-03 12:01:36,155::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '3958005760', 'apparentsize': '42949672960'} Thread-101::DEBUG::2013-07-03 12:01:36,155::task::1151::TaskManager.Task::(prepare) Task=`b99be371-2feb-43ec-907a-b41a21c0eed0`::finished: {'truesize': '3958005760', 'apparentsize': '42949672960'} Thread-101::DEBUG::2013-07-03 12:01:36,155::task::568::TaskManager.Task::(_updateState) Task=`b99be371-2feb-43ec-907a-b41a21c0eed0`::moving from state preparing -> state finished Thread-101::DEBUG::2013-07-03 12:01:36,156::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-101::DEBUG::2013-07-03 12:01:36,156::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-101::DEBUG::2013-07-03 12:01:36,156::task::957::TaskManager.Task::(_decref) Task=`b99be371-2feb-43ec-907a-b41a21c0eed0`::ref 0 aborting False Thread-33778::DEBUG::2013-07-03 12:01:44,340::task::568::TaskManager.Task::(_updateState) Task=`2616debc-9291-4e4d-b8e2-2298bf53fb7c`::moving from state init -> state preparing Thread-33778::INFO::2013-07-03 12:01:44,341::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-33778::INFO::2013-07-03 12:01:44,341::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00757718086243', 'lastCheck': '3.6', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00704407691956', 'lastCheck': '7.8', 'code': 0, 'valid': True}} Thread-33778::DEBUG::2013-07-03 12:01:44,341::task::1151::TaskManager.Task::(prepare) Task=`2616debc-9291-4e4d-b8e2-2298bf53fb7c`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00757718086243', 'lastCheck': '3.6', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00704407691956', 'lastCheck': '7.8', 'code': 0, 'valid': True}} Thread-33778::DEBUG::2013-07-03 12:01:44,341::task::568::TaskManager.Task::(_updateState) Task=`2616debc-9291-4e4d-b8e2-2298bf53fb7c`::moving from state preparing -> state finished Thread-33778::DEBUG::2013-07-03 12:01:44,341::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-33778::DEBUG::2013-07-03 12:01:44,341::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-33778::DEBUG::2013-07-03 12:01:44,342::task::957::TaskManager.Task::(_decref) Task=`2616debc-9291-4e4d-b8e2-2298bf53fb7c`::ref 0 aborting False Thread-33784::DEBUG::2013-07-03 12:01:55,442::task::568::TaskManager.Task::(_updateState) Task=`1ac05f20-a6ce-42c6-a647-9d39bc856c26`::moving from state init -> state preparing Thread-33784::INFO::2013-07-03 12:01:55,443::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-33784::INFO::2013-07-03 12:01:55,443::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: 
{'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00750803947449', 'lastCheck': '4.7', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00830221176147', 'lastCheck': '8.9', 'code': 0, 'valid': True}} Thread-33784::DEBUG::2013-07-03 12:01:55,443::task::1151::TaskManager.Task::(prepare) Task=`1ac05f20-a6ce-42c6-a647-9d39bc856c26`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00750803947449', 'lastCheck': '4.7', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00830221176147', 'lastCheck': '8.9', 'code': 0, 'valid': True}} Thread-33784::DEBUG::2013-07-03 12:01:55,443::task::568::TaskManager.Task::(_updateState) Task=`1ac05f20-a6ce-42c6-a647-9d39bc856c26`::moving from state preparing -> state finished Thread-33784::DEBUG::2013-07-03 12:01:55,443::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-33784::DEBUG::2013-07-03 12:01:55,444::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-33784::DEBUG::2013-07-03 12:01:55,444::task::957::TaskManager.Task::(_decref) Task=`1ac05f20-a6ce-42c6-a647-9d39bc856c26`::ref 0 aborting False VM Channels Listener::DEBUG::2013-07-03 12:01:59,357::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 17. VM Channels Listener::DEBUG::2013-07-03 12:02:02,360::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 57. Thread-33790::DEBUG::2013-07-03 12:02:06,847::task::568::TaskManager.Task::(_updateState) Task=`cad2df42-43fe-40e7-a934-cdac196eff1d`::moving from state init -> state preparing Thread-33790::INFO::2013-07-03 12:02:06,848::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-33790::INFO::2013-07-03 12:02:06,848::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00719690322876', 'lastCheck': '6.0', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00763297080994', 'lastCheck': '0.2', 'code': 0, 'valid': True}} Thread-33790::DEBUG::2013-07-03 12:02:06,848::task::1151::TaskManager.Task::(prepare) Task=`cad2df42-43fe-40e7-a934-cdac196eff1d`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00719690322876', 'lastCheck': '6.0', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00763297080994', 'lastCheck': '0.2', 'code': 0, 'valid': True}} Thread-33790::DEBUG::2013-07-03 12:02:06,848::task::568::TaskManager.Task::(_updateState) Task=`cad2df42-43fe-40e7-a934-cdac196eff1d`::moving from state preparing -> state finished Thread-33790::DEBUG::2013-07-03 12:02:06,848::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-33790::DEBUG::2013-07-03 12:02:06,849::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-33790::DEBUG::2013-07-03 12:02:06,849::task::957::TaskManager.Task::(_decref) Task=`cad2df42-43fe-40e7-a934-cdac196eff1d`::ref 0 aborting False Thread-25::DEBUG::2013-07-03 12:02:16,776::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = ' Volume group "afc62003-75dc-4959-8e17-3ea96bbec6bb" not found\n'; <rc> = 5 Thread-25::WARNING::2013-07-03 12:02:16,777::lvm::373::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group "afc62003-75dc-4959-8e17-3ea96bbec6bb" not found'] Thread-25::DEBUG::2013-07-03 12:02:16,777::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-25::DEBUG::2013-07-03 12:02:16,787::fileSD::131::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/ovirt-engine.mjritsolutions.local:_var_lib_exports_iso/afc62003-75dc-4959-8e17-3ea96bbec6bb Thread-25::DEBUG::2013-07-03 12:02:16,813::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend Thread-25::DEBUG::2013-07-03 12:02:16,821::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Iso', 'DESCRIPTION=ISO_DOMAIN', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=0', 'POOL_UUID=4c6b0fe1-47b1-4478-b344-1d8a2131e147', 'REMOTE_PATH=no.one.reads.this:/rhev', 'ROLE=Regular', 'SDUUID=afc62003-75dc-4959-8e17-3ea96bbec6bb', 'TYPE=NFS', 'VERSION=0', '_SHA_CKSUM=0ee9a41b38dc39849b5e2782f49acbb7ae504958'] Thread-25::DEBUG::2013-07-03 12:02:16,823::fileSD::503::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images [] Thread-25::WARNING::2013-07-03 12:02:16,823::sd::361::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace afc62003-75dc-4959-8e17-3ea96bbec6bb_imageNS already registered Thread-25::WARNING::2013-07-03 12:02:16,823::sd::369::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace afc62003-75dc-4959-8e17-3ea96bbec6bb_volumeNS already registered Thread-33798::DEBUG::2013-07-03 12:02:18,012::task::568::TaskManager.Task::(_updateState) Task=`99474735-158c-4bde-b8bc-ba1eb08c9bda`::moving from state init -> state preparing Thread-33798::INFO::2013-07-03 12:02:18,012::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-33798::INFO::2013-07-03 12:02:18,012::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00761485099792', 'lastCheck': '7.2', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.0073709487915', 'lastCheck': '1.2', 'code': 0, 'valid': True}} Thread-33798::DEBUG::2013-07-03 12:02:18,012::task::1151::TaskManager.Task::(prepare) Task=`99474735-158c-4bde-b8bc-ba1eb08c9bda`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00761485099792', 'lastCheck': '7.2', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.0073709487915', 'lastCheck': '1.2', 'code': 0, 'valid': True}} Thread-33798::DEBUG::2013-07-03 12:02:18,013::task::568::TaskManager.Task::(_updateState) Task=`99474735-158c-4bde-b8bc-ba1eb08c9bda`::moving from state preparing -> state finished Thread-33798::DEBUG::2013-07-03 12:02:18,013::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-33798::DEBUG::2013-07-03 12:02:18,013::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-33798::DEBUG::2013-07-03 12:02:18,013::task::957::TaskManager.Task::(_decref) Task=`99474735-158c-4bde-b8bc-ba1eb08c9bda`::ref 0 aborting False

Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingersoll@mjritsolutions.com
http://www.mjritsolutions.com

-----Original Message-----
From: Omer Frenkel [mailto:ofrenkel@redhat.com]
Sent: Wednesday, July 03, 2013 1:40 AM
To: Rick Ingersoll
Cc: Tim Hildred; users@ovirt.org
Subject: Re: [Users] Migration of Windows

----- Original Message -----
From: "Rick Ingersoll" <rick.ingersoll@mjritsolutions.com>
To: "Rick Ingersoll" <rick.ingersoll@mjritsolutions.com>, "Tim Hildred" <thildred@redhat.com>
Cc: users@ovirt.org
Sent: Wednesday, July 3, 2013 1:43:51 AM
Subject: Re: [Users] Migration of Windows
Are there any other logs I can pull that might point to what the issue is?
Yes, please attach the relevant vdsm log from the source host (10.28.0.14).
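For the source-host log, a minimal sketch of one way to pull just the entries for this VM, assuming the default vdsm log location and using the vmId shown in the engine log above (the output file name is just illustrative):

  # run on the source host (10.28.0.14)
  grep '25160f61-26f5-40ee-b245-e42c01048586' /var/log/vdsm/vdsm.log > vdsm-excerpt.log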
Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingersoll@mjritsolutions.com
http://www.mjritsolutions.com

On 07/03/2013 07:03 PM, Rick Ingersoll wrote:
Here you go. Thanks!!
Thread-94::DEBUG::2013-07-03 12:00:31,087::task::957::TaskManager.Task::(_decref) Task=`973e6800-21f8-46c9-8c0c-0892f8511ede`::ref 0 aborting False libvirtEventLoop::DEBUG::2013-07-03 12:00:31,341::libvirtvm::3079::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::event Resumed detail 0 opaque None libvirtEventLoop::DEBUG::2013-07-03 12:00:31,403::libvirtvm::3079::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::event Resumed detail 1 opaque None Thread-33729::DEBUG::2013-07-03 12:00:31,425::libvirtvm::380::vm.Vm::(cancel) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::canceling migration downtime thread Thread-33729::DEBUG::2013-07-03 12:00:31,426::libvirtvm::439::vm.Vm::(stop) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::stopping migration monitor thread Thread-33730::DEBUG::2013-07-03 12:00:31,426::libvirtvm::377::vm.Vm::(run) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::migration downtime thread exiting Thread-33729::ERROR::2013-07-03 12:00:31,427::vm::198::vm.Vm::(_recover) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::Domain not found: no domain with matching name 'MJR-DC2' Thread-33729::ERROR::2013-07-03 12:00:31,493::vm::286::vm.Vm::(run) vmId=`25160f61-26f5-40ee-b245-e42c01048586`::Failed to migrate Traceback (most recent call last): File "/usr/share/vdsm/vm.py", line 271, in run self._startUnderlyingMigration() File "/usr/share/vdsm/libvirtvm.py", line 505, in _startUnderlyingMigration None, maxBandwidth) File "/usr/share/vdsm/libvirtvm.py", line 541, in f ret = attr(*args, **kwargs) File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper ret = f(*args, **kwargs) File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1178, in migrateToURI2 if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self) libvirtError: Domain not found: no domain with matching name 'MJR-DC2' Thread-33739::DEBUG::2013-07-03 12:00:31,552::BindingXMLRPC::913::vds::(wrapper) client [10.28.0.11]::call vmGetStats with ('25160f61-26f5-40ee-b245-e42c01048586',) {} Thread-33739::DEBUG::2013-07-03 12:00:31,552::BindingXMLRPC::920::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '3045', 'displayIp': '0', 'displayPort': u'5900', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': 28799L, 'hash': '-2094380594998128553', 'balloonInfo': {'balloon_max': 4194304, 'balloon_cur': 4194304}, 'pauseCode': 'NOERR', 'clientIp': '', 'kvmEnable': 'true', 'network': {u'vnet0': {'macAddr': '00:1a:4a:1c:00:72', 'rxDropped': '0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmId': '25160f61-26f5-40ee-b245-e42c01048586', 'displayType': 'vnc', 'cpuUser': '-0.90', 'disks': {u'vda': {'readLatency': '3563772', 'apparentsize': '64424509440', 'writeLatency': '778826', 'imageID': '60a8c009-6ca5-4142-9e5a-8ac938e4d253', 'flu! shLatency' : '0', 'readRate': '1773.17', 'truesize': '11490885632', 'writeRate': '5779.85'}}, 'monitorResponse': '0', 'statsAge': '0.47', 'elapsedTime': '59584', 'vmType': 'kvm', 'cpuSys': '24.43', 'appsList': [], 'guestIPs': ''}]} VM Channels Listener::DEBUG::2013-07-03 12:00:32,266::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 57. 
Thread-101::DEBUG::2013-07-03 12:00:36,087::task::568::TaskManager.Task::(_updateState) Task=`8de313be-6577-475a-b832-a84393ae2e4a`::moving from state init -> state preparing
Thread-101::INFO::2013-07-03 12:00:36,087::logUtils::41::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='00830513-bb50-4101-a2c1-9855bf30a552', spUUID='4c6b0fe1-47b1-4478-b344-1d8a2131e147', imgUUID='f1640a3e-431d-40dc-ba1f-d5c98bbdd8fa', volUUID='a8e391b9-0e3a-4de4-a22d-e0eed532590b', options=None)
Thread-101::DEBUG::2013-07-03 12:00:36,093::fileVolume::561::Storage.Volume::(validateVolumePath) validate path for a8e391b9-0e3a-4de4-a22d-e0eed532590b
Thread-101::INFO::2013-07-03 12:00:36,095::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '3957784576', 'apparentsize': '42949672960'}
Thread-101::DEBUG::2013-07-03 12:00:36,095::task::1151::TaskManager.Task::(prepare) Task=`8de313be-6577-475a-b832-a84393ae2e4a`::finished: {'truesize': '3957784576', 'apparentsize': '42949672960'}
Thread-101::DEBUG::2013-07-03 12:00:36,095::task::568::TaskManager.Task::(_updateState) Task=`8de313be-6577-475a-b832-a84393ae2e4a`::moving from state preparing -> state finished
Thread-101::DEBUG::2013-07-03 12:00:36,095::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-101::DEBUG::2013-07-03 12:00:36,095::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-101::DEBUG::2013-07-03 12:00:36,095::task::957::TaskManager.Task::(_decref) Task=`8de313be-6577-475a-b832-a84393ae2e4a`::ref 0 aborting False
Thread-33742::DEBUG::2013-07-03 12:00:37,791::task::568::TaskManager.Task::(_updateState) Task=`f2574575-5d29-4edb-bdbb-e03dc24338a1`::moving from state init -> state preparing
Thread-33742::INFO::2013-07-03 12:00:37,791::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-33742::INFO::2013-07-03 12:00:37,791::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.0108690261841', 'lastCheck': '7.1', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00719094276428', 'lastCheck': '1.3', 'code': 0, 'valid': True}}
Thread-33742::DEBUG::2013-07-03 12:00:37,791::task::1151::TaskManager.Task::(prepare) Task=`f2574575-5d29-4edb-bdbb-e03dc24338a1`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.0108690261841', 'lastCheck': '7.1', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00719094276428', 'lastCheck': '1.3', 'code': 0, 'valid': True}}
Thread-33742::DEBUG::2013-07-03 12:00:37,791::task::568::TaskManager.Task::(_updateState) Task=`f2574575-5d29-4edb-bdbb-e03dc24338a1`::moving from state preparing -> state finished
Thread-33742::DEBUG::2013-07-03 12:00:37,792::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-33742::DEBUG::2013-07-03 12:00:37,792::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-33742::DEBUG::2013-07-03 12:00:37,792::task::957::TaskManager.Task::(_decref) Task=`f2574575-5d29-4edb-bdbb-e03dc24338a1`::ref 0 aborting False
Thread-33748::DEBUG::2013-07-03 12:00:48,939::task::568::TaskManager.Task::(_updateState) Task=`0a363edb-397a-48a6-bdb2-82a9b63a61ef`::moving from state init -> state preparing
Thread-33748::INFO::2013-07-03 12:00:48,940::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-33748::INFO::2013-07-03 12:00:48,940::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00747418403625', 'lastCheck': '8.3', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00742316246033', 'lastCheck': '2.4', 'code': 0, 'valid': True}}
Thread-33748::DEBUG::2013-07-03 12:00:48,940::task::1151::TaskManager.Task::(prepare) Task=`0a363edb-397a-48a6-bdb2-82a9b63a61ef`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00747418403625', 'lastCheck': '8.3', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00742316246033', 'lastCheck': '2.4', 'code': 0, 'valid': True}}
Thread-33748::DEBUG::2013-07-03 12:00:48,940::task::568::TaskManager.Task::(_updateState) Task=`0a363edb-397a-48a6-bdb2-82a9b63a61ef`::moving from state preparing -> state finished
Thread-33748::DEBUG::2013-07-03 12:00:48,940::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-33748::DEBUG::2013-07-03 12:00:48,940::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-33748::DEBUG::2013-07-03 12:00:48,940::task::957::TaskManager.Task::(_decref) Task=`0a363edb-397a-48a6-bdb2-82a9b63a61ef`::ref 0 aborting False
VM Channels Listener::DEBUG::2013-07-03 12:00:59,294::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 17.
Thread-33754::DEBUG::2013-07-03 12:00:59,945::task::568::TaskManager.Task::(_updateState) Task=`c6ee4af0-c55c-4672-b42d-e18200040ee0`::moving from state init -> state preparing
Thread-33754::INFO::2013-07-03 12:00:59,945::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-33754::INFO::2013-07-03 12:00:59,946::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00747799873352', 'lastCheck': '9.3', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00735116004944', 'lastCheck': '3.4', 'code': 0, 'valid': True}}
Thread-33754::DEBUG::2013-07-03 12:00:59,946::task::1151::TaskManager.Task::(prepare) Task=`c6ee4af0-c55c-4672-b42d-e18200040ee0`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00747799873352', 'lastCheck': '9.3', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00735116004944', 'lastCheck': '3.4', 'code': 0, 'valid': True}}
Thread-33754::DEBUG::2013-07-03 12:00:59,946::task::568::TaskManager.Task::(_updateState) Task=`c6ee4af0-c55c-4672-b42d-e18200040ee0`::moving from state preparing -> state finished
Thread-33754::DEBUG::2013-07-03 12:00:59,946::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-33754::DEBUG::2013-07-03 12:00:59,946::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-33754::DEBUG::2013-07-03 12:00:59,946::task::957::TaskManager.Task::(_decref) Task=`c6ee4af0-c55c-4672-b42d-e18200040ee0`::ref 0 aborting False
VM Channels Listener::DEBUG::2013-07-03 12:01:02,298::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 57.
Thread-33760::DEBUG::2013-07-03 12:01:11,044::task::568::TaskManager.Task::(_updateState) Task=`400df6e7-80ac-46a7-8058-a2d8b00048d5`::moving from state init -> state preparing
Thread-33760::INFO::2013-07-03 12:01:11,045::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-33760::INFO::2013-07-03 12:01:11,045::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00796604156494', 'lastCheck': '0.3', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00802111625671', 'lastCheck': '4.5', 'code': 0, 'valid': True}}
Thread-33760::DEBUG::2013-07-03 12:01:11,045::task::1151::TaskManager.Task::(prepare) Task=`400df6e7-80ac-46a7-8058-a2d8b00048d5`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00796604156494', 'lastCheck': '0.3', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00802111625671', 'lastCheck': '4.5', 'code': 0, 'valid': True}}
Thread-33760::DEBUG::2013-07-03 12:01:11,046::task::568::TaskManager.Task::(_updateState) Task=`400df6e7-80ac-46a7-8058-a2d8b00048d5`::moving from state preparing -> state finished
Thread-33760::DEBUG::2013-07-03 12:01:11,046::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-33760::DEBUG::2013-07-03 12:01:11,046::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-33760::DEBUG::2013-07-03 12:01:11,046::task::957::TaskManager.Task::(_decref) Task=`400df6e7-80ac-46a7-8058-a2d8b00048d5`::ref 0 aborting False
Thread-33766::DEBUG::2013-07-03 12:01:22,314::task::568::TaskManager.Task::(_updateState) Task=`9e946b59-144a-480e-9acb-e61fb3b55d94`::moving from state init -> state preparing
Thread-33766::INFO::2013-07-03 12:01:22,315::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-33766::INFO::2013-07-03 12:01:22,315::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00790905952454', 'lastCheck': '1.6', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.0083429813385', 'lastCheck': '5.8', 'code': 0, 'valid': True}}
Thread-33766::DEBUG::2013-07-03 12:01:22,315::task::1151::TaskManager.Task::(prepare) Task=`9e946b59-144a-480e-9acb-e61fb3b55d94`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00790905952454', 'lastCheck': '1.6', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.0083429813385', 'lastCheck': '5.8', 'code': 0, 'valid': True}}
Thread-33766::DEBUG::2013-07-03 12:01:22,315::task::568::TaskManager.Task::(_updateState) Task=`9e946b59-144a-480e-9acb-e61fb3b55d94`::moving from state preparing -> state finished
Thread-33766::DEBUG::2013-07-03 12:01:22,315::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-33766::DEBUG::2013-07-03 12:01:22,315::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-33766::DEBUG::2013-07-03 12:01:22,316::task::957::TaskManager.Task::(_decref) Task=`9e946b59-144a-480e-9acb-e61fb3b55d94`::ref 0 aborting False
VM Channels Listener::DEBUG::2013-07-03 12:01:29,327::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 17.
Thread-94::DEBUG::2013-07-03 12:01:31,137::task::568::TaskManager.Task::(_updateState) Task=`a564043d-a694-4e2d-90e0-cc765b470f13`::moving from state init -> state preparing
Thread-94::INFO::2013-07-03 12:01:31,137::logUtils::41::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='00830513-bb50-4101-a2c1-9855bf30a552', spUUID='4c6b0fe1-47b1-4478-b344-1d8a2131e147', imgUUID='60a8c009-6ca5-4142-9e5a-8ac938e4d253', volUUID='799087e1-1048-42fe-b345-8ed98985127c', options=None)
Thread-94::DEBUG::2013-07-03 12:01:31,141::fileVolume::561::Storage.Volume::(validateVolumePath) validate path for 799087e1-1048-42fe-b345-8ed98985127c
Thread-94::INFO::2013-07-03 12:01:31,143::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '11490885632', 'apparentsize': '64424509440'}
Thread-94::DEBUG::2013-07-03 12:01:31,143::task::1151::TaskManager.Task::(prepare) Task=`a564043d-a694-4e2d-90e0-cc765b470f13`::finished: {'truesize': '11490885632', 'apparentsize': '64424509440'}
Thread-94::DEBUG::2013-07-03 12:01:31,143::task::568::TaskManager.Task::(_updateState) Task=`a564043d-a694-4e2d-90e0-cc765b470f13`::moving from state preparing -> state finished
Thread-94::DEBUG::2013-07-03 12:01:31,143::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-94::DEBUG::2013-07-03 12:01:31,144::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-94::DEBUG::2013-07-03 12:01:31,144::task::957::TaskManager.Task::(_decref) Task=`a564043d-a694-4e2d-90e0-cc765b470f13`::ref 0 aborting False
VM Channels Listener::DEBUG::2013-07-03 12:01:32,331::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 57.
Thread-33772::DEBUG::2013-07-03 12:01:33,339::task::568::TaskManager.Task::(_updateState) Task=`4ffa6c04-4e45-4cd4-ba42-26ed5bd7642d`::moving from state init -> state preparing
Thread-33772::INFO::2013-07-03 12:01:33,339::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-33772::INFO::2013-07-03 12:01:33,339::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00766611099243', 'lastCheck': '2.6', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.0073778629303', 'lastCheck': '6.8', 'code': 0, 'valid': True}}
Thread-33772::DEBUG::2013-07-03 12:01:33,339::task::1151::TaskManager.Task::(prepare) Task=`4ffa6c04-4e45-4cd4-ba42-26ed5bd7642d`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00766611099243', 'lastCheck': '2.6', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.0073778629303', 'lastCheck': '6.8', 'code': 0, 'valid': True}}
Thread-33772::DEBUG::2013-07-03 12:01:33,339::task::568::TaskManager.Task::(_updateState) Task=`4ffa6c04-4e45-4cd4-ba42-26ed5bd7642d`::moving from state preparing -> state finished
Thread-33772::DEBUG::2013-07-03 12:01:33,340::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-33772::DEBUG::2013-07-03 12:01:33,340::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-33772::DEBUG::2013-07-03 12:01:33,340::task::957::TaskManager.Task::(_decref) Task=`4ffa6c04-4e45-4cd4-ba42-26ed5bd7642d`::ref 0 aborting False
Thread-101::DEBUG::2013-07-03 12:01:36,148::task::568::TaskManager.Task::(_updateState) Task=`b99be371-2feb-43ec-907a-b41a21c0eed0`::moving from state init -> state preparing
Thread-101::INFO::2013-07-03 12:01:36,149::logUtils::41::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='00830513-bb50-4101-a2c1-9855bf30a552', spUUID='4c6b0fe1-47b1-4478-b344-1d8a2131e147', imgUUID='f1640a3e-431d-40dc-ba1f-d5c98bbdd8fa', volUUID='a8e391b9-0e3a-4de4-a22d-e0eed532590b', options=None)
Thread-101::DEBUG::2013-07-03 12:01:36,154::fileVolume::561::Storage.Volume::(validateVolumePath) validate path for a8e391b9-0e3a-4de4-a22d-e0eed532590b
Thread-101::INFO::2013-07-03 12:01:36,155::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '3958005760', 'apparentsize': '42949672960'}
Thread-101::DEBUG::2013-07-03 12:01:36,155::task::1151::TaskManager.Task::(prepare) Task=`b99be371-2feb-43ec-907a-b41a21c0eed0`::finished: {'truesize': '3958005760', 'apparentsize': '42949672960'}
Thread-101::DEBUG::2013-07-03 12:01:36,155::task::568::TaskManager.Task::(_updateState) Task=`b99be371-2feb-43ec-907a-b41a21c0eed0`::moving from state preparing -> state finished
Thread-101::DEBUG::2013-07-03 12:01:36,156::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-101::DEBUG::2013-07-03 12:01:36,156::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-101::DEBUG::2013-07-03 12:01:36,156::task::957::TaskManager.Task::(_decref) Task=`b99be371-2feb-43ec-907a-b41a21c0eed0`::ref 0 aborting False
Thread-33778::DEBUG::2013-07-03 12:01:44,340::task::568::TaskManager.Task::(_updateState) Task=`2616debc-9291-4e4d-b8e2-2298bf53fb7c`::moving from state init -> state preparing
Thread-33778::INFO::2013-07-03 12:01:44,341::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-33778::INFO::2013-07-03 12:01:44,341::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00757718086243', 'lastCheck': '3.6', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00704407691956', 'lastCheck': '7.8', 'code': 0, 'valid': True}}
Thread-33778::DEBUG::2013-07-03 12:01:44,341::task::1151::TaskManager.Task::(prepare) Task=`2616debc-9291-4e4d-b8e2-2298bf53fb7c`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00757718086243', 'lastCheck': '3.6', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00704407691956', 'lastCheck': '7.8', 'code': 0, 'valid': True}}
Thread-33778::DEBUG::2013-07-03 12:01:44,341::task::568::TaskManager.Task::(_updateState) Task=`2616debc-9291-4e4d-b8e2-2298bf53fb7c`::moving from state preparing -> state finished
Thread-33778::DEBUG::2013-07-03 12:01:44,341::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-33778::DEBUG::2013-07-03 12:01:44,341::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-33778::DEBUG::2013-07-03 12:01:44,342::task::957::TaskManager.Task::(_decref) Task=`2616debc-9291-4e4d-b8e2-2298bf53fb7c`::ref 0 aborting False
Thread-33784::DEBUG::2013-07-03 12:01:55,442::task::568::TaskManager.Task::(_updateState) Task=`1ac05f20-a6ce-42c6-a647-9d39bc856c26`::moving from state init -> state preparing
Thread-33784::INFO::2013-07-03 12:01:55,443::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-33784::INFO::2013-07-03 12:01:55,443::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00750803947449', 'lastCheck': '4.7', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00830221176147', 'lastCheck': '8.9', 'code': 0, 'valid': True}}
Thread-33784::DEBUG::2013-07-03 12:01:55,443::task::1151::TaskManager.Task::(prepare) Task=`1ac05f20-a6ce-42c6-a647-9d39bc856c26`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00750803947449', 'lastCheck': '4.7', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00830221176147', 'lastCheck': '8.9', 'code': 0, 'valid': True}}
Thread-33784::DEBUG::2013-07-03 12:01:55,443::task::568::TaskManager.Task::(_updateState) Task=`1ac05f20-a6ce-42c6-a647-9d39bc856c26`::moving from state preparing -> state finished
Thread-33784::DEBUG::2013-07-03 12:01:55,443::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-33784::DEBUG::2013-07-03 12:01:55,444::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-33784::DEBUG::2013-07-03 12:01:55,444::task::957::TaskManager.Task::(_decref) Task=`1ac05f20-a6ce-42c6-a647-9d39bc856c26`::ref 0 aborting False
VM Channels Listener::DEBUG::2013-07-03 12:01:59,357::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 17.
VM Channels Listener::DEBUG::2013-07-03 12:02:02,360::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 57.
Thread-33790::DEBUG::2013-07-03 12:02:06,847::task::568::TaskManager.Task::(_updateState) Task=`cad2df42-43fe-40e7-a934-cdac196eff1d`::moving from state init -> state preparing
Thread-33790::INFO::2013-07-03 12:02:06,848::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-33790::INFO::2013-07-03 12:02:06,848::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00719690322876', 'lastCheck': '6.0', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00763297080994', 'lastCheck': '0.2', 'code': 0, 'valid': True}}
Thread-33790::DEBUG::2013-07-03 12:02:06,848::task::1151::TaskManager.Task::(prepare) Task=`cad2df42-43fe-40e7-a934-cdac196eff1d`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00719690322876', 'lastCheck': '6.0', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.00763297080994', 'lastCheck': '0.2', 'code': 0, 'valid': True}}
Thread-33790::DEBUG::2013-07-03 12:02:06,848::task::568::TaskManager.Task::(_updateState) Task=`cad2df42-43fe-40e7-a934-cdac196eff1d`::moving from state preparing -> state finished
Thread-33790::DEBUG::2013-07-03 12:02:06,848::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-33790::DEBUG::2013-07-03 12:02:06,849::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-33790::DEBUG::2013-07-03 12:02:06,849::task::957::TaskManager.Task::(_decref) Task=`cad2df42-43fe-40e7-a934-cdac196eff1d`::ref 0 aborting False
Thread-25::DEBUG::2013-07-03 12:02:16,776::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = ' Volume group "afc62003-75dc-4959-8e17-3ea96bbec6bb" not found\n'; <rc> = 5
Thread-25::WARNING::2013-07-03 12:02:16,777::lvm::373::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group "afc62003-75dc-4959-8e17-3ea96bbec6bb" not found']
Thread-25::DEBUG::2013-07-03 12:02:16,777::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-25::DEBUG::2013-07-03 12:02:16,787::fileSD::131::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/ovirt-engine.mjritsolutions.local:_var_lib_exports_iso/afc62003-75dc-4959-8e17-3ea96bbec6bb
Thread-25::DEBUG::2013-07-03 12:02:16,813::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-25::DEBUG::2013-07-03 12:02:16,821::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Iso', 'DESCRIPTION=ISO_DOMAIN', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=0', 'POOL_UUID=4c6b0fe1-47b1-4478-b344-1d8a2131e147', 'REMOTE_PATH=no.one.reads.this:/rhev', 'ROLE=Regular', 'SDUUID=afc62003-75dc-4959-8e17-3ea96bbec6bb', 'TYPE=NFS', 'VERSION=0', '_SHA_CKSUM=0ee9a41b38dc39849b5e2782f49acbb7ae504958']
Thread-25::DEBUG::2013-07-03 12:02:16,823::fileSD::503::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
Thread-25::WARNING::2013-07-03 12:02:16,823::sd::361::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace afc62003-75dc-4959-8e17-3ea96bbec6bb_imageNS already registered
Thread-25::WARNING::2013-07-03 12:02:16,823::sd::369::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace afc62003-75dc-4959-8e17-3ea96bbec6bb_volumeNS already registered
Thread-33798::DEBUG::2013-07-03 12:02:18,012::task::568::TaskManager.Task::(_updateState) Task=`99474735-158c-4bde-b8bc-ba1eb08c9bda`::moving from state init -> state preparing
Thread-33798::INFO::2013-07-03 12:02:18,012::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-33798::INFO::2013-07-03 12:02:18,012::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00761485099792', 'lastCheck': '7.2', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.0073709487915', 'lastCheck': '1.2', 'code': 0, 'valid': True}}
Thread-33798::DEBUG::2013-07-03 12:02:18,012::task::1151::TaskManager.Task::(prepare) Task=`99474735-158c-4bde-b8bc-ba1eb08c9bda`::finished: {'00830513-bb50-4101-a2c1-9855bf30a552': {'delay': '0.00761485099792', 'lastCheck': '7.2', 'code': 0, 'valid': True}, 'afc62003-75dc-4959-8e17-3ea96bbec6bb': {'delay': '0.0073709487915', 'lastCheck': '1.2', 'code': 0, 'valid': True}}
Thread-33798::DEBUG::2013-07-03 12:02:18,013::task::568::TaskManager.Task::(_updateState) Task=`99474735-158c-4bde-b8bc-ba1eb08c9bda`::moving from state preparing -> state finished
Thread-33798::DEBUG::2013-07-03 12:02:18,013::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-33798::DEBUG::2013-07-03 12:02:18,013::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-33798::DEBUG::2013-07-03 12:02:18,013::task::957::TaskManager.Task::(_decref) Task=`99474735-158c-4bde-b8bc-ba1eb08c9bda`::ref 0 aborting False
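For reference, the traceback above bottoms out in libvirt's migrateToURI2(). A minimal sketch of that call path using the libvirt Python bindings looks roughly like the following; the connection URIs and the exact flag set are assumptions for illustration, not values taken from vdsm's actual configuration:

    # Minimal sketch of the libvirt call that fails in the traceback above.
    # The URIs and flags are illustrative placeholders, not vdsm's real values.
    import libvirt

    SRC_URI = "qemu:///system"                      # local hypervisor on the source host
    DST_URI = "qemu+tls://dest.example.com/system"  # peer-to-peer target (assumption)

    conn = libvirt.open(SRC_URI)
    dom = conn.lookupByName("MJR-DC2")  # raises libvirtError if the name is unknown

    try:
        # vdsm's _startUnderlyingMigration() ends up in a call like this one:
        # a live, peer-to-peer migration with an optional bandwidth cap.
        flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
        dom.migrateToURI2(DST_URI, None, None, flags, None, 0)
    except libvirt.libvirtError as e:
        # "Domain not found: no domain with matching name 'MJR-DC2'" is what the
        # log shows; it typically means the destination side never created (or
        # already tore down) the incoming domain when the source checked on it.
        print("migration failed: %s" % e)
    finally:
        conn.close()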
-----Original Message-----
From: Omer Frenkel [mailto:ofrenkel@redhat.com]
Sent: Wednesday, July 03, 2013 1:40 AM
To: Rick Ingersoll
Cc: Tim Hildred; users@ovirt.org
Subject: Re: [Users] Migration of Windows
----- Original Message -----
From: "Rick Ingersoll" <rick.ingersoll@mjritsolutions.com> To: "Rick Ingersoll" <rick.ingersoll@mjritsolutions.com>, "Tim Hildred" <thildred@redhat.com> Cc: users@ovirt.org Sent: Wednesday, July 3, 2013 1:43:51 AM Subject: Re: [Users] Migration of Windows
Are there any other logs I can pull that might point to what the issue is?
Yes, please attach the relevant vdsm log from the source host (10.28.0.14).
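In case it helps when pulling that log: a throwaway Python filter like the one below can cut a large vdsm.log down to the lines for a single VM, plus ERROR lines and tracebacks. The vmId and the default log path come from the logs in this thread; the traceback-detection heuristic is a rough assumption, not anything vdsm provides:

    # Throwaway helper: keep only the lines for one VM (plus ERROR lines and
    # traceback blocks) from a large vdsm.log before attaching it to a mail.
    import sys

    VM_ID = "25160f61-26f5-40ee-b245-e42c01048586"  # vmId from the engine log
    LOG = "/var/log/vdsm/vdsm.log"                  # default vdsm log path

    in_traceback = False
    with open(LOG) as f:
        for line in f:
            if line.startswith("Traceback"):
                in_traceback = True        # start of a Python traceback block
            elif "::" in line:
                in_traceback = False       # a new log record ends the traceback
            if in_traceback or VM_ID in line or "::ERROR::" in line:
                sys.stdout.write(line)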
-----Original Message-----
From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of Rick Ingersoll
Sent: Tuesday, July 02, 2013 12:12 PM
To: Tim Hildred
Cc: users@ovirt.org
Subject: Re: [Users] Migration of Windows
Here's the log output from running the migration.
2013-07-02 12:08:27,352 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (pool-3-thread-46) [1a2280ef] Running command: MigrateVmCommand internal: false. Entities affected : ID: 25160f61-26f5-40ee-b245-e42c01048586 Type: VM
2013-07-02 12:08:27,553 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-3-thread-46) [1a2280ef] START, MigrateVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586, srcHost=10.28.0.14, dstVdsId=0a25070a-e136-11e2-9b8c-00145e6b5bf5, dstHost=10.28.0.13:54321, migrationMethod=ONLINE), log id: 3cb2f3e1
2013-07-02 12:08:27,580 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-3-thread-46) [1a2280ef] VdsBroker::migrate::Entered (vm_guid=25160f61-26f5-40ee-b245-e42c01048586, srcHost=10.28.0.14, dstHost=10.28.0.13:54321, method=online
2013-07-02 12:08:27,583 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-3-thread-46) [1a2280ef] START, MigrateBrokerVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586, srcHost=10.28.0.14, dstVdsId=0a25070a-e136-11e2-9b8c-00145e6b5bf5, dstHost=10.28.0.13:54321, migrationMethod=ONLINE), log id: 6849a01
2013-07-02 12:08:27,592 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-3-thread-46) [1a2280ef] FINISH, MigrateBrokerVDSCommand, log id: 6849a01
2013-07-02 12:08:27,653 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-3-thread-46) [1a2280ef] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3cb2f3e1
2013-07-02 12:08:28,296 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-67) [6d5bc1a3] vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt2 ignoring it in the refresh until migration is done
2013-07-02 12:08:31,296 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-4) vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt2 ignoring it in the refresh until migration is done
2013-07-02 12:08:33,341 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-49) vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt2 ignoring it in the refresh until migration is done
2013-07-02 12:08:35,386 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-42) vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt2 ignoring it in the refresh until migration is done
2013-07-02 12:08:37,432 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-56) [36e2ebd7] vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt2 ignoring it in the refresh until migration is done
2013-07-02 12:08:40,234 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-69) VM MJR-DC2 25160f61-26f5-40ee-b245-e42c01048586 moved from MigratingFrom --> Up
2013-07-02 12:08:40,235 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-69) adding VM 25160f61-26f5-40ee-b245-e42c01048586 to re-run list
2013-07-02 12:08:40,852 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-69) Rerun vm 25160f61-26f5-40ee-b245-e42c01048586. Called from vds ovirt3
2013-07-02 12:08:40,907 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-46) START, MigrateStatusVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586), log id: 71f2dbdb
2013-07-02 12:08:40,913 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Failed in MigrateStatusVDS method
2013-07-02 12:08:40,914 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Error code migrateErr and error message VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration
2013-07-02 12:08:40,915 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Command org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value
Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 12
mMessage Fatal error during migration
2013-07-02 12:08:40,918 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) HostName = ovirt3
2013-07-02 12:08:40,918 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-46) Command MigrateStatusVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration
2013-07-02 12:08:40,920 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-46) FINISH, MigrateStatusVDSCommand, log id: 71f2dbdb
2013-07-02 12:08:41,552 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (pool-3-thread-46) Running command: MigrateVmCommand internal: false. Entities affected : ID: 25160f61-26f5-40ee-b245-e42c01048586 Type: VM
2013-07-02 12:08:41,688 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-3-thread-46) START, MigrateVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586, srcHost=10.28.0.14, dstVdsId=02ee1572-e135-11e2-b77e-00145e6b5bf5, dstHost=10.28.0.12:54321, migrationMethod=ONLINE), log id: 20d18034
2013-07-02 12:08:41,714 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-3-thread-46) VdsBroker::migrate::Entered (vm_guid=25160f61-26f5-40ee-b245-e42c01048586, srcHost=10.28.0.14, dstHost=10.28.0.12:54321, method=online
2013-07-02 12:08:41,716 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-3-thread-46) START, MigrateBrokerVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586, srcHost=10.28.0.14, dstVdsId=02ee1572-e135-11e2-b77e-00145e6b5bf5, dstHost=10.28.0.12:54321, migrationMethod=ONLINE), log id: 207042b1
2013-07-02 12:08:41,723 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-3-thread-46) FINISH, MigrateBrokerVDSCommand, log id: 207042b1
2013-07-02 12:08:41,753 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-3-thread-46) FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 20d18034
2013-07-02 12:08:43,530 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-52) vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt1 ignoring it in the refresh until migration is done
2013-07-02 12:08:45,595 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-31) vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt1 ignoring it in the refresh until migration is done
2013-07-02 12:08:47,641 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-89) [1fd2878] vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt1 ignoring it in the refresh until migration is done
2013-07-02 12:08:49,723 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-77) [1280cd02] vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt1 ignoring it in the refresh until migration is done
2013-07-02 12:08:52,398 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-83) vds::refreshVmList vm id 25160f61-26f5-40ee-b245-e42c01048586 is migrating to vds ovirt1 ignoring it in the refresh until migration is done
2013-07-02 12:08:54,298 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-76) VM MJR-DC2 25160f61-26f5-40ee-b245-e42c01048586 moved from MigratingFrom --> Up
2013-07-02 12:08:54,299 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-76) adding VM 25160f61-26f5-40ee-b245-e42c01048586 to re-run list
2013-07-02 12:08:54,352 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-76) Rerun vm 25160f61-26f5-40ee-b245-e42c01048586. Called from vds ovirt3
2013-07-02 12:08:54,480 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-46) START, MigrateStatusVDSCommand(HostName = ovirt3, HostId = 2042e434-e137-11e2-9317-00145e6b5bf5, vmId=25160f61-26f5-40ee-b245-e42c01048586), log id: 595d458f
2013-07-02 12:08:54,485 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Failed in MigrateStatusVDS method
2013-07-02 12:08:54,486 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Error code migrateErr and error message VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration
2013-07-02 12:08:54,488 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) Command org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value
Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 12
mMessage Fatal error during migration
2013-07-02 12:08:54,490 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-46) HostName = ovirt3
2013-07-02 12:08:54,491 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-46) Command MigrateStatusVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration
2013-07-02 12:08:54,492 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-46) FINISH, MigrateStatusVDSCommand, log id: 595d458f
2013-07-02 12:08:55,141 INFO [org.ovirt.engine.core.bll.VdsSelector] (pool-3-thread-46) VDS ovirt1 02ee1572-e135-11e2-b77e-00145e6b5bf5 have failed running this VM in the current selection cycle
VDS ovirt2 0a25070a-e136-11e2-9b8c-00145e6b5bf5 have failed running this VM in the current selection cycle
2013-07-02 12:08:55,142 WARN [org.ovirt.engine.core.bll.MigrateVmCommand] (pool-3-thread-46) CanDoAction of action MigrateVm failed. Reasons:ACTION_TYPE_FAILED_VDS_VM_CLUSTER,VAR__ACTION__MIGRATE,VAR__TYPE__VM
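As a side note: the "which host is the VM on now, and what state is it in" question answered by grepping engine.log above can also be asked over the engine's REST API. A rough sketch follows; the engine URL is taken from the NFS path in the vdsm log, while the credentials, the /api base path (older 3.x engines; newer ones use /ovirt-engine/api), and the certificate handling are placeholders for this particular setup:

    # Hedged sketch: query the engine REST API for one VM's status after a
    # failed migration. Credentials and TLS handling are placeholders.
    import requests

    ENGINE = "https://ovirt-engine.mjritsolutions.local"
    AUTH = ("admin@internal", "password")  # placeholder credentials
    VM_ID = "25160f61-26f5-40ee-b245-e42c01048586"

    resp = requests.get(
        "%s/api/vms/%s" % (ENGINE, VM_ID),
        auth=AUTH,
        headers={"Accept": "application/xml"},
        verify=False,  # engine CA not assumed to be installed; quick check only
    )
    resp.raise_for_status()
    print(resp.text)  # XML includes the VM status and the current host reference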
-----Original Message-----
From: Tim Hildred [mailto:thildred@redhat.com]
Sent: Tuesday, July 02, 2013 9:52 AM
To: Rick Ingersoll
Cc: users@ovirt.org
Subject: Re: [Users] Migration of Windows
Hey Rick,
Can you provide /var/log/ovirt-engine/engine.log?
Tim Hildred, RHCE, RHCVA
Content Author II - Engineering Content Services, Red Hat, Inc.
Brisbane, Australia
Email: thildred@redhat.com
Internal: 8588287
Mobile: +61 4 666 25242
IRC: thildred
----- Original Message -----
From: "Rick Ingersoll" <rick.ingersoll@mjritsolutions.com> To: "Rick Ingersoll" <rick.ingersoll@mjritsolutions.com>, users@ovirt.org Sent: Tuesday, July 2, 2013 4:58:09 AM Subject: Re: [Users] Migration of Windows
I’m really struggling with this problem. I have the virtio 1.59 drivers running on the Windows guests. What else do I need to set to get migration of Windows guests working?
From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of Rick Ingersoll
Sent: Sunday, June 30, 2013 11:30 PM
To: users@ovirt.org
Subject: [Users] Migration of Windows
Hi,
I have an oVirt 3.1 build. I have 3 hosts and a few virtual machines set up that I am using for testing. I am using Gluster storage set up as a distributed volume across the 3 hosts. I can migrate Linux guests across my 3 hosts, but I cannot migrate Windows guests. I get “Migration failed due to Error: Fatal error during migration.” The event id is 65. Is there something additional that needs to be done to Windows guests for them to support live migration?
Was this resolved?

Hello, this topic is quite old, but I ran into the same problem recently and couldn't find reliable information, so here is the question: is live migration of Windows guests that use the VirtIO disk interface possible with the latest stable release (3.3.3) and the latest libvirt on EL6?

We are currently using 3.3.1 on EL6 and plan to upgrade to 3.3.3 this week. We can't live-migrate Windows guests that use the VirtIO disk interface and SCSI driver 0.1.59. Furthermore, Windows Server 2012 won't update its drivers from the virtio 0.1-74 ISO because they look the same to it. Windows guests with the IDE interface migrate fine, but the performance is much worse, so that isn't an option. I can provide log files and package versions later, but before I do, I just want to ask whether I am missing something. Thanks.

Jure Kranjc

On 07/16/2013 02:10 PM, Itamar Heim wrote:
Hi,
I have an oVirt 3.1 build. I have 3 hosts and a few virtual machines set up that I am using for testing. I am using Gluster storage set up as a distributed volume across the 3 hosts. I can migrate Linux guests across my 3 hosts, but I cannot migrate Windows guests. I get “Migration failed due to Error: Fatal error during migration.” The event id is 65. Is there something additional that needs to be done to Windows guests for them to support live migration?

On 25-2-2014 16:00, Jure Kranjc wrote:
Hello,
this topic is quite old, but I ran into the same problem recently and couldn't find reliable information. Here is the question:
Is live migration of Windows guests that use the VirtIO disk interface possible with the latest stable release (3.3.3) and the latest libvirt on EL6?
We are currently using 3.3.1 on EL6 and plan to upgrade to 3.3.3 this week. We can't live-migrate Windows guests that use the VirtIO disk interface and SCSI driver 0.1.59. Furthermore, Windows Server 2012 won't update its drivers from the virtio 0.1-74 ISO because they look the same to it. Windows guests with the IDE interface migrate fine, but the performance is much worse, so that isn't an option. I can provide log files and package versions later, but before I do, I just want to ask whether I am missing something. Thanks.
I'm on 3.3.2-1-el6 with Windows 2012 using virtio and can migrate without problems.

Joop
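One way to double-check which bus a running guest's disks actually use (virtio vs. ide, the distinction being discussed here) is to read the live domain XML from libvirt on the host where the guest runs. A small sketch; the VM name is only an example taken from earlier in the thread:

    # Sketch: print each disk's target device and bus (virtio vs. ide) from
    # the live domain XML. Run on the host where the guest is running.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("MJR-DC2")  # example name from earlier in the thread

    root = ET.fromstring(dom.XMLDesc(0))
    for disk in root.findall("./devices/disk"):
        target = disk.find("target")
        if target is not None:
            # e.g. "vda virtio" or "hda ide"
            print(target.get("dev"), target.get("bus"))
    conn.close()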

On 02/25/2014 05:00 PM, Jure Kranjc wrote:
Hello,
this topic is quite old, but I ran into the same problem recently and couldn't find reliable information. Here is the question:
Is live migration of Windows guests that use the VirtIO disk interface possible with the latest stable release (3.3.3) and the latest libvirt on EL6?
We are currently using 3.3.1 on EL6 and plan to upgrade to 3.3.3 this week. We can't live-migrate Windows guests that use the VirtIO disk interface and SCSI driver 0.1.59. Furthermore, Windows Server 2012 won't update its drivers from the virtio 0.1-74 ISO because they look the same to it. Windows guests with the IDE interface migrate fine, but the performance is much worse, so that isn't an option. I can provide log files and package versions later, but before I do, I just want to ask whether I am missing something. Thanks.
This should work.
Jure Kranjc
On 07/16/2013 02:10 PM, Itamar Heim wrote:
Hi,
I have an oVirt 3.1 build. I have 3 hosts and a few virtual machines set up that I am using for testing. I am using Gluster storage set up as a distributed volume across the 3 hosts. I can migrate Linux guests across my 3 hosts, but I cannot migrate Windows guests. I get “Migration failed due to Error: Fatal error during migration.” The event id is 65. Is there something additional that needs to be done to Windows guests for them to support live migration?
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
participants (8)
- Fabian Deutsch
- Itamar Heim
- Joop
- Jure Kranjc
- Omer Frenkel
- Rick Ingersoll
- Tim Hildred
- Winfried de Heiden