<div dir="ltr"><div><div><div><div>Hi, <br><br></div>Yesterday I posted a message, but it was not delivered because of it&#39;s size (big log + a screenshot), so I don&#39;t know if the moderator will accept the message or not.<br><br></div>any way, this is only the text:<br><br><span style="color:rgb(0,0,255)">I redid the target configuration on FC22, I used the same raw file, and I got the same result, no vm.conf file.<br><br>I don&#39;t think it&#39;s a connection problem, iscsiadm shows the session, lvs shows the lvm, so the connection is up.<br><br>Yertserday, before reboot I did save vm.conf somewhere, so I had to copy it back to it&#39;s location and I got the VM engine up.<br><br>but the problem is still there.<br><br>Another
 thing, this time I tried to add hosted_storage, the iscsi login showed 
me the lun, but when I tried to added it, <b>the VM engine crashed</b>.<br><br>After restarting the VM engine, I got the hosted_storage displayed (click on system -&gt; choose Storage tab), <b>but not attached.</b><br><br>If I try to attach it to the default DC, I get this error :<br><br></span><div><span style="color:rgb(0,0,255)">Storage Domain(s) are already attached to a Data 
Center. Approving this operation might cause data corruption if both 
Data Centers are active.</span></div><span style="color:rgb(0,0,255)">- hosted_storage<br><br></span></div><span style="color:rgb(0,0,255)">and if I approve, the VM engine crashes.<br><br></span></div><span style="color:rgb(0,0,255)"><font color="#000000">Regards</font><br></span></div><div class="gmail_extra"><br><div class="gmail_quote">2015-09-03 21:05 GMT+01:00 wodel youchi <span dir="ltr">&lt;<a href="mailto:wodel.youchi@gmail.com" target="_blank">wodel.youchi@gmail.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div>Hi,<br><br></div>I redid the target configuration on FC22, I used the same raw file, and I got the same result, no vm.conf file.<br><br>I don&#39;t think it&#39;s a connection problem, iscsiadm shows the session, lvs shows the lvm, so the connection is up.<br></div><br></div>Yertserday, before reboot I did save vm.conf somewhere, so I had to copy it back to it&#39;s location and I got the VM engine up.<br><br></div>but the problem is still there.<br><br></div>Another thing, this time I tried to add hosted_storage, the iscsi login showed me the lun, but when I tried to added it, the VM engine crashed.<br><br></div>After restarting the VM engine, I got the hosted_storage displayed (click on system -&gt; choose Storage tab), but not attached.<br><br></div>If I try to attach it to the default DC, I get this error (see image):<br><br><div><div><div><div>Storage Domain(s) are already attached to a Data 
Center. Approving this operation might cause data corruption if both 
Data Centers are active.</div><div><div>- hosted_storage<br><br></div><div>PS: joined log files<br></div><div>vdsm<br></div><div>ha agent<br></div><div>engine<br><br></div></div> </div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">2015-09-03 10:39 GMT+01:00 Simone Tiraboschi <span dir="ltr">&lt;<a href="mailto:stirabos@redhat.com" target="_blank">stirabos@redhat.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Sep 3, 2015 at 11:25 AM, wodel youchi <span dir="ltr">&lt;<a href="mailto:wodel.youchi@gmail.com" target="_blank">wodel.youchi@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div>Just to be clear, my test environment is composed of two machines:<br><br></div>1 - one hypervisor just one host<br><br></div>2 - a machine used as a storage, a raw file presented as iscsi device for VM engine storage, and multiple NFS4 shares for the other data domains (data, iso, export).<br><br></div>That&#39;s it.<br><br><br></div>Here is the vdsm log<br><br></div></div></blockquote><div><br></div><div>The first real error is this one:</div><div><br></div><div><div>Thread-49::ERROR::2015-09-03 01:19:29,710::monitor::250::Storage.Monitor::(_monitorDomain) Error monitoring domain 7bd9cad0-151a-4aa4-a7de-15dd64748f17</div><div>Traceback (most recent call last):</div><div>  File &quot;/usr/share/vdsm/storage/monitor.py&quot;, line 246, in _monitorDomain</div><div>    self._performDomainSelftest()</div><div>  File &quot;/usr/lib/python2.7/site-packages/vdsm/utils.py&quot;, line 759, in wrapper</div><div>    value = meth(self, *a, **kw)</div><div>  File &quot;/usr/share/vdsm/storage/monitor.py&quot;, line 313, in _performDomainSelftest</div><div>    self.domain.selftest()</div><div>  File &quot;/usr/share/vdsm/storage/blockSD.py&quot;, line 857, in selftest</div><div>    lvm.chkVG(self.sdUUID)</div><div>  File &quot;/usr/share/vdsm/storage/lvm.py&quot;, line 1006, in chkVG</div><div>    raise se.StorageDomainAccessError(&quot;%s: %s&quot; % (vgName, err))</div><div>StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: (&#39;7bd9cad0-151a-4aa4-a7de-15dd64748f17: [\&#39;  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\&#39;, \&#39;  /dev/mapper/33000000100000001: read failed after 0 of 4096 at 0: Input/output error\&#39;, \&#39;  /dev/mapper/33000000100000001: read failed after 0 of 4096 at 53687025664: Input/output error\&#39;, \&#39;  /dev/mapper/33000000100000001: read failed after 0 of 4096 at 53687083008: Input/output error\&#39;, \&#39;  WARNING: Error counts reached a limit of 3. Device /dev/mapper/33000000100000001 was disabled\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata: read failed after 0 of 4096 at 0: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata: read failed after 0 of 4096 at 536805376: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata: read failed after 0 of 4096 at 536862720: Input/output error\&#39;, \&#39;  WARNING: Error counts reached a limit of 3. 
Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata was disabled\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox: read failed after 0 of 4096 at 0: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox: read failed after 0 of 4096 at 134152192: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox: read failed after 0 of 4096 at 134209536: Input/output error\&#39;, \&#39;  WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox was disabled\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases: read failed after 0 of 4096 at 0: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases: read failed after 0 of 4096 at 2147418112: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases: read failed after 0 of 4096 at 2147475456: Input/output error\&#39;, \&#39;  WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases was disabled\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids: read failed after 0 of 4096 at 0: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids: read failed after 0 of 4096 at 134152192: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids: read failed after 0 of 4096 at 134209536: Input/output error\&#39;, \&#39;  WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids was disabled\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox: read failed after 0 of 4096 at 0: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox: read failed after 0 of 4096 at 134152192: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox: read failed after 0 of 4096 at 134209536: Input/output error\&#39;, \&#39;  WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox was disabled\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master: read failed after 0 of 4096 at 0: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master: read failed after 0 of 4096 at 1073676288: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master: read failed after 0 of 4096 at 1073733632: Input/output error\&#39;, \&#39;  WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master was disabled\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900: read failed after 0 of 4096 at 0: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900: read failed after 0 of 4096 at 42949607424: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900: read failed after 0 of 4096 at 42949664768: Input/output error\&#39;, \&#39;  WARNING: Error counts reached a limit of 3. 
Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900 was disabled\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406: read failed after 0 of 4096 at 0: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406: read failed after 0 of 4096 at 134152192: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406: read failed after 0 of 4096 at 134209536: Input/output error\&#39;, \&#39;  WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406 was disabled\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf: read failed after 0 of 4096 at 0: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf: read failed after 0 of 4096 at 134152192: Input/output error\&#39;, \&#39;  /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf: read failed after 0 of 4096 at 134209536: Input/output error\&#39;, \&#39;  WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf was disabled\&#39;, \&#39;  Volume group &quot;7bd9cad0-151a-4aa4-a7de-15dd64748f17&quot; not found\&#39;, \&#39;  Cannot process volume group 7bd9cad0-151a-4aa4-a7de-15dd64748f17\&#39;]&#39;,)</div><div>Thread-49::INFO::2015-09-03 01:19:29,741::monitor::273::Storage.Monitor::(_notifyStatusChanges) Domain 7bd9cad0-151a-4aa4-a7de-15dd64748f17 became INVALID</div><div>Thread-4067::DEBUG::2015-09-03 01:19:29,741::misc::777::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Emitting event</div></div><div><br></div><div>For some reason your iSCSI connection seams to fails and the hosted-engine-storage domain becomes invalid.</div><div>All the other issue are subsequent.</div><div>Can you please check its configuration and the network status?</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div></div>PS: After rebooting the host, I could not restart the VM engine, the same problem with ha agent, vm.conf file not found.</div></blockquote><div><br></div><div>vm.conf is now on the storage domain but at least you should be able to access it.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div><div><div class="gmail_extra"><div class="gmail_quote">2015-09-03 8:14 GMT+01:00 Simone Tiraboschi <span dir="ltr">&lt;<a href="mailto:stirabos@redhat.com" target="_blank">stirabos@redhat.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><div><div>On Thu, Sep 3, 2015 at 2:07 AM, wodel youchi <span dir="ltr">&lt;<a href="mailto:wodel.youchi@gmail.com" target="_blank">wodel.youchi@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 
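That theory is easy to test from the host, reusing the multipath device name that appears in the log above; read errors from a direct dd would place the failure in the iSCSI/multipath layer rather than in LVM (plain iscsiadm/multipath/dd, nothing oVirt-specific):

    iscsiadm -m session -P 3    # session state, timeouts, attached devices
    multipath -ll               # path state of the mapped LUN
    dd if=/dev/mapper/33000000100000001 of=/dev/null bs=4096 count=1 iflag=direct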
2015-09-03 8:14 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:

On Thu, Sep 3, 2015 at 2:07 AM, wodel youchi <wodel.youchi@gmail.com> wrote:

Hi again,

I had to restart the installation all over, using the freshly pushed new packages.

I had two problems:

1 - engine-setup did not terminate correctly when I chose to use ovirt-vmconsole; it failed with this error:






[ INFO  ] Restarting ovirt-vmconsole proxy service
[ ERROR ] Failed to execute stage 'Closing up': Failed to stop service 'ovirt-vmconsole-proxy-sshd'
[ INFO  ] Stage: Clean up
          Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150903000415-6egi46.log
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001209-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed
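Should it fail the same way again, it is probably worth asking systemd why that unit would not stop before rerunning setup; standard systemctl/journalctl, using the unit name from the error above:

    systemctl status ovirt-vmconsole-proxy-sshd.service
    journalctl -u ovirt-vmconsole-proxy-sshd.service --since today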
So I executed engine-cleanup, which terminated with this error:






[ INFO  ] Clearing Engine database engine
[ ERROR ] Failed to execute stage 'Misc configuration': must be owner of schema pg_catalog
[ INFO  ] Stage: Clean up
          Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-remove-20150903001440-da1u76.log
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001513-cleanup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of cleanup failed
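The "must be owner of schema pg_catalog" message suggests the cleanup tried to run a statement that only a PostgreSQL superuser may run on that schema. Before retrying, the database and role ownership can be inspected with plain psql as the postgres superuser:

    su - postgres -c 'psql -c "\l"'     # databases and their owners
    su - postgres -c 'psql -c "\du"'    # roles and their attributes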


<span style="font-family:monospace"><br><br></span></div><div><span style="color:rgb(0,0,255)"><span style="font-family:monospace">And then, I executed again engine-setup without ovirt-vmconsole<br></span></span></div><div><span style="color:rgb(0,0,255)"><span style="font-family:monospace">This time the setup completed.<br><br></span></span></div><div><span style="color:rgb(0,0,255)"><span style="font-family:monospace">2 - I added a NFS4 storage domain to the default DC (Default), the DC went up, and then I tried to import the hosted-engine storage domain, but without success.<br><br></span></span></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,255)">click on import, choose iscsi, connect to the target, scan, login, but no device chown !!!</span> (iscsi.jpeg)</span></div></div></blockquote><div><br></div></div></div><div>Can you please attach the relevant VDSM logs from the host you used to were using to import that storage domain?</div><div><div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div> </div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><span style="font-family:monospace"></span></div><div><span style="color:rgb(0,0,255)"><span style="font-family:monospace">The only new thing I had, is the disk of the VM engine being shown under disks tab.</span></span><br></div></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">2015-09-02 19:50 GMT+01:00 wodel youchi <span dir="ltr">&lt;<a href="mailto:wodel.youchi@gmail.com" target="_blank">wodel.youchi@gmail.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div>I found this on vdsm log<br><br>






Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('ab1dc1a9-b6e9-4890-8485-1019da2f328f',) {}
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1263::vds::(wrapper) return vmGetStats with {'status': {'message': 'Virtual machine does not exist', 'code': 1}}
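For what it's worth, vdsClient can also list the VMs vdsm currently knows about; if the uuid from the call above is missing from that output, vdsm simply has no such VM defined (assuming the standard vdsClient CLI that ships with vdsm on the host):

    vdsClient -s 0 list table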


<span style="background-color:rgb(255,255,255)"><span></span></span><br><br></div>I really don&#39;t understand anything<br></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">2015-09-02 17:01 GMT+01:00 wodel youchi <span dir="ltr">&lt;<a href="mailto:wodel.youchi@gmail.com" target="_blank">wodel.youchi@gmail.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><span style="color:rgb(0,0,255)">Thanks,<br><br></span></div><span style="color:rgb(0,0,255)">but before that I stuck again with the storage of the VM engine not detected after reboot.<br><br></span></div><span style="color:rgb(0,0,255)">the /rhev is populated, but ovirt-ha-agent crashes with </span><br><br>






MainThread::INFO::2015-09-02 16:12:20,261::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid': 'ab1dc1a9-b6e9-4890-8485-1019da2f328f', 'address': '0'}
MainThread::INFO::2015-09-02 16:12:20,283::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 139994237094736
MainThread::INFO::2015-09-02 16:12:20,702::brokerlink::178::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain) Success, id 139994236985168
MainThread::INFO::2015-09-02 16:12:20,702::hosted_engine::574::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Broker initialized, all submonitors started
MainThread::INFO::2015-09-02 16:12:20,799::hosted_engine::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342)
MainThread::INFO::2015-09-02 16:12:20,800::hosted_engine::401::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Reloading vm.conf from the shared storage domain
MainThread::ERROR::2015-09-02 16:12:20,927::agent::201::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Error: ''Configuration value not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' - trying to restart agent
MainThread::WARNING::2015-09-02 16:12:25,932::agent::204::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Restarting agent, attempt '9'
MainThread::ERROR::2015-09-02 16:12:25,933::agent::206::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Too many errors occurred, giving up. Please review the log and consider filing a bug.
MainThread::INFO::2015-09-02 16:12:25,933::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Agent shutting down
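The path in the error is the local copy the agent maintains, so a first check is whether that file exists at all and actually contains a memSize line (both the path and the key are taken from the error message above):

    ls -l /var/run/ovirt-hosted-engine-ha/vm.conf
    grep memSize /var/run/ovirt-hosted-engine-ha/vm.conf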
I restarted vdsm, the ha agent and the broker, without success.

When I executed:






[root@noveria ~]# hosted-engine --vm-status
You must run deploy first


<span style="font-family:monospace"><br><span style="color:rgb(0,0,255)">I got this </span><br></span><br>






[root@noveria ~]# tree /var/run/vdsm/storage/
/var/run/vdsm/storage/
└── 8b25f3be-7574-4f7a-8851-363129704e52
    ├── 8e49032f-680b-40c2-b422-80d86dc7beda
    │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
    ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
    │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
    ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
    │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
    └── bf3bdae1-7318-4443-a19b-7371de30b982
        └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b


and this:






[root@noveria rhev]# tree
.
└── data-center
    ├── 00000001-0001-0001-0001-000000000221
    └── mnt
        ├── blockSD
        │   └── 8b25f3be-7574-4f7a-8851-363129704e52
        │       ├── dom_md
        │       │   ├── ids -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ids
        │       │   ├── inbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox
        │       │   ├── leases -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/leases
        │       │   ├── master -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/master
        │       │   ├── metadata -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata
        │       │   └── outbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox
        │       ├── ha_agent
        │       │   ├── hosted-engine.lockspace -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │       │   └── hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │       └── images
        │           ├── 8e49032f-680b-40c2-b422-80d86dc7beda
        │           │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │           ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
        │           │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │           ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
        │           │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
        │           └── bf3bdae1-7318-4443-a19b-7371de30b982
        │               └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
        ├── openSuse.wodel.wd:_nvms
        └── _var_lib_ovirt-hosted-engine-setup_tmp2fNoEf
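A quick way to list exactly which of those links are dangling, since GNU find's -xtype l matches symlinks whose target does not resolve:

    find /rhev/data-center -xtype l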


Here I found some dangling symbolic links (target not present), like this one:

hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1

The lvscan command showed that the LV concerned is inactive; is this correct?






[root@noveria ~]# lvscan
File descriptor 9 (/dev/dri/card0) leaked on lvscan invocation. Parent PID 2935: bash
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata' [512,00 MiB] inherit
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox' [128,00 MiB] inherit
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases' [2,00 GiB] inherit
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ids' [128,00 MiB] inherit
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox' [128,00 MiB] inherit
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/master' [1,00 GiB] inherit
  inactive          '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1' [1,00 GiB] inherit
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342' [128,00 MiB] inherit
  inactive          '/dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1' [128,00 MiB] inherit
  inactive          '/dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b' [40,00 GiB] inherit
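An inactive LV has no device node under /dev/<vg>/, which would explain the dangling links above. Purely as a test (vdsm and the ha agent are normally the ones activating these volumes), an LV can be activated by hand, after which the link should resolve:

    lvchange -ay 8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
    lvscan | grep f05762e5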


and this:






[root@noveria ~]# vdsClient -s 0 prepareImage "00000000-0000-0000-0000-000000000000" "8b25f3be-7574-4f7a-8851-363129704e52" "bf3bdae1-7318-4443-a19b-7371de30b982" "cbb10cf0-9600-465e-aed9-412f7157706b"
{'domainID': '8b25f3be-7574-4f7a-8851-363129704e52',
 'imageID': 'bf3bdae1-7318-4443-a19b-7371de30b982',
 'leaseOffset': 112197632,
 'leasePath': '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases',
 'path': '/rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/images/bf3bdae1-7318-4443-a19b-7371de30b982/cbb10cf0-9600-465e-aed9-412f7157706b',
 'volType': 'path',
 'volumeID': 'cbb10cf0-9600-465e-aed9-412f7157706b'}
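As far as I understand it, prepareImage activates the volume and recreates the runtime links, so after a successful call like this the returned path should resolve; if it does not, something is deactivating the volumes again afterwards:

    ls -l /rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/images/bf3bdae1-7318-4443-a19b-7371de30b982/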


and:






[root@noveria ~]# vdsClient -s 0 getStorageDomainInfo 8b25f3be-7574-4f7a-8851-363129704e52
        uuid = 8b25f3be-7574-4f7a-8851-363129704e52
        vguuid = tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m
        state = OK
        version = 3
        role = Regular
        type = ISCSI
        class = Data
        pool = []
        name = hosted_storage








[root@noveria ~]# lvs
File descriptor 9 (/dev/dri/card0) leaked on lvs invocation. Parent PID 3105: bash
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  0f260ab0-3631-4c71-b332-c6c7f67f7342 8b25f3be-7574-4f7a-8851-363129704e52 -wi-ao---- 128,00m
  ae352fab-7477-4376-aa27-04c321b4fbd1 8b25f3be-7574-4f7a-8851-363129704e52 -wi-------   1,00g
  cbb10cf0-9600-465e-aed9-412f7157706b 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a-----  40,00g
  f05762e5-e8cd-45e7-ac19-303c1ade79d1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 128,00m
  ids                                  8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  inbox                                8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  leases                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a-----   2,00g
  master                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a-----   1,00g
  metadata                             8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 512,00m
  outbox                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
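The fifth character of the Attr column is the activation flag, so this is consistent with the lvscan output above: ae352fab... and f05762e5... still show -wi------- (not active, hence no device node), while cbb10cf0... now shows -wi-a-----, presumably activated by the prepareImage call. The same information is available as an explicit lvs field:

    lvs -o lv_name,lv_active 8b25f3be-7574-4f7a-8851-363129704e52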


The VDSM logs don't show me anything:

MainThread::INFO::2015-09-01 23:34:49,551::vdsm::166::vds::(run) <WorkerThread(Thread-4, started daemon 139990108333824)>
MainThread::INFO::2015-09-01 23:34:49,552::vdsm::166::vds::(run) <WorkerThread(Thread-3, started daemon 139990116726528)>
MainThread::INFO::2015-09-02 16:07:49,510::vdsm::156::vds::(run) (PID: 1554) I am the actual vdsm 4.17.3-12.git7288ef7.fc22 noveria.wodel.wd (4.1.6-200.fc22.x86_64)
MainThread::DEBUG::2015-09-02 16:07:49,524::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2015-09-02 16:07:49,524::threadPool::29::Storage.ThreadPool::(__init__) Enter - numThreads: 10, waitTimeout: 3, maxTasks: 500
MainThread::DEBUG::2015-09-02 16:07:49,526::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt mode: None
MainThread::WARNING::2015-09-02 16:07:49,526::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt already exists
MainThread::DEBUG::2015-09-02 16:07:49,564::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) /usr/bin/sudo -n /usr/sbin/lvm dumpconfig global/locking_type (cwd None)
MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::427::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::459::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::460::Storage.HSM::(__cleanStorageRepository) Mount list: []
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::462::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
MainThread::DEBUG::2015-09-02 16:07:49,615::hsm::505::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
storageRefresh::DEBUG::2015-09-02 16:07:49,616::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
MainThread::INFO::2015-09-02 16:07:49,617::dispatcher::46::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
storageRefresh::DEBUG::2015-09-02 16:07:49,620::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
storageRefresh::DEBUG::2015-09-02 16:07:49,792::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
storageRefresh::DEBUG::2015-09-02 16:07:49,793::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
storageRefresh::DEBUG::2015-09-02 16:07:49,793::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
storageRefresh::DEBUG::2015-09-02 16:07:49,924::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
MainThread::DEBUG::2015-09-02 16:07:49,924::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state init -> state preparing
MainThread::INFO::2015-09-02 16:07:49,924::logUtils::48::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback(callbackFunc=<functools.partial object at 0x7fc2f03fa6d8>)
MainThread::INFO::2015-09-02 16:07:49,924::logUtils::51::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback, Return response: None
MainThread::DEBUG::2015-09-02 16:07:49,927::task::1191::Storage.TaskManager.Task::(prepare) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::finished: None
MainThread::DEBUG::2015-09-02 16:07:49,927::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state preparing -> state finished
MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
MainThread::DEBUG::2015-09-02 16:07:49,928::task::993::Storage.TaskManager.Task::(_decref) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::ref 0 aborting False
MainThread::INFO::2015-09-02 16:07:49,928::momIF::46::MOM::(__init__) Preparing MOM interface
MainThread::INFO::2015-09-02 16:07:49,929::momIF::55::MOM::(__init__) Using named unix socket /var/run/vdsm/mom-vdsm.sock
MainThread::INFO::2015-09-02 16:07:49,929::secret::90::root::(clear) Unregistering all secrests
MainThread::DEBUG::2015-09-02 16:07:49,929::libvirtconnection::160::root::(get) trying to connect libvirt
MainThread::INFO::2015-09-02 16:07:49,933::vmchannels::196::vds::(settimeout) Setting channels' timeout to 30 seconds.
VM Channels Listener::DEBUG::2015-09-02 16:07:49,934::vmchannels::178::vds::(run) Starting VM channels listener thread.
MainThread::INFO::2015-09-02 16:07:49,935::protocoldetector::172::vds.MultiProtocolAcceptor::(__init__) Listening at 0.0.0.0:54321
MainThread::DEBUG::2015-09-02 16:07:50,063::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <rpc.bindingxmlrpc.XmlDetector instance at 0x7fc2f00dc440>
storageRefresh::DEBUG::2015-09-02 16:07:50,063::misc::743::Storage.SamplingMethod::(__call__) Returning last result
storageRefresh::DEBUG::2015-09-02 16:07:50,080::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)
storageRefresh::DEBUG::2015-09-02 16:07:50,081::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
storageRefresh::DEBUG::2015-09-02 16:07:50,081::hba::56::Storage.HBA::(rescan) Starting scan
storageRefresh::DEBUG::2015-09-02 16:07:50,081::supervdsm::76::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm
MainThread::DEBUG::2015-09-02 16:07:50,157::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <yajsonrpc.stompreactor.StompDetector instance at 0x7fc2e01332d8>
BindingXMLRPC::INFO::2015-09-02 16:07:50,158::bindingxmlrpc::62::vds::(threaded_start) XMLRPC server running
MainThread::DEBUG::2015-09-02 16:07:50,158::schedule::98::Scheduler::(start) Starting scheduler periodic-sched
periodic-sched::DEBUG::2015-09-02 16:07:50,159::schedule::142::Scheduler::(_run) started
MainThread::DEBUG::2015-09-02 16:07:50,159::executor::69::Executor::(start) Starting executor
MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/0
periodic/0::DEBUG::2015-09-02 16:07:50,159::executor::171::Executor::(_run) Worker started
MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/1
periodic/1::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started
MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/2
periodic/2::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started
MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/3
periodic/3::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started
MainThread::DEBUG::2015-09-02 16:07:50,160::libvirtconnection::160::root::(get) trying to connect libvirt
MainThread::DEBUG::2015-09-02 16:07:50,163::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.UpdateVolumes'>)
MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.NumaInfoMonitor'>)
MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.BlockjobMonitor'>)
MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation <virt.sampling.VMBulkSampler object at 0x7fc2e0151d10>
MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.DriveWatermarkMonitor'>)
storageRefresh::DEBUG::2015-09-02 16:07:50,167::hba::62::Storage.HBA::(rescan) Scan finished
storageRefresh::DEBUG::2015-09-02 16:07:50,167::misc::743::Storage.SamplingMethod::(__call__) Returning last result
storageRefresh::DEBUG::2015-09-02 16:07:50,167::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
storageRefresh::DEBUG::2015-09-02 16:07:50,513::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
storageRefresh::DEBUG::2015-09-02 16:07:50,513::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None)
Reactor thread::INFO::2015-09-02 16:07:50,590::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56311
Reactor thread::DEBUG::2015-09-02 16:07:50,596::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-02 16:07:50,596::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56311
Reactor thread::DEBUG::2015-09-02 16:07:50,596::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56311)
BindingXMLRPC::INFO::2015-09-02 16:07:50,596::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56311
Thread-13::INFO::2015-09-02 16:07:50,597::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 started
Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}}
Thread-13::INFO::2015-09-02 16:07:50,599::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 stopped
Reactor thread::INFO::2015-09-02 16:07:51,607::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56312
Reactor thread::DEBUG::2015-09-02 16:07:51,613::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-02 16:07:51,613::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56312
Reactor thread::DEBUG::2015-09-02 16:07:51,613::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56312)
BindingXMLRPC::INFO::2015-09-02 16:07:51,613::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56312
Thread-14::INFO::2015-09-02 16:07:51,613::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 started
Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}}
Thread-14::INFO::2015-09-02 16:07:51,615::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 stopped
storageRefresh::DEBUG::2015-09-02 16:07:51,924::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
storageRefresh::DEBUG::2015-09-02 16:07:51,926::misc::743::Storage.SamplingMethod::(__call__) Returning last result
storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
storageRefresh::DEBUG::2015-09-02 16:07:51,927::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size (cwd None)
storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None)
storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
storageRefresh::DEBUG::2015-09-02 16:07:52,406::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags (cwd None)
storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
storageRefresh::DEBUG::2015-09-02 16:07:52,459::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None)
storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
storageRefresh::DEBUG::2015-09-02 16:07:52,491::hsm::373::Storage.HSM::(storageRefresh) HSM is ready
Reactor thread::INFO::2015-09-02 16:07:52,624::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56313
Reactor thread::DEBUG::2015-09-02 16:07:52,629::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-02 16:07:52,629::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56313
Reactor thread::DEBUG::2015-09-02 16:07:52,629::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56313)
BindingXMLRPC::INFO::2015-09-02 16:07:52,629::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56313
Thread-15::INFO::2015-09-02 16:07:52,630::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 started
Thread-15::DEBUG::2015-09-02 16:07:52,630::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-15::DEBUG::2015-09-02 16:07:52,719::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'System Product Name', 'systemSerialNumber': 'System Serial Number', 'systemFamily': 'To be filled by O.E.M.', 'systemVersion': 'System Version', 'systemUUID': '267A6B80-D7DA-11DD-81CF-C860009B3CD9', 'systemManufacturer': 'System manufacturer'}}
Thread-15::INFO::2015-09-02 16:07:52,721::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 stopped
Reactor thread::INFO::2015-09-02 16:07:52,730::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56314
Reactor thread::DEBUG::2015-09-02 16:07:52,735::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-02 16:07:52,735::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56314
Reactor thread::DEBUG::2015-09-02 16:07:52,735::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56314)
BindingXMLRPC::INFO::2015-09-02 16:07:52,735::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56314
Thread-16::INFO::2015-09-02 16:07:52,735::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 started
Thread-16::DEBUG::2015-09-02 16:07:52,736::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-16::DEBUG::2015-09-02 16:07:52,736::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state init -> state preparing
Thread-16::INFO::2015-09-02 16:07:52,736::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=3, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b', 'connection': '192.168.1.50', 'iqn': 'iqn.2015-08.openSuse.wodel:target00', 'portal': '1', 'user': 'iscsiuser', 'password': '********', 'port': '3260'}], options=None)
Thread-16::DEBUG::2015-09-02 16:07:52,737::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 --op=new (cwd None)
Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.authmethod -v '****' --op=update (cwd None)
Thread-16::DEBUG::2015-09-02 16:07:52,811::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-16::DEBUG::2015-09-02 16:07:52,812::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.username -v '****' --op=update (cwd None)
Thread-16::DEBUG::2015-09-02 16:07:52,846::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-16::DEBUG::2015-09-02 16:07:52,847::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.password -v '****' --op=update (cwd None)
Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m iface -I default (cwd None)
Thread-16::DEBUG::2015-09-02 16:07:52,905::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-16::INFO::2015-09-02 16:07:52,905::iscsi::564::Storage.ISCSI::(setRpFilterIfNeeded) iSCSI iface.net_ifacename not provided. Skipping.
Thread-16::DEBUG::2015-09-02 16:07:52,906::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -l (cwd None)
Thread-16::DEBUG::2015-09-02 16:07:53,027::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-16::DEBUG::2015-09-02 16:07:53,028::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.startup -v manual --op=update (cwd None)
Thread-16::DEBUG::2015-09-02 16:07:53,088::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-16::DEBUG::2015-09-02 16:07:53,088::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None)
Thread-16::DEBUG::2015-09-02 16:07:53,182::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)
Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-16::DEBUG::2015-09-02 16:07:53,229::hba::56::Storage.HBA::(rescan) Starting scan
Thread-16::DEBUG::2015-09-02 16:07:53,300::hba::62::Storage.HBA::(rescan) Scan finished
Thread-16::DEBUG::2015-09-02 16:07:53,300::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-16::DEBUG::2015-09-02 16:07:53,300::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
Thread-16::DEBUG::2015-09-02 16:07:53,435::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-16::DEBUG::2015-09-02 16:07:53,435::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None)
Thread-16::DEBUG::2015-09-02 16:07:53,919::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-16::DEBUG::2015-09-02 16:07:53,922::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-16::DEBUG::2015-09-02 16:07:53,923::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None)
Thread-16::DEBUG::2015-09-02 16:07:54,058::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled.
Restart lvmetad before enabling it!\n&#39;; &lt;rc&gt; = 0<br>Thread-16::DEBUG::2015-09-02 16:07:54,059::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation &#39;lvm reload operation&#39; released the operation mutex<br>Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2418::Storage.HSM::(__prefetchDomains) Found SD uuids: (&#39;8b25f3be-7574-4f7a-8851-363129704e52&#39;,)<br>Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2478::Storage.HSM::(connectStorageServer) knownSDs: {8b25f3be-7574-4f7a-8851-363129704e52: storage.blockSD.findDomain}<br>Thread-16::INFO::2015-09-02 16:07:54,059::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {&#39;statuslist&#39;: [{&#39;status&#39;: 0, &#39;id&#39;: &#39;57bc98c0-560f-4e61-9d86-df92ad468d3b&#39;}]}<br>Thread-16::DEBUG::2015-09-02 16:07:54,059::task::1191::Storage.TaskManager.Task::(prepare) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::finished: {&#39;statuslist&#39;: [{&#39;status&#39;: 0, &#39;id&#39;: &#39;57bc98c0-560f-4e61-9d86-df92ad468d3b&#39;}]}<br>Thread-16::DEBUG::2015-09-02 16:07:54,059::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state preparing -&gt; state finished<br>Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}<br>Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}<br>Thread-16::DEBUG::2015-09-02 16:07:54,060::task::993::Storage.TaskManager.Task::(_decref) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::ref 0 aborting False<br>Thread-16::INFO::2015-09-02 16:07:54,062::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for <a href="http://127.0.0.1:56314" target="_blank">127.0.0.1:56314</a> stopped<br>Reactor thread::INFO::2015-09-02 16:07:54,070::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from <a href="http://127.0.0.1:56316" target="_blank">127.0.0.1:56316</a><br>Reactor thread::DEBUG::2015-09-02 16:07:54,075::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11<br>Reactor thread::INFO::2015-09-02 16:07:54,076::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from <a href="http://127.0.0.1:56316" target="_blank">127.0.0.1:56316</a><br>Reactor thread::DEBUG::2015-09-02 16:07:54,076::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from (&#39;127.0.0.1&#39;, 56316)<br>BindingXMLRPC::INFO::2015-09-02 16:07:54,076::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for <a href="http://127.0.0.1:56316" target="_blank">127.0.0.1:56316</a><br>Thread-17::INFO::2015-09-02 16:07:54,076::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for <a href="http://127.0.0.1:56316" target="_blank">127.0.0.1:56316</a> started<br>Thread-17::DEBUG::2015-09-02 16:07:54,077::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]<br>Thread-17::DEBUG::2015-09-02 16:07:54,077::task::595::Storage.TaskManager.Task::(_updateState) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::moving from state init -&gt; state preparing<br>Thread-17::INFO::2015-09-02 16:07:54,077::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID=&#39;8b25f3be-7574-4f7a-8851-363129704e52&#39;, spUUID=&#39;00000000-0000-0000-0000-000000000000&#39;, 
imgUUID=&#39;bf3bdae1-7318-4443-a19b-7371de30b982&#39;, leafUUID=&#39;cbb10cf0-9600-465e-aed9-412f7157706b&#39;)<br>Thread-17::DEBUG::2015-09-02 16:07:54,077::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Request was made in &#39;/usr/share/vdsm/storage/hsm.py&#39; line &#39;3194&#39; at &#39;prepareImage&#39;<br>Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource &#39;Storage.8b25f3be-7574-4f7a-8851-363129704e52&#39; for lock type &#39;shared&#39;<br>Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::601::Storage.ResourceManager::(registerResource) Resource &#39;Storage.8b25f3be-7574-4f7a-8851-363129704e52&#39; is free. Now locking as &#39;shared&#39; (1 active user)<br>Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Granted request<br>Thread-17::DEBUG::2015-09-02 16:07:54,078::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::_resourcesAcquired: Storage.8b25f3be-7574-4f7a-8851-363129704e52 (shared)<br>Thread-17::DEBUG::2015-09-02 16:07:54,078::task::993::Storage.TaskManager.Task::(_decref) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::ref 1 aborting False<br>Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)<br>Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method<br>Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)<br>Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method<br>Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds<br>Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)<br>Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::743::Storage.SamplingMethod::(__call__) Returning last result<br>Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)<br>Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method<br>Thread-17::DEBUG::2015-09-02 16:07:54,130::hba::56::Storage.HBA::(rescan) Starting scan<br>Thread-17::DEBUG::2015-09-02 16:07:54,197::hba::62::Storage.HBA::(rescan) Scan finished<br>Thread-17::DEBUG::2015-09-02 16:07:54,197::misc::743::Storage.SamplingMethod::(__call__) Returning last result<br>Thread-17::DEBUG::2015-09-02 16:07:54,197::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)<br>Thread-17::DEBUG::2015-09-02 16:07:54,298::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: &lt;err&gt; = &#39;&#39;; &lt;rc&gt; = 0<br>Thread-17::DEBUG::2015-09-02 16:07:54,299::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None)<br>Thread-17::DEBUG::2015-09-02 16:07:54,307::utils::679::root::(execCmd) SUCCESS: &lt;err&gt; = &#39;&#39;; &lt;rc&gt; = 0<br>Thread-17::DEBUG::2015-09-02 
16:07:54,309::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation &#39;lvm invalidate operation&#39; got the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation &#39;lvm invalidate operation&#39; released the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation &#39;lvm invalidate operation&#39; got the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation &#39;lvm invalidate operation&#39; released the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation &#39;lvm invalidate operation&#39; got the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation &#39;lvm invalidate operation&#39; released the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,310::misc::743::Storage.SamplingMethod::(__call__) Returning last result<br>Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation &#39;lvm reload operation&#39; got the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,312::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config &#39; devices { preferred_names = [&quot;^/dev/mapper/&quot;] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ &#39;\&#39;&#39;a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|&#39;\&#39;&#39;, &#39;\&#39;&#39;r|.*|&#39;\&#39;&#39; ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } &#39; --noheadings --units b --nosuffix --separator &#39;|&#39; --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None)<br>Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: &lt;err&gt; = &#39;  WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n&#39;; &lt;rc&gt; = 0<br>Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation &#39;lvm reload operation&#39; released the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,479::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with LvMetadataRW backend<br>Thread-17::DEBUG::2015-09-02 16:07:54,479::blockSD::337::Storage.Misc.excCmd::(readlines) /usr/bin/dd iflag=direct skip=0 bs=2048 if=/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata count=1 (cwd None)<br>Thread-17::DEBUG::2015-09-02 16:07:54,553::blockSD::337::Storage.Misc.excCmd::(readlines) SUCCESS: &lt;err&gt; = &#39;1+0 records in\n1+0 records out\n2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s\n&#39;; &lt;rc&gt; = 0<br>Thread-17::DEBUG::2015-09-02 16:07:54,553::misc::260::Storage.Misc::(validateDDBytes) err: [&#39;1+0 records in&#39;, &#39;1+0 records out&#39;, &#39;2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s&#39;], size: 2048<br>Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::234::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=[]<br>Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::252::Storage.PersistentDict::(refresh) Empty metadata<br>Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with VGTagMetadataRW backend<br>Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation &#39;lvm invalidate operation&#39; got the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation &#39;lvm invalidate operation&#39; released the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation &#39;lvm invalidate operation&#39; got the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation &#39;lvm invalidate operation&#39; released the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation &#39;lvm reload operation&#39; got the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config &#39; devices { preferred_names = [&quot;^/dev/mapper/&quot;] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ &#39;\&#39;&#39;a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|&#39;\&#39;&#39;, &#39;\&#39;&#39;r|.*|&#39;\&#39;&#39; ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } &#39; --noheadings --units b --nosuffix --separator &#39;|&#39; --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None)<br>Thread-17::DEBUG::2015-09-02 16:07:54,685::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: &lt;err&gt; = &#39;  WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n&#39;; &lt;rc&gt; = 0<br>Thread-17::DEBUG::2015-09-02 16:07:54,686::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation &#39;lvm reload operation&#39; released the operation mutex<br>Thread-17::DEBUG::2015-09-02 16:07:54,686::persistentDict::234::Storage.PersistentDict::(refresh) read lines (VGTagMetadataRW)=[&#39;CLASS=Data&#39;, &#39;DESCRIPTION=hosted_storage&#39;, &#39;IOOPTIMEOUTSEC=10&#39;, &#39;LEASERETRIES=3&#39;, &#39;LEASETIMESEC=60&#39;, &#39;LOCKPOLICY=&#39;, &#39;LOCKRENEWALINTERVALSEC=5&#39;, &#39;LOGBLKSIZE=512&#39;, &#39;PHYBLKSIZE=4096&#39;, &#39;POOL_UUID=&#39;, u&#39;PV0=pv:33000000100000001,uuid:kTaQQh-4LCD-OghQ-cP5D-R7MM-aj6e-kTdQf0,pestart:0,pecount:397,mapoffset:0&#39;, &#39;ROLE=Regular&#39;, &#39;SDUUID=8b25f3be-7574-4f7a-8851-363129704e52&#39;, &#39;TYPE=ISCSI&#39;, &#39;VERSION=3&#39;, &#39;VGUUID=tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m&#39;, &#39;_SHA_CKSUM=4a100ce5195650f43971d849835a8b3d8c0343da&#39;]<br>Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace &#39;8b25f3be-7574-4f7a-8851-363129704e52_imageNS&#39;<br>Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace &#39;8b25f3be-7574-4f7a-8851-363129704e52_volumeNS&#39;<br>Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace &#39;8b25f3be-7574-4f7a-8851-363129704e52_lvmActivationNS&#39;<br>Thread-17::DEBUG::2015-09-02 16:07:54,687::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation &#39;lvm reload operation&#39; got the operation mutex<br><br></div><div><span style="color:rgb(0,0,255)">What should I do to bring the VM engine back? </span><br></div>
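In the meantime, the session and device state that the log shows vdsm setting up can be re-checked by hand. A minimal sketch, using the portal, IQN and multipath device names that appear in the log above:

    # Is the iSCSI session still logged in?
    iscsiadm -m session -P 1
    # If not, log in again against the same target
    iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -p 192.168.1.50:3260 -l
    # Is the LUN still exposed through multipath, and is it readable?
    multipath -ll
    dd if=/dev/mapper/33000000100000001 of=/dev/null iflag=direct bs=4096 count=1

If that dd read fails with an I/O error, the problem is below vdsm, on the target side.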


2015-09-02 16:24 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:

On Wed, Sep 2, 2015 at 10:49 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
> I will try this this afternoon, but just to clarify something:
> the hosted-engine setup creates its own DC, the hosted_DC, which
> contains the hosted-engine storage domain. Am I correct?

No, ovirt-hosted-engine-setup doesn't create a special datacenter. The default is to add the host to the Default datacenter in the default cluster.
You could choose a different one from ovirt-hosted-engine-setup; simply import the hosted-engine storage domain into the datacenter of the cluster you selected.

In setup there is a question like this:
  Local storage datacenter name is an internal name
  and currently will not be shown in engine's admin UI.
  Please enter local datacenter name
which asks about the 'Local storage datacenter'; that is basically the description we were using for the storage pool.

> If yes, where should I import the hosted-engine storage domain: into the default DC?

2015-09-02 8:47 GMT+01:00 Roy Golan <rgolan@redhat.com>:

On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
> I could finally complete the installation, but there is still no engine VM in the webui.
> I added a data domain, the default DC is up, but no engine VM.

Good. Now you need to import the HostedEngine storage domain: go to Storage -> Import Domain and enter the path to the domain you used in the hosted-engine setup.
After the domain is imported, the engine will be imported automatically.
This whole process will become automatic eventually (the patch is currently being written).

2015-09-01 21:22 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:

Is something mounted on /rhev/data-center/mnt? I'm not sure.
There were directories, and under these directories other directories (dom_md, ha_agent, images), and under them symbolic links (ids, inbox, leases, etc.) to devices under /dev; the devices pointed to the LVM volumes created by the setup.
But the mount command didn't show anything, unlike NFS: with NFS the mount and df commands did show the engine VM's mount point.

2015-09-01 20:16 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:

On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
> Hi,
> After removing the -x from the sql files, the installation terminated successfully, but...
> I had a problem with vdsm, an error about permission denied with the KVM module, so I restarted the machine.
> After the reboot the ovirt-ha-agent service stops, complaining that the vm.conf file is not present in /var/run/ovirt-hosted-engine-ha.
> And the mount command doesn't show any iSCSI mount; the disk is detected via fdisk -l,
> and the lvs command returns all the logical volumes created.
> I think it's a mount problem, but since there are many LVs, I don't know how to mount them manually.

Do you have something mounted under /rhev/data-center/mnt?
If not, you probably hit this bug: https://bugzilla.redhat.com/1258465
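A quick way to answer that by hand is sketched below; the blockSD subdirectory is the usual vdsm layout for block domains, so treat the exact path as an assumption to verify:

    # Anything mounted under vdsm's mount root?
    grep rhev /proc/mounts
    ls -l /rhev/data-center/mnt/
    # A block (iSCSI) domain never shows up in mount output; vdsm is
    # expected to populate a symlink tree instead, along the lines of
    # /rhev/data-center/mnt/blockSD/<sdUUID>/{dom_md,ha_agent,images}
    ls -l /rhev/data-center/mnt/blockSD/ 2>/dev/null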






LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
3b894e23-429d-43bf-b6cd-6427a387799a 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m
be78c0fd-52bf-445a-9555-64061029c2d9 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-----   1,00g
c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-----  40,00g
feede664-5754-4ca2-aeb3-af7aff32ed42 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
ids                                  5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m
inbox                                5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
leases                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-----   2,00g
master                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-----   1,00g
metadata                             5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 512,00m
outbox                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
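For inspecting those volumes by hand, a minimal sketch using the VG name from the listing above (read-only poking around, not a fix):

    # Show the hosted-engine VG and its logical volumes
    vgs 5445bbee-bb3a-4e6d-9614-a0c9378fe078
    lvs -o lv_name,lv_size,lv_attr 5445bbee-bb3a-4e6d-9614-a0c9378fe078
    # Activate one LV and read its first block; these LVs carry raw
    # volumes and lease/metadata areas, so they are read with dd
    # rather than mounted like an NFS domain
    lvchange -ay 5445bbee-bb3a-4e6d-9614-a0c9378fe078/metadata
    dd if=/dev/5445bbee-bb3a-4e6d-9614-a0c9378fe078/metadata iflag=direct bs=2048 count=1 2>/dev/null | strings | head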


2015-09-01 16:57 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:

On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi <wodel.youchi@gmail.com> wrote:

Hi again,

I tried with the snapshot repository, but I get this error while executing engine-setup:

[ INFO  ] Creating/refreshing Engine database schema
[ ERROR ] Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
[ INFO  ] DNF Performing DNF transaction rollback
[ INFO  ] Rolling back database schema
[ INFO  ] Clearing Engine database engine
[ ERROR ] Engine database rollback failed: must be owner of schema pg_catalog
[ INFO  ] Stage: Clean up
          Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed

and in the deployment log I have these errors:

Saving custom users permissions on database objects...
upgrade script detected a change in Config, View or Stored Procedure...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/pre_upgrade/0040_extend_installed_by_column.sql'...

2015-09-01 15:39:35 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:941 execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log', '-c', 'apply'] stderr:
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 1: /bin: is a directory
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 2: DATABASE: command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 4: This: command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 5: The: command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 6: Add: command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 7: syntax error near unexpected token '('
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 7: `    Update section (w/o overriding current value)'
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 1: /bin: is a directory
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 2: Currently: command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 3: This: command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 4: This: command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 5: So,: command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 6: Since: command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 7: bin/: is a directory
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 9: update: command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 10: syntax error near unexpected token '('
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 10: `and exists(select 1 from schema_version where version = '03010250' and current = true);'
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: line 1: --: command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: line 2: syntax error near unexpected token '('
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: line 2: `CREATE FUNCTION __temp__0030_add_materialized_views_table()'
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: line 1: --: command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: line 2: syntax error near unexpected token '('
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: line 2: `select fn_db_add_column('materialized_views', 'min_refresh_rate_in_sec', 'int default 0');'
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: line 1: syntax error near unexpected token '('
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: line 1: `ALTER TABLE schema_version ALTER COLUMN installed_by TYPE varchar(63);'
2015-09-01 15:39:35 DEBUG otopi.context context._executeMethod:156 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", line 291, in _misc
    oenginecons.EngineDBEnv.PGPASS_FILE
  File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 946, in execute
    command=args[0],
RuntimeError: Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
2015-09-01 15:39:35 ERROR otopi.context context._executeMethod:165 Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
2015-09-01 15:39:35 DEBUG otopi.transaction transaction.abort:134 aborting 'DNF Transaction'
2015-09-01 15:39:35 DEBUG otopi.plugins.otopi.packagers.dnfpackager dnfpackager.verbose:90 DNF Closing transaction with rollback
2015-09-01 15:39:35 INFO otopi.plugins.otopi.packagers.dnfpackager dnfpackager.info:94 DNF Performing DNF transaction rollback

It was an issue with package building: all the sql files were executable, hence the failure.
We fixed it, and tomorrow's build should be OK. If you prefer to continue right now, simply remove the x attribute recursively from each sql file under /usr/share/ovirt-engine/dbscripts.
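One way to do that, as a sketch of Simone's suggestion (run as root, then re-run engine-setup):

    # Recursively drop the executable bit from the packaged SQL scripts
    find /usr/share/ovirt-engine/dbscripts -type f -name '*.sql' -exec chmod a-x {} +
    # Sanity check: this should print nothing afterwards
    find /usr/share/ovirt-engine/dbscripts -type f -name '*.sql' -perm /111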
2015-09-01 13:04 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:

On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David <didi@redhat.com> wrote:

On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
> Hi,
>
> I am using the repo of the 3.6 version
> (http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/)
>
> I installed ovirt-hosted-engine-setup with its dependencies, and the
> ovirt-hosted-engine-ha package is one of them.
>
> Correction: the problem with this version,
> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm,
> is that after the installation is done, the ovirt-ha-agent service crashes
> after being started; see the bug:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1254745
>
> A new version was produced; I downloaded it manually a few days ago, this is
> it:
> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm
>
> This one did correct the problem, but it is no longer present on the
> repository.

Was this on ovirt-3.6-pre?

ovirt-3.6-snapshot has a newer version.
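For reference, a repo file along these lines should pull in the snapshot builds; the file name and the baseurl are assumptions, extrapolated from the ovirt-3.6-pre URL above, so double-check them before use:

    # /etc/yum.repos.d/ovirt-3.6-snapshot.repo  (hypothetical file name)
    [ovirt-3.6-snapshot]
    name=oVirt 3.6 nightly snapshot
    # baseurl assumed by analogy with the ovirt-3.6-pre repo quoted above
    baseurl=http://plain.resources.ovirt.org/pub/ovirt-3.6-snapshot/rpm/fc22/
    enabled=1
    gpgcheck=0

With that in place, dnf update ovirt-hosted-engine-ha should offer the newer build.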
>
> For Simone: yes, I did add an NFS4 data domain, but no success so far; no
> engine VM present.
>
> Regards.
>
> 2015-09-01 11:15 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
>>
>>
>> On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David <didi@redhat.com>
>> wrote:
>>>
>>> On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com>
>>> wrote:
>>> > Hi,
>>> >
>>> > Another test of ovirt hosted-engine on FC22 using ovirt 3.6 Beta3.
>>> > The engine VM is also FC22.
>>> >
>>> > Problem:
>>> > - No engine VM in the webui.
>>>
>>> This is still not supported, see/follow [1].
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1224889
>>
>>
>> ? :-)

Sorry :-(

https://bugzilla.redhat.com/show_bug.cgi?id=1160094

This is just about editing the VM from the web GUI, but in order to edit the engine VM you should at least be able to find the engine VM in the engine, as was the case in 3.5.

I'll try to reproduce it while verifying another patch.

I see that all the patches there are merged, but the bug is in POST.

>>
>> Did you also try adding an additional storage domain for regular VMs?
>> The engine VM will be shown in the engine only when you add at least one
>> additional storage domain for regular VMs and the whole datacenter goes up:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1
>>
>>
>>>
>>> > Test environment
>>> > Just two machines:
>>> >
>>> > 1 - Machine 1 used as storage:
>>> >    - iscsi target with a raw file for the engine VM storage
>>> >    - NFS4 for the other data domains
>>> >
>>> > 2 - Machine 2 used as hypervisor
>>> >
>>> >
>>> > The installation went without problems, but as always, the engine VM
>>> > is not present on the webui.
>>> >
>>> >
>>> > PS:
>>> > 1- I gave the engine VM just 2 GB of memory, since I don't have much
>>> > RAM on the hypervisor; could that be the cause of the problem?
>>>
>>> Shouldn't be related.
>>>
>>> > 2- This version of the
>>> > ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm
>>> > package was causing the ovirt-ha-agent to crash; it was replaced with
>>> > another one which I still have,
>>> > ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm,
>>> > but that one is not present on the repository, so I had to update the
>>> > package manually at the end of the ovirt-hosted-engine-setup
>>> > installation.
>>>
>>> Not sure I follow.
>>>
>>> What exact repo was used?
>>>
>>> hosted-engine --deploy does not update/install packages for you (as
>>> engine-setup does);
>>> it's up to you to make sure what you want/need is installed prior to
>>> running it.
>>>
>>> Best,
>>> --
>>> Didi
>>> _______________________________________________
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>

--
Didi