<div dir="ltr">Thanks for the help thus far.  Storage could be related but all other VMs on same storage are running ok.  The storage is mounted via NFS from within one of the three hosts, I realize this is not ideal.  This was setup by a previous admin more as a proof of concept and VMs were put on there that should not have been placed in a proof of concept environment.. it was intended to be rebuilt with proper storage down the road.  <div><br></div><div>So the storage is on HOST0 and the other hosts mount NFS</div><div><br></div><div><div>cultivar0.grove.silverorange.c<wbr>om:/exports/data          4861742080 1039352832 3822389248  22% /rhev/data-center/mnt/cultivar<wbr>0.grove.silverorange.com:_<wbr>exports_data</div><div>cultivar0.grove.silverorange.c<wbr>om:/exports/iso           4861742080 1039352832 3822389248  22% /rhev/data-center/mnt/cultivar<wbr>0.grove.silverorange.com:_<wbr>exports_iso</div><div>cultivar0.grove.silverorange.c<wbr>om:/exports/import_export 4861742080 1039352832 3822389248  22% /rhev/data-center/mnt/cultivar<wbr>0.grove.silverorange.com:_<wbr>exports_import__export</div></div><div>cultivar0.grove.silverorange.c<wbr>om:/exports/hosted_engine 4861742080 1039352832 3822389248  22% /rhev/data-center/mnt/cultivar<wbr>0.grove.silverorange.com:_<wbr>exports_hosted__engine<br></div><div><br></div><div>Like I said, the VM data storage itself seems to be working ok, as all other VMs appear to be running. </div><div><br></div><div>I&#39;m curious why the broker log says this file is not found when it is correct and I can see the file at that path:</div><div><br></div><div>RequestError: failed to read metadata: [Errno 2] No such file or directory: &#39;/var/run/vdsm/storage/248f46f<wbr>0-d793-4581-9810-c9d965e2f286/<wbr>14a20941-1b84-4b82-be8f-<wbr>ace38d7c037a/8582bdfc-ef54-<wbr>47af-9f1e-f5b7ec1f1cf8&#39;<br></div><div><br></div><div><div> ls -al /var/run/vdsm/storage/248f46f0<wbr>-d793-4581-9810-c9d965e2f286/<wbr>14a20941-1b84-4b82-be8f-<wbr>ace38d7c037a/8582bdfc-ef54-<wbr>47af-9f1e-f5b7ec1f1cf8</div><div>-rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59 /var/run/vdsm/storage/248f46f0<wbr>-d793-4581-9810-c9d965e2f286/<wbr>14a20941-1b84-4b82-be8f-<wbr>ace38d7c037a/8582bdfc-ef54-<wbr>47af-9f1e-f5b7ec1f1cf8</div></div><div><br></div><div>Is this due to the symlink problem you guys are referring to that was addressed in RC1 or something else?  Could there possibly be a permissions problem somewhere?</div><div><br></div><div>Assuming that all three hosts have 4.2 rpms installed and the host engine will not start is it safe for me to update hosts to 4.2 RC1 rpms?   Or perhaps install that repo and *only* update the ovirt HA packages?   Assuming that I cannot yet apply the same updates to the inaccessible hosted engine VM. </div><div><br></div><div>I should also mention one more thing.  I originally upgraded the engine VM first using new RPMS then engine-setup.  It failed due to not being in global maintenance, so I set global maintenance and ran it again, which appeared to complete as intended but never came back up after.  Just in case this might have anything at all to do with what could have happened. 
Thanks very much again, I very much appreciate the help!

- Jayme

On Fri, Jan 12, 2018 at 8:44 AM, Simone Tiraboschi <stirabos@redhat.com> wrote:

> On Fri, Jan 12, 2018 at 11:11 AM, Martin Sivak <msivak@redhat.com> wrote:
>
>> Hi,
>>
>> the hosted engine agent issue might be fixed by restarting
>> ovirt-ha-broker or updating to newest ovirt-hosted-engine-ha and
>> -setup. We improved handling of the missing symlink.
>
> Available just in oVirt 4.2.1 RC1
>
>> All the other issues seem to point to some storage problem I am afraid.
>>
>> You said you started the VM, do you see it in virsh -r list?
>>
>> Best regards
>>
>> Martin Sivak
>>
>> On Thu, Jan 11, 2018 at 10:00 PM, Jayme <jaymef@gmail.com> wrote:
>>> Please help, I'm really not sure what else to try at this point.  Thank you
>>> for reading!
>>>
>>>
>>> I'm still working on trying to get my hosted engine running after a botched
>>> upgrade to 4.2.  Storage is NFS mounted from within one of the hosts.  Right
>>> now I have 3 centos7 hosts that are fully updated with yum packages from
>>> ovirt 4.2, the engine was fully updated with yum packages and failed to come
>>> up after reboot.  As of right now, everything should have full yum updates
>>> and all having 4.2 rpms.  I have global maintenance mode on right now and
>>> started hosted-engine on one of the three hosts and the status is currently:
>>> Engine status : {"reason": "failed liveliness check", "health": "bad", "vm":
>>> "up", "detail": "Up"}
>>>
>>>
>>> this is what I get when trying to enter hosted-engine --console
>>>
>>>
>>> The engine VM is running on this host
>>>
>>> error: failed to get domain 'HostedEngine'
>>>
>>> error: Domain not found: no domain with matching name 'HostedEngine'
>>>
>>>
>>> Here are logs from various sources when I start the VM on HOST3:
>>>
>>>
>>> hosted-engine --vm-start
>>>
>>> Command VM.getStats with args {'vmID':
>>> '4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:
>>>
>>> (code=1, message=Virtual machine does not exist: {'vmId':
>>> u'4013c829-c9d7-4b72-90d5-6fe58137504c'})
>>>
>>>
>>> Jan 11 16:55:57 cultivar3 systemd-machined: New machine qemu-110-Cultivar.
>>>
>>> Jan 11 16:55:57 cultivar3 systemd: Started Virtual Machine
>>> qemu-110-Cultivar.
>>>
>>> Jan 11 16:55:57 cultivar3 systemd: Starting Virtual Machine
>>> qemu-110-Cultivar.
>>>
>>> Jan 11 16:55:57 cultivar3 kvm: 3 guests now active
>>>
>>>
>>> ==> /var/log/vdsm/vdsm.log <==
>>>
>>>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
>>> method
>>>
>>>     ret = func(*args, **kwargs)
>>>
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2718, in
>>> getStorageDomainInfo
>>>
>>>     dom = self.validateSdUUID(sdUUID)
>>>
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 304, in
>>> validateSdUUID
>>>
>>>     sdDom.validate()
>>>
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 515,
>>> in validate
>>>
>>>     raise se.StorageDomainAccessError(self.sdUUID)
>>>
>>> StorageDomainAccessError: Domain is either partially accessible or entirely
>>> inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)
>>>
>>> jsonrpc/2::ERROR::2018-01-11
>>> 16:55:16,144::dispatcher::82::storage.Dispatcher::(wrapper) FINISH
>>> getStorageDomainInfo error=Domain is either partially accessible or entirely
>>> inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)
>>>
>>>
>>> ==> /var/log/libvirt/qemu/Cultivar.log <==
>>>
>>> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
>>> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
>>> guest=Cultivar,debug-threads=on -S -object
>>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-108-Cultivar/master-key.aes
>>> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>>> Conroe -m 8192 -realtime mlock=off -smp
>>> 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
>>> 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
>>> 'type=1,manufacturer=oVirt,product=oVirt
>>> Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
>>> -no-user-config -nodefaults -chardev
>>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-108-Cultivar/monitor.sock,server,nowait
>>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>>> base=2018-01-11T20:33:19,driftfix=slew -global
>>> kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device
>>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
>>> file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
>>> -device
>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>>> -drive if=none,id=drive-ide0-1-0,readonly=on -device
>>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
>>> tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
>>> -chardev
>>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>>> -chardev
>>> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>>> -chardev spicevmc,id=charchannel2,name=vdagent -device
>>> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>>> -chardev
>>> socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
>>> -chardev pty,id=charconsole0 -device
>>> virtconsole,chardev=charconsole0,id=console0 -spice
>>> tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
>>> -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
>>> rng-random,id=objrng0,filename=/dev/urandom -device
>>> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
>>>
>>> 2018-01-11T20:33:19.699999Z qemu-kvm: -chardev pty,id=charconsole0: char
>>> device redirected to /dev/pts/2 (label charconsole0)
>>>
>>> 2018-01-11 20:38:11.640+0000: shutting down, reason=shutdown
>>>
>>> 2018-01-11 20:39:02.122+0000: starting up libvirt version: 3.2.0, package:
>>> 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
>>> 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
>>> 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: cultivar3
>>>
>>> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
>>> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
>>> guest=Cultivar,debug-threads=on -S -object
>>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-109-Cultivar/master-key.aes
>>> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>>> Conroe -m 8192 -realtime mlock=off -smp
>>> 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
>>> 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
>>> 'type=1,manufacturer=oVirt,product=oVirt
>>> Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
>>> -no-user-config -nodefaults -chardev
>>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-109-Cultivar/monitor.sock,server,nowait
>>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>>> base=2018-01-11T20:39:02,driftfix=slew -global
>>> kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device
>>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
>>> file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
>>> -device
>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>>> -drive if=none,id=drive-ide0-1-0,readonly=on -device
>>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
>>> tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
>>> -chardev
>>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>>> -chardev
>>> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>>> -chardev spicevmc,id=charchannel2,name=vdagent -device
>>> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>>> -chardev
>>> socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
>>> -chardev pty,id=charconsole0 -device
>>> virtconsole,chardev=charconsole0,id=console0 -spice
>>> tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
>>> -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
>>> rng-random,id=objrng0,filename=/dev/urandom -device
>>> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
>>>
>>> 2018-01-11T20:39:02.380773Z qemu-kvm: -chardev pty,id=charconsole0: char
>>> device redirected to /dev/pts/2 (label charconsole0)
>>>
>>> 2018-01-11 20:53:11.407+0000: shutting down, reason=shutdown
>>>
>>> 2018-01-11 20:55:57.210+0000: starting up libvirt version: 3.2.0, package:
>>> 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
>>> 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
>>> 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
>>> cultivar3.grove.silverorange.com
>>>
>>> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
>>> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
>>> guest=Cultivar,debug-threads=on -S -object
>>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-110-Cultivar/master-key.aes
>>> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>>> Conroe -m 8192 -realtime mlock=off -smp
>>> 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
>>> 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
>>> 'type=1,manufacturer=oVirt,product=oVirt
>>> Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
>>> -no-user-config -nodefaults -chardev
>>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-110-Cultivar/monitor.sock,server,nowait
>>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>>> base=2018-01-11T20:55:57,driftfix=slew -global
>>> kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device
>>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
>>> file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
>>> -device
>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>>> -drive if=none,id=drive-ide0-1-0,readonly=on -device
>>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
>>> tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
>>> -chardev
>>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>>> -chardev
>>> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>>> -chardev spicevmc,id=charchannel2,name=vdagent -device
>>> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>>> -chardev
>>> socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
>>> -chardev pty,id=charconsole0 -device
>>> virtconsole,chardev=charconsole0,id=console0 -spice
>>> tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
>>> -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
>>> rng-random,id=objrng0,filename=/dev/urandom -device
>>> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
>>>
>>> 2018-01-11T20:55:57.468037Z qemu-kvm: -chardev pty,id=charconsole0: char
>>> device redirected to /dev/pts/2 (label charconsole0)
>>>
>>>
>>> ==> /var/log/ovirt-hosted-engine-ha/broker.log <==
>>>
>>>   File
>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
>>> line 151, in get_raw_stats
>>>
>>>     f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
>>>
>>> OSError: [Errno 2] No such file or directory:
>>> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
>>>
>>> StatusStorageThread::ERROR::2018-01-11
>>> 16:55:15,761::status_broker::92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run)
>>> Failed to read state.
>>>
>>> Traceback (most recent call last):
>>>
>>>   File
>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py",
>>> line 88, in run
>>>
>>>     self._storage_broker.get_raw_stats()
>>>
>>>   File
>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
>>> line 162, in get_raw_stats
>>>
>>>     .format(str(e)))
>>>
>>> RequestError: failed to read metadata: [Errno 2] No such file or directory:
>>> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
>>>
>>>
>>> ==> /var/log/ovirt-hosted-engine-ha/agent.log <==
>>>
>>>     result = refresh_method()
>>>
>>>   File
>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
>>> line 519, in refresh_vm_conf
>>>
>>>     content = self._get_file_content_from_shared_storage(VM)
>>>
>>>   File
>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
>>> line 484, in _get_file_content_from_shared_storage
>>>
>>>     config_volume_path = self._get_config_volume_path()
>>>
>>>   File
>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
>>> line 188, in _get_config_volume_path
>>>
>>>     conf_vol_uuid
>>>
>>>   File
>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/heconflib.py",
>>> line 358, in get_volume_path
>>>
>>>     root=envconst.SD_RUN_DIR,
>>>
>>> RuntimeError: Path to volume 4838749f-216d-406b-b245-98d0343fcf7f not found
>>> in /run/vdsm/storag
>>>
>>>
>>>
>>> ==> /var/log/vdsm/vdsm.log <==
>>>
>>> periodic/42::ERROR::2018-01-11
>>> 16:56:11,446::vmstats::260::virt.vmstats::(send_metrics) VM metrics
>>> collection failed
>>>
>>> Traceback (most recent call last):
>>>
>>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line 197, in
>>> send_metrics
>>>
>>>     data[prefix + '.cpu.usage'] = stat['cpuUsage']
>>>
>>> KeyError: 'cpuUsage'