<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jan 12, 2018 at 11:11 AM, Martin Sivak <span dir="ltr">&lt;<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
<br>
the hosted engine agent issue might be fixed by restarting<br>
ovirt-ha-broker or by updating to the newest ovirt-hosted-engine-ha and<br>
-setup packages. We improved the handling of the missing symlink.<br></blockquote><div><br></div>That fix is available only in oVirt 4.2.1 RC1.<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
All the other issues seem to point to some storage problem, I&#39;m afraid.<br>
<br>
You said you started the VM; do you see it in virsh -r list?<br>
<br>
Best regards<br>
<br>
Martin Sivak<br>
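For anyone following along: the read-only libvirt check Martin suggests can be wrapped in a small script. This is only a sketch; `parse_virsh_list` is a helper name of my own, and it assumes nothing beyond the standard table `virsh list` prints:

```python
import subprocess

def parse_virsh_list(output):
    """Pull domain names out of the table printed by `virsh list`
    (helper name is mine). Data rows start with a numeric Id, or with
    '-' for defined-but-inactive domains."""
    names = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 3 and (parts[0].isdigit() or parts[0] == "-"):
            names.append(parts[1])
    return names

if __name__ == "__main__":
    try:
        # -r opens a read-only connection, so this is safe on a live host.
        out = subprocess.check_output(["virsh", "-r", "list", "--all"],
                                      universal_newlines=True)
        print(parse_virsh_list(out))
    except OSError:
        print("virsh is not available on this machine")
```

If HostedEngine is missing from the parsed list while hosted-engine claims the VM is up, the two layers disagree, which is exactly what the thread below shows.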
<div><div class="gmail-h5"><br>
On Thu, Jan 11, 2018 at 10:00 PM, Jayme &lt;<a href="mailto:jaymef@gmail.com">jaymef@gmail.com</a>&gt; wrote:<br>
&gt; Please help, I&#39;m really not sure what else to try at this point.  Thank you<br>
&gt; for reading!<br>
&gt;<br>
&gt;<br>
&gt; I&#39;m still working on trying to get my hosted engine running after a botched<br>
&gt; upgrade to 4.2.  Storage is NFS, mounted from within one of the hosts.  Right<br>
&gt; now I have 3 CentOS 7 hosts that are fully updated with yum packages from<br>
&gt; ovirt 4.2; the engine was fully updated with yum packages and failed to come<br>
&gt; up after reboot.  As of right now, everything should have full yum updates<br>
&gt; and all hosts have 4.2 rpms.  I have global maintenance mode on right now and<br>
&gt; started hosted-engine on one of the three hosts, and the status is currently:<br>
&gt; Engine status : {&quot;reason&quot;: &quot;failed liveliness check&quot;, &quot;health&quot;: &quot;bad&quot;, &quot;vm&quot;:<br>
&gt; &quot;up&quot;, &quot;detail&quot;: &quot;Up&quot;}<br>
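A side note on reading that status line: as far as I know, "vm": "up" together with "health": "bad" means the engine VM process is running but the engine application inside it is not answering its liveliness (health page) check. A rough interpretation helper (the function name is mine, not part of hosted-engine) might look like:

```python
def engine_state(status):
    """Summarize a hosted-engine status dict (helper name is mine).

    "vm": "up" together with "health": "bad" means the engine VM process
    is running, but the engine application inside it is not answering
    its liveliness (health page) check.
    """
    if status.get("vm") != "up":
        return "VM is down"
    if status.get("health") == "good":
        return "engine is up and answering"
    return "VM is running, engine unreachable: %s" % status.get("reason", "unknown")

status = {"reason": "failed liveliness check", "health": "bad",
          "vm": "up", "detail": "Up"}
print(engine_state(status))
```

So the status above says the VM started, and the problem is inside or below it.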
&gt;<br>
&gt;<br>
&gt; this is what I get when trying to run hosted-engine --console:<br>
&gt;<br>
&gt;<br>
&gt; The engine VM is running on this host<br>
&gt;<br>
&gt; error: failed to get domain &#39;HostedEngine&#39;<br>
&gt;<br>
&gt; error: Domain not found: no domain with matching name &#39;HostedEngine&#39;<br>
&gt;<br>
&gt;<br>
&gt; Here are logs from various sources when I start the VM on HOST3:<br>
&gt;<br>
&gt;<br>
&gt; hosted-engine --vm-start<br>
&gt;<br>
&gt; Command VM.getStats with args {&#39;vmID&#39;:<br>
&gt; &#39;4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c&#39;} failed:<br>
&gt;<br>
&gt; (code=1, message=Virtual machine does not exist: {&#39;vmId&#39;:<br>
&gt; u&#39;4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c&#39;})<br>
&gt;<br>
&gt;<br>
&gt; Jan 11 16:55:57 cultivar3 systemd-machined: New machine qemu-110-Cultivar.<br>
&gt;<br>
&gt; Jan 11 16:55:57 cultivar3 systemd: Started Virtual Machine<br>
&gt; qemu-110-Cultivar.<br>
&gt;<br>
&gt; Jan 11 16:55:57 cultivar3 systemd: Starting Virtual Machine<br>
&gt; qemu-110-Cultivar.<br>
&gt;<br>
&gt; Jan 11 16:55:57 cultivar3 kvm: 3 guests now active<br>
&gt;<br>
&gt;<br>
&gt; ==&gt; /var/log/vdsm/vdsm.log &lt;==<br>
&gt;<br>
&gt;   File &quot;/usr/lib/python2.7/site-<wbr>packages/vdsm/common/api.py&quot;, line 48, in<br>
&gt; method<br>
&gt;<br>
&gt;     ret = func(*args, **kwargs)<br>
&gt;<br>
&gt;   File &quot;/usr/lib/python2.7/site-<wbr>packages/vdsm/storage/hsm.py&quot;, line 2718, in<br>
&gt; getStorageDomainInfo<br>
&gt;<br>
&gt;     dom = self.validateSdUUID(sdUUID)<br>
&gt;<br>
&gt;   File &quot;/usr/lib/python2.7/site-<wbr>packages/vdsm/storage/hsm.py&quot;, line 304, in<br>
&gt; validateSdUUID<br>
&gt;<br>
&gt;     sdDom.validate()<br>
&gt;<br>
&gt;   File &quot;/usr/lib/python2.7/site-<wbr>packages/vdsm/storage/fileSD.<wbr>py&quot;, line 515,<br>
&gt; in validate<br>
&gt;<br>
&gt;     raise se.StorageDomainAccessError(<wbr>self.sdUUID)<br>
&gt;<br>
&gt; StorageDomainAccessError: Domain is either partially accessible or entirely<br>
&gt; inaccessible: (u&#39;248f46f0-d793-4581-9810-<wbr>c9d965e2f286&#39;,)<br>
&gt;<br>
&gt; jsonrpc/2::ERROR::2018-01-11<br>
&gt; 16:55:16,144::dispatcher::82::<wbr>storage.Dispatcher::(wrapper) FINISH<br>
&gt; getStorageDomainInfo error=Domain is either partially accessible or entirely<br>
&gt; inaccessible: (u&#39;248f46f0-d793-4581-9810-<wbr>c9d965e2f286&#39;,)<br>
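The StorageDomainAccessError above is vdsm's validate() failing on the domain directory. A rough stand-alone probe of the same idea (my own helper, not vdsm's code; the default root is the usual /rhev/data-center/mnt mount location) would be:

```python
import os

def domain_accessible(sd_uuid, root="/rhev/data-center/mnt"):
    """Rough probe for a file storage domain (my own helper, not vdsm's
    validate()): look for the domain directory under each NFS mount and
    make sure it can actually be listed."""
    if not os.path.isdir(root):
        return False
    for mount in os.listdir(root):
        candidate = os.path.join(root, mount, sd_uuid)
        if os.path.isdir(candidate):
            try:
                os.listdir(candidate)  # fails on a stale/broken NFS mount
                return True
            except OSError:
                return False
    return False
```

If this returns False for the domain UUID in the error, the NFS export itself needs looking at before anything engine-side.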
&gt;<br>
&gt;<br>
&gt; ==&gt; /var/log/libvirt/qemu/<wbr>Cultivar.log &lt;==<br>
&gt;<br>
&gt; LC_ALL=C PATH=/usr/local/sbin:/usr/<wbr>local/bin:/usr/sbin:/usr/bin<br>
&gt; QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name<br>
&gt; guest=Cultivar,debug-threads=<wbr>on -S -object<br>
&gt; secret,id=masterKey0,format=<wbr>raw,file=/var/lib/libvirt/<wbr>qemu/domain-108-Cultivar/<wbr>master-key.aes<br>
&gt; -machine pc-i440fx-rhel7.3.0,accel=kvm,<wbr>usb=off,dump-guest-core=off -cpu<br>
&gt; Conroe -m 8192 -realtime mlock=off -smp<br>
&gt; 2,maxcpus=16,sockets=16,cores=<wbr>1,threads=1 -uuid<br>
&gt; 4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c -smbios<br>
&gt; &#39;type=1,manufacturer=oVirt,<wbr>product=oVirt<br>
&gt; Node,version=7-4.1708.el7.<wbr>centos,serial=44454C4C-4300-<wbr>1034-8035-CAC04F424331,uuid=<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c&#39;<br>
&gt; -no-user-config -nodefaults -chardev<br>
&gt; socket,id=charmonitor,path=/<wbr>var/lib/libvirt/qemu/domain-<wbr>108-Cultivar/monitor.sock,<wbr>server,nowait<br>
&gt; -mon chardev=charmonitor,id=<wbr>monitor,mode=control -rtc<br>
&gt; base=2018-01-11T20:33:19,<wbr>driftfix=slew -global<br>
&gt; kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device<br>
&gt; piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device<br>
&gt; virtio-serial-pci,id=virtio-<wbr>serial0,bus=pci.0,addr=0x4 -drive<br>
&gt; file=/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/c2dde892-f978-<wbr>4dfc-a421-c8e04cf387f9/<wbr>23aa0a66-fa6c-4967-a1e5-<wbr>fbe47c0cd705,format=raw,if=<wbr>none,id=drive-virtio-disk0,<wbr>serial=c2dde892-f978-4dfc-<wbr>a421-c8e04cf387f9,cache=none,<wbr>werror=stop,rerror=stop,aio=<wbr>threads<br>
&gt; -device<br>
&gt; virtio-blk-pci,scsi=off,bus=<wbr>pci.0,addr=0x6,drive=drive-<wbr>virtio-disk0,id=virtio-disk0,<wbr>bootindex=1<br>
&gt; -drive if=none,id=drive-ide0-1-0,<wbr>readonly=on -device<br>
&gt; ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0 -netdev<br>
&gt; tap,fd=30,id=hostnet0,vhost=<wbr>on,vhostfd=32 -device<br>
&gt; virtio-net-pci,netdev=<wbr>hostnet0,id=net0,mac=00:16:3e:<wbr>7f:d6:83,bus=pci.0,addr=0x3<br>
&gt; -chardev<br>
&gt; socket,id=charchannel0,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.com.redhat.rhevm.<wbr>vdsm,server,nowait<br>
&gt; -device<br>
&gt; virtserialport,bus=virtio-<wbr>serial0.0,nr=1,chardev=<wbr>charchannel0,id=channel0,name=<wbr>com.redhat.rhevm.vdsm<br>
&gt; -chardev<br>
&gt; socket,id=charchannel1,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.qemu.guest_<wbr>agent.0,server,nowait<br>
&gt; -device<br>
&gt; virtserialport,bus=virtio-<wbr>serial0.0,nr=2,chardev=<wbr>charchannel1,id=channel1,name=<wbr>org.qemu.guest_agent.0<br>
&gt; -chardev spicevmc,id=charchannel2,name=<wbr>vdagent -device<br>
&gt; virtserialport,bus=virtio-<wbr>serial0.0,nr=3,chardev=<wbr>charchannel2,id=channel2,name=<wbr>com.redhat.spice.0<br>
&gt; -chardev<br>
&gt; socket,id=charchannel3,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.ovirt.hosted-<wbr>engine-setup.0,server,nowait<br>
&gt; -device<br>
&gt; virtserialport,bus=virtio-<wbr>serial0.0,nr=4,chardev=<wbr>charchannel3,id=channel3,name=<wbr>org.ovirt.hosted-engine-setup.<wbr>0<br>
&gt; -chardev pty,id=charconsole0 -device<br>
&gt; virtconsole,chardev=<wbr>charconsole0,id=console0 -spice<br>
&gt; tls-port=5900,addr=0,x509-dir=<wbr>/etc/pki/vdsm/libvirt-spice,<wbr>tls-channel=default,seamless-<wbr>migration=on<br>
&gt; -device cirrus-vga,id=video0,bus=pci.<wbr>0,addr=0x2 -object<br>
&gt; rng-random,id=objrng0,<wbr>filename=/dev/urandom -device<br>
&gt; virtio-rng-pci,rng=objrng0,id=<wbr>rng0,bus=pci.0,addr=0x5 -msg timestamp=on<br>
&gt;<br>
&gt; 2018-01-11T20:33:19.699999Z qemu-kvm: -chardev pty,id=charconsole0: char<br>
&gt; device redirected to /dev/pts/2 (label charconsole0)<br>
&gt;<br>
&gt; 2018-01-11 20:38:11.640+0000: shutting down, reason=shutdown<br>
&gt;<br>
&gt; 2018-01-11 20:39:02.122+0000: starting up libvirt version: 3.2.0, package:<br>
&gt; 14.el7_4.7 (CentOS BuildSystem &lt;<a href="http://bugs.centos.org" rel="noreferrer" target="_blank">http://bugs.centos.org</a>&gt;,<br>
&gt; 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:<br>
&gt; 2.9.0(qemu-kvm-ev-2.9.0-16.<wbr>el7_4.13.1), hostname: cultivar3<br>
&gt;<br>
&gt; LC_ALL=C PATH=/usr/local/sbin:/usr/<wbr>local/bin:/usr/sbin:/usr/bin<br>
&gt; QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name<br>
&gt; guest=Cultivar,debug-threads=<wbr>on -S -object<br>
&gt; secret,id=masterKey0,format=<wbr>raw,file=/var/lib/libvirt/<wbr>qemu/domain-109-Cultivar/<wbr>master-key.aes<br>
&gt; -machine pc-i440fx-rhel7.3.0,accel=kvm,<wbr>usb=off,dump-guest-core=off -cpu<br>
&gt; Conroe -m 8192 -realtime mlock=off -smp<br>
&gt; 2,maxcpus=16,sockets=16,cores=<wbr>1,threads=1 -uuid<br>
&gt; 4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c -smbios<br>
&gt; &#39;type=1,manufacturer=oVirt,<wbr>product=oVirt<br>
&gt; Node,version=7-4.1708.el7.<wbr>centos,serial=44454C4C-4300-<wbr>1034-8035-CAC04F424331,uuid=<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c&#39;<br>
&gt; -no-user-config -nodefaults -chardev<br>
&gt; socket,id=charmonitor,path=/<wbr>var/lib/libvirt/qemu/domain-<wbr>109-Cultivar/monitor.sock,<wbr>server,nowait<br>
&gt; -mon chardev=charmonitor,id=<wbr>monitor,mode=control -rtc<br>
&gt; base=2018-01-11T20:39:02,<wbr>driftfix=slew -global<br>
&gt; kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device<br>
&gt; piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device<br>
&gt; virtio-serial-pci,id=virtio-<wbr>serial0,bus=pci.0,addr=0x4 -drive<br>
&gt; file=/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/c2dde892-f978-<wbr>4dfc-a421-c8e04cf387f9/<wbr>23aa0a66-fa6c-4967-a1e5-<wbr>fbe47c0cd705,format=raw,if=<wbr>none,id=drive-virtio-disk0,<wbr>serial=c2dde892-f978-4dfc-<wbr>a421-c8e04cf387f9,cache=none,<wbr>werror=stop,rerror=stop,aio=<wbr>threads<br>
&gt; -device<br>
&gt; virtio-blk-pci,scsi=off,bus=<wbr>pci.0,addr=0x6,drive=drive-<wbr>virtio-disk0,id=virtio-disk0,<wbr>bootindex=1<br>
&gt; -drive if=none,id=drive-ide0-1-0,<wbr>readonly=on -device<br>
&gt; ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0 -netdev<br>
&gt; tap,fd=30,id=hostnet0,vhost=<wbr>on,vhostfd=32 -device<br>
&gt; virtio-net-pci,netdev=<wbr>hostnet0,id=net0,mac=00:16:3e:<wbr>7f:d6:83,bus=pci.0,addr=0x3<br>
&gt; -chardev<br>
&gt; socket,id=charchannel0,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.com.redhat.rhevm.<wbr>vdsm,server,nowait<br>
&gt; -device<br>
&gt; virtserialport,bus=virtio-<wbr>serial0.0,nr=1,chardev=<wbr>charchannel0,id=channel0,name=<wbr>com.redhat.rhevm.vdsm<br>
&gt; -chardev<br>
&gt; socket,id=charchannel1,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.qemu.guest_<wbr>agent.0,server,nowait<br>
&gt; -device<br>
&gt; virtserialport,bus=virtio-<wbr>serial0.0,nr=2,chardev=<wbr>charchannel1,id=channel1,name=<wbr>org.qemu.guest_agent.0<br>
&gt; -chardev spicevmc,id=charchannel2,name=<wbr>vdagent -device<br>
&gt; virtserialport,bus=virtio-<wbr>serial0.0,nr=3,chardev=<wbr>charchannel2,id=channel2,name=<wbr>com.redhat.spice.0<br>
&gt; -chardev<br>
&gt; socket,id=charchannel3,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.ovirt.hosted-<wbr>engine-setup.0,server,nowait<br>
&gt; -device<br>
&gt; virtserialport,bus=virtio-<wbr>serial0.0,nr=4,chardev=<wbr>charchannel3,id=channel3,name=<wbr>org.ovirt.hosted-engine-setup.<wbr>0<br>
&gt; -chardev pty,id=charconsole0 -device<br>
&gt; virtconsole,chardev=<wbr>charconsole0,id=console0 -spice<br>
&gt; tls-port=5900,addr=0,x509-dir=<wbr>/etc/pki/vdsm/libvirt-spice,<wbr>tls-channel=default,seamless-<wbr>migration=on<br>
&gt; -device cirrus-vga,id=video0,bus=pci.<wbr>0,addr=0x2 -object<br>
&gt; rng-random,id=objrng0,<wbr>filename=/dev/urandom -device<br>
&gt; virtio-rng-pci,rng=objrng0,id=<wbr>rng0,bus=pci.0,addr=0x5 -msg timestamp=on<br>
&gt;<br>
&gt; 2018-01-11T20:39:02.380773Z qemu-kvm: -chardev pty,id=charconsole0: char<br>
&gt; device redirected to /dev/pts/2 (label charconsole0)<br>
&gt;<br>
&gt; 2018-01-11 20:53:11.407+0000: shutting down, reason=shutdown<br>
&gt;<br>
&gt; 2018-01-11 20:55:57.210+0000: starting up libvirt version: 3.2.0, package:<br>
&gt; 14.el7_4.7 (CentOS BuildSystem &lt;<a href="http://bugs.centos.org" rel="noreferrer" target="_blank">http://bugs.centos.org</a>&gt;,<br>
&gt; 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:<br>
&gt; 2.9.0(qemu-kvm-ev-2.9.0-16.<wbr>el7_4.13.1), hostname:<br>
&gt; cultivar3.grove.silverorange.<wbr>com<br>
&gt;<br>
&gt; LC_ALL=C PATH=/usr/local/sbin:/usr/<wbr>local/bin:/usr/sbin:/usr/bin<br>
&gt; QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name<br>
&gt; guest=Cultivar,debug-threads=<wbr>on -S -object<br>
&gt; secret,id=masterKey0,format=<wbr>raw,file=/var/lib/libvirt/<wbr>qemu/domain-110-Cultivar/<wbr>master-key.aes<br>
&gt; -machine pc-i440fx-rhel7.3.0,accel=kvm,<wbr>usb=off,dump-guest-core=off -cpu<br>
&gt; Conroe -m 8192 -realtime mlock=off -smp<br>
&gt; 2,maxcpus=16,sockets=16,cores=<wbr>1,threads=1 -uuid<br>
&gt; 4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c -smbios<br>
&gt; &#39;type=1,manufacturer=oVirt,<wbr>product=oVirt<br>
&gt; Node,version=7-4.1708.el7.<wbr>centos,serial=44454C4C-4300-<wbr>1034-8035-CAC04F424331,uuid=<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c&#39;<br>
&gt; -no-user-config -nodefaults -chardev<br>
&gt; socket,id=charmonitor,path=/<wbr>var/lib/libvirt/qemu/domain-<wbr>110-Cultivar/monitor.sock,<wbr>server,nowait<br>
&gt; -mon chardev=charmonitor,id=<wbr>monitor,mode=control -rtc<br>
&gt; base=2018-01-11T20:55:57,<wbr>driftfix=slew -global<br>
&gt; kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device<br>
&gt; piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device<br>
&gt; virtio-serial-pci,id=virtio-<wbr>serial0,bus=pci.0,addr=0x4 -drive<br>
&gt; file=/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/c2dde892-f978-<wbr>4dfc-a421-c8e04cf387f9/<wbr>23aa0a66-fa6c-4967-a1e5-<wbr>fbe47c0cd705,format=raw,if=<wbr>none,id=drive-virtio-disk0,<wbr>serial=c2dde892-f978-4dfc-<wbr>a421-c8e04cf387f9,cache=none,<wbr>werror=stop,rerror=stop,aio=<wbr>threads<br>
&gt; -device<br>
&gt; virtio-blk-pci,scsi=off,bus=<wbr>pci.0,addr=0x6,drive=drive-<wbr>virtio-disk0,id=virtio-disk0,<wbr>bootindex=1<br>
&gt; -drive if=none,id=drive-ide0-1-0,<wbr>readonly=on -device<br>
&gt; ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0 -netdev<br>
&gt; tap,fd=30,id=hostnet0,vhost=<wbr>on,vhostfd=32 -device<br>
&gt; virtio-net-pci,netdev=<wbr>hostnet0,id=net0,mac=00:16:3e:<wbr>7f:d6:83,bus=pci.0,addr=0x3<br>
&gt; -chardev<br>
&gt; socket,id=charchannel0,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.com.redhat.rhevm.<wbr>vdsm,server,nowait<br>
&gt; -device<br>
&gt; virtserialport,bus=virtio-<wbr>serial0.0,nr=1,chardev=<wbr>charchannel0,id=channel0,name=<wbr>com.redhat.rhevm.vdsm<br>
&gt; -chardev<br>
&gt; socket,id=charchannel1,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.qemu.guest_<wbr>agent.0,server,nowait<br>
&gt; -device<br>
&gt; virtserialport,bus=virtio-<wbr>serial0.0,nr=2,chardev=<wbr>charchannel1,id=channel1,name=<wbr>org.qemu.guest_agent.0<br>
&gt; -chardev spicevmc,id=charchannel2,name=<wbr>vdagent -device<br>
&gt; virtserialport,bus=virtio-<wbr>serial0.0,nr=3,chardev=<wbr>charchannel2,id=channel2,name=<wbr>com.redhat.spice.0<br>
&gt; -chardev<br>
&gt; socket,id=charchannel3,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.ovirt.hosted-<wbr>engine-setup.0,server,nowait<br>
&gt; -device<br>
&gt; virtserialport,bus=virtio-<wbr>serial0.0,nr=4,chardev=<wbr>charchannel3,id=channel3,name=<wbr>org.ovirt.hosted-engine-setup.<wbr>0<br>
&gt; -chardev pty,id=charconsole0 -device<br>
&gt; virtconsole,chardev=<wbr>charconsole0,id=console0 -spice<br>
&gt; tls-port=5900,addr=0,x509-dir=<wbr>/etc/pki/vdsm/libvirt-spice,<wbr>tls-channel=default,seamless-<wbr>migration=on<br>
&gt; -device cirrus-vga,id=video0,bus=pci.<wbr>0,addr=0x2 -object<br>
&gt; rng-random,id=objrng0,<wbr>filename=/dev/urandom -device<br>
&gt; virtio-rng-pci,rng=objrng0,id=<wbr>rng0,bus=pci.0,addr=0x5 -msg timestamp=on<br>
&gt;<br>
&gt; 2018-01-11T20:55:57.468037Z qemu-kvm: -chardev pty,id=charconsole0: char<br>
&gt; device redirected to /dev/pts/2 (label charconsole0)<br>
&gt;<br>
&gt;<br>
&gt; ==&gt; /var/log/ovirt-hosted-engine-<wbr>ha/broker.log &lt;==<br>
&gt;<br>
&gt;   File<br>
&gt; &quot;/usr/lib/python2.7/site-<wbr>packages/ovirt_hosted_engine_<wbr>ha/broker/storage_broker.py&quot;,<br>
&gt; line 151, in get_raw_stats<br>
&gt;<br>
&gt;     f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)<br>
&gt;<br>
&gt; OSError: [Errno 2] No such file or directory:<br>
&gt; &#39;/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/14a20941-1b84-<wbr>4b82-be8f-ace38d7c037a/<wbr>8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8&#39;<br>
&gt;<br>
&gt; StatusStorageThread::ERROR::<wbr>2018-01-11<br>
&gt; 16:55:15,761::status_broker::<wbr>92::ovirt_hosted_engine_ha.<wbr>broker.status_broker.<wbr>StatusBroker.Update::(run)<br>
&gt; Failed to read state.<br>
&gt;<br>
&gt; Traceback (most recent call last):<br>
&gt;<br>
&gt;   File<br>
&gt; &quot;/usr/lib/python2.7/site-<wbr>packages/ovirt_hosted_engine_<wbr>ha/broker/status_broker.py&quot;,<br>
&gt; line 88, in run<br>
&gt;<br>
&gt;     self._storage_broker.get_raw_<wbr>stats()<br>
&gt;<br>
&gt;   File<br>
&gt; &quot;/usr/lib/python2.7/site-<wbr>packages/ovirt_hosted_engine_<wbr>ha/broker/storage_broker.py&quot;,<br>
&gt; line 162, in get_raw_stats<br>
&gt;<br>
&gt;     .format(str(e)))<br>
&gt;<br>
&gt; RequestError: failed to read metadata: [Errno 2] No such file or directory:<br>
&gt; &#39;/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/14a20941-1b84-<wbr>4b82-be8f-ace38d7c037a/<wbr>8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8&#39;<br>
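Since Martin's note is about exactly this missing-symlink case: a minimal check for which broker paths are actually gone could look like the following (the helper name is mine). If the metadata path from the traceback shows up as missing, restarting ovirt-ha-broker is what is supposed to recreate the symlinks under /var/run/vdsm/storage on fixed builds:

```python
import os

def missing_paths(paths):
    """Return the metadata/lockspace paths whose targets are gone
    (helper name is mine). On hosted-engine hosts these are normally
    symlinks under /var/run/vdsm/storage into the storage domain."""
    return [p for p in paths if not os.path.exists(p)]
```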
&gt;<br>
&gt;<br>
&gt; ==&gt; /var/log/ovirt-hosted-engine-<wbr>ha/agent.log &lt;==<br>
&gt;<br>
&gt;     result = refresh_method()<br>
&gt;<br>
&gt;   File<br>
&gt; &quot;/usr/lib/python2.7/site-<wbr>packages/ovirt_hosted_engine_<wbr>ha/env/config.py&quot;,<br>
&gt; line 519, in refresh_vm_conf<br>
&gt;<br>
&gt;     content = self._get_file_content_from_<wbr>shared_storage(VM)<br>
&gt;<br>
&gt;   File<br>
&gt; &quot;/usr/lib/python2.7/site-<wbr>packages/ovirt_hosted_engine_<wbr>ha/env/config.py&quot;,<br>
&gt; line 484, in _get_file_content_from_shared_<wbr>storage<br>
&gt;<br>
&gt;     config_volume_path = self._get_config_volume_path()<br>
&gt;<br>
&gt;   File<br>
&gt; &quot;/usr/lib/python2.7/site-<wbr>packages/ovirt_hosted_engine_<wbr>ha/env/config.py&quot;,<br>
&gt; line 188, in _get_config_volume_path<br>
&gt;<br>
&gt;     conf_vol_uuid<br>
&gt;<br>
&gt;   File<br>
&gt; &quot;/usr/lib/python2.7/site-<wbr>packages/ovirt_hosted_engine_<wbr>ha/lib/heconflib.py&quot;,<br>
&gt; line 358, in get_volume_path<br>
&gt;<br>
&gt;     root=envconst.SD_RUN_DIR,<br>
&gt;<br>
&gt; RuntimeError: Path to volume 4838749f-216d-406b-b245-<wbr>98d0343fcf7f not found<br>
&gt; in /run/vdsm/storag<br>
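For the "Path to volume ... not found" error, the agent is essentially searching the run directory for the configuration volume UUID. A simplified stand-in for that lookup (an assumption about heconflib's behavior, not its actual code) would be:

```python
import os

def find_volume(sd_uuid, vol_uuid, root="/run/vdsm/storage"):
    """Simplified stand-in (an assumption, not heconflib's real code) for
    the lookup that raises 'Path to volume ... not found': scan every
    image directory of the domain for the volume UUID."""
    sd_dir = os.path.join(root, sd_uuid)
    if not os.path.isdir(sd_dir):
        return None
    for img in os.listdir(sd_dir):
        candidate = os.path.join(sd_dir, img, vol_uuid)
        if os.path.exists(candidate):
            return candidate
    return None
```

A None result here matches the agent's failure: the run-directory links for that domain were never (re)created, which again points back at the storage side.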
&gt;<br>
&gt;<br>
&gt;<br>
&gt; ==&gt; /var/log/vdsm/vdsm.log &lt;==<br>
&gt;<br>
&gt; periodic/42::ERROR::2018-01-11<br>
&gt; 16:56:11,446::vmstats::260::<wbr>virt.vmstats::(send_metrics) VM metrics<br>
&gt; collection failed<br>
&gt;<br>
&gt; Traceback (most recent call last):<br>
&gt;<br>
&gt;   File &quot;/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vmstats.py&quot;<wbr>, line 197, in<br>
&gt; send_metrics<br>
&gt;<br>
&gt;     data[prefix + &#39;.cpu.usage&#39;] = stat[&#39;cpuUsage&#39;]<br>
&gt;<br>
&gt; KeyError: &#39;cpuUsage&#39;<br>
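The KeyError at the end is a plain missing-key bug in the metrics path: vmstats assumes cpuUsage is always present in the per-VM stats. A defensive version of that line (a sketch only, not the upstream fix) would read:

```python
def send_metric(data, prefix, stat):
    """Defensive version of the failing line (sketch only, not the
    upstream fix): copy cpuUsage into the metrics dict only when the
    hypervisor actually reported it."""
    usage = stat.get("cpuUsage")
    if usage is not None:
        data[prefix + ".cpu.usage"] = usage
    return data
```

This error is cosmetic next to the storage ones above; metrics collection fails, but it is not what keeps the engine down.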
&gt;<br>
&gt;<br>
</div></div>&gt; ______________________________<wbr>_________________<br>
&gt; Users mailing list<br>
&gt; <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
&gt; <a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/<wbr>mailman/listinfo/users</a><br>
&gt;<br>
</blockquote></div><br></div></div>