<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jan 12, 2018 at 11:11 AM, Martin Sivak <span dir="ltr"><<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
<br>
the hosted engine agent issue might be fixed by restarting<br>
ovirt-ha-broker or updating to newest ovirt-hosted-engine-ha and<br>
-setup. We improved handling of the missing symlink.<br></blockquote><div><br></div>Available just in oVirt 4.2.1 RC1<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
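
For reference, a minimal sketch of that remediation on an affected host
(assuming the oVirt 4.2.1 RC1 repository is already enabled there; these
are the stock service and package names):

    # restart the broker so it rebuilds its runtime state
    systemctl restart ovirt-ha-broker

    # or pull in the fixed packages instead
    yum update ovirt-hosted-engine-ha ovirt-hosted-engine-setup
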
> All the other issues seem to point to some storage problem, I'm afraid.
>
> You said you started the VM; do you see it in "virsh -r list"?
>
> Best regards
>
> Martin Sivak
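
(A quick read-only way to answer Martin's question from the host; the Id
and name in the sample output are taken from the qemu logs further down
and are only illustrative:

    virsh -r list --all
    # roughly:
    #  Id    Name        State
    # ----------------------------
    #  110   Cultivar    running
)
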
<div><div class="gmail-h5"><br>
On Thu, Jan 11, 2018 at 10:00 PM, Jayme <<a href="mailto:jaymef@gmail.com">jaymef@gmail.com</a>> wrote:<br>
> > Please help, I'm really not sure what else to try at this point. Thank
> > you for reading!
> >
> > I'm still working on trying to get my hosted engine running after a
> > botched upgrade to 4.2. Storage is NFS, mounted from within one of the
> > hosts. Right now I have three CentOS 7 hosts fully updated with yum
> > packages from oVirt 4.2; the engine was also fully updated with yum
> > packages and failed to come up after reboot. As of right now everything
> > should have full yum updates, all on 4.2 rpms. I have global maintenance
> > mode on and have started hosted-engine on one of the three hosts; the
> > status is currently:
> >
> > Engine status: {"reason": "failed liveliness check", "health": "bad",
> > "vm": "up", "detail": "Up"}
> >
> > This is what I get when trying to enter "hosted-engine --console":
> >
> > The engine VM is running on this host
> >
> > error: failed to get domain 'HostedEngine'
> >
> > error: Domain not found: no domain with matching name 'HostedEngine'
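
(Worth noting: the qemu command lines below show the guest was started with
"-name guest=Cultivar", not "HostedEngine", which matches the lookup failure
above. A read-only cross-check against libvirt:

    virsh -r list --all
    virsh -r dominfo Cultivar   # domain name taken from the logs below
)
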
> >
> > Here are logs from various sources when I start the VM on HOST3:
> >
> > hosted-engine --vm-start
> >
> > Command VM.getStats with args {'vmID':
> > '4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:
> >
> > (code=1, message=Virtual machine does not exist: {'vmId':
> > u'4013c829-c9d7-4b72-90d5-6fe58137504c'})
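
(The same getStats query can be reproduced directly against vdsm;
vdsm-client ships with vdsm 4.20 on 4.2 hosts:

    vdsm-client VM getStats vmID=4013c829-c9d7-4b72-90d5-6fe58137504c
)
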
> >
> > Jan 11 16:55:57 cultivar3 systemd-machined: New machine qemu-110-Cultivar.
> >
> > Jan 11 16:55:57 cultivar3 systemd: Started Virtual Machine
> > qemu-110-Cultivar.
> >
> > Jan 11 16:55:57 cultivar3 systemd: Starting Virtual Machine
> > qemu-110-Cultivar.
> >
> > Jan 11 16:55:57 cultivar3 kvm: 3 guests now active
> >
> > ==> /var/log/vdsm/vdsm.log <==
> >
> >   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48,
> > in method
> >     ret = func(*args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2718,
> > in getStorageDomainInfo
> >     dom = self.validateSdUUID(sdUUID)
> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 304,
> > in validateSdUUID
> >     sdDom.validate()
> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 515,
> > in validate
> >     raise se.StorageDomainAccessError(self.sdUUID)
> > StorageDomainAccessError: Domain is either partially accessible or entirely
> > inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)
> >
> > jsonrpc/2::ERROR::2018-01-11
> > 16:55:16,144::dispatcher::82::storage.Dispatcher::(wrapper) FINISH
> > getStorageDomainInfo error=Domain is either partially accessible or
> > entirely inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)
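
(StorageDomainAccessError means vdsm could not validate the domain's
metadata on the NFS mount. A quick sanity check, using the domain UUID from
the traceback; the exact mount-point name under /rhev/data-center/mnt
depends on the NFS export and is only illustrative:

    df -h /rhev/data-center/mnt/*
    ls -l /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/
)
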
> >
> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >
> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> > guest=Cultivar,debug-threads=on -S -object
> > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-108-Cultivar/master-key.aes
> > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> > Conroe -m 8192 -realtime mlock=off -smp
> > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> > 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> > 'type=1,manufacturer=oVirt,product=oVirt
> > Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> > -no-user-config -nodefaults -chardev
> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-108-Cultivar/monitor.sock,server,nowait
> > -mon chardev=charmonitor,id=monitor,mode=control -rtc
> > base=2018-01-11T20:33:19,driftfix=slew -global
> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device
> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> > file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
> > -device
> > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> > -drive if=none,id=drive-ide0-1-0,readonly=on -device
> > ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> > tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
> > virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
> > -chardev
> > socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> > -device
> > virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> > -chardev
> > socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> > -device
> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> > -chardev spicevmc,id=charchannel2,name=vdagent -device
> > virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> > -chardev
> > socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
> > -device
> > virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> > -chardev pty,id=charconsole0 -device
> > virtconsole,chardev=charconsole0,id=console0 -spice
> > tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> > rng-random,id=objrng0,filename=/dev/urandom -device
> > virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
> >
> > 2018-01-11T20:33:19.699999Z qemu-kvm: -chardev pty,id=charconsole0: char
> > device redirected to /dev/pts/2 (label charconsole0)
> >
> > 2018-01-11 20:38:11.640+0000: shutting down, reason=shutdown
> >
> > 2018-01-11 20:39:02.122+0000: starting up libvirt version: 3.2.0, package:
> > 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: cultivar3
> >
> > [qemu-kvm command line identical to the one above, except that the domain
> > directory is domain-109-Cultivar and the clock is
> > -rtc base=2018-01-11T20:39:02,driftfix=slew]
> >
> > 2018-01-11T20:39:02.380773Z qemu-kvm: -chardev pty,id=charconsole0: char
> > device redirected to /dev/pts/2 (label charconsole0)
> >
> > 2018-01-11 20:53:11.407+0000: shutting down, reason=shutdown
> >
> > 2018-01-11 20:55:57.210+0000: starting up libvirt version: 3.2.0, package:
> > 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> > cultivar3.grove.silverorange.com
> >
> > [qemu-kvm command line again identical to the first one above, except that
> > the domain directory is domain-110-Cultivar and the clock is
> > -rtc base=2018-01-11T20:55:57,driftfix=slew]
> >
> > 2018-01-11T20:55:57.468037Z qemu-kvm: -chardev pty,id=charconsole0: char
> > device redirected to /dev/pts/2 (label charconsole0)
> >
> > ==> /var/log/ovirt-hosted-engine-ha/broker.log <==
> >
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> > line 151, in get_raw_stats
> >     f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
> > OSError: [Errno 2] No such file or directory:
> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
> >
> > StatusStorageThread::ERROR::2018-01-11
> > 16:55:15,761::status_broker::92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run)
> > Failed to read state.
> > Traceback (most recent call last):
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py",
> > line 88, in run
> >     self._storage_broker.get_raw_stats()
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> > line 162, in get_raw_stats
> >     .format(str(e)))
> > RequestError: failed to read metadata: [Errno 2] No such file or directory:
> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
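
(This is exactly the missing-symlink case from the top of the thread: the
broker reads the HA metadata through a link under /var/run/vdsm/storage/
that points into the mounted NFS domain. A way to confirm, with the UUIDs
taken from the error above; the mount-point name under /rhev/data-center/mnt
depends on your export and is illustrative:

    # is the runtime link there at all?
    ls -l /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a

    # does the target still exist on the mounted domain?
    ls /rhev/data-center/mnt/*/248f46f0-d793-4581-9810-c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a

Restarting ovirt-ha-broker, as suggested above, should re-create the link
once the updated packages are in place.)
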
> >
> > ==> /var/log/ovirt-hosted-engine-ha/agent.log <==
> >
> >     result = refresh_method()
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
> > line 519, in refresh_vm_conf
> >     content = self._get_file_content_from_shared_storage(VM)
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
> > line 484, in _get_file_content_from_shared_storage
> >     config_volume_path = self._get_config_volume_path()
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
> > line 188, in _get_config_volume_path
> >     conf_vol_uuid
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/heconflib.py",
> > line 358, in get_volume_path
> >     root=envconst.SD_RUN_DIR,
> > RuntimeError: Path to volume 4838749f-216d-406b-b245-98d0343fcf7f not found
> > in /run/vdsm/storag
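
(Same root cause as the broker error: the agent resolves the hosted-engine
configuration volume under /run/vdsm/storage/ and the link is gone. After
updating and restarting the HA services per Martin's note, re-connecting the
hosted-engine storage is one way to get the runtime links rebuilt; this is a
stock subcommand, offered here only as a sketch:

    hosted-engine --connect-storage
)
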
> >
> > ==> /var/log/vdsm/vdsm.log <==
> >
> > periodic/42::ERROR::2018-01-11
> > 16:56:11,446::vmstats::260::virt.vmstats::(send_metrics) VM metrics
> > collection failed
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line 197,
> > in send_metrics
> >     data[prefix + '.cpu.usage'] = stat['cpuUsage']
> > KeyError: 'cpuUsage'

> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users