<div dir="ltr">The lock space issue was an issue I needed to clear but I don't believe it has resolved the problem. I shutdown agent and broker on all hosts and disconnected hosted-storage then enabled broker/agent on just one host and connected storage. I started the VM and actually didn't get any errors in the logs barely at all which was good to see, however the VM is still not running:<div><br></div><div>HOST3:</div><div><br></div><div>Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}<br></div><div><br></div><div><div>==> /var/log/messages <==</div><div>Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered disabled state</div><div>Jan 12 12:42:57 cultivar3 kernel: device vnet0 entered promiscuous mode</div><div>Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered blocking state</div><div>Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered forwarding state</div><div>Jan 12 12:42:57 cultivar3 lldpad: recvfrom(Event interface): No buffer space available</div><div>Jan 12 12:42:57 cultivar3 systemd-machined: New machine qemu-111-Cultivar.</div><div>Jan 12 12:42:57 cultivar3 systemd: Started Virtual Machine qemu-111-Cultivar.</div><div>Jan 12 12:42:57 cultivar3 systemd: Starting Virtual Machine qemu-111-Cultivar.</div><div>Jan 12 12:42:57 cultivar3 kvm: 3 guests now active</div><div>Jan 12 12:44:38 cultivar3 libvirtd: 2018-01-12 16:44:38.737+0000: 1535: error : qemuDomainAgentAvailable:6010 : Guest agent is not responding: QEMU guest agent is not connected</div></div><div><br></div><div>Interestingly though, now I'm seeing this in the logs which may be a new clue:</div><div><br></div><div><br></div><div><div>==> /var/log/vdsm/vdsm.log <==</div><div> File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 126, in findDomain</div><div> return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))</div><div> File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 116, in findDomainPath</div><div> raise se.StorageDomainDoesNotExist(sdUUID)</div><div>StorageDomainDoesNotExist: Storage domain does not exist: (u'248f46f0-d793-4581-9810-c9d965e2f286',)</div><div>jsonrpc/4::ERROR::2018-01-12 12:40:30,380::dispatcher::82::storage.Dispatcher::(wrapper) FINISH getStorageDomainInfo error=Storage domain does not exist: (u'248f46f0-d793-4581-9810-c9d965e2f286',)</div><div>periodic/42::ERROR::2018-01-12 12:40:35,430::api::196::root::(_getHaInfo) failed to retrieve Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup finished?</div><div>periodic/43::ERROR::2018-01-12 12:40:50,473::api::196::root::(_getHaInfo) failed to retrieve Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup finished?</div><div>periodic/40::ERROR::2018-01-12 12:41:05,519::api::196::root::(_getHaInfo) failed to retrieve Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup finished?</div><div>periodic/43::ERROR::2018-01-12 12:41:20,566::api::196::root::(_getHaInfo) failed to retrieve Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup finished?</div><div><br></div><div>==> /var/log/ovirt-hosted-engine-ha/broker.log <==</div><div> File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 151, in get_raw_stats</div><div> f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)</div><div>OSError: [Errno 2] No such file or directory: 
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'</div><div>StatusStorageThread::ERROR::2018-01-12 12:32:06,049::status_broker::92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run) Failed to read state.</div><div>Traceback (most recent call last):</div><div> File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 88, in run</div><div> self._storage_broker.get_raw_stats()</div><div> File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 162, in get_raw_stats</div><div> .format(str(e)))</div><div>RequestError: failed to read metadata: [Errno 2] No such file or directory: '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'</div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jan 12, 2018 at 12:02 PM, Martin Sivak <span dir="ltr"><<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">The lock is the issue.<br>
<br>
- try running sanlock client status on all hosts<br>
- also make sure you do not have some forgotten host still connected<br>
to the lockspace, but without ha daemons running (and with the VM)<br>
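<br>
A rough example of the check I mean, on each host (exact output format varies a bit between sanlock versions):<br>
<br>
  # sanlock client status<br>
<br>
The "s ..." lines show which lockspaces the host has joined and the "r ..." lines show<br>
the resource leases it currently holds, so a host that kept the hosted-engine lease<br>
should stand out there.<br>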
<br>
I need to go to our presidential election now; I might check the email<br>
later tonight.<br>
<br>
Martin<br>
<div><div class="h5"><br>
On Fri, Jan 12, 2018 at 4:59 PM, Jayme <<a href="mailto:jaymef@gmail.com">jaymef@gmail.com</a>> wrote:<br>
> Here are the newest logs from my attempt to start the hosted VM:<br>
><br>
> ==> /var/log/messages <==<br>
> Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered blocking<br>
> state<br>
> Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered disabled<br>
> state<br>
> Jan 12 11:58:14 cultivar0 kernel: device vnet4 entered promiscuous mode<br>
> Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered blocking<br>
> state<br>
> Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
> forwarding state<br>
> Jan 12 11:58:14 cultivar0 lldpad: recvfrom(Event interface): No buffer space<br>
> available<br>
> Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info> [1515772694.8715]<br>
> manager: (vnet4): new Tun device<br>
> (/org/freedesktop/<wbr>NetworkManager/Devices/140)<br>
> Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info> [1515772694.8795]<br>
> device (vnet4): state change: unmanaged -> unavailable (reason<br>
> 'connection-assumed') [10 20 41]<br>
><br>
> ==> /var/log/libvirt/qemu/<wbr>Cultivar.log <==<br>
> 2018-01-12 15:58:14.879+0000: starting up libvirt version: 3.2.0, package:<br>
> 14.el7_4.7 (CentOS BuildSystem <<a href="http://bugs.centos.org" rel="noreferrer" target="_blank">http://bugs.centos.org</a>>,<br>
> 2018-01-04-19:31:34, <a href="http://c1bm.rdu2.centos.org" rel="noreferrer" target="_blank">c1bm.rdu2.centos.org</a>), qemu version:<br>
> 2.9.0(qemu-kvm-ev-2.9.0-16.<wbr>el7_4.13.1), hostname:<br>
> <a href="http://cultivar0.grove.silverorange.com" rel="noreferrer" target="_blank">cultivar0.grove.silverorange.<wbr>com</a><br>
> LC_ALL=C PATH=/usr/local/sbin:/usr/<wbr>local/bin:/usr/sbin:/usr/bin<br>
> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name<br>
> guest=Cultivar,debug-threads=<wbr>on -S -object<br>
> secret,id=masterKey0,format=<wbr>raw,file=/var/lib/libvirt/<wbr>qemu/domain-119-Cultivar/<wbr>master-key.aes<br>
> -machine pc-i440fx-rhel7.3.0,accel=kvm,<wbr>usb=off,dump-guest-core=off -cpu<br>
> Conroe -m 8192 -realtime mlock=off -smp<br>
> 2,maxcpus=16,sockets=16,cores=<wbr>1,threads=1 -uuid<br>
> 4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c -smbios<br>
> 'type=1,manufacturer=oVirt,<wbr>product=oVirt<br>
> Node,version=7-4.1708.el7.<wbr>centos,serial=44454C4C-3300-<wbr>1042-8031-B4C04F4B4831,uuid=<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c'<br>
> -no-user-config -nodefaults -chardev<br>
> socket,id=charmonitor,path=/<wbr>var/lib/libvirt/qemu/domain-<wbr>119-Cultivar/monitor.sock,<wbr>server,nowait<br>
> -mon chardev=charmonitor,id=<wbr>monitor,mode=control -rtc<br>
> base=2018-01-12T15:58:14,<wbr>driftfix=slew -global<br>
> kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device<br>
> piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device<br>
> virtio-serial-pci,id=virtio-<wbr>serial0,bus=pci.0,addr=0x4 -drive<br>
> file=/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/c2dde892-f978-<wbr>4dfc-a421-c8e04cf387f9/<wbr>23aa0a66-fa6c-4967-a1e5-<wbr>fbe47c0cd705,format=raw,if=<wbr>none,id=drive-virtio-disk0,<wbr>serial=c2dde892-f978-4dfc-<wbr>a421-c8e04cf387f9,cache=none,<wbr>werror=stop,rerror=stop,aio=<wbr>threads<br>
> -device<br>
> virtio-blk-pci,scsi=off,bus=<wbr>pci.0,addr=0x6,drive=drive-<wbr>virtio-disk0,id=virtio-disk0,<wbr>bootindex=1<br>
> -drive if=none,id=drive-ide0-1-0,<wbr>readonly=on -device<br>
> ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0 -netdev<br>
> tap,fd=35,id=hostnet0,vhost=<wbr>on,vhostfd=38 -device<br>
> virtio-net-pci,netdev=<wbr>hostnet0,id=net0,mac=00:16:3e:<wbr>7f:d6:83,bus=pci.0,addr=0x3<br>
> -chardev<br>
> socket,id=charchannel0,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.com.redhat.rhevm.<wbr>vdsm,server,nowait<br>
> -device<br>
> virtserialport,bus=virtio-<wbr>serial0.0,nr=1,chardev=<wbr>charchannel0,id=channel0,name=<wbr>com.redhat.rhevm.vdsm<br>
> -chardev<br>
> socket,id=charchannel1,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.qemu.guest_<wbr>agent.0,server,nowait<br>
> -device<br>
> virtserialport,bus=virtio-<wbr>serial0.0,nr=2,chardev=<wbr>charchannel1,id=channel1,name=<wbr>org.qemu.guest_agent.0<br>
> -chardev spicevmc,id=charchannel2,name=<wbr>vdagent -device<br>
> virtserialport,bus=virtio-<wbr>serial0.0,nr=3,chardev=<wbr>charchannel2,id=channel2,name=<wbr>com.redhat.spice.0<br>
> -chardev<br>
> socket,id=charchannel3,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.ovirt.hosted-<wbr>engine-setup.0,server,nowait<br>
> -device<br>
> virtserialport,bus=virtio-<wbr>serial0.0,nr=4,chardev=<wbr>charchannel3,id=channel3,name=<wbr>org.ovirt.hosted-engine-setup.<wbr>0<br>
> -chardev pty,id=charconsole0 -device<br>
> virtconsole,chardev=<wbr>charconsole0,id=console0 -spice<br>
> tls-port=5904,addr=0,x509-dir=<wbr>/etc/pki/vdsm/libvirt-spice,<wbr>tls-channel=default,seamless-<wbr>migration=on<br>
> -device cirrus-vga,id=video0,bus=pci.<wbr>0,addr=0x2 -object<br>
> rng-random,id=objrng0,<wbr>filename=/dev/urandom -device<br>
> virtio-rng-pci,rng=objrng0,id=<wbr>rng0,bus=pci.0,addr=0x5 -msg timestamp=on<br>
><br>
> ==> /var/log/messages <==<br>
> Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info> [1515772694.8807]<br>
> device (vnet4): state change: unavailable -> disconnected (reason 'none')<br>
> [20 30 0]<br>
> Jan 12 11:58:14 cultivar0 systemd-machined: New machine qemu-119-Cultivar.<br>
> Jan 12 11:58:14 cultivar0 systemd: Started Virtual Machine<br>
> qemu-119-Cultivar.<br>
> Jan 12 11:58:14 cultivar0 systemd: Starting Virtual Machine<br>
> qemu-119-Cultivar.<br>
><br>
> ==> /var/log/libvirt/qemu/<wbr>Cultivar.log <==<br>
> 2018-01-12T15:58:15.094002Z qemu-kvm: -chardev pty,id=charconsole0: char<br>
> device redirected to /dev/pts/1 (label charconsole0)<br>
><br>
> ==> /var/log/messages <==<br>
> Jan 12 11:58:15 cultivar0 kvm: 5 guests now active<br>
><br>
> ==> /var/log/libvirt/qemu/<wbr>Cultivar.log <==<br>
> 2018-01-12 15:58:15.217+0000: shutting down, reason=failed<br>
><br>
> ==> /var/log/messages <==<br>
> Jan 12 11:58:15 cultivar0 libvirtd: 2018-01-12 15:58:15.217+0000: 1908:<br>
> error : virLockManagerSanlockAcquire:<wbr>1041 : resource busy: Failed to acquire<br>
> lock: Lease is held by another host<br>
><br>
> ==> /var/log/libvirt/qemu/<wbr>Cultivar.log <==<br>
> 2018-01-12T15:58:15.219934Z qemu-kvm: terminating on signal 15 from pid 1773<br>
> (/usr/sbin/libvirtd)<br>
><br>
> ==> /var/log/messages <==<br>
> Jan 12 11:58:15 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered disabled<br>
> state<br>
> Jan 12 11:58:15 cultivar0 kernel: device vnet4 left promiscuous mode<br>
> Jan 12 11:58:15 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered disabled<br>
> state<br>
> Jan 12 11:58:15 cultivar0 NetworkManager[1092]: <info> [1515772695.2348]<br>
> device (vnet4): state change: disconnected -> unmanaged (reason 'unmanaged')<br>
> [30 10 3]<br>
> Jan 12 11:58:15 cultivar0 NetworkManager[1092]: <info> [1515772695.2349]<br>
> device (vnet4): released from master device ovirtmgmt<br>
> Jan 12 11:58:15 cultivar0 kvm: 4 guests now active<br>
> Jan 12 11:58:15 cultivar0 systemd-machined: Machine qemu-119-Cultivar<br>
> terminated.<br>
><br>
> ==> /var/log/vdsm/vdsm.log <==<br>
> vm/4013c829::ERROR::2018-01-12<br>
> 11:58:15,444::vm::914::virt.<wbr>vm::(_startUnderlyingVm)<br>
> (vmId='4013c829-c9d7-4b72-<wbr>90d5-6fe58137504c') The vm start process failed<br>
> Traceback (most recent call last):<br>
> File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 843, in<br>
> _startUnderlyingVm<br>
> self._run()<br>
> File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 2721, in<br>
> _run<br>
> dom.createWithFlags(flags)<br>
> File "/usr/lib/python2.7/site-<wbr>packages/vdsm/<wbr>libvirtconnection.py", line<br>
> 126, in wrapper<br>
> ret = f(*args, **kwargs)<br>
> File "/usr/lib/python2.7/site-<wbr>packages/vdsm/utils.py", line 512, in<br>
> wrapper<br>
> return func(inst, *args, **kwargs)<br>
> File "/usr/lib64/python2.7/site-<wbr>packages/libvirt.py", line 1069, in<br>
> createWithFlags<br>
> if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed',<br>
> dom=self)<br>
> libvirtError: resource busy: Failed to acquire lock: Lease is held by<br>
> another host<br>
> jsonrpc/6::ERROR::2018-01-12<br>
> 11:58:16,421::__init__::611::<wbr>jsonrpc.JsonRpcServer::(_<wbr>handle_request)<br>
> Internal server error<br>
> Traceback (most recent call last):<br>
> File "/usr/lib/python2.7/site-<wbr>packages/yajsonrpc/__init__.<wbr>py", line 606,<br>
> in _handle_request<br>
> res = method(**params)<br>
> File "/usr/lib/python2.7/site-<wbr>packages/vdsm/rpc/Bridge.py", line 201, in<br>
> _dynamicMethod<br>
> result = fn(*methodArgs)<br>
> File "<string>", line 2, in getAllVmIoTunePolicies<br>
> File "/usr/lib/python2.7/site-<wbr>packages/vdsm/common/api.py", line 48, in<br>
> method<br>
> ret = func(*args, **kwargs)<br>
> File "/usr/lib/python2.7/site-<wbr>packages/vdsm/API.py", line 1354, in<br>
> getAllVmIoTunePolicies<br>
> io_tune_policies_dict = self._cif.<wbr>getAllVmIoTunePolicies()<br>
> File "/usr/lib/python2.7/site-<wbr>packages/vdsm/clientIF.py", line 524, in<br>
> getAllVmIoTunePolicies<br>
> 'current_values': v.getIoTune()}<br>
> File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 3481, in<br>
> getIoTune<br>
> result = self.getIoTuneResponse()<br>
> File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 3500, in<br>
> getIoTuneResponse<br>
> res = self._dom.blockIoTune(<br>
> File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/virdomain.<wbr>py", line 47,<br>
> in __getattr__<br>
> % self.vmid)<br>
> NotConnectedError: VM '4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c' was not defined<br>
> yet or was undefined<br>
><br>
> ==> /var/log/messages <==<br>
> Jan 12 11:58:16 cultivar0 journal: vdsm jsonrpc.JsonRpcServer ERROR Internal<br>
> server error#012Traceback (most recent call last):#012 File<br>
> "/usr/lib/python2.7/site-<wbr>packages/yajsonrpc/__init__.<wbr>py", line 606, in<br>
> _handle_request#012 res = method(**params)#012 File<br>
> "/usr/lib/python2.7/site-<wbr>packages/vdsm/rpc/Bridge.py", line 201, in<br>
> _dynamicMethod#012 result = fn(*methodArgs)#012 File "<string>", line 2,<br>
> in getAllVmIoTunePolicies#012 File<br>
> "/usr/lib/python2.7/site-<wbr>packages/vdsm/common/api.py", line 48, in<br>
> method#012 ret = func(*args, **kwargs)#012 File<br>
> "/usr/lib/python2.7/site-<wbr>packages/vdsm/API.py", line 1354, in<br>
> getAllVmIoTunePolicies#012 io_tune_policies_dict =<br>
> self._cif.<wbr>getAllVmIoTunePolicies()#012 File<br>
> "/usr/lib/python2.7/site-<wbr>packages/vdsm/clientIF.py", line 524, in<br>
> getAllVmIoTunePolicies#012 'current_values': v.getIoTune()}#012 File<br>
> "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 3481, in<br>
> getIoTune#012 result = self.getIoTuneResponse()#012 File<br>
> "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 3500, in<br>
> getIoTuneResponse#012 res = self._dom.blockIoTune(#012 File<br>
> "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/virdomain.<wbr>py", line 47, in<br>
> __getattr__#012 % self.vmid)#<wbr>012NotConnectedError: VM<br>
> '4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c' was not defined yet or was undefined<br>
><br>
> On Fri, Jan 12, 2018 at 11:55 AM, Jayme <<a href="mailto:jaymef@gmail.com">jaymef@gmail.com</a>> wrote:<br>
>><br>
>> One other tidbit I noticed is that there seem to be fewer errors if<br>
>> I start it in paused mode:<br>
>><br>
>> but still shows: Engine status : {"reason": "bad vm<br>
>> status", "health": "bad", "vm": "up", "detail": "Paused"}<br>
>><br>
>> ==> /var/log/messages <==<br>
>> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> blocking state<br>
>> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> disabled state<br>
>> Jan 12 11:55:05 cultivar0 kernel: device vnet4 entered promiscuous mode<br>
>> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> blocking state<br>
>> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> forwarding state<br>
>> Jan 12 11:55:05 cultivar0 lldpad: recvfrom(Event interface): No buffer<br>
>> space available<br>
>> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info> [1515772505.3625]<br>
>> manager: (vnet4): new Tun device<br>
>> (/org/freedesktop/<wbr>NetworkManager/Devices/139)<br>
>><br>
>> ==> /var/log/libvirt/qemu/<wbr>Cultivar.log <==<br>
>> 2018-01-12 15:55:05.370+0000: starting up libvirt version: 3.2.0, package:<br>
>> 14.el7_4.7 (CentOS BuildSystem <<a href="http://bugs.centos.org" rel="noreferrer" target="_blank">http://bugs.centos.org</a>>,<br>
>> 2018-01-04-19:31:34, <a href="http://c1bm.rdu2.centos.org" rel="noreferrer" target="_blank">c1bm.rdu2.centos.org</a>), qemu version:<br>
>> 2.9.0(qemu-kvm-ev-2.9.0-16.<wbr>el7_4.13.1), hostname:<br>
>> <a href="http://cultivar0.grove.silverorange.com" rel="noreferrer" target="_blank">cultivar0.grove.silverorange.<wbr>com</a><br>
>> LC_ALL=C PATH=/usr/local/sbin:/usr/<wbr>local/bin:/usr/sbin:/usr/bin<br>
>> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name<br>
>> guest=Cultivar,debug-threads=<wbr>on -S -object<br>
>> secret,id=masterKey0,format=<wbr>raw,file=/var/lib/libvirt/<wbr>qemu/domain-118-Cultivar/<wbr>master-key.aes<br>
>> -machine pc-i440fx-rhel7.3.0,accel=kvm,<wbr>usb=off,dump-guest-core=off -cpu<br>
>> Conroe -m 8192 -realtime mlock=off -smp<br>
>> 2,maxcpus=16,sockets=16,cores=<wbr>1,threads=1 -uuid<br>
>> 4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c -smbios<br>
>> 'type=1,manufacturer=oVirt,<wbr>product=oVirt<br>
>> Node,version=7-4.1708.el7.<wbr>centos,serial=44454C4C-3300-<wbr>1042-8031-B4C04F4B4831,uuid=<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c'<br>
>> -no-user-config -nodefaults -chardev<br>
>> socket,id=charmonitor,path=/<wbr>var/lib/libvirt/qemu/domain-<wbr>118-Cultivar/monitor.sock,<wbr>server,nowait<br>
>> -mon chardev=charmonitor,id=<wbr>monitor,mode=control -rtc<br>
>> base=2018-01-12T15:55:05,<wbr>driftfix=slew -global<br>
>> kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device<br>
>> piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device<br>
>> virtio-serial-pci,id=virtio-<wbr>serial0,bus=pci.0,addr=0x4 -drive<br>
>> file=/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/c2dde892-f978-<wbr>4dfc-a421-c8e04cf387f9/<wbr>23aa0a66-fa6c-4967-a1e5-<wbr>fbe47c0cd705,format=raw,if=<wbr>none,id=drive-virtio-disk0,<wbr>serial=c2dde892-f978-4dfc-<wbr>a421-c8e04cf387f9,cache=none,<wbr>werror=stop,rerror=stop,aio=<wbr>threads<br>
>> -device<br>
>> virtio-blk-pci,scsi=off,bus=<wbr>pci.0,addr=0x6,drive=drive-<wbr>virtio-disk0,id=virtio-disk0,<wbr>bootindex=1<br>
>> -drive if=none,id=drive-ide0-1-0,<wbr>readonly=on -device<br>
>> ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0 -netdev<br>
>> tap,fd=35,id=hostnet0,vhost=<wbr>on,vhostfd=38 -device<br>
>> virtio-net-pci,netdev=<wbr>hostnet0,id=net0,mac=00:16:3e:<wbr>7f:d6:83,bus=pci.0,addr=0x3<br>
>> -chardev<br>
>> socket,id=charchannel0,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.com.redhat.rhevm.<wbr>vdsm,server,nowait<br>
>> -device<br>
>> virtserialport,bus=virtio-<wbr>serial0.0,nr=1,chardev=<wbr>charchannel0,id=channel0,name=<wbr>com.redhat.rhevm.vdsm<br>
>> -chardev<br>
>> socket,id=charchannel1,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.qemu.guest_<wbr>agent.0,server,nowait<br>
>> -device<br>
>> virtserialport,bus=virtio-<wbr>serial0.0,nr=2,chardev=<wbr>charchannel1,id=channel1,name=<wbr>org.qemu.guest_agent.0<br>
>> -chardev spicevmc,id=charchannel2,name=<wbr>vdagent -device<br>
>> virtserialport,bus=virtio-<wbr>serial0.0,nr=3,chardev=<wbr>charchannel2,id=channel2,name=<wbr>com.redhat.spice.0<br>
>> -chardev<br>
>> socket,id=charchannel3,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.ovirt.hosted-<wbr>engine-setup.0,server,nowait<br>
>> -device<br>
>> virtserialport,bus=virtio-<wbr>serial0.0,nr=4,chardev=<wbr>charchannel3,id=channel3,name=<wbr>org.ovirt.hosted-engine-setup.<wbr>0<br>
>> -chardev pty,id=charconsole0 -device<br>
>> virtconsole,chardev=<wbr>charconsole0,id=console0 -spice<br>
>> tls-port=5904,addr=0,x509-dir=<wbr>/etc/pki/vdsm/libvirt-spice,<wbr>tls-channel=default,seamless-<wbr>migration=on<br>
>> -device cirrus-vga,id=video0,bus=pci.<wbr>0,addr=0x2 -object<br>
>> rng-random,id=objrng0,<wbr>filename=/dev/urandom -device<br>
>> virtio-rng-pci,rng=objrng0,id=<wbr>rng0,bus=pci.0,addr=0x5 -msg timestamp=on<br>
>><br>
>> ==> /var/log/messages <==<br>
>> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info> [1515772505.3689]<br>
>> device (vnet4): state change: unmanaged -> unavailable (reason<br>
>> 'connection-assumed') [10 20 41]<br>
>> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info> [1515772505.3702]<br>
>> device (vnet4): state change: unavailable -> disconnected (reason 'none')<br>
>> [20 30 0]<br>
>> Jan 12 11:55:05 cultivar0 systemd-machined: New machine qemu-118-Cultivar.<br>
>> Jan 12 11:55:05 cultivar0 systemd: Started Virtual Machine<br>
>> qemu-118-Cultivar.<br>
>> Jan 12 11:55:05 cultivar0 systemd: Starting Virtual Machine<br>
>> qemu-118-Cultivar.<br>
>><br>
>> ==> /var/log/libvirt/qemu/<wbr>Cultivar.log <==<br>
>> 2018-01-12T15:55:05.586827Z qemu-kvm: -chardev pty,id=charconsole0: char<br>
>> device redirected to /dev/pts/1 (label charconsole0)<br>
>><br>
>> ==> /var/log/messages <==<br>
>> Jan 12 11:55:05 cultivar0 kvm: 5 guests now active<br>
>><br>
>> On Fri, Jan 12, 2018 at 11:36 AM, Jayme <<a href="mailto:jaymef@gmail.com">jaymef@gmail.com</a>> wrote:<br>
>>><br>
>>> Yeah I am in global maintenance:<br>
>>><br>
>>> state=GlobalMaintenance<br>
>>><br>
>>> host0: {"reason": "vm not running on this host", "health": "bad", "vm":<br>
>>> "down", "detail": "unknown"}<br>
>>> host2: {"reason": "vm not running on this host", "health": "bad", "vm":<br>
>>> "down", "detail": "unknown"}<br>
>>> host3: {"reason": "vm not running on this host", "health": "bad", "vm":<br>
>>> "down", "detail": "unknown"}<br>
>>><br>
>>> I understand the lock is an issue. I'll try to make sure it is fully<br>
>>> stopped on all three before starting, but I don't think that is the issue at<br>
>>> hand either. What concerns me most is that it seems to be unable to read<br>
>>> the metadata; I think that might be the heart of the problem, but I'm not<br>
>>> sure what is causing it.<br>
>>><br>
>>> On Fri, Jan 12, 2018 at 11:33 AM, Martin Sivak <<a href="mailto:msivak@redhat.com">msivak@redhat.com</a>> wrote:<br>
>>>><br>
>>>> > On all three hosts I ran hosted-engine --vm-shutdown; hosted-engine<br>
>>>> > --vm-poweroff<br>
>>>><br>
>>>> Are you in global maintenance? I think you were in one of the previous<br>
>>>> emails, but worth checking.<br>
>>>><br>
>>>> > I started ovirt-ha-broker with systemctl as root user but it does<br>
>>>> > appear to be running under vdsm:<br>
>>>><br>
>>>> That is the correct behavior.<br>
>>>><br>
>>>> > libvirtError: resource busy: Failed to acquire lock: Lease is held by<br>
>>>> > another host<br>
>>>><br>
>>>> sanlock seems to think the VM runs somewhere, and it is possible that<br>
>>>> some other host tried to start the VM as well, unless you are in global<br>
>>>> maintenance (that is why I asked the first question here).<br>
>>>><br>
>>>> Martin<br>
>>>><br>
>>>> On Fri, Jan 12, 2018 at 4:28 PM, Jayme <<a href="mailto:jaymef@gmail.com">jaymef@gmail.com</a>> wrote:<br>
>>>> > Martin,<br>
>>>> ><br>
>>>> > Thanks so much for sticking with me, this is driving me crazy! I<br>
>>>> > really do<br>
>>>> > appreciate it, thanks again<br>
>>>> ><br>
>>>> > Let's go through this:<br>
>>>> ><br>
>>>> > HE VM is down - YES<br>
>>>> ><br>
>>>> ><br>
>>>> > HE agent fails when opening metadata using the symlink - YES<br>
>>>> ><br>
>>>> ><br>
>>>> > the symlink is there and readable by vdsm:kvm - it appears to be:<br>
>>>> ><br>
>>>> ><br>
>>>> > lrwxrwxrwx. 1 vdsm kvm 159 Jan 10 21:20<br>
>>>> > 14a20941-1b84-4b82-be8f-<wbr>ace38d7c037a<br>
>>>> > -><br>
>>>> ><br>
>>>> > /rhev/data-center/mnt/<wbr>cultivar0.grove.silverorange.<wbr>com:_exports_hosted__engine/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/images/14a20941-<wbr>1b84-4b82-be8f-ace38d7c037a<br>
>>>> ><br>
>>>> ><br>
>>>> > And the files in the linked directory exist and have vdsm:kvm perms as<br>
>>>> > well:<br>
>>>> ><br>
>>>> ><br>
>>>> > # cd<br>
>>>> ><br>
>>>> > /rhev/data-center/mnt/<wbr>cultivar0.grove.silverorange.<wbr>com:_exports_hosted__engine/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/images/14a20941-<wbr>1b84-4b82-be8f-ace38d7c037a<br>
>>>> ><br>
>>>> > [root@cultivar0 14a20941-1b84-4b82-be8f-<wbr>ace38d7c037a]# ls -al<br>
>>>> ><br>
>>>> > total 2040<br>
>>>> ><br>
>>>> > drwxr-xr-x. 2 vdsm kvm 4096 Jan 12 10:51 .<br>
>>>> ><br>
>>>> > drwxr-xr-x. 8 vdsm kvm 4096 Feb 3 2016 ..<br>
>>>> ><br>
>>>> > -rw-rw----. 1 vdsm kvm 1028096 Jan 12 11:19<br>
>>>> > 8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8<br>
>>>> ><br>
>>>> > -rw-rw----. 1 vdsm kvm 1048576 Feb 3 2016<br>
>>>> > 8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8.lease<br>
>>>> ><br>
>>>> > -rw-r--r--. 1 vdsm kvm 283 Feb 3 2016<br>
>>>> > 8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8.meta<br>
>>>> ><br>
>>>> ><br>
>>>> > I started ovirt-ha-broker with systemctl as root user but it does<br>
>>>> > appear to<br>
>>>> > be running under vdsm:<br>
>>>> ><br>
>>>> ><br>
>>>> > vdsm 16928 0.6 0.0 1618244 43328 ? Ssl 10:33 0:18<br>
>>>> > /usr/bin/python /usr/share/ovirt-hosted-<wbr>engine-ha/ovirt-ha-broker<br>
>>>> ><br>
>>>> ><br>
>>>> ><br>
>>>> > Here is something I tried:<br>
>>>> ><br>
>>>> ><br>
>>>> > - On all three hosts I ran hosted-engine --vm-shutdown; hosted-engine<br>
>>>> > --vm-poweroff<br>
>>>> ><br>
>>>> > - On HOST0 (cultivar0) I disconnected and reconnected storage using<br>
>>>> > hosted-engine<br>
>>>> ><br>
>>>> > - Tried starting up the hosted VM on cultivar0 while tailing the logs:<br>
>>>> ><br>
>>>> ><br>
>>>> > # hosted-engine --vm-start<br>
>>>> ><br>
>>>> > VM exists and is down, cleaning up and restarting<br>
>>>> ><br>
>>>> ><br>
>>>> ><br>
>>>> > ==> /var/log/vdsm/vdsm.log <==<br>
>>>> ><br>
>>>> > jsonrpc/2::ERROR::2018-01-12<br>
>>>> > 11:27:27,194::vm::1766::virt.<wbr>vm::(_getRunningVmStats)<br>
>>>> > (vmId='4013c829-c9d7-4b72-<wbr>90d5-6fe58137504c') Error fetching vm stats<br>
>>>> ><br>
>>>> > Traceback (most recent call last):<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 1762,<br>
>>>> > in<br>
>>>> > _getRunningVmStats<br>
>>>> ><br>
>>>> > vm_sample.interval)<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vmstats.py"<wbr>, line<br>
>>>> > 45, in<br>
>>>> > produce<br>
>>>> ><br>
>>>> > networks(vm, stats, first_sample, last_sample, interval)<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vmstats.py"<wbr>, line<br>
>>>> > 322, in<br>
>>>> > networks<br>
>>>> ><br>
>>>> > if nic.name.startswith('hostdev')<wbr>:<br>
>>>> ><br>
>>>> > AttributeError: name<br>
>>>> ><br>
>>>> > jsonrpc/3::ERROR::2018-01-12<br>
>>>> > 11:27:27,221::__init__::611::<wbr>jsonrpc.JsonRpcServer::(_<wbr>handle_request)<br>
>>>> > Internal server error<br>
>>>> ><br>
>>>> > Traceback (most recent call last):<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/yajsonrpc/__init__.<wbr>py", line<br>
>>>> > 606,<br>
>>>> > in _handle_request<br>
>>>> ><br>
>>>> > res = method(**params)<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/rpc/Bridge.py", line<br>
>>>> > 201, in<br>
>>>> > _dynamicMethod<br>
>>>> ><br>
>>>> > result = fn(*methodArgs)<br>
>>>> ><br>
>>>> > File "<string>", line 2, in getAllVmIoTunePolicies<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/common/api.py", line 48,<br>
>>>> > in<br>
>>>> > method<br>
>>>> ><br>
>>>> > ret = func(*args, **kwargs)<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/API.py", line 1354, in<br>
>>>> > getAllVmIoTunePolicies<br>
>>>> ><br>
>>>> > io_tune_policies_dict = self._cif.<wbr>getAllVmIoTunePolicies()<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/clientIF.py", line 524,<br>
>>>> > in<br>
>>>> > getAllVmIoTunePolicies<br>
>>>> ><br>
>>>> > 'current_values': v.getIoTune()}<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 3481,<br>
>>>> > in<br>
>>>> > getIoTune<br>
>>>> ><br>
>>>> > result = self.getIoTuneResponse()<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 3500,<br>
>>>> > in<br>
>>>> > getIoTuneResponse<br>
>>>> ><br>
>>>> > res = self._dom.blockIoTune(<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/virdomain.<wbr>py", line<br>
>>>> > 47,<br>
>>>> > in __getattr__<br>
>>>> ><br>
>>>> > % self.vmid)<br>
>>>> ><br>
>>>> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c' was not<br>
>>>> > defined<br>
>>>> > yet or was undefined<br>
>>>> ><br>
>>>> ><br>
>>>> > ==> /var/log/messages <==<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 journal: vdsm jsonrpc.JsonRpcServer ERROR<br>
>>>> > Internal<br>
>>>> > server error#012Traceback (most recent call last):#012 File<br>
>>>> > "/usr/lib/python2.7/site-<wbr>packages/yajsonrpc/__init__.<wbr>py", line 606, in<br>
>>>> > _handle_request#012 res = method(**params)#012 File<br>
>>>> > "/usr/lib/python2.7/site-<wbr>packages/vdsm/rpc/Bridge.py", line 201, in<br>
>>>> > _dynamicMethod#012 result = fn(*methodArgs)#012 File "<string>",<br>
>>>> > line 2,<br>
>>>> > in getAllVmIoTunePolicies#012 File<br>
>>>> > "/usr/lib/python2.7/site-<wbr>packages/vdsm/common/api.py", line 48, in<br>
>>>> > method#012 ret = func(*args, **kwargs)#012 File<br>
>>>> > "/usr/lib/python2.7/site-<wbr>packages/vdsm/API.py", line 1354, in<br>
>>>> > getAllVmIoTunePolicies#012 io_tune_policies_dict =<br>
>>>> > self._cif.<wbr>getAllVmIoTunePolicies()#012 File<br>
>>>> > "/usr/lib/python2.7/site-<wbr>packages/vdsm/clientIF.py", line 524, in<br>
>>>> > getAllVmIoTunePolicies#012 'current_values': v.getIoTune()}#012<br>
>>>> > File<br>
>>>> > "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 3481, in<br>
>>>> > getIoTune#012 result = self.getIoTuneResponse()#012 File<br>
>>>> > "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 3500, in<br>
>>>> > getIoTuneResponse#012 res = self._dom.blockIoTune(#012 File<br>
>>>> > "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/virdomain.<wbr>py", line 47, in<br>
>>>> > __getattr__#012 % self.vmid)#<wbr>012NotConnectedError: VM<br>
>>>> > '4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c' was not defined yet or was<br>
>>>> > undefined<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>>>> > blocking<br>
>>>> > state<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>>>> > disabled<br>
>>>> > state<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 kernel: device vnet4 entered promiscuous<br>
>>>> > mode<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>>>> > blocking<br>
>>>> > state<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>>>> > forwarding state<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 lldpad: recvfrom(Event interface): No buffer<br>
>>>> > space<br>
>>>> > available<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info><br>
>>>> > [1515770847.4264]<br>
>>>> > manager: (vnet4): new Tun device<br>
>>>> > (/org/freedesktop/<wbr>NetworkManager/Devices/135)<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info><br>
>>>> > [1515770847.4342]<br>
>>>> > device (vnet4): state change: unmanaged -> unavailable (reason<br>
>>>> > 'connection-assumed') [10 20 41]<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info><br>
>>>> > [1515770847.4353]<br>
>>>> > device (vnet4): state change: unavailable -> disconnected (reason<br>
>>>> > 'none')<br>
>>>> > [20 30 0]<br>
>>>> ><br>
>>>> ><br>
>>>> > ==> /var/log/libvirt/qemu/<wbr>Cultivar.log <==<br>
>>>> ><br>
>>>> > 2018-01-12 15:27:27.435+0000: starting up libvirt version: 3.2.0,<br>
>>>> > package:<br>
>>>> > 14.el7_4.7 (CentOS BuildSystem <<a href="http://bugs.centos.org" rel="noreferrer" target="_blank">http://bugs.centos.org</a>>,<br>
>>>> > 2018-01-04-19:31:34, <a href="http://c1bm.rdu2.centos.org" rel="noreferrer" target="_blank">c1bm.rdu2.centos.org</a>), qemu version:<br>
>>>> > 2.9.0(qemu-kvm-ev-2.9.0-16.<wbr>el7_4.13.1), hostname:<br>
>>>> > <a href="http://cultivar0.grove.silverorange.com" rel="noreferrer" target="_blank">cultivar0.grove.silverorange.<wbr>com</a><br>
>>>> ><br>
>>>> > LC_ALL=C PATH=/usr/local/sbin:/usr/<wbr>local/bin:/usr/sbin:/usr/bin<br>
>>>> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name<br>
>>>> > guest=Cultivar,debug-threads=<wbr>on -S -object<br>
>>>> ><br>
>>>> > secret,id=masterKey0,format=<wbr>raw,file=/var/lib/libvirt/<wbr>qemu/domain-114-Cultivar/<wbr>master-key.aes<br>
>>>> > -machine pc-i440fx-rhel7.3.0,accel=kvm,<wbr>usb=off,dump-guest-core=off<br>
>>>> > -cpu<br>
>>>> > Conroe -m 8192 -realtime mlock=off -smp<br>
>>>> > 2,maxcpus=16,sockets=16,cores=<wbr>1,threads=1 -uuid<br>
>>>> > 4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c -smbios<br>
>>>> > 'type=1,manufacturer=oVirt,<wbr>product=oVirt<br>
>>>> ><br>
>>>> > Node,version=7-4.1708.el7.<wbr>centos,serial=44454C4C-3300-<wbr>1042-8031-B4C04F4B4831,uuid=<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c'<br>
>>>> > -no-user-config -nodefaults -chardev<br>
>>>> ><br>
>>>> > socket,id=charmonitor,path=/<wbr>var/lib/libvirt/qemu/domain-<wbr>114-Cultivar/monitor.sock,<wbr>server,nowait<br>
>>>> > -mon chardev=charmonitor,id=<wbr>monitor,mode=control -rtc<br>
>>>> > base=2018-01-12T15:27:27,<wbr>driftfix=slew -global<br>
>>>> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on<br>
>>>> > -device<br>
>>>> > piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device<br>
>>>> > virtio-serial-pci,id=virtio-<wbr>serial0,bus=pci.0,addr=0x4 -drive<br>
>>>> ><br>
>>>> > file=/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/c2dde892-f978-<wbr>4dfc-a421-c8e04cf387f9/<wbr>23aa0a66-fa6c-4967-a1e5-<wbr>fbe47c0cd705,format=raw,if=<wbr>none,id=drive-virtio-disk0,<wbr>serial=c2dde892-f978-4dfc-<wbr>a421-c8e04cf387f9,cache=none,<wbr>werror=stop,rerror=stop,aio=<wbr>threads<br>
>>>> > -device<br>
>>>> ><br>
>>>> > virtio-blk-pci,scsi=off,bus=<wbr>pci.0,addr=0x6,drive=drive-<wbr>virtio-disk0,id=virtio-disk0,<wbr>bootindex=1<br>
>>>> > -drive if=none,id=drive-ide0-1-0,<wbr>readonly=on -device<br>
>>>> > ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0 -netdev<br>
>>>> > tap,fd=35,id=hostnet0,vhost=<wbr>on,vhostfd=38 -device<br>
>>>> ><br>
>>>> > virtio-net-pci,netdev=<wbr>hostnet0,id=net0,mac=00:16:3e:<wbr>7f:d6:83,bus=pci.0,addr=0x3<br>
>>>> > -chardev<br>
>>>> ><br>
>>>> > socket,id=charchannel0,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.com.redhat.rhevm.<wbr>vdsm,server,nowait<br>
>>>> > -device<br>
>>>> ><br>
>>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=1,chardev=<wbr>charchannel0,id=channel0,name=<wbr>com.redhat.rhevm.vdsm<br>
>>>> > -chardev<br>
>>>> ><br>
>>>> > socket,id=charchannel1,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.qemu.guest_<wbr>agent.0,server,nowait<br>
>>>> > -device<br>
>>>> ><br>
>>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=2,chardev=<wbr>charchannel1,id=channel1,name=<wbr>org.qemu.guest_agent.0<br>
>>>> > -chardev spicevmc,id=charchannel2,name=<wbr>vdagent -device<br>
>>>> ><br>
>>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=3,chardev=<wbr>charchannel2,id=channel2,name=<wbr>com.redhat.spice.0<br>
>>>> > -chardev<br>
>>>> ><br>
>>>> > socket,id=charchannel3,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.ovirt.hosted-<wbr>engine-setup.0,server,nowait<br>
>>>> > -device<br>
>>>> ><br>
>>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=4,chardev=<wbr>charchannel3,id=channel3,name=<wbr>org.ovirt.hosted-engine-setup.<wbr>0<br>
>>>> > -chardev pty,id=charconsole0 -device<br>
>>>> > virtconsole,chardev=<wbr>charconsole0,id=console0 -spice<br>
>>>> ><br>
>>>> > tls-port=5904,addr=0,x509-dir=<wbr>/etc/pki/vdsm/libvirt-spice,<wbr>tls-channel=default,seamless-<wbr>migration=on<br>
>>>> > -device cirrus-vga,id=video0,bus=pci.<wbr>0,addr=0x2 -object<br>
>>>> > rng-random,id=objrng0,<wbr>filename=/dev/urandom -device<br>
>>>> > virtio-rng-pci,rng=objrng0,id=<wbr>rng0,bus=pci.0,addr=0x5 -msg<br>
>>>> > timestamp=on<br>
>>>> ><br>
>>>> ><br>
>>>> > ==> /var/log/messages <==<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 systemd-machined: New machine<br>
>>>> > qemu-114-Cultivar.<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 systemd: Started Virtual Machine<br>
>>>> > qemu-114-Cultivar.<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 systemd: Starting Virtual Machine<br>
>>>> > qemu-114-Cultivar.<br>
>>>> ><br>
>>>> ><br>
>>>> > ==> /var/log/libvirt/qemu/<wbr>Cultivar.log <==<br>
>>>> ><br>
>>>> > 2018-01-12T15:27:27.651669Z qemu-kvm: -chardev pty,id=charconsole0:<br>
>>>> > char<br>
>>>> > device redirected to /dev/pts/2 (label charconsole0)<br>
>>>> ><br>
>>>> ><br>
>>>> > ==> /var/log/messages <==<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 kvm: 5 guests now active<br>
>>>> ><br>
>>>> ><br>
>>>> > ==> /var/log/libvirt/qemu/<wbr>Cultivar.log <==<br>
>>>> ><br>
>>>> > 2018-01-12 15:27:27.773+0000: shutting down, reason=failed<br>
>>>> ><br>
>>>> ><br>
>>>> > ==> /var/log/messages <==<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 libvirtd: 2018-01-12 15:27:27.773+0000:<br>
>>>> > 1910:<br>
>>>> > error : virLockManagerSanlockAcquire:<wbr>1041 : resource busy: Failed to<br>
>>>> > acquire<br>
>>>> > lock: Lease is held by another host<br>
>>>> ><br>
>>>> ><br>
>>>> > ==> /var/log/libvirt/qemu/<wbr>Cultivar.log <==<br>
>>>> ><br>
>>>> > 2018-01-12T15:27:27.776135Z qemu-kvm: terminating on signal 15 from<br>
>>>> > pid 1773<br>
>>>> > (/usr/sbin/libvirtd)<br>
>>>> ><br>
>>>> ><br>
>>>> > ==> /var/log/messages <==<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>>>> > disabled<br>
>>>> > state<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 kernel: device vnet4 left promiscuous mode<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>>>> > disabled<br>
>>>> > state<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info><br>
>>>> > [1515770847.7989]<br>
>>>> > device (vnet4): state change: disconnected -> unmanaged (reason<br>
>>>> > 'unmanaged')<br>
>>>> > [30 10 3]<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info><br>
>>>> > [1515770847.7989]<br>
>>>> > device (vnet4): released from master device ovirtmgmt<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 kvm: 4 guests now active<br>
>>>> ><br>
>>>> > Jan 12 11:27:27 cultivar0 systemd-machined: Machine qemu-114-Cultivar<br>
>>>> > terminated.<br>
>>>> ><br>
>>>> ><br>
>>>> > ==> /var/log/vdsm/vdsm.log <==<br>
>>>> ><br>
>>>> > vm/4013c829::ERROR::2018-01-12<br>
>>>> > 11:27:28,001::vm::914::virt.<wbr>vm::(_startUnderlyingVm)<br>
>>>> > (vmId='4013c829-c9d7-4b72-<wbr>90d5-6fe58137504c') The vm start process<br>
>>>> > failed<br>
>>>> ><br>
>>>> > Traceback (most recent call last):<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 843,<br>
>>>> > in<br>
>>>> > _startUnderlyingVm<br>
>>>> ><br>
>>>> > self._run()<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line 2721,<br>
>>>> > in<br>
>>>> > _run<br>
>>>> ><br>
>>>> > dom.createWithFlags(flags)<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/<wbr>libvirtconnection.py",<br>
>>>> > line<br>
>>>> > 126, in wrapper<br>
>>>> ><br>
>>>> > ret = f(*args, **kwargs)<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/utils.py", line 512, in<br>
>>>> > wrapper<br>
>>>> ><br>
>>>> > return func(inst, *args, **kwargs)<br>
>>>> ><br>
>>>> > File "/usr/lib64/python2.7/site-<wbr>packages/libvirt.py", line 1069, in<br>
>>>> > createWithFlags<br>
>>>> ><br>
>>>> > if ret == -1: raise libvirtError ('virDomainCreateWithFlags()<br>
>>>> > failed',<br>
>>>> > dom=self)<br>
>>>> ><br>
>>>> > libvirtError: resource busy: Failed to acquire lock: Lease is held by<br>
>>>> > another host<br>
>>>> ><br>
>>>> > periodic/47::ERROR::2018-01-12<br>
>>>> > 11:27:32,858::periodic::215::<wbr>virt.periodic.Operation::(__<wbr>call__)<br>
>>>> > <vdsm.virt.sampling.<wbr>VMBulkstatsMonitor object at 0x3692590> operation<br>
>>>> > failed<br>
>>>> ><br>
>>>> > Traceback (most recent call last):<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/periodic.<wbr>py", line<br>
>>>> > 213,<br>
>>>> > in __call__<br>
>>>> ><br>
>>>> > self._func()<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/sampling.<wbr>py", line<br>
>>>> > 522,<br>
>>>> > in __call__<br>
>>>> ><br>
>>>> > self._send_metrics()<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/sampling.<wbr>py", line<br>
>>>> > 538,<br>
>>>> > in _send_metrics<br>
>>>> ><br>
>>>> > vm_sample.interval)<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vmstats.py"<wbr>, line<br>
>>>> > 45, in<br>
>>>> > produce<br>
>>>> ><br>
>>>> > networks(vm, stats, first_sample, last_sample, interval)<br>
>>>> ><br>
>>>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vmstats.py"<wbr>, line<br>
>>>> > 322, in<br>
>>>> > networks<br>
>>>> ><br>
>>>> > if nic.name.startswith('hostdev')<wbr>:<br>
>>>> ><br>
>>>> > AttributeError: name<br>
>>>> ><br>
>>>> ><br>
>>>> > On Fri, Jan 12, 2018 at 11:14 AM, Martin Sivak <<a href="mailto:msivak@redhat.com">msivak@redhat.com</a>><br>
>>>> > wrote:<br>
>>>> >><br>
>>>> >> Hmm that rules out most of NFS related permission issues.<br>
>>>> >><br>
>>>> >> So the current status is (I need to sum it up to get the full<br>
>>>> >> picture):<br>
>>>> >><br>
>>>> >> - HE VM is down<br>
>>>> >> - HE agent fails when opening metadata using the symlink<br>
>>>> >> - the symlink is there<br>
>>>> >> - the symlink is readable by vdsm:kvm<br>
>>>> >><br>
>>>> >> Hmm, can you check which user ovirt-ha-broker is started under?<br>
>>>> >><br>
>>>> >> Martin<br>
>>>> >><br>
>>>> >><br>
>>>> >> On Fri, Jan 12, 2018 at 4:10 PM, Jayme <<a href="mailto:jaymef@gmail.com">jaymef@gmail.com</a>> wrote:<br>
>>>> >> > The same thing happens with data images of other VMs as well, though,<br>
>>>> >> > and those seem to be running ok, so I'm not sure if it's the problem.<br>
>>>> >> ><br>
>>>> >> > On Fri, Jan 12, 2018 at 11:08 AM, Jayme <<a href="mailto:jaymef@gmail.com">jaymef@gmail.com</a>> wrote:<br>
>>>> >> >><br>
>>>> >> >> Martin,<br>
>>>> >> >><br>
>>>> >> >> I can as the VDSM user but not as root. I get permission denied<br>
>>>> >> >> trying to<br>
>>>> >> >> touch one of the files as root; is that normal?<br>
>>>> >> >><br>
>>>> >> >> On Fri, Jan 12, 2018 at 11:03 AM, Martin Sivak <<a href="mailto:msivak@redhat.com">msivak@redhat.com</a>><br>
>>>> >> >> wrote:<br>
>>>> >> >>><br>
>>>> >> >>> Hmm, then it might be a permission issue indeed. Can you touch<br>
>>>> >> >>> the<br>
>>>> >> >>> file? Open it? (try hexdump) Just to make sure NFS does not<br>
>>>> >> >>> prevent<br>
>>>> >> >>> you from doing that.<br>
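>>>> >> >>><br>
>>>> >> >>> Something along these lines, run as the vdsm user (substitute the full<br>
>>>> >> >>> path of the metadata file from your earlier ls output for <metadata file>):<br>
>>>> >> >>><br>
>>>> >> >>>   sudo -u vdsm touch <metadata file><br>
>>>> >> >>>   sudo -u vdsm hexdump -C <metadata file> | head<br>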
>>>> >> >>><br>
>>>> >> >>> Martin<br>
>>>> >> >>><br>
>>>> >> >>> On Fri, Jan 12, 2018 at 3:57 PM, Jayme <<a href="mailto:jaymef@gmail.com">jaymef@gmail.com</a>> wrote:<br>
>>>> >> >>> > Sorry, I think we got confused about the symlink; there are symlinks<br>
>>>> >> >>> > in /var/run that point to /rhev. When I was doing an ls, it was listing<br>
>>>> >> >>> > the files in /rhev.<br>
>>>> >> >>> ><br>
>>>> >> >>> > /var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286<br>
>>>> >> >>> ><br>
>>>> >> >>> > 14a20941-1b84-4b82-be8f-<wbr>ace38d7c037a -><br>
>>>> >> >>> ><br>
>>>> >> >>> ><br>
>>>> >> >>> ><br>
>>>> >> >>> > /rhev/data-center/mnt/<wbr>cultivar0.grove.silverorange.<wbr>com:_exports_hosted__engine/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/images/14a20941-<wbr>1b84-4b82-be8f-ace38d7c037a<br>
>>>> >> >>> ><br>
>>>> >> >>> > ls -al<br>
>>>> >> >>> ><br>
>>>> >> >>> ><br>
>>>> >> >>> ><br>
>>>> >> >>> > /rhev/data-center/mnt/<wbr>cultivar0.grove.silverorange.<wbr>com:_exports_hosted__engine/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/images/14a20941-<wbr>1b84-4b82-be8f-ace38d7c037a<br>
>>>> >> >>> > total 2040<br>
>>>> >> >>> > drwxr-xr-x. 2 vdsm kvm 4096 Jan 12 10:51 .<br>
>>>> >> >>> > drwxr-xr-x. 8 vdsm kvm 4096 Feb 3 2016 ..<br>
>>>> >> >>> > -rw-rw----. 1 vdsm kvm 1028096 Jan 12 10:56<br>
>>>> >> >>> > 8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8<br>
>>>> >> >>> > -rw-rw----. 1 vdsm kvm 1048576 Feb 3 2016<br>
>>>> >> >>> > 8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8.lease<br>
>>>> >> >>> > -rw-r--r--. 1 vdsm kvm 283 Feb 3 2016<br>
>>>> >> >>> > 8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8.meta<br>
>>>> >> >>> ><br>
>>>> >> >>> > Is it possible that this is the wrong image for hosted engine?<br>
>>>> >> >>> ><br>
>>>> >> >>> > This is all I get in the vdsm log when running hosted-engine<br>
>>>> >> >>> > --connect-storage:<br>
>>>> >> >>> ><br>
>>>> >> >>> > jsonrpc/4::ERROR::2018-01-12<br>
>>>> >> >>> ><br>
>>>> >> >>> ><br>
>>>> >> >>> > 10:52:53,019::__init__::611::<wbr>jsonrpc.JsonRpcServer::(_<wbr>handle_request)<br>
>>>> >> >>> > Internal server error<br>
>>>> >> >>> > Traceback (most recent call last):<br>
>>>> >> >>> > File<br>
>>>> >> >>> > "/usr/lib/python2.7/site-<wbr>packages/yajsonrpc/__init__.<wbr>py",<br>
>>>> >> >>> > line<br>
>>>> >> >>> > 606,<br>
>>>> >> >>> > in _handle_request<br>
>>>> >> >>> > res = method(**params)<br>
>>>> >> >>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/rpc/Bridge.py",<br>
>>>> >> >>> > line<br>
>>>> >> >>> > 201,<br>
>>>> >> >>> > in<br>
>>>> >> >>> > _dynamicMethod<br>
>>>> >> >>> > result = fn(*methodArgs)<br>
>>>> >> >>> > File "<string>", line 2, in getAllVmIoTunePolicies<br>
>>>> >> >>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/common/api.py",<br>
>>>> >> >>> > line<br>
>>>> >> >>> > 48,<br>
>>>> >> >>> > in<br>
>>>> >> >>> > method<br>
>>>> >> >>> > ret = func(*args, **kwargs)<br>
>>>> >> >>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/API.py", line<br>
>>>> >> >>> > 1354, in<br>
>>>> >> >>> > getAllVmIoTunePolicies<br>
>>>> >> >>> > io_tune_policies_dict = self._cif.<wbr>getAllVmIoTunePolicies()<br>
>>>> >> >>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/clientIF.py",<br>
>>>> >> >>> > line<br>
>>>> >> >>> > 524,<br>
>>>> >> >>> > in<br>
>>>> >> >>> > getAllVmIoTunePolicies<br>
>>>> >> >>> > 'current_values': v.getIoTune()}<br>
>>>> >> >>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line<br>
>>>> >> >>> > 3481,<br>
>>>> >> >>> > in<br>
>>>> >> >>> > getIoTune<br>
>>>> >> >>> > result = self.getIoTuneResponse()<br>
>>>> >> >>> > File "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/vm.py", line<br>
>>>> >> >>> > 3500,<br>
>>>> >> >>> > in<br>
>>>> >> >>> > getIoTuneResponse<br>
>>>> >> >>> > res = self._dom.blockIoTune(<br>
>>>> >> >>> > File<br>
>>>> >> >>> > "/usr/lib/python2.7/site-<wbr>packages/vdsm/virt/virdomain.<wbr>py",<br>
>>>> >> >>> > line<br>
>>>> >> >>> > 47,<br>
>>>> >> >>> > in __getattr__<br>
>>>> >> >>> > % self.vmid)<br>
>>>> >> >>> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c'<br>
>>>> >> >>> > was not<br>
>>>> >> >>> > defined<br>
>>>> >> >>> > yet or was undefined<br>
>>>> >> >>> ><br>
>>>> >> >>> > On Fri, Jan 12, 2018 at 10:48 AM, Martin Sivak<br>
>>>> >> >>> > <<a href="mailto:msivak@redhat.com">msivak@redhat.com</a>><br>
>>>> >> >>> > wrote:<br>
>>>> >> >>> >><br>
>>>> >> >>> >> Hi,<br>
>>>> >> >>> >><br>
>>>> >> >>> >> what happens when you try hosted-engine --connect-storage? Do<br>
>>>> >> >>> >> you<br>
>>>> >> >>> >> see<br>
>>>> >> >>> >> any errors in the vdsm log?<br>
>>>> >> >>> >><br>
>>>> >> >>> >> Best regards<br>
>>>> >> >>> >><br>
>>>> >> >>> >> Martin Sivak<br>
>>>> >> >>> >><br>
>>>> >> >>> >> On Fri, Jan 12, 2018 at 3:41 PM, Jayme <<a href="mailto:jaymef@gmail.com">jaymef@gmail.com</a>><br>
>>>> >> >>> >> wrote:<br>
>>>> >> >>> >> > Ok this is what I've done:<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > - All three hosts in global maintenance mode<br>
>>>> >> >>> >> > - Ran: systemctl stop ovirt-ha-broker; systemctl stop<br>
>>>> >> >>> >> > ovirt-ha-broker --<br>
>>>> >> >>> >> > on<br>
>>>> >> >>> >> > all three hosts<br>
>>>> >> >>> >> > - Moved ALL files in<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > /var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/14a20941-1b84-<wbr>4b82-be8f-ace38d7c037a/<br>
>>>> >> >>> >> > to<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > /var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/14a20941-1b84-<wbr>4b82-be8f-ace38d7c037a/backup<br>
>>>> >> >>> >> > - Ran: systemctl start ovirt-ha-broker; systemctl start<br>
>>>> >> >>> >> > ovirt-ha-broker<br>
>>>> >> >>> >> > --<br>
>>>> >> >>> >> > on all three hosts<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > - attempt start of engine vm from HOST0 (cultivar0):<br>
>>>> >> >>> >> > hosted-engine<br>
>>>> >> >>> >> > --vm-start<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > There are still lots of errors in the logs; it appears to be having<br>
>>>> >> >>> >> > problems with that directory:<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > Jan 12 10:40:13 cultivar0 journal: ovirt-ha-broker<br>
>>>> >> >>> >> > ovirt_hosted_engine_ha.broker.<wbr>storage_broker.StorageBroker<br>
>>>> >> >>> >> > ERROR<br>
>>>> >> >>> >> > Failed<br>
>>>> >> >>> >> > to<br>
>>>> >> >>> >> > write metadata for host 1 to<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > /var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/14a20941-1b84-<wbr>4b82-be8f-ace38d7c037a/<wbr>8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8#012Traceback<br>
>>>> >> >>> >> > (most recent call last):#012 File<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > "/usr/lib/python2.7/site-<wbr>packages/ovirt_hosted_engine_<wbr>ha/broker/storage_broker.py",<br>
>>>> >> >>> >> > line 202, in put_stats#012 f = os.open(path, direct_flag<br>
>>>> >> >>> >> > |<br>
>>>> >> >>> >> > os.O_WRONLY |<br>
>>>> >> >>> >> > os.O_SYNC)#012OSError: [Errno 2] No such file or directory:<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > '/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/14a20941-1b84-<wbr>4b82-be8f-ace38d7c037a/<wbr>8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8'<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > There are no new files or symlinks in<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > /var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/14a20941-1b84-<wbr>4b82-be8f-ace38d7c037a/<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > - Jayme<br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> > On Fri, Jan 12, 2018 at 10:23 AM, Martin Sivak<br>
>>>> >> >>> >> > <<a href="mailto:msivak@redhat.com">msivak@redhat.com</a>><br>
>>>> >> >>> >> > wrote:<br>
>>>> >> >>> >> >><br>
>>>> >> >>> >> >> > Can you please stop all hosted engine tooling (<br>
>>>> >> >>> >> >><br>
>>>> >> >>> >> >> On all hosts I should have added.<br>
>>>> >> >>> >> >><br>
>>>> >> >>> >> >> Martin<br>
>>>> >> >>> >> >><br>
>>>> >> >>> >> >> On Fri, Jan 12, 2018 at 3:22 PM, Martin Sivak<br>
>>>> >> >>> >> >> <<a href="mailto:msivak@redhat.com">msivak@redhat.com</a>><br>
>>>> >> >>> >> >> wrote:<br>
>>>> >> >>> >> >> >> RequestError: failed to read metadata: [Errno 2] No such<br>
>>>> >> >>> >> >> >> file<br>
>>>> >> >>> >> >> >> or<br>
>>>> >> >>> >> >> >> directory:<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> '/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/14a20941-1b84-<wbr>4b82-be8f-ace38d7c037a/<wbr>8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8'<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> ls -al<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> /var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/14a20941-1b84-<wbr>4b82-be8f-ace38d7c037a/<wbr>8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8<br>
>>>> >> >>> >> >> >> -rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> /var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/14a20941-1b84-<wbr>4b82-be8f-ace38d7c037a/<wbr>8582bdfc-ef54-47af-9f1e-<wbr>f5b7ec1f1cf8<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> Is this due to the symlink problem you guys are<br>
>>>> >> >>> >> >> >> referring to<br>
>>>> >> >>> >> >> >> that<br>
>>>> >> >>> >> >> >> was<br>
>>>> >> >>> >> >> >> addressed in RC1 or something else?<br>
>>>> >> >>> >> >> ><br>
>>>> >> >>> >> >> > No, this file is the symlink. It should point to<br>
>>>> >> >>> >> >> > somewhere<br>
>>>> >> >>> >> >> > inside<br>
>>>> >> >>> >> >> > /rhev/. I see it is a 1G file in your case. That is<br>
>>>> >> >>> >> >> > really<br>
>>>> >> >>> >> >> > interesting.<br>
>>>> >> >>> >> >> ><br>
>>>> >> >>> >> >> > Can you please stop all hosted engine tooling<br>
>>>> >> >>> >> >> > (ovirt-ha-agent,<br>
>>>> >> >>> >> >> > ovirt-ha-broker), move the file (metadata file is not<br>
>>>> >> >>> >> >> > important<br>
>>>> >> >>> >> >> > when<br>
>>>> >> >>> >> >> > services are stopped, but better safe than sorry) and<br>
>>>> >> >>> >> >> > restart<br>
>>>> >> >>> >> >> > all<br>
>>>> >> >>> >> >> > services again?<br>
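>>>> >> >>> >> >> ><br>
>>>> >> >>> >> >> > Roughly, on each host, something like this (just a sketch; substitute<br>
>>>> >> >>> >> >> > the real path of the metadata file for <metadata file>):<br>
>>>> >> >>> >> >> ><br>
>>>> >> >>> >> >> >   systemctl stop ovirt-ha-agent ovirt-ha-broker<br>
>>>> >> >>> >> >> >   mv <metadata file> <metadata file>.bak<br>
>>>> >> >>> >> >> >   systemctl start ovirt-ha-broker ovirt-ha-agent<br>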
>>>> >> >>> >> >> ><br>
>>>> >> >>> >> >> >> Could there possibly be a permissions<br>
>>>> >> >>> >> >> >> problem somewhere?<br>
>>>> >> >>> >> >> ><br>
>>>> >> >>> >> >> > Maybe, but the file itself looks out of the ordinary. I<br>
>>>> >> >>> >> >> > wonder<br>
>>>> >> >>> >> >> > how it<br>
>>>> >> >>> >> >> > got there.<br>
>>>> >> >>> >> >> ><br>
>>>> >> >>> >> >> > Best regards<br>
>>>> >> >>> >> >> ><br>
>>>> >> >>> >> >> > Martin Sivak<br>
>>>> >> >>> >> >> ><br>
>>>> >> >>> >> >> > On Fri, Jan 12, 2018 at 3:09 PM, Jayme <<a href="mailto:jaymef@gmail.com">jaymef@gmail.com</a>><br>
>>>> >> >>> >> >> > wrote:<br>
>>>> >> >>> >> >> >> Thanks for the help thus far. Storage could be related, but all other<br>
>>>> >> >>> >> >> >> VMs on the same storage are running ok. The storage is mounted via NFS<br>
>>>> >> >>> >> >> >> from within one of the three hosts; I realize this is not ideal. This was<br>
>>>> >> >>> >> >> >> set up by a previous admin more as a proof of concept, and VMs were put<br>
>>>> >> >>> >> >> >> on there that should not have been placed in a proof-of-concept<br>
>>>> >> >>> >> >> >> environment. It was intended to be rebuilt with proper storage down the<br>
>>>> >> >>> >> >> >> road.<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> So the storage is on HOST0 and the other hosts mount NFS<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/data          4861742080 1039352832 3822389248 22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_data<br>
>>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/iso           4861742080 1039352832 3822389248 22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_iso<br>
>>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/import_export 4861742080 1039352832 3822389248 22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_import__export<br>
>>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/hosted_engine 4861742080 1039352832 3822389248 22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine<br>
>>>> >> >>> >> >> >><br>
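A rough way to confirm each host can still write to that hosted_engine export as the vdsm user (the test file name here is made up):<br>
<br>
sudo -u vdsm touch "/rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine/write-test-$(hostname -s)" && echo "write ok"<br>
sudo -u vdsm rm -f "/rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine/write-test-$(hostname -s)"<br>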
>>>> >> >>> >> >> >> Like I said, the VM data storage itself seems to be working ok, as all other VMs appear to be running.<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> I'm curious why the broker log says this file is not found when it is correct and I can see the file at that path:<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> RequestError: failed to read metadata: [Errno 2] No such file or directory:<br>
>>>> >> >>> >> >> >> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> ls -al /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8<br>
>>>> >> >>> >> >> >> -rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59 /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> Is this due to the symlink problem you guys are referring to that was addressed in RC1 or something else? Could there possibly be a permissions problem somewhere?<br>
>>>> >> >>> >> >> >><br>
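One way to check whether that path is still the expected symlink into /rhev/ (as described above) or has become a plain file, and what its ownership, mode and SELinux context look like:<br>
<br>
P=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8<br>
file "$P"                    # reports "symbolic link" vs "data"<br>
stat -c '%F %U:%G %a' "$P"   # file type, owner:group, mode<br>
ls -lZ "$P"                  # SELinux context<br>
readlink -f "$P"             # where it points, if it really is a link<br>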
>>>> >> >>> >> >> >> Assuming that all three hosts have 4.2 rpms installed and the hosted engine will not start, is it safe for me to update hosts to 4.2 RC1<br>
>>>> >> >>> >> >> >> rpms? Or perhaps install that repo and *only* update the ovirt HA packages? Assuming that I cannot yet apply the same updates to the<br>
>>>> >> >>> >> >> >> inaccessible hosted engine VM.<br>
>>>> >> >>> >> >> >><br>
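If the goal is only to pick up the HA fix, one cautious option (assuming the 4.2.1 RC repository has already been enabled on the host) is to update just the two packages named above and restart the services, rather than updating everything:<br>
<br>
yum update ovirt-hosted-engine-ha ovirt-hosted-engine-setup<br>
systemctl restart ovirt-ha-broker ovirt-ha-agent<br>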
>>>> >> >>> >> >> >> I should also mention one more thing. I originally upgraded the engine VM first using the new RPMs and then engine-setup. It failed due<br>
>>>> >> >>> >> >> >> to not being in global maintenance, so I set global maintenance and ran it again, which appeared to complete as intended but never came<br>
>>>> >> >>> >> >> >> back up after. Just in case this might have anything at all to do with what could have happened.<br>
>>>> >> >>> >> >> >><br>
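For the record, the usual order for that upgrade step is roughly the following (global maintenance from a host, engine-setup inside the engine VM, then maintenance off again):<br>
<br>
hosted-engine --set-maintenance --mode=global   # on one host<br>
engine-setup                                    # inside the engine VM<br>
hosted-engine --set-maintenance --mode=none     # back on a host, once the engine is up<br>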
>>>> >> >>> >> >> >> Thanks very much again, I very much appreciate the help!<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> - Jayme<br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> >> >> On Fri, Jan 12, 2018 at 8:44 AM, Simone Tiraboschi<br>
>>>> >> >>> >> >> >> <<a href="mailto:stirabos@redhat.com">stirabos@redhat.com</a>><br>
>>>> >> >>> >> >> >> wrote:<br>
>>>> >> >>> >> >> >>><br>
>>>> >> >>> >> >> >>><br>
>>>> >> >>> >> >> >>><br>
>>>> >> >>> >> >> >>> On Fri, Jan 12, 2018 at 11:11 AM, Martin Sivak<br>
>>>> >> >>> >> >> >>> <<a href="mailto:msivak@redhat.com">msivak@redhat.com</a>><br>
>>>> >> >>> >> >> >>> wrote:<br>
>>>> >> >>> >> >> >>>><br>
>>>> >> >>> >> >> >>>> Hi,<br>
>>>> >> >>> >> >> >>>><br>
>>>> >> >>> >> >> >>>> the hosted engine agent issue might be fixed by restarting ovirt-ha-broker or updating to newest ovirt-hosted-engine-ha and -setup.<br>
>>>> >> >>> >> >> >>>> We improved handling of the missing symlink.<br>
>>>> >> >>> >> >> >>><br>
>>>> >> >>> >> >> >>> Available just in oVirt 4.2.1 RC1<br>
>>>> >> >>> >> >> >>><br>
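A plain rpm query is enough to see whether a host already carries the 4.2.1 RC1 build of those packages:<br>
<br>
rpm -q ovirt-hosted-engine-ha ovirt-hosted-engine-setup<br>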
>>>> >> >>> >> >> >>>><br>
>>>> >> >>> >> >> >>>><br>
>>>> >> >>> >> >> >>>> All the other issues seem to point to some storage problem, I am afraid.<br>
>>>> >> >>> >> >> >>>><br>
>>>> >> >>> >> >> >>>> You said you started the VM; do you see it in virsh -r list?<br>
>>>> >> >>> >> >> >>>><br>
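For example (the -r flag opens a read-only libvirt connection, so no credentials are needed):<br>
<br>
virsh -r list --all   # --all also lists domains that are defined but not running<br>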
>>>> >> >>> >> >> >>>> Best regards<br>
>>>> >> >>> >> >> >>>><br>
>>>> >> >>> >> >> >>>> Martin Sivak<br>
>>>> >> >>> >> >> >>>><br>
>>>> >> >>> >> >> >>>> On Thu, Jan 11, 2018 at 10:00 PM, Jayme<br>
>>>> >> >>> >> >> >>>> <<a href="mailto:jaymef@gmail.com">jaymef@gmail.com</a>><br>
>>>> >> >>> >> >> >>>> wrote:<br>
>>>> >> >>> >> >> >>>> > Please help, I'm really not sure what else to try at this point. Thank you for reading!<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > I'm still working on trying to get my hosted engine running after a botched upgrade to 4.2. Storage is NFS mounted from within one of<br>
>>>> >> >>> >> >> >>>> > the hosts. Right now I have 3 CentOS 7 hosts that are fully updated with yum packages from oVirt 4.2; the engine was fully updated with<br>
>>>> >> >>> >> >> >>>> > yum packages and failed to come up after reboot. As of right now, everything should have full yum updates and all hosts have 4.2 rpms.<br>
>>>> >> >>> >> >> >>>> > I have global maintenance mode on right now and started hosted-engine on one of the three hosts, and the status is currently:<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > This is what I get when trying to enter hosted-engine --console:<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > The engine VM is running on this host<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > error: failed to get domain 'HostedEngine'<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > error: Domain not found: no domain with matching name 'HostedEngine'<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > Here are logs from various sources when I start the VM on HOST3:<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > hosted-engine --vm-start<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > Command VM.getStats with args {'vmID': '4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:<br>
>>>> >> >>> >> >> >>>> > (code=1, message=Virtual machine does not exist: {'vmId': u'4013c829-c9d7-4b72-90d5-6fe58137504c'})<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 systemd-machined: New<br>
>>>> >> >>> >> >> >>>> > machine<br>
>>>> >> >>> >> >> >>>> > qemu-110-Cultivar.<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 systemd: Started Virtual<br>
>>>> >> >>> >> >> >>>> > Machine<br>
>>>> >> >>> >> >> >>>> > qemu-110-Cultivar.<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 systemd: Starting Virtual<br>
>>>> >> >>> >> >> >>>> > Machine<br>
>>>> >> >>> >> >> >>>> > qemu-110-Cultivar.<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 kvm: 3 guests now active<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > ==> /var/log/vdsm/vdsm.log <==<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method<br>
>>>> >> >>> >> >> >>>> >     ret = func(*args, **kwargs)<br>
>>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2718, in getStorageDomainInfo<br>
>>>> >> >>> >> >> >>>> >     dom = self.validateSdUUID(sdUUID)<br>
>>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 304, in validateSdUUID<br>
>>>> >> >>> >> >> >>>> >     sdDom.validate()<br>
>>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 515, in validate<br>
>>>> >> >>> >> >> >>>> >     raise se.StorageDomainAccessError(self.sdUUID)<br>
>>>> >> >>> >> >> >>>> > StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > jsonrpc/2::ERROR::2018-01-11 16:55:16,144::dispatcher::82::storage.Dispatcher::(wrapper) FINISH getStorageDomainInfo error=Domain is either partially accessible or entirely inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > LC_ALL=C<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > PATH=/usr/local/sbin:/usr/<wbr>local/bin:/usr/sbin:/usr/bin<br>
>>>> >> >>> >> >> >>>> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name<br>
>>>> >> >>> >> >> >>>> > guest=Cultivar,debug-threads=<wbr>on -S -object<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > secret,id=masterKey0,format=<wbr>raw,file=/var/lib/libvirt/<wbr>qemu/domain-108-Cultivar/<wbr>master-key.aes<br>
>>>> >> >>> >> >> >>>> > -machine<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > pc-i440fx-rhel7.3.0,accel=kvm,<wbr>usb=off,dump-guest-core=off<br>
>>>> >> >>> >> >> >>>> > -cpu<br>
>>>> >> >>> >> >> >>>> > Conroe -m 8192 -realtime mlock=off -smp<br>
>>>> >> >>> >> >> >>>> > 2,maxcpus=16,sockets=16,cores=<wbr>1,threads=1 -uuid<br>
>>>> >> >>> >> >> >>>> > 4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c -smbios<br>
>>>> >> >>> >> >> >>>> > 'type=1,manufacturer=oVirt,<wbr>product=oVirt<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > Node,version=7-4.1708.el7.<wbr>centos,serial=44454C4C-4300-<wbr>1034-8035-CAC04F424331,uuid=<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c'<br>
>>>> >> >>> >> >> >>>> > -no-user-config -nodefaults -chardev<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > socket,id=charmonitor,path=/<wbr>var/lib/libvirt/qemu/domain-<wbr>108-Cultivar/monitor.sock,<wbr>server,nowait<br>
>>>> >> >>> >> >> >>>> > -mon chardev=charmonitor,id=<wbr>monitor,mode=control<br>
>>>> >> >>> >> >> >>>> > -rtc<br>
>>>> >> >>> >> >> >>>> > base=2018-01-11T20:33:19,<wbr>driftfix=slew -global<br>
>>>> >> >>> >> >> >>>> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot<br>
>>>> >> >>> >> >> >>>> > -boot<br>
>>>> >> >>> >> >> >>>> > strict=on<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> > piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtio-serial-pci,id=virtio-<wbr>serial0,bus=pci.0,addr=0x4<br>
>>>> >> >>> >> >> >>>> > -drive<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > file=/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/c2dde892-f978-<wbr>4dfc-a421-c8e04cf387f9/<wbr>23aa0a66-fa6c-4967-a1e5-<wbr>fbe47c0cd705,format=raw,if=<wbr>none,id=drive-virtio-disk0,<wbr>serial=c2dde892-f978-4dfc-<wbr>a421-c8e04cf387f9,cache=none,<wbr>werror=stop,rerror=stop,aio=<wbr>threads<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtio-blk-pci,scsi=off,bus=<wbr>pci.0,addr=0x6,drive=drive-<wbr>virtio-disk0,id=virtio-disk0,<wbr>bootindex=1<br>
>>>> >> >>> >> >> >>>> > -drive if=none,id=drive-ide0-1-0,<wbr>readonly=on -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0<br>
>>>> >> >>> >> >> >>>> > -netdev<br>
>>>> >> >>> >> >> >>>> > tap,fd=30,id=hostnet0,vhost=<wbr>on,vhostfd=32 -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtio-net-pci,netdev=<wbr>hostnet0,id=net0,mac=00:16:3e:<wbr>7f:d6:83,bus=pci.0,addr=0x3<br>
>>>> >> >>> >> >> >>>> > -chardev<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > socket,id=charchannel0,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.com.redhat.rhevm.<wbr>vdsm,server,nowait<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=1,chardev=<wbr>charchannel0,id=channel0,name=<wbr>com.redhat.rhevm.vdsm<br>
>>>> >> >>> >> >> >>>> > -chardev<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > socket,id=charchannel1,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.qemu.guest_<wbr>agent.0,server,nowait<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=2,chardev=<wbr>charchannel1,id=channel1,name=<wbr>org.qemu.guest_agent.0<br>
>>>> >> >>> >> >> >>>> > -chardev spicevmc,id=charchannel2,name=<wbr>vdagent<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=3,chardev=<wbr>charchannel2,id=channel2,name=<wbr>com.redhat.spice.0<br>
>>>> >> >>> >> >> >>>> > -chardev<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > socket,id=charchannel3,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.ovirt.hosted-<wbr>engine-setup.0,server,nowait<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=4,chardev=<wbr>charchannel3,id=channel3,name=<wbr>org.ovirt.hosted-engine-setup.<wbr>0<br>
>>>> >> >>> >> >> >>>> > -chardev pty,id=charconsole0 -device<br>
>>>> >> >>> >> >> >>>> > virtconsole,chardev=<wbr>charconsole0,id=console0 -spice<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > tls-port=5900,addr=0,x509-dir=<wbr>/etc/pki/vdsm/libvirt-spice,<wbr>tls-channel=default,seamless-<wbr>migration=on<br>
>>>> >> >>> >> >> >>>> > -device cirrus-vga,id=video0,bus=pci.<wbr>0,addr=0x2<br>
>>>> >> >>> >> >> >>>> > -object<br>
>>>> >> >>> >> >> >>>> > rng-random,id=objrng0,<wbr>filename=/dev/urandom -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtio-rng-pci,rng=objrng0,id=<wbr>rng0,bus=pci.0,addr=0x5<br>
>>>> >> >>> >> >> >>>> > -msg<br>
>>>> >> >>> >> >> >>>> > timestamp=on<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > 2018-01-11T20:33:19.699999Z qemu-kvm: -chardev<br>
>>>> >> >>> >> >> >>>> > pty,id=charconsole0:<br>
>>>> >> >>> >> >> >>>> > char<br>
>>>> >> >>> >> >> >>>> > device redirected to /dev/pts/2 (label charconsole0)<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > 2018-01-11 20:38:11.640+0000: shutting down,<br>
>>>> >> >>> >> >> >>>> > reason=shutdown<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > 2018-01-11 20:39:02.122+0000: starting up libvirt<br>
>>>> >> >>> >> >> >>>> > version:<br>
>>>> >> >>> >> >> >>>> > 3.2.0,<br>
>>>> >> >>> >> >> >>>> > package:<br>
>>>> >> >>> >> >> >>>> > 14.el7_4.7 (CentOS BuildSystem<br>
>>>> >> >>> >> >> >>>> > <<a href="http://bugs.centos.org" rel="noreferrer" target="_blank">http://bugs.centos.org</a>>,<br>
>>>> >> >>> >> >> >>>> > 2018-01-04-19:31:34, <a href="http://c1bm.rdu2.centos.org" rel="noreferrer" target="_blank">c1bm.rdu2.centos.org</a>), qemu<br>
>>>> >> >>> >> >> >>>> > version:<br>
>>>> >> >>> >> >> >>>> > 2.9.0(qemu-kvm-ev-2.9.0-16.<wbr>el7_4.13.1), hostname:<br>
>>>> >> >>> >> >> >>>> > cultivar3<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > LC_ALL=C<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > PATH=/usr/local/sbin:/usr/<wbr>local/bin:/usr/sbin:/usr/bin<br>
>>>> >> >>> >> >> >>>> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name<br>
>>>> >> >>> >> >> >>>> > guest=Cultivar,debug-threads=<wbr>on -S -object<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > secret,id=masterKey0,format=<wbr>raw,file=/var/lib/libvirt/<wbr>qemu/domain-109-Cultivar/<wbr>master-key.aes<br>
>>>> >> >>> >> >> >>>> > -machine<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > pc-i440fx-rhel7.3.0,accel=kvm,<wbr>usb=off,dump-guest-core=off<br>
>>>> >> >>> >> >> >>>> > -cpu<br>
>>>> >> >>> >> >> >>>> > Conroe -m 8192 -realtime mlock=off -smp<br>
>>>> >> >>> >> >> >>>> > 2,maxcpus=16,sockets=16,cores=<wbr>1,threads=1 -uuid<br>
>>>> >> >>> >> >> >>>> > 4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c -smbios<br>
>>>> >> >>> >> >> >>>> > 'type=1,manufacturer=oVirt,<wbr>product=oVirt<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > Node,version=7-4.1708.el7.<wbr>centos,serial=44454C4C-4300-<wbr>1034-8035-CAC04F424331,uuid=<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c'<br>
>>>> >> >>> >> >> >>>> > -no-user-config -nodefaults -chardev<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > socket,id=charmonitor,path=/<wbr>var/lib/libvirt/qemu/domain-<wbr>109-Cultivar/monitor.sock,<wbr>server,nowait<br>
>>>> >> >>> >> >> >>>> > -mon chardev=charmonitor,id=<wbr>monitor,mode=control<br>
>>>> >> >>> >> >> >>>> > -rtc<br>
>>>> >> >>> >> >> >>>> > base=2018-01-11T20:39:02,<wbr>driftfix=slew -global<br>
>>>> >> >>> >> >> >>>> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot<br>
>>>> >> >>> >> >> >>>> > -boot<br>
>>>> >> >>> >> >> >>>> > strict=on<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> > piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtio-serial-pci,id=virtio-<wbr>serial0,bus=pci.0,addr=0x4<br>
>>>> >> >>> >> >> >>>> > -drive<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > file=/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/c2dde892-f978-<wbr>4dfc-a421-c8e04cf387f9/<wbr>23aa0a66-fa6c-4967-a1e5-<wbr>fbe47c0cd705,format=raw,if=<wbr>none,id=drive-virtio-disk0,<wbr>serial=c2dde892-f978-4dfc-<wbr>a421-c8e04cf387f9,cache=none,<wbr>werror=stop,rerror=stop,aio=<wbr>threads<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtio-blk-pci,scsi=off,bus=<wbr>pci.0,addr=0x6,drive=drive-<wbr>virtio-disk0,id=virtio-disk0,<wbr>bootindex=1<br>
>>>> >> >>> >> >> >>>> > -drive if=none,id=drive-ide0-1-0,<wbr>readonly=on -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0<br>
>>>> >> >>> >> >> >>>> > -netdev<br>
>>>> >> >>> >> >> >>>> > tap,fd=30,id=hostnet0,vhost=<wbr>on,vhostfd=32 -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtio-net-pci,netdev=<wbr>hostnet0,id=net0,mac=00:16:3e:<wbr>7f:d6:83,bus=pci.0,addr=0x3<br>
>>>> >> >>> >> >> >>>> > -chardev<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > socket,id=charchannel0,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.com.redhat.rhevm.<wbr>vdsm,server,nowait<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=1,chardev=<wbr>charchannel0,id=channel0,name=<wbr>com.redhat.rhevm.vdsm<br>
>>>> >> >>> >> >> >>>> > -chardev<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > socket,id=charchannel1,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.qemu.guest_<wbr>agent.0,server,nowait<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=2,chardev=<wbr>charchannel1,id=channel1,name=<wbr>org.qemu.guest_agent.0<br>
>>>> >> >>> >> >> >>>> > -chardev spicevmc,id=charchannel2,name=<wbr>vdagent<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=3,chardev=<wbr>charchannel2,id=channel2,name=<wbr>com.redhat.spice.0<br>
>>>> >> >>> >> >> >>>> > -chardev<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > socket,id=charchannel3,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.ovirt.hosted-<wbr>engine-setup.0,server,nowait<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=4,chardev=<wbr>charchannel3,id=channel3,name=<wbr>org.ovirt.hosted-engine-setup.<wbr>0<br>
>>>> >> >>> >> >> >>>> > -chardev pty,id=charconsole0 -device<br>
>>>> >> >>> >> >> >>>> > virtconsole,chardev=<wbr>charconsole0,id=console0 -spice<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > tls-port=5900,addr=0,x509-dir=<wbr>/etc/pki/vdsm/libvirt-spice,<wbr>tls-channel=default,seamless-<wbr>migration=on<br>
>>>> >> >>> >> >> >>>> > -device cirrus-vga,id=video0,bus=pci.<wbr>0,addr=0x2<br>
>>>> >> >>> >> >> >>>> > -object<br>
>>>> >> >>> >> >> >>>> > rng-random,id=objrng0,<wbr>filename=/dev/urandom -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtio-rng-pci,rng=objrng0,id=<wbr>rng0,bus=pci.0,addr=0x5<br>
>>>> >> >>> >> >> >>>> > -msg<br>
>>>> >> >>> >> >> >>>> > timestamp=on<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > 2018-01-11T20:39:02.380773Z qemu-kvm: -chardev pty,id=charconsole0: char device redirected to /dev/pts/2 (label charconsole0)<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > 2018-01-11 20:53:11.407+0000: shutting down, reason=shutdown<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > 2018-01-11 20:55:57.210+0000: starting up libvirt version: 3.2.0, package: 14.el7_4.7 (CentOS BuildSystem <<a href="http://bugs.centos.org" rel="noreferrer" target="_blank">http://bugs.centos.org</a>>, 2018-01-04-19:31:34, <a href="http://c1bm.rdu2.centos.org" rel="noreferrer" target="_blank">c1bm.rdu2.centos.org</a>), qemu version: 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: <a href="http://cultivar3.grove.silverorange.com" rel="noreferrer" target="_blank">cultivar3.grove.silverorange.com</a><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > LC_ALL=C<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > PATH=/usr/local/sbin:/usr/<wbr>local/bin:/usr/sbin:/usr/bin<br>
>>>> >> >>> >> >> >>>> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name<br>
>>>> >> >>> >> >> >>>> > guest=Cultivar,debug-threads=<wbr>on -S -object<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-110-Cultivar/master-key.aes<br>
>>>> >> >>> >> >> >>>> > -machine<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > pc-i440fx-rhel7.3.0,accel=kvm,<wbr>usb=off,dump-guest-core=off<br>
>>>> >> >>> >> >> >>>> > -cpu<br>
>>>> >> >>> >> >> >>>> > Conroe -m 8192 -realtime mlock=off -smp<br>
>>>> >> >>> >> >> >>>> > 2,maxcpus=16,sockets=16,cores=<wbr>1,threads=1 -uuid<br>
>>>> >> >>> >> >> >>>> > 4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c -smbios<br>
>>>> >> >>> >> >> >>>> > 'type=1,manufacturer=oVirt,<wbr>product=oVirt<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > Node,version=7-4.1708.el7.<wbr>centos,serial=44454C4C-4300-<wbr>1034-8035-CAC04F424331,uuid=<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c'<br>
>>>> >> >>> >> >> >>>> > -no-user-config -nodefaults -chardev<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-110-Cultivar/monitor.sock,server,nowait<br>
>>>> >> >>> >> >> >>>> > -mon chardev=charmonitor,id=<wbr>monitor,mode=control<br>
>>>> >> >>> >> >> >>>> > -rtc<br>
>>>> >> >>> >> >> >>>> > base=2018-01-11T20:55:57,<wbr>driftfix=slew -global<br>
>>>> >> >>> >> >> >>>> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot<br>
>>>> >> >>> >> >> >>>> > -boot<br>
>>>> >> >>> >> >> >>>> > strict=on<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> > piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtio-serial-pci,id=virtio-<wbr>serial0,bus=pci.0,addr=0x4<br>
>>>> >> >>> >> >> >>>> > -drive<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > file=/var/run/vdsm/storage/<wbr>248f46f0-d793-4581-9810-<wbr>c9d965e2f286/c2dde892-f978-<wbr>4dfc-a421-c8e04cf387f9/<wbr>23aa0a66-fa6c-4967-a1e5-<wbr>fbe47c0cd705,format=raw,if=<wbr>none,id=drive-virtio-disk0,<wbr>serial=c2dde892-f978-4dfc-<wbr>a421-c8e04cf387f9,cache=none,<wbr>werror=stop,rerror=stop,aio=<wbr>threads<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtio-blk-pci,scsi=off,bus=<wbr>pci.0,addr=0x6,drive=drive-<wbr>virtio-disk0,id=virtio-disk0,<wbr>bootindex=1<br>
>>>> >> >>> >> >> >>>> > -drive if=none,id=drive-ide0-1-0,<wbr>readonly=on -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0<br>
>>>> >> >>> >> >> >>>> > -netdev<br>
>>>> >> >>> >> >> >>>> > tap,fd=30,id=hostnet0,vhost=<wbr>on,vhostfd=32 -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtio-net-pci,netdev=<wbr>hostnet0,id=net0,mac=00:16:3e:<wbr>7f:d6:83,bus=pci.0,addr=0x3<br>
>>>> >> >>> >> >> >>>> > -chardev<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > socket,id=charchannel0,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.com.redhat.rhevm.<wbr>vdsm,server,nowait<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=1,chardev=<wbr>charchannel0,id=channel0,name=<wbr>com.redhat.rhevm.vdsm<br>
>>>> >> >>> >> >> >>>> > -chardev<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > socket,id=charchannel1,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.qemu.guest_<wbr>agent.0,server,nowait<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=2,chardev=<wbr>charchannel1,id=channel1,name=<wbr>org.qemu.guest_agent.0<br>
>>>> >> >>> >> >> >>>> > -chardev spicevmc,id=charchannel2,name=<wbr>vdagent<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=3,chardev=<wbr>charchannel2,id=channel2,name=<wbr>com.redhat.spice.0<br>
>>>> >> >>> >> >> >>>> > -chardev<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > socket,id=charchannel3,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4013c829-c9d7-4b72-90d5-<wbr>6fe58137504c.org.ovirt.hosted-<wbr>engine-setup.0,server,nowait<br>
>>>> >> >>> >> >> >>>> > -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-<wbr>serial0.0,nr=4,chardev=<wbr>charchannel3,id=channel3,name=<wbr>org.ovirt.hosted-engine-setup.<wbr>0<br>
>>>> >> >>> >> >> >>>> > -chardev pty,id=charconsole0 -device<br>
>>>> >> >>> >> >> >>>> > virtconsole,chardev=<wbr>charconsole0,id=console0 -spice<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > tls-port=5900,addr=0,x509-dir=<wbr>/etc/pki/vdsm/libvirt-spice,<wbr>tls-channel=default,seamless-<wbr>migration=on<br>
>>>> >> >>> >> >> >>>> > -device cirrus-vga,id=video0,bus=pci.<wbr>0,addr=0x2<br>
>>>> >> >>> >> >> >>>> > -object<br>
>>>> >> >>> >> >> >>>> > rng-random,id=objrng0,<wbr>filename=/dev/urandom -device<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > virtio-rng-pci,rng=objrng0,id=<wbr>rng0,bus=pci.0,addr=0x5<br>
>>>> >> >>> >> >> >>>> > -msg<br>
>>>> >> >>> >> >> >>>> > timestamp=on<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > 2018-01-11T20:55:57.468037Z qemu-kvm: -chardev pty,id=charconsole0: char device redirected to /dev/pts/2 (label charconsole0)<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > ==> /var/log/ovirt-hosted-engine-ha/broker.log <==<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 151, in get_raw_stats<br>
>>>> >> >>> >> >> >>>> >     f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)<br>
>>>> >> >>> >> >> >>>> > OSError: [Errno 2] No such file or directory: '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > StatusStorageThread::ERROR::2018-01-11 16:55:15,761::status_broker::92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run) Failed to read state.<br>
>>>> >> >>> >> >> >>>> > Traceback (most recent call last):<br>
>>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 88, in run<br>
>>>> >> >>> >> >> >>>> >     self._storage_broker.get_raw_stats()<br>
>>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 162, in get_raw_stats<br>
>>>> >> >>> >> >> >>>> >     .format(str(e)))<br>
>>>> >> >>> >> >> >>>> > RequestError: failed to read metadata: [Errno 2] No such file or directory: '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
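To tell a genuinely missing path apart from a failure of the O_DIRECT/O_SYNC open the broker performs, a rough shell equivalent of that read (same path as in the traceback) would be:<br>
<br>
sudo -u vdsm dd if=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8 of=/dev/null bs=4096 count=1 iflag=direct,sync<br>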
>>>> >> >>> >> >> >>>> > ==> /var/log/ovirt-hosted-engine-ha/agent.log <==<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> >     result = refresh_method()<br>
>>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 519, in refresh_vm_conf<br>
>>>> >> >>> >> >> >>>> >     content = self._get_file_content_from_shared_storage(VM)<br>
>>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 484, in _get_file_content_from_shared_storage<br>
>>>> >> >>> >> >> >>>> >     config_volume_path = self._get_config_volume_path()<br>
>>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 188, in _get_config_volume_path<br>
>>>> >> >>> >> >> >>>> >     conf_vol_uuid<br>
>>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/heconflib.py", line 358, in get_volume_path<br>
>>>> >> >>> >> >> >>>> >     root=envconst.SD_RUN_DIR,<br>
>>>> >> >>> >> >> >>>> > RuntimeError: Path to volume 4838749f-216d-406b-b245-98d0343fcf7f not found in /run/vdsm/storag<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
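Those /run/vdsm/storage paths are normally (re)created when the hosted-engine storage is connected, so with the HA services stopped it may be worth asking vdsm to prepare them again and then checking what appears under the domain directory (treat this as a sketch, not a guaranteed fix):<br>
<br>
hosted-engine --connect-storage<br>
ls -l /run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/<br>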
>>>> >> >>> >> >> >>>> > ==> /var/log/vdsm/vdsm.log <==<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> > periodic/42::ERROR::2018-01-11 16:56:11,446::vmstats::260::virt.vmstats::(send_metrics) VM metrics collection failed<br>
>>>> >> >>> >> >> >>>> > Traceback (most recent call last):<br>
>>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line 197, in send_metrics<br>
>>>> >> >>> >> >> >>>> >     data[prefix + '.cpu.usage'] = stat['cpuUsage']<br>
>>>> >> >>> >> >> >>>> > KeyError: 'cpuUsage'<br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>>> ><br>
>>>> >> >>> >> >> >>><br>
>>>> >> >>> >> >> >>><br>
>>>> >> >>> >> >> >><br>
>>>> >> >>> >> ><br>
>>>> >> >>> >> ><br>
>>>> >> >>> ><br>
>>>> >> >>> ><br>
>>>> >> >><br>
>>>> >> >><br>
>>>> >> ><br>
>>>> ><br>
>>>> ><br>
>>><br>
>>><br>
>><br>
><br>
</div></div></blockquote></div><br></div>