<div dir="ltr">I GOT IT WORKING!!!!</div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jan 12, 2018 at 2:16 PM, Jayme <span dir="ltr"><<a href="mailto:jaymef@gmail.com" target="_blank">jaymef@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Martin, actually might be some good news here. I could not get to console using hosted-engine console but I connected through virsh and got a console to the hosted VM and was able to login, this is a great start. Now to find out what is wrong with the VM. </div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jan 12, 2018 at 2:11 PM, Jayme <span dir="ltr"><<a href="mailto:jaymef@gmail.com" target="_blank">jaymef@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>No luck I'm afraid. It's very odd that I wouldn't be able to get a console to it, if the status is up and seen by virsh. Any clue? <br></div><div><br></div>Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}<br><div><br></div><div><div># virsh -r list</div><div> Id Name State</div><div>------------------------------<wbr>----------------------</div><div> 118 Cultivar running<br></div></div><div><br></div><div><br></div><div><div># hosted-engine --console</div><div>The engine VM is running on this host</div><div>error: failed to get domain 'HostedEngine'</div><div>error: Domain not found: no domain with matching name 'HostedEngine'</div></div><div><br></div><div><div># hosted-engine --console 118</div><div>The engine VM is running on this host</div><div>error: failed to get domain 'HostedEngine'</div><div>error: Domain not found: no domain with matching name 'HostedEngine'</div></div><div><br></div><div># hosted-engine --console Cultivar<br></div><div><div>The engine VM is running on this host</div><div>error: failed to get domain 'HostedEngine'</div><div>error: Domain not found: no domain with matching name 'HostedEngine'</div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jan 12, 2018 at 2:05 PM, Martin Sivak <span dir="ltr"><<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Try listing the domains with<br>
<br>
virsh -r list<br>
<br>
maybe it just has some weird name...<br>
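
If it does show up under some other name, you can usually attach to its serial
console directly with virsh instead of going through hosted-engine --console.
A rough sketch (substitute whatever name virsh reports):

# virsh -r list                 (confirm the domain name and id)
# virsh console <domain-name>   (attach to the serial console; Ctrl+] detaches)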
<br>
Martin<br>
<br>
On Fri, Jan 12, 2018 at 6:56 PM, Jayme <jaymef@gmail.com> wrote:
> I thought that it might be a good sign, but unfortunately I cannot access it
> with the console :( if I could get console access to it I might be able to fix
> the problem. But seeing as how the console is also not working, it leads me to
> believe there is a bigger issue at hand here.
><br>
> hosted-engine --console<br>
> The engine VM is running on this host<br>
> error: failed to get domain 'HostedEngine'<br>
> error: Domain not found: no domain with matching name 'HostedEngine'<br>
><br>
> I really wonder if this is all a symlinking problem in some way. Is it
> possible for me to upgrade the host to 4.2 RC2 without being able to upgrade the
> engine first, or should I keep everything on 4.2 as it is?
><br>
> On Fri, Jan 12, 2018 at 1:49 PM, Martin Sivak <msivak@redhat.com> wrote:
>><br>
>> Hi,<br>
>><br>
>> the VM is up according to the status (at least for a while). You
>> should be able to use the console and diagnose anything that happened
>> inside (like the need for fsck and such) now.
>><br>
>> Check the presence of those links again now, the metadata file content<br>
>> is not important, but the file has to exist (agents will populate it<br>
>> with status data). I have no new idea about what is wrong with that<br>
>> though.<br>
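>>
>> A quick way to check both the link and the file it points to (a sketch,
>> using the UUIDs from your earlier traceback, adjust if they differ):
>>
>> # ls -lL /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
>> # stat /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
>>
>> ls -lL dereferences the symlink, so a "No such file or directory" there means
>> the target under /rhev is missing, not just the link.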
>><br>
>> Best regards<br>
>><br>
>> Martin<br>
>><br>
>><br>
>><br>
>> On Fri, Jan 12, 2018 at 5:47 PM, Jayme <jaymef@gmail.com> wrote:
>> > The lock space issue was something I needed to clear, but I don't believe
>> > it has resolved the problem. I shut down the agent and broker on all hosts,
>> > disconnected hosted-storage, then enabled the broker/agent on just one host
>> > and connected storage. I started the VM and barely got any errors in the
>> > logs, which was good to see; however, the VM is still not running:
>> ><br>
>> > HOST3:<br>
>> ><br>
>> > Engine status : {"reason": "failed liveliness<br>
>> > check",<br>
>> > "health": "bad", "vm": "up", "detail": "Up"}<br>
>> ><br>
>> > ==> /var/log/messages <==<br>
>> > Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered<br>
>> > disabled<br>
>> > state<br>
>> > Jan 12 12:42:57 cultivar3 kernel: device vnet0 entered promiscuous mode<br>
>> > Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered<br>
>> > blocking<br>
>> > state<br>
>> > Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered<br>
>> > forwarding state<br>
>> > Jan 12 12:42:57 cultivar3 lldpad: recvfrom(Event interface): No buffer<br>
>> > space<br>
>> > available<br>
>> > Jan 12 12:42:57 cultivar3 systemd-machined: New machine<br>
>> > qemu-111-Cultivar.<br>
>> > Jan 12 12:42:57 cultivar3 systemd: Started Virtual Machine<br>
>> > qemu-111-Cultivar.<br>
>> > Jan 12 12:42:57 cultivar3 systemd: Starting Virtual Machine<br>
>> > qemu-111-Cultivar.<br>
>> > Jan 12 12:42:57 cultivar3 kvm: 3 guests now active<br>
>> > Jan 12 12:44:38 cultivar3 libvirtd: 2018-01-12 16:44:38.737+0000: 1535:<br>
>> > error : qemuDomainAgentAvailable:6010 : Guest agent is not responding:<br>
>> > QEMU<br>
>> > guest agent is not connected<br>
>> ><br>
>> > Interestingly though, now I'm seeing this in the logs which may be a new<br>
>> > clue:<br>
>> ><br>
>> ><br>
>> > ==> /var/log/vdsm/vdsm.log <==<br>
>> >     File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 126,
>> > in findDomain
>> >         return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
>> >     File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 116,
>> > in findDomainPath
>> >         raise se.StorageDomainDoesNotExist(sdUUID)
>> > StorageDomainDoesNotExist: Storage domain does not exist:
>> > (u'248f46f0-d793-4581-9810-c9d965e2f286',)
>> > jsonrpc/4::ERROR::2018-01-12
>> > 12:40:30,380::dispatcher::82::storage.Dispatcher::(wrapper) FINISH
>> > getStorageDomainInfo error=Storage domain does not exist:
>> > (u'248f46f0-d793-4581-9810-c9d965e2f286',)
>> > periodic/42::ERROR::2018-01-12
>> > 12:40:35,430::api::196::root::(_getHaInfo)
>> > failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
>> > directory'Is the Hosted Engine setup finished?
>> > periodic/43::ERROR::2018-01-12
>> > 12:40:50,473::api::196::root::(_getHaInfo)
>> > failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
>> > directory'Is the Hosted Engine setup finished?
>> > periodic/40::ERROR::2018-01-12
>> > 12:41:05,519::api::196::root::(_getHaInfo)
>> > failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
>> > directory'Is the Hosted Engine setup finished?
>> > periodic/43::ERROR::2018-01-12
>> > 12:41:20,566::api::196::root::(_getHaInfo)
>> > failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
>> > directory'Is the Hosted Engine setup finished?
>> ><br>
>> > ==> /var/log/ovirt-hosted-engine-ha/broker.log <==
>> >   File
>> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
>> > line 151, in get_raw_stats
>> >     f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
>> > OSError: [Errno 2] No such file or directory:
>> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
>> > StatusStorageThread::ERROR::2018-01-12
>> > 12:32:06,049::status_broker::92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run)
>> > Failed to read state.
>> > Traceback (most recent call last):
>> >   File
>> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py",
>> > line 88, in run
>> >     self._storage_broker.get_raw_stats()
>> >   File
>> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
>> > line 162, in get_raw_stats
>> >     .format(str(e)))
>> > RequestError: failed to read metadata: [Errno 2] No such file or
>> > directory:
>> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
>> ><br>
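>> > For what it's worth, that path can be sanity-checked by hand (same UUIDs
>> > as in the error above):
>> >
>> > # ls -ld /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286
>> > # ls -lL /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a
>> >
>> > If the first is missing, the run-time links under /var/run/vdsm/storage were
>> > presumably never recreated after storage was reconnected; if only the second
>> > fails, the link exists but its target under /rhev is gone.
>> >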
>> > On Fri, Jan 12, 2018 at 12:02 PM, Martin Sivak <msivak@redhat.com>
>> > wrote:
>> >><br>
>> >> The lock is the issue.<br>
>> >><br>
>> >> - try running sanlock client status on all hosts<br>
>> >> - also make sure you do not have some forgotten host still connected<br>
>> >> to the lockspace, but without ha daemons running (and with the VM)<br>
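>> >>
>> >> A sketch of what to look at (run on every host, assuming sanlock is up):
>> >>
>> >> # sanlock client status
>> >>
>> >> It lists the lockspaces and resources each host currently holds, so a host
>> >> that still holds the hosted-engine lease will show up there even if the ha
>> >> daemons are stopped on it.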
>> >><br>
>> >> I need to go to our president election now, I might check the email<br>
>> >> later tonight.<br>
>> >><br>
>> >> Martin<br>
>> >><br>
>> >> On Fri, Jan 12, 2018 at 4:59 PM, Jayme <jaymef@gmail.com> wrote:
>> >> > Here are the newest logs from me trying to start hosted vm:<br>
>> >> ><br>
>> >> > ==> /var/log/messages <==<br>
>> >> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> >> > blocking<br>
>> >> > state<br>
>> >> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> >> > disabled<br>
>> >> > state<br>
>> >> > Jan 12 11:58:14 cultivar0 kernel: device vnet4 entered promiscuous<br>
>> >> > mode<br>
>> >> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> >> > blocking<br>
>> >> > state<br>
>> >> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> >> > forwarding state<br>
>> >> > Jan 12 11:58:14 cultivar0 lldpad: recvfrom(Event interface): No<br>
>> >> > buffer<br>
>> >> > space<br>
>> >> > available<br>
>> >> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info>
>> >> > [1515772694.8715]
>> >> > manager: (vnet4): new Tun device
>> >> > (/org/freedesktop/NetworkManager/Devices/140)
>> >> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info>
>> >> > [1515772694.8795]
>> >> > device (vnet4): state change: unmanaged -> unavailable (reason
>> >> > 'connection-assumed') [10 20 41]
>> >> ><br>
>> >> > ==> /var/log/libvirt/qemu/Cultivar.log <==
>> >> > 2018-01-12 15:58:14.879+0000: starting up libvirt version: 3.2.0,
>> >> > package:
>> >> > 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
>> >> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
>> >> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
>> >> > cultivar0.grove.silverorange.com
>> >> > LC_ALL=C PATH=/usr/local/sbin:/usr/loca<wbr>l/bin:/usr/sbin:/usr/bin<br>
>> >> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name<br>
>> >> > guest=Cultivar,debug-threads=o<wbr>n -S -object<br>
>> >> ><br>
>> >> ><br>
>> >> > secret,id=masterKey0,format=ra<wbr>w,file=/var/lib/libvirt/qemu/d<wbr>omain-119-Cultivar/master-key.<wbr>aes<br>
>> >> > -machine pc-i440fx-rhel7.3.0,accel=kvm,<wbr>usb=off,dump-guest-core=off<br>
>> >> > -cpu<br>
>> >> > Conroe -m 8192 -realtime mlock=off -smp<br>
>> >> > 2,maxcpus=16,sockets=16,cores=<wbr>1,threads=1 -uuid<br>
>> >> > 4013c829-c9d7-4b72-90d5-6fe581<wbr>37504c -smbios<br>
>> >> > 'type=1,manufacturer=oVirt,pro<wbr>duct=oVirt<br>
>> >> ><br>
>> >> ><br>
>> >> > Node,version=7-4.1708.el7.cent<wbr>os,serial=44454C4C-3300-1042-8<wbr>031-B4C04F4B4831,uuid=4013c829<wbr>-c9d7-4b72-90d5-6fe58137504c'<br>
>> >> > -no-user-config -nodefaults -chardev<br>
>> >> ><br>
>> >> ><br>
>> >> > socket,id=charmonitor,path=/va<wbr>r/lib/libvirt/qemu/domain-119-<wbr>Cultivar/monitor.sock,server,n<wbr>owait<br>
>> >> > -mon chardev=charmonitor,id=monitor<wbr>,mode=control -rtc<br>
>> >> > base=2018-01-12T15:58:14,drift<wbr>fix=slew -global<br>
>> >> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on<br>
>> >> > -device<br>
>> >> > piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device<br>
>> >> > virtio-serial-pci,id=virtio-se<wbr>rial0,bus=pci.0,addr=0x4 -drive<br>
>> >> ><br>
>> >> ><br>
>> >> > file=/var/run/vdsm/storage/248<wbr>f46f0-d793-4581-9810-c9d965e2f<wbr>286/c2dde892-f978-4dfc-a421-c8<wbr>e04cf387f9/23aa0a66-fa6c-4967-<wbr>a1e5-fbe47c0cd705,format=raw,<wbr>if=none,id=drive-virtio-disk0,<wbr>serial=c2dde892-f978-4dfc-<wbr>a421-c8e04cf387f9,cache=none,<wbr>werror=stop,rerror=stop,aio=<wbr>threads<br>
>> >> > -device<br>
>> >> ><br>
>> >> ><br>
>> >> > virtio-blk-pci,scsi=off,bus=pc<wbr>i.0,addr=0x6,drive=drive-virti<wbr>o-disk0,id=virtio-disk0,bootin<wbr>dex=1<br>
>> >> > -drive if=none,id=drive-ide0-1-0,read<wbr>only=on -device<br>
>> >> > ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0 -netdev<br>
>> >> > tap,fd=35,id=hostnet0,vhost=on<wbr>,vhostfd=38 -device<br>
>> >> ><br>
>> >> ><br>
>> >> > virtio-net-pci,netdev=hostnet0<wbr>,id=net0,mac=00:16:3e:7f:d6:83<wbr>,bus=pci.0,addr=0x3<br>
>> >> > -chardev<br>
>> >> ><br>
>> >> ><br>
>> >> > socket,id=charchannel0,path=/v<wbr>ar/lib/libvirt/qemu/channels/4<wbr>013c829-c9d7-4b72-90d5-6fe5813<wbr>7504c.com.redhat.rhevm.vdsm,se<wbr>rver,nowait<br>
>> >> > -device<br>
>> >> ><br>
>> >> ><br>
>> >> > virtserialport,bus=virtio-seri<wbr>al0.0,nr=1,chardev=charchannel<wbr>0,id=channel0,name=com.redhat.<wbr>rhevm.vdsm<br>
>> >> > -chardev<br>
>> >> ><br>
>> >> ><br>
>> >> > socket,id=charchannel1,path=/v<wbr>ar/lib/libvirt/qemu/channels/4<wbr>013c829-c9d7-4b72-90d5-6fe5813<wbr>7504c.org.qemu.guest_agent.0,s<wbr>erver,nowait<br>
>> >> > -device<br>
>> >> ><br>
>> >> ><br>
>> >> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>> >> > -chardev spicevmc,id=charchannel2,name=<wbr>vdagent -device<br>
>> >> ><br>
>> >> ><br>
>> >> > virtserialport,bus=virtio-seri<wbr>al0.0,nr=3,chardev=charchannel<wbr>2,id=channel2,name=com.redhat.<wbr>spice.0<br>
>> >> > -chardev<br>
>> >> ><br>
>> >> ><br>
>> >> > socket,id=charchannel3,path=/v<wbr>ar/lib/libvirt/qemu/channels/4<wbr>013c829-c9d7-4b72-90d5-6fe5813<wbr>7504c.org.ovirt.hosted-engine-<wbr>setup.0,server,nowait<br>
>> >> > -device<br>
>> >> ><br>
>> >> ><br>
>> >> > virtserialport,bus=virtio-seri<wbr>al0.0,nr=4,chardev=charchannel<wbr>3,id=channel3,name=org.ovirt.h<wbr>osted-engine-setup.0<br>
>> >> > -chardev pty,id=charconsole0 -device<br>
>> >> > virtconsole,chardev=charconsol<wbr>e0,id=console0 -spice<br>
>> >> ><br>
>> >> ><br>
>> >> > tls-port=5904,addr=0,x509-dir=<wbr>/etc/pki/vdsm/libvirt-spice,tl<wbr>s-channel=default,seamless-mig<wbr>ration=on<br>
>> >> > -device cirrus-vga,id=video0,bus=pci.0<wbr>,addr=0x2 -object<br>
>> >> > rng-random,id=objrng0,filename<wbr>=/dev/urandom -device<br>
>> >> > virtio-rng-pci,rng=objrng0,id=<wbr>rng0,bus=pci.0,addr=0x5 -msg<br>
>> >> > timestamp=on<br>
>> >> ><br>
>> >> > ==> /var/log/messages <==<br>
>> >> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info>
>> >> > [1515772694.8807]
>> >> > device (vnet4): state change: unavailable -> disconnected (reason
>> >> > 'none')
>> >> > [20 30 0]
>> >> > Jan 12 11:58:14 cultivar0 systemd-machined: New machine<br>
>> >> > qemu-119-Cultivar.<br>
>> >> > Jan 12 11:58:14 cultivar0 systemd: Started Virtual Machine<br>
>> >> > qemu-119-Cultivar.<br>
>> >> > Jan 12 11:58:14 cultivar0 systemd: Starting Virtual Machine<br>
>> >> > qemu-119-Cultivar.<br>
>> >> ><br>
>> >> > ==> /var/log/libvirt/qemu/Cultivar<wbr>.log <==<br>
>> >> > 2018-01-12T15:58:15.094002Z qemu-kvm: -chardev pty,id=charconsole0:<br>
>> >> > char<br>
>> >> > device redirected to /dev/pts/1 (label charconsole0)<br>
>> >> ><br>
>> >> > ==> /var/log/messages <==<br>
>> >> > Jan 12 11:58:15 cultivar0 kvm: 5 guests now active<br>
>> >> ><br>
>> >> > ==> /var/log/libvirt/qemu/Cultivar<wbr>.log <==<br>
>> >> > 2018-01-12 15:58:15.217+0000: shutting down, reason=failed<br>
>> >> ><br>
>> >> > ==> /var/log/messages <==<br>
>> >> > Jan 12 11:58:15 cultivar0 libvirtd: 2018-01-12 15:58:15.217+0000:<br>
>> >> > 1908:<br>
>> >> > error : virLockManagerSanlockAcquire:1<wbr>041 : resource busy: Failed to<br>
>> >> > acquire<br>
>> >> > lock: Lease is held by another host<br>
>> >> ><br>
>> >> > ==> /var/log/libvirt/qemu/Cultivar<wbr>.log <==<br>
>> >> > 2018-01-12T15:58:15.219934Z qemu-kvm: terminating on signal 15 from<br>
>> >> > pid<br>
>> >> > 1773<br>
>> >> > (/usr/sbin/libvirtd)<br>
>> >> ><br>
>> >> > ==> /var/log/messages <==<br>
>> >> > Jan 12 11:58:15 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> >> > disabled<br>
>> >> > state<br>
>> >> > Jan 12 11:58:15 cultivar0 kernel: device vnet4 left promiscuous mode<br>
>> >> > Jan 12 11:58:15 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> >> > disabled<br>
>> >> > state<br>
>> >> > Jan 12 11:58:15 cultivar0 NetworkManager[1092]: <info>
>> >> > [1515772695.2348]
>> >> > device (vnet4): state change: disconnected -> unmanaged (reason
>> >> > 'unmanaged')
>> >> > [30 10 3]
>> >> > Jan 12 11:58:15 cultivar0 NetworkManager[1092]: <info>
>> >> > [1515772695.2349]
>> >> > device (vnet4): released from master device ovirtmgmt
>> >> > Jan 12 11:58:15 cultivar0 kvm: 4 guests now active<br>
>> >> > Jan 12 11:58:15 cultivar0 systemd-machined: Machine qemu-119-Cultivar<br>
>> >> > terminated.<br>
>> >> ><br>
>> >> > ==> /var/log/vdsm/vdsm.log <==<br>
>> >> > vm/4013c829::ERROR::2018-01-12<br>
>> >> > 11:58:15,444::vm::914::virt.vm<wbr>::(_startUnderlyingVm)<br>
>> >> > (vmId='4013c829-c9d7-4b72-90d5<wbr>-6fe58137504c') The vm start process<br>
>> >> > failed<br>
>> >> > Traceback (most recent call last):<br>
>> >> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line 843,<br>
>> >> > in<br>
>> >> > _startUnderlyingVm<br>
>> >> > self._run()<br>
>> >> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line 2721,<br>
>> >> > in<br>
>> >> > _run<br>
>> >> > dom.createWithFlags(flags)<br>
>> >> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/libvirtconnection.py"<wbr>,<br>
>> >> > line<br>
>> >> > 126, in wrapper<br>
>> >> > ret = f(*args, **kwargs)<br>
>> >> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/utils.py", line 512, in<br>
>> >> > wrapper<br>
>> >> > return func(inst, *args, **kwargs)<br>
>> >> > File "/usr/lib64/python2.7/site-pac<wbr>kages/libvirt.py", line 1069, in<br>
>> >> > createWithFlags<br>
>> >> > if ret == -1: raise libvirtError ('virDomainCreateWithFlags()<br>
>> >> > failed',<br>
>> >> > dom=self)<br>
>> >> > libvirtError: resource busy: Failed to acquire lock: Lease is held by<br>
>> >> > another host<br>
>> >> > jsonrpc/6::ERROR::2018-01-12<br>
>> >> > 11:58:16,421::__init__::611::j<wbr>sonrpc.JsonRpcServer::(_handle<wbr>_request)<br>
>> >> > Internal server error<br>
>> >> > Traceback (most recent call last):<br>
>> >> > File "/usr/lib/python2.7/site-packa<wbr>ges/yajsonrpc/__init__.py", line<br>
>> >> > 606,<br>
>> >> > in _handle_request<br>
>> >> > res = method(**params)<br>
>> >> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/rpc/Bridge.py", line<br>
>> >> > 201,<br>
>> >> > in<br>
>> >> > _dynamicMethod<br>
>> >> > result = fn(*methodArgs)<br>
>> >> > File "<string>", line 2, in getAllVmIoTunePolicies<br>
>> >> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/common/api.py", line<br>
>> >> > 48,<br>
>> >> > in<br>
>> >> > method<br>
>> >> > ret = func(*args, **kwargs)<br>
>> >> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/API.py", line 1354, in<br>
>> >> > getAllVmIoTunePolicies<br>
>> >> > io_tune_policies_dict = self._cif.getAllVmIoTunePolici<wbr>es()<br>
>> >> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/clientIF.py", line 524,<br>
>> >> > in<br>
>> >> > getAllVmIoTunePolicies<br>
>> >> > 'current_values': v.getIoTune()}<br>
>> >> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line 3481,<br>
>> >> > in<br>
>> >> > getIoTune<br>
>> >> > result = self.getIoTuneResponse()<br>
>> >> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line 3500,<br>
>> >> > in<br>
>> >> > getIoTuneResponse<br>
>> >> > res = self._dom.blockIoTune(<br>
>> >> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/virdomain.py",<br>
>> >> > line<br>
>> >> > 47,<br>
>> >> > in __getattr__<br>
>> >> > % self.vmid)<br>
>> >> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58<wbr>137504c' was not<br>
>> >> > defined<br>
>> >> > yet or was undefined<br>
>> >> ><br>
>> >> > ==> /var/log/messages <==<br>
>> >> > Jan 12 11:58:16 cultivar0 journal: vdsm jsonrpc.JsonRpcServer ERROR<br>
>> >> > Internal<br>
>> >> > server error#012Traceback (most recent call last):#012 File<br>
>> >> > "/usr/lib/python2.7/site-packa<wbr>ges/yajsonrpc/__init__.py", line 606,<br>
>> >> > in<br>
>> >> > _handle_request#012 res = method(**params)#012 File<br>
>> >> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/rpc/Bridge.py", line 201, in<br>
>> >> > _dynamicMethod#012 result = fn(*methodArgs)#012 File "<string>",<br>
>> >> > line 2,<br>
>> >> > in getAllVmIoTunePolicies#012 File<br>
>> >> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/common/api.py", line 48, in<br>
>> >> > method#012 ret = func(*args, **kwargs)#012 File<br>
>> >> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/API.py", line 1354, in<br>
>> >> > getAllVmIoTunePolicies#012 io_tune_policies_dict =<br>
>> >> > self._cif.getAllVmIoTunePolici<wbr>es()#012 File<br>
>> >> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/clientIF.py", line 524, in<br>
>> >> > getAllVmIoTunePolicies#012 'current_values': v.getIoTune()}#012<br>
>> >> > File<br>
>> >> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line 3481, in<br>
>> >> > getIoTune#012 result = self.getIoTuneResponse()#012 File<br>
>> >> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line 3500, in<br>
>> >> > getIoTuneResponse#012 res = self._dom.blockIoTune(#012 File<br>
>> >> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/virdomain.py", line 47,<br>
>> >> > in<br>
>> >> > __getattr__#012 % self.vmid)#012NotConnectedErro<wbr>r: VM<br>
>> >> > '4013c829-c9d7-4b72-90d5-6fe58<wbr>137504c' was not defined yet or was<br>
>> >> > undefined<br>
>> >> ><br>
>> >> > On Fri, Jan 12, 2018 at 11:55 AM, Jayme <jaymef@gmail.com> wrote:
>> >> >><br>
>> >> >> One other tidbit I noticed is that it seems like there are fewer
>> >> >> errors if I start it in paused mode:
>> >> >><br>
>> >> >> but still shows: Engine status : {"reason":<br>
>> >> >> "bad<br>
>> >> >> vm<br>
>> >> >> status", "health": "bad", "vm": "up", "detail": "Paused"}<br>
>> >> >><br>
>> >> >> ==> /var/log/messages <==<br>
>> >> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> >> >> blocking state<br>
>> >> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> >> >> disabled state<br>
>> >> >> Jan 12 11:55:05 cultivar0 kernel: device vnet4 entered promiscuous<br>
>> >> >> mode<br>
>> >> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> >> >> blocking state<br>
>> >> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered<br>
>> >> >> forwarding state<br>
>> >> >> Jan 12 11:55:05 cultivar0 lldpad: recvfrom(Event interface): No<br>
>> >> >> buffer<br>
>> >> >> space available<br>
>> >> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
>> >> >> [1515772505.3625]
>> >> >> manager: (vnet4): new Tun device
>> >> >> (/org/freedesktop/NetworkManager/Devices/139)
>> >> >><br>
>> >> >> ==> /var/log/libvirt/qemu/Cultivar.log <==
>> >> >> 2018-01-12 15:55:05.370+0000: starting up libvirt version: 3.2.0,
>> >> >> package:
>> >> >> 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
>> >> >> 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
>> >> >> 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
>> >> >> cultivar0.grove.silverorange.com
>> >> >> LC_ALL=C PATH=/usr/local/sbin:/usr/loca<wbr>l/bin:/usr/sbin:/usr/bin<br>
>> >> >> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name<br>
>> >> >> guest=Cultivar,debug-threads=o<wbr>n -S -object<br>
>> >> >><br>
>> >> >><br>
>> >> >> secret,id=masterKey0,format=ra<wbr>w,file=/var/lib/libvirt/qemu/d<wbr>omain-118-Cultivar/master-key.<wbr>aes<br>
>> >> >> -machine pc-i440fx-rhel7.3.0,accel=kvm,<wbr>usb=off,dump-guest-core=off<br>
>> >> >> -cpu<br>
>> >> >> Conroe -m 8192 -realtime mlock=off -smp<br>
>> >> >> 2,maxcpus=16,sockets=16,cores=<wbr>1,threads=1 -uuid<br>
>> >> >> 4013c829-c9d7-4b72-90d5-6fe581<wbr>37504c -smbios<br>
>> >> >> 'type=1,manufacturer=oVirt,pro<wbr>duct=oVirt<br>
>> >> >><br>
>> >> >><br>
>> >> >> Node,version=7-4.1708.el7.cent<wbr>os,serial=44454C4C-3300-1042-8<wbr>031-B4C04F4B4831,uuid=4013c829<wbr>-c9d7-4b72-90d5-6fe58137504c'<br>
>> >> >> -no-user-config -nodefaults -chardev<br>
>> >> >><br>
>> >> >><br>
>> >> >> socket,id=charmonitor,path=/va<wbr>r/lib/libvirt/qemu/domain-118-<wbr>Cultivar/monitor.sock,server,n<wbr>owait<br>
>> >> >> -mon chardev=charmonitor,id=monitor<wbr>,mode=control -rtc<br>
>> >> >> base=2018-01-12T15:55:05,drift<wbr>fix=slew -global<br>
>> >> >> kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on<br>
>> >> >> -device<br>
>> >> >> piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device<br>
>> >> >> virtio-serial-pci,id=virtio-se<wbr>rial0,bus=pci.0,addr=0x4 -drive<br>
>> >> >><br>
>> >> >><br>
>> >> >> file=/var/run/vdsm/storage/248<wbr>f46f0-d793-4581-9810-c9d965e2f<wbr>286/c2dde892-f978-4dfc-a421-c8<wbr>e04cf387f9/23aa0a66-fa6c-4967-<wbr>a1e5-fbe47c0cd705,format=raw,<wbr>if=none,id=drive-virtio-disk0,<wbr>serial=c2dde892-f978-4dfc-<wbr>a421-c8e04cf387f9,cache=none,<wbr>werror=stop,rerror=stop,aio=<wbr>threads<br>
>> >> >> -device<br>
>> >> >><br>
>> >> >><br>
>> >> >> virtio-blk-pci,scsi=off,bus=pc<wbr>i.0,addr=0x6,drive=drive-virti<wbr>o-disk0,id=virtio-disk0,bootin<wbr>dex=1<br>
>> >> >> -drive if=none,id=drive-ide0-1-0,read<wbr>only=on -device<br>
>> >> >> ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0 -netdev<br>
>> >> >> tap,fd=35,id=hostnet0,vhost=on<wbr>,vhostfd=38 -device<br>
>> >> >><br>
>> >> >><br>
>> >> >> virtio-net-pci,netdev=hostnet0<wbr>,id=net0,mac=00:16:3e:7f:d6:83<wbr>,bus=pci.0,addr=0x3<br>
>> >> >> -chardev<br>
>> >> >><br>
>> >> >><br>
>> >> >> socket,id=charchannel0,path=/v<wbr>ar/lib/libvirt/qemu/channels/4<wbr>013c829-c9d7-4b72-90d5-6fe5813<wbr>7504c.com.redhat.rhevm.vdsm,se<wbr>rver,nowait<br>
>> >> >> -device<br>
>> >> >><br>
>> >> >><br>
>> >> >> virtserialport,bus=virtio-seri<wbr>al0.0,nr=1,chardev=charchannel<wbr>0,id=channel0,name=com.redhat.<wbr>rhevm.vdsm<br>
>> >> >> -chardev<br>
>> >> >><br>
>> >> >><br>
>> >> >> socket,id=charchannel1,path=/v<wbr>ar/lib/libvirt/qemu/channels/4<wbr>013c829-c9d7-4b72-90d5-6fe5813<wbr>7504c.org.qemu.guest_agent.0,s<wbr>erver,nowait<br>
>> >> >> -device<br>
>> >> >><br>
>> >> >><br>
>> >> >> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>> >> >> -chardev spicevmc,id=charchannel2,name=<wbr>vdagent -device<br>
>> >> >><br>
>> >> >><br>
>> >> >> virtserialport,bus=virtio-seri<wbr>al0.0,nr=3,chardev=charchannel<wbr>2,id=channel2,name=com.redhat.<wbr>spice.0<br>
>> >> >> -chardev<br>
>> >> >><br>
>> >> >><br>
>> >> >> socket,id=charchannel3,path=/v<wbr>ar/lib/libvirt/qemu/channels/4<wbr>013c829-c9d7-4b72-90d5-6fe5813<wbr>7504c.org.ovirt.hosted-engine-<wbr>setup.0,server,nowait<br>
>> >> >> -device<br>
>> >> >><br>
>> >> >><br>
>> >> >> virtserialport,bus=virtio-seri<wbr>al0.0,nr=4,chardev=charchannel<wbr>3,id=channel3,name=org.ovirt.h<wbr>osted-engine-setup.0<br>
>> >> >> -chardev pty,id=charconsole0 -device<br>
>> >> >> virtconsole,chardev=charconsol<wbr>e0,id=console0 -spice<br>
>> >> >><br>
>> >> >><br>
>> >> >> tls-port=5904,addr=0,x509-dir=<wbr>/etc/pki/vdsm/libvirt-spice,tl<wbr>s-channel=default,seamless-mig<wbr>ration=on<br>
>> >> >> -device cirrus-vga,id=video0,bus=pci.0<wbr>,addr=0x2 -object<br>
>> >> >> rng-random,id=objrng0,filename<wbr>=/dev/urandom -device<br>
>> >> >> virtio-rng-pci,rng=objrng0,id=<wbr>rng0,bus=pci.0,addr=0x5 -msg<br>
>> >> >> timestamp=on<br>
>> >> >><br>
>> >> >> ==> /var/log/messages <==<br>
>> >> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
>> >> >> [1515772505.3689]
>> >> >> device (vnet4): state change: unmanaged -> unavailable (reason
>> >> >> 'connection-assumed') [10 20 41]
>> >> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
>> >> >> [1515772505.3702]
>> >> >> device (vnet4): state change: unavailable -> disconnected (reason
>> >> >> 'none')
>> >> >> [20 30 0]
>> >> >> Jan 12 11:55:05 cultivar0 systemd-machined: New machine<br>
>> >> >> qemu-118-Cultivar.<br>
>> >> >> Jan 12 11:55:05 cultivar0 systemd: Started Virtual Machine<br>
>> >> >> qemu-118-Cultivar.<br>
>> >> >> Jan 12 11:55:05 cultivar0 systemd: Starting Virtual Machine<br>
>> >> >> qemu-118-Cultivar.<br>
>> >> >><br>
>> >> >> ==> /var/log/libvirt/qemu/Cultivar<wbr>.log <==<br>
>> >> >> 2018-01-12T15:55:05.586827Z qemu-kvm: -chardev pty,id=charconsole0:<br>
>> >> >> char<br>
>> >> >> device redirected to /dev/pts/1 (label charconsole0)<br>
>> >> >><br>
>> >> >> ==> /var/log/messages <==<br>
>> >> >> Jan 12 11:55:05 cultivar0 kvm: 5 guests now active<br>
>> >> >><br>
>> >> >> On Fri, Jan 12, 2018 at 11:36 AM, Jayme <jaymef@gmail.com> wrote:
>> >> >>><br>
>> >> >>> Yeah I am in global maintenance:<br>
>> >> >>><br>
>> >> >>> state=GlobalMaintenance<br>
>> >> >>><br>
>> >> >>> host0: {"reason": "vm not running on this host", "health": "bad",<br>
>> >> >>> "vm":<br>
>> >> >>> "down", "detail": "unknown"}<br>
>> >> >>> host2: {"reason": "vm not running on this host", "health": "bad",<br>
>> >> >>> "vm":<br>
>> >> >>> "down", "detail": "unknown"}<br>
>> >> >>> host3: {"reason": "vm not running on this host", "health": "bad",<br>
>> >> >>> "vm":<br>
>> >> >>> "down", "detail": "unknown"}<br>
>> >> >>><br>
>> >> >>> I understand the lock is an issue; I'll try to make sure it is fully
>> >> >>> stopped on all three before starting, but I don't think that is the
>> >> >>> issue at hand either. What concerns me most is that it seems to be
>> >> >>> unable to read the metadata; I think that might be the heart of the
>> >> >>> problem, but I'm not sure what is causing it.
>> >> >>><br>
>> >> >>> On Fri, Jan 12, 2018 at 11:33 AM, Martin Sivak <msivak@redhat.com>
>> >> >>> wrote:
>> >> >>>><br>
>> >> >>>> > On all three hosts I ran hosted-engine --vm-shutdown;<br>
>> >> >>>> > hosted-engine<br>
>> >> >>>> > --vm-poweroff<br>
>> >> >>>><br>
>> >> >>>> Are you in global maintenance? I think you were in one of the<br>
>> >> >>>> previous<br>
>> >> >>>> emails, but worth checking.<br>
>> >> >>>><br>
>> >> >>>> > I started ovirt-ha-broker with systemctl as root user but it<br>
>> >> >>>> > does<br>
>> >> >>>> > appear to be running under vdsm:<br>
>> >> >>>><br>
>> >> >>>> That is the correct behavior.<br>
>> >> >>>><br>
>> >> >>>> > libvirtError: resource busy: Failed to acquire lock: Lease is<br>
>> >> >>>> > held<br>
>> >> >>>> > by<br>
>> >> >>>> > another host<br>
>> >> >>>><br>
>> >> >>>> sanlock seems to think the VM runs somewhere and it is possible<br>
>> >> >>>> that<br>
>> >> >>>> some other host tried to start the VM as well unless you are in<br>
>> >> >>>> global<br>
>> >> >>>> maintenance (that is why I asked the first question here).<br>
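>> >> >>>>
>> >> >>>> If you want to rule the other hosts out completely, a blunt sketch (run
>> >> >>>> on each host that should NOT be starting the VM):
>> >> >>>>
>> >> >>>> # systemctl stop ovirt-ha-agent ovirt-ha-broker
>> >> >>>> # virsh -r list    (make sure no stray HostedEngine/Cultivar domain is left running there)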
>> >> >>>><br>
>> >> >>>> Martin<br>
>> >> >>>><br>
>> >> >>>> On Fri, Jan 12, 2018 at 4:28 PM, Jayme <jaymef@gmail.com> wrote:
>> >> >>>> > Martin,<br>
>> >> >>>> ><br>
>> >> >>>> > Thanks so much for sticking with me, this is driving me crazy! I
>> >> >>>> > really do appreciate it, thanks again.
>> >> >>>> ><br>
>> >> >>>> > Let's go through this:<br>
>> >> >>>> ><br>
>> >> >>>> > HE VM is down - YES<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > HE agent fails when opening metadata using the symlink - YES<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > the symlink is there and readable by vdsm:kvm - it appears to<br>
>> >> >>>> > be:<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > lrwxrwxrwx. 1 vdsm kvm 159 Jan 10 21:20
>> >> >>>> > 14a20941-1b84-4b82-be8f-ace38d7c037a
>> >> >>>> > ->
>> >> >>>> >
>> >> >>>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine/248f46f0-d793-4581-9810-c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > And the files in the linked directory exist and have vdsm:kvm<br>
>> >> >>>> > perms<br>
>> >> >>>> > as<br>
>> >> >>>> > well:<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > # cd
>> >> >>>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine/248f46f0-d793-4581-9810-c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
>> >> >>>> ><br>
>> >> >>>> > [root@cultivar0 14a20941-1b84-4b82-be8f-ace38d7c037a]# ls -al
>> >> >>>> ><br>
>> >> >>>> > total 2040<br>
>> >> >>>> ><br>
>> >> >>>> > drwxr-xr-x. 2 vdsm kvm 4096 Jan 12 10:51 .<br>
>> >> >>>> ><br>
>> >> >>>> > drwxr-xr-x. 8 vdsm kvm 4096 Feb 3 2016 ..<br>
>> >> >>>> ><br>
>> >> >>>> > -rw-rw----. 1 vdsm kvm 1028096 Jan 12 11:19
>> >> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
>> >> >>>> >
>> >> >>>> > -rw-rw----. 1 vdsm kvm 1048576 Feb 3 2016
>> >> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.lease
>> >> >>>> >
>> >> >>>> > -rw-r--r--. 1 vdsm kvm 283 Feb 3 2016
>> >> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.meta
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > I started ovirt-ha-broker with systemctl as root user but it<br>
>> >> >>>> > does<br>
>> >> >>>> > appear to<br>
>> >> >>>> > be running under vdsm:<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > vdsm 16928 0.6 0.0 1618244 43328 ? Ssl 10:33 0:18
>> >> >>>> > /usr/bin/python
>> >> >>>> > /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > Here is something I tried:<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > - On all three hosts I ran hosted-engine --vm-shutdown;<br>
>> >> >>>> > hosted-engine<br>
>> >> >>>> > --vm-poweroff<br>
>> >> >>>> ><br>
>> >> >>>> > - On HOST0 (cultivar0) I disconnected and reconnected storage<br>
>> >> >>>> > using<br>
>> >> >>>> > hosted-engine<br>
>> >> >>>> ><br>
>> >> >>>> > - Tried starting up the hosted VM on cultivar0 while tailing the<br>
>> >> >>>> > logs:<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > # hosted-engine --vm-start<br>
>> >> >>>> ><br>
>> >> >>>> > VM exists and is down, cleaning up and restarting<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > ==> /var/log/vdsm/vdsm.log <==<br>
>> >> >>>> ><br>
>> >> >>>> > jsonrpc/2::ERROR::2018-01-12<br>
>> >> >>>> > 11:27:27,194::vm::1766::virt.v<wbr>m::(_getRunningVmStats)<br>
>> >> >>>> > (vmId='4013c829-c9d7-4b72-90d5<wbr>-6fe58137504c') Error fetching vm<br>
>> >> >>>> > stats<br>
>> >> >>>> ><br>
>> >> >>>> > Traceback (most recent call last):<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line<br>
>> >> >>>> > 1762,<br>
>> >> >>>> > in<br>
>> >> >>>> > _getRunningVmStats<br>
>> >> >>>> ><br>
>> >> >>>> > vm_sample.interval)<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vmstats.py",<br>
>> >> >>>> > line<br>
>> >> >>>> > 45, in<br>
>> >> >>>> > produce<br>
>> >> >>>> ><br>
>> >> >>>> > networks(vm, stats, first_sample, last_sample, interval)<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vmstats.py",<br>
>> >> >>>> > line<br>
>> >> >>>> > 322, in<br>
>> >> >>>> > networks<br>
>> >> >>>> ><br>
>> >> >>>> > if nic.name.startswith('hostdev')<wbr>:<br>
>> >> >>>> ><br>
>> >> >>>> > AttributeError: name<br>
>> >> >>>> ><br>
>> >> >>>> > jsonrpc/3::ERROR::2018-01-12<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > 11:27:27,221::__init__::611::j<wbr>sonrpc.JsonRpcServer::(_handle<wbr>_request)<br>
>> >> >>>> > Internal server error<br>
>> >> >>>> ><br>
>> >> >>>> > Traceback (most recent call last):<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/yajsonrpc/__init__.py",<br>
>> >> >>>> > line<br>
>> >> >>>> > 606,<br>
>> >> >>>> > in _handle_request<br>
>> >> >>>> ><br>
>> >> >>>> > res = method(**params)<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/rpc/Bridge.py",<br>
>> >> >>>> > line<br>
>> >> >>>> > 201, in<br>
>> >> >>>> > _dynamicMethod<br>
>> >> >>>> ><br>
>> >> >>>> > result = fn(*methodArgs)<br>
>> >> >>>> ><br>
>> >> >>>> > File "<string>", line 2, in getAllVmIoTunePolicies<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/common/api.py",<br>
>> >> >>>> > line<br>
>> >> >>>> > 48,<br>
>> >> >>>> > in<br>
>> >> >>>> > method<br>
>> >> >>>> ><br>
>> >> >>>> > ret = func(*args, **kwargs)<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/API.py", line<br>
>> >> >>>> > 1354,<br>
>> >> >>>> > in<br>
>> >> >>>> > getAllVmIoTunePolicies<br>
>> >> >>>> ><br>
>> >> >>>> > io_tune_policies_dict = self._cif.getAllVmIoTunePolici<wbr>es()<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/clientIF.py", line<br>
>> >> >>>> > 524,<br>
>> >> >>>> > in<br>
>> >> >>>> > getAllVmIoTunePolicies<br>
>> >> >>>> ><br>
>> >> >>>> > 'current_values': v.getIoTune()}<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line<br>
>> >> >>>> > 3481,<br>
>> >> >>>> > in<br>
>> >> >>>> > getIoTune<br>
>> >> >>>> ><br>
>> >> >>>> > result = self.getIoTuneResponse()<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line<br>
>> >> >>>> > 3500,<br>
>> >> >>>> > in<br>
>> >> >>>> > getIoTuneResponse<br>
>> >> >>>> ><br>
>> >> >>>> > res = self._dom.blockIoTune(<br>
>> >> >>>> ><br>
>> >> >>>> > File<br>
>> >> >>>> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/virdomain.py",<br>
>> >> >>>> > line<br>
>> >> >>>> > 47,<br>
>> >> >>>> > in __getattr__<br>
>> >> >>>> ><br>
>> >> >>>> > % self.vmid)<br>
>> >> >>>> ><br>
>> >> >>>> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58<wbr>137504c' was<br>
>> >> >>>> > not<br>
>> >> >>>> > defined<br>
>> >> >>>> > yet or was undefined<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > ==> /var/log/messages <==<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 journal: vdsm jsonrpc.JsonRpcServer<br>
>> >> >>>> > ERROR<br>
>> >> >>>> > Internal<br>
>> >> >>>> > server error#012Traceback (most recent call last):#012 File<br>
>> >> >>>> > "/usr/lib/python2.7/site-packa<wbr>ges/yajsonrpc/__init__.py", line<br>
>> >> >>>> > 606,<br>
>> >> >>>> > in<br>
>> >> >>>> > _handle_request#012 res = method(**params)#012 File<br>
>> >> >>>> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/rpc/Bridge.py", line 201,<br>
>> >> >>>> > in<br>
>> >> >>>> > _dynamicMethod#012 result = fn(*methodArgs)#012 File<br>
>> >> >>>> > "<string>",<br>
>> >> >>>> > line 2,<br>
>> >> >>>> > in getAllVmIoTunePolicies#012 File<br>
>> >> >>>> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/common/api.py", line 48,<br>
>> >> >>>> > in<br>
>> >> >>>> > method#012 ret = func(*args, **kwargs)#012 File<br>
>> >> >>>> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/API.py", line 1354, in<br>
>> >> >>>> > getAllVmIoTunePolicies#012 io_tune_policies_dict =<br>
>> >> >>>> > self._cif.getAllVmIoTunePolici<wbr>es()#012 File<br>
>> >> >>>> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/clientIF.py", line 524,<br>
>> >> >>>> > in<br>
>> >> >>>> > getAllVmIoTunePolicies#012 'current_values':<br>
>> >> >>>> > v.getIoTune()}#012<br>
>> >> >>>> > File<br>
>> >> >>>> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line 3481,<br>
>> >> >>>> > in<br>
>> >> >>>> > getIoTune#012 result = self.getIoTuneResponse()#012 File<br>
>> >> >>>> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line 3500,<br>
>> >> >>>> > in<br>
>> >> >>>> > getIoTuneResponse#012 res = self._dom.blockIoTune(#012 File<br>
>> >> >>>> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/virdomain.py", line<br>
>> >> >>>> > 47,<br>
>> >> >>>> > in<br>
>> >> >>>> > __getattr__#012 % self.vmid)#012NotConnectedErro<wbr>r: VM<br>
>> >> >>>> > '4013c829-c9d7-4b72-90d5-6fe58<wbr>137504c' was not defined yet or<br>
>> >> >>>> > was<br>
>> >> >>>> > undefined<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)<br>
>> >> >>>> > entered<br>
>> >> >>>> > blocking<br>
>> >> >>>> > state<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)<br>
>> >> >>>> > entered<br>
>> >> >>>> > disabled<br>
>> >> >>>> > state<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: device vnet4 entered<br>
>> >> >>>> > promiscuous<br>
>> >> >>>> > mode<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)<br>
>> >> >>>> > entered<br>
>> >> >>>> > blocking<br>
>> >> >>>> > state<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)<br>
>> >> >>>> > entered<br>
>> >> >>>> > forwarding state<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 lldpad: recvfrom(Event interface): No<br>
>> >> >>>> > buffer<br>
>> >> >>>> > space<br>
>> >> >>>> > available<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info><br>
>> >> >>>> > [1515770847.4264]<br>
>> >> >>>> > manager: (vnet4): new Tun device<br>
>> >> >>>> > (/org/freedesktop/NetworkManag<wbr>er/Devices/135)<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info><br>
>> >> >>>> > [1515770847.4342]<br>
>> >> >>>> > device (vnet4): state change: unmanaged -> unavailable (reason<br>
>> >> >>>> > 'connection-assumed') [10 20 41]<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info><br>
>> >> >>>> > [1515770847.4353]<br>
>> >> >>>> > device (vnet4): state change: unavailable -> disconnected<br>
>> >> >>>> > (reason<br>
>> >> >>>> > 'none')<br>
>> >> >>>> > [20 30 0]<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar<wbr>.log <==<br>
>> >> >>>> ><br>
>> >> >>>> > 2018-01-12 15:27:27.435+0000: starting up libvirt version:<br>
>> >> >>>> > 3.2.0,<br>
>> >> >>>> > package:<br>
>> >> >>>> > 14.el7_4.7 (CentOS BuildSystem <<a href="http://bugs.centos.org" rel="noreferrer" target="_blank">http://bugs.centos.org</a>>,<br>
>> >> >>>> > 2018-01-04-19:31:34, <a href="http://c1bm.rdu2.centos.org" rel="noreferrer" target="_blank">c1bm.rdu2.centos.org</a>), qemu version:<br>
>> >> >>>> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7<wbr>_4.13.1), hostname:<br>
>> >> >>>> > <a href="http://cultivar0.grove.silverorange.com" rel="noreferrer" target="_blank">cultivar0.grove.silverorange.c<wbr>om</a><br>
>> >> >>>> ><br>
>> >> >>>> > LC_ALL=C PATH=/usr/local/sbin:/usr/loca<wbr>l/bin:/usr/sbin:/usr/bin<br>
>> >> >>>> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name<br>
>> >> >>>> > guest=Cultivar,debug-threads=o<wbr>n -S -object<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > secret,id=masterKey0,format=ra<wbr>w,file=/var/lib/libvirt/qemu/d<wbr>omain-114-Cultivar/master-key.<wbr>aes<br>
>> >> >>>> > -machine<br>
>> >> >>>> > pc-i440fx-rhel7.3.0,accel=kvm,<wbr>usb=off,dump-guest-core=off<br>
>> >> >>>> > -cpu<br>
>> >> >>>> > Conroe -m 8192 -realtime mlock=off -smp<br>
>> >> >>>> > 2,maxcpus=16,sockets=16,cores=<wbr>1,threads=1 -uuid<br>
>> >> >>>> > 4013c829-c9d7-4b72-90d5-6fe581<wbr>37504c -smbios<br>
>> >> >>>> > 'type=1,manufacturer=oVirt,pro<wbr>duct=oVirt<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > Node,version=7-4.1708.el7.cent<wbr>os,serial=44454C4C-3300-1042-8<wbr>031-B4C04F4B4831,uuid=4013c829<wbr>-c9d7-4b72-90d5-6fe58137504c'<br>
>> >> >>>> > -no-user-config -nodefaults -chardev<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > socket,id=charmonitor,path=/va<wbr>r/lib/libvirt/qemu/domain-114-<wbr>Cultivar/monitor.sock,server,n<wbr>owait<br>
>> >> >>>> > -mon chardev=charmonitor,id=monitor<wbr>,mode=control -rtc<br>
>> >> >>>> > base=2018-01-12T15:27:27,drift<wbr>fix=slew -global<br>
>> >> >>>> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot<br>
>> >> >>>> > strict=on<br>
>> >> >>>> > -device<br>
>> >> >>>> > piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device<br>
>> >> >>>> > virtio-serial-pci,id=virtio-se<wbr>rial0,bus=pci.0,addr=0x4 -drive<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > file=/var/run/vdsm/storage/248<wbr>f46f0-d793-4581-9810-c9d965e2f<wbr>286/c2dde892-f978-4dfc-a421-c8<wbr>e04cf387f9/23aa0a66-fa6c-4967-<wbr>a1e5-fbe47c0cd705,format=raw,<wbr>if=none,id=drive-virtio-disk0,<wbr>serial=c2dde892-f978-4dfc-<wbr>a421-c8e04cf387f9,cache=none,<wbr>werror=stop,rerror=stop,aio=<wbr>threads<br>
>> >> >>>> > -device<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > virtio-blk-pci,scsi=off,bus=pc<wbr>i.0,addr=0x6,drive=drive-virti<wbr>o-disk0,id=virtio-disk0,bootin<wbr>dex=1<br>
>> >> >>>> > -drive if=none,id=drive-ide0-1-0,read<wbr>only=on -device<br>
>> >> >>>> > ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0 -netdev<br>
>> >> >>>> > tap,fd=35,id=hostnet0,vhost=on<wbr>,vhostfd=38 -device<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > virtio-net-pci,netdev=hostnet0<wbr>,id=net0,mac=00:16:3e:7f:d6:83<wbr>,bus=pci.0,addr=0x3<br>
>> >> >>>> > -chardev<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > socket,id=charchannel0,path=/v<wbr>ar/lib/libvirt/qemu/channels/4<wbr>013c829-c9d7-4b72-90d5-6fe5813<wbr>7504c.com.redhat.rhevm.vdsm,se<wbr>rver,nowait<br>
>> >> >>>> > -device<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > virtserialport,bus=virtio-seri<wbr>al0.0,nr=1,chardev=charchannel<wbr>0,id=channel0,name=com.redhat.<wbr>rhevm.vdsm<br>
>> >> >>>> > -chardev<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > socket,id=charchannel1,path=/v<wbr>ar/lib/libvirt/qemu/channels/4<wbr>013c829-c9d7-4b72-90d5-6fe5813<wbr>7504c.org.qemu.guest_agent.0,s<wbr>erver,nowait<br>
>> >> >>>> > -device<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > virtserialport,bus=virtio-seri<wbr>al0.0,nr=2,chardev=charchannel<wbr>1,id=channel1,name=<a href="http://org.qemu.gu">org.qemu.gu</a><wbr>est_agent.0<br>
>> >> >>>> > -chardev spicevmc,id=charchannel2,name=<wbr>vdagent -device<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > virtserialport,bus=virtio-seri<wbr>al0.0,nr=3,chardev=charchannel<wbr>2,id=channel2,name=com.redhat.<wbr>spice.0<br>
>> >> >>>> > -chardev<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > socket,id=charchannel3,path=/v<wbr>ar/lib/libvirt/qemu/channels/4<wbr>013c829-c9d7-4b72-90d5-6fe5813<wbr>7504c.org.ovirt.hosted-engine-<wbr>setup.0,server,nowait<br>
>> >> >>>> > -device<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > virtserialport,bus=virtio-seri<wbr>al0.0,nr=4,chardev=charchannel<wbr>3,id=channel3,name=org.ovirt.h<wbr>osted-engine-setup.0<br>
>> >> >>>> > -chardev pty,id=charconsole0 -device<br>
>> >> >>>> > virtconsole,chardev=charconsol<wbr>e0,id=console0 -spice<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > tls-port=5904,addr=0,x509-dir=<wbr>/etc/pki/vdsm/libvirt-spice,tl<wbr>s-channel=default,seamless-mig<wbr>ration=on<br>
>> >> >>>> > -device cirrus-vga,id=video0,bus=pci.0<wbr>,addr=0x2 -object<br>
>> >> >>>> > rng-random,id=objrng0,filename<wbr>=/dev/urandom -device<br>
>> >> >>>> > virtio-rng-pci,rng=objrng0,id=<wbr>rng0,bus=pci.0,addr=0x5 -msg<br>
>> >> >>>> > timestamp=on<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > ==> /var/log/messages <==<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 systemd-machined: New machine<br>
>> >> >>>> > qemu-114-Cultivar.<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 systemd: Started Virtual Machine<br>
>> >> >>>> > qemu-114-Cultivar.<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 systemd: Starting Virtual Machine<br>
>> >> >>>> > qemu-114-Cultivar.<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar<wbr>.log <==<br>
>> >> >>>> ><br>
>> >> >>>> > 2018-01-12T15:27:27.651669Z qemu-kvm: -chardev<br>
>> >> >>>> > pty,id=charconsole0:<br>
>> >> >>>> > char<br>
>> >> >>>> > device redirected to /dev/pts/2 (label charconsole0)<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > ==> /var/log/messages <==<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 kvm: 5 guests now active<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar<wbr>.log <==<br>
>> >> >>>> ><br>
>> >> >>>> > 2018-01-12 15:27:27.773+0000: shutting down, reason=failed<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > ==> /var/log/messages <==<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 libvirtd: 2018-01-12<br>
>> >> >>>> > 15:27:27.773+0000:<br>
>> >> >>>> > 1910:<br>
>> >> >>>> > error : virLockManagerSanlockAcquire:1<wbr>041 : resource busy:<br>
>> >> >>>> > Failed<br>
>> >> >>>> > to<br>
>> >> >>>> > acquire<br>
>> >> >>>> > lock: Lease is held by another host<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar<wbr>.log <==<br>
>> >> >>>> ><br>
>> >> >>>> > 2018-01-12T15:27:27.776135Z qemu-kvm: terminating on signal 15<br>
>> >> >>>> > from<br>
>> >> >>>> > pid 1773<br>
>> >> >>>> > (/usr/sbin/libvirtd)<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > ==> /var/log/messages <==<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)<br>
>> >> >>>> > entered<br>
>> >> >>>> > disabled<br>
>> >> >>>> > state<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: device vnet4 left promiscuous<br>
>> >> >>>> > mode<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)<br>
>> >> >>>> > entered<br>
>> >> >>>> > disabled<br>
>> >> >>>> > state<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info><br>
>> >> >>>> > [1515770847.7989]<br>
>> >> >>>> > device (vnet4): state change: disconnected -> unmanaged (reason<br>
>> >> >>>> > 'unmanaged')<br>
>> >> >>>> > [30 10 3]<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info><br>
>> >> >>>> > [1515770847.7989]<br>
>> >> >>>> > device (vnet4): released from master device ovirtmgmt<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 kvm: 4 guests now active<br>
>> >> >>>> ><br>
>> >> >>>> > Jan 12 11:27:27 cultivar0 systemd-machined: Machine<br>
>> >> >>>> > qemu-114-Cultivar<br>
>> >> >>>> > terminated.<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > ==> /var/log/vdsm/vdsm.log <==<br>
>> >> >>>> ><br>
>> >> >>>> > vm/4013c829::ERROR::2018-01-12<br>
>> >> >>>> > 11:27:28,001::vm::914::virt.vm<wbr>::(_startUnderlyingVm)<br>
>> >> >>>> > (vmId='4013c829-c9d7-4b72-90d5<wbr>-6fe58137504c') The vm start<br>
>> >> >>>> > process<br>
>> >> >>>> > failed<br>
>> >> >>>> ><br>
>> >> >>>> > Traceback (most recent call last):<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line<br>
>> >> >>>> > 843,<br>
>> >> >>>> > in<br>
>> >> >>>> > _startUnderlyingVm<br>
>> >> >>>> ><br>
>> >> >>>> > self._run()<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vm.py", line<br>
>> >> >>>> > 2721,<br>
>> >> >>>> > in<br>
>> >> >>>> > _run<br>
>> >> >>>> ><br>
>> >> >>>> > dom.createWithFlags(flags)<br>
>> >> >>>> ><br>
>> >> >>>> > File<br>
>> >> >>>> > "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/libvirtconnection.py"<wbr>,<br>
>> >> >>>> > line<br>
>> >> >>>> > 126, in wrapper<br>
>> >> >>>> ><br>
>> >> >>>> > ret = f(*args, **kwargs)<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/utils.py", line<br>
>> >> >>>> > 512,<br>
>> >> >>>> > in<br>
>> >> >>>> > wrapper<br>
>> >> >>>> ><br>
>> >> >>>> > return func(inst, *args, **kwargs)<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib64/python2.7/site-pac<wbr>kages/libvirt.py", line<br>
>> >> >>>> > 1069,<br>
>> >> >>>> > in<br>
>> >> >>>> > createWithFlags<br>
>> >> >>>> ><br>
>> >> >>>> > if ret == -1: raise libvirtError<br>
>> >> >>>> > ('virDomainCreateWithFlags()<br>
>> >> >>>> > failed',<br>
>> >> >>>> > dom=self)<br>
>> >> >>>> ><br>
>> >> >>>> > libvirtError: resource busy: Failed to acquire lock: Lease is<br>
>> >> >>>> > held<br>
>> >> >>>> > by<br>
>> >> >>>> > another host<br>
>> >> >>>> ><br>
>> >> >>>> > periodic/47::ERROR::2018-01-12<br>
>> >> >>>> > 11:27:32,858::periodic::215::v<wbr>irt.periodic.Operation::(__cal<wbr>l__)<br>
>> >> >>>> > <vdsm.virt.sampling.VMBulkstat<wbr>sMonitor object at 0x3692590><br>
>> >> >>>> > operation<br>
>> >> >>>> > failed<br>
>> >> >>>> ><br>
>> >> >>>> > Traceback (most recent call last):<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/periodic.py",<br>
>> >> >>>> > line<br>
>> >> >>>> > 213,<br>
>> >> >>>> > in __call__<br>
>> >> >>>> ><br>
>> >> >>>> > self._func()<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/sampling.py",<br>
>> >> >>>> > line<br>
>> >> >>>> > 522,<br>
>> >> >>>> > in __call__<br>
>> >> >>>> ><br>
>> >> >>>> > self._send_metrics()<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/sampling.py",<br>
>> >> >>>> > line<br>
>> >> >>>> > 538,<br>
>> >> >>>> > in _send_metrics<br>
>> >> >>>> ><br>
>> >> >>>> > vm_sample.interval)<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vmstats.py",<br>
>> >> >>>> > line<br>
>> >> >>>> > 45, in<br>
>> >> >>>> > produce<br>
>> >> >>>> ><br>
>> >> >>>> > networks(vm, stats, first_sample, last_sample, interval)<br>
>> >> >>>> ><br>
>> >> >>>> > File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/virt/vmstats.py",<br>
>> >> >>>> > line<br>
>> >> >>>> > 322, in<br>
>> >> >>>> > networks<br>
>> >> >>>> ><br>
>> >> >>>> > if nic.name.startswith('hostdev')<wbr>:<br>
>> >> >>>> ><br>
>> >> >>>> > AttributeError: name<br>
>> >> >>>> ><br>
>> >> >>>> ><br>
>> >> >>>> > On Fri, Jan 12, 2018 at 11:14 AM, Martin Sivak<br>
>> >> >>>> > <msivak@redhat.com>
>> >> >>>> > wrote:<br>
>> >> >>>> >><br>
>> >> >>>> >> Hmm that rules out most of NFS related permission issues.<br>
>> >> >>>> >><br>
>> >> >>>> >> So the current status is (I need to sum it up to get the full<br>
>> >> >>>> >> picture):<br>
>> >> >>>> >><br>
>> >> >>>> >> - HE VM is down<br>
>> >> >>>> >> - HE agent fails when opening metadata using the symlink<br>
>> >> >>>> >> - the symlink is there<br>
>> >> >>>> >> - the symlink is readable by vdsm:kvm<br>
>> >> >>>> >><br>
>> >> >>>> >> Hmm can you check under which user is ovirt-ha-broker started?<br>
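>> >> >>>> >><br>
>> >> >>>> >> (A quick way to check, assuming the usual systemd unit name; just a sketch:)<br>
>> >> >>>> >><br>
>> >> >>>> >> systemctl status ovirt-ha-broker | head -n 5<br>
>> >> >>>> >> ps aux | grep [o]virt-ha-broker<br>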
>> >> >>>> >><br>
>> >> >>>> >> Martin<br>
>> >> >>>> >><br>
>> >> >>>> >><br>
>> >> >>>> >> On Fri, Jan 12, 2018 at 4:10 PM, Jayme <<a href="mailto:jaymef@gmail.com" target="_blank">jaymef@gmail.com</a>><br>
>> >> >>>> >> wrote:<br>
>> >> >>>> >> > Same thing happens with data images of other VMs as well though, and those<br>
>> >> >>>> >> > seem to be running ok, so I'm not sure if it's the problem.<br>
>> >> >>>> >> ><br>
>> >> >>>> >> > On Fri, Jan 12, 2018 at 11:08 AM, Jayme <<a href="mailto:jaymef@gmail.com" target="_blank">jaymef@gmail.com</a>><br>
>> >> >>>> >> > wrote:<br>
>> >> >>>> >> >><br>
>> >> >>>> >> >> Martin,<br>
>> >> >>>> >> >><br>
>> >> >>>> >> >> I can as the VDSM user but not as root. I get permission denied trying to<br>
>> >> >>>> >> >> touch one of the files as root, is that normal?<br>
>> >> >>>> >> >><br>
>> >> >>>> >> >> On Fri, Jan 12, 2018 at 11:03 AM, Martin Sivak<br>
>> >> >>>> >> >> <<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>><br>
>> >> >>>> >> >> wrote:<br>
>> >> >>>> >> >>><br>
>> >> >>>> >> >>> Hmm, then it might be a permission issue indeed. Can you touch the file?<br>
>> >> >>>> >> >>> Open it? (try hexdump) Just to make sure NFS does not prevent you from doing that.<br>
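>> >> >>>> >> >>><br>
>> >> >>>> >> >>> (For example, as the vdsm user against the metadata file in question; only a sketch:)<br>
>> >> >>>> >> >>><br>
>> >> >>>> >> >>> sudo -u vdsm touch /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8<br>
>> >> >>>> >> >>> sudo -u vdsm hexdump -C /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8 | head<br>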
>> >> >>>> >> >>><br>
>> >> >>>> >> >>> Martin<br>
>> >> >>>> >> >>><br>
>> >> >>>> >> >>> On Fri, Jan 12, 2018 at 3:57 PM, Jayme <<a href="mailto:jaymef@gmail.com" target="_blank">jaymef@gmail.com</a>><br>
>> >> >>>> >> >>> wrote:<br>
>> >> >>>> >> >>> > Sorry, I think we got confused about the symlink. There are symlinks in<br>
>> >> >>>> >> >>> > /var/run that point to /rhev; when I was doing an ls it was listing the<br>
>> >> >>>> >> >>> > files in /rhev.<br>
>> >> >>>> >> >>> ><br>
>> >> >>>> >> >>> ><br>
>> >> >>>> >> >>> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286<br>
>> >> >>>> >> >>> > 14a20941-1b84-4b82-be8f-ace38d7c037a -> /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine/248f46f0-d793-4581-9810-c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a<br>
>> >> >>>> >> >>> ><br>
>> >> >>>> >> >>> > ls -al /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine/248f46f0-d793-4581-9810-c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a<br>
>> >> >>>> >> >>> > total 2040<br>
>> >> >>>> >> >>> > drwxr-xr-x. 2 vdsm kvm    4096 Jan 12 10:51 .<br>
>> >> >>>> >> >>> > drwxr-xr-x. 8 vdsm kvm    4096 Feb  3  2016 ..<br>
>> >> >>>> >> >>> > -rw-rw----. 1 vdsm kvm 1028096 Jan 12 10:56 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8<br>
>> >> >>>> >> >>> > -rw-rw----. 1 vdsm kvm 1048576 Feb  3  2016 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.lease<br>
>> >> >>>> >> >>> > -rw-r--r--. 1 vdsm kvm     283 Feb  3  2016 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.meta<br>
>> >> >>>> >> >>> ><br>
>> >> >>>> >> >>> > Is it possible that this is the wrong image for hosted<br>
>> >> >>>> >> >>> > engine?<br>
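>> >> >>>> >> >>> ><br>
>> >> >>>> >> >>> > (One way to cross-check which storage domain and image UUIDs the agent expects, assuming the standard config location:)<br>
>> >> >>>> >> >>> ><br>
>> >> >>>> >> >>> > grep -iE 'uuid|vm_disk' /etc/ovirt-hosted-engine/hosted-engine.conf<br>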
>> >> >>>> >> >>> ><br>
>> >> >>>> >> >>> > this is all I get in the vdsm log when running hosted-engine --connect-storage:<br>
>> >> >>>> >> >>> ><br>
>> >> >>>> >> >>> > jsonrpc/4::ERROR::2018-01-12 10:52:53,019::__init__::611::jsonrpc.JsonRpcServer::(_handle_request) Internal server error<br>
>> >> >>>> >> >>> > Traceback (most recent call last):<br>
>> >> >>>> >> >>> >   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request<br>
>> >> >>>> >> >>> >     res = method(**params)<br>
>> >> >>>> >> >>> >   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, in _dynamicMethod<br>
>> >> >>>> >> >>> >     result = fn(*methodArgs)<br>
>> >> >>>> >> >>> >   File "<string>", line 2, in getAllVmIoTunePolicies<br>
>> >> >>>> >> >>> >   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method<br>
>> >> >>>> >> >>> >     ret = func(*args, **kwargs)<br>
>> >> >>>> >> >>> >   File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354, in getAllVmIoTunePolicies<br>
>> >> >>>> >> >>> >     io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()<br>
>> >> >>>> >> >>> >   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 524, in getAllVmIoTunePolicies<br>
>> >> >>>> >> >>> >     'current_values': v.getIoTune()}<br>
>> >> >>>> >> >>> >   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3481, in getIoTune<br>
>> >> >>>> >> >>> >     result = self.getIoTuneResponse()<br>
>> >> >>>> >> >>> >   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3500, in getIoTuneResponse<br>
>> >> >>>> >> >>> >     res = self._dom.blockIoTune(<br>
>> >> >>>> >> >>> >   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__<br>
>> >> >>>> >> >>> >     % self.vmid)<br>
>> >> >>>> >> >>> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58137504c' was not defined yet or was undefined<br>
>> >> >>>> >> >>> ><br>
>> >> >>>> >> >>> > On Fri, Jan 12, 2018 at 10:48 AM, Martin Sivak<br>
>> >> >>>> >> >>> > <<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>><br>
>> >> >>>> >> >>> > wrote:<br>
>> >> >>>> >> >>> >><br>
>> >> >>>> >> >>> >> Hi,<br>
>> >> >>>> >> >>> >><br>
>> >> >>>> >> >>> >> what happens when you try hosted-engine --connect-storage?<br>
>> >> >>>> >> >>> >> Do you see any errors in the vdsm log?<br>
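>> >> >>>> >> >>> >><br>
>> >> >>>> >> >>> >> (For instance, run it and then look at the tail of the vdsm log; a sketch:)<br>
>> >> >>>> >> >>> >><br>
>> >> >>>> >> >>> >> hosted-engine --connect-storage<br>
>> >> >>>> >> >>> >> tail -n 100 /var/log/vdsm/vdsm.log<br>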
>> >> >>>> >> >>> >><br>
>> >> >>>> >> >>> >> Best regards<br>
>> >> >>>> >> >>> >><br>
>> >> >>>> >> >>> >> Martin Sivak<br>
>> >> >>>> >> >>> >><br>
>> >> >>>> >> >>> >> On Fri, Jan 12, 2018 at 3:41 PM, Jayme<br>
>> >> >>>> >> >>> >> <<a href="mailto:jaymef@gmail.com" target="_blank">jaymef@gmail.com</a>><br>
>> >> >>>> >> >>> >> wrote:<br>
>> >> >>>> >> >>> >> > Ok this is what I've done:<br>
>> >> >>>> >> >>> >> ><br>
>> >> >>>> >> >>> >> > - All three hosts in global maintenance mode<br>
>> >> >>>> >> >>> >> > - Ran: systemctl stop ovirt-ha-broker; systemctl stop ovirt-ha-broker -- on all three hosts<br>
>> >> >>>> >> >>> >> > - Moved ALL files in /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/ to /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/backup<br>
>> >> >>>> >> >>> >> > - Ran: systemctl start ovirt-ha-broker; systemctl start ovirt-ha-broker -- on all three hosts<br>
>> >> >>>> >> >>> >> ><br>
>> >> >>>> >> >>> >> > - attempt start of engine vm from HOST0 (cultivar0): hosted-engine --vm-start<br>
>> >> >>>> >> >>> >> ><br>
>> >> >>>> >> >>> >> > Lots of errors in the logs still; it appears to be having problems with that directory:<br>
>> >> >>>> >> >>> >> ><br>
>> >> >>>> >> >>> >> > Jan 12 10:40:13 cultivar0 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR Failed to write metadata for host 1 to /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8#012Traceback (most recent call last):#012  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 202, in put_stats#012    f = os.open(path, direct_flag | os.O_WRONLY | os.O_SYNC)#012OSError: [Errno 2] No such file or directory: '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'<br>
>> >> >>>> >> >>> >> ><br>
>> >> >>>> >> >>> >> > There are no new files or symlinks in<br>
>> >> >>>> >> >>> >> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/<br>
>> >> >>>> >> >>> >> ><br>
>> >> >>>> >> >>> >> > - Jayme<br>
>> >> >>>> >> >>> >> ><br>
>> >> >>>> >> >>> >> ><br>
>> >> >>>> >> >>> >> ><br>
>> >> >>>> >> >>> >> > On Fri, Jan 12, 2018 at 10:23 AM, Martin Sivak<br>
>> >> >>>> >> >>> >> > <<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>><br>
>> >> >>>> >> >>> >> > wrote:<br>
>> >> >>>> >> >>> >> >><br>
>> >> >>>> >> >>> >> >> > Can you please stop all hosted engine tooling (<br>
>> >> >>>> >> >>> >> >><br>
>> >> >>>> >> >>> >> >> On all hosts I should have added.<br>
>> >> >>>> >> >>> >> >><br>
>> >> >>>> >> >>> >> >> Martin<br>
>> >> >>>> >> >>> >> >><br>
>> >> >>>> >> >>> >> >> On Fri, Jan 12, 2018 at 3:22 PM, Martin Sivak<br>
>> >> >>>> >> >>> >> >> <<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>><br>
>> >> >>>> >> >>> >> >> wrote:<br>
>> >> >>>> >> >>> >> >> >> RequestError: failed to read metadata: [Errno 2] No such file or directory:<br>
>> >> >>>> >> >>> >> >> >> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> ls -al /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8<br>
>> >> >>>> >> >>> >> >> >> -rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59 /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> Is this due to the symlink problem you guys are referring to that was addressed in RC1 or something else?<br>
>> >> >>>> >> >>> >> >> ><br>
>> >> >>>> >> >>> >> >> > No, this file is the symlink. It should point to somewhere inside /rhev/.<br>
>> >> >>>> >> >>> >> >> > I see it is a 1G file in your case. That is really interesting.<br>
>> >> >>>> >> >>> >> >> ><br>
>> >> >>>> >> >>> >> >> > Can you please stop all hosted engine tooling (ovirt-ha-agent, ovirt-ha-broker),<br>
>> >> >>>> >> >>> >> >> > move the file (metadata file is not important when services are stopped, but<br>
>> >> >>>> >> >>> >> >> > better safe than sorry) and restart all services again?<br>
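>> >> >>>> >> >>> >> >> ><br>
>> >> >>>> >> >>> >> >> > (Roughly, on each host; the backup destination below is only an example name:)<br>
>> >> >>>> >> >>> >> >> ><br>
>> >> >>>> >> >>> >> >> > systemctl stop ovirt-ha-agent ovirt-ha-broker<br>
>> >> >>>> >> >>> >> >> > mv /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8 /root/he-metadata.bak<br>
>> >> >>>> >> >>> >> >> > systemctl start ovirt-ha-agent ovirt-ha-broker<br>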
>> >> >>>> >> >>> >> >> ><br>
>> >> >>>> >> >>> >> >> >> Could there possibly be a permissions<br>
>> >> >>>> >> >>> >> >> >> problem somewhere?<br>
>> >> >>>> >> >>> >> >> ><br>
>> >> >>>> >> >>> >> >> > Maybe, but the file itself looks out of the ordinary. I wonder how it got there.<br>
>> >> >>>> >> >>> >> >> ><br>
>> >> >>>> >> >>> >> >> > Best regards<br>
>> >> >>>> >> >>> >> >> ><br>
>> >> >>>> >> >>> >> >> > Martin Sivak<br>
>> >> >>>> >> >>> >> >> ><br>
>> >> >>>> >> >>> >> >> > On Fri, Jan 12, 2018 at 3:09 PM, Jayme<br>
>> >> >>>> >> >>> >> >> > <<a href="mailto:jaymef@gmail.com" target="_blank">jaymef@gmail.com</a>><br>
>> >> >>>> >> >>> >> >> > wrote:<br>
>> >> >>>> >> >>> >> >> >> Thanks for the help thus far. Storage could be related, but all other VMs on the<br>
>> >> >>>> >> >>> >> >> >> same storage are running ok. The storage is mounted via NFS from within one of<br>
>> >> >>>> >> >>> >> >> >> the three hosts; I realize this is not ideal. This was set up by a previous admin<br>
>> >> >>>> >> >>> >> >> >> more as a proof of concept, and VMs were put on there that should not have been<br>
>> >> >>>> >> >>> >> >> >> placed in a proof-of-concept environment. It was intended to be rebuilt with<br>
>> >> >>>> >> >>> >> >> >> proper storage down the road.<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> So the storage is on HOST0 and the other hosts mount NFS:<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/data          4861742080 1039352832 3822389248  22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_data<br>
>> >> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/iso           4861742080 1039352832 3822389248  22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_iso<br>
>> >> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/import_export 4861742080 1039352832 3822389248  22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_import__export<br>
>> >> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/hosted_engine 4861742080 1039352832 3822389248  22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> Like I said, the VM data storage itself seems to be working ok, as all other<br>
>> >> >>>> >> >>> >> >> >> VMs appear to be running.<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> I'm curious why the broker log says this file is not found when it is correct<br>
>> >> >>>> >> >>> >> >> >> and I can see the file at that path:<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> RequestError: failed to read metadata: [Errno 2] No such file or directory:<br>
>> >> >>>> >> >>> >> >> >> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> ls -al /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8<br>
>> >> >>>> >> >>> >> >> >> -rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59 /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> Is this due to the symlink problem you guys are referring to that was addressed<br>
>> >> >>>> >> >>> >> >> >> in RC1 or something else? Could there possibly be a permissions problem somewhere?<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> Assuming that all three hosts have 4.2 rpms installed and the hosted engine will<br>
>> >> >>>> >> >>> >> >> >> not start, is it safe for me to update the hosts to 4.2 RC1 rpms? Or perhaps<br>
>> >> >>>> >> >>> >> >> >> install that repo and *only* update the ovirt HA packages? Assuming that I cannot<br>
>> >> >>>> >> >>> >> >> >> yet apply the same updates to the inaccessible hosted engine VM.<br>
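>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> (If that route is taken, once the RC repo is enabled the update could be scoped to just the HA packages; a sketch, not a recommendation:)<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> yum update ovirt-hosted-engine-ha ovirt-hosted-engine-setup<br>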
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> I should also mention one more thing. I originally upgraded the engine VM first<br>
>> >> >>>> >> >>> >> >> >> using the new RPMs, then engine-setup. It failed due to not being in global<br>
>> >> >>>> >> >>> >> >> >> maintenance, so I set global maintenance and ran it again, which appeared to<br>
>> >> >>>> >> >>> >> >> >> complete as intended but the engine never came back up after. Just in case this<br>
>> >> >>>> >> >>> >> >> >> might have anything at all to do with what could have happened.<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> Thanks very much again, I very much appreciate the<br>
>> >> >>>> >> >>> >> >> >> help!<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> - Jayme<br>
>> >> >>>> >> >>> >> >> >><br>
>> >> >>>> >> >>> >> >> >> On Fri, Jan 12, 2018 at 8:44 AM, Simone Tiraboschi<br>
>> >> >>>> >> >>> >> >> >> <<a href="mailto:stirabos@redhat.com" target="_blank">stirabos@redhat.com</a>><br>
>> >> >>>> >> >>> >> >> >> wrote:<br>
>> >> >>>> >> >>> >> >> >>><br>
>> >> >>>> >> >>> >> >> >>><br>
>> >> >>>> >> >>> >> >> >>><br>
>> >> >>>> >> >>> >> >> >>> On Fri, Jan 12, 2018 at 11:11 AM, Martin Sivak<br>
>> >> >>>> >> >>> >> >> >>> <<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>><br>
>> >> >>>> >> >>> >> >> >>> wrote:<br>
>> >> >>>> >> >>> >> >> >>>><br>
>> >> >>>> >> >>> >> >> >>>> Hi,<br>
>> >> >>>> >> >>> >> >> >>>><br>
>> >> >>>> >> >>> >> >> >>>> the hosted engine agent issue might be fixed by restarting ovirt-ha-broker or<br>
>> >> >>>> >> >>> >> >> >>>> updating to newest ovirt-hosted-engine-ha and -setup. We improved handling of<br>
>> >> >>>> >> >>> >> >> >>>> the missing symlink.<br>
>> >> >>>> >> >>> >> >> >>><br>
>> >> >>>> >> >>> >> >> >>><br>
>> >> >>>> >> >>> >> >> >>> Available just in oVirt 4.2.1 RC1<br>
>> >> >>>> >> >>> >> >> >>><br>
>> >> >>>> >> >>> >> >> >>>><br>
>> >> >>>> >> >>> >> >> >>>><br>
>> >> >>>> >> >>> >> >> >>>> All the other issues seem to point to some storage problem, I am afraid.<br>
>> >> >>>> >> >>> >> >> >>>><br>
>> >> >>>> >> >>> >> >> >>>> You said you started the VM, do you see it in virsh -r list?<br>
>> >> >>>> >> >>> >> >> >>>><br>
>> >> >>>> >> >>> >> >> >>>> Best regards<br>
>> >> >>>> >> >>> >> >> >>>><br>
>> >> >>>> >> >>> >> >> >>>> Martin Sivak<br>
>> >> >>>> >> >>> >> >> >>>><br>
>> >> >>>> >> >>> >> >> >>>> On Thu, Jan 11, 2018 at 10:00 PM, Jayme<br>
>> >> >>>> >> >>> >> >> >>>> <<a href="mailto:jaymef@gmail.com" target="_blank">jaymef@gmail.com</a>><br>
>> >> >>>> >> >>> >> >> >>>> wrote:<br>
>> >> >>>> >> >>> >> >> >>>> > Please help, I'm really not sure what else to try at this point. Thank you for reading!<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > I'm still working on trying to get my hosted engine running after a botched<br>
>> >> >>>> >> >>> >> >> >>>> > upgrade to 4.2. Storage is NFS mounted from within one of the hosts. Right now I<br>
>> >> >>>> >> >>> >> >> >>>> > have 3 centos7 hosts that are fully updated with yum packages from ovirt 4.2; the<br>
>> >> >>>> >> >>> >> >> >>>> > engine was fully updated with yum packages and failed to come up after reboot. As<br>
>> >> >>>> >> >>> >> >> >>>> > of right now, everything should have full yum updates and all hosts have 4.2 rpms.<br>
>> >> >>>> >> >>> >> >> >>>> > I have global maintenance mode on right now and started hosted-engine on one of<br>
>> >> >>>> >> >>> >> >> >>>> > the three hosts, and the status is currently:<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > this is what I get when trying to enter hosted-engine --console:<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > The engine VM is running on this host<br>
>> >> >>>> >> >>> >> >> >>>> > error: failed to get domain 'HostedEngine'<br>
>> >> >>>> >> >>> >> >> >>>> > error: Domain not found: no domain with matching name 'HostedEngine'<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > Here are logs from various sources when I start the VM on HOST3:<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > hosted-engine --vm-start<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > Command VM.getStats with args {'vmID': '4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:<br>
>> >> >>>> >> >>> >> >> >>>> > (code=1, message=Virtual machine does not exist: {'vmId': u'4013c829-c9d7-4b72-90d5-6fe58137504c'})<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 systemd-machined: New machine qemu-110-Cultivar.<br>
>> >> >>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 systemd: Started Virtual Machine qemu-110-Cultivar.<br>
>> >> >>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 systemd: Starting Virtual Machine qemu-110-Cultivar.<br>
>> >> >>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 kvm: 3 guests now active<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > ==> /var/log/vdsm/vdsm.log <==<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method<br>
>> >> >>>> >> >>> >> >> >>>> >     ret = func(*args, **kwargs)<br>
>> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2718, in getStorageDomainInfo<br>
>> >> >>>> >> >>> >> >> >>>> >     dom = self.validateSdUUID(sdUUID)<br>
>> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 304, in validateSdUUID<br>
>> >> >>>> >> >>> >> >> >>>> >     sdDom.validate()<br>
>> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 515, in validate<br>
>> >> >>>> >> >>> >> >> >>>> >     raise se.StorageDomainAccessError(self.sdUUID)<br>
>> >> >>>> >> >>> >> >> >>>> > StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > jsonrpc/2::ERROR::2018-01-11 16:55:16,144::dispatcher::82::storage.Dispatcher::(wrapper) FINISH getStorageDomainInfo error=Domain is either partially accessible or entirely inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar<wbr>.log <==<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name guest=Cultivar,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-108-Cultivar/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Conroe -m 8192 -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios 'type=1,manufacturer=oVirt,product=oVirt Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c' -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-108-Cultivar/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2018-01-11T20:33:19,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -chardev socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > 2018-01-11T20:33:19.699999Z qemu-kvm: -chardev pty,id=charconsole0: char device redirected to /dev/pts/2 (label charconsole0)<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > 2018-01-11 20:38:11.640+0000: shutting down, reason=shutdown<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > 2018-01-11 20:39:02.122+0000: starting up libvirt version: 3.2.0, package: 14.el7_4.7 (CentOS BuildSystem <<a href="http://bugs.centos.org" rel="noreferrer" target="_blank">http://bugs.centos.org</a>>, 2018-01-04-19:31:34, <a href="http://c1bm.rdu2.centos.org" rel="noreferrer" target="_blank">c1bm.rdu2.centos.org</a>), qemu version: 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: cultivar3<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name guest=Cultivar,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-109-Cultivar/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Conroe -m 8192 -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios 'type=1,manufacturer=oVirt,product=oVirt Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c' -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-109-Cultivar/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2018-01-11T20:39:02,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -chardev socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > 2018-01-11T20:39:02.380773Z qemu-kvm: -chardev pty,id=charconsole0: char device redirected to /dev/pts/2 (label charconsole0)<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > 2018-01-11 20:53:11.407+0000: shutting down, reason=shutdown<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > 2018-01-11 20:55:57.210+0000: starting up libvirt version: 3.2.0, package: 14.el7_4.7 (CentOS BuildSystem <<a href="http://bugs.centos.org" rel="noreferrer" target="_blank">http://bugs.centos.org</a>>, 2018-01-04-19:31:34, <a href="http://c1bm.rdu2.centos.org" rel="noreferrer" target="_blank">c1bm.rdu2.centos.org</a>), qemu version: 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: <a href="http://cultivar3.grove.silverorange.com" rel="noreferrer" target="_blank">cultivar3.grove.silverorange.com</a><br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm<br>
>> >> >>>> >> >>> >> >> >>>> > -name guest=Cultivar,debug-threads=on -S<br>
>> >> >>>> >> >>> >> >> >>>> > -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-110-Cultivar/master-key.aes<br>
>> >> >>>> >> >>> >> >> >>>> > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off<br>
>> >> >>>> >> >>> >> >> >>>> > -cpu Conroe -m 8192 -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1<br>
>> >> >>>> >> >>> >> >> >>>> > -uuid 4013c829-c9d7-4b72-90d5-6fe58137504c<br>
>> >> >>>> >> >>> >> >> >>>> > -smbios 'type=1,manufacturer=oVirt,product=oVirt Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'<br>
>> >> >>>> >> >>> >> >> >>>> > -no-user-config -nodefaults<br>
>> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-110-Cultivar/monitor.sock,server,nowait<br>
>> >> >>>> >> >>> >> >> >>>> > -mon chardev=charmonitor,id=monitor,mode=control<br>
>> >> >>>> >> >>> >> >> >>>> > -rtc base=2018-01-11T20:55:57,driftfix=slew<br>
>> >> >>>> >> >>> >> >> >>>> > -global kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on<br>
>> >> >>>> >> >>> >> >> >>>> > -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2<br>
>> >> >>>> >> >>> >> >> >>>> > -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4<br>
>> >> >>>> >> >>> >> >> >>>> > -drive file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads<br>
>> >> >>>> >> >>> >> >> >>>> > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1<br>
>> >> >>>> >> >>> >> >> >>>> > -drive if=none,id=drive-ide0-1-0,readonly=on<br>
>> >> >>>> >> >>> >> >> >>>> > -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0<br>
>> >> >>>> >> >>> >> >> >>>> > -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=32<br>
>> >> >>>> >> >>> >> >> >>>> > -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3<br>
>> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait<br>
>> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm<br>
>> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait<br>
>> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0<br>
>> >> >>>> >> >>> >> >> >>>> > -chardev spicevmc,id=charchannel2,name=vdagent<br>
>> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0<br>
>> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait<br>
>> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0<br>
>> >> >>>> >> >>> >> >> >>>> > -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0<br>
>> >> >>>> >> >>> >> >> >>>> > -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on<br>
>> >> >>>> >> >>> >> >> >>>> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2<br>
>> >> >>>> >> >>> >> >> >>>> > -object rng-random,id=objrng0,filename=/dev/urandom<br>
>> >> >>>> >> >>> >> >> >>>> > -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5<br>
>> >> >>>> >> >>> >> >> >>>> > -msg timestamp=on<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > 2018-01-11T20:55:57.468037Z qemu-kvm: -chardev pty,id=charconsole0: char device redirected to /dev/pts/2 (label charconsole0)<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > ==> /var/log/ovirt-hosted-engine-ha/broker.log <==<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 151, in get_raw_stats<br>
>> >> >>>> >> >>> >> >> >>>> >     f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)<br>
>> >> >>>> >> >>> >> >> >>>> > OSError: [Errno 2] No such file or directory: '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > StatusStorageThread::ERROR::2018-01-11 16:55:15,761::status_broker::92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run) Failed to read state.<br>
>> >> >>>> >> >>> >> >> >>>> > Traceback (most recent call last):<br>
>> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 88, in run<br>
>> >> >>>> >> >>> >> >> >>>> >     self._storage_broker.get_raw_stats()<br>
>> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 162, in get_raw_stats<br>
>> >> >>>> >> >>> >> >> >>>> >     .format(str(e)))<br>
>> >> >>>> >> >>> >> >> >>>> > RequestError: failed to read metadata: [Errno 2] No such file or directory: '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
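[Editorial note, not part of the original logs] The broker error above means the HA metadata volume it reads through /var/run/vdsm/storage is unreachable. A minimal diagnostic sketch, assuming only the Python standard library, that tells apart a dangling symlink from a link that was never recreated (the path is copied from the OSError above):<br>
<pre>
import os

# Path the broker failed to open (copied from the OSError in broker.log)
path = ("/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/"
        "14a20941-1b84-4b82-be8f-ace38d7c037a/"
        "8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8")

if os.path.exists(path):
    print("path resolves; the broker should be able to open it")
elif os.path.lexists(path):
    # the directory entry exists but its target is gone => dangling symlink
    print("dangling symlink -> %s" % os.readlink(path))
else:
    print("no entry at all; the storage run-dir link was never recreated")
</pre>
<br>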
>> >> >>>> >> >>> >> >> >>>> > ==> /var/log/ovirt-hosted-engine-ha/agent.log <==<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> >     result = refresh_method()<br>
>> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 519, in refresh_vm_conf<br>
>> >> >>>> >> >>> >> >> >>>> >     content = self._get_file_content_from_shared_storage(VM)<br>
>> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 484, in _get_file_content_from_shared_storage<br>
>> >> >>>> >> >>> >> >> >>>> >     config_volume_path = self._get_config_volume_path()<br>
>> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 188, in _get_config_volume_path<br>
>> >> >>>> >> >>> >> >> >>>> >     conf_vol_uuid<br>
>> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/heconflib.py", line 358, in get_volume_path<br>
>> >> >>>> >> >>> >> >> >>>> >     root=envconst.SD_RUN_DIR,<br>
>> >> >>>> >> >>> >> >> >>>> > RuntimeError: Path to volume 4838749f-216d-406b-b245-98d0343fcf7f not found in /run/vdsm/storag<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
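[Editorial note, not part of the original logs] The agent fails one step earlier, while resolving the hosted-engine configuration volume: it cannot find volume 4838749f-216d-406b-b245-98d0343fcf7f under the storage run directory. A short illustrative sketch, assuming the standard /run/vdsm/storage layout, that searches the run directory for that UUID:<br>
<pre>
import os

# Volume UUID the agent reports as missing (from the RuntimeError above)
missing_volume = "4838749f-216d-406b-b245-98d0343fcf7f"
run_dir = "/run/vdsm/storage"

hits = []
for dirpath, dirnames, filenames in os.walk(run_dir):
    for name in dirnames + filenames:
        if name == missing_volume:
            hits.append(os.path.join(dirpath, name))

if hits:
    print("volume is linked at: %s" % ", ".join(hits))
else:
    print("volume %s is not linked anywhere under %s" % (missing_volume, run_dir))
</pre>
<br>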
>> >> >>>> >> >>> >> >> >>>> > ==> /var/log/vdsm/vdsm.log <==<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
>> >> >>>> >> >>> >> >> >>>> > periodic/42::ERROR::2018-01-11 16:56:11,446::vmstats::260::virt.vmstats::(send_metrics) VM metrics collection failed<br>
>> >> >>>> >> >>> >> >> >>>> > Traceback (most recent call last):<br>
>> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line 197, in send_metrics<br>
>> >> >>>> >> >>> >> >> >>>> >     data[prefix + '.cpu.usage'] = stat['cpuUsage']<br>
>> >> >>>> >> >>> >> >> >>>> > KeyError: 'cpuUsage'<br>
>> >> >>>> >> >>> >> >> >>>> ><br>
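[Editorial note, not part of the original logs] The vdsm.log KeyError appears to be a separate issue: send_metrics indexes stat['cpuUsage'] unconditionally, so a VM sample without that field aborts the whole metrics batch. A hedged sketch of the tolerant pattern, illustrative only and not the actual vdsm source:<br>
<pre>
# Illustrative only -- not the vdsm implementation.
def add_cpu_usage(prefix, stat, data):
    usage = stat.get('cpuUsage')   # None instead of KeyError when the field is absent
    if usage is not None:
        data[prefix + '.cpu.usage'] = usage
    return data

print(add_cpu_usage('vm.cultivar', {}, {}))                          # {} - metric skipped
print(add_cpu_usage('vm.cultivar', {'cpuUsage': '1250000000'}, {}))  # metric recorded
</pre>
<br>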
</blockquote></div><br></div>
</blockquote></div><br></div>
</blockquote></div><br></div>