Re: [ovirt-users] unable to bring up hosted engine after botched 4.2 upgrade
by Jayme
No luck, I'm afraid. It's very odd that I can't get a console to it when the
status is up and the VM is visible to virsh. Any clue?
Engine status : {"reason": "failed liveliness check",
"health": "bad", "vm": "up", "detail": "Up"}
# virsh -r list
Id Name State
----------------------------------------------------
118 Cultivar running
# hosted-engine --console
The engine VM is running on this host
error: failed to get domain 'HostedEngine'
error: Domain not found: no domain with matching name 'HostedEngine'
# hosted-engine --console 118
The engine VM is running on this host
error: failed to get domain 'HostedEngine'
error: Domain not found: no domain with matching name 'HostedEngine'
# hosted-engine --console Cultivar
The engine VM is running on this host
error: failed to get domain 'HostedEngine'
error: Domain not found: no domain with matching name 'HostedEngine'
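
One sanity check worth trying (a sketch; the UUID is taken from the vdsm.log
excerpts further down this thread): compare the UUID of the running "Cultivar"
domain with the one vdsm reports for the HE VM, and try the console under the
name libvirt actually uses:

# list domains read-only and dump the UUID of the one named Cultivar
virsh -r list --all
virsh -r dumpxml Cultivar | grep -i '<uuid>'
# compare with the HE VM UUID vdsm logs: 4013c829-c9d7-4b72-90d5-6fe58137504c
# hosted-engine --console looks for a domain literally named 'HostedEngine',
# so as a fallback try virsh directly (needs a read-write connection, so it
# may prompt for the vdsm SASL credentials):
virsh console Cultivar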
On Fri, Jan 12, 2018 at 2:05 PM, Martin Sivak <msivak(a)redhat.com> wrote:
> Try listing the domains with
>
> virsh -r list
>
> maybe it just has some weird name...
>
> Martin
>
> On Fri, Jan 12, 2018 at 6:56 PM, Jayme <jaymef(a)gmail.com> wrote:
> > I thought that it might be a good sign, but unfortunately I cannot access
> > it with the console :( If I could get console access to it, I might be able
> > to fix the problem. But seeing as how the console is also not working, I
> > believe there is a bigger issue at hand here.
> >
> > hosted-engine --console
> > The engine VM is running on this host
> > error: failed to get domain 'HostedEngine'
> > error: Domain not found: no domain with matching name 'HostedEngine'
> >
> > I really wonder if this is all a symlinking problem in some way. Is it
> > possible for me to upgrade the host to 4.2 RC2 without being able to
> > upgrade the engine first, or should I keep everything on 4.2 as it is?
> >
> > On Fri, Jan 12, 2018 at 1:49 PM, Martin Sivak <msivak(a)redhat.com> wrote:
> >>
> >> Hi,
> >>
> >> the VM is up according to the status (at least for a while). You
> >> should be able to use the console and diagnose anything that happened
> >> inside (like the need for fsck and such) now.
> >>
> >> Check the presence of those links again now; the metadata file content
> >> is not important, but the file has to exist (the agents will populate it
> >> with status data). I have no new idea about what is wrong with that,
> >> though.
> >>
> >> Best regards
> >>
> >> Martin
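
A minimal sketch of that check, using the storage-domain, image, and volume
UUIDs that appear in the broker traceback quoted below:

# confirm the symlink and the metadata file exist and are readable by vdsm
ls -l /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
sudo -u vdsm hexdump -C /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8 | head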
> >>
> >>
> >>
> >> On Fri, Jan 12, 2018 at 5:47 PM, Jayme <jaymef(a)gmail.com> wrote:
> >> > The lock space issue was something I needed to clear, but I don't
> >> > believe clearing it has resolved the problem. I shut down the agent and
> >> > broker on all hosts and disconnected hosted-storage, then enabled the
> >> > broker/agent on just one host and connected storage. I started the VM
> >> > and barely got any errors in the logs, which was good to see; however,
> >> > the VM is still not running:
> >> >
> >> > HOST3:
> >> >
> >> > Engine status : {"reason": "failed liveliness
> >> > check",
> >> > "health": "bad", "vm": "up", "detail": "Up"}
> >> >
> >> > ==> /var/log/messages <==
> >> > Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered
> >> > disabled
> >> > state
> >> > Jan 12 12:42:57 cultivar3 kernel: device vnet0 entered promiscuous
> mode
> >> > Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered
> >> > blocking
> >> > state
> >> > Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered
> >> > forwarding state
> >> > Jan 12 12:42:57 cultivar3 lldpad: recvfrom(Event interface): No buffer
> >> > space
> >> > available
> >> > Jan 12 12:42:57 cultivar3 systemd-machined: New machine
> >> > qemu-111-Cultivar.
> >> > Jan 12 12:42:57 cultivar3 systemd: Started Virtual Machine
> >> > qemu-111-Cultivar.
> >> > Jan 12 12:42:57 cultivar3 systemd: Starting Virtual Machine
> >> > qemu-111-Cultivar.
> >> > Jan 12 12:42:57 cultivar3 kvm: 3 guests now active
> >> > Jan 12 12:44:38 cultivar3 libvirtd: 2018-01-12 16:44:38.737+0000:
> 1535:
> >> > error : qemuDomainAgentAvailable:6010 : Guest agent is not responding:
> >> > QEMU
> >> > guest agent is not connected
> >> >
> >> > Interestingly though, now I'm seeing this in the logs, which may be a
> >> > new clue:
> >> >
> >> >
> >> > ==> /var/log/vdsm/vdsm.log <==
> >> > File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line
> >> > 126,
> >> > in findDomain
> >> > return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
> >> > File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line
> >> > 116,
> >> > in findDomainPath
> >> > raise se.StorageDomainDoesNotExist(sdUUID)
> >> > StorageDomainDoesNotExist: Storage domain does not exist:
> >> > (u'248f46f0-d793-4581-9810-c9d965e2f286',)
> >> > jsonrpc/4::ERROR::2018-01-12
> >> > 12:40:30,380::dispatcher::82::storage.Dispatcher::(wrapper) FINISH
> >> > getStorageDomainInfo error=Storage domain does not exist:
> >> > (u'248f46f0-d793-4581-9810-c9d965e2f286',)
> >> > periodic/42::ERROR::2018-01-12
> >> > 12:40:35,430::api::196::root::(_getHaInfo)
> >> > failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
> >> > directory'Is the Hosted Engine setup finished?
> >> > periodic/43::ERROR::2018-01-12
> >> > 12:40:50,473::api::196::root::(_getHaInfo)
> >> > failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
> >> > directory'Is the Hosted Engine setup finished?
> >> > periodic/40::ERROR::2018-01-12
> >> > 12:41:05,519::api::196::root::(_getHaInfo)
> >> > failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
> >> > directory'Is the Hosted Engine setup finished?
> >> > periodic/43::ERROR::2018-01-12
> >> > 12:41:20,566::api::196::root::(_getHaInfo)
> >> > failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
> >> > directory'Is the Hosted Engine setup finished?
> >> >
> >> > ==> /var/log/ovirt-hosted-engine-ha/broker.log <==
> >> > File
> >> >
> >> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/broker/storage_broker.py",
> >> > line 151, in get_raw_stats
> >> > f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
> >> > OSError: [Errno 2] No such file or directory:
> >> >
> >> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8'
> >> > StatusStorageThread::ERROR::2018-01-12
> >> >
> >> > 12:32:06,049::status_broker::92::ovirt_hosted_engine_ha.
> broker.status_broker.StatusBroker.Update::(run)
> >> > Failed to read state.
> >> > Traceback (most recent call last):
> >> > File
> >> >
> >> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/broker/status_broker.py",
> >> > line 88, in run
> >> > self._storage_broker.get_raw_stats()
> >> > File
> >> >
> >> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/broker/storage_broker.py",
> >> > line 162, in get_raw_stats
> >> > .format(str(e)))
> >> > RequestError: failed to read metadata: [Errno 2] No such file or
> >> > directory:
> >> >
> >> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8'
> >> >
> >> > On Fri, Jan 12, 2018 at 12:02 PM, Martin Sivak <msivak(a)redhat.com>
> >> > wrote:
> >> >>
> >> >> The lock is the issue.
> >> >>
> >> >> - try running sanlock client status on all hosts
> >> >> - also make sure you do not have some forgotten host still connected
> >> >> to the lockspace, but without ha daemons running (and with the VM)
> >> >>
> >> >> I need to go to our presidential election now; I might check email
> >> >> later tonight.
> >> >>
> >> >> Martin
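
A sketch of those sanlock checks, to be run on every host (host_status is an
assumption here; drop it if your sanlock build does not offer it):

# show this host's lockspaces and held resources
sanlock client status
# show which host IDs are still registered in each lockspace
sanlock client host_status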
> >> >>
> >> >> On Fri, Jan 12, 2018 at 4:59 PM, Jayme <jaymef(a)gmail.com> wrote:
> >> >> > Here are the newest logs from me trying to start hosted vm:
> >> >> >
> >> >> > ==> /var/log/messages <==
> >> >> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> > blocking
> >> >> > state
> >> >> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> > disabled
> >> >> > state
> >> >> > Jan 12 11:58:14 cultivar0 kernel: device vnet4 entered promiscuous
> >> >> > mode
> >> >> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> > blocking
> >> >> > state
> >> >> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> > forwarding state
> >> >> > Jan 12 11:58:14 cultivar0 lldpad: recvfrom(Event interface): No
> >> >> > buffer
> >> >> > space
> >> >> > available
> >> >> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info>
> >> >> > [1515772694.8715]
> >> >> > manager: (vnet4): new Tun device
> >> >> > (/org/freedesktop/NetworkManager/Devices/140)
> >> >> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info>
> >> >> > [1515772694.8795]
> >> >> > device (vnet4): state change: unmanaged -> unavailable (reason
> >> >> > 'connection-assumed') [10 20 41]
> >> >> >
> >> >> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> > 2018-01-12 15:58:14.879+0000: starting up libvirt version: 3.2.0,
> >> >> > package:
> >> >> > 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> >> >> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
> >> >> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> >> >> > cultivar0.grove.silverorange.com
> >> >> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> >> >> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >> >> > guest=Cultivar,debug-threads=on -S -object
> >> >> >
> >> >> >
> >> >> > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-119-Cultivar/master-key.aes
> >> >> > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
> >> >> > -cpu
> >> >> > Conroe -m 8192 -realtime mlock=off -smp
> >> >> > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >> >> > 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >> >> > 'type=1,manufacturer=oVirt,product=oVirt
> >> >> >
> >> >> >
> >> >> > Node,version=7-4.1708.el7.centos,serial=44454C4C-3300-
> 1042-8031-B4C04F4B4831,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> >> > -no-user-config -nodefaults -chardev
> >> >> >
> >> >> >
> >> >> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> 119-Cultivar/monitor.sock,server,nowait
> >> >> > -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >> >> > base=2018-01-12T15:58:14,driftfix=slew -global
> >> >> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> >> >> > -device
> >> >> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> >> >> > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> >> >> >
> >> >> >
> >> >> > file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-
> fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,
> serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,
> werror=stop,rerror=stop,aio=threads
> >> >> > -device
> >> >> >
> >> >> >
> >> >> > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1
> >> >> > -drive if=none,id=drive-ide0-1-0,readonly=on -device
> >> >> > ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> >> >> > tap,fd=35,id=hostnet0,vhost=on,vhostfd=38 -device
> >> >> >
> >> >> >
> >> >> > virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:
> 7f:d6:83,bus=pci.0,addr=0x3
> >> >> > -chardev
> >> >> >
> >> >> >
> >> >> > socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >> >> > -device
> >> >> >
> >> >> >
> >> >> > virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >> >> > -chardev
> >> >> >
> >> >> >
> >> >> > socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >> >> > -device
> >> >> >
> >> >> >
> >> >> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >> >> > -chardev spicevmc,id=charchannel2,name=vdagent -device
> >> >> >
> >> >> >
> >> >> > virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >> >> > -chardev
> >> >> >
> >> >> >
> >> >> > socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-
> engine-setup.0,server,nowait
> >> >> > -device
> >> >> >
> >> >> >
> >> >> > virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >> >> > -chardev pty,id=charconsole0 -device
> >> >> > virtconsole,chardev=charconsole0,id=console0 -spice
> >> >> >
> >> >> >
> >> >> > tls-port=5904,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,
> tls-channel=default,seamless-migration=on
> >> >> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> >> >> > rng-random,id=objrng0,filename=/dev/urandom -device
> >> >> > virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg
> >> >> > timestamp=on
> >> >> >
> >> >> > ==> /var/log/messages <==
> >> >> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info>
> >> >> > [1515772694.8807]
> >> >> > device (vnet4): state change: unavailable -> disconnected (reason
> >> >> > 'none')
> >> >> > [20 30 0]
> >> >> > Jan 12 11:58:14 cultivar0 systemd-machined: New machine
> >> >> > qemu-119-Cultivar.
> >> >> > Jan 12 11:58:14 cultivar0 systemd: Started Virtual Machine
> >> >> > qemu-119-Cultivar.
> >> >> > Jan 12 11:58:14 cultivar0 systemd: Starting Virtual Machine
> >> >> > qemu-119-Cultivar.
> >> >> >
> >> >> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> > 2018-01-12T15:58:15.094002Z qemu-kvm: -chardev pty,id=charconsole0:
> >> >> > char
> >> >> > device redirected to /dev/pts/1 (label charconsole0)
> >> >> >
> >> >> > ==> /var/log/messages <==
> >> >> > Jan 12 11:58:15 cultivar0 kvm: 5 guests now active
> >> >> >
> >> >> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> > 2018-01-12 15:58:15.217+0000: shutting down, reason=failed
> >> >> >
> >> >> > ==> /var/log/messages <==
> >> >> > Jan 12 11:58:15 cultivar0 libvirtd: 2018-01-12 15:58:15.217+0000:
> >> >> > 1908:
> >> >> > error : virLockManagerSanlockAcquire:1041 : resource busy: Failed
> to
> >> >> > acquire
> >> >> > lock: Lease is held by another host
> >> >> >
> >> >> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> > 2018-01-12T15:58:15.219934Z qemu-kvm: terminating on signal 15 from
> >> >> > pid
> >> >> > 1773
> >> >> > (/usr/sbin/libvirtd)
> >> >> >
> >> >> > ==> /var/log/messages <==
> >> >> > Jan 12 11:58:15 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> > disabled
> >> >> > state
> >> >> > Jan 12 11:58:15 cultivar0 kernel: device vnet4 left promiscuous
> mode
> >> >> > Jan 12 11:58:15 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> > disabled
> >> >> > state
> >> >> > Jan 12 11:58:15 cultivar0 NetworkManager[1092]: <info>
> >> >> > [1515772695.2348]
> >> >> > device (vnet4): state change: disconnected -> unmanaged (reason
> >> >> > 'unmanaged')
> >> >> > [30 10 3]
> >> >> > Jan 12 11:58:15 cultivar0 NetworkManager[1092]: <info>
> >> >> > [1515772695.2349]
> >> >> > device (vnet4): released from master device ovirtmgmt
> >> >> > Jan 12 11:58:15 cultivar0 kvm: 4 guests now active
> >> >> > Jan 12 11:58:15 cultivar0 systemd-machined: Machine
> qemu-119-Cultivar
> >> >> > terminated.
> >> >> >
> >> >> > ==> /var/log/vdsm/vdsm.log <==
> >> >> > vm/4013c829::ERROR::2018-01-12
> >> >> > 11:58:15,444::vm::914::virt.vm::(_startUnderlyingVm)
> >> >> > (vmId='4013c829-c9d7-4b72-90d5-6fe58137504c') The vm start process
> >> >> > failed
> >> >> > Traceback (most recent call last):
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 843,
> >> >> > in
> >> >> > _startUnderlyingVm
> >> >> > self._run()
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 2721,
> >> >> > in
> >> >> > _run
> >> >> > dom.createWithFlags(flags)
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/
> libvirtconnection.py",
> >> >> > line
> >> >> > 126, in wrapper
> >> >> > ret = f(*args, **kwargs)
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line
> 512, in
> >> >> > wrapper
> >> >> > return func(inst, *args, **kwargs)
> >> >> > File "/usr/lib64/python2.7/site-packages/libvirt.py", line
> 1069, in
> >> >> > createWithFlags
> >> >> > if ret == -1: raise libvirtError ('virDomainCreateWithFlags()
> >> >> > failed',
> >> >> > dom=self)
> >> >> > libvirtError: resource busy: Failed to acquire lock: Lease is held
> by
> >> >> > another host
> >> >> > jsonrpc/6::ERROR::2018-01-12
> >> >> > 11:58:16,421::__init__::611::jsonrpc.JsonRpcServer::(_
> handle_request)
> >> >> > Internal server error
> >> >> > Traceback (most recent call last):
> >> >> > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
> line
> >> >> > 606,
> >> >> > in _handle_request
> >> >> > res = method(**params)
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line
> >> >> > 201,
> >> >> > in
> >> >> > _dynamicMethod
> >> >> > result = fn(*methodArgs)
> >> >> > File "<string>", line 2, in getAllVmIoTunePolicies
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line
> >> >> > 48,
> >> >> > in
> >> >> > method
> >> >> > ret = func(*args, **kwargs)
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354,
> in
> >> >> > getAllVmIoTunePolicies
> >> >> > io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line
> 524,
> >> >> > in
> >> >> > getAllVmIoTunePolicies
> >> >> > 'current_values': v.getIoTune()}
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 3481,
> >> >> > in
> >> >> > getIoTune
> >> >> > result = self.getIoTuneResponse()
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 3500,
> >> >> > in
> >> >> > getIoTuneResponse
> >> >> > res = self._dom.blockIoTune(
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py",
> >> >> > line
> >> >> > 47,
> >> >> > in __getattr__
> >> >> > % self.vmid)
> >> >> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58137504c' was
> not
> >> >> > defined
> >> >> > yet or was undefined
> >> >> >
> >> >> > ==> /var/log/messages <==
> >> >> > Jan 12 11:58:16 cultivar0 journal: vdsm jsonrpc.JsonRpcServer ERROR
> >> >> > Internal
> >> >> > server error#012Traceback (most recent call last):#012 File
> >> >> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 606,
> >> >> > in
> >> >> > _handle_request#012 res = method(**params)#012 File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201,
> in
> >> >> > _dynamicMethod#012 result = fn(*methodArgs)#012 File
> "<string>",
> >> >> > line 2,
> >> >> > in getAllVmIoTunePolicies#012 File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> >> >> > method#012 ret = func(*args, **kwargs)#012 File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354, in
> >> >> > getAllVmIoTunePolicies#012 io_tune_policies_dict =
> >> >> > self._cif.getAllVmIoTunePolicies()#012 File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 524, in
> >> >> > getAllVmIoTunePolicies#012 'current_values': v.getIoTune()}#012
> >> >> > File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3481, in
> >> >> > getIoTune#012 result = self.getIoTuneResponse()#012 File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3500, in
> >> >> > getIoTuneResponse#012 res = self._dom.blockIoTune(#012 File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 47,
> >> >> > in
> >> >> > __getattr__#012 % self.vmid)#012NotConnectedError: VM
> >> >> > '4013c829-c9d7-4b72-90d5-6fe58137504c' was not defined yet or was
> >> >> > undefined
> >> >> >
> >> >> > On Fri, Jan 12, 2018 at 11:55 AM, Jayme <jaymef(a)gmail.com> wrote:
> >> >> >>
> >> >> >> One other tidbit I noticed is that there seem to be fewer errors
> >> >> >> when I start it in paused mode:
> >> >> >>
> >> >> >> but the status still shows: Engine status : {"reason": "bad vm
> >> >> >> status", "health": "bad", "vm": "up", "detail": "Paused"}
> >> >> >>
> >> >> >> ==> /var/log/messages <==
> >> >> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> >> blocking state
> >> >> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> >> disabled state
> >> >> >> Jan 12 11:55:05 cultivar0 kernel: device vnet4 entered promiscuous
> >> >> >> mode
> >> >> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> >> blocking state
> >> >> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> >> forwarding state
> >> >> >> Jan 12 11:55:05 cultivar0 lldpad: recvfrom(Event interface): No
> >> >> >> buffer
> >> >> >> space available
> >> >> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
> >> >> >> [1515772505.3625]
> >> >> >> manager: (vnet4): new Tun device
> >> >> >> (/org/freedesktop/NetworkManager/Devices/139)
> >> >> >>
> >> >> >> ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >> 2018-01-12 15:55:05.370+0000: starting up libvirt version: 3.2.0,
> >> >> >> package:
> >> >> >> 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> >> >> >> 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
> >> >> >> 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> >> >> >> cultivar0.grove.silverorange.com
> >> >> >> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> >> >> >> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >> >> >> guest=Cultivar,debug-threads=on -S -object
> >> >> >>
> >> >> >>
> >> >> >> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-118-Cultivar/master-key.aes
> >> >> >> -machine pc-i440fx-rhel7.3.0,accel=kvm,
> usb=off,dump-guest-core=off
> >> >> >> -cpu
> >> >> >> Conroe -m 8192 -realtime mlock=off -smp
> >> >> >> 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >> >> >> 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >> >> >> 'type=1,manufacturer=oVirt,product=oVirt
> >> >> >>
> >> >> >>
> >> >> >> Node,version=7-4.1708.el7.centos,serial=44454C4C-3300-
> 1042-8031-B4C04F4B4831,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> >> >> -no-user-config -nodefaults -chardev
> >> >> >>
> >> >> >>
> >> >> >> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> 118-Cultivar/monitor.sock,server,nowait
> >> >> >> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >> >> >> base=2018-01-12T15:55:05,driftfix=slew -global
> >> >> >> kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> >> >> >> -device
> >> >> >> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> >> >> >> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> >> >> >>
> >> >> >>
> >> >> >> file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-
> fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,
> serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,
> werror=stop,rerror=stop,aio=threads
> >> >> >> -device
> >> >> >>
> >> >> >>
> >> >> >> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1
> >> >> >> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> >> >> >> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> >> >> >> tap,fd=35,id=hostnet0,vhost=on,vhostfd=38 -device
> >> >> >>
> >> >> >>
> >> >> >> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:
> 7f:d6:83,bus=pci.0,addr=0x3
> >> >> >> -chardev
> >> >> >>
> >> >> >>
> >> >> >> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >> >> >> -device
> >> >> >>
> >> >> >>
> >> >> >> virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >> >> >> -chardev
> >> >> >>
> >> >> >>
> >> >> >> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >> >> >> -device
> >> >> >>
> >> >> >>
> >> >> >> virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >> >> >> -chardev spicevmc,id=charchannel2,name=vdagent -device
> >> >> >>
> >> >> >>
> >> >> >> virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >> >> >> -chardev
> >> >> >>
> >> >> >>
> >> >> >> socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-
> engine-setup.0,server,nowait
> >> >> >> -device
> >> >> >>
> >> >> >>
> >> >> >> virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >> >> >> -chardev pty,id=charconsole0 -device
> >> >> >> virtconsole,chardev=charconsole0,id=console0 -spice
> >> >> >>
> >> >> >>
> >> >> >> tls-port=5904,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,
> tls-channel=default,seamless-migration=on
> >> >> >> -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> >> >> >> rng-random,id=objrng0,filename=/dev/urandom -device
> >> >> >> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg
> >> >> >> timestamp=on
> >> >> >>
> >> >> >> ==> /var/log/messages <==
> >> >> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
> >> >> >> [1515772505.3689]
> >> >> >> device (vnet4): state change: unmanaged -> unavailable (reason
> >> >> >> 'connection-assumed') [10 20 41]
> >> >> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
> >> >> >> [1515772505.3702]
> >> >> >> device (vnet4): state change: unavailable -> disconnected (reason
> >> >> >> 'none')
> >> >> >> [20 30 0]
> >> >> >> Jan 12 11:55:05 cultivar0 systemd-machined: New machine
> >> >> >> qemu-118-Cultivar.
> >> >> >> Jan 12 11:55:05 cultivar0 systemd: Started Virtual Machine
> >> >> >> qemu-118-Cultivar.
> >> >> >> Jan 12 11:55:05 cultivar0 systemd: Starting Virtual Machine
> >> >> >> qemu-118-Cultivar.
> >> >> >>
> >> >> >> ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >> 2018-01-12T15:55:05.586827Z qemu-kvm: -chardev
> pty,id=charconsole0:
> >> >> >> char
> >> >> >> device redirected to /dev/pts/1 (label charconsole0)
> >> >> >>
> >> >> >> ==> /var/log/messages <==
> >> >> >> Jan 12 11:55:05 cultivar0 kvm: 5 guests now active
> >> >> >>
> >> >> >> On Fri, Jan 12, 2018 at 11:36 AM, Jayme <jaymef(a)gmail.com> wrote:
> >> >> >>>
> >> >> >>> Yeah I am in global maintenance:
> >> >> >>>
> >> >> >>> state=GlobalMaintenance
> >> >> >>>
> >> >>> host0: {"reason": "vm not running on this host", "health": "bad",
> >> >>> "vm": "down", "detail": "unknown"}
> >> >>> host2: {"reason": "vm not running on this host", "health": "bad",
> >> >>> "vm": "down", "detail": "unknown"}
> >> >>> host3: {"reason": "vm not running on this host", "health": "bad",
> >> >>> "vm": "down", "detail": "unknown"}
> >> >> >>>
> >> >>> I understand the lock is an issue; I'll try to make sure it is fully
> >> >>> stopped on all three hosts before starting, but I don't think that is
> >> >>> the issue at hand either. What concerns me most is that it seems to be
> >> >>> unable to read the metadata. I think that might be the heart of the
> >> >>> problem, but I'm not sure what is causing it.
> >> >> >>>
> >> >> >>> On Fri, Jan 12, 2018 at 11:33 AM, Martin Sivak <
> msivak(a)redhat.com>
> >> >> >>> wrote:
> >> >> >>>>
> >> >> >>>> > On all three hosts I ran hosted-engine --vm-shutdown;
> >> >> >>>> > hosted-engine
> >> >> >>>> > --vm-poweroff
> >> >> >>>>
> >> >> >>>> Are you in global maintenance? I think you were in one of the
> >> >> >>>> previous
> >> >> >>>> emails, but worth checking.
> >> >> >>>>
> >> >>>> > I started ovirt-ha-broker with systemctl as the root user, but it
> >> >>>> > does appear to be running under vdsm:
> >> >> >>>>
> >> >> >>>> That is the correct behavior.
> >> >> >>>>
> >> >> >>>> > libvirtError: resource busy: Failed to acquire lock: Lease is
> >> >> >>>> > held
> >> >> >>>> > by
> >> >> >>>> > another host
> >> >> >>>>
> >> >> >>>> sanlock seems to think the VM runs somewhere and it is possible
> >> >> >>>> that
> >> >> >>>> some other host tried to start the VM as well unless you are in
> >> >> >>>> global
> >> >> >>>> maintenance (that is why I asked the first question here).
> >> >> >>>>
> >> >> >>>> Martin
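
A sketch of the usual way to verify and force that state from any host:

# show the maintenance state and each host's view of the engine VM
hosted-engine --vm-status
# enable global maintenance explicitly if it is not already set
hosted-engine --set-maintenance --mode=global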
> >> >> >>>>
> >> >> >>>> On Fri, Jan 12, 2018 at 4:28 PM, Jayme <jaymef(a)gmail.com>
> wrote:
> >> >> >>>> > Martin,
> >> >> >>>> >
> >> >>>> > Thanks so much for sticking with me, this is driving me crazy! I
> >> >>>> > really do appreciate it, thanks again.
> >> >> >>>> >
> >> >> >>>> > Let's go through this:
> >> >> >>>> >
> >> >> >>>> > HE VM is down - YES
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > HE agent fails when opening metadata using the symlink - YES
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > the symlink is there and readable by vdsm:kvm - it appears to
> >> >> >>>> > be:
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > lrwxrwxrwx. 1 vdsm kvm 159 Jan 10 21:20
> >> >> >>>> > 14a20941-1b84-4b82-be8f-ace38d7c037a
> >> >> >>>> > ->
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine/248f46f0-d793-4581-9810-
> c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > And the files in the linked directory exist and have vdsm:kvm
> >> >> >>>> > perms
> >> >> >>>> > as
> >> >> >>>> > well:
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > # cd
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine/248f46f0-d793-4581-9810-
> c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >> >> >>>> >
> >> >> >>>> > [root@cultivar0 14a20941-1b84-4b82-be8f-ace38d7c037a]# ls -al
> >> >> >>>> >
> >> >> >>>> > total 2040
> >> >> >>>> >
> >> >> >>>> > drwxr-xr-x. 2 vdsm kvm 4096 Jan 12 10:51 .
> >> >> >>>> >
> >> >> >>>> > drwxr-xr-x. 8 vdsm kvm 4096 Feb 3 2016 ..
> >> >> >>>> >
> >> >> >>>> > -rw-rw----. 1 vdsm kvm 1028096 Jan 12 11:19
> >> >> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >> >> >>>> >
> >> >> >>>> > -rw-rw----. 1 vdsm kvm 1048576 Feb 3 2016
> >> >> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.lease
> >> >> >>>> >
> >> >> >>>> > -rw-r--r--. 1 vdsm kvm 283 Feb 3 2016
> >> >> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.meta
> >> >> >>>> >
> >> >> >>>> >
> >> >>>> > I started ovirt-ha-broker with systemctl as the root user, but it
> >> >>>> > does appear to be running under vdsm:
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > vdsm 16928 0.6 0.0 1618244 43328 ? Ssl 10:33
> 0:18
> >> >> >>>> > /usr/bin/python
> >> >> >>>> > /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > Here is something I tried:
> >> >> >>>> >
> >> >> >>>> >
> >> >>>> > - On all three hosts I ran hosted-engine --vm-shutdown;
> >> >>>> > hosted-engine --vm-poweroff
> >> >>>> >
> >> >>>> > - On HOST0 (cultivar0) I disconnected and reconnected storage
> >> >>>> > using hosted-engine
> >> >>>> >
> >> >>>> > - Tried starting up the hosted VM on cultivar0 while tailing the
> >> >>>> > logs:
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > # hosted-engine --vm-start
> >> >> >>>> >
> >> >> >>>> > VM exists and is down, cleaning up and restarting
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/vdsm/vdsm.log <==
> >> >> >>>> >
> >> >> >>>> > jsonrpc/2::ERROR::2018-01-12
> >> >> >>>> > 11:27:27,194::vm::1766::virt.vm::(_getRunningVmStats)
> >> >> >>>> > (vmId='4013c829-c9d7-4b72-90d5-6fe58137504c') Error fetching
> vm
> >> >> >>>> > stats
> >> >> >>>> >
> >> >> >>>> > Traceback (most recent call last):
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >> >> >>>> > 1762,
> >> >> >>>> > in
> >> >> >>>> > _getRunningVmStats
> >> >> >>>> >
> >> >> >>>> > vm_sample.interval)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-
> packages/vdsm/virt/vmstats.py",
> >> >> >>>> > line
> >> >> >>>> > 45, in
> >> >> >>>> > produce
> >> >> >>>> >
> >> >> >>>> > networks(vm, stats, first_sample, last_sample, interval)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-
> packages/vdsm/virt/vmstats.py",
> >> >> >>>> > line
> >> >> >>>> > 322, in
> >> >> >>>> > networks
> >> >> >>>> >
> >> >> >>>> > if nic.name.startswith('hostdev'):
> >> >> >>>> >
> >> >> >>>> > AttributeError: name
> >> >> >>>> >
> >> >> >>>> > jsonrpc/3::ERROR::2018-01-12
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > 11:27:27,221::__init__::611::jsonrpc.JsonRpcServer::(_
> handle_request)
> >> >> >>>> > Internal server error
> >> >> >>>> >
> >> >> >>>> > Traceback (most recent call last):
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.
> py",
> >> >> >>>> > line
> >> >> >>>> > 606,
> >> >> >>>> > in _handle_request
> >> >> >>>> >
> >> >> >>>> > res = method(**params)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py",
> >> >> >>>> > line
> >> >> >>>> > 201, in
> >> >> >>>> > _dynamicMethod
> >> >> >>>> >
> >> >> >>>> > result = fn(*methodArgs)
> >> >> >>>> >
> >> >> >>>> > File "<string>", line 2, in getAllVmIoTunePolicies
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py",
> >> >> >>>> > line
> >> >> >>>> > 48,
> >> >> >>>> > in
> >> >> >>>> > method
> >> >> >>>> >
> >> >> >>>> > ret = func(*args, **kwargs)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/API.py", line
> >> >> >>>> > 1354,
> >> >> >>>> > in
> >> >> >>>> > getAllVmIoTunePolicies
> >> >> >>>> >
> >> >> >>>> > io_tune_policies_dict = self._cif.
> getAllVmIoTunePolicies()
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py",
> line
> >> >> >>>> > 524,
> >> >> >>>> > in
> >> >> >>>> > getAllVmIoTunePolicies
> >> >> >>>> >
> >> >> >>>> > 'current_values': v.getIoTune()}
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >> >> >>>> > 3481,
> >> >> >>>> > in
> >> >> >>>> > getIoTune
> >> >> >>>> >
> >> >> >>>> > result = self.getIoTuneResponse()
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >> >> >>>> > 3500,
> >> >> >>>> > in
> >> >> >>>> > getIoTuneResponse
> >> >> >>>> >
> >> >> >>>> > res = self._dom.blockIoTune(
> >> >> >>>> >
> >> >> >>>> > File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py",
> >> >> >>>> > line
> >> >> >>>> > 47,
> >> >> >>>> > in __getattr__
> >> >> >>>> >
> >> >> >>>> > % self.vmid)
> >> >> >>>> >
> >> >> >>>> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58137504c'
> was
> >> >> >>>> > not
> >> >> >>>> > defined
> >> >> >>>> > yet or was undefined
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/messages <==
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 journal: vdsm jsonrpc.JsonRpcServer
> >> >> >>>> > ERROR
> >> >> >>>> > Internal
> >> >> >>>> > server error#012Traceback (most recent call last):#012 File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
> line
> >> >> >>>> > 606,
> >> >> >>>> > in
> >> >> >>>> > _handle_request#012 res = method(**params)#012 File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line
> 201,
> >> >> >>>> > in
> >> >> >>>> > _dynamicMethod#012 result = fn(*methodArgs)#012 File
> >> >> >>>> > "<string>",
> >> >> >>>> > line 2,
> >> >> >>>> > in getAllVmIoTunePolicies#012 File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line
> 48,
> >> >> >>>> > in
> >> >> >>>> > method#012 ret = func(*args, **kwargs)#012 File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354, in
> >> >> >>>> > getAllVmIoTunePolicies#012 io_tune_policies_dict =
> >> >> >>>> > self._cif.getAllVmIoTunePolicies()#012 File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line
> 524,
> >> >> >>>> > in
> >> >> >>>> > getAllVmIoTunePolicies#012 'current_values':
> >> >> >>>> > v.getIoTune()}#012
> >> >> >>>> > File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 3481,
> >> >> >>>> > in
> >> >> >>>> > getIoTune#012 result = self.getIoTuneResponse()#012 File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 3500,
> >> >> >>>> > in
> >> >> >>>> > getIoTuneResponse#012 res = self._dom.blockIoTune(#012
> File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py",
> line
> >> >> >>>> > 47,
> >> >> >>>> > in
> >> >> >>>> > __getattr__#012 % self.vmid)#012NotConnectedError: VM
> >> >> >>>> > '4013c829-c9d7-4b72-90d5-6fe58137504c' was not defined yet or
> >> >> >>>> > was
> >> >> >>>> > undefined
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)
> >> >> >>>> > entered
> >> >> >>>> > blocking
> >> >> >>>> > state
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)
> >> >> >>>> > entered
> >> >> >>>> > disabled
> >> >> >>>> > state
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: device vnet4 entered
> >> >> >>>> > promiscuous
> >> >> >>>> > mode
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)
> >> >> >>>> > entered
> >> >> >>>> > blocking
> >> >> >>>> > state
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)
> >> >> >>>> > entered
> >> >> >>>> > forwarding state
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 lldpad: recvfrom(Event interface):
> No
> >> >> >>>> > buffer
> >> >> >>>> > space
> >> >> >>>> > available
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >> >> >>>> > [1515770847.4264]
> >> >> >>>> > manager: (vnet4): new Tun device
> >> >> >>>> > (/org/freedesktop/NetworkManager/Devices/135)
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >> >> >>>> > [1515770847.4342]
> >> >> >>>> > device (vnet4): state change: unmanaged -> unavailable (reason
> >> >> >>>> > 'connection-assumed') [10 20 41]
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >> >> >>>> > [1515770847.4353]
> >> >> >>>> > device (vnet4): state change: unavailable -> disconnected
> >> >> >>>> > (reason
> >> >> >>>> > 'none')
> >> >> >>>> > [20 30 0]
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >>>> >
> >> >> >>>> > 2018-01-12 15:27:27.435+0000: starting up libvirt version:
> >> >> >>>> > 3.2.0,
> >> >> >>>> > package:
> >> >> >>>> > 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> >> >> >>>> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
> >> >> >>>> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> >> >> >>>> > cultivar0.grove.silverorange.com
> >> >> >>>> >
> >> >> >>>> > LC_ALL=C PATH=/usr/local/sbin:/usr/
> local/bin:/usr/sbin:/usr/bin
> >> >> >>>> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >> >> >>>> > guest=Cultivar,debug-threads=on -S -object
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-114-Cultivar/master-key.aes
> >> >> >>>> > -machine
> >> >> >>>> > pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
> >> >> >>>> > -cpu
> >> >> >>>> > Conroe -m 8192 -realtime mlock=off -smp
> >> >> >>>> > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >> >> >>>> > 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >> >> >>>> > 'type=1,manufacturer=oVirt,product=oVirt
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > Node,version=7-4.1708.el7.centos,serial=44454C4C-3300-
> 1042-8031-B4C04F4B4831,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> >> >>>> > -no-user-config -nodefaults -chardev
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> 114-Cultivar/monitor.sock,server,nowait
> >> >> >>>> > -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >> >> >>>> > base=2018-01-12T15:27:27,driftfix=slew -global
> >> >> >>>> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot
> >> >> >>>> > strict=on
> >> >> >>>> > -device
> >> >> >>>> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> >> >> >>>> > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-
> fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,
> serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,
> werror=stop,rerror=stop,aio=threads
> >> >> >>>> > -device
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1
> >> >> >>>> > -drive if=none,id=drive-ide0-1-0,readonly=on -device
> >> >> >>>> > ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -netdev
> >> >> >>>> > tap,fd=35,id=hostnet0,vhost=on,vhostfd=38 -device
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:
> 7f:d6:83,bus=pci.0,addr=0x3
> >> >> >>>> > -chardev
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >> >> >>>> > -device
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >> >> >>>> > -chardev
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >> >> >>>> > -device
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >> >> >>>> > -chardev spicevmc,id=charchannel2,name=vdagent -device
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >> >> >>>> > -chardev
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-
> engine-setup.0,server,nowait
> >> >> >>>> > -device
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >> >> >>>> > -chardev pty,id=charconsole0 -device
> >> >> >>>> > virtconsole,chardev=charconsole0,id=console0 -spice
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > tls-port=5904,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,
> tls-channel=default,seamless-migration=on
> >> >> >>>> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> >> >> >>>> > rng-random,id=objrng0,filename=/dev/urandom -device
> >> >> >>>> > virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg
> >> >> >>>> > timestamp=on
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/messages <==
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 systemd-machined: New machine
> >> >> >>>> > qemu-114-Cultivar.
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 systemd: Started Virtual Machine
> >> >> >>>> > qemu-114-Cultivar.
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 systemd: Starting Virtual Machine
> >> >> >>>> > qemu-114-Cultivar.
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >>>> >
> >> >> >>>> > 2018-01-12T15:27:27.651669Z qemu-kvm: -chardev
> >> >> >>>> > pty,id=charconsole0:
> >> >> >>>> > char
> >> >> >>>> > device redirected to /dev/pts/2 (label charconsole0)
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/messages <==
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kvm: 5 guests now active
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >>>> >
> >> >> >>>> > 2018-01-12 15:27:27.773+0000: shutting down, reason=failed
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/messages <==
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 libvirtd: 2018-01-12
> >> >> >>>> > 15:27:27.773+0000:
> >> >> >>>> > 1910:
> >> >> >>>> > error : virLockManagerSanlockAcquire:1041 : resource busy:
> >> >> >>>> > Failed
> >> >> >>>> > to
> >> >> >>>> > acquire
> >> >> >>>> > lock: Lease is held by another host
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >>>> >
> >> >> >>>> > 2018-01-12T15:27:27.776135Z qemu-kvm: terminating on signal 15
> >> >> >>>> > from
> >> >> >>>> > pid 1773
> >> >> >>>> > (/usr/sbin/libvirtd)
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/messages <==
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)
> >> >> >>>> > entered
> >> >> >>>> > disabled
> >> >> >>>> > state
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: device vnet4 left
> promiscuous
> >> >> >>>> > mode
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)
> >> >> >>>> > entered
> >> >> >>>> > disabled
> >> >> >>>> > state
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >> >> >>>> > [1515770847.7989]
> >> >> >>>> > device (vnet4): state change: disconnected -> unmanaged
> (reason
> >> >> >>>> > 'unmanaged')
> >> >> >>>> > [30 10 3]
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >> >> >>>> > [1515770847.7989]
> >> >> >>>> > device (vnet4): released from master device ovirtmgmt
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kvm: 4 guests now active
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 systemd-machined: Machine
> >> >> >>>> > qemu-114-Cultivar
> >> >> >>>> > terminated.
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/vdsm/vdsm.log <==
> >> >> >>>> >
> >> >> >>>> > vm/4013c829::ERROR::2018-01-12
> >> >> >>>> > 11:27:28,001::vm::914::virt.vm::(_startUnderlyingVm)
> >> >> >>>> > (vmId='4013c829-c9d7-4b72-90d5-6fe58137504c') The vm start
> >> >> >>>> > process
> >> >> >>>> > failed
> >> >> >>>> >
> >> >> >>>> > Traceback (most recent call last):
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >> >> >>>> > 843,
> >> >> >>>> > in
> >> >> >>>> > _startUnderlyingVm
> >> >> >>>> >
> >> >> >>>> > self._run()
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >> >> >>>> > 2721,
> >> >> >>>> > in
> >> >> >>>> > _run
> >> >> >>>> >
> >> >> >>>> > dom.createWithFlags(flags)
> >> >> >>>> >
> >> >> >>>> > File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
> >> >> >>>> > line
> >> >> >>>> > 126, in wrapper
> >> >> >>>> >
> >> >> >>>> > ret = f(*args, **kwargs)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line
> >> >> >>>> > 512,
> >> >> >>>> > in
> >> >> >>>> > wrapper
> >> >> >>>> >
> >> >> >>>> > return func(inst, *args, **kwargs)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib64/python2.7/site-packages/libvirt.py", line
> >> >> >>>> > 1069,
> >> >> >>>> > in
> >> >> >>>> > createWithFlags
> >> >> >>>> >
> >> >> >>>> > if ret == -1: raise libvirtError
> >> >> >>>> > ('virDomainCreateWithFlags()
> >> >> >>>> > failed',
> >> >> >>>> > dom=self)
> >> >> >>>> >
> >> >> >>>> > libvirtError: resource busy: Failed to acquire lock: Lease is
> >> >> >>>> > held
> >> >> >>>> > by
> >> >> >>>> > another host
> >> >> >>>> >
> >> >> >>>> > periodic/47::ERROR::2018-01-12
> >> >> >>>> > 11:27:32,858::periodic::215::virt.periodic.Operation::(__
> call__)
> >> >> >>>> > <vdsm.virt.sampling.VMBulkstatsMonitor object at 0x3692590>
> >> >> >>>> > operation
> >> >> >>>> > failed
> >> >> >>>> >
> >> >> >>>> > Traceback (most recent call last):
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/periodic.
> py",
> >> >> >>>> > line
> >> >> >>>> > 213,
> >> >> >>>> > in __call__
> >> >> >>>> >
> >> >> >>>> > self._func()
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.
> py",
> >> >> >>>> > line
> >> >> >>>> > 522,
> >> >> >>>> > in __call__
> >> >> >>>> >
> >> >> >>>> > self._send_metrics()
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.
> py",
> >> >> >>>> > line
> >> >> >>>> > 538,
> >> >> >>>> > in _send_metrics
> >> >> >>>> >
> >> >> >>>> > vm_sample.interval)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-
> packages/vdsm/virt/vmstats.py",
> >> >> >>>> > line
> >> >> >>>> > 45, in
> >> >> >>>> > produce
> >> >> >>>> >
> >> >> >>>> > networks(vm, stats, first_sample, last_sample, interval)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-
> packages/vdsm/virt/vmstats.py",
> >> >> >>>> > line
> >> >> >>>> > 322, in
> >> >> >>>> > networks
> >> >> >>>> >
> >> >> >>>> > if nic.name.startswith('hostdev'):
> >> >> >>>> >
> >> >> >>>> > AttributeError: name
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > On Fri, Jan 12, 2018 at 11:14 AM, Martin Sivak
> >> >> >>>> > <msivak(a)redhat.com>
> >> >> >>>> > wrote:
> >> >> >>>> >>
> >> >>>> >> Hmm, that rules out most NFS-related permission issues.
> >> >> >>>> >>
> >> >> >>>> >> So the current status is (I need to sum it up to get the full
> >> >> >>>> >> picture):
> >> >> >>>> >>
> >> >> >>>> >> - HE VM is down
> >> >> >>>> >> - HE agent fails when opening metadata using the symlink
> >> >> >>>> >> - the symlink is there
> >> >> >>>> >> - the symlink is readable by vdsm:kvm
> >> >> >>>> >>
> >> >>>> >> Hmm, can you check which user ovirt-ha-broker is started as?
> >> >> >>>> >>
> >> >> >>>> >> Martin
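
For example, something like:

# check the unit and the user the broker process actually runs as
systemctl status ovirt-ha-broker
ps aux | grep '[o]virt-ha-broker'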
> >> >> >>>> >>
> >> >> >>>> >>
> >> >> >>>> >> On Fri, Jan 12, 2018 at 4:10 PM, Jayme <jaymef(a)gmail.com>
> >> >> >>>> >> wrote:
> >> >>>> >> > The same thing happens with the data images of other VMs,
> >> >>>> >> > though, and those seem to be running OK, so I'm not sure if
> >> >>>> >> > it's the problem.
> >> >> >>>> >> >
> >> >> >>>> >> > On Fri, Jan 12, 2018 at 11:08 AM, Jayme <jaymef(a)gmail.com>
> >> >> >>>> >> > wrote:
> >> >> >>>> >> >>
> >> >> >>>> >> >> Martin,
> >> >> >>>> >> >>
> >> >>>> >> >> I can as the vdsm user but not as root. I get permission denied
> >> >>>> >> >> trying to touch one of the files as root; is that normal?
> >> >> >>>> >> >>
> >> >> >>>> >> >> On Fri, Jan 12, 2018 at 11:03 AM, Martin Sivak
> >> >> >>>> >> >> <msivak(a)redhat.com>
> >> >> >>>> >> >> wrote:
> >> >> >>>> >> >>>
> >> >>>> >> >>> Hmm, then it might be a permission issue indeed. Can you touch
> >> >>>> >> >>> the file? Open it? (Try hexdump.) Just to make sure NFS does
> >> >>>> >> >>> not prevent you from doing that.
> >> >> >>>> >> >>>
> >> >> >>>> >> >>> Martin
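
A sketch of that test, using the metadata path from earlier in the thread; if
it fails as root but works as vdsm, that usually just means the NFS export
uses root_squash rather than there being a real permission problem:

cd /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a
# as the vdsm user, which is what the broker runs as
sudo -u vdsm touch 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
sudo -u vdsm hexdump -C 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8 | head
# as root, for comparison (may be squashed to nobody by the NFS server)
hexdump -C 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8 | head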
> >> >> >>>> >> >>>
> >> >> >>>> >> >>> On Fri, Jan 12, 2018 at 3:57 PM, Jayme <jaymef(a)gmail.com
> >
> >> >> >>>> >> >>> wrote:
> >> >>>> >> >>> > Sorry, I think we got confused about the symlink. There are
> >> >>>> >> >>> > symlinks in /var/run that point to /rhev; when I was doing an
> >> >>>> >> >>> > ls it was listing the files in /rhev:
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> > 14a20941-1b84-4b82-be8f-ace38d7c037a ->
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine/248f46f0-d793-4581-9810-
> c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> > ls -al
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine/248f46f0-d793-4581-9810-
> c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >> >> >>>> >> >>> > total 2040
> >> >> >>>> >> >>> > drwxr-xr-x. 2 vdsm kvm 4096 Jan 12 10:51 .
> >> >> >>>> >> >>> > drwxr-xr-x. 8 vdsm kvm 4096 Feb 3 2016 ..
> >> >> >>>> >> >>> > -rw-rw----. 1 vdsm kvm 1028096 Jan 12 10:56
> >> >> >>>> >> >>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >> >> >>>> >> >>> > -rw-rw----. 1 vdsm kvm 1048576 Feb 3 2016
> >> >> >>>> >> >>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.lease
> >> >> >>>> >> >>> > -rw-r--r--. 1 vdsm kvm 283 Feb 3 2016
> >> >> >>>> >> >>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.meta
> >> >> >>>> >> >>> >
> >> >>>> >> >>> > Is it possible that this is the wrong image for the hosted
> >> >>>> >> >>> > engine?
> >> >>>> >> >>> >
> >> >>>> >> >>> > This is all I get in the vdsm log when running
> >> >>>> >> >>> > hosted-engine --connect-storage:
> >> >>>> >> >>> >
> >> >> >>>> >> >>> > jsonrpc/4::ERROR::2018-01-12
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> > 10:52:53,019::__init__::611::jsonrpc.JsonRpcServer::(_
> handle_request)
> >> >> >>>> >> >>> > Internal server error
> >> >> >>>> >> >>> > Traceback (most recent call last):
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.
> py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 606,
> >> >> >>>> >> >>> > in _handle_request
> >> >> >>>> >> >>> > res = method(**params)
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 201,
> >> >> >>>> >> >>> > in
> >> >> >>>> >> >>> > _dynamicMethod
> >> >> >>>> >> >>> > result = fn(*methodArgs)
> >> >> >>>> >> >>> > File "<string>", line 2, in getAllVmIoTunePolicies
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/common/api.py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 48,
> >> >> >>>> >> >>> > in
> >> >> >>>> >> >>> > method
> >> >> >>>> >> >>> > ret = func(*args, **kwargs)
> >> >> >>>> >> >>> > File "/usr/lib/python2.7/site-packages/vdsm/API.py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 1354, in
> >> >> >>>> >> >>> > getAllVmIoTunePolicies
> >> >> >>>> >> >>> > io_tune_policies_dict =
> >> >> >>>> >> >>> > self._cif.getAllVmIoTunePolicies()
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/clientIF.py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 524,
> >> >> >>>> >> >>> > in
> >> >> >>>> >> >>> > getAllVmIoTunePolicies
> >> >> >>>> >> >>> > 'current_values': v.getIoTune()}
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 3481,
> >> >> >>>> >> >>> > in
> >> >> >>>> >> >>> > getIoTune
> >> >> >>>> >> >>> > result = self.getIoTuneResponse()
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 3500,
> >> >> >>>> >> >>> > in
> >> >> >>>> >> >>> > getIoTuneResponse
> >> >> >>>> >> >>> > res = self._dom.blockIoTune(
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.
> py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 47,
> >> >> >>>> >> >>> > in __getattr__
> >> >> >>>> >> >>> > % self.vmid)
> >> >> >>>> >> >>> > NotConnectedError: VM
> >> >> >>>> >> >>> > '4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> >> >>>> >> >>> > was not
> >> >> >>>> >> >>> > defined
> >> >> >>>> >> >>> > yet or was undefined
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> > On Fri, Jan 12, 2018 at 10:48 AM, Martin Sivak
> >> >> >>>> >> >>> > <msivak(a)redhat.com>
> >> >> >>>> >> >>> > wrote:
> >> >> >>>> >> >>> >>
> >> >> >>>> >> >>> >> Hi,
> >> >> >>>> >> >>> >>
> >> >> >>>> >> >>> >> what happens when you try hosted-engine --connect-storage?
> >> >> >>>> >> >>> >> Do you see any errors in the vdsm log?
> >> >> >>>> >> >>> >>
> >> >> >>>> >> >>> >> Best regards
> >> >> >>>> >> >>> >>
> >> >> >>>> >> >>> >> Martin Sivak
> >> >> >>>> >> >>> >>
> >> >> >>>> >> >>> >> On Fri, Jan 12, 2018 at 3:41 PM, Jayme
> >> >> >>>> >> >>> >> <jaymef(a)gmail.com>
> >> >> >>>> >> >>> >> wrote:
> >> >> >>>> >> >>> >> > Ok this is what I've done:
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> > - All three hosts in global maintenance mode
> >> >> >>>> >> >>> >> > - Ran: systemctl stop ovirt-ha-broker; systemctl stop ovirt-ha-broker -- on all three hosts
> >> >> >>>> >> >>> >> > - Moved ALL files in /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
> >> >> >>>> >> >>> >> >   to /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/backup
> >> >> >>>> >> >>> >> > - Ran: systemctl start ovirt-ha-broker; systemctl start ovirt-ha-broker -- on all three hosts
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> > - Attempted to start the engine VM from HOST0 (cultivar0): hosted-engine --vm-start
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> > Lots of errors in the logs still; it appears to be having problems with that directory still:
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> > Jan 12 10:40:13 cultivar0 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR Failed to write metadata for host 1 to /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8#012Traceback (most recent call last):#012  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 202, in put_stats#012    f = os.open(path, direct_flag | os.O_WRONLY | os.O_SYNC)#012OSError: [Errno 2] No such file or directory: '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> > There are no new files or symlinks in
> >> >> >>>> >> >>> >> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> > - Jayme
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> > On Fri, Jan 12, 2018 at 10:23 AM, Martin Sivak
> >> >> >>>> >> >>> >> > <msivak(a)redhat.com>
> >> >> >>>> >> >>> >> > wrote:
> >> >> >>>> >> >>> >> >>
> >> >> >>>> >> >>> >> >> > Can you please stop all hosted engine tooling (
> >> >> >>>> >> >>> >> >>
> >> >> >>>> >> >>> >> >> On all hosts I should have added.
> >> >> >>>> >> >>> >> >>
> >> >> >>>> >> >>> >> >> Martin
> >> >> >>>> >> >>> >> >>
> >> >> >>>> >> >>> >> >> On Fri, Jan 12, 2018 at 3:22 PM, Martin Sivak
> >> >> >>>> >> >>> >> >> <msivak(a)redhat.com>
> >> >> >>>> >> >>> >> >> wrote:
> >> >> >>>> >> >>> >> >> >> RequestError: failed to read metadata: [Errno 2] No such file or directory:
> >> >> >>>> >> >>> >> >> >> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> ls -al /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >> >> >>>> >> >>> >> >> >> -rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59 /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> Is this due to the symlink problem you guys are referring to that was addressed in RC1 or something else?
> >> >> >>>> >> >>> >> >> >
> >> >> >>>> >> >>> >> >> > No, this file is the symlink. It should point to somewhere inside /rhev/.
> >> >> >>>> >> >>> >> >> > I see it is a 1G file in your case. That is really interesting.
> >> >> >>>> >> >>> >> >> >
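A quick way to confirm what Martin describes, as a sketch only (the UUID paths are the ones from the errors above; adjust to your environment):

    file /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
    readlink -f /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
    # the real metadata volume should live under the hosted_engine storage domain mount
    find /rhev/data-center/mnt -name 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8 2>/dev/null

If things are healthy this path is a symlink resolving into /rhev/data-center/mnt/..., which matches what Martin says; a regular 1 GB file here suggests the link was replaced by a plain copy.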
> >> >> >>>> >> >>> >> >> > Can you please stop all hosted engine tooling (ovirt-ha-agent,
> >> >> >>>> >> >>> >> >> > ovirt-ha-broker), move the file (metadata file is not important
> >> >> >>>> >> >>> >> >> > when services are stopped, but better safe than sorry) and restart
> >> >> >>>> >> >>> >> >> > all services again?
> >> >> >>>> >> >>> >> >> >
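Roughly, that procedure would look like the following sketch (the backup destination is arbitrary; run the mv on the host where the odd file lives):

    # on every host, with global maintenance enabled
    systemctl stop ovirt-ha-agent ovirt-ha-broker
    # move the suspicious metadata file out of the way
    mv /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8 /root/8582bdfc.metadata.bak
    # then bring the services back on every host
    systemctl start ovirt-ha-broker ovirt-ha-agent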
> >> >> >>>> >> >>> >> >> >> Could there possibly be a permissions problem somewhere?
> >> >> >>>> >> >>> >> >> >
> >> >> >>>> >> >>> >> >> > Maybe, but the file itself looks out of the ordinary. I wonder how it got there.
> >> >> >>>> >> >>> >> >> >
> >> >> >>>> >> >>> >> >> > Best regards
> >> >> >>>> >> >>> >> >> >
> >> >> >>>> >> >>> >> >> > Martin Sivak
> >> >> >>>> >> >>> >> >> >
> >> >> >>>> >> >>> >> >> > On Fri, Jan 12, 2018 at 3:09 PM, Jayme
> >> >> >>>> >> >>> >> >> > <jaymef(a)gmail.com>
> >> >> >>>> >> >>> >> >> > wrote:
> >> >> >>>> >> >>> >> >> >> Thanks for the help thus far. Storage could be related, but all other
> >> >> >>>> >> >>> >> >> >> VMs on the same storage are running ok. The storage is mounted via
> >> >> >>>> >> >>> >> >> >> NFS from within one of the three hosts; I realize this is not ideal.
> >> >> >>>> >> >>> >> >> >> This was set up by a previous admin more as a proof of concept, and
> >> >> >>>> >> >>> >> >> >> VMs were put on there that should not have been placed in a
> >> >> >>>> >> >>> >> >> >> proof-of-concept environment. It was intended to be rebuilt with
> >> >> >>>> >> >>> >> >> >> proper storage down the road.
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> So the storage is on HOST0 and the other hosts mount NFS:
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/data 4861742080 1039352832 3822389248 22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_data
> >> >> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/iso 4861742080 1039352832 3822389248 22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_iso
> >> >> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/import_export 4861742080 1039352832 3822389248 22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_import__export
> >> >> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/hosted_engine 4861742080 1039352832 3822389248 22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> Like I said, the VM data storage itself seems to be working ok, as
> >> >> >>>> >> >>> >> >> >> all other VMs appear to be running.
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> I'm curious why the broker log says this file is not found when the
> >> >> >>>> >> >>> >> >> >> path is correct and I can see the file at that path:
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> RequestError: failed to read metadata: [Errno 2] No such file or directory:
> >> >> >>>> >> >>> >> >> >> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> ls -al /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >> >> >>>> >> >>> >> >> >> -rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59 /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> Is this due to the symlink problem you guys are referring to that
> >> >> >>>> >> >>> >> >> >> was addressed in RC1, or something else? Could there possibly be a
> >> >> >>>> >> >>> >> >> >> permissions problem somewhere?
> >> >> >>>> >> >>> >> >> >>
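One simple way to rule out a plain permissions problem is to repeat the read the broker attempts, as the vdsm user (a sketch only; the path is the one from the error above):

    sudo -u vdsm dd if=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8 of=/dev/null bs=4096 count=1 iflag=direct
    # also check ownership and SELinux context along the path
    ls -ldZ /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286 /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a

If that dd read succeeds, ordinary permissions are probably not what is blocking the broker.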
> >> >> >>>> >> >>> >> >> >> Assuming that all three hosts have 4.2 rpms installed and the hosted
> >> >> >>>> >> >>> >> >> >> engine will not start, is it safe for me to update the hosts to
> >> >> >>>> >> >>> >> >> >> 4.2 RC1 rpms? Or perhaps install that repo and *only* update the
> >> >> >>>> >> >>> >> >> >> ovirt HA packages? Assuming that I cannot yet apply the same updates
> >> >> >>>> >> >>> >> >> >> to the inaccessible hosted engine VM.
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> I should also mention one more thing. I originally upgraded the
> >> >> >>>> >> >>> >> >> >> engine VM first, using the new RPMs and then engine-setup. It failed
> >> >> >>>> >> >>> >> >> >> due to not being in global maintenance, so I set global maintenance
> >> >> >>>> >> >>> >> >> >> and ran it again, which appeared to complete as intended but the
> >> >> >>>> >> >>> >> >> >> engine never came back up afterwards. Just in case this might have
> >> >> >>>> >> >>> >> >> >> anything at all to do with what could have happened.
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> Thanks very much again, I very much appreciate the help!
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> - Jayme
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >> >> On Fri, Jan 12, 2018 at 8:44 AM, Simone
> Tiraboschi
> >> >> >>>> >> >>> >> >> >> <stirabos(a)redhat.com>
> >> >> >>>> >> >>> >> >> >> wrote:
> >> >> >>>> >> >>> >> >> >>>
> >> >> >>>> >> >>> >> >> >>>
> >> >> >>>> >> >>> >> >> >>>
> >> >> >>>> >> >>> >> >> >>> On Fri, Jan 12, 2018 at 11:11 AM, Martin Sivak
> >> >> >>>> >> >>> >> >> >>> <msivak(a)redhat.com>
> >> >> >>>> >> >>> >> >> >>> wrote:
> >> >> >>>> >> >>> >> >> >>>>
> >> >> >>>> >> >>> >> >> >>>> Hi,
> >> >> >>>> >> >>> >> >> >>>>
> >> >> >>>> >> >>> >> >> >>>> the hosted engine agent issue might be fixed by restarting
> >> >> >>>> >> >>> >> >> >>>> ovirt-ha-broker or updating to newest ovirt-hosted-engine-ha and
> >> >> >>>> >> >>> >> >> >>>> -setup. We improved handling of the missing symlink.
> >> >> >>>> >> >>> >> >> >>>
> >> >> >>>> >> >>> >> >> >>>
> >> >> >>>> >> >>> >> >> >>> Available just in oVirt 4.2.1 RC1
> >> >> >>>> >> >>> >> >> >>>
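If taking the update route on the hosts, a minimal sketch (assuming the oVirt 4.2 pre-release repository is already enabled on each host; these are the stock package and service names):

    # on each host, with global maintenance still set
    yum update ovirt-hosted-engine-ha ovirt-hosted-engine-setup
    systemctl restart ovirt-ha-broker ovirt-ha-agent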
> >> >> >>>> >> >>> >> >> >>>>
> >> >> >>>> >> >>> >> >> >>>>
> >> >> >>>> >> >>> >> >> >>>> All the other issues seem to point to some storage problem I am afraid.
> >> >> >>>> >> >>> >> >> >>>>
> >> >> >>>> >> >>> >> >> >>>> You said you started the VM, do you see it in virsh -r list?
> >> >> >>>> >> >>> >> >> >>>>
> >> >> >>>> >> >>> >> >> >>>> Best regards
> >> >> >>>> >> >>> >> >> >>>>
> >> >> >>>> >> >>> >> >> >>>> Martin Sivak
> >> >> >>>> >> >>> >> >> >>>>
> >> >> >>>> >> >>> >> >> >>>> On Thu, Jan 11, 2018 at 10:00 PM, Jayme
> >> >> >>>> >> >>> >> >> >>>> <jaymef(a)gmail.com>
> >> >> >>>> >> >>> >> >> >>>> wrote:
> >> >> >>>> >> >>> >> >> >>>> > Please help, I'm really not sure what else to try at this point.
> >> >> >>>> >> >>> >> >> >>>> > Thank you for reading!
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > I'm still working on trying to get my hosted engine running after a
> >> >> >>>> >> >>> >> >> >>>> > botched upgrade to 4.2. Storage is NFS mounted from within one of
> >> >> >>>> >> >>> >> >> >>>> > the hosts. Right now I have 3 centos7 hosts that are fully updated
> >> >> >>>> >> >>> >> >> >>>> > with yum packages from ovirt 4.2; the engine was fully updated with
> >> >> >>>> >> >>> >> >> >>>> > yum packages and failed to come up after reboot. As of right now,
> >> >> >>>> >> >>> >> >> >>>> > everything should have full yum updates and all hosts have 4.2 rpms.
> >> >> >>>> >> >>> >> >> >>>> > I have global maintenance mode on right now and started hosted-engine
> >> >> >>>> >> >>> >> >> >>>> > on one of the three hosts, and the status is currently:
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > Engine status : {"reason": "failed liveliness check", "health": "bad",
> >> >> >>>> >> >>> >> >> >>>> > "vm": "up", "detail": "Up"}
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > This is what I get when trying to enter hosted-engine --console:
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > The engine VM is running on this host
> >> >> >>>> >> >>> >> >> >>>> > error: failed to get domain 'HostedEngine'
> >> >> >>>> >> >>> >> >> >>>> > error: Domain not found: no domain with matching name 'HostedEngine'
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > Here are logs from various sources when I start the VM on HOST3:
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > hosted-engine --vm-start
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > Command VM.getStats with args {'vmID': '4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:
> >> >> >>>> >> >>> >> >> >>>> > (code=1, message=Virtual machine does not exist: {'vmId': u'4013c829-c9d7-4b72-90d5-6fe58137504c'})
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 systemd-machined: New machine qemu-110-Cultivar.
> >> >> >>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 systemd: Started Virtual Machine qemu-110-Cultivar.
> >> >> >>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 systemd: Starting Virtual Machine qemu-110-Cultivar.
> >> >> >>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 kvm: 3 guests now active
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > ==> /var/log/vdsm/vdsm.log <==
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
> >> >> >>>> >> >>> >> >> >>>> >     ret = func(*args, **kwargs)
> >> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2718, in getStorageDomainInfo
> >> >> >>>> >> >>> >> >> >>>> >     dom = self.validateSdUUID(sdUUID)
> >> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 304, in validateSdUUID
> >> >> >>>> >> >>> >> >> >>>> >     sdDom.validate()
> >> >> >>>> >> >>> >> >> >>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 515, in validate
> >> >> >>>> >> >>> >> >> >>>> >     raise se.StorageDomainAccessError(self.sdUUID)
> >> >> >>>> >> >>> >> >> >>>> > StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > jsonrpc/2::ERROR::2018-01-11 16:55:16,144::dispatcher::82::storage.Dispatcher::(wrapper) FINISH getStorageDomainInfo error=Domain is either partially accessible or entirely inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice
> >> >> >>>> >> >>> >> >> >>>> > /usr/libexec/qemu-kvm -name guest=Cultivar,debug-threads=on -S
> >> >> >>>> >> >>> >> >> >>>> > -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-108-Cultivar/master-key.aes
> >> >> >>>> >> >>> >> >> >>>> > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
> >> >> >>>> >> >>> >> >> >>>> > -cpu Conroe -m 8192 -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1
> >> >> >>>> >> >>> >> >> >>>> > -uuid 4013c829-c9d7-4b72-90d5-6fe58137504c
> >> >> >>>> >> >>> >> >> >>>> > -smbios 'type=1,manufacturer=oVirt,product=oVirt Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> >> >>>> >> >>> >> >> >>>> > -no-user-config -nodefaults
> >> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-108-Cultivar/monitor.sock,server,nowait
> >> >> >>>> >> >>> >> >> >>>> > -mon chardev=charmonitor,id=monitor,mode=control
> >> >> >>>> >> >>> >> >> >>>> > -rtc base=2018-01-11T20:33:19,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> >> >> >>>> >> >>> >> >> >>>> > -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> >> >> >>>> >> >>> >> >> >>>> > -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4
> >> >> >>>> >> >>> >> >> >>>> > -drive file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
> >> >> >>>> >> >>> >> >> >>>> > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> >> >> >>>> >> >>> >> >> >>>> > -drive if=none,id=drive-ide0-1-0,readonly=on
> >> >> >>>> >> >>> >> >> >>>> > -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> >> >> >>>> >> >>> >> >> >>>> > -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=32
> >> >> >>>> >> >>> >> >> >>>> > -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
> >> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >> >> >>>> >> >>> >> >> >>>> > -chardev spicevmc,id=charchannel2,name=vdagent
> >> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> >> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
> >> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >> >> >>>> >> >>> >> >> >>>> > -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0
> >> >> >>>> >> >>> >> >> >>>> > -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
> >> >> >>>> >> >>> >> >> >>>> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2
> >> >> >>>> >> >>> >> >> >>>> > -object rng-random,id=objrng0,filename=/dev/urandom
> >> >> >>>> >> >>> >> >> >>>> > -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5
> >> >> >>>> >> >>> >> >> >>>> > -msg timestamp=on
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > 2018-01-11T20:33:19.699999Z qemu-kvm: -chardev pty,id=charconsole0: char device redirected to /dev/pts/2 (label charconsole0)
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > 2018-01-11 20:38:11.640+0000: shutting down, reason=shutdown
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > 2018-01-11 20:39:02.122+0000: starting up libvirt version: 3.2.0, package: 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>, 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version: 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: cultivar3
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice
> >> >> >>>> >> >>> >> >> >>>> > /usr/libexec/qemu-kvm -name guest=Cultivar,debug-threads=on -S
> >> >> >>>> >> >>> >> >> >>>> > -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-109-Cultivar/master-key.aes
> >> >> >>>> >> >>> >> >> >>>> > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
> >> >> >>>> >> >>> >> >> >>>> > -cpu Conroe -m 8192 -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1
> >> >> >>>> >> >>> >> >> >>>> > -uuid 4013c829-c9d7-4b72-90d5-6fe58137504c
> >> >> >>>> >> >>> >> >> >>>> > -smbios 'type=1,manufacturer=oVirt,product=oVirt Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> >> >>>> >> >>> >> >> >>>> > -no-user-config -nodefaults
> >> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-109-Cultivar/monitor.sock,server,nowait
> >> >> >>>> >> >>> >> >> >>>> > -mon chardev=charmonitor,id=monitor,mode=control
> >> >> >>>> >> >>> >> >> >>>> > -rtc base=2018-01-11T20:39:02,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> >> >> >>>> >> >>> >> >> >>>> > -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> >> >> >>>> >> >>> >> >> >>>> > -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4
> >> >> >>>> >> >>> >> >> >>>> > -drive file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
> >> >> >>>> >> >>> >> >> >>>> > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> >> >> >>>> >> >>> >> >> >>>> > -drive if=none,id=drive-ide0-1-0,readonly=on
> >> >> >>>> >> >>> >> >> >>>> > -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> >> >> >>>> >> >>> >> >> >>>> > -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=32
> >> >> >>>> >> >>> >> >> >>>> > -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
> >> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >> >> >>>> >> >>> >> >> >>>> > -chardev spicevmc,id=charchannel2,name=vdagent
> >> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> >> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
> >> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >> >> >>>> >> >>> >> >> >>>> > -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0
> >> >> >>>> >> >>> >> >> >>>> > -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
> >> >> >>>> >> >>> >> >> >>>> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2
> >> >> >>>> >> >>> >> >> >>>> > -object rng-random,id=objrng0,filename=/dev/urandom
> >> >> >>>> >> >>> >> >> >>>> > -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5
> >> >> >>>> >> >>> >> >> >>>> > -msg timestamp=on
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > 2018-01-11T20:39:02.380773Z qemu-kvm: -chardev pty,id=charconsole0: char device redirected to /dev/pts/2 (label charconsole0)
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > 2018-01-11 20:53:11.407+0000: shutting down, reason=shutdown
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > 2018-01-11 20:55:57.210+0000: starting up libvirt version: 3.2.0, package: 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>, 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version: 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: cultivar3.grove.silverorange.com
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice
> >> >> >>>> >> >>> >> >> >>>> > /usr/libexec/qemu-kvm -name guest=Cultivar,debug-threads=on -S
> >> >> >>>> >> >>> >> >> >>>> > -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-110-Cultivar/master-key.aes
> >> >> >>>> >> >>> >> >> >>>> > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
> >> >> >>>> >> >>> >> >> >>>> > -cpu Conroe -m 8192 -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1
> >> >> >>>> >> >>> >> >> >>>> > -uuid 4013c829-c9d7-4b72-90d5-6fe58137504c
> >> >> >>>> >> >>> >> >> >>>> > -smbios 'type=1,manufacturer=oVirt,product=oVirt Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> >> >>>> >> >>> >> >> >>>> > -no-user-config -nodefaults
> >> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-110-Cultivar/monitor.sock,server,nowait
> >> >> >>>> >> >>> >> >> >>>> > -mon chardev=charmonitor,id=monitor,mode=control
> >> >> >>>> >> >>> >> >> >>>> > -rtc base=2018-01-11T20:55:57,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> >> >> >>>> >> >>> >> >> >>>> > -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> >> >> >>>> >> >>> >> >> >>>> > -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4
> >> >> >>>> >> >>> >> >> >>>> > -drive file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
> >> >> >>>> >> >>> >> >> >>>> > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> >> >> >>>> >> >>> >> >> >>>> > -drive if=none,id=drive-ide0-1-0,readonly=on
> >> >> >>>> >> >>> >> >> >>>> > -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> >> >> >>>> >> >>> >> >> >>>> > -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=32
> >> >> >>>> >> >>> >> >> >>>> > -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
> >> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >> >> >>>> >> >>> >> >> >>>> > -chardev spicevmc,id=charchannel2,name=vdagent
> >> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> >> >> >>>> >> >>> >> >> >>>> > -chardev socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
> >> >> >>>> >> >>> >> >> >>>> > -device virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >> >> >>>> >> >>> >> >> >>>> > -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0
> >> >> >>>> >> >>> >> >> >>>> > -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
> >> >> >>>> >> >>> >> >> >>>> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2
> >> >> >>>> >> >>> >> >> >>>> > -object rng-random,id=objrng0,filename=/dev/urandom
> >> >> >>>> >> >>> >> >> >>>> > -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5
> >> >> >>>> >> >>> >> >> >>>> > -msg timestamp=on
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > 2018-01-11T20:55:57.468037Z qemu-kvm: -chardev pty,id=charconsole0: char device redirected to /dev/pts/2 (label charconsole0)
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > ==> /var/log/ovirt-hosted-engine-
> ha/broker.log
> >> >> >>>> >> >>> >> >> >>>> > <==
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > File
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> >> >> >>>> >> >>> >> >> >>>> > line 151, in get_raw_stats
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > f = os.open(path, direct_flag |
> >> >> >>>> >> >>> >> >> >>>> > os.O_RDONLY |
> >> >> >>>> >> >>> >> >> >>>> > os.O_SYNC)
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > OSError: [Errno 2] No such file or
> directory:
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > '/var/run/vdsm/storage/
> 248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
> 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > StatusStorageThread::ERROR::2018-01-11
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > 16:55:15,761::status_broker::
> 92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run)
> >> >> >>>> >> >>> >> >> >>>> > Failed to read state.
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > Traceback (most recent call last):
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > File
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/broker/status_broker.py",
> >> >> >>>> >> >>> >> >> >>>> > line 88, in run
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > self._storage_broker.get_raw_stats()
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > File
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> >> >> >>>> >> >>> >> >> >>>> > line 162, in get_raw_stats
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > .format(str(e)))
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > RequestError: failed to read metadata:
> [Errno
> >> >> >>>> >> >>> >> >> >>>> > 2]
> >> >> >>>> >> >>> >> >> >>>> > No
> >> >> >>>> >> >>> >> >> >>>> > such
> >> >> >>>> >> >>> >> >> >>>> > file or
> >> >> >>>> >> >>> >> >> >>>> > directory:
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > '/var/run/vdsm/storage/
> 248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
> 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > ==> /var/log/ovirt-hosted-engine-
> ha/agent.log
> >> >> >>>> >> >>> >> >> >>>> > <==
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > result = refresh_method()
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > File
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/env/config.py",
> >> >> >>>> >> >>> >> >> >>>> > line 519, in refresh_vm_conf
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > content =
> >> >> >>>> >> >>> >> >> >>>> > self._get_file_content_from_
> shared_storage(VM)
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > File
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/env/config.py",
> >> >> >>>> >> >>> >> >> >>>> > line 484, in
> >> >> >>>> >> >>> >> >> >>>> > _get_file_content_from_shared_storage
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > config_volume_path =
> >> >> >>>> >> >>> >> >> >>>> > self._get_config_volume_path()
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > File
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/env/config.py",
> >> >> >>>> >> >>> >> >> >>>> > line 188, in _get_config_volume_path
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > conf_vol_uuid
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > File
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/lib/heconflib.py",
> >> >> >>>> >> >>> >> >> >>>> > line 358, in get_volume_path
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > root=envconst.SD_RUN_DIR,
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > RuntimeError: Path to volume
> >> >> >>>> >> >>> >> >> >>>> > 4838749f-216d-406b-b245-98d0343fcf7f
> >> >> >>>> >> >>> >> >> >>>> > not
> >> >> >>>> >> >>> >> >> >>>> > found
> >> >> >>>> >> >>> >> >> >>>> > in /run/vdsm/storag
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > ==> /var/log/vdsm/vdsm.log <==
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > periodic/42::ERROR::2018-01-11
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > 16:56:11,446::vmstats::260::
> virt.vmstats::(send_metrics)
> >> >> >>>> >> >>> >> >> >>>> > VM
> >> >> >>>> >> >>> >> >> >>>> > metrics
> >> >> >>>> >> >>> >> >> >>>> > collection failed
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > Traceback (most recent call last):
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > File
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/vdsm/virt/vmstats.py",
> >> >> >>>> >> >>> >> >> >>>> > line
> >> >> >>>> >> >>> >> >> >>>> > 197, in
> >> >> >>>> >> >>> >> >> >>>> > send_metrics
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > data[prefix + '.cpu.usage'] =
> >> >> >>>> >> >>> >> >> >>>> > stat['cpuUsage']
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > KeyError: 'cpuUsage'
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> > ______________________________
> _________________
> >> >> >>>> >> >>> >> >> >>>> > Users mailing list
> >> >> >>>> >> >>> >> >> >>>> > Users(a)ovirt.org
> >> >> >>>> >> >>> >> >> >>>> > http://lists.ovirt.org/
> mailman/listinfo/users
> >> >> >>>> >> >>> >> >> >>>> >
> >> >> >>>> >> >>> >> >> >>>> ______________________________
> _________________
> >> >> >>>> >> >>> >> >> >>>> Users mailing list
> >> >> >>>> >> >>> >> >> >>>> Users(a)ovirt.org
> >> >> >>>> >> >>> >> >> >>>> http://lists.ovirt.org/mailman/listinfo/users
> >> >> >>>> >> >>> >> >> >>>
> >> >> >>>> >> >>> >> >> >>>
> >> >> >>>> >> >>> >> >> >>
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>
> >> >> >>>> >> >>
> >> >> >>>> >> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>
> >> >> >>>
> >> >> >>
> >> >> >
> >> >
> >> >
> >
> >
>
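Aside, for readers skimming the tracebacks quoted above: the broker failure is simply an open() of the metadata file under /var/run/vdsm/storage returning "No such file or directory". A quick way to reproduce the same check by hand, using the path taken verbatim from the log (dd with iflag=direct is used here to mimic the broker's direct-I/O read; that flag choice is an assumption based on the direct_flag in the traceback):
# ls -l /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
# dd if=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8 of=/dev/null bs=4K count=1 iflag=direct
If the ls already fails on the directory, the per-domain link tree under /var/run/vdsm/storage was never (re)created on that host; if the ls works but the dd fails, the link exists but points at a file that is no longer reachable on the storage side.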
Re: [ovirt-users] unable to bring up hosted engine after botched 4.2 upgrade
by Jayme
The lock space issue was something I needed to clear, but I don't believe it
has resolved the problem. I shut down the agent and broker on all hosts and
disconnected the hosted-engine storage, then enabled the broker/agent on just
one host and reconnected the storage. When I started the VM I barely saw any
errors in the logs, which was good to see; however, the VM is still not
running:
HOST3:
Engine status : {"reason": "failed liveliness check",
"health": "bad", "vm": "up", "detail": "Up"}
==> /var/log/messages <==
Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered disabled
state
Jan 12 12:42:57 cultivar3 kernel: device vnet0 entered promiscuous mode
Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered blocking
state
Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered
forwarding state
Jan 12 12:42:57 cultivar3 lldpad: recvfrom(Event interface): No buffer
space available
Jan 12 12:42:57 cultivar3 systemd-machined: New machine qemu-111-Cultivar.
Jan 12 12:42:57 cultivar3 systemd: Started Virtual Machine
qemu-111-Cultivar.
Jan 12 12:42:57 cultivar3 systemd: Starting Virtual Machine
qemu-111-Cultivar.
Jan 12 12:42:57 cultivar3 kvm: 3 guests now active
Jan 12 12:44:38 cultivar3 libvirtd: 2018-01-12 16:44:38.737+0000: 1535:
error : qemuDomainAgentAvailable:6010 : Guest agent is not responding: QEMU
guest agent is not connected
Interestingly, though, I'm now seeing this in the logs, which may be a new
clue:
==> /var/log/vdsm/vdsm.log <==
File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 126,
in findDomain
return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 116,
in findDomainPath
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'248f46f0-d793-4581-9810-c9d965e2f286',)
jsonrpc/4::ERROR::2018-01-12
12:40:30,380::dispatcher::82::storage.Dispatcher::(wrapper) FINISH
getStorageDomainInfo error=Storage domain does not exist:
(u'248f46f0-d793-4581-9810-c9d965e2f286',)
periodic/42::ERROR::2018-01-12 12:40:35,430::api::196::root::(_getHaInfo)
failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
directory'Is the Hosted Engine setup finished?
periodic/43::ERROR::2018-01-12 12:40:50,473::api::196::root::(_getHaInfo)
failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
directory'Is the Hosted Engine setup finished?
periodic/40::ERROR::2018-01-12 12:41:05,519::api::196::root::(_getHaInfo)
failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
directory'Is the Hosted Engine setup finished?
periodic/43::ERROR::2018-01-12 12:41:20,566::api::196::root::(_getHaInfo)
failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
directory'Is the Hosted Engine setup finished?
==> /var/log/ovirt-hosted-engine-ha/broker.log <==
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
line 151, in get_raw_stats
f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
OSError: [Errno 2] No such file or directory:
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
StatusStorageThread::ERROR::2018-01-12
12:32:06,049::status_broker::92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run)
Failed to read state.
Traceback (most recent call last):
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py",
line 88, in run
self._storage_broker.get_raw_stats()
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
line 162, in get_raw_stats
.format(str(e)))
RequestError: failed to read metadata: [Errno 2] No such file or directory:
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
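Given the StorageDomainDoesNotExist error and the missing metadata path above, it may be worth checking whether the real files are still present on the NFS export and whether reconnecting the storage recreates the /var/run/vdsm/storage links. A rough sketch using the paths already quoted in this thread (substitute your own storage-domain and image UUIDs; treat the sequence as a sanity check, not a verified fix):
# ls -l /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine/248f46f0-d793-4581-9810-c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a/
# hosted-engine --connect-storage
# ls -l /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/
# sanlock client status
The first ls checks the files on the export itself; hosted-engine --connect-storage (the same command Martin asks about further down) should make vdsm prepare the images again, and the second ls shows whether the 14a20941-... link reappears under the run directory. sanlock client status, also suggested below, shows which host still holds the hosted-engine lease if the "Lease is held by another host" error comes back.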
On Fri, Jan 12, 2018 at 12:02 PM, Martin Sivak <msivak(a)redhat.com> wrote:
> The lock is the issue.
>
> - try running sanlock client status on all hosts
> - also make sure you do not have some forgotten host still connected
> to the lockspace, but without ha daemons running (and with the VM)
>
> I need to go to our president election now, I might check the email
> later tonight.
>
> Martin
>
> On Fri, Jan 12, 2018 at 4:59 PM, Jayme <jaymef(a)gmail.com> wrote:
> > Here are the newest logs from me trying to start hosted vm:
> >
> > ==> /var/log/messages <==
> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> blocking
> > state
> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> disabled
> > state
> > Jan 12 11:58:14 cultivar0 kernel: device vnet4 entered promiscuous mode
> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> blocking
> > state
> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> > forwarding state
> > Jan 12 11:58:14 cultivar0 lldpad: recvfrom(Event interface): No buffer
> space
> > available
> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info> [1515772694.8715]
> > manager: (vnet4): new Tun device
> > (/org/freedesktop/NetworkManager/Devices/140)
> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info> [1515772694.8795]
> > device (vnet4): state change: unmanaged -> unavailable (reason
> > 'connection-assumed') [10 20 41]
> >
> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> > 2018-01-12 15:58:14.879+0000: starting up libvirt version: 3.2.0,
> package:
> > 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> > cultivar0.grove.silverorange.com
> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> > guest=Cultivar,debug-threads=on -S -object
> > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-119-Cultivar/master-key.aes
> > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> > Conroe -m 8192 -realtime mlock=off -smp
> > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> > 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> > 'type=1,manufacturer=oVirt,product=oVirt
> > Node,version=7-4.1708.el7.centos,serial=44454C4C-3300-
> 1042-8031-B4C04F4B4831,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> > -no-user-config -nodefaults -chardev
> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> 119-Cultivar/monitor.sock,server,nowait
> > -mon chardev=charmonitor,id=monitor,mode=control -rtc
> > base=2018-01-12T15:58:14,driftfix=slew -global
> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> -device
> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> > file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-
> fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,
> serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,
> werror=stop,rerror=stop,aio=threads
> > -device
> > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1
> > -drive if=none,id=drive-ide0-1-0,readonly=on -device
> > ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> > tap,fd=35,id=hostnet0,vhost=on,vhostfd=38 -device
> > virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:
> 7f:d6:83,bus=pci.0,addr=0x3
> > -chardev
> > socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> > -device
> > virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> > -chardev
> > socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> > -device
> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> > -chardev spicevmc,id=charchannel2,name=vdagent -device
> > virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> > -chardev
> > socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-
> engine-setup.0,server,nowait
> > -device
> > virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> > -chardev pty,id=charconsole0 -device
> > virtconsole,chardev=charconsole0,id=console0 -spice
> > tls-port=5904,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,
> tls-channel=default,seamless-migration=on
> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> > rng-random,id=objrng0,filename=/dev/urandom -device
> > virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
> >
> > ==> /var/log/messages <==
> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info> [1515772694.8807]
> > device (vnet4): state change: unavailable -> disconnected (reason 'none')
> > [20 30 0]
> > Jan 12 11:58:14 cultivar0 systemd-machined: New machine
> qemu-119-Cultivar.
> > Jan 12 11:58:14 cultivar0 systemd: Started Virtual Machine
> > qemu-119-Cultivar.
> > Jan 12 11:58:14 cultivar0 systemd: Starting Virtual Machine
> > qemu-119-Cultivar.
> >
> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> > 2018-01-12T15:58:15.094002Z qemu-kvm: -chardev pty,id=charconsole0: char
> > device redirected to /dev/pts/1 (label charconsole0)
> >
> > ==> /var/log/messages <==
> > Jan 12 11:58:15 cultivar0 kvm: 5 guests now active
> >
> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> > 2018-01-12 15:58:15.217+0000: shutting down, reason=failed
> >
> > ==> /var/log/messages <==
> > Jan 12 11:58:15 cultivar0 libvirtd: 2018-01-12 15:58:15.217+0000: 1908:
> > error : virLockManagerSanlockAcquire:1041 : resource busy: Failed to
> acquire
> > lock: Lease is held by another host
> >
> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> > 2018-01-12T15:58:15.219934Z qemu-kvm: terminating on signal 15 from pid
> 1773
> > (/usr/sbin/libvirtd)
> >
> > ==> /var/log/messages <==
> > Jan 12 11:58:15 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> disabled
> > state
> > Jan 12 11:58:15 cultivar0 kernel: device vnet4 left promiscuous mode
> > Jan 12 11:58:15 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> disabled
> > state
> > Jan 12 11:58:15 cultivar0 NetworkManager[1092]: <info> [1515772695.2348]
> > device (vnet4): state change: disconnected -> unmanaged (reason
> 'unmanaged')
> > [30 10 3]
> > Jan 12 11:58:15 cultivar0 NetworkManager[1092]: <info> [1515772695.2349]
> > device (vnet4): released from master device ovirtmgmt
> > Jan 12 11:58:15 cultivar0 kvm: 4 guests now active
> > Jan 12 11:58:15 cultivar0 systemd-machined: Machine qemu-119-Cultivar
> > terminated.
> >
> > ==> /var/log/vdsm/vdsm.log <==
> > vm/4013c829::ERROR::2018-01-12
> > 11:58:15,444::vm::914::virt.vm::(_startUnderlyingVm)
> > (vmId='4013c829-c9d7-4b72-90d5-6fe58137504c') The vm start process
> failed
> > Traceback (most recent call last):
> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 843, in
> > _startUnderlyingVm
> > self._run()
> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2721, in
> > _run
> > dom.createWithFlags(flags)
> > File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
> line
> > 126, in wrapper
> > ret = f(*args, **kwargs)
> > File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 512, in
> > wrapper
> > return func(inst, *args, **kwargs)
> > File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in
> > createWithFlags
> > if ret == -1: raise libvirtError ('virDomainCreateWithFlags()
> failed',
> > dom=self)
> > libvirtError: resource busy: Failed to acquire lock: Lease is held by
> > another host
> > jsonrpc/6::ERROR::2018-01-12
> > 11:58:16,421::__init__::611::jsonrpc.JsonRpcServer::(_handle_request)
> > Internal server error
> > Traceback (most recent call last):
> > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 606,
> > in _handle_request
> > res = method(**params)
> > File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201,
> in
> > _dynamicMethod
> > result = fn(*methodArgs)
> > File "<string>", line 2, in getAllVmIoTunePolicies
> > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48,
> in
> > method
> > ret = func(*args, **kwargs)
> > File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354, in
> > getAllVmIoTunePolicies
> > io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
> > File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 524, in
> > getAllVmIoTunePolicies
> > 'current_values': v.getIoTune()}
> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3481, in
> > getIoTune
> > result = self.getIoTuneResponse()
> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3500, in
> > getIoTuneResponse
> > res = self._dom.blockIoTune(
> > File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 47,
> > in __getattr__
> > % self.vmid)
> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58137504c' was not
> defined
> > yet or was undefined
> >
> > ==> /var/log/messages <==
> > Jan 12 11:58:16 cultivar0 journal: vdsm jsonrpc.JsonRpcServer ERROR
> Internal
> > server error#012Traceback (most recent call last):#012 File
> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in
> > _handle_request#012 res = method(**params)#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, in
> > _dynamicMethod#012 result = fn(*methodArgs)#012 File "<string>",
> line 2,
> > in getAllVmIoTunePolicies#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> > method#012 ret = func(*args, **kwargs)#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354, in
> > getAllVmIoTunePolicies#012 io_tune_policies_dict =
> > self._cif.getAllVmIoTunePolicies()#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 524, in
> > getAllVmIoTunePolicies#012 'current_values': v.getIoTune()}#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3481, in
> > getIoTune#012 result = self.getIoTuneResponse()#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3500, in
> > getIoTuneResponse#012 res = self._dom.blockIoTune(#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in
> > __getattr__#012 % self.vmid)#012NotConnectedError: VM
> > '4013c829-c9d7-4b72-90d5-6fe58137504c' was not defined yet or was
> undefined
> >
> > On Fri, Jan 12, 2018 at 11:55 AM, Jayme <jaymef(a)gmail.com> wrote:
> >>
> >> One other tidbit I noticed is that it seems like there are less errors
> if
> >> I started in paused mode:
> >>
> >> but still shows: Engine status : {"reason": "bad vm
> >> status", "health": "bad", "vm": "up", "detail": "Paused"}
> >>
> >> ==> /var/log/messages <==
> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> blocking state
> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> disabled state
> >> Jan 12 11:55:05 cultivar0 kernel: device vnet4 entered promiscuous mode
> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> blocking state
> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> forwarding state
> >> Jan 12 11:55:05 cultivar0 lldpad: recvfrom(Event interface): No buffer
> >> space available
> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
> [1515772505.3625]
> >> manager: (vnet4): new Tun device
> >> (/org/freedesktop/NetworkManager/Devices/139)
> >>
> >> ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> 2018-01-12 15:55:05.370+0000: starting up libvirt version: 3.2.0,
> package:
> >> 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> >> 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
> >> 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> >> cultivar0.grove.silverorange.com
> >> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> >> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >> guest=Cultivar,debug-threads=on -S -object
> >> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-118-Cultivar/master-key.aes
> >> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> >> Conroe -m 8192 -realtime mlock=off -smp
> >> 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >> 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >> 'type=1,manufacturer=oVirt,product=oVirt
> >> Node,version=7-4.1708.el7.centos,serial=44454C4C-3300-
> 1042-8031-B4C04F4B4831,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> -no-user-config -nodefaults -chardev
> >> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> 118-Cultivar/monitor.sock,server,nowait
> >> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >> base=2018-01-12T15:55:05,driftfix=slew -global
> >> kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> -device
> >> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> >> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> >> file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-
> fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,
> serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,
> werror=stop,rerror=stop,aio=threads
> >> -device
> >> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1
> >> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> >> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> >> tap,fd=35,id=hostnet0,vhost=on,vhostfd=38 -device
> >> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:
> 7f:d6:83,bus=pci.0,addr=0x3
> >> -chardev
> >> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >> -device
> >> virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >> -chardev
> >> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >> -device
> >> virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >> -chardev spicevmc,id=charchannel2,name=vdagent -device
> >> virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >> -chardev
> >> socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-
> engine-setup.0,server,nowait
> >> -device
> >> virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >> -chardev pty,id=charconsole0 -device
> >> virtconsole,chardev=charconsole0,id=console0 -spice
> >> tls-port=5904,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,
> tls-channel=default,seamless-migration=on
> >> -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> >> rng-random,id=objrng0,filename=/dev/urandom -device
> >> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
> >>
> >> ==> /var/log/messages <==
> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
> [1515772505.3689]
> >> device (vnet4): state change: unmanaged -> unavailable (reason
> >> 'connection-assumed') [10 20 41]
> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
> [1515772505.3702]
> >> device (vnet4): state change: unavailable -> disconnected (reason
> 'none')
> >> [20 30 0]
> >> Jan 12 11:55:05 cultivar0 systemd-machined: New machine
> qemu-118-Cultivar.
> >> Jan 12 11:55:05 cultivar0 systemd: Started Virtual Machine
> >> qemu-118-Cultivar.
> >> Jan 12 11:55:05 cultivar0 systemd: Starting Virtual Machine
> >> qemu-118-Cultivar.
> >>
> >> ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> 2018-01-12T15:55:05.586827Z qemu-kvm: -chardev pty,id=charconsole0: char
> >> device redirected to /dev/pts/1 (label charconsole0)
> >>
> >> ==> /var/log/messages <==
> >> Jan 12 11:55:05 cultivar0 kvm: 5 guests now active
> >>
> >> On Fri, Jan 12, 2018 at 11:36 AM, Jayme <jaymef(a)gmail.com> wrote:
> >>>
> >>> Yeah I am in global maintenance:
> >>>
> >>> state=GlobalMaintenance
> >>>
> >>> host0: {"reason": "vm not running on this host", "health": "bad",
> "vm":
> >>> "down", "detail": "unknown"}
> >>> host2: {"reason": "vm not running on this host", "health": "bad", "vm":
> >>> "down", "detail": "unknown"}
> >>> host3: {"reason": "vm not running on this host", "health": "bad", "vm":
> >>> "down", "detail": "unknown"}
> >>>
> >>> I understand the lock is an issue, I'll try to make sure it is fully
> >>> stopped on all three before starting but I don't think that is the
> issue at
> >>> hand either. What concerns me is mostly that it seems to be unable
> to read
> >>> the meta data, I think that might be the heart of the problem but I'm
> not
> >>> sure what is causing it.
> >>>
> >>> On Fri, Jan 12, 2018 at 11:33 AM, Martin Sivak <msivak(a)redhat.com>
> wrote:
> >>>>
> >>>> > On all three hosts I ran hosted-engine --vm-shutdown; hosted-engine
> >>>> > --vm-poweroff
> >>>>
> >>>> Are you in global maintenance? I think you were in one of the previous
> >>>> emails, but worth checking.
> >>>>
> >>>> > I started ovirt-ha-broker with systemctl as root user but it does
> >>>> > appear to be running under vdsm:
> >>>>
> >>>> That is the correct behavior.
> >>>>
> >>>> > libvirtError: resource busy: Failed to acquire lock: Lease is held
> by
> >>>> > another host
> >>>>
> >>>> sanlock seems to think the VM runs somewhere and it is possible that
> >>>> some other host tried to start the VM as well unless you are in global
> >>>> maintenance (that is why I asked the first question here).
> >>>>
> >>>> Martin
> >>>>
> >>>> On Fri, Jan 12, 2018 at 4:28 PM, Jayme <jaymef(a)gmail.com> wrote:
> >>>> > Martin,
> >>>> >
> >>>> > Thanks so much for keeping with me, this is driving me crazy! I
> >>>> > really do
> >>>> > appreciate it, thanks again
> >>>> >
> >>>> > Let's go through this:
> >>>> >
> >>>> > HE VM is down - YES
> >>>> >
> >>>> >
> >>>> > HE agent fails when opening metadata using the symlink - YES
> >>>> >
> >>>> >
> >>>> > the symlink is there and readable by vdsm:kvm - it appears to be:
> >>>> >
> >>>> >
> >>>> > lrwxrwxrwx. 1 vdsm kvm 159 Jan 10 21:20
> >>>> > 14a20941-1b84-4b82-be8f-ace38d7c037a
> >>>> > ->
> >>>> >
> >>>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine/248f46f0-d793-4581-9810-
> c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >>>> >
> >>>> >
> >>>> > And the files in the linked directory exist and have vdsm:kvm perms
> as
> >>>> > well:
> >>>> >
> >>>> >
> >>>> > # cd
> >>>> >
> >>>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine/248f46f0-d793-4581-9810-
> c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >>>> >
> >>>> > [root@cultivar0 14a20941-1b84-4b82-be8f-ace38d7c037a]# ls -al
> >>>> >
> >>>> > total 2040
> >>>> >
> >>>> > drwxr-xr-x. 2 vdsm kvm 4096 Jan 12 10:51 .
> >>>> >
> >>>> > drwxr-xr-x. 8 vdsm kvm 4096 Feb 3 2016 ..
> >>>> >
> >>>> > -rw-rw----. 1 vdsm kvm 1028096 Jan 12 11:19
> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >>>> >
> >>>> > -rw-rw----. 1 vdsm kvm 1048576 Feb 3 2016
> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.lease
> >>>> >
> >>>> > -rw-r--r--. 1 vdsm kvm 283 Feb 3 2016
> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.meta
> >>>> >
> >>>> >
> >>>> > I started ovirt-ha-broker with systemctl as root user but it does
> >>>> > appear to
> >>>> > be running under vdsm:
> >>>> >
> >>>> >
> >>>> > vdsm 16928 0.6 0.0 1618244 43328 ? Ssl 10:33 0:18
> >>>> > /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker
> >>>> >
> >>>> >
> >>>> >
> >>>> > Here is something I tried:
> >>>> >
> >>>> >
> >>>> > - On all three hosts I ran hosted-engine --vm-shutdown;
> hosted-engine
> >>>> > --vm-poweroff
> >>>> >
> >>>> > - On HOST0 (cultivar0) I disconnected and reconnected storage using
> >>>> > hosted-engine
> >>>> >
> >>>> > - Tried starting up the hosted VM on cultivar0 while tailing the
> logs:
> >>>> >
> >>>> >
> >>>> > # hosted-engine --vm-start
> >>>> >
> >>>> > VM exists and is down, cleaning up and restarting
> >>>> >
> >>>> >
> >>>> >
> >>>> > ==> /var/log/vdsm/vdsm.log <==
> >>>> >
> >>>> > jsonrpc/2::ERROR::2018-01-12
> >>>> > 11:27:27,194::vm::1766::virt.vm::(_getRunningVmStats)
> >>>> > (vmId='4013c829-c9d7-4b72-90d5-6fe58137504c') Error fetching vm
> stats
> >>>> >
> >>>> > Traceback (most recent call last):
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 1762,
> >>>> > in
> >>>> > _getRunningVmStats
> >>>> >
> >>>> > vm_sample.interval)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py",
> line
> >>>> > 45, in
> >>>> > produce
> >>>> >
> >>>> > networks(vm, stats, first_sample, last_sample, interval)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py",
> line
> >>>> > 322, in
> >>>> > networks
> >>>> >
> >>>> > if nic.name.startswith('hostdev'):
> >>>> >
> >>>> > AttributeError: name
> >>>> >
> >>>> > jsonrpc/3::ERROR::2018-01-12
> >>>> > 11:27:27,221::__init__::611::jsonrpc.JsonRpcServer::(_
> handle_request)
> >>>> > Internal server error
> >>>> >
> >>>> > Traceback (most recent call last):
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
> line
> >>>> > 606,
> >>>> > in _handle_request
> >>>> >
> >>>> > res = method(**params)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line
> >>>> > 201, in
> >>>> > _dynamicMethod
> >>>> >
> >>>> > result = fn(*methodArgs)
> >>>> >
> >>>> > File "<string>", line 2, in getAllVmIoTunePolicies
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line
> 48,
> >>>> > in
> >>>> > method
> >>>> >
> >>>> > ret = func(*args, **kwargs)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354,
> in
> >>>> > getAllVmIoTunePolicies
> >>>> >
> >>>> > io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line
> 524,
> >>>> > in
> >>>> > getAllVmIoTunePolicies
> >>>> >
> >>>> > 'current_values': v.getIoTune()}
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 3481,
> >>>> > in
> >>>> > getIoTune
> >>>> >
> >>>> > result = self.getIoTuneResponse()
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 3500,
> >>>> > in
> >>>> > getIoTuneResponse
> >>>> >
> >>>> > res = self._dom.blockIoTune(
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py",
> line
> >>>> > 47,
> >>>> > in __getattr__
> >>>> >
> >>>> > % self.vmid)
> >>>> >
> >>>> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58137504c' was
> not
> >>>> > defined
> >>>> > yet or was undefined
> >>>> >
> >>>> >
> >>>> > ==> /var/log/messages <==
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 journal: vdsm jsonrpc.JsonRpcServer ERROR
> >>>> > Internal
> >>>> > server error#012Traceback (most recent call last):#012 File
> >>>> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 606, in
> >>>> > _handle_request#012 res = method(**params)#012 File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, in
> >>>> > _dynamicMethod#012 result = fn(*methodArgs)#012 File "<string>",
> >>>> > line 2,
> >>>> > in getAllVmIoTunePolicies#012 File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> >>>> > method#012 ret = func(*args, **kwargs)#012 File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354, in
> >>>> > getAllVmIoTunePolicies#012 io_tune_policies_dict =
> >>>> > self._cif.getAllVmIoTunePolicies()#012 File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 524, in
> >>>> > getAllVmIoTunePolicies#012 'current_values': v.getIoTune()}#012
> >>>> > File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3481, in
> >>>> > getIoTune#012 result = self.getIoTuneResponse()#012 File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3500, in
> >>>> > getIoTuneResponse#012 res = self._dom.blockIoTune(#012 File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 47, in
> >>>> > __getattr__#012 % self.vmid)#012NotConnectedError: VM
> >>>> > '4013c829-c9d7-4b72-90d5-6fe58137504c' was not defined yet or was
> >>>> > undefined
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >>>> > blocking
> >>>> > state
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >>>> > disabled
> >>>> > state
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: device vnet4 entered promiscuous
> >>>> > mode
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >>>> > blocking
> >>>> > state
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >>>> > forwarding state
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 lldpad: recvfrom(Event interface): No
> buffer
> >>>> > space
> >>>> > available
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >>>> > [1515770847.4264]
> >>>> > manager: (vnet4): new Tun device
> >>>> > (/org/freedesktop/NetworkManager/Devices/135)
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >>>> > [1515770847.4342]
> >>>> > device (vnet4): state change: unmanaged -> unavailable (reason
> >>>> > 'connection-assumed') [10 20 41]
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >>>> > [1515770847.4353]
> >>>> > device (vnet4): state change: unavailable -> disconnected (reason
> >>>> > 'none')
> >>>> > [20 30 0]
> >>>> >
> >>>> >
> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >>>> >
> >>>> > 2018-01-12 15:27:27.435+0000: starting up libvirt version: 3.2.0,
> >>>> > package:
> >>>> > 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> >>>> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
> >>>> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> >>>> > cultivar0.grove.silverorange.com
> >>>> >
> >>>> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> >>>> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >>>> > guest=Cultivar,debug-threads=on -S -object
> >>>> >
> >>>> > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-114-Cultivar/master-key.aes
> >>>> > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
> >>>> > -cpu
> >>>> > Conroe -m 8192 -realtime mlock=off -smp
> >>>> > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >>>> > 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >>>> > 'type=1,manufacturer=oVirt,product=oVirt
> >>>> >
> >>>> > Node,version=7-4.1708.el7.centos,serial=44454C4C-3300-
> 1042-8031-B4C04F4B4831,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >>>> > -no-user-config -nodefaults -chardev
> >>>> >
> >>>> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> 114-Cultivar/monitor.sock,server,nowait
> >>>> > -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >>>> > base=2018-01-12T15:27:27,driftfix=slew -global
> >>>> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> >>>> > -device
> >>>> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> >>>> > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> >>>> >
> >>>> > file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-
> fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,
> serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,
> werror=stop,rerror=stop,aio=threads
> >>>> > -device
> >>>> >
> >>>> > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1
> >>>> > -drive if=none,id=drive-ide0-1-0,readonly=on -device
> >>>> > ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> >>>> > tap,fd=35,id=hostnet0,vhost=on,vhostfd=38 -device
> >>>> >
> >>>> > virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:
> 7f:d6:83,bus=pci.0,addr=0x3
> >>>> > -chardev
> >>>> >
> >>>> > socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >>>> > -device
> >>>> >
> >>>> > virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >>>> > -chardev
> >>>> >
> >>>> > socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >>>> > -device
> >>>> >
> >>>> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >>>> > -chardev spicevmc,id=charchannel2,name=vdagent -device
> >>>> >
> >>>> > virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >>>> > -chardev
> >>>> >
> >>>> > socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-
> engine-setup.0,server,nowait
> >>>> > -device
> >>>> >
> >>>> > virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >>>> > -chardev pty,id=charconsole0 -device
> >>>> > virtconsole,chardev=charconsole0,id=console0 -spice
> >>>> >
> >>>> > tls-port=5904,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,
> tls-channel=default,seamless-migration=on
> >>>> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> >>>> > rng-random,id=objrng0,filename=/dev/urandom -device
> >>>> > virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg
> >>>> > timestamp=on
> >>>> >
> >>>> >
> >>>> > ==> /var/log/messages <==
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 systemd-machined: New machine
> >>>> > qemu-114-Cultivar.
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 systemd: Started Virtual Machine
> >>>> > qemu-114-Cultivar.
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 systemd: Starting Virtual Machine
> >>>> > qemu-114-Cultivar.
> >>>> >
> >>>> >
> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >>>> >
> >>>> > 2018-01-12T15:27:27.651669Z qemu-kvm: -chardev pty,id=charconsole0:
> >>>> > char
> >>>> > device redirected to /dev/pts/2 (label charconsole0)
> >>>> >
> >>>> >
> >>>> > ==> /var/log/messages <==
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kvm: 5 guests now active
> >>>> >
> >>>> >
> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >>>> >
> >>>> > 2018-01-12 15:27:27.773+0000: shutting down, reason=failed
> >>>> >
> >>>> >
> >>>> > ==> /var/log/messages <==
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 libvirtd: 2018-01-12 15:27:27.773+0000:
> >>>> > 1910:
> >>>> > error : virLockManagerSanlockAcquire:1041 : resource busy: Failed
> to
> >>>> > acquire
> >>>> > lock: Lease is held by another host
> >>>> >
> >>>> >
> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >>>> >
> >>>> > 2018-01-12T15:27:27.776135Z qemu-kvm: terminating on signal 15 from
> >>>> > pid 1773
> >>>> > (/usr/sbin/libvirtd)
> >>>> >
> >>>> >
> >>>> > ==> /var/log/messages <==
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >>>> > disabled
> >>>> > state
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: device vnet4 left promiscuous mode
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >>>> > disabled
> >>>> > state
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >>>> > [1515770847.7989]
> >>>> > device (vnet4): state change: disconnected -> unmanaged (reason
> >>>> > 'unmanaged')
> >>>> > [30 10 3]
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >>>> > [1515770847.7989]
> >>>> > device (vnet4): released from master device ovirtmgmt
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kvm: 4 guests now active
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 systemd-machined: Machine
> qemu-114-Cultivar
> >>>> > terminated.
> >>>> >
> >>>> >
> >>>> > ==> /var/log/vdsm/vdsm.log <==
> >>>> >
> >>>> > vm/4013c829::ERROR::2018-01-12
> >>>> > 11:27:28,001::vm::914::virt.vm::(_startUnderlyingVm)
> >>>> > (vmId='4013c829-c9d7-4b72-90d5-6fe58137504c') The vm start process
> >>>> > failed
> >>>> >
> >>>> > Traceback (most recent call last):
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 843,
> >>>> > in
> >>>> > _startUnderlyingVm
> >>>> >
> >>>> > self._run()
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 2721,
> >>>> > in
> >>>> > _run
> >>>> >
> >>>> > dom.createWithFlags(flags)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/
> libvirtconnection.py",
> >>>> > line
> >>>> > 126, in wrapper
> >>>> >
> >>>> > ret = f(*args, **kwargs)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 512,
> in
> >>>> > wrapper
> >>>> >
> >>>> > return func(inst, *args, **kwargs)
> >>>> >
> >>>> > File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069,
> in
> >>>> > createWithFlags
> >>>> >
> >>>> > if ret == -1: raise libvirtError ('virDomainCreateWithFlags()
> >>>> > failed',
> >>>> > dom=self)
> >>>> >
> >>>> > libvirtError: resource busy: Failed to acquire lock: Lease is held
> by
> >>>> > another host
> >>>> >
> >>>> > periodic/47::ERROR::2018-01-12
> >>>> > 11:27:32,858::periodic::215::virt.periodic.Operation::(__call__)
> >>>> > <vdsm.virt.sampling.VMBulkstatsMonitor object at 0x3692590>
> operation
> >>>> > failed
> >>>> >
> >>>> > Traceback (most recent call last):
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/periodic.py",
> line
> >>>> > 213,
> >>>> > in __call__
> >>>> >
> >>>> > self._func()
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.py",
> line
> >>>> > 522,
> >>>> > in __call__
> >>>> >
> >>>> > self._send_metrics()
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.py",
> line
> >>>> > 538,
> >>>> > in _send_metrics
> >>>> >
> >>>> > vm_sample.interval)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py",
> line
> >>>> > 45, in
> >>>> > produce
> >>>> >
> >>>> > networks(vm, stats, first_sample, last_sample, interval)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py",
> line
> >>>> > 322, in
> >>>> > networks
> >>>> >
> >>>> > if nic.name.startswith('hostdev'):
> >>>> >
> >>>> > AttributeError: name
> >>>> >
> >>>> >
> >>>> > On Fri, Jan 12, 2018 at 11:14 AM, Martin Sivak <msivak(a)redhat.com>
> >>>> > wrote:
> >>>> >>
> >>>> >> Hmm that rules out most of NFS related permission issues.
> >>>> >>
> >>>> >> So the current status is (I need to sum it up to get the full
> >>>> >> picture):
> >>>> >>
> >>>> >> - HE VM is down
> >>>> >> - HE agent fails when opening metadata using the symlink
> >>>> >> - the symlink is there
> >>>> >> - the symlink is readable by vdsm:kvm
> >>>> >>
> >>>> >> Hmm can you check under which user is ovirt-ha-broker started?
> >>>> >>
> >>>> >> Martin
> >>>> >>
> >>>> >>
> >>>> >> On Fri, Jan 12, 2018 at 4:10 PM, Jayme <jaymef(a)gmail.com> wrote:
> >>>> >> > Same thing happens with data images of other VMs as well though,
> >>>> >> > and
> >>>> >> > those
> >>>> >> > seem to be running ok so I'm not sure if it's the problem.
> >>>> >> >
> >>>> >> > On Fri, Jan 12, 2018 at 11:08 AM, Jayme <jaymef(a)gmail.com>
> wrote:
> >>>> >> >>
> >>>> >> >> Martin,
> >>>> >> >>
> >>>> >> >> I can as VDSM user but not as root . I get permission denied
> >>>> >> >> trying to
> >>>> >> >> touch one of the files as root, is that normal?
> >>>> >> >>
> >>>> >> >> On Fri, Jan 12, 2018 at 11:03 AM, Martin Sivak <
> msivak(a)redhat.com>
> >>>> >> >> wrote:
> >>>> >> >>>
> >>>> >> >>> Hmm, then it might be a permission issue indeed. Can you touch
> >>>> >> >>> the
> >>>> >> >>> file? Open it? (try hexdump) Just to make sure NFS does not
> >>>> >> >>> prevent
> >>>> >> >>> you from doing that.
> >>>> >> >>>
> >>>> >> >>> Martin
> >>>> >> >>>
> >>>> >> >>> On Fri, Jan 12, 2018 at 3:57 PM, Jayme <jaymef(a)gmail.com>
> wrote:
> >>>> >> >>> > Sorry, I think we got confused about the symlink, there are
> >>>> >> >>> > symlinks
> >>>> >> >>> > in
> >>>> >> >>> > /var/run that point the /rhev when I was doing an LS it was
> >>>> >> >>> > listing
> >>>> >> >>> > the
> >>>> >> >>> > files in /rhev
> >>>> >> >>> >
> >>>> >> >>> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286
> >>>> >> >>> >
> >>>> >> >>> > 14a20941-1b84-4b82-be8f-ace38d7c037a ->
> >>>> >> >>> >
> >>>> >> >>> >
> >>>> >> >>> >
> >>>> >> >>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine/248f46f0-d793-4581-9810-
> c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >>>> >> >>> >
> >>>> >> >>> > ls -al
> >>>> >> >>> >
> >>>> >> >>> >
> >>>> >> >>> >
> >>>> >> >>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine/248f46f0-d793-4581-9810-
> c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >>>> >> >>> > total 2040
> >>>> >> >>> > drwxr-xr-x. 2 vdsm kvm 4096 Jan 12 10:51 .
> >>>> >> >>> > drwxr-xr-x. 8 vdsm kvm 4096 Feb 3 2016 ..
> >>>> >> >>> > -rw-rw----. 1 vdsm kvm 1028096 Jan 12 10:56
> >>>> >> >>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >>>> >> >>> > -rw-rw----. 1 vdsm kvm 1048576 Feb 3 2016
> >>>> >> >>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.lease
> >>>> >> >>> > -rw-r--r--. 1 vdsm kvm 283 Feb 3 2016
> >>>> >> >>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.meta
> >>>> >> >>> >
> >>>> >> >>> > Is it possible that this is the wrong image for hosted
> engine?
> >>>> >> >>> >
> >>>> >> >>> > this is all I get in vdsm log when running hosted-engine
> >>>> >> >>> > --connect-storage
> >>>> >> >>> >
> >>>> >> >>> > jsonrpc/4::ERROR::2018-01-12
> >>>> >> >>> >
> >>>> >> >>> >
> >>>> >> >>> > 10:52:53,019::__init__::611::jsonrpc.JsonRpcServer::(_
> handle_request)
> >>>> >> >>> > Internal server error
> >>>> >> >>> > Traceback (most recent call last):
> >>>> >> >>> > File
> >>>> >> >>> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
> >>>> >> >>> > line
> >>>> >> >>> > 606,
> >>>> >> >>> > in _handle_request
> >>>> >> >>> > res = method(**params)
> >>>> >> >>> > File "/usr/lib/python2.7/site-
> packages/vdsm/rpc/Bridge.py",
> >>>> >> >>> > line
> >>>> >> >>> > 201,
> >>>> >> >>> > in
> >>>> >> >>> > _dynamicMethod
> >>>> >> >>> > result = fn(*methodArgs)
> >>>> >> >>> > File "<string>", line 2, in getAllVmIoTunePolicies
> >>>> >> >>> > File "/usr/lib/python2.7/site-
> packages/vdsm/common/api.py",
> >>>> >> >>> > line
> >>>> >> >>> > 48,
> >>>> >> >>> > in
> >>>> >> >>> > method
> >>>> >> >>> > ret = func(*args, **kwargs)
> >>>> >> >>> > File "/usr/lib/python2.7/site-packages/vdsm/API.py", line
> >>>> >> >>> > 1354, in
> >>>> >> >>> > getAllVmIoTunePolicies
> >>>> >> >>> > io_tune_policies_dict = self._cif.
> getAllVmIoTunePolicies()
> >>>> >> >>> > File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py",
> >>>> >> >>> > line
> >>>> >> >>> > 524,
> >>>> >> >>> > in
> >>>> >> >>> > getAllVmIoTunePolicies
> >>>> >> >>> > 'current_values': v.getIoTune()}
> >>>> >> >>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >>>> >> >>> > 3481,
> >>>> >> >>> > in
> >>>> >> >>> > getIoTune
> >>>> >> >>> > result = self.getIoTuneResponse()
> >>>> >> >>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >>>> >> >>> > 3500,
> >>>> >> >>> > in
> >>>> >> >>> > getIoTuneResponse
> >>>> >> >>> > res = self._dom.blockIoTune(
> >>>> >> >>> > File
> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py",
> >>>> >> >>> > line
> >>>> >> >>> > 47,
> >>>> >> >>> > in __getattr__
> >>>> >> >>> > % self.vmid)
> >>>> >> >>> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58137504c'
> >>>> >> >>> > was not
> >>>> >> >>> > defined
> >>>> >> >>> > yet or was undefined
> >>>> >> >>> >
> >>>> >> >>> > On Fri, Jan 12, 2018 at 10:48 AM, Martin Sivak
> >>>> >> >>> > <msivak(a)redhat.com>
> >>>> >> >>> > wrote:
> >>>> >> >>> >>
> >>>> >> >>> >> Hi,
> >>>> >> >>> >>
> >>>> >> >>> >> what happens when you try hosted-engine --connect-storage?
> Do
> >>>> >> >>> >> you
> >>>> >> >>> >> see
> >>>> >> >>> >> any errors in the vdsm log?
> >>>> >> >>> >>
> >>>> >> >>> >> Best regards
> >>>> >> >>> >>
> >>>> >> >>> >> Martin Sivak
> >>>> >> >>> >>
> >>>> >> >>> >> On Fri, Jan 12, 2018 at 3:41 PM, Jayme <jaymef(a)gmail.com>
> >>>> >> >>> >> wrote:
> >>>> >> >>> >> > Ok this is what I've done:
> >>>> >> >>> >> >
> >>>> >> >>> >> > - All three hosts in global maintenance mode
> >>>> >> >>> >> > - Ran: systemctl stop ovirt-ha-broker; systemctl stop
> >>>> >> >>> >> > ovirt-ha-broker --
> >>>> >> >>> >> > on
> >>>> >> >>> >> > all three hosts
> >>>> >> >>> >> > - Moved ALL files in
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
> >>>> >> >>> >> > to
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/backup
> >>>> >> >>> >> > - Ran: systemctl start ovirt-ha-broker; systemctl start
> >>>> >> >>> >> > ovirt-ha-broker
> >>>> >> >>> >> > --
> >>>> >> >>> >> > on all three hosts
> >>>> >> >>> >> >
> >>>> >> >>> >> > - attempt start of engine vm from HOST0 (cultivar0):
> >>>> >> >>> >> > hosted-engine
> >>>> >> >>> >> > --vm-start
> >>>> >> >>> >> >
> >>>> >> >>> >> > Lots of errors in the logs still, it appears to be having
> >>>> >> >>> >> > problems
> >>>> >> >>> >> > with
> >>>> >> >>> >> > that
> >>>> >> >>> >> > directory still:
> >>>> >> >>> >> >
> >>>> >> >>> >> > Jan 12 10:40:13 cultivar0 journal: ovirt-ha-broker
> >>>> >> >>> >> > ovirt_hosted_engine_ha.broker.
> storage_broker.StorageBroker
> >>>> >> >>> >> > ERROR
> >>>> >> >>> >> > Failed
> >>>> >> >>> >> > to
> >>>> >> >>> >> > write metadata for host 1 to
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8#012Traceback
> >>>> >> >>> >> > (most recent call last):#012 File
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/broker/storage_broker.py",
> >>>> >> >>> >> > line 202, in put_stats#012 f = os.open(path,
> direct_flag
> >>>> >> >>> >> > |
> >>>> >> >>> >> > os.O_WRONLY |
> >>>> >> >>> >> > os.O_SYNC)#012OSError: [Errno 2] No such file or
> directory:
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8'
> >>>> >> >>> >> >
> >>>> >> >>> >> > There are no new files or symlinks in
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
> >>>> >> >>> >> >
> >>>> >> >>> >> > - Jayme
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > On Fri, Jan 12, 2018 at 10:23 AM, Martin Sivak
> >>>> >> >>> >> > <msivak(a)redhat.com>
> >>>> >> >>> >> > wrote:
> >>>> >> >>> >> >>
> >>>> >> >>> >> >> > Can you please stop all hosted engine tooling (
> >>>> >> >>> >> >>
> >>>> >> >>> >> >> On all hosts I should have added.
> >>>> >> >>> >> >>
> >>>> >> >>> >> >> Martin
> >>>> >> >>> >> >>
> >>>> >> >>> >> >> On Fri, Jan 12, 2018 at 3:22 PM, Martin Sivak
> >>>> >> >>> >> >> <msivak(a)redhat.com>
> >>>> >> >>> >> >> wrote:
> >>>> >> >>> >> >> >> RequestError: failed to read metadata: [Errno 2] No
> such
> >>>> >> >>> >> >> >> file
> >>>> >> >>> >> >> >> or
> >>>> >> >>> >> >> >> directory:
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8'
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> ls -al
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8
> >>>> >> >>> >> >> >> -rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> Is this due to the symlink problem you guys are
> >>>> >> >>> >> >> >> referring to
> >>>> >> >>> >> >> >> that
> >>>> >> >>> >> >> >> was
> >>>> >> >>> >> >> >> addressed in RC1 or something else?
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> > No, this file is the symlink. It should point to
> >>>> >> >>> >> >> > somewhere
> >>>> >> >>> >> >> > inside
> >>>> >> >>> >> >> > /rhev/. I see it is a 1G file in your case. That is
> >>>> >> >>> >> >> > really
> >>>> >> >>> >> >> > interesting.
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> > Can you please stop all hosted engine tooling
> >>>> >> >>> >> >> > (ovirt-ha-agent,
> >>>> >> >>> >> >> > ovirt-ha-broker), move the file (metadata file is not
> >>>> >> >>> >> >> > important
> >>>> >> >>> >> >> > when
> >>>> >> >>> >> >> > services are stopped, but better safe than sorry) and
> >>>> >> >>> >> >> > restart
> >>>> >> >>> >> >> > all
> >>>> >> >>> >> >> > services again?
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> >> Could there possibly be a permissions
> >>>> >> >>> >> >> >> problem somewhere?
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> > Maybe, but the file itself looks out of the ordinary. I
> >>>> >> >>> >> >> > wonder
> >>>> >> >>> >> >> > how it
> >>>> >> >>> >> >> > got there.
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> > Best regards
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> > Martin Sivak
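Spelled out as a rough sequence (the path is the one from the traceback
above; the backup destination is only an example, park the file wherever
you like):

systemctl stop ovirt-ha-agent ovirt-ha-broker    # on every host
mv /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8 \
   /root/8582bdfc-metadata.bak                   # move the odd 1G file out of the way
systemctl start ovirt-ha-broker ovirt-ha-agent   # again on every host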
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> > On Fri, Jan 12, 2018 at 3:09 PM, Jayme <
> jaymef(a)gmail.com>
> >>>> >> >>> >> >> > wrote:
> >>>> >> >>> >> >> >> Thanks for the help thus far. Storage could be
> related
> >>>> >> >>> >> >> >> but
> >>>> >> >>> >> >> >> all
> >>>> >> >>> >> >> >> other
> >>>> >> >>> >> >> >> VMs on
> >>>> >> >>> >> >> >> same storage are running ok. The storage is mounted
> via
> >>>> >> >>> >> >> >> NFS
> >>>> >> >>> >> >> >> from
> >>>> >> >>> >> >> >> within one
> >>>> >> >>> >> >> >> of the three hosts, I realize this is not ideal. This
> >>>> >> >>> >> >> >> was
> >>>> >> >>> >> >> >> setup
> >>>> >> >>> >> >> >> by
> >>>> >> >>> >> >> >> a
> >>>> >> >>> >> >> >> previous admin more as a proof of concept and VMs were
> >>>> >> >>> >> >> >> put on
> >>>> >> >>> >> >> >> there
> >>>> >> >>> >> >> >> that
> >>>> >> >>> >> >> >> should not have been placed in a proof of concept
> >>>> >> >>> >> >> >> environment..
> >>>> >> >>> >> >> >> it
> >>>> >> >>> >> >> >> was
> >>>> >> >>> >> >> >> intended to be rebuilt with proper storage down the
> >>>> >> >>> >> >> >> road.
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> So the storage is on HOST0 and the other hosts mount
> NFS
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/data
> >>>> >> >>> >> >> >> 4861742080
> >>>> >> >>> >> >> >> 1039352832 3822389248 22%
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_data
> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/iso
> >>>> >> >>> >> >> >> 4861742080
> >>>> >> >>> >> >> >> 1039352832 3822389248 22%
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_iso
> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.
> com:/exports/import_export
> >>>> >> >>> >> >> >> 4861742080
> >>>> >> >>> >> >> >> 1039352832 3822389248 22%
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_import__export
> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.
> com:/exports/hosted_engine
> >>>> >> >>> >> >> >> 4861742080
> >>>> >> >>> >> >> >> 1039352832 3822389248 22%
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> Like I said, the VM data storage itself seems to be
> >>>> >> >>> >> >> >> working
> >>>> >> >>> >> >> >> ok,
> >>>> >> >>> >> >> >> as
> >>>> >> >>> >> >> >> all
> >>>> >> >>> >> >> >> other
> >>>> >> >>> >> >> >> VMs appear to be running.
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> I'm curious why the broker log says this file is not
> >>>> >> >>> >> >> >> found
> >>>> >> >>> >> >> >> when
> >>>> >> >>> >> >> >> it
> >>>> >> >>> >> >> >> is
> >>>> >> >>> >> >> >> correct and I can see the file at that path:
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> RequestError: failed to read metadata: [Errno 2] No
> such
> >>>> >> >>> >> >> >> file
> >>>> >> >>> >> >> >> or
> >>>> >> >>> >> >> >> directory:
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8'
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> ls -al
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8
> >>>> >> >>> >> >> >> -rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> Is this due to the symlink problem you guys are
> >>>> >> >>> >> >> >> referring to
> >>>> >> >>> >> >> >> that
> >>>> >> >>> >> >> >> was
> >>>> >> >>> >> >> >> addressed in RC1 or something else? Could there
> >>>> >> >>> >> >> >> possibly be
> >>>> >> >>> >> >> >> a
> >>>> >> >>> >> >> >> permissions
> >>>> >> >>> >> >> >> problem somewhere?
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> Assuming that all three hosts have 4.2 rpms installed
> >>>> >> >>> >> >> >> and the
> >>>> >> >>> >> >> >> host
> >>>> >> >>> >> >> >> engine
> >>>> >> >>> >> >> >> will not start is it safe for me to update hosts to
> 4.2
> >>>> >> >>> >> >> >> RC1
> >>>> >> >>> >> >> >> rpms?
> >>>> >> >>> >> >> >> Or
> >>>> >> >>> >> >> >> perhaps install that repo and *only* update the ovirt
> HA
> >>>> >> >>> >> >> >> packages?
> >>>> >> >>> >> >> >> Assuming that I cannot yet apply the same updates to
> the
> >>>> >> >>> >> >> >> inaccessible
> >>>> >> >>> >> >> >> hosted
> >>>> >> >>> >> >> >> engine VM.
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> I should also mention one more thing. I originally
> >>>> >> >>> >> >> >> upgraded
> >>>> >> >>> >> >> >> the
> >>>> >> >>> >> >> >> engine
> >>>> >> >>> >> >> >> VM
> >>>> >> >>> >> >> >> first using new RPMS then engine-setup. It failed due
> >>>> >> >>> >> >> >> to not
> >>>> >> >>> >> >> >> being
> >>>> >> >>> >> >> >> in
> >>>> >> >>> >> >> >> global maintenance, so I set global maintenance and
> ran
> >>>> >> >>> >> >> >> it
> >>>> >> >>> >> >> >> again,
> >>>> >> >>> >> >> >> which
> >>>> >> >>> >> >> >> appeared to complete as intended but never came back
> up
> >>>> >> >>> >> >> >> after.
> >>>> >> >>> >> >> >> Just
> >>>> >> >>> >> >> >> in
> >>>> >> >>> >> >> >> case
> >>>> >> >>> >> >> >> this might have anything at all to do with what could
> >>>> >> >>> >> >> >> have
> >>>> >> >>> >> >> >> happened.
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> Thanks very much again, I very much appreciate the
> help!
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> - Jayme
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> On Fri, Jan 12, 2018 at 8:44 AM, Simone Tiraboschi
> >>>> >> >>> >> >> >> <stirabos(a)redhat.com>
> >>>> >> >>> >> >> >> wrote:
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>> On Fri, Jan 12, 2018 at 11:11 AM, Martin Sivak
> >>>> >> >>> >> >> >>> <msivak(a)redhat.com>
> >>>> >> >>> >> >> >>> wrote:
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> Hi,
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> the hosted engine agent issue might be fixed by
> >>>> >> >>> >> >> >>>> restarting
> >>>> >> >>> >> >> >>>> ovirt-ha-broker or updating to newest
> >>>> >> >>> >> >> >>>> ovirt-hosted-engine-ha
> >>>> >> >>> >> >> >>>> and
> >>>> >> >>> >> >> >>>> -setup. We improved handling of the missing symlink.
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>> Available just in oVirt 4.2.1 RC1
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> All the other issues seem to point to some storage
> >>>> >> >>> >> >> >>>> problem
> >>>> >> >>> >> >> >>>> I
> >>>> >> >>> >> >> >>>> am
> >>>> >> >>> >> >> >>>> afraid.
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> You said you started the VM, do you see it in virsh
> -r
> >>>> >> >>> >> >> >>>> list?
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> Best regards
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> Martin Sivak
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> On Thu, Jan 11, 2018 at 10:00 PM, Jayme
> >>>> >> >>> >> >> >>>> <jaymef(a)gmail.com>
> >>>> >> >>> >> >> >>>> wrote:
> >>>> >> >>> >> >> >>>> > Please help, I'm really not sure what else to try
> at
> >>>> >> >>> >> >> >>>> > this
> >>>> >> >>> >> >> >>>> > point.
> >>>> >> >>> >> >> >>>> > Thank
> >>>> >> >>> >> >> >>>> > you
> >>>> >> >>> >> >> >>>> > for reading!
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > I'm still working on trying to get my hosted
> engine
> >>>> >> >>> >> >> >>>> > running
> >>>> >> >>> >> >> >>>> > after a
> >>>> >> >>> >> >> >>>> > botched
> >>>> >> >>> >> >> >>>> > upgrade to 4.2. Storage is NFS mounted from
> within
> >>>> >> >>> >> >> >>>> > one
> >>>> >> >>> >> >> >>>> > of
> >>>> >> >>> >> >> >>>> > the
> >>>> >> >>> >> >> >>>> > hosts.
> >>>> >> >>> >> >> >>>> > Right
> >>>> >> >>> >> >> >>>> > now I have 3 centos7 hosts that are fully updated
> >>>> >> >>> >> >> >>>> > with
> >>>> >> >>> >> >> >>>> > yum
> >>>> >> >>> >> >> >>>> > packages
> >>>> >> >>> >> >> >>>> > from
> >>>> >> >>> >> >> >>>> > ovirt 4.2, the engine was fully updated with yum
> >>>> >> >>> >> >> >>>> > packages
> >>>> >> >>> >> >> >>>> > and
> >>>> >> >>> >> >> >>>> > failed to
> >>>> >> >>> >> >> >>>> > come
> >>>> >> >>> >> >> >>>> > up after reboot. As of right now, everything
> should
> >>>> >> >>> >> >> >>>> > have
> >>>> >> >>> >> >> >>>> > full
> >>>> >> >>> >> >> >>>> > yum
> >>>> >> >>> >> >> >>>> > updates
> >>>> >> >>> >> >> >>>> > and all having 4.2 rpms. I have global
> maintenance
> >>>> >> >>> >> >> >>>> > mode
> >>>> >> >>> >> >> >>>> > on
> >>>> >> >>> >> >> >>>> > right
> >>>> >> >>> >> >> >>>> > now
> >>>> >> >>> >> >> >>>> > and
> >>>> >> >>> >> >> >>>> > started hosted-engine on one of the three host and
> >>>> >> >>> >> >> >>>> > the
> >>>> >> >>> >> >> >>>> > status is
> >>>> >> >>> >> >> >>>> > currently:
> >>>> >> >>> >> >> >>>> > Engine status : {"reason": "failed liveliness
> >>>> >> >>> >> >> >>>> > check",
> >>>> >> >>> >> >> >>>> > "health":
> >>>> >> >>> >> >> >>>> > "bad",
> >>>> >> >>> >> >> >>>> > "vm":
> >>>> >> >>> >> >> >>>> > "up", "detail": "Up"}
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > this is what I get when trying to enter hosted-vm
> >>>> >> >>> >> >> >>>> > --console
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > The engine VM is running on this host
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > error: failed to get domain 'HostedEngine'
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > error: Domain not found: no domain with matching
> >>>> >> >>> >> >> >>>> > name
> >>>> >> >>> >> >> >>>> > 'HostedEngine'
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > Here are logs from various sources when I start
> the
> >>>> >> >>> >> >> >>>> > VM on
> >>>> >> >>> >> >> >>>> > HOST3:
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > hosted-engine --vm-start
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > Command VM.getStats with args {'vmID':
> >>>> >> >>> >> >> >>>> > '4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > (code=1, message=Virtual machine does not exist:
> >>>> >> >>> >> >> >>>> > {'vmId':
> >>>> >> >>> >> >> >>>> > u'4013c829-c9d7-4b72-90d5-6fe58137504c'})
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 systemd-machined: New
> >>>> >> >>> >> >> >>>> > machine
> >>>> >> >>> >> >> >>>> > qemu-110-Cultivar.
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 systemd: Started Virtual
> >>>> >> >>> >> >> >>>> > Machine
> >>>> >> >>> >> >> >>>> > qemu-110-Cultivar.
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 systemd: Starting
> Virtual
> >>>> >> >>> >> >> >>>> > Machine
> >>>> >> >>> >> >> >>>> > qemu-110-Cultivar.
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > Jan 11 16:55:57 cultivar3 kvm: 3 guests now active
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > ==> /var/log/vdsm/vdsm.log <==
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > File
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/vdsm/common/api.py",
> >>>> >> >>> >> >> >>>> > line
> >>>> >> >>> >> >> >>>> > 48,
> >>>> >> >>> >> >> >>>> > in
> >>>> >> >>> >> >> >>>> > method
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > ret = func(*args, **kwargs)
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > File
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/vdsm/storage/hsm.py",
> >>>> >> >>> >> >> >>>> > line
> >>>> >> >>> >> >> >>>> > 2718, in
> >>>> >> >>> >> >> >>>> > getStorageDomainInfo
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > dom = self.validateSdUUID(sdUUID)
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > File
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/vdsm/storage/hsm.py",
> >>>> >> >>> >> >> >>>> > line
> >>>> >> >>> >> >> >>>> > 304, in
> >>>> >> >>> >> >> >>>> > validateSdUUID
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > sdDom.validate()
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > File
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/vdsm/storage/fileSD.py",
> >>>> >> >>> >> >> >>>> > line
> >>>> >> >>> >> >> >>>> > 515,
> >>>> >> >>> >> >> >>>> > in validate
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > raise se.StorageDomainAccessError(
> self.sdUUID)
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > StorageDomainAccessError: Domain is either
> partially
> >>>> >> >>> >> >> >>>> > accessible
> >>>> >> >>> >> >> >>>> > or
> >>>> >> >>> >> >> >>>> > entirely
> >>>> >> >>> >> >> >>>> > inaccessible:
> >>>> >> >>> >> >> >>>> > (u'248f46f0-d793-4581-9810-c9d965e2f286',)
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > jsonrpc/2::ERROR::2018-01-11
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > 16:55:16,144::dispatcher::82::
> storage.Dispatcher::(wrapper)
> >>>> >> >>> >> >> >>>> > FINISH
> >>>> >> >>> >> >> >>>> > getStorageDomainInfo error=Domain is either
> >>>> >> >>> >> >> >>>> > partially
> >>>> >> >>> >> >> >>>> > accessible
> >>>> >> >>> >> >> >>>> > or
> >>>> >> >>> >> >> >>>> > entirely
> >>>> >> >>> >> >> >>>> > inaccessible:
> >>>> >> >>> >> >> >>>> > (u'248f46f0-d793-4581-9810-c9d965e2f286',)
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > LC_ALL=C
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > PATH=/usr/local/sbin:/usr/
> local/bin:/usr/sbin:/usr/bin
> >>>> >> >>> >> >> >>>> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >>>> >> >>> >> >> >>>> > guest=Cultivar,debug-threads=on -S -object
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > secret,id=masterKey0,format=
> raw,file=/var/lib/libvirt/qemu/domain-108-Cultivar/master-key.aes
> >>>> >> >>> >> >> >>>> > -machine
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > pc-i440fx-rhel7.3.0,accel=kvm,
> usb=off,dump-guest-core=off
> >>>> >> >>> >> >> >>>> > -cpu
> >>>> >> >>> >> >> >>>> > Conroe -m 8192 -realtime mlock=off -smp
> >>>> >> >>> >> >> >>>> > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >>>> >> >>> >> >> >>>> > 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >>>> >> >>> >> >> >>>> > 'type=1,manufacturer=oVirt,product=oVirt
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > Node,version=7-4.1708.el7.
> centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=
> 4013c829-c9d7-4b72-90d5-6fe58137504c'
> >>>> >> >>> >> >> >>>> > -no-user-config -nodefaults -chardev
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > socket,id=charmonitor,path=/
> var/lib/libvirt/qemu/domain-108-Cultivar/monitor.sock,server,nowait
> >>>> >> >>> >> >> >>>> > -mon chardev=charmonitor,id=monitor,mode=control
> >>>> >> >>> >> >> >>>> > -rtc
> >>>> >> >>> >> >> >>>> > base=2018-01-11T20:33:19,driftfix=slew -global
> >>>> >> >>> >> >> >>>> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot
> >>>> >> >>> >> >> >>>> > -boot
> >>>> >> >>> >> >> >>>> > strict=on
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtio-serial-pci,id=virtio-
> serial0,bus=pci.0,addr=0x4
> >>>> >> >>> >> >> >>>> > -drive
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > file=/var/run/vdsm/storage/
> 248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/
> 23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=
> none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-
> a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtio-blk-pci,scsi=off,bus=
> pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> >>>> >> >>> >> >> >>>> > -drive if=none,id=drive-ide0-1-0,readonly=on
> -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > ide-cd,bus=ide.1,unit=0,drive=
> drive-ide0-1-0,id=ide0-1-0
> >>>> >> >>> >> >> >>>> > -netdev
> >>>> >> >>> >> >> >>>> > tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtio-net-pci,netdev=
> hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
> >>>> >> >>> >> >> >>>> > -chardev
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > socket,id=charchannel0,path=/
> var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-
> 6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >>>> >> >>> >> >> >>>> > -chardev
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > socket,id=charchannel1,path=/
> var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-
> 6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >>>> >> >>> >> >> >>>> > -chardev spicevmc,id=charchannel2,name=vdagent
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >>>> >> >>> >> >> >>>> > -chardev
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > socket,id=charchannel3,path=/
> var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-
> 6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >>>> >> >>> >> >> >>>> > -chardev pty,id=charconsole0 -device
> >>>> >> >>> >> >> >>>> > virtconsole,chardev=charconsole0,id=console0
> -spice
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > tls-port=5900,addr=0,x509-dir=
> /etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
> >>>> >> >>> >> >> >>>> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2
> >>>> >> >>> >> >> >>>> > -object
> >>>> >> >>> >> >> >>>> > rng-random,id=objrng0,filename=/dev/urandom
> -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtio-rng-pci,rng=objrng0,id=
> rng0,bus=pci.0,addr=0x5
> >>>> >> >>> >> >> >>>> > -msg
> >>>> >> >>> >> >> >>>> > timestamp=on
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > 2018-01-11T20:33:19.699999Z qemu-kvm: -chardev
> >>>> >> >>> >> >> >>>> > pty,id=charconsole0:
> >>>> >> >>> >> >> >>>> > char
> >>>> >> >>> >> >> >>>> > device redirected to /dev/pts/2 (label
> charconsole0)
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > 2018-01-11 20:38:11.640+0000: shutting down,
> >>>> >> >>> >> >> >>>> > reason=shutdown
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > 2018-01-11 20:39:02.122+0000: starting up libvirt
> >>>> >> >>> >> >> >>>> > version:
> >>>> >> >>> >> >> >>>> > 3.2.0,
> >>>> >> >>> >> >> >>>> > package:
> >>>> >> >>> >> >> >>>> > 14.el7_4.7 (CentOS BuildSystem
> >>>> >> >>> >> >> >>>> > <http://bugs.centos.org>,
> >>>> >> >>> >> >> >>>> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu
> >>>> >> >>> >> >> >>>> > version:
> >>>> >> >>> >> >> >>>> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> >>>> >> >>> >> >> >>>> > cultivar3
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > LC_ALL=C
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > PATH=/usr/local/sbin:/usr/
> local/bin:/usr/sbin:/usr/bin
> >>>> >> >>> >> >> >>>> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >>>> >> >>> >> >> >>>> > guest=Cultivar,debug-threads=on -S -object
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > secret,id=masterKey0,format=
> raw,file=/var/lib/libvirt/qemu/domain-109-Cultivar/master-key.aes
> >>>> >> >>> >> >> >>>> > -machine
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > pc-i440fx-rhel7.3.0,accel=kvm,
> usb=off,dump-guest-core=off
> >>>> >> >>> >> >> >>>> > -cpu
> >>>> >> >>> >> >> >>>> > Conroe -m 8192 -realtime mlock=off -smp
> >>>> >> >>> >> >> >>>> > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >>>> >> >>> >> >> >>>> > 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >>>> >> >>> >> >> >>>> > 'type=1,manufacturer=oVirt,product=oVirt
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > Node,version=7-4.1708.el7.
> centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=
> 4013c829-c9d7-4b72-90d5-6fe58137504c'
> >>>> >> >>> >> >> >>>> > -no-user-config -nodefaults -chardev
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > socket,id=charmonitor,path=/
> var/lib/libvirt/qemu/domain-109-Cultivar/monitor.sock,server,nowait
> >>>> >> >>> >> >> >>>> > -mon chardev=charmonitor,id=monitor,mode=control
> >>>> >> >>> >> >> >>>> > -rtc
> >>>> >> >>> >> >> >>>> > base=2018-01-11T20:39:02,driftfix=slew -global
> >>>> >> >>> >> >> >>>> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot
> >>>> >> >>> >> >> >>>> > -boot
> >>>> >> >>> >> >> >>>> > strict=on
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtio-serial-pci,id=virtio-
> serial0,bus=pci.0,addr=0x4
> >>>> >> >>> >> >> >>>> > -drive
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > file=/var/run/vdsm/storage/
> 248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/
> 23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=
> none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-
> a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtio-blk-pci,scsi=off,bus=
> pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> >>>> >> >>> >> >> >>>> > -drive if=none,id=drive-ide0-1-0,readonly=on
> -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > ide-cd,bus=ide.1,unit=0,drive=
> drive-ide0-1-0,id=ide0-1-0
> >>>> >> >>> >> >> >>>> > -netdev
> >>>> >> >>> >> >> >>>> > tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtio-net-pci,netdev=
> hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
> >>>> >> >>> >> >> >>>> > -chardev
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > socket,id=charchannel0,path=/
> var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-
> 6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >>>> >> >>> >> >> >>>> > -chardev
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > socket,id=charchannel1,path=/
> var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-
> 6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >>>> >> >>> >> >> >>>> > -chardev spicevmc,id=charchannel2,name=vdagent
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >>>> >> >>> >> >> >>>> > -chardev
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > socket,id=charchannel3,path=/
> var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-
> 6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >>>> >> >>> >> >> >>>> > -chardev pty,id=charconsole0 -device
> >>>> >> >>> >> >> >>>> > virtconsole,chardev=charconsole0,id=console0
> -spice
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > tls-port=5900,addr=0,x509-dir=
> /etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
> >>>> >> >>> >> >> >>>> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2
> >>>> >> >>> >> >> >>>> > -object
> >>>> >> >>> >> >> >>>> > rng-random,id=objrng0,filename=/dev/urandom
> -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtio-rng-pci,rng=objrng0,id=
> rng0,bus=pci.0,addr=0x5
> >>>> >> >>> >> >> >>>> > -msg
> >>>> >> >>> >> >> >>>> > timestamp=on
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > 2018-01-11T20:39:02.380773Z qemu-kvm: -chardev
> >>>> >> >>> >> >> >>>> > pty,id=charconsole0:
> >>>> >> >>> >> >> >>>> > char
> >>>> >> >>> >> >> >>>> > device redirected to /dev/pts/2 (label
> charconsole0)
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > 2018-01-11 20:53:11.407+0000: shutting down,
> >>>> >> >>> >> >> >>>> > reason=shutdown
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > 2018-01-11 20:55:57.210+0000: starting up libvirt
> >>>> >> >>> >> >> >>>> > version:
> >>>> >> >>> >> >> >>>> > 3.2.0,
> >>>> >> >>> >> >> >>>> > package:
> >>>> >> >>> >> >> >>>> > 14.el7_4.7 (CentOS BuildSystem
> >>>> >> >>> >> >> >>>> > <http://bugs.centos.org>,
> >>>> >> >>> >> >> >>>> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu
> >>>> >> >>> >> >> >>>> > version:
> >>>> >> >>> >> >> >>>> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> >>>> >> >>> >> >> >>>> > cultivar3.grove.silverorange.com
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > LC_ALL=C
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > PATH=/usr/local/sbin:/usr/
> local/bin:/usr/sbin:/usr/bin
> >>>> >> >>> >> >> >>>> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >>>> >> >>> >> >> >>>> > guest=Cultivar,debug-threads=on -S -object
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > secret,id=masterKey0,format=
> raw,file=/var/lib/libvirt/qemu/domain-110-Cultivar/master-key.aes
> >>>> >> >>> >> >> >>>> > -machine
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > pc-i440fx-rhel7.3.0,accel=kvm,
> usb=off,dump-guest-core=off
> >>>> >> >>> >> >> >>>> > -cpu
> >>>> >> >>> >> >> >>>> > Conroe -m 8192 -realtime mlock=off -smp
> >>>> >> >>> >> >> >>>> > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >>>> >> >>> >> >> >>>> > 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >>>> >> >>> >> >> >>>> > 'type=1,manufacturer=oVirt,product=oVirt
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > Node,version=7-4.1708.el7.
> centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=
> 4013c829-c9d7-4b72-90d5-6fe58137504c'
> >>>> >> >>> >> >> >>>> > -no-user-config -nodefaults -chardev
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > socket,id=charmonitor,path=/
> var/lib/libvirt/qemu/domain-110-Cultivar/monitor.sock,server,nowait
> >>>> >> >>> >> >> >>>> > -mon chardev=charmonitor,id=monitor,mode=control
> >>>> >> >>> >> >> >>>> > -rtc
> >>>> >> >>> >> >> >>>> > base=2018-01-11T20:55:57,driftfix=slew -global
> >>>> >> >>> >> >> >>>> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot
> >>>> >> >>> >> >> >>>> > -boot
> >>>> >> >>> >> >> >>>> > strict=on
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtio-serial-pci,id=virtio-
> serial0,bus=pci.0,addr=0x4
> >>>> >> >>> >> >> >>>> > -drive
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > file=/var/run/vdsm/storage/
> 248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/
> 23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=
> none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-
> a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtio-blk-pci,scsi=off,bus=
> pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> >>>> >> >>> >> >> >>>> > -drive if=none,id=drive-ide0-1-0,readonly=on
> -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > ide-cd,bus=ide.1,unit=0,drive=
> drive-ide0-1-0,id=ide0-1-0
> >>>> >> >>> >> >> >>>> > -netdev
> >>>> >> >>> >> >> >>>> > tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtio-net-pci,netdev=
> hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
> >>>> >> >>> >> >> >>>> > -chardev
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > socket,id=charchannel0,path=/
> var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-
> 6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >>>> >> >>> >> >> >>>> > -chardev
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > socket,id=charchannel1,path=/
> var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-
> 6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >>>> >> >>> >> >> >>>> > -chardev spicevmc,id=charchannel2,name=vdagent
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >>>> >> >>> >> >> >>>> > -chardev
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > socket,id=charchannel3,path=/
> var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-
> 6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
> >>>> >> >>> >> >> >>>> > -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >>>> >> >>> >> >> >>>> > -chardev pty,id=charconsole0 -device
> >>>> >> >>> >> >> >>>> > virtconsole,chardev=charconsole0,id=console0
> -spice
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > tls-port=5900,addr=0,x509-dir=
> /etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
> >>>> >> >>> >> >> >>>> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2
> >>>> >> >>> >> >> >>>> > -object
> >>>> >> >>> >> >> >>>> > rng-random,id=objrng0,filename=/dev/urandom
> -device
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > virtio-rng-pci,rng=objrng0,id=
> rng0,bus=pci.0,addr=0x5
> >>>> >> >>> >> >> >>>> > -msg
> >>>> >> >>> >> >> >>>> > timestamp=on
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > 2018-01-11T20:55:57.468037Z qemu-kvm: -chardev
> >>>> >> >>> >> >> >>>> > pty,id=charconsole0:
> >>>> >> >>> >> >> >>>> > char
> >>>> >> >>> >> >> >>>> > device redirected to /dev/pts/2 (label
> charconsole0)
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > ==> /var/log/ovirt-hosted-engine-ha/broker.log
> <==
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > File
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> >>>> >> >>> >> >> >>>> > line 151, in get_raw_stats
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > f = os.open(path, direct_flag | os.O_RDONLY |
> >>>> >> >>> >> >> >>>> > os.O_SYNC)
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > OSError: [Errno 2] No such file or directory:
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8'
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > StatusStorageThread::ERROR::2018-01-11
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > 16:55:15,761::status_broker::
> 92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run)
> >>>> >> >>> >> >> >>>> > Failed to read state.
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > Traceback (most recent call last):
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > File
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/broker/status_broker.py",
> >>>> >> >>> >> >> >>>> > line 88, in run
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > self._storage_broker.get_raw_stats()
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > File
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> >>>> >> >>> >> >> >>>> > line 162, in get_raw_stats
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > .format(str(e)))
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > RequestError: failed to read metadata: [Errno 2]
> No
> >>>> >> >>> >> >> >>>> > such
> >>>> >> >>> >> >> >>>> > file or
> >>>> >> >>> >> >> >>>> > directory:
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8'
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > ==> /var/log/ovirt-hosted-engine-ha/agent.log <==
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > result = refresh_method()
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > File
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/env/config.py",
> >>>> >> >>> >> >> >>>> > line 519, in refresh_vm_conf
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > content =
> >>>> >> >>> >> >> >>>> > self._get_file_content_from_shared_storage(VM)
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > File
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/env/config.py",
> >>>> >> >>> >> >> >>>> > line 484, in _get_file_content_from_shared_
> storage
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > config_volume_path =
> >>>> >> >>> >> >> >>>> > self._get_config_volume_path()
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > File
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/env/config.py",
> >>>> >> >>> >> >> >>>> > line 188, in _get_config_volume_path
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > conf_vol_uuid
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > File
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/lib/heconflib.py",
> >>>> >> >>> >> >> >>>> > line 358, in get_volume_path
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > root=envconst.SD_RUN_DIR,
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > RuntimeError: Path to volume
> >>>> >> >>> >> >> >>>> > 4838749f-216d-406b-b245-98d0343fcf7f
> >>>> >> >>> >> >> >>>> > not
> >>>> >> >>> >> >> >>>> > found
> >>>> >> >>> >> >> >>>> > in /run/vdsm/storag
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > ==> /var/log/vdsm/vdsm.log <==
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > periodic/42::ERROR::2018-01-11
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > 16:56:11,446::vmstats::260::
> virt.vmstats::(send_metrics)
> >>>> >> >>> >> >> >>>> > VM
> >>>> >> >>> >> >> >>>> > metrics
> >>>> >> >>> >> >> >>>> > collection failed
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > Traceback (most recent call last):
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > File
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > "/usr/lib/python2.7/site-
> packages/vdsm/virt/vmstats.py",
> >>>> >> >>> >> >> >>>> > line
> >>>> >> >>> >> >> >>>> > 197, in
> >>>> >> >>> >> >> >>>> > send_metrics
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > data[prefix + '.cpu.usage'] = stat['cpuUsage']
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> > KeyError: 'cpuUsage'
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>> >
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >
> >>>> >> >>> >
> >>>> >> >>
> >>>> >> >>
> >>>> >> >
> >>>> >
> >>>> >
> >>>
> >>>
> >>
> >
>
6 years, 10 months
oVirt NGN image customization troubles
by Giuseppe Ragusa
Hi all,
I'm trying to modify the oVirt NGN image (to add RPMs, since imgbased rpmpersistence currently seems to have a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1528468 ) but I'm unfortunately stuck at the very beginning: it seems that I'm unable to recreate even the standard 4.1 squashfs image.
I'm following the instructions at https://gerrit.ovirt.org/gitweb?p=ovirt-node-ng.git;a=blob;f=README
I'm working inside a CentOS7 fully-updated vm (hosted inside VMware, with nested virtualization enabled).
I'm trying to work on the 4.1 branch, so I issued a:
./autogen.sh --with-ovirt-release-rpm-url=http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
And after that I'm stuck in the "make squashfs" step: it never ends (keeps printing dots forever with no errors/warnings in log messages nor any apparent activity on the virtual disk image).
Invoking it in debug mode and connecting to the VNC console shows the detailed Plymouth startup listing stuck (latest messages displayed: "Starting udev Wait for Complete Device Initialization..." and "Starting Device-Mapper Multipath Device Controller...")
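One thing worth ruling out first: that the build VM really has usable hardware virtualization, since "make squashfs" boots a nested guest to build the image (that is the console being watched over VNC here). A minimal check from inside the CentOS 7 build VM:

egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means the outer hypervisor passes the extensions through
ls -l /dev/kvm                       # must exist for the nested build guest to use KVM
lsmod | grep kvm                     # kvm plus kvm_intel or kvm_amd should be loaded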
I wonder if it's actually supposed to be run only from a recent Fedora (the "dnf" reference seems a good indicator): if so, which version?
I kindly ask for advice: has anyone succeeded in modifying/reproducing NGN squash images recently? If so, how? :-)
Many thanks in advance,
Giuseppe
6 years, 10 months
mount_options for hosted_engine storage domain
by Artem Tambovskiy
Hi,
I deployed a small cluster with 2 ovirt hosts and a GlusterFS cluster
some time ago, and recently during a software upgrade I noticed that I made
some mistakes during the installation:
if the host which was deployed first is taken down for upgrade
(powered off or rebooted), the engine becomes unavailable (even when all VMs
and the hosted engine were migrated to the second host in advance).
I was thinking that this is due to the missing
mnt_options=backup-volfile-servers=host1.domain.com;host2.domain.com
option for the hosted engine storage domain.
Is there any good way to fix this? I have tried
editing /etc/ovirt-hosted-engine/hosted-engine.conf manually to add the
missing mnt_options, but after a while I noticed that those changes are gone.
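(For reference: the local file can get overwritten from the copy kept on the
configuration volume of the shared storage, which would explain the edits
disappearing. If your hosted-engine version has the shared-config verbs,
something along these lines should persist the option instead; the exact
--type names are an assumption here, so check hosted-engine --help first:)

hosted-engine --set-maintenance --mode=global
hosted-engine --get-shared-config mnt_options --type=he_local     # what is stored right now
hosted-engine --set-shared-config mnt_options \
    "backup-volfile-servers=host2.domain.com" --type=he_local     # local copy on this host
hosted-engine --set-shared-config mnt_options \
    "backup-volfile-servers=host2.domain.com" --type=he_shared    # copy on the shared storage
hosted-engine --set-maintenance --mode=none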
Any suggestions?
Thanks in advance!
Artem
6 years, 10 months
unable to bring up hosted engine after botched 4.2 upgrade
by Jayme
Please help, I'm really not sure what else to try at this point. Thank you
for reading!
I'm still working on trying to get my hosted engine running after a botched
upgrade to 4.2. Storage is NFS mounted from within one of the hosts. Right
now I have 3 centos7 hosts that are fully updated with yum packages from
ovirt 4.2; the engine was fully updated with yum packages and failed to
come up after reboot. As of right now, everything should have full yum
updates and all hosts have 4.2 rpms. I have global maintenance mode on right
now and started hosted-engine on one of the three hosts, and the status is
currently: Engine status : {"reason": "failed liveliness check", "health":
"bad", "vm": "up", "detail": "Up"}
this is what I get when trying to run hosted-engine --console
The engine VM is running on this host
error: failed to get domain 'HostedEngine'
error: Domain not found: no domain with matching name 'HostedEngine'
Here are logs from various sources when I start the VM on HOST3:
hosted-engine --vm-start
Command VM.getStats with args {'vmID':
'4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:
(code=1, message=Virtual machine does not exist: {'vmId':
u'4013c829-c9d7-4b72-90d5-6fe58137504c'})
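(That getStats failure is just hosted-engine polling vdsm before the domain
exists; once the VM is reported up the same data can be queried directly,
assuming the vdsm-client package is installed, and treating the exact verb
names below as an assumption to verify against vdsm-client --help:)

vdsm-client Host getVMList                                          # VM UUIDs vdsm knows about
vdsm-client VM getStats vmID=4013c829-c9d7-4b72-90d5-6fe58137504c   # status as vdsm sees it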
Jan 11 16:55:57 cultivar3 systemd-machined: New machine qemu-110-Cultivar.
Jan 11 16:55:57 cultivar3 systemd: Started Virtual Machine
qemu-110-Cultivar.
Jan 11 16:55:57 cultivar3 systemd: Starting Virtual Machine
qemu-110-Cultivar.
Jan 11 16:55:57 cultivar3 kvm: 3 guests now active
==> /var/log/vdsm/vdsm.log <==
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2718,
in getStorageDomainInfo
dom = self.validateSdUUID(sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 304, in
validateSdUUID
sdDom.validate()
File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 515,
in validate
raise se.StorageDomainAccessError(self.sdUUID)
StorageDomainAccessError: Domain is either partially accessible or entirely
inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)
jsonrpc/2::ERROR::2018-01-11
16:55:16,144::dispatcher::82::storage.Dispatcher::(wrapper) FINISH
getStorageDomainInfo error=Domain is either partially accessible or
entirely inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)
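(Since vdsm flags the hosted-engine storage domain itself as inaccessible
here, it is worth confirming that the NFS export is still mounted and that
the run-dir entries vdsm builds from it exist. A rough sketch, with the
/rhev path left as a glob because the exact mountpoint differs per setup:)

df -h /rhev/data-center/mnt/*                                        # are the NFS exports still mounted?
ls -ld /rhev/data-center/mnt/*/248f46f0-d793-4581-9810-c9d965e2f286  # domain directory reachable?
ls -l /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/    # links vdsm creates for it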
==> /var/log/libvirt/qemu/Cultivar.log <==
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
guest=Cultivar,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-108-Cultivar/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Conroe -m 8192 -realtime mlock=off -smp
2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
'type=1,manufacturer=oVirt,product=oVirt
Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-108-Cultivar/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2018-01-11T20:33:19,driftfix=slew -global
kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-chardev
socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -spice
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
rng-random,id=objrng0,filename=/dev/urandom -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
2018-01-11T20:33:19.699999Z qemu-kvm: -chardev pty,id=charconsole0: char
device redirected to /dev/pts/2 (label charconsole0)
2018-01-11 20:38:11.640+0000: shutting down, reason=shutdown
2018-01-11 20:39:02.122+0000: starting up libvirt version: 3.2.0, package:
14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: cultivar3
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
guest=Cultivar,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-109-Cultivar/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Conroe -m 8192 -realtime mlock=off -smp
2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
'type=1,manufacturer=oVirt,product=oVirt
Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-109-Cultivar/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2018-01-11T20:39:02,driftfix=slew -global
kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-chardev
socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -spice
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
rng-random,id=objrng0,filename=/dev/urandom -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
2018-01-11T20:39:02.380773Z qemu-kvm: -chardev pty,id=charconsole0: char
device redirected to /dev/pts/2 (label charconsole0)
2018-01-11 20:53:11.407+0000: shutting down, reason=shutdown
2018-01-11 20:55:57.210+0000: starting up libvirt version: 3.2.0, package:
14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version:
2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
cultivar3.grove.silverorange.com
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
guest=Cultivar,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-110-Cultivar/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Conroe -m 8192 -realtime mlock=off -smp
2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
'type=1,manufacturer=oVirt,product=oVirt
Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-110-Cultivar/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2018-01-11T20:55:57,driftfix=slew -global
kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-chardev
socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -spice
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
rng-random,id=objrng0,filename=/dev/urandom -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
2018-01-11T20:55:57.468037Z qemu-kvm: -chardev pty,id=charconsole0: char
device redirected to /dev/pts/2 (label charconsole0)
==> /var/log/ovirt-hosted-engine-ha/broker.log <==
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
line 151, in get_raw_stats
f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
OSError: [Errno 2] No such file or directory:
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
StatusStorageThread::ERROR::2018-01-11
16:55:15,761::status_broker::92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run)
Failed to read state.
Traceback (most recent call last):
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py",
line 88, in run
self._storage_broker.get_raw_stats()
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
line 162, in get_raw_stats
.format(str(e)))
RequestError: failed to read metadata: [Errno 2] No such file or directory:
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
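The file the broker cannot read is the hosted-engine metadata volume. A
possible check (a sketch, using the image UUID from the error above) is
whether the volume still exists on the mounted domain and only the runtime
link is missing:
# find /rhev/data-center/mnt -path '*14a20941-1b84-4b82-be8f-ace38d7c037a*' 2>/dev/null
If the image directory turns up under the mount, the data is there, and
restarting the HA services after reconnecting storage normally recreates
the links under /var/run/vdsm/storage:
# systemctl restart ovirt-ha-broker ovirt-ha-agent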
==> /var/log/ovirt-hosted-engine-ha/agent.log <==
result = refresh_method()
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
line 519, in refresh_vm_conf
content = self._get_file_content_from_shared_storage(VM)
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
line 484, in _get_file_content_from_shared_storage
config_volume_path = self._get_config_volume_path()
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
line 188, in _get_config_volume_path
conf_vol_uuid
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/heconflib.py",
line 358, in get_volume_path
root=envconst.SD_RUN_DIR,
RuntimeError: Path to volume 4838749f-216d-406b-b245-98d0343fcf7f not found
in /run/vdsm/storag
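Same class of problem for the hosted-engine configuration volume; a quick
check, with the volume UUID taken from the traceback:
# find /rhev/data-center/mnt -name '4838749f-216d-406b-b245-98d0343fcf7f' 2>/dev/null
The agent resolves this volume through /run/vdsm/storage, so a missing
link there produces exactly this RuntimeError even when the volume itself
is intact on the domain.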
==> /var/log/vdsm/vdsm.log <==
periodic/42::ERROR::2018-01-11
16:56:11,446::vmstats::260::virt.vmstats::(send_metrics) VM metrics
collection failed
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line 197,
in send_metrics
data[prefix + '.cpu.usage'] = stat['cpuUsage']
KeyError: 'cpuUsage'
Some major problems after 4.2 upgrade, could really use some assistance
by Jayme
I performed the oVirt 4.2 upgrade on a 3-host cluster with NFS shared storage.
The shared storage is mounted from one of the hosts.
I upgraded the hosted engine first: I downloaded the 4.2 rpm, did a yum
update, then ran engine-setup, which seemed to complete successfully. At the
end it powered down the hosted engine VM, but it never came back up and I was
unable to start it.
I then upgraded the three hosts with the oVirt 4.2 rpm and a full yum
update, and rebooted each of them.
After some time the hosts did come back and almost all of the VMs are
running again and seem to be working ok with the exception of two:
1. The hosted engine VM still will not start; I've tried everything I can
think of.
2. A VM that I know existed is not running and does not appear to exist; I
have no idea where it is or how to start it.
1. Hosted engine
From one of the hosts I get a weird error trying to start it:
# hosted-engine --vm-start
Command VM.getStats with args {'vmID':
'4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:
(code=1, message=Virtual machine does not exist: {'vmId':
u'4013c829-c9d7-4b72-90d5-6fe58137504c'})
From the two other hosts I do not get the same error as above; sometimes it
appears to start, but --vm-status shows errors such as: Engine status
: {"reason": "failed liveliness check", "health": "bad",
"vm": "up", "detail": "Up"}
Seeing these errors in syslog:
Jan 11 01:06:30 host0 libvirtd: 2018-01-11 05:06:30.473+0000: 1910: error :
qemuOpenFileAs:3183 : Failed to open file
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705':
No such file or directory
Jan 11 01:06:30 host0 libvirtd: 2018-01-11 05:06:30.473+0000: 1910: error :
qemuDomainStorageOpenStat:11492 : cannot stat file
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705':
Bad file descriptor
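Both libvirt errors point at the runtime path under /var/run/vdsm/storage
rather than at the disk image itself. A short check that may help
(assuming the domain UUID from the path above):
# ls -lL /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/
The -L flag follows the symlink, so "No such file or directory" means the
link target is gone, usually because the hosted-engine storage was never
reconnected after the reboot. Reconnecting it should recreate the links:
# hosted-engine --connect-storage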
2. Missing VM. virsh -r list on each host does not show the VM at all. I
know it existed and it is important. The log on one of the hosts even shows
that it was started recently and then stopped about 10 minutes later:
Jan 10 18:47:17 host3 systemd-machined: New machine qemu-9-Berna.
Jan 10 18:47:17 host3 systemd: Started Virtual Machine qemu-9-Berna.
Jan 10 18:47:17 host3 systemd: Starting Virtual Machine qemu-9-Berna.
Jan 10 18:54:45 host3 systemd-machined: Machine qemu-9-Berna terminated.
How can I find out the status of the "Berna" VM and get it running again?
Thanks so much!
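A couple of low-risk places to look for traces of that VM while the engine
is down (the names below come straight from the syslog lines above):
# ls -l /var/log/libvirt/qemu/ | grep -i berna
The per-VM qemu log stays on the host that last ran it and records why the
process exited.
# grep -i berna /var/log/vdsm/vdsm.log
This shows whether vdsm tried to restart it and under which vmId. The VM
definition itself lives in the engine database, so once the hosted engine
is back it should reappear in the Admin Portal and can be started from
there.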
hosted_engine
by volga629@networklab.ca
Hello Everyone,
Is it possible in 4.2 to migrate the hosted_engine to another storage domain
of the same type? Right now I am trying to migrate from an old to a new
iSCSI storage.
volga629
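As far as I know there is no in-place move of the hosted-engine storage
domain in 4.2; the commonly documented path is backup and redeploy onto the
new storage. A minimal sketch, assuming the installed
ovirt-hosted-engine-setup supports --restore-from-file:
# engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log
(run inside the engine VM; takes a full backup of the engine)
# hosted-engine --deploy --restore-from-file=engine-backup.tar.gz
(run on a clean host; deploys a new hosted engine on the new iSCSI storage
and restores the backup into it)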
non operational node
by Tomeu Sastre Cabanellas
hi there,
I'm testing oVirt 4.2 because I want to migrate all our VMs from XenServer.
I have set up an engine and a node, but when connecting to the node I get a
"non-operational" status and I cannot activate it.
I'm an experienced engineer, but I'm new to oVirt. Any clue where I should
start checking?
thanks a lot.
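A few generic first checks for a non-operational host, as a sketch (the
exact reason is usually spelled out in the engine's Events tab):
# systemctl status vdsmd
# journalctl -u vdsmd -e
vdsm must be running and reachable from the engine; its journal usually
names the cause, e.g. a required logical network missing on the host or a
CPU type the cluster does not accept.
# ping <engine-fqdn>
(<engine-fqdn> is a placeholder for your engine's host name; this rules out
basic DNS/routing problems between node and engine)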
wheel next to the up triangle in the UI
by Nathanaël Blanchet
Hi all,
I'm using the vagrant ovirt4 plugin to provision some VMs, and I noticed
that a wheel icon is still present next to the "up" status of those VMs.
What does that mean?
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr