Many thanks Marcin. Attending to some patches in the meantime.
On Wed, Jul 31, 2019 at 3:12 PM Marcin Sobczyk <msobczyk(a)redhat.com> wrote:
On 7/31/19 12:33 PM, Amit Bawer wrote:
On Wed, Jul 31, 2019 at 1:11 PM Michal Skrivanek <michal.skrivanek(a)redhat.com> wrote:
>
>
> > On 30 Jul 2019, at 16:08, Milan Zamazal <mzamazal(a)redhat.com> wrote:
> >
> > Amit Bawer <abawer(a)redhat.com> writes:
> >
> >> Cherry-picked (locally) the pending 'py3-hooks' gerrit patches on top of
> >> the 'py3_poc' branch.
> >>
> >> Able to start a VM,
> >
> > Cool!
> >
> >> but cannot connect to the graphics console - when trying, it shows a
> >> blank screen with "connecting to graphics server" and nothing happens.
> >
> > Did you try it with the VNC console? There is a better chance with VNC
> > than with SPICE.
>
> or headless. That worked 2 weeks ago already.
>
Thanks. Managed to get to the VM console in VNC mode.
Yet when trying to choose a CD image there, I am seeing the following py3
error in vdsm.log:
2019-07-31 05:58:00,935-0400 INFO (Thread-2) [vds.http.Server] Request handler for ::ffff:10.35.0.140:33459 started (http:306)
2019-07-31 05:58:00,936-0400 ERROR (Thread-2) [rpc.http.ImageRequestHandler] error during execution (http:253)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/rpc/http.py", line 154, in do_PUT
    httplib.LENGTH_REQUIRED)
  File "/usr/lib/python3.6/site-packages/vdsm/rpc/http.py", line 216, in _getIntHeader
    value = self._getRequiredHeader(headerName, missingError)
  File "/usr/lib/python3.6/site-packages/vdsm/rpc/http.py", line 221, in _getRequiredHeader
    value = self.headers.getheader(
AttributeError: 'HTTPMessage' object has no attribute 'getheader'
A quick look at the py27 docs vs the py36/py37 docs shows that the
implementation of HTTPMessage is indeed very different between the two.
I will handle this and get back to you as soon as there's a fix available.
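For the record: in py3, http.client.HTTPMessage derives from
email.message.Message, which exposes get() instead of py2's mimetools
getheader(). A minimal sketch of a py2/py3-compatible lookup (the helper
name is mine for illustration, not the actual vdsm/rpc/http.py code):

    def _get_header(headers, name):
        # py2: BaseHTTPRequestHandler.headers is a mimetools.Message
        # exposing getheader(); py3's HTTPMessage does not have it.
        if hasattr(headers, 'getheader'):
            return headers.getheader(name)  # py2
        # Both classes do case-insensitive lookup; py3 returns None
        # when the header is missing.
        return headers.get(name)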
> Once the current patches are merged, it’s going to be far easier for
> everyone to resolve the other remaining issues.
>
I believe so as well, but it's up to the gerrit committee :)
> >
> >> No error message in vdsm.log.
> >>
> >> [image: image.png]
> >>
> >> On Mon, Jul 29, 2019 at 5:13 PM Amit Bawer <abawer(a)redhat.com> wrote:
> >>
> >>> I see. Since there are several patches on this topic, please ping me
> >>> when it's merged and I'll rebase the PoC branch.
> >>> Thanks!
> >>>
> >>> On Mon, Jul 29, 2019 at 4:51 PM Marcin Sobczyk <msobczyk(a)redhat.com> wrote:
> >>>
> >>>>
> >>>> On 7/29/19 3:40 PM, Amit Bawer wrote:
> >>>>
> >>>> Thanks Marcin.
> >>>> I think we made progress; the former qemu spice TLS port error is now
> >>>> gone with this hook.
> >>>>
> >>>> Now it seems like a py3 issue in the hooks handling:
> >>>>
> >>>> Unfortunately it doesn't mean the hook actually worked - now you get
> >>>> an error probably a bit earlier, when trying to run the hook, and
> >>>> never get to the previous place.
> >>>> As I mentioned in the previous email, you need my hook fixes for this
> >>>> stuff to work.
> >>>> You can do a quick and dirty fix by simply taking 'hooks.py' from
> >>>> https://gerrit.ovirt.org/#/c/102049/ or rebase on top of the whole
> >>>> 'py3-hooks' topic.
> >>>>
> >>>>
> >>>>
> >>>> 2019-07-29 09:29:54,981-0400 INFO (vm/f62ae48a) [vds] prepared volume path: /rhev/data-center/mnt/10.35.0.136:_exports_data/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65 (clientIF:501)
> >>>> 2019-07-29 09:29:54,982-0400 INFO (vm/f62ae48a) [virt.vm] (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') Enabling drive monitoring (drivemonitor:56)
> >>>> 2019-07-29 09:29:55,052-0400 WARN (vm/f62ae48a) [root] Attempting to add an existing net user: ovirtmgmt/f62ae48a-4e6f-4763-9a66-48e04708a2b5 (libvirtnetwork:192)
> >>>> 2019-07-29 09:29:55,054-0400 INFO (vm/f62ae48a) [virt.vm] (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') drive 'hdc' path: 'file=' -> '*file=' (storagexml:333)
> >>>> 2019-07-29 09:29:55,054-0400 INFO (vm/f62ae48a) [virt.vm] (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') drive 'vda' path: 'file=/rhev/data-center/dab8cf3a-a969-11e9-84eb-080027624b78/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65' -> '*file=/rhev/data-center/mnt/10.35.0.136:_exports_data/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65' (storagexml:333)
> >>>> 2019-07-29 09:29:55,056-0400 ERROR (vm/f62ae48a) [virt.vm] (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') The vm start process failed (vm:841)
> >>>> Traceback (most recent call last):
> >>>>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 775, in _startUnderlyingVm
> >>>>     self._run()
> >>>>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2564, in _run
> >>>>     final_callback=self._updateDomainDescriptor)
> >>>>   File "/usr/lib/python3.6/site-packages/vdsm/common/hooks.py", line 159, in before_vm_start
> >>>>     raiseError=False, errors=errors)
> >>>>   File "/usr/lib/python3.6/site-packages/vdsm/common/hooks.py", line 79, in _runHooksDir
> >>>>     os.write(data_fd, data or '')
> >>>> TypeError: a bytes-like object is required, not 'str'
> >>>>
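> >>>> For the record, the failure is mechanical: py3's os.write() takes a
> >>>> bytes-like object, while the hook data (the domain XML) arrives as str.
> >>>> A minimal sketch of the kind of fix _runHooksDir needs - the helper
> >>>> name below is illustrative, not the actual 'py3-hooks' patch:
> >>>>
> >>>>     import os
> >>>>
> >>>>     def _write_hook_data(data_fd, data):
> >>>>         # py3: os.write() requires bytes; hook data may be str
> >>>>         # (domain XML), bytes, or None.
> >>>>         if isinstance(data, str):
> >>>>             data = data.encode('utf-8')
> >>>>         os.write(data_fd, data or b'')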
> >>>> On Mon, Jul 29, 2019 at 4:16 PM Marcin Sobczyk <msobczyk(a)redhat.com> wrote:
> >>>>
> >>>>>
> >>>>> On 7/29/19 1:14 PM, Amit Bawer wrote:
> >>>>>
> >>>>> Reviving the mail thread, checking on the resolution of the non-TLS
> >>>>> host-engine communication issue:
> >>>>>
> >>>>> The current master base for the PoC RHEL8 host is:
> >>>>>
> >>>>> commit cfe7b11c71c1bf0dada89a8209c8d544b0d0f138 (vdsm-master/master)
> >>>>> Author: Marcin Sobczyk <msobczyk(a)redhat.com>
> >>>>> Date:   Fri Jul 12 12:54:57 2019 +0200
> >>>>>
> >>>>> When trying to "Run" VM on RHEL8 vdsm.log shows
following failure
> trace:
> >>>>>
> >>>>> 2019-07-29 06:58:49,140-0400 INFO (vm/f62ae48a) [virt.vm] (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') <?xml version='1.0' encoding='utf-8'?>
> >>>>> <domain xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" type="kvm">
> >>>>>     <name>vm1</name>
> >>>>>     <uuid>f62ae48a-4e6f-4763-9a66-48e04708a2b5</uuid>
> >>>>>     <memory>1048576</memory>
> >>>>>     <currentMemory>1048576</currentMemory>
> >>>>>     <iothreads>1</iothreads>
> >>>>>     <maxMemory slots="16">4194304</maxMemory>
> >>>>>     <vcpu current="1">16</vcpu>
> >>>>>     <sysinfo type="smbios">
> >>>>>         <system>
> >>>>>             <entry name="manufacturer">oVirt</entry>
> >>>>>             <entry name="product">RHEL</entry>
> >>>>>             <entry name="version">8.0-0.44.el8</entry>
> >>>>>             <entry name="serial">e5825ba8-473e-4821-829a-bc6dbbe79617</entry>
> >>>>>             <entry name="uuid">f62ae48a-4e6f-4763-9a66-48e04708a2b5</entry>
> >>>>>         </system>
> >>>>>     </sysinfo>
> >>>>>     <clock adjustment="0" offset="variable">
> >>>>>         <timer name="rtc" tickpolicy="catchup" />
> >>>>>         <timer name="pit" tickpolicy="delay" />
> >>>>>         <timer name="hpet" present="no" />
> >>>>>     </clock>
> >>>>>     <features>
> >>>>>         <acpi />
> >>>>>     </features>
> >>>>>     <cpu match="exact">
> >>>>>         <model>SandyBridge</model>
> >>>>>         <topology cores="1" sockets="16" threads="1" />
> >>>>>         <numa>
> >>>>>             <cell cpus="0-15" id="0" memory="1048576" />
> >>>>>         </numa>
> >>>>>     </cpu>
> >>>>>     <cputune />
> >>>>>     <devices>
> >>>>>         <input bus="usb" type="tablet" />
> >>>>>         <channel type="unix">
> >>>>>             <target name="ovirt-guest-agent.0" type="virtio" />
> >>>>>             <source mode="bind" path="/var/lib/libvirt/qemu/channels/f62ae48a-4e6f-4763-9a66-48e04708a2b5.ovirt-guest-agent.0" />
> >>>>>         </channel>
> >>>>>         <channel type="unix">
> >>>>>             <target name="org.qemu.guest_agent.0" type="virtio" />
> >>>>>             <source mode="bind" path="/var/lib/libvirt/qemu/channels/f62ae48a-4e6f-4763-9a66-48e04708a2b5.org.qemu.guest_agent.0" />
> >>>>>         </channel>
> >>>>>         <rng model="virtio">
> >>>>>             <backend model="random">/dev/urandom</backend>
> >>>>>             <alias name="ua-06642ecb-5d30-4119-a939-ee8792dee24b" />
> >>>>>         </rng>
> >>>>>         <video>
> >>>>>             <model heads="1" ram="65536" type="qxl" vgamem="16384" vram="8192" />
> >>>>>             <alias name="ua-5b36baa2-cc27-4840-b309-bd1a4bbbab2c" />
> >>>>>         </video>
> >>>>>         <graphics autoport="yes" keymap="en-us" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc">
> >>>>>             <listen network="vdsm-ovirtmgmt" type="network" />
> >>>>>         </graphics>
> >>>>>         <memballoon model="virtio">
> >>>>>             <stats period="5" />
> >>>>>             <alias name="ua-b1b3f8c9-0d05-4675-9ab5-28a228d68c54" />
> >>>>>         </memballoon>
> >>>>>         <graphics autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
> >>>>>             <channel mode="secure" name="main" />
> >>>>>             <channel mode="secure" name="inputs" />
> >>>>>             <channel mode="secure" name="cursor" />
> >>>>>             <channel mode="secure" name="playback" />
> >>>>>             <channel mode="secure" name="record" />
> >>>>>             <channel mode="secure" name="display" />
> >>>>>             <channel mode="secure" name="smartcard" />
> >>>>>             <channel mode="secure" name="usbredir" />
> >>>>>             <listen network="vdsm-ovirtmgmt" type="network" />
> >>>>>         </graphics>
> >>>>>         <channel type="spicevmc">
> >>>>>             <target name="com.redhat.spice.0" type="virtio" />
> >>>>>         </channel>
> >>>>>         <disk device="cdrom" snapshot="no" type="file">
> >>>>>             <driver error_policy="report" name="qemu" type="raw" />
> >>>>>             <source file="" startupPolicy="optional">
> >>>>>                 <seclabel model="dac" relabel="no" type="none" />
> >>>>>             </source>
> >>>>>             <target bus="ide" dev="hdc" />
> >>>>>             <readonly />
> >>>>>             <alias name="ua-ffd6c93f-62e0-4d6d-a510-03e8a02644ad" />
> >>>>>         </disk>
> >>>>>         <disk device="disk" snapshot="no" type="file">
> >>>>>             <target bus="virtio" dev="vda" />
> >>>>>             <source file="/rhev/data-center/mnt/10.35.0.136:_exports_data/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65">
> >>>>>                 <seclabel model="dac" relabel="no" type="none" />
> >>>>>             </source>
> >>>>>             <driver cache="none" error_policy="stop" io="threads" iothread="1" name="qemu" type="raw" />
> >>>>>             <alias name="ua-111de599-2afa-4dbb-9a99-3378ece66187" />
> >>>>>             <serial>111de599-2afa-4dbb-9a99-3378ece66187</serial>
> >>>>>         </disk>
> >>>>>     </devices>
> >>>>>     <pm>
> >>>>>         <suspend-to-disk enabled="no" />
> >>>>>         <suspend-to-mem enabled="no" />
> >>>>>     </pm>
> >>>>>     <os>
> >>>>>         <type arch="x86_64" machine="pc-i440fx-rhel7.6.0">hvm</type>
> >>>>>         <smbios mode="sysinfo" />
> >>>>>     </os>
> >>>>>     <metadata>
> >>>>>         <ns0:qos />
> >>>>>         <ovirt-vm:vm>
> >>>>>             <ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb>
> >>>>>             <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion>
> >>>>>             <ovirt-vm:custom />
> >>>>>             <ovirt-vm:device devtype="disk" name="vda">
> >>>>>                 <ovirt-vm:poolID>dab8cf3a-a969-11e9-84eb-080027624b78</ovirt-vm:poolID>
> >>>>>                 <ovirt-vm:volumeID>61e1e186-f289-4ffa-b59e-af90bde5db65</ovirt-vm:volumeID>
> >>>>>                 <ovirt-vm:imageID>111de599-2afa-4dbb-9a99-3378ece66187</ovirt-vm:imageID>
> >>>>>                 <ovirt-vm:domainID>8a68eacc-0e0e-436a-bb25-c498c9f5f749</ovirt-vm:domainID>
> >>>>>             </ovirt-vm:device>
> >>>>>             <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
> >>>>>             <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
> >>>>>         </ovirt-vm:vm>
> >>>>>     </metadata>
> >>>>> </domain>
> >>>>> (vm:2570)
> >>>>> 2019-07-29 06:58:49,845-0400 ERROR (vm/f62ae48a) [virt.vm] (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') The vm start process failed (vm:841)
> >>>>> Traceback (most recent call last):
> >>>>>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 775, in _startUnderlyingVm
> >>>>>     self._run()
> >>>>>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2575, in _run
> >>>>>     dom.createWithFlags(flags)
> >>>>>   File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
> >>>>>     ret = f(*args, **kwargs)
> >>>>>   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
> >>>>>     return func(inst, *args, **kwargs)
> >>>>>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1110, in createWithFlags
> >>>>>     if ret == -1: raise libvirtError('virDomainCreateWithFlags() failed', dom=self)
> >>>>> libvirt.libvirtError: unsupported configuration: Auto allocation of spice TLS port requested but spice TLS is disabled in qemu.conf
> >>>>> 2019-07-29 06:58:49,845-0400 INFO (vm/f62ae48a) [virt.vm] (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') Changed state to Down: unsupported configuration: Auto allocation of spice TLS port requested but spice TLS is disabled in qemu.conf (code=1) (vm:1595)
> >>>>> 2019-07-29 06:58:49,875-0400 INFO (vm/f62ae48a) [virt.vm] (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') Stopping connection (guestagent:455)
> >>>>> 2019-07-29 06:58:49,875-0400 DEBUG (vm/f62ae48a) [jsonrpc.Notification] Sending event {"jsonrpc": "2.0", "method": "|virt|VM_status|f62ae48a-4e6f-4763-9a66-48e04708a2b5", "params": {"f62ae48a-4e6f-4763-9a66-48e04708a2b5": {"status": "Down", "vmId": "f62ae48a-4e6f-4763-9a66-48e04708a2b5", "exitCode": 1, "exitMessage": "unsupported configuration: Auto allocation of spice TLS port requested but spice TLS is disabled in qemu.conf", "exitReason": 1}, "notify_time": 4883259290}} (__init__:181)
> >>>>>
> >>>>>
> >>>>> I'm definitely not a libvirt expert and not sure if it will work,
> >>>>> but if you have my 'py3-hooks' patch in your tree then, per [1], you
> >>>>> can try forcing the VM to run with insecure spice:
> >>>>>
> >>>>> cat << 'EOF' > /usr/libexec/vdsm/hooks/before_vm_start/insecure_spice.sh
> >>>>> #!/bin/bash
> >>>>> # The hook gets the domain XML path in the _hook_domxml env variable;
> >>>>> # downgrade the spice channels and drop the TLS port request.
> >>>>> sed -i 's/secure/insecure/' "$_hook_domxml"
> >>>>> sed -i 's/tlsPort="-1"//' "$_hook_domxml"
> >>>>> EOF
> >>>>>
> >>>>> chmod +x /usr/libexec/vdsm/hooks/before_vm_start/insecure_spice.sh
> >>>>>
> >>>>>
> >>>>> [1]
> >>>>> https://github.com/libvirt/libvirt/blob/3f93884a4d047a012b968c62b94ea07da...
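> >>>>> Alternatively - and this is an untested assumption on my side, not
> >>>>> something verified on the PoC host - spice TLS could be enabled on
> >>>>> the host instead of stripped from the domain XML, roughly by setting
> >>>>> in /etc/libvirt/qemu.conf:
> >>>>>
> >>>>>     # hypothetical manual equivalent; vdsm normally manages these
> >>>>>     spice_tls = 1
> >>>>>     spice_tls_x509_cert_dir = "/etc/pki/vdsm/libvirt-spice"
> >>>>>
> >>>>> and restarting libvirtd. Configuring libvirt/qemu this way is
> >>>>> normally vdsm-tool's job, which is why the hook is only a quick
> >>>>> workaround.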
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>> On Wed, Jul 24, 2019 at 12:09 PM Amit Bawer <abawer(a)redhat.com> wrote:
> >>>>>
> >>>>>>
> >>>>>> On Wed, Jul 24, 2019 at 12:02 PM Michal Skrivanek <michal.skrivanek(a)redhat.com> wrote:
> >>>>>> michal.skrivanek(a)redhat.com> wrote:
> >>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> On 24 Jul 2019, at 10:36, Amit Bawer <abawer(a)redhat.com> wrote:
> >>>>>>>
> >>>>>>> Per +Milan Zamazal <mzamazal(a)redhat.com> comment, adding +devel <devel(a)ovirt.org>
> >>>>>>>
> >>>>>>> On Wed, Jul 24, 2019 at 11:32 AM Michal Skrivanek <michal.skrivanek(a)redhat.com> wrote:
> >>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On 24 Jul 2019, at 10:24, Amit Bawer <abawer(a)redhat.com> wrote:
> >>>>>>>>
> >>>>>>>> Thanks, applied the fixed patch.
> >>>>>>>>
> >>>>>>>> Now I am punished for choosing not to work with SSL/TLS in Vdsm
> >>>>>>>> when trying to "Run" a VM.
> >>>>>>>> - Any known workaround for this?
> >>>>>>>>
> >>>>>>>>
> >>>>>>> yes, vdsm-tool reconfigure
> >>>>>>>
> >>>>>>
> >>>>>> "vdsm-tool reconfigure" is not a valid option.
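> >>>>>> (Note: the vdsm-tool verb is 'configure', not 'reconfigure'; e.g.
> >>>>>> 'vdsm-tool configure --force' regenerates the libvirt/qemu
> >>>>>> configuration.)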
> >>>>>>
> >>>>>>
> >>>>>>>
> >>>>>>>> That’s part of the ongoing fixes, please don’t discuss this
> >>>>>>>> privately, this belongs to the devel@ list.
> >>>>>>>> Many people are struggling with the same issues while they’re
> >>>>>>>> working on their areas, and we need complete visibility.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> 2019-07-24 04:04:54,610-0400 INFO (vm/01de706d) [virt.vm] (vmId='01de706d-ee4c-484f-a17f-6b3355adf047') <?xml version='1.0' encoding='utf-8'?>
> >>>>>>>> <domain xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" type="kvm">
> >>>>>>>>     <name>vm1</name>
> >>>>>>>>     <uuid>01de706d-ee4c-484f-a17f-6b3355adf047</uuid>
> >>>>>>>>     <memory>1048576</memory>
> >>>>>>>>     <currentMemory>1048576</currentMemory>
> >>>>>>>>     <iothreads>1</iothreads>
> >>>>>>>>     <maxMemory slots="16">4194304</maxMemory>
> >>>>>>>>     <vcpu current="1">16</vcpu>
> >>>>>>>>     <sysinfo type="smbios">
> >>>>>>>>         <system>
> >>>>>>>>             <entry name="manufacturer">oVirt</entry>
> >>>>>>>>             <entry name="product">RHEL</entry>
> >>>>>>>>             <entry name="version">8.0-0.44.el8</entry>
> >>>>>>>>             <entry name="serial">e5825ba8-473e-4821-829a-bc6dbbe79617</entry>
> >>>>>>>>             <entry name="uuid">01de706d-ee4c-484f-a17f-6b3355adf047</entry>
> >>>>>>>>         </system>
> >>>>>>>>     </sysinfo>
> >>>>>>>>     <clock adjustment="0" offset="variable">
> >>>>>>>>         <timer name="rtc" tickpolicy="catchup" />
> >>>>>>>>         <timer name="pit" tickpolicy="delay" />
> >>>>>>>>         <timer name="hpet" present="no" />
> >>>>>>>>     </clock>
> >>>>>>>>     <features>
> >>>>>>>>         <acpi />
> >>>>>>>>     </features>
> >>>>>>>>     <cpu match="exact">
> >>>>>>>>         <model>SandyBridge</model>
> >>>>>>>>         <topology cores="1" sockets="16" threads="1" />
> >>>>>>>>         <numa>
> >>>>>>>>             <cell cpus="0-15" id="0" memory="1048576" />
> >>>>>>>>         </numa>
> >>>>>>>>     </cpu>
> >>>>>>>>     <cputune />
> >>>>>>>>     <devices>
> >>>>>>>>         <input bus="usb" type="tablet" />
> >>>>>>>>         <channel type="unix">
> >>>>>>>>             <target name="ovirt-guest-agent.0" type="virtio" />
> >>>>>>>>             <source mode="bind" path="/var/lib/libvirt/qemu/channels/01de706d-ee4c-484f-a17f-6b3355adf047.ovirt-guest-agent.0" />
> >>>>>>>>         </channel>
> >>>>>>>>         <channel type="unix">
> >>>>>>>>             <target name="org.qemu.guest_agent.0" type="virtio" />
> >>>>>>>>             <source mode="bind" path="/var/lib/libvirt/qemu/channels/01de706d-ee4c-484f-a17f-6b3355adf047.org.qemu.guest_agent.0" />
> >>>>>>>>         </channel>
> >>>>>>>>         <rng model="virtio">
> >>>>>>>>             <backend model="random">/dev/urandom</backend>
> >>>>>>>>             <alias name="ua-117e932d-e465-40d6-9005-bf7d7eb22023" />
> >>>>>>>>         </rng>
> >>>>>>>>         <graphics autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
> >>>>>>>>             <channel mode="secure" name="main" />
> >>>>>>>>             <channel mode="secure" name="inputs" />
> >>>>>>>>             <channel mode="secure" name="cursor" />
> >>>>>>>>             <channel mode="secure" name="playback" />
> >>>>>>>>             <channel mode="secure" name="record" />
> >>>>>>>>             <channel mode="secure" name="display" />
> >>>>>>>>             <channel mode="secure" name="smartcard" />
> >>>>>>>>             <channel mode="secure" name="usbredir" />
> >>>>>>>>             <listen network="vdsm-ovirtmgmt" type="network" />
> >>>>>>>>         </graphics>
> >>>>>>>>         <controller index="0" model="piix3-uhci" type="usb" />
> >>>>>>>>         <controller index="0" model="virtio-scsi" type="scsi">
> >>>>>>>>             <driver iothread="1" />
> >>>>>>>>             <alias name="ua-73437643-3c01-4763-b8ef-c4275527606b" />
> >>>>>>>>         </controller>
> >>>>>>>>         <graphics autoport="yes" keymap="en-us" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc">
> >>>>>>>>             <listen network="vdsm-ovirtmgmt" type="network" />
> >>>>>>>>         </graphics>
> >>>>>>>>         <memballoon model="virtio">
> >>>>>>>>             <stats period="5" />
> >>>>>>>>             <alias name="ua-93d69b50-e010-4879-80db-713ee4a11727" />
> >>>>>>>>         </memballoon>
> >>>>>>>>         <controller index="0" ports="16" type="virtio-serial">
> >>>>>>>>             <alias name="ua-cd3edc3f-25c5-4fef-b555-cedf45c86adc" />
> >>>>>>>>         </controller>
> >>>>>>>>         <video>
> >>>>>>>>             <model heads="1" ram="65536" type="qxl" vgamem="16384" vram="8192" />
> >>>>>>>>             <alias name="ua-d660095f-dfe6-4f2c-aa76-c3525fb6244e" />
> >>>>>>>>         </video>
> >>>>>>>>         <channel type="spicevmc">
> >>>>>>>>             <target name="com.redhat.spice.0" type="virtio" />
> >>>>>>>>         </channel>
> >>>>>>>>         <disk device="cdrom" snapshot="no" type="file">
> >>>>>>>>             <driver error_policy="report" name="qemu" type="raw" />
> >>>>>>>>             <source file="" startupPolicy="optional">
> >>>>>>>>                 <seclabel model="dac" relabel="no" type="none" />
> >>>>>>>>             </source>
> >>>>>>>>             <target bus="ide" dev="hdc" />
> >>>>>>>>             <readonly />
> >>>>>>>>             <alias name="ua-917aea8a-bc4b-4d15-8580-a0c442e925ee" />
> >>>>>>>>         </disk>
> >>>>>>>>         <disk device="disk" snapshot="no" type="file">
> >>>>>>>>             <target bus="scsi" dev="sda" />
> >>>>>>>>             <source file="/rhev/data-center/mnt/10.35.1.6:_exports_data/f49de997-9fb7-4ef8-82bd-f5b97ba31fb0/images/7dee6442-1838-48dd-892a-86fb96a85737/da070fc0-4af5-406e-bf2b-2cf4d89eb276">
> >>>>>>>>                 <seclabel model="dac" relabel="no" type="none" />
> >>>>>>>>             </source>
> >>>>>>>>             <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw" />
> >>>>>>>>             <alias name="ua-7dee6442-1838-48dd-892a-86fb96a85737" />
> >>>>>>>>             <address bus="0" controller="0" target="0" type="drive" unit="0" />
> >>>>>>>>             <boot order="1" />
> >>>>>>>>             <serial>7dee6442-1838-48dd-892a-86fb96a85737</serial>
> >>>>>>>>         </disk>
> >>>>>>>>     </devices>
> >>>>>>>>     <pm>
> >>>>>>>>         <suspend-to-disk enabled="no" />
> >>>>>>>>         <suspend-to-mem enabled="no" />
> >>>>>>>>     </pm>
> >>>>>>>>     <os>
> >>>>>>>>         <type arch="x86_64" machine="pc-i440fx-rhel7.6.0">hvm</type>
> >>>>>>>>         <smbios mode="sysinfo" />
> >>>>>>>>     </os>
> >>>>>>>>     <metadata>
> >>>>>>>>         <ns0:qos />
> >>>>>>>>         <ovirt-vm:vm>
> >>>>>>>>             <ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb>
> >>>>>>>>             <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion>
> >>>>>>>>             <ovirt-vm:custom />
> >>>>>>>>             <ovirt-vm:device devtype="disk" name="sda">
> >>>>>>>>                 <ovirt-vm:poolID>dab8cf3a-a969-11e9-84eb-080027624b78</ovirt-vm:poolID>
> >>>>>>>>                 <ovirt-vm:volumeID>da070fc0-4af5-406e-bf2b-2cf4d89eb276</ovirt-vm:volumeID>
> >>>>>>>>                 <ovirt-vm:imageID>7dee6442-1838-48dd-892a-86fb96a85737</ovirt-vm:imageID>
> >>>>>>>>                 <ovirt-vm:domainID>f49de997-9fb7-4ef8-82bd-f5b97ba31fb0</ovirt-vm:domainID>
> >>>>>>>>             </ovirt-vm:device>
> >>>>>>>>             <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
> >>>>>>>>             <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
> >>>>>>>>         </ovirt-vm:vm>
> >>>>>>>>     </metadata>
> >>>>>>>> </domain>
> >>>>>>>> (vm:2570)
> >>>>>>>> 2019-07-24 04:04:55,348-0400 ERROR (vm/01de706d) [virt.vm] (vmId='01de706d-ee4c-484f-a17f-6b3355adf047') The vm start process failed (vm:841)
> >>>>>>>> Traceback (most recent call last):
> >>>>>>>>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 775, in _startUnderlyingVm
> >>>>>>>>     self._run()
> >>>>>>>>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2575, in _run
> >>>>>>>>     dom.createWithFlags(flags)
> >>>>>>>>   File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
> >>>>>>>>     ret = f(*args, **kwargs)
> >>>>>>>>   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
> >>>>>>>>     return func(inst, *args, **kwargs)
> >>>>>>>>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1110, in createWithFlags
> >>>>>>>>     if ret == -1: raise libvirtError('virDomainCreateWithFlags() failed', dom=self)
> >>>>>>>> libvirt.libvirtError: unsupported configuration: Auto allocation of spice TLS port requested but spice TLS is disabled in qemu.conf
> >>>>>>>> 2019-07-24 04:04:55,348-0400 INFO (vm/01de706d) [virt.vm] (vmId='01de706d-ee4c-484f-a17f-6b3355adf047') Changed state to Down: unsupported configuration: Auto allocation of spice TLS port requested but spice TLS is disabled in qemu.conf (code=1) (vm:1595)
> >>>>>>>> 2019-07-24 04:04:55,525-0400 INFO (vm/01de706d) [virt.vm] (vmId='01de706d-ee4c-484f-a17f-6b3355adf047') Stopping connection (guestagent:455)
> >>>>>>>> 2019-07-24 04:04:55,525-0400 DEBUG (vm/01de706d) [jsonrpc.Notification] Sending event {"jsonrpc": "2.0", "method": "|virt|VM_status|01de706d-ee4c-484f-a17f-6b3355adf047", "params": {"01de706d-ee4c-484f-a17f-6b3355adf047": {"status": "Down", "vmId": "01de706d-ee4c-484f-a17f-6b3355adf047", "exitCode": 1, "exitMessage": "unsupported configuration: Auto allocation of spice TLS port requested but spice TLS is disabled in qemu.conf", "exitReason": 1}, "notify_time": 4440824940}} (__init__:181)
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>
>
>