KVM with RDP using RDPMux for oVirt?
by Neal Gompa
Hello,
At the urging of Sandro Bonazzola, I'm bringing up something that was
being discussed on Twitter earlier today: supporting RDP with KVM
directly.
As it turns out, about four years ago Datto developed support for QEMU
to display over RDP, in a similar fashion to what was done for SPICE
years ago. This was done by patching QEMU[1] to send display buffers to
a service called RDPMux[2], which provides the RDP connections.
Additionally, libvirt was adjusted to support this[3].
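Conceptually, the split looks something like the sketch below (purely
illustrative Python; none of the names, paths or message formats come from
the actual QEMU patch or RDPMux code, which live in [1] and [2]):

import socket
import struct

MUX_SOCKET = "/run/rdpmux/display.sock"  # hypothetical socket path

def connect_to_mux():
    # The patched hypervisor process connects to the external mux service.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(MUX_SOCKET)
    return sock

def send_dirty_region(sock, x, y, width, height, pixels):
    # Ship one dirty rectangle of the guest display to the mux, which owns
    # the RDP listener and fans the updates out to connected RDP clients.
    header = struct.pack("!5I", x, y, width, height, len(pixels))
    sock.sendall(header + pixels)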
Now, a couple years ago, I attempted to start rebasing this to QEMU
2.12.0 for my own use[4], but while I was able to make it compile[5],
it always hung when I tried to use the functionality.
This came up on Twitter earlier today when talking about how KVM lacks
RDP but has SPICE (which didn't catch on, and is apparently deprecated
now[6]?!?).
Is anyone interested in potentially helping me get this working with
the latest QEMU and libvirt? I'd be happy to package up RDPMux for
Fedora and get that in the distribution for people to use in that
case...
Best regards,
Neal
[1]: https://pagure.io/virt-with-rdp/blob/master/f/qemu-2.5
[2]: https://github.com/datto/RDPMux
[3]: https://pagure.io/virt-with-rdp/blob/master/f/libvirt-1.3.1
[4]: https://pagure.io/virt-with-rdp/blob/master/f/qemu-2.12.0
[5]: https://copr.fedorainfracloud.org/coprs/ngompa/virt-with-rdp/build/774882/
[6]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8-...
--
真実はいつも一つ!/ Always, there's only one truth!
Improving VM behavior in case of IO errors
by Shubha Kulkarni
Hello,
In oVirt, we have a property, propagate_error, at the disk level that
decides how an IO error is propagated to the VM. This value is
maintained in the database table with the default value set to Off. The
default setting (Off) results in a policy that pauses the VM rather than
propagating the errors to it. There is currently no provision in the UI
to configure this property for disks (images or LUNs), so there is no
easy way to set this value. Further, even if the value is manually set
to "On" in the db, it gets overwritten by the UI every time some other
property is updated, as described here -
https://bugzilla.redhat.com/show_bug.cgi?id=1669367
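For anyone who wants to flip this today without touching the db directly,
a minimal sketch of doing it through the REST API with the Python SDK
would look like the following (assuming ovirtsdk4 exposes the flag as
Disk.propagate_errors; the engine URL, credentials and disk id are
placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)

# Look up the disk by id and ask the engine to propagate IO errors to the
# guest instead of pausing it.
disk_service = connection.system_service().disks_service().disk_service('DISK_ID')
disk_service.update(types.Disk(propagate_errors=True))

connection.close()

Per the bug above, the UI may still overwrite the value the next time some
other property of the disk is edited.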
Setting the value to "Off" is not ideal for multipath devices, where a
single path failure can cause the VM to pause. It puts serious
restrictions on DR scenarios, and unlike VMware and Hyper-V, oVirt is
not able to support this DR functionality -
https://bugzilla.redhat.com/show_bug.cgi?id=1314160
While we wait for the RFE, the proposal here is to revise the
out-of-the-box behavior for LUNs: for LUNs, we should propagate errors
to the VM rather than directly pausing it. This will allow us to handle
short-term multipath outages and improve availability. This is a simple
change in behavior, but it will have a good positive impact. I would
like to seek feedback to make sure that everyone is OK with the proposal.
Thanks,
Shubha
ovirt-engine 4.4.2 has been branched
by Tal Nisan
I've now branched oVirt 4.4.2; this will allow us to start working on 4.4.3
bugs/features and also maintain the stability of 4.4.2 before the final
build.
How does this affect you:
If you are working on 4.4.3 content, you can now submit your work to master
and it will be included in 4.4.3.
If you are working on 4.4.2 content, you will have to push it to master
first and then backport it to the ovirt-engine-4.4.2.z branch.
192.168.200.90 unreachable (was: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build # 1696 - Still Failing!)
by Yedidyah Bar David
On Tue, Aug 4, 2020 at 6:40 AM <jenkins@jenkins.phx.ovirt.org> wrote:
>
> Project: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/
> Build: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1696/
This has been failing for a few days now, all with the same error:
> FAILED: 004_basic_sanity.hotplug_disk
>
> Error Message:
> False != True after 180 seconds
> -------------------- >> begin captured logging << --------------------
> lago.ssh: DEBUG: start task:cef1ecf3-76fd-4aad-a1fb-093eeda8193a:Get ssh client for lago-he-basic-suite-master-host-1:
> lago.ssh: DEBUG: end task:cef1ecf3-76fd-4aad-a1fb-093eeda8193a:Get ssh client for lago-he-basic-suite-master-host-1:
> lago.ssh: DEBUG: Running 833afb8a on lago-he-basic-suite-master-host-1: ping -4 -c 1 192.168.200.90
> lago.ssh: DEBUG: Command 833afb8a on lago-he-basic-suite-master-host-1 returned with 1
> lago.ssh: DEBUG: Command 833afb8a on lago-he-basic-suite-master-host-1 output:
> PING 192.168.200.90 (192.168.200.90) 56(84) bytes of data.
> From 192.168.200.3 icmp_seq=1 Destination Host Unreachable
This is after Michal pushed a patch to restore the removed function
get_vm0_ip_address (and related code), whose removal had made build 1691 fail.
I guess this patch was not enough - either the VM now gets a different
IP address (not 192.168.200.90), or its network is down, or something else.
Any idea?
Thanks,
--
Didi
Re: Error creating a storage domain "LUNs are already in use"
by Nir Soffer
Hi Kobi, moving your question to the mailing list, the right place for
technical questions.
> I have a problem with some FC LUN: for some reason, when I tried to add it as an SD, I got:
> Are you sure?
> This operation might be unrecoverable and destructive!
> The following LUNs are already in use
> - 3600a09803830447a4f244c465759506a
> Approve operation
>
> After I checked "Approve operation", I got:
>
> Operation Canceled
> Error while executing action New SAN Storage Domain: Cannot create Volume Group
>
> Do you have any suggestions on how to clean it to be able to add it to the engine?
This LUN probably has a partition table because it was used as a direct
LUN in a previous test. You can check this with lsblk.
You need to wipe it manually using "wipefs -a".
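For example (a minimal sketch, not oVirt code, assuming the LUN shows up
under /dev/mapper by its WWID; wipefs -a is destructive, so only run it on
a LUN you are sure is unused):

import subprocess

wwid = "3600a09803830447a4f244c465759506a"
device = f"/dev/mapper/{wwid}"

# Show whether the LUN still carries a partition table or old signatures.
subprocess.run(["lsblk", "--fs", device], check=True)

# Wipe all filesystem, RAID and partition-table signatures from the LUN.
subprocess.run(["wipefs", "-a", device], check=True)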
Nir
Error during deployment of self hosted engine oVirt 4.4
by i iordanov
Hi guys,
I am trying to install oVirt 4.4 to test the aSPICE and Opaque Android
clients, and tried to follow this slightly outdated doc:
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_eng...
to deploy an all-in-one self-hosted engine using the command line.
I started with a clean CentOS 8 installation, set up an NFS server and
tested that mounts work from the local host and other hosts, opened all
ports with firewalld to my LAN and localhost (but left firewalld enabled).
During the run of
hosted-engine --deploy
I got the following error:
2020-08-03 23:31:51,426-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:103 TASK [ovirt.hosted_engine_setup : debug]
2020-08-03 23:31:51,827-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:103 server_cpu_dict:
{'Intel Nehalem Family': 'Nehalem',
'Secure Intel Nehalem Family': 'Nehalem,+spec-ctrl,+ssbd,+md-clear',
'Intel Westmere Family': 'Westmere',
'Secure Intel Westmere Family': 'Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear',
'Intel SandyBridge Family': 'SandyBridge',
'Secure Intel SandyBridge Family': 'SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear',
'Intel IvyBridge Family': 'IvyBridge',
'Secure Intel IvyBridge Family': 'IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear',
'Intel Haswell Family': 'Haswell-noTSX',
'Secure Intel Haswell Family': 'Haswell-noTSX,+spec-ctrl,+ssbd,+md-clear',
'Intel Broadwell Family': 'Broadwell-noTSX',
'Secure Intel Broadwell Family': 'Broadwell-noTSX,+spec-ctrl,+ssbd,+md-clear',
'Intel Skylake Client Family': 'Skylake-Client,-hle,-rtm',
'Secure Intel Skylake Client Family': 'Skylake-Client,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm',
'Intel Skylake Server Family': 'Skylake-Server,-hle,-rtm',
'Secure Intel Skylake Server Family': 'Skylake-Server,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm',
'Intel Cascadelake Server Family': 'Cascadelake-Server,-hle,-rtm,+arch-capabilities',
'Secure Intel Cascadelake Server Family': 'Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities',
'AMD Opteron G4': 'Opteron_G4', 'AMD Opteron G5': 'Opteron_G5',
'AMD EPYC': 'EPYC', 'Secure AMD EPYC': 'EPYC,+ibpb,+virt-ssbd',
'IBM POWER8': 'POWER8', 'IBM POWER9': 'POWER9',
'IBM z114, z196': 'z196-base', 'IBM zBC12, zEC12': 'zEC12-base',
'IBM z13s, z13': 'z13-base', 'IBM z14': 'z14-base'}
2020-08-03 23:31:52,128-0400 INFO
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Convert
CPU model name]
2020-08-03 23:31:52,530-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:103 {'msg': "The task includes an option with an
undefined variable. The error was: 'dict object' has no attribute ''\n\nThe
error appears to be in
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml':
line 105, column 15, but may\nbe elsewhere in the file depending on the exact
syntax problem.\n\nThe offending line appears to be:\n\n - debug:
var=server_cpu_dict\n ^ here\n\nThere appears to be both 'k=v' shorthand
syntax and YAML in this task. Only one syntax may be used.\n",
'_ansible_no_log': False}
2020-08-03 23:31:52,630-0400 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:107 fatal: [localhost]: FAILED! => {"msg": "The
task includes an option with an undefined variable. The error was: 'dict
object' has no attribute ''\n\nThe error appears to be in
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml':
line 105, column 15, but may\nbe elsewhere in the file depending on the exact
syntax problem.\n\nThe offending line appears to be:\n\n - debug:
var=server_cpu_dict\n ^ here\n\nThere appears to be both 'k=v' shorthand
syntax and YAML in this task. Only one syntax may be used.\n"}
2020-08-03 23:31:52,931-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:103 PLAY RECAP [localhost] : ok: 35 changed:
5 unreachable: 0 skipped: 5 failed: 1
2020-08-03 23:31:53,032-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:215
ansible-playbook rc: 2
2020-08-03 23:31:53,032-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:222
ansible-playbook stdout:
2020-08-03 23:31:53,032-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:225
ansible-playbook stderr:
2020-08-03 23:31:53,033-0400 DEBUG otopi.context context._executeMethod:145
method exception
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in
_executeMethod
method['method']()
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/target_vm.py",
line 238, in _closeup
r = ah.run()
File
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/ansible_utils.py",
line 229, in run
raise RuntimeError(_('Failed executing ansible-playbook'))
RuntimeError: Failed executing ansible-playbook
2020-08-03 23:31:53,034-0400 ERROR otopi.context context._executeMethod:154
Failed to execute stage 'Closing up': Failed executing ansible-playbook
On line 105 of the Ansible config file indicated, there is the following
task:
- debug: var=server_cpu_dict
However, here is a bit more of the file:
- name: Parse server CPU list
  set_fact:
    server_cpu_dict: "{{ server_cpu_dict | combine({item.split(':')[1]: item.split(':')[3]}) }}"
  with_items: >-
    {{ server_cpu_list.json['values']['system_option_value'][0]['value'].split('; ')|list|difference(['']) }}
- debug: var=server_cpu_dict
- name: Convert CPU model name
  set_fact:
    cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}"
- debug: var=cluster_cpu_model
- name: Parse emulated_machine
  set_fact:
    emulated_machine: emulated_machine_list.json['values']['system_option_value'][0]['value'].replace('[','').replace(']','').split(', ')|first
Exact version of the package that contains the file in question:
ovirt-ansible-hosted-engine-setup-1.1.7-1.el8.noarch
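If it helps narrow things down: judging by the log, the failure starts right
after the "Convert CPU model name" task, and the empty attribute name in
"'dict object' has no attribute ''" suggests that the expression
server_cpu_dict[cluster_cpu.type] was evaluated with an empty key. A toy
Python illustration (not the setup code itself) of that failure mode:

server_cpu_dict = {
    # trimmed copy of the dictionary printed in the log above
    'Intel Nehalem Family': 'Nehalem',
    'AMD EPYC': 'EPYC',
}

cluster_cpu_type = ''  # hypothetical value of cluster_cpu.type when nothing matched

try:
    cluster_cpu_model = server_cpu_dict[cluster_cpu_type]
except KeyError as missing_key:
    # Jinja2 reports the same failed lookup as: 'dict object' has no attribute ''
    print(f"no entry in server_cpu_dict for key {missing_key}")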
Many thanks for your attention! Please cc me when you reply, as I may not
be monitoring the mailing list actively. Happy to provide any other
information.
In case it's relevant, the CPU of the machine:
# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU X5460 @ 3.16GHz
stepping : 6
microcode : 0x60f
cpu MHz : 2703.129
cache size : 6144 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm
constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni
dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm
pti tpr_shadow vnmi flexpriority dtherm
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
itlb_multihit
bogomips : 6318.15
clflush size : 64
cache_alignment : 64
address sizes : 38 bits physical, 48 bits virtual
power management:
Sincerely,
iordan
--
The conscious mind has only one thread of execution.