From: Marc Dequènes (Duck) <duck@redhat.com>
To: oVirt Infra <infra@ovirt.org>, users <users@ovirt.org>
Subject: Mailing-Lists upgrade
On behalf of the oVirt infra team, I'd like to announce that the current
mailing-list system is going to be upgraded to a brand new Mailman 3
installation on Monday, during the 11:00-12:00 JST slot.
It should not take a full hour to migrate, as we have already done incremental
synchronization with the current system, but it is better to keep some margin.
The new system will then take over delivery of the mails, but it might be a bit
slow at first as it needs to reindex all the archived mails (which might take
a few hours).
You can easily manage your subscriptions and delivery settings on the much
nicer web interface (https://lists.ovirt.org). There is a notion of account,
so you don't need to log in separately for each mailing list.
You can sign in using Fedora, GitHub or Google, or create a local account
if you prefer. Please keep in mind that signing in with a different method
would create separate accounts (which cannot be merged at the moment).
But you can easily link your account to other authentication methods in
your settings (click on your name in the top-right corner -> Account ->
As for the original mail archives, because the previous system did not
have stable URLs, we cannot create mappings to the new pages. We decided
to keep the old archives around at the same URL (/pipermail), so existing
links on the Internet will still work fine.
We hope you will be happy with the new system.
vdsClient will be removed from the master branch today.
It is using the XML-RPC protocol, which has been deprecated and replaced by JSON-RPC.
A new client for vdsm was introduced in 4.1: vdsm-client.
This is a simple client that uses the JSON-RPC protocol.
The client is not aware of the available methods and parameters, and you need to consult
the schema in order to construct the desired command.
Future versions should parse the schema and provide online help.
If you're using vdsClient, we will be happy to assist you in migrating to
the new vdsm client.
vdsm-client [-h] [-a ADDRESS] [-p PORT] [--unsecure] [--timeout TIMEOUT]
[-f FILE] namespace method [name=value [name=value] ...]
Invoking simple methods:
# vdsm-client Host getVMList
For invoking methods with many or complex parameters, you can read
the parameters from a JSON format file:
# vdsm-client Lease info -f lease.json
where lease.json file content is:
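(The example file content is missing from this archive; a minimal sketch of what it could look like, assuming Lease.info takes a single "lease" object, with obviously fake placeholder IDs:)

{
    "lease": {
        "sd_id": "11111111-1111-1111-1111-111111111111",
        "lease_id": "22222222-2222-2222-2222-222222222222"
    }
}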
It is also possible to read parameters from standard input, creating
complex parameters interactively:
# cat <<EOF | vdsm-client Lease info -f -
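(The heredoc body is also missing here; assuming the same placeholder JSON as above, the complete pipeline would look roughly like this:)

# cat <<EOF | vdsm-client Lease info -f -
{
    "lease": {
        "sd_id": "11111111-1111-1111-1111-111111111111",
        "lease_id": "22222222-2222-2222-2222-222222222222"
    }
}
EOF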
*Constructing a command from vdsm schema:*
Let's take VM.getStats as an example.
This is the entry in the schema:

VM.getStats:
    description: Get statistics about a running virtual machine.
    params:
    -   description: The UUID of the VM
        name: vmID
    return:
        description: An array containing a single VmStats record

namespace: VM
method name: getStats
params: vmID
The vdsm-client command is:
# vdsm-client VM getStats vmID=b3f6fa00-b315-4ad4-8108-f73da817b5c5
*Invoking getVdsCaps command:*
# vdsm-client Host getCapabilities
Please consult the vdsm-client help and man page for further details.
Red Hat Israel Ltd.
Is it possible to pass multiple VLANs to a VM (pfSense) using a single
virtual NIC? All my existing oVirt networks are set up as a single tagged
VLAN. I know this didn't use to be supported, but I wondered if this has
changed. My other option is to pass each VLAN as a separate NIC to the VM;
however, if I needed to add a new VLAN I would have to add a new interface
and reboot the VM, as hot-add of NICs is not supported by pfSense.
I have one cluster with two hosts, with power management correctly
configured, and one virtual machine with the HostedEngine, over shared
storage with Fibre Channel.
When I shut down the network of the host running the HostedEngine VM, should
it be possible for the HostedEngine VM to migrate automatically to another host?
What is the expected behaviour in this HA scenario?
Senior Software Engineer
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110
I’m running a 4.2 cluster with all VMs on Ceph using the Cinder external provider.
I’m trying to limit IOPS with Cinder QoS and volume types, but it doesn’t work.
The VM XML doesn’t show anything about <iotune>.
Is it expected to work, or is it not implemented yet?
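(For reference: if the limit were being applied through libvirt, one would expect an <iotune> element inside the relevant <disk> of the domain XML. A rough, illustrative sketch only; the numbers below are made up, not taken from any real setup:)

  <disk type='network' device='disk'>
    ...
    <iotune>
      <total_iops_sec>500</total_iops_sec>
      <total_bytes_sec>10485760</total_bytes_sec>
    </iotune>
  </disk>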
Has anyone installed a Windows 10 virtual machine?
I am having serious console performance issues even after installing the
Red Hat QXL driver from the virtio-win ISO.
Someone on a forum reported having similar issues and resolved them by
increasing the graphics card memory to 65536 by editing the XML (example
below), but how is that possible in oVirt permanently?

<model type='qxl' ram='131072' vram='131072' vgamem='65536'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'/>
I am using the latest release of oVirt 4.1.3, and I am connecting a Dell
Compellent SAN that has 2 fault domains, each on a separate VLAN, that I have
attached to oVirt. From what I understand, I am supposed to go into the “iSCSI
Multipathing” option and add a bond of the iSCSI interfaces. I have done
this, selecting the 2 logical networks together for iSCSI. I notice that
there is an option below to select Storage Targets, but if I select the
storage targets together with the logical networks, the cluster goes crazy
and appears to be mad. Storage, nodes, and everything go offline, even
though I also have NFS attached to the cluster.
How should this best be configured? What we notice happens is that when the
server reboots it seems to log into the SAN correctly, but according to the
Dell SAN it is only logged into one controller, so it only pulls both fault
domains from a single controller.