LACP across multiple switches
by Jorge Visentini
Hi all.
Is it possible to configure oVirt to work with two NICs in a bond/LACP
across two switches, as in the image below?
[image: LACP_Across_Two_Switchs.png]
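For context, on the host side I imagine the bond would use custom bonding
options along these lines (just my assumption of sensible values, not a
verified oVirt recommendation), with the two switches stacked or running
MLAG so they present a single LACP aggregate:
# Custom bonding options, e.g. in "Setup Host Networks":
mode=802.3ad miimon=100 xmit_hash_policy=layer2+3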
Thank you all.
You guys do a wonderful job.
--
Att,
Jorge Visentini
+55 55 98432-9868
2 years, 6 months
OVS switch type for hosted-engine
by Devin A. Bougie
Is it possible to set up a hosted engine using the OVS switch type instead of Legacy? If it's not possible to start out as OVS, instructions for switching from Legacy to OVS after the fact would be greatly appreciated.
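In case it helps frame the question: what I was considering is deploying with
the default and then flipping the cluster switch type through the REST API
afterwards, roughly like the sketch below. I have not verified that this works
or is supported for the hosted-engine cluster; the engine FQDN, credentials
and cluster ID are placeholders.
# Sketch only: update the cluster's switch_type via the REST API
curl -k -u admin@internal:PASSWORD -X PUT \
  -H "Content-Type: application/xml" \
  -d '<cluster><switch_type>ovs</switch_type></cluster>' \
  https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID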
Many thanks,
Devin
2 years, 7 months
USB3 redirection
by Rik Theys
Hi,
I'm trying to assign a USB3 controller to a CentOS 7.4 VM in oVirt 4.1
with USB redirection enabled.
I've created the following file in /etc/ovirt-engine/osinfo.conf.d:
01-usb.properties, with the content:
os.other.devices.usb.controller.value = nec-xhci
and have restarted ovirt-engine.
If I disable USB support in the web interface for the VM, the xhci
controller is added to the VM (I can see it on the qemu-kvm
command line), but USB redirection is not available.
If I enable USB support in the UI, no xhci controller is added (only 4
uhci controllers).
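For what it's worth, this is how I'm double-checking which USB controllers
the guest actually gets (run on the host; the VM name is just an example):
# List the USB controller definitions libvirt has for the running VM
virsh -r dumpxml my-centos-vm | grep -i usb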
Is there a way to make the controllers used for USB redirection xhci controllers?
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>
2 years, 7 months
OVN routing and firewalling in oVirt
by Gianluca Cecchi
Hello,
how do we manage routing between different OVN networks in oVirt, and
between OVN networks and physical ones?
Based on the architecture described here:
http://openvswitch.org/support/dist-docs/ovn-architecture.7.html
I see the terms "logical router" and "gateway router", but how do they apply
to an oVirt configuration?
Do I have to choose between setting up a dedicated VM or a physical box, or
is it applicable/advisable to put the gateway functionality on the oVirt
host itself?
Is there any security policy (like security groups in OpenStack) to
implement?
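To make the question concrete: outside of oVirt I would expect to wire a
logical switch to a logical router with plain ovn-nbctl, something like the
sketch below (names and addresses are made up, and this bypasses the oVirt
external network provider entirely):
# Create a logical router and attach an existing logical switch "net1" to it
ovn-nbctl lr-add lr0
ovn-nbctl lrp-add lr0 lr0-net1 00:00:00:00:10:01 192.168.10.1/24
ovn-nbctl lsp-add net1 net1-lr0
ovn-nbctl lsp-set-type net1-lr0 router
ovn-nbctl lsp-set-addresses net1-lr0 router
ovn-nbctl lsp-set-options net1-lr0 router-port=lr0-net1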
Thanks,
Gianluca
2 years, 7 months
Install hosted-engine - Task Get local VM IP failed
by florentl
Hi all,
I'm trying to install the hosted engine on node ovirt-node-ng-4.2.3-0.20180518.
Every time I get stuck on:
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed":
true, "cmd": "virsh -r net-dhcp-leases default | grep -i
00:16:3e:6c:5a:91 | awk '{ print $5 }' | cut -f1 -d'/'", "delta":
"0:00:00.108872", "end": "2018-06-01 11:17:34.421769", "rc": 0, "start":
"2018-06-01 11:17:34.312897", "stderr": "", "stderr_lines": [],
"stdout": "", "stdout_lines": []}
I tried with a static IP address and with DHCP, but both failed.
To be more specific, I installed three nodes and deployed GlusterFS with
the wizard. I'm in a nested virtualization environment for this lab
(VMware ESXi hypervisor).
My node IP is 192.168.176.40, and I want the hosted-engine VM to have
192.168.176.43.
Thanks,
Florent
2 years, 8 months
Important changes to the oVirt Terraform Provider
by Janos Bonic
Dear oVirt community,
We are making sweeping and backwards-incompatible changes to the oVirt
Terraform provider. *We want your feedback before we make these changes.*
Here’s the short list of what we would like to change; please read the
details below.
1. The current master branch will be renamed to legacy. The usage of
this provider will be phased out within Red Hat around the end / beginning
of next year. If you want to create a fork, we are happy to add a link to
your fork to the readme.
2. A new main branch will be created and a *new Terraform provider*
written from scratch on the basis of go-ovirt-client
<https://github.com/ovirt/go-ovirt-client>. (Preview here
<https://github.com/haveyoudebuggedit/terraform-provider-ovirt>) This
provider will only have limited functionality in its first release.
3. This new provider will be released to the Terraform registry, and
will have full test coverage and documentation. This provider will be
released as version v2.0.0 when ready to signal that it is built on the
Terraform SDK v2.
4. A copy of this new Terraform provider will be kept in the v1 branch
and backported to the Terraform SDK v1 for the benefit of the OpenShift
Installer <https://github.com/openshift/installer>. We will not tag any
releases, and we will not release this backported version in binary form.
5. We are hosting a *community call* on the 14th of October at 13:00 UTC
on this link <https://bluejeans.com/476587312/8047>. Please join to
provide feedback and suggest changes to this plan.
Why are we doing this?
The original Terraform provider
<https://github.com/EMSL-MSC/terraform-provider-ovirt> for oVirt was
written four years ago by @Maigard <https://github.com/Maigard> at EMSL-MSC
<http://github.com/EMSL-MSC/terraform-provider-ovirt>. The oVirt fork of
this provider is about 2 years old and went through rapid expansion, adding
a large number of features.
Unfortunately, this continuous rapid growth came at a price: the original
test infrastructure deteriorated, and certain resources, especially virtual
machine creation, ballooned to a size we feel has become unmaintainable.
If you tried to contribute to the Terraform provider recently, you may have
noticed that our review process has become extremely slow. We can no longer
run the original tests, and our end-to-end test suite is not integrated
outside of the OpenShift CI system. Every change to the provider requires
one of only three people to review the code and also run a manual test
suite that is currently only runnable on one computer.
We also noticed an increasing number of bugs reported on OpenShift on
oVirt/RHV related to the Terraform provider.
Our original plan was to fix the test infrastructure and then slowly
transition API calls to go-ovirt-client, but that resulted in a PR of over
5,000 lines of code
<https://github.com/oVirt/terraform-provider-ovirt/pull/277> that cannot in
good conscience be merged in a single piece. Splitting it up is difficult
and would likely result in broken functionality where test coverage is not
present.
What are we changing for you, the users?
First of all, documentation. You can already preview the documentation here
<https://registry.terraform.io/providers/haveyoudebuggedit/ovirt/latest/docs>.
You will notice that the provider currently only supports a small set of
features. You can find the full list of features
<https://github.com/haveyoudebuggedit/terraform-provider-ovirt/milestone/1>
we are planning for the first release on GitHub. However, if you are using
resources such as cluster creation, these will not work yet, and we
recommend sticking to the old provider for the time being.
The second big change will be how resources are treated. Instead of large
resources that need to call several oVirt APIs to create, we will provide
resources that each call only one API. This will lead to fewer bugs. For
example:
- ovirt_vm will create the VM, but not attach any disks or network
interfaces to it.
- ovirt_disk_attachment or ovirt_disk_attachments will attach a disk to
the VM.
- ovirt_nic will create a network interface.
- ovirt_vm_start will start the virtual machine when provisioned and stop
it when deprovisioned.
You can use the depends_on
<https://www.terraform.io/docs/language/meta-arguments/depends_on.html>
meta-argument to make sure disks and network interfaces are attached before
you start the VM. Alternatively, you can hot-plug network interfaces later.
For example:
resource "ovirt_vm" "test" {
cluster_id = "some-cluster-id"
template_id = "some-template-id"
}
resource "ovirt_disk" "test" {
storagedomain_id = "some-storage-domain-id"
format = "cow"
size = 512
alias = "test"
sparse = true
}
resource "ovirt_disk_attachment" "test" {
vm_id = ovirt_vm.test.id
disk_id = ovirt_disk.test.id
disk_interface = "virtio_scsi"
}
resource "ovirt_vm_start" "test" {
vm_id = ovirt_vm.test.id
depends_on = [ovirt_disk_attachment.test]
}
The next change is the availability of the provider on the Terraform
Registry. You will no longer have to download the binary. Instead, you will
be able to simply pull in the provider like this:
terraform {
  required_providers {
    ovirt = {
      source  = "ovirt/ovirt"
      version = "..."
    }
  }
}

provider "ovirt" {
  # Configuration options
}
The configuration options for the provider itself have also been greatly
expanded, see the preliminary documentation
<https://registry.terraform.io/providers/haveyoudebuggedit/ovirt/latest/docs>
for details.
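For illustration, a minimal provider configuration might look like the sketch
below; the attribute names are assumptions carried over from the old provider
and the preview docs, so please treat the linked documentation as
authoritative:
variable "ovirt_password" {
  type      = string
  sensitive = true
}

provider "ovirt" {
  url      = "https://engine.example.com/ovirt-engine/api" # engine API URL (placeholder)
  username = "admin@internal"
  password = var.ovirt_password
  # TLS verification options are new in this provider; see the documentation
  # for the exact attribute names.
}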
What’s changing behind the scenes?
The new Terraform provider is a complete rewrite based on the
go-ovirt-client <https://github.com/ovirt/go-ovirt-client> library. The
single biggest advantage of this library is that it has built-in mocks for
all resources it supports. Having mocks allows us to run tests without
needing to spin up an oVirt instance. We have already configured GitHub
Actions
<https://github.com/haveyoudebuggedit/terraform-provider-ovirt/actions> on
the new provider and all changes are automatically checked against these
mocks.
We may decide to add an end-to-end test later, but for the foreseeable
future we will trust the correctness of the mocks to test community
contributions. This means that we will be able to merge changes much more
quickly.
On the OpenShift side we will also switch to using the new provider, since
this is the primary motivation for the change. The OpenShift Installer uses
the legacy version 1 of the Terraform SDK, so we will maintain a version
1-compatible copy in the v1 branch, which the installer can pull in. It is
important to note, however, that the v1 branch will be a pure backport; we
will not develop it separately. Development will be focused on the version
in main that is being released to the Terraform Registry.
What does this mean to you, the contributors?
The current Terraform provider has several pull requests open
<https://github.com/oVirt/terraform-provider-ovirt/pulls>. Unfortunately,
we currently do not have the capacity to properly vet and run our internal
test suite against these changes. Unlike the new Terraform provider, the
current one lacks the working tests, linting, and code structure that make
merging changes easier.
We are very sorry to say that *these patches are unlikely to be merged*. We
know that this is a terrible thing; you have put effort into writing them.
Unfortunately, we do not see an alternative, as there are already numerous
bugs on our radar and adding more code would not make the problem go away.
We want to hear your opinion
As the owners of the original Terraform provider we haven’t been keeping up
with reviewing your contributions and issues. Some are several months old
and haven’t received answers for a long time. We want to change that; we
want to hear from you. Please join our community round table around the
Terraform provider on the 14th of October at 13:00 UTC on this link
<https://bluejeans.com/476587312/8047>.
*We want to know: Which resources are the most important to you? How does
this change impact you? Can we make the transition smoother for you? Would
you do anything differently in the light of the issues described above?*
2 years, 9 months
NFS Synology NAS (DSM 7)
by Maton, Brett
Hi List,
I can't get oVirt 4.4.8.5-1.el8 (running on oVirt Node hosts) to connect
to an NFS share on a Synology NAS.
I gave up trying to get the hosted engine deployed and put that on an
iSCSI volume instead...
The directory being exported from the NAS is owned by vdsm / kvm (36:36).
Permissions I've tried:
0750
0755
0777
I've also tried NFS versions auto / v3 / v4_0.
As others have mentioned regarding NFS, if I mount the share manually from
the host with
mount nas.mydomain.com:/volume1/ov_nas
it connects and works just fine.
If I try to add the share as a storage domain in oVirt, I get:
Operation Cancelled
Error while executing action Add Storage Connection: Permission settings on
the specified path do not allow access to the storage.
Verify permission settings on the specified storage path.
When tailing /var/log/messages on the oVirt host, I see this message appear
(I changed the domain name for this post, so the escaped dots may look
different in reality):
Aug 27 17:36:07 ov001 systemd[1]:
rhev-data\x2dcenter-mnt-nas.mydomain.com:_volume1_ov__nas.mount:
Succeeded.
The NAS is running the 'new' DSM 7, /etc/exports looks like this:
/volume1/ov_nas x.x.x.x(rw,async,no_root_squash,anonuid=36,anongid=36)
(reloaded with exportfs -ra)
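In case it helps, this is the manual check I can run from the host to rule
out a uid/gid mapping problem (the mount point and options are just what I'd
try by hand, not exactly what vdsm uses):
mkdir -p /mnt/ov_nas_test
mount -t nfs -o vers=4.0 nas.mydomain.com:/volume1/ov_nas /mnt/ov_nas_test
sudo -u vdsm touch /mnt/ov_nas_test/write_test && echo "vdsm can write"
ls -ln /mnt/ov_nas_test
umount /mnt/ov_nas_test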
Any suggestions appreciated.
Regards,
Brett
2 years, 9 months
VM hanging at sustained high throughput
by David Johnson
Hi ovirt gurus,
This is an interesting issue, one I never expected to have.
When I push high volumes of writes to my NAS, VMs go into a paused state.
I'm looking at this from a number of angles, including upgrades to the NAS
appliance.
I can reproduce the problem at will running a CentOS 7.9 VM on oVirt 4.5.
*Questions:*
1. Is my analysis of the failure (below) reasonable/correct?
2. What am I looking for to validate this?
3. Is there a configuration that I can set to make it a little more robust
while I acquire the hardware to improve the NAS?
*Reproduction:*
Standard test of file write speed:
[root@cen-79-pgsql-01 ~]# dd if=/dev/zero of=./test bs=512k count=4096
oflag=direct
4096+0 records in
4096+0 records out
2147483648 bytes (2.1 GB) copied, 1.68431 s, 1.3 GB/s
Give it more data
[root@cen-79-pgsql-01 ~]# dd if=/dev/zero of=./test bs=512k count=12228
oflag=direct
12228+0 records in
12228+0 records out
6410993664 bytes (6.4 GB) copied, 7.22078 s, 888 MB/s
The odds are about 50/50 that 6 GB will kill the VM, but 100% when I hit 8
GB.
*Analysis:*
What appears to be happening is that the intent cache on the NAS is on an
SSD, and my VMs are pushing data about three times as fast as the SSD can
handle. When the SSD queue backs up beyond a certain point, the NAS (which
places reliability over speed) says "Whoah Nellie!", and the VM chokes.
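To check this the next time a VM pauses, I plan to look at the pause reason
on the host, roughly like this (the VM name is an example, and the exact
vdsm log wording is my guess):
virsh -r domstate cen-79-pgsql-01 --reason
grep -iE "abnormal vmstop|eio|enospc" /var/log/vdsm/vdsm.log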
*David Johnson*
2 years, 9 months
Deploy ovirt-csi in the kubernetes cluster
by ssarang520@gmail.com
Hi,
I want to deploy ovirt-csi in a Kubernetes cluster, but the guide only covers deploying to OpenShift.
How can I deploy ovirt-csi in a Kubernetes cluster? Is there any way to do that?
2 years, 9 months
Host needs to be reinstalled after configuring power management
by Andrew DeMaria
Hi,
I am running oVirt 4.3 and have found the following action item immediately
after configuring power management for a host:
Host needs to be reinstalled as important configuration changes were
applied on it.
The thing is, I've just freshly installed this host, and it seems strange
that I need to reinstall it.
Is there a better way to install a host and configure power management
without having to reinstall it after?
Thanks,
Andrew
2 years, 10 months