OVN routing and firewalling in oVirt
by Gianluca Cecchi
Hello,
how do we manage routing between different OVN networks in oVirt?
And between OVN networks and physical ones?
Based on the architecture described here:
http://openvswitch.org/support/dist-docs/ovn-architecture.7.html
I see the terms logical routers and gateway routers, respectively, but how do
they apply to an oVirt configuration?
Do I have to choose between setting up a dedicated VM or a physical machine,
or is it applicable/advisable to put the gateway functionality on the oVirt
host itself?
Is there any security policy (like security groups in OpenStack) that can be
implemented?
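For reference, at the plain OVN level the pieces from that document map to ovn-nbctl objects roughly as below; a minimal sketch assuming two existing OVN logical switches named net1 and net2 (all names, MACs and subnets here are made up, and this says nothing about how oVirt itself expects the wiring to be done):

```bash
# Create a logical router and one router port per subnet (run against the
# OVN northbound DB, e.g. on the host running ovn-central)
ovn-nbctl lr-add lr0
ovn-nbctl lrp-add lr0 lr0-net1 00:00:00:00:10:01 192.168.10.1/24
ovn-nbctl lrp-add lr0 lr0-net2 00:00:00:00:20:01 192.168.20.1/24

# Patch each logical switch to its router port
ovn-nbctl lsp-add net1 net1-lr0
ovn-nbctl lsp-set-type net1-lr0 router
ovn-nbctl lsp-set-addresses net1-lr0 router
ovn-nbctl lsp-set-options net1-lr0 router-port=lr0-net1

ovn-nbctl lsp-add net2 net2-lr0
ovn-nbctl lsp-set-type net2-lr0 router
ovn-nbctl lsp-set-addresses net2-lr0 router
ovn-nbctl lsp-set-options net2-lr0 router-port=lr0-net2
```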
Thanks,
Gianluca
2 years
Unable to access ovirt Admin Screen from ovirt Host
by louisb@ameritech.net
I've reinstalled oVirt 4.4 on my server remotely via the cockpit terminal. I'm able to access the oVirt admin screen remotely from the laptop that I used for the install. However, using the same URL from the server's own console I'm unable to gain access to the admin screen.
Following the instructions in the documentation I've modified the file /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf to reflect the DNS name, and I also entered the IP address. But I'm still unable to access the screen from the server console.
What else needs to change in order to gain access from the server console?
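For reference, the change follows the documented SSO_ALTERNATE_ENGINE_FQDNS pattern; a sketch of what the file ends up containing (the hostname and IP below are placeholders for mine):

```bash
# /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf
# (placeholders; SSO_ALTERNATE_ENGINE_FQDNS is a space-separated list of
# additional names the SSO layer should accept for the admin portal)
SSO_ALTERNATE_ENGINE_FQDNS="engine.example.com 192.168.1.10"
```

After editing the file, ovirt-engine has to be restarted (systemctl restart ovirt-engine) for the change to take effect, and the browser on the server console still needs to be able to resolve whichever name is used in the URL (e.g. via /etc/hosts).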
Thanks
2 years
VM failed to start when host's network is down
by lizhijian@fujitsu.com
Posting again after subscribing to the mailing list.
Hi guys,
I have an all-in-one oVirt environment where the node has both vdsm and
ovirt-engine installed.
I have set up the oVirt environment and it works well.
For some reasons, I have to use this oVirt setup with the node's networking down (I unplugged the network cable).
In that case, I noticed that I cannot start a VM anymore.
I wonder if there is a configuration switch to enable oVirt to work with the node's networking down?
If not, is it possible to make it work in an easy way?
When I try to start the VM with the oVirt API, it responds with:
```bash
[root@74d2ab9cb0 ~]# sh start.sh
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<action>
<async>false</async>
<fault>
<detail>[Cannot run VM. Unknown Data Center status.]</detail>
<reason>Operation Failed</reason>
</fault>
<status>failed</status>
</action>
[root@74d2ab9cb0 ~]# sh start.sh
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<action>
<async>false</async>
<fault>
<detail>[Cannot run VM. Unknown Data Center status.]</detail>
<reason>Operation Failed</reason>
</fault>
<status>failed</status>
</action>
[root@74d2ab9cb0 ~]#
```
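For reference, the script just POSTs the start action to the VM via the REST API, along these lines (engine FQDN, credentials and VM id below are placeholders, not the real ones):

```bash
# Rough equivalent of start.sh: ask the engine to start one VM over the REST API
curl -k -s \
     -u 'admin@internal:password' \
     -H 'Content-Type: application/xml' \
     -H 'Accept: application/xml' \
     -d '<action/>' \
     'https://engine.example.com/ovirt-engine/api/vms/VM_ID/start'
```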
Attached are the vdsm and ovirt-engine logs.
Thanks
Zhijian
2 years
Wait for the engine to come up on the target vm
by Vladimir Belov
I'm trying to deploy oVirt with a self-hosted engine, but at the last step I get an engine startup error.
[ INFO ] TASK [Wait for the engine to come up on the target VM]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.181846", "end": "2022-03-28 15:41:28.853150", "rc": 0, "start": "2022-03-28 15:41:28.671304", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5537 (Mon Mar 28 15:41:20 2022)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=5537 (Mon Mar 28 15:41:20 2022)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", \"hostname\": \"v2.test.ru\", \"host-id\": 1, \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": false, \"crc32\": \"4d2eeaea\", \"local_conf_timestamp\": 5537, \"host-ts\": 5537}, \"global_maintenance\": false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5537 (Mon Mar 28 15:41:20 2022)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=5537 (Mon Mar 28 15:41:20 2022)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", \"hostname\": \"v2.test.ru\", \"host-id\": 1, \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": false, \"crc32\": \"4d2eeaea\", \"local_conf_timestamp\": 5537, \"host-ts\": 5537}, \"global_maintenance\": false}"]}
After the installation completed, the state of the engine is as follows:
Engine status: {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}
After reading the vdsm logs, I found that the QEMU guest agent in the engine VM could not be reached for some reason:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5400, in qemuGuestAgentShutdown
    self._dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_GUEST_AGENT)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2517, in shutdownFlags
    if ret == -1: raise libvirtError ('virDomainShutdownFlags() failed', dom=self)
libvirtError: Guest agent is not responding: QEMU guest agent is not connected
During the installation phase, qemu-guest-agent on the guest VM was running.
Setting a temporary password (hosted-engine --add-console-password --password) and connecting via VNC also failed.
Using "hosted-engine --console" also failed to connect:
The engine VM is running on this host
Connected to HostedEngine domain
Escaping character: ^]
error: internal error: character device <null> not found
The network settings are configured using static addressing, without DHCP.
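For what it's worth, a quick way to cross-check things from the host is sketched below; the FQDN is a placeholder for the engine name in /etc/hosts, and the health URL is, as far as I understand, what the HA agent's liveliness check probes:

```bash
# Does the engine FQDN resolve to the address the engine VM really has?
ping -c1 engine.example.com

# The "failed liveliness check" comes from an HTTP probe of the engine health
# page, so this should answer once the engine is really reachable:
curl -k https://engine.example.com/ovirt-engine/services/health

# Address the engine VM actually got, asked via libvirt (read-only connection;
# the agent query may fail while the guest agent is not responding):
virsh -r domifaddr HostedEngine --source agent
```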
It seems to me that this is due to the fact that the engine receives an IP address that does not match the entry in /etc/hosts, but I do not know how to fix it. Any help is welcome, I will provide the necessary logs. Thanks
2 years
Gluster storage and TRIM VDO
by Oleh Horbachov
Hello everyone. I have a Gluster distributed-replicated cluster deployed; the cluster is the storage for oVirt, and the bricks are VDO on top of a raw disk. When discarding via 'fstrim -av', the storage hangs for a few seconds and the connection is lost. Does anyone know the best practices for using TRIM with VDO in the context of oVirt?
ovirt - v4.4.10
gluster - v8.6
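For what it's worth, until someone with more VDO experience replies, one workaround that can be tried is trimming one brick at a time instead of everything at once with 'fstrim -av', so each VDO volume only has to absorb one burst of discards; a rough sketch (mountpoints are examples):

```bash
# Trim Gluster brick filesystems one at a time instead of all at once
# (mountpoints below are examples; adjust to the real brick paths)
for mnt in /gluster_bricks/data1 /gluster_bricks/data2; do
    fstrim -v "$mnt"
    sleep 60   # give VDO time to absorb the discards before the next brick
done
```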
2 years
Mac addresses pool issues
by Nicolas MAIRE
Hi,
We're encountering some issues on one of our production clusters running oVirt 4.2. We had an incident with the engine's database a few weeks back that we were able to recover from; however, since then we've been having a bunch of weird issues, mostly around MAC addresses.
It started off with the engine being unable to find a free MAC when creating a VM, despite there being significantly fewer virtual interfaces (around 250) than the total number of MACs in the default pool (default configuration, so 65536 addresses). It escalated into creating duplicate MACs (despite the pool not allowing it), and now we can't even modify the pool or remove VMs (since deleting the attached vNICs fails). So we're kind of stuck with a cluster whose running VMs are fine as long as we don't touch them, but on which we can't create new VMs (or modify the existing ones).
In the engine's log we can see that we had an "Unable to initialize MAC pool due to existing duplicates (Failed with error MAC_POOL_INITIALIZATION_FAILED and code 5010)" error when we tried to reconfigure the pool this morning (see the full error stack here: https://pastebin.com/6bKMfbLn), and now whenever we try to delete a VM or reconfigure the pool we get a 'Pool for id="58ca604b-017d-0374-0220-00000000014e" does not exist' error (see the full error stack here: https://pastebin.com/Huy91iig). But if we check the engine's mac_pools table we can see that it's there:
engine=# select * from mac_pools;
id | name | description | allow_duplicate_mac_addresses | default_pool
--------------------------------------+---------+------------------+-------------------------------+--------------
58ca604b-017d-0374-0220-00000000014e | Default | Default MAC pool | f | t
(1 row)
engine=# select * from mac_pool_ranges;
mac_pool_id | from_mac | to_mac
--------------------------------------+-------------------+-------------------
58ca604b-017d-0374-0220-00000000014e | 56:6f:1a:1a:00:00 | 56:6f:1a:1a:ff:ff
(1 row)
I found this bugzilla that seems to somehow apply https://bugzilla.redhat.com/show_bug.cgi?id=1554180 however I don't really know how to "reinitialize engine", especially considering that the mac pool was not configured to allow duplicate macs to begin with, and I've no idea what the impact of that reinitialization would be on the current VMs.
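In case it helps, this is the kind of query we were planning to run to see which MACs are actually duplicated; it assumes the vm_interface table and its mac_addr column are still the right place to look in 4.2 (adjust the psql invocation to however you get to the engine=# prompt):

```bash
# List MAC addresses that appear on more than one vNIC in the engine DB
# (assumes the vm_interface table / mac_addr column; run on the engine host)
sudo -u postgres psql engine -c \
  "SELECT mac_addr, count(*) FROM vm_interface GROUP BY mac_addr HAVING count(*) > 1;"
```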
I'm quite new to oVirt (only been using it for one year) so any help would be greatly appreciated.
2 years
Enroll Host Certificate
by dlotarev@yahoo.com
Hi there! I have a problem enrolling a host certificate.
The steps that I took:
1) Move host to maintenance mode (all VMs transferred to another host including HE VM)
2) Enroll certificate via web interface without errors
3) Exit from maintenance mode (transferred all VMs back including HE VM)
4) Restart ovirt-engine service
But my problem is that after 6 hours I got a message from the oVirt engine notifier that my certificate will expire soon.
I know that my oVirt installation is old (4.1.9), but what can I do about that? Maybe I missed something. I didn't reboot the host after renewing the certificate.
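In case it matters, this is how the notifier's warning can be compared with what is actually on the host's disk after the enroll; the paths below are the usual vdsm/libvirt certificate locations, assuming 4.1 uses the same layout:

```bash
# Check the real expiry dates of the host certificates after the enroll
# (paths are the usual vdsm/libvirt locations; adjust if the layout differs)
openssl x509 -noout -enddate -subject -in /etc/pki/vdsm/certs/vdsmcert.pem
openssl x509 -noout -enddate -subject -in /etc/pki/libvirt/clientcert.pem
```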
Thank you for any advice!
2 years
Python Unsupported Version Detection (ovirt Manager 4.4.10)
by michael.li@hactlsolutions.com
Hi,
We have installed the oVirt Manager on CentOS Stream 8 and are running security scanning with Tenable Nessus (plugin ID 148367).
When I try to remove python3.6, it would remove many dependent packages related to oVirt.
How can I fix the vulnerability reported below?
Python Unsupported Version Detection
Plugin Output:
The following Python installation is unsupported :
Path : /
Port : 35357
Installed version : 3.6.8
Latest version : 3.10
Support dates : 2021-12-23 (end of life)
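Before deciding on remediation, it may help to confirm what is actually listening on the flagged port (35357 is typically the ovirt-provider-ovn authentication port) and which package owns the flagged interpreter; a small sketch:

```bash
# What is listening on the port the scanner flagged?
ss -tlnp | grep 35357

# Which package owns the flagged Python 3.6 interpreter? On CentOS Stream 8
# this is the distribution python that many oVirt components depend on, so it
# cannot simply be removed.
rpm -qf "$(readlink -f /usr/bin/python3.6)"
```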
Regards,
Michael Li
2 years
Network filters in oVirt : zero-trust, IP and port filtering
by ravi k
Good people of the community,
Hope you are all doing well. We are exploring the network filters in oVirt to check whether we can implement a zero-trust model at the network level. The intention is to have a filter which takes two parameters, IP and PORT, followed by a 'deny all' rule. We realized that none of the default network filters offer such functionality and the only option is to write a custom filter.
Why don't we have such a filter in libvirt, and thereby in oVirt? Someone must already have thought about such a use case. So I was wondering whether network filters simply aren't meant to be used for implementing functionality like zero-trust?
Also, what are some practical use cases of the default filters that are provided? I was able to understand and use clean-traffic and clean-traffic-gateway.
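To make the custom-filter option concrete, a rough sketch of what such a libvirt nwfilter could look like is below; the filter name, the $IP variable and the hard-coded port are illustrative, it would have to be defined on every host, and I'm not certain this is the supported way to plug a custom filter into oVirt:

```bash
# Sketch of a "single IP/port allowed, everything else dropped" nwfilter.
# $IP is an nwfilter variable that would be supplied per vNIC; the port is
# hard-coded here. The drop rule only covers inbound IPv4 ('all' does not
# match ARP); IPv6 would need its own rules.
cat > /tmp/zero-trust-sketch.xml <<'EOF'
<filter name='zero-trust-sketch' chain='root'>
  <rule action='accept' direction='in' priority='100'>
    <tcp srcipaddr='$IP' dstportstart='443'/>
  </rule>
  <rule action='drop' direction='in' priority='1000'>
    <all/>
  </rule>
</filter>
EOF
virsh nwfilter-define /tmp/zero-trust-sketch.xml
```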
Regards,
ravi
2 years
info about removal of LVM structures before removing LUNs
by Gianluca Cecchi
Hello,
I'm going to hot remove some LUNs that were used as storage domains in a
4.4.7 environment.
I have already removed them from oVirt.
I think I would use the remove_mpath_device.yml playbook if I could find
it... it seems it should be in the examples dir inside the ovirt ansible
collections, but it is not there...
Anyway, I'm aware of the corresponding manual steps (I think EL8 doesn't
differ from EL7 here):
. get the names of the disks comprising the multipath device to remove
. remove the multipath device:
multipath -f "{{ lun }}"
. flush I/O, for every disk that was part of the multipath device:
blockdev --flushbufs {{ item }}
. remove the disks, for every disk that was part of the multipath device:
echo 1 > /sys/block/{{ item }}/device/delete
My main doubt is related to the LVM structures that I can see are still
present on the multipath devices.
E.g. for the multipath device 360002ac0000000000000013e0001894c:
# pvs --config 'devices { filter = ["a|.*|" ] }' | grep 360002ac0000000000000013e0001894c
  /dev/mapper/360002ac0000000000000013e0001894c a7f5cf77-5640-4d2d-8f6d-abf663431d01 lvm2 a-- <4.00t <675.88g
# lvs --config 'devices { filter = ["a|.*|" ] }' a7f5cf77-5640-4d2d-8f6d-abf663431d01
  LV                                   VG                                   Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  067dd3d0-db3b-4fd0-9130-c616c699dbb4 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 900.00g
  1682612b-fcbb-4226-a821-3d90621c0dc3 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------  55.00g
  3b863da5-2492-4c07-b4f8-0e8ac943803b a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
  47586b40-b5c0-4a65-a7dc-23ddffbc64c7 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------  35.00g
  7a5878fb-d70d-4bb5-b637-53934d234ba9 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 570.00g
  94852fc8-5208-4da1-a429-b97b0c82a538 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------  55.00g
  a2edcd76-b9d7-4559-9c4f-a6941aaab956 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
  de08d92d-611f-445c-b2d4-836e33935fcf a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 300.00g
  de54928d-2727-46fc-81de-9de2ce002bee a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------   1.17t
  f9f4d24d-5f2b-4ec3-b7e3-1c50a7c45525 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 300.00g
  ids                                  a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
  inbox                                a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
  leases                               a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------   2.00g
  master                               a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------   1.00g
  metadata                             a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
  outbox                               a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
  xleases                              a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------   1.00g
So the question is: would it be better to execute something like
. lvremove for every LV lv_name:
lvremove --config 'devices { filter = ["a|.*|" ] }' a7f5cf77-5640-4d2d-8f6d-abf663431d01/lv_name
. vgremove:
vgremove --config 'devices { filter = ["a|.*|" ] }' a7f5cf77-5640-4d2d-8f6d-abf663431d01
. pvremove:
pvremove --config 'devices { filter = ["a|.*|" ] }' /dev/mapper/360002ac0000000000000013e0001894c
and then proceed with the steps above, or do nothing at all, since the OS
itself doesn't "see" the LVs and it is only an oVirt view that is already
"clean"?
Also, because LVM is not cluster-aware, after doing that on one node I would
have the problem of rescanning LVM on the other nodes...
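If the LVM cleanup turns out to be worthwhile, a sketch of the whole sequence for the example LUN above would be something like the following (untested, run as root, and it assumes no other node still has the VG open and that the storage domain is already gone from oVirt):

```bash
DEV=360002ac0000000000000013e0001894c
VG=a7f5cf77-5640-4d2d-8f6d-abf663431d01
CFG='devices { filter = ["a|.*|" ] }'

# remove the LVs and the VG in one go, then wipe the PV label
vgremove --config "$CFG" -f "$VG"
pvremove --config "$CFG" "/dev/mapper/$DEV"

# collect the path devices before tearing down the multipath map
SLAVES=$(ls "/sys/block/$(basename "$(readlink -f "/dev/mapper/$DEV")")/slaves")
multipath -f "$DEV"

# flush and delete every SCSI disk that backed the map
for d in $SLAVES; do
    blockdev --flushbufs "/dev/$d"
    echo 1 > "/sys/block/$d/device/delete"
done
```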
Thanks in advance,
Gianluca
2 years