Python Unsupported Version Detection (oVirt Manager 4.4.10)
by michael.li@hactlsolutions.com
Hi,
We have installed oVirt Manager on CentOS Stream 8 and are running a security scan with Tenable Nessus (plugin ID 148367).
When I try to remove python3.6, it wants to remove many dependent packages related to oVirt.
How can I fix the vulnerability reported below?
Python Unsupported Version Detection
Plugin Output:
The following Python installation is unsupported :
Path : /
Port : 35357
Installed version : 3.6.8
Latest version : 3.10
Support dates : 2021-12-23 (end of life)
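For reference, this is roughly how I checked what depends on it (the package name python36 is my assumption from a stock CentOS Stream 8 install; adjust if yours differs):
dnf repoquery --whatrequires python36
dnf remove python36 --assumeno   # dry run only: lists the oVirt packages that would be removed with it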
Regards,
Michael Li
Network filters in oVirt : zero-trust, IP and port filtering
by ravi k
Good people of the community,
Hope you are all doing well. We are exploring the network filters in oVirt to check if we can implement a zero-trust model at the network level. The intention is to have a filter which takes two parameters, IP and PORT, followed by a 'deny all' rule. We realized that none of the default network filters offer such functionality and the only option is to write a custom filter.
Why isn't such a filter available in libvirt, and thereby in oVirt? Someone would surely have thought about such a use case already, so I was wondering whether network filters are simply not meant to be used for implementing something like zero-trust.
Also, what are some practical use cases for the default filters that are provided? I was able to understand and use clean-traffic and clean-traffic-gateway.
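For context, this is roughly the kind of custom filter I have in mind (untested sketch only; the filter name, the IP/PORT parameters and how they would be passed from a vNIC profile are all assumptions on my part):
cat > zero-trust.xml <<'EOF'
<filter name='zero-trust' chain='root'>
  <!-- accept inbound TCP to the given IP and port -->
  <rule action='accept' direction='in' priority='100'>
    <tcp dstipaddr='$IP' dstportstart='$PORT'/>
  </rule>
  <!-- accept the corresponding return traffic -->
  <rule action='accept' direction='out' priority='100'>
    <tcp srcipaddr='$IP' srcportstart='$PORT'/>
  </rule>
  <!-- drop everything else -->
  <rule action='drop' direction='inout' priority='1000'>
    <all/>
  </rule>
</filter>
EOF
virsh nwfilter-define zero-trust.xml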
Regards,
ravi
info about removal of LVM structures before removing LUNs
by Gianluca Cecchi
Hello,
I'm going to hot-remove some LUNs that were used as storage domains from a
4.4.7 environment.
I have already removed them from oVirt.
I think I would use the remove_mpath_device.yml playbook if I could find it... it
seems it should be in the examples dir inside the ovirt ansible collections, but
it is not there...
Anyway, I'm aware of the corresponding manual steps (I think version 8
doesn't differ from version 7 in this):
. get the names of the disks comprising the multipath device to remove
. remove the multipath device:
multipath -f "{{ lun }}"
. flush I/O:
blockdev --flushbufs {{ item }}
for every disk that was part of the multipath device
. remove the disks:
echo 1 > /sys/block/{{ item }}/device/delete
for every disk that was part of the multipath device
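Roughly, something like this (untested sketch using the WWID from the example below; the slave disk names are collected before flushing the map):
MPATH=360002ac0000000000000013e0001894c
DM=$(basename "$(readlink -f /dev/mapper/$MPATH)")   # resolves to the dm-N node
DISKS=$(ls /sys/block/$DM/slaves)                    # the underlying sdX devices
multipath -f "$MPATH"                                # remove the multipath map
for d in $DISKS; do
    blockdev --flushbufs /dev/$d                     # flush I/O
    echo 1 > /sys/block/$d/device/delete             # remove the disk
done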
My main doubt is related to the LVM structures that I can see are still present
on the multipath devices.
E.g. for the multipath device 360002ac0000000000000013e0001894c:
# pvs --config 'devices { filter = ["a|.*|" ] }' | grep 360002ac0000000000000013e0001894c
  /dev/mapper/360002ac0000000000000013e0001894c a7f5cf77-5640-4d2d-8f6d-abf663431d01 lvm2 a-- <4.00t <675.88g
# lvs --config 'devices { filter = ["a|.*|" ] }' a7f5cf77-5640-4d2d-8f6d-abf663431d01
  LV                                   VG                                   Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  067dd3d0-db3b-4fd0-9130-c616c699dbb4 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 900.00g
  1682612b-fcbb-4226-a821-3d90621c0dc3 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------  55.00g
  3b863da5-2492-4c07-b4f8-0e8ac943803b a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
  47586b40-b5c0-4a65-a7dc-23ddffbc64c7 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------  35.00g
  7a5878fb-d70d-4bb5-b637-53934d234ba9 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 570.00g
  94852fc8-5208-4da1-a429-b97b0c82a538 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------  55.00g
  a2edcd76-b9d7-4559-9c4f-a6941aaab956 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
  de08d92d-611f-445c-b2d4-836e33935fcf a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 300.00g
  de54928d-2727-46fc-81de-9de2ce002bee a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------   1.17t
  f9f4d24d-5f2b-4ec3-b7e3-1c50a7c45525 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 300.00g
  ids                                  a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
  inbox                                a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
  leases                               a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------   2.00g
  master                               a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------   1.00g
  metadata                             a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
  outbox                               a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
  xleases                              a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------   1.00g
So the question is:
would it be better to execute something like
lvremove for every LV lv_name:
lvremove --config 'devices { filter = ["a|.*|" ] }' a7f5cf77-5640-4d2d-8f6d-abf663431d01/lv_name
then vgremove:
vgremove --config 'devices { filter = ["a|.*|" ] }' a7f5cf77-5640-4d2d-8f6d-abf663431d01
then pvremove:
pvremove --config 'devices { filter = ["a|.*|" ] }' /dev/mapper/360002ac0000000000000013e0001894c
and then proceed with the steps above, or do nothing at all, since the OS itself
doesn't "see" the LVs and it is only an oVirt view that is already "clean"?
Also, since LVM is not cluster-aware, after doing that on one node I would
still have the problem of rescanning LVM on the other nodes...
Thanks in advance,
Gianluca
oVirt Local Repo
by simon@justconnect.ie
Hi all,
I work in a secure environment with no external internet access, where local mirrors provide a local oVirt repo.
The person who set these up has since left, and no documentation is available. Updating our oVirt environments fails, as 'no updates available' is the response. I recall a thread stating that something had broken when EOL was announced in December, but I can't find that thread anymore.
I would like to build clean local oVirt and oVirt dependencies repos - is there any documentation for this?
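Is it as simple as something like the following? (The repo ids here are guesses on my part and would need to match what 'dnf repolist' shows on a host with the ovirt-release package installed.)
dnf reposync --repoid=ovirt-4.4 --repoid=ovirt-4.4-dependencies \
    --download-metadata --newest-only -p /var/www/html/repos/
createrepo_c /var/www/html/repos/ovirt-4.4   # only needed if --download-metadata is not used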
Kind Regards
Simon...
4.5.0 beta compose delayed to April 4th 2022
by Sandro Bonazzola
The oVirt 4.5.0 beta compose has been delayed to next week, April 4th, as
ovirt-engine and gluster-ansible-role support for ansible-core 2.12 missed
today's deadline.
Test day has been rescheduled accordingly to April 5th.
Testers: please continue testing the released Alpha and providing feedback
https://trello.com/b/3FZ7gdhM/ovirt-450-test-day
A few packages that were meant to be shipped today as part of the beta
release have already been pushed to the testing repositories, so you can
already provide some feedback on beta sanity.
Known issues:
- hyperconverged deployment doesn't work due to missing updated
gluster-ansible-roles packages
- ovirt-engine is still using ansible 2.9.27 and wasn't updated from 4.5.0
Alpha
- ovirt-appliance and oVirt Node have not been built with current content
of the testing repos as beta has been rescheduled
Be aware that RHEL 8.6 Beta was released yesterday, so you can already
try running on top of it.
Rocky Linux announced they're already building an 8.6 beta too, so it may
be possible to start testing on top of that soon as well.
Professional Services, Integrators and Backup vendors: please run a test
session against your additional services, integrated solutions,
downstream rebuilds and backup solutions on the released Alpha and
report issues as soon as possible.
If you're not listed here:
https://ovirt.org/community/user-stories/users-and-providers.html
consider adding your company there.
If you're willing to help update the localization for oVirt 4.5.0, please
follow https://ovirt.org/develop/localization.html
If you're willing to help promote the oVirt 4.5.0 release, you can submit
your banner proposals for the oVirt home page and for the
social media advertising at https://github.com/oVirt/ovirt-site/issues no
later than April 5th.
As an alternative please consider submitting a case study as in
https://ovirt.org/community/user-stories/user-stories.html
Feature owners: please submit a presentation of your feature for the oVirt
YouTube channel (https://www.youtube.com/c/ovirtproject) no later than April
5th.
If you have a new feature requiring community feedback / testing, please
add your case under the "Test looking for volunteer" section no later than
April 4th.
Do you want to contribute to getting ready for this release?
Read more about the oVirt community at https://ovirt.org/community/ and join
the oVirt developers at https://ovirt.org/develop/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
No bootable device
by nicolas@devels.es
Hi,
We're running oVirt 4.4.8.6. We have uploaded a qcow2 image (metasploit
v.3, FWIW) using the GUI (Storage -> Disks -> Upload -> Start). The
image is in qcow2 format. No options on the right side were checked. The
upload went smoothly, so we now tried to attach the disk to a VM.
To do that, we opened the VM -> Disks -> Attach and selected the disk.
VirtIO-SCSI was chosen as the interface, and the disk was marked as OS, so
the "bootable" checkbox was selected.
The VM was later powered on, but when accessing the console the message
"No bootable device." appears. We're pretty sure this is a bootable
image, because it was tested on other virtualization infrastructure and
it boots well. We also tried to upload the image in RAW format but the
result is the same.
What are we missing here? Is anything else needed to make the disk
bootable?
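If it helps, we can also verify the format of the file we uploaded with qemu-img (path is just an example):
qemu-img info /path/to/metasploit3.qcow2   # reports 'file format: qcow2' or 'file format: raw'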
Thanks.
Wait for the engine to come up on the target vm
by Vladimir Belov
When I try to deploy the engine using the self-hosted engine approach, I get an error at the end of the installation.
[ INFO ] TASK [Wait for the engine to come up on the target VM]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.181846", "end": "2022-03-28 15:41:28.853150", "rc": 0, "start": "2022-03-28 15:41:28.671304", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5537 (Mon Mar 28 15:41:20 2022)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=5537 (Mon Mar 28 15:41:20 2022)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", \"hostname\": \"v2.test.ru\", \"host-id\": 1, \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": false, \"crc32\": \"4d2eeaea\", \"local_conf_timestamp\": 5537, \"host-ts\": 5537}, \"global_maintenance\": false}", "stdout_lines": ["{\"1\": {\
"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5537 (Mon Mar 28 15:41:20 2022)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=5537 (Mon Mar 28 15:41:20 2022)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", \"hostname\": \"v2.test.ru\", \"host-id\": 1, \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": false, \"crc32\": \"4d2eeaea\", \"local_conf_timestamp\": 5537, \"host-ts\": 5537}, \"global_maintenance\": false}"]}
After installation, checking the status of the engine gives:
Engine status: {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}
Looking at the vdsm logs, I found that the QEMU guest agent for some reason is not connected to the guest VM:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5400, in qemuGuestAgentShutdown
self._dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_GUEST_AGENT)
File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
ret = attr(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2517, in shutdownFlags
if ret == -1: raise libvirtError ('virDomainShutdownFlags() failed', dom=self)
libvirtError: Guest agent is not responding: QEMU guest agent is not connected
During the deployment phase of the engine, qemu-guest-agent was up and running.
Static addressing is used for the network settings.
It seems to me that this happens because the engine receives an IP address that does not match the entry in /etc/hosts, but I do not know how to fix it. Any help is welcome; I will provide any necessary logs. Thanks
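For what it's worth, these are the checks I plan to run from the host (the engine FQDN below is a placeholder):
getent hosts engine.example.org      # does name resolution match the static IP I configured?
ping -c3 engine.example.org
curl -k https://engine.example.org/ovirt-engine/services/health   # I believe this is the URL behind the liveliness check
hosted-engine --console              # log into the engine VM and compare 'ip addr' there with /etc/hosts on the host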
Guest OS console port
by jihwahn1018@naver.com
Hi,
When I connect to a VM via console.vv, I found that it allocates 2 ports for SPICE and 1 port for VNC per VM,
and these ports are allocated starting from 5900 on each host.
In vdsm it looks like ports 5900-6923 are reserved for VM consoles.
-----------------------
# firewall-cmd --info-service vdsm
vdsm
ports: 54321/tcp 5900-6923/tcp 49152-49216/tcp
protocols:
source-ports:
modules:
destination:
includes:
helpers:
-------------------
Can I change this range to a specific range like 8000-8080, or reduce it
(for example, to 5900-6000)?
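For example, would something like the following be supported? (Illustrative only; I understand vdsm manages both the firewalld service definition and the libvirt/qemu configuration, so I'm not sure such changes would survive a reconfigure or an upgrade.)
firewall-cmd --permanent --service=vdsm --remove-port=5900-6923/tcp
firewall-cmd --permanent --service=vdsm --add-port=5900-6000/tcp
firewall-cmd --reload
# and, on each host, the range qemu actually allocates display ports from:
#   /etc/libvirt/qemu.conf: remote_display_port_min = 5900
#                           remote_display_port_max = 6000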
Thanks.