Network filters in oVirt: zero-trust, IP and port filtering
by ravi k
Good people of the community,
Hope you are all doing well. We are exploring network filters in oVirt to check whether we can implement a zero-trust model at the network level. The intention is to have a filter that takes two parameters, IP and PORT, followed by a 'deny all' rule. We realized that none of the default network filters offer such functionality, and the only option is to write a custom filter.
Why isn't there such a filter in libvirt, and thereby in oVirt? Someone must already have thought about this use case, so I wonder whether network filters simply aren't meant to be used for implementing something like zero-trust.
Also, what are some practical use cases of the default filters that are provided? I was able to understand and use clean-traffic and clean-traffic-gateway.
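To make the idea concrete, this is roughly the custom filter we would try (an untested sketch; the filter name and the IP/PORT parameter names are placeholders we made up): accept TCP traffic to/from a given address and port, allow ARP so the permitted traffic works at all, and drop everything else. It would be defined with virsh on the hosts and then selected in the vNIC profile, with the parameters supplied per NIC:

cat > zero-trust-filter.xml <<'EOF'
<filter name='zero-trust-filter' chain='root'>
  <!-- ARP is needed for the allowed traffic to work at all -->
  <rule action='accept' direction='inout' priority='50'>
    <arp/>
  </rule>
  <!-- accept TCP to/from $IP on $PORT; both variables are supplied
       through the filter reference parameters on the vNIC -->
  <rule action='accept' direction='out' priority='100'>
    <tcp dstipaddr='$IP' dstportstart='$PORT'/>
  </rule>
  <rule action='accept' direction='in' priority='100'>
    <tcp srcipaddr='$IP' srcportstart='$PORT'/>
  </rule>
  <!-- deny-all catch-all -->
  <rule action='drop' direction='inout' priority='1000'/>
</filter>
EOF
virsh nwfilter-define zero-trust-filter.xml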
Regards,
ravi
2 years, 7 months
dnf update fails with oVirt 4.4 on CentOS 8 Stream due to ansible package conflicts.
by Daniel McCoshen
Hey all,
I'm running oVirt 4.4 in production (4.4.5-11-1.el8), and I'm attempting to update the OS on my hosts. The hosts are all CentOS 8 Stream, and dnf update is failing on all of them with the following output:
[root@ovirthost ~]# dnf update
Last metadata expiration check: 1:36:32 ago on Thu 17 Feb 2022 12:01:25 PM CST.
Error:
Problem: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch requires ansible, but none of the providers can be installed
- package ansible-2.9.27-2.el8.noarch conflicts with ansible-core > 2.11.0 provided by ansible-core-2.12.2-2.el8.x86_64
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.27-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.27-1.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.17-1.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.18-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.20-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.21-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.23-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.24-2.el8.noarch
- cannot install the best update candidate for package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
- cannot install the best update candidate for package ansible-2.9.27-2.el8.noarch
- package ansible-2.9.20-1.el8.noarch is filtered out by exclude filtering
- package ansible-2.9.16-1.el8.noarch is filtered out by exclude filtering
- package ansible-2.9.19-1.el8.noarch is filtered out by exclude filtering
- package ansible-2.9.23-1.el8.noarch is filtered out by exclude filtering
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
cockpit-ovirt-dashboard.noarch is at 0.15.1-1.el8, and it looks like the conflicting ansible-core package was added to the 8-stream repo two days ago. That's when I first noticed the issue, but it might be older. When the earlier issues with the CentOS 8 deprecation happened, I swapped out the repos on some of these hosts for the new ones, and have since added new hosts as well, using the updated repos. Both the hosts that were moved from the old repos and the ones created with the new repos are experiencing this issue.
ansible-core is being pulled from the CentOS 8 Stream AppStream repo, and the ansible package that cockpit-ovirt-dashboard.noarch is trying to use as a dependency is coming from ovirt-4.4-centos-ovirt44.
I'm tempted to blacklist ansible-core in my dnf conf, but that seems like a hacky workaround and not the actual fix here.
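In case it's useful to anyone else hitting this, the workaround I have in mind would look roughly like the following (just a sketch; the real fix is presumably an updated cockpit-ovirt-dashboard that works with ansible-core):

# one-off, for this transaction only
dnf update --exclude=ansible-core

# or persistently, by adding this line to /etc/dnf/dnf.conf
# (and removing it again once the oVirt repos catch up):
exclude=ansible-core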
Thanks,
Dan
2 years, 8 months
Console - VNC password is 12 characters long, only 8 permitted
by francesco@shellrent.com
Hi all,
I'm using websockify + noVNC to expose the VM console via browser, getting the graphicsconsoles ticket via the API. Everything works fine for every other host I have (more than 200): the console works both via the oVirt engine and via the browser. But for a single host (CentOS Stream release 8, oVirt 4.4.9) the console works only via the engine; when I try the connection via browser I get the following error (vdsm log of the host):
ERROR FINISH updateDevice error=unsupported configuration: VNC password is 12 characters long, only 8 permitted
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 124, in method
ret = func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 372, in updateDevice
return self.vm.updateDevice(params)
File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 3389, in updateDevice
return self._updateGraphicsDevice(params)
File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 3365, in _updateGraphicsDevice
params['params']
File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5169, in _setTicketForGraphicDev
self._dom.updateDeviceFlags(xmlutils.tostring(graphics), 0)
File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f
ret = attr(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python3.6/site-packages/libvirt.py", line 3244, in updateDeviceFlags
raise libvirtError('virDomainUpdateDeviceFlags() failed')
libvirt.libvirtError: unsupported configuration: VNC password is 12 characters long, only 8 permitted
The error is pretty much self-explanatory, but I can't figure out why it happens only on this server, and I wonder if I can set the length of the generated VNC password somewhere.
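For comparison purposes, the only other thing I can think to check is whether the relevant package versions on this host differ from the working ones, e.g. (purely a diagnostic sketch run on both a working and the failing host):

rpm -q vdsm libvirt-daemon qemu-kvm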
Thank you for your time,
Francesco
2 years, 8 months
Install hosted-engine - Task Get local VM IP failed
by florentl
Hi all,
I'm trying to install hosted-engine on node ovirt-node-ng-4.2.3-0.20180518.
Every time I get stuck on:
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed":
true, "cmd": "virsh -r net-dhcp-leases default | grep -i
00:16:3e:6c:5a:91 | awk '{ print $5 }' | cut -f1 -d'/'", "delta":
"0:00:00.108872", "end": "2018-06-01 11:17:34.421769", "rc": 0, "start":
"2018-06-01 11:17:34.312897", "stderr": "", "stderr_lines": [],
"stdout": "", "stdout_lines": []}
I tried with a static IP address and with DHCP, but both failed.
To be more specific, I installed three nodes and deployed GlusterFS with the wizard. I'm in a nested virtualization environment for this lab (VMware ESXi hypervisor).
My node IP is 192.168.176.40, and I want the hosted-engine VM to have 192.168.176.43.
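Would checking something like the following on the node, while the task is retrying, help narrow it down? (Just a sketch; the MAC address is the one from the error above.)

virsh -r net-list --all            # is the 'default' NAT network active?
virsh -r list --all                # is the local engine VM (HostedEngineLocal) running?
virsh -r net-dhcp-leases default   # does any lease show up for 00:16:3e:6c:5a:91?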
Thanks,
Florent
2 years, 8 months
Account on Zanata
by ちゃーりー
Hi,
I'm Yoshihiro Hayashi, just an oVirt user.
I found a mistake in the Japanese translation of the oVirt Web UI, and I'm going to fix it.
I would be grateful if someone could create a Zanata account for me.
Thank you,
Yoshihiro Hayashi
2 years, 8 months
Important changes to the oVirt Terraform Provider
by Janos Bonic
Dear oVirt community,
We are making sweeping and backwards-incompatible changes to the oVirt
Terraform provider. *We want your feedback before we make these changes.*
Here’s the short list of what we would like to change; please read the details below.
1. The current master branch will be renamed to legacy. The usage of
this provider will be phased out within Red Hat around the end / beginning
of next year. If you want to create a fork, we are happy to add a link to
your fork to the readme.
2. A new main branch will be created and a *new Terraform provider*
written from scratch on the basis of go-ovirt-client
<https://github.com/ovirt/go-ovirt-client>. (Preview here
<https://github.com/haveyoudebuggedit/terraform-provider-ovirt>) This
provider will only have limited functionality in its first release.
3. This new provider will be released to the Terraform registry, and
will have full test coverage and documentation. This provider will be
released as version v2.0.0 when ready to signal that it is built on the
Terraform SDK v2.
4. A copy of this new Terraform provider will be kept in the v1 branch
and backported to the Terraform SDK v1 for the benefit of the OpenShift
Installer <https://github.com/openshift/installer>. We will not tag any
releases, and we will not release this backported version in binary form.
5. We are hosting a *community call* on the 14th of October at 13:00 UTC
on this link <https://bluejeans.com/476587312/8047>. Please join to
provide feedback and suggest changes to this plan.
Why are we doing this?
The original Terraform provider
<https://github.com/EMSL-MSC/terraform-provider-ovirt> for oVirt was
written four years ago by @Maigard <https://github.com/Maigard> at EMSL-MSC
<http://github.com/EMSL-MSC/terraform-provider-ovirt>. The oVirt fork of
this provider is about 2 years old and went through rapid expansion, adding
a large number of features.
Unfortunately, this continuous rapid growth came at a price: the original test infrastructure deteriorated, and certain resources, especially virtual machine creation, ballooned to a size we feel has become unmaintainable.
If you tried to contribute to the Terraform provider recently, you may have noticed that our review process has become extremely slow. We can no longer run the original tests, and our end-to-end test suite is not integrated outside of the OpenShift CI system. Every change to the provider requires one of only 3 people to review the code and also run a manual test suite that is currently only runnable on one computer.
We also noticed an increasing number of bugs reported on OpenShift on
oVirt/RHV related to the Terraform provider.
Our original plan was to fix the test infrastructure and then slowly transition API calls to go-ovirt-client, but that resulted in a PR that is over 5000 lines of code
<https://github.com/oVirt/terraform-provider-ovirt/pull/277> and cannot in good conscience be merged in a single piece. Splitting it up is difficult and would likely result in broken functionality where test coverage is not present.
What are we changing for you, the users?
First of all, documentation. You can already preview the documentation here
<https://registry.terraform.io/providers/haveyoudebuggedit/ovirt/latest/docs>.
You will notice that the provider currently only supports a small set of
features. You can find the full list of features
<https://github.com/haveyoudebuggedit/terraform-provider-ovirt/milestone/1>
we are planning for the first release on GitHub. However, if you are using
resources like cluster creation, etc. these will currently not work and we
recommend sticking to the old provider for the time being.
The second big change will be how resources are treated. Instead of creating large resources that need to call several oVirt APIs, we will create resources that each call only one API. This will lead to fewer bugs. For example:
- ovirt_vm will create the VM, but not attach any disks or network
interfaces to it.
- ovirt_disk_attachment or ovirt_disk_attachments will attach a disk to
the VM.
- ovirt_nic will create a network interface.
- ovirt_vm_start will start the virtual machine when provisioned, and stop it when deprovisioned.
You can use the depends_on
<https://www.terraform.io/docs/language/meta-arguments/depends_on.html>
meta-argument to make sure disks and network interfaces are attached before
you start the VM. Alternatively, you can hot-plug network interfaces later.
For example:
resource "ovirt_vm" "test" {
cluster_id = "some-cluster-id"
template_id = "some-template-id"
}
resource "ovirt_disk" "test" {
storagedomain_id = "some-storage-domain-id"
format = "cow"
size = 512
alias = "test"
sparse = true
}
resource "ovirt_disk_attachment" "test" {
vm_id = ovirt_vm.test.id
disk_id = ovirt_disk.test.id
disk_interface = "virtio_scsi"
}
resource "ovirt_vm_start" "test" {
vm_id = ovirt_vm.test.id
depends_on = [ovirt_disk_attachment.test]
}
The next change is the availability of the provider on the Terraform
Registry. You will no longer have to download the binary. Instead, you will
be able to simply pull in the provider like this:
terraform {
  required_providers {
    ovirt = {
      source  = "ovirt/ovirt"
      version = "..."
    }
  }
}

provider "ovirt" {
  # Configuration options
}
The configuration options for the provider itself have also been greatly
expanded, see the preliminary documentation
<https://registry.terraform.io/providers/haveyoudebuggedit/ovirt/latest/docs>
for details.
What’s changing behind the scenes?
The new Terraform provider is a complete rewrite based on the
go-ovirt-client <https://github.com/ovirt/go-ovirt-client> library. The
single biggest advantage of this library is that it has built-in mocks for
all resources it supports. Having mocks allows us to run tests without
needing to spin up an oVirt instance. We have already configured GitHub
Actions
<https://github.com/haveyoudebuggedit/terraform-provider-ovirt/actions> on
the new provider and all changes are automatically checked against these
mocks.
We may decide to add an end-to-end test later, but for the foreseeable
future we will trust the correctness of the mocks to test community
contributions. This means that we will be able to merge changes much
quicker.
On the OpenShift side we will also switch to using the new provider, since
this is the primary motivation for the change. The OpenShift Installer uses
the legacy version 1 of the Terraform SDK, so we will maintain a version
1-compatible copy in the v1 branch, which the installer can pull in. It is
important to note, however, that the v1 branch will be a pure backport, we
will not develop it separately. Development will be focused on the version
in main that is being released to the Terraform Registry.
What does this mean to you, the contributors?
The current Terraform provider has several pull requests open
<https://github.com/oVirt/terraform-provider-ovirt/pulls>. Unfortunately, we currently do not have the capacity to properly vet and run our internal test suite against these changes. In contrast to the new Terraform provider, we do not have the working tests, linting, and code structure that would make merging changes easier.
We are very sorry to say that *these patches are unlikely to be merged*. We know that this is a terrible thing; you have put effort into writing them. Unfortunately, we do not see an alternative, as there are already numerous bugs on our radar and adding more code would not make the problem go away.
We want to hear your opinion
As the owners of the original Terraform provider we haven’t been keeping up
with reviewing your contributions and issues. Some are several months old
and haven’t received answers for a long time. We want to change that, we
want to hear from you. Please join our community round table around the
Terraform provider on the 14th of October at 13:00 UTC on this link
<https://bluejeans.com/476587312/8047>.
*We want to know: Which resources are the most important to you? How does
this change impact you? Can we make the transition smoother for you? Would
you do anything differently in the light of the issues described above?*
2 years, 8 months
NFS Synology NAS (DSM 7)
by Maton, Brett
Hi List,
I can't get oVirt 4.4.8.5-1.el8 (running on oVirt Node hosts) to connect to an NFS share on a Synology NAS.
I gave up trying to get the hosted engine deployed and put that on an iSCSI volume instead...
The directory being exported from the NAS is owned by vdsm/kvm (36:36).
Permissions I've tried: 0750, 0755, 0777.
NFS versions tried: auto / v3 / v4_0.
As others have mentioned regarding NFS, if I connect manually from the
host with
mount nas.mydomain.com:/volume1/ov_nas
It connects and works just fine.
If I try to add the share as a domain in oVirt I get
Operation Cancelled
Error while executing action Add Storage Connection: Permission settings on
the specified path do not allow access to the storage.
Verify permission settings on the specified storage path.
When tailing /var/log/messages on the oVirt host, I see this message appear (I changed the domain name for this post, so the dots might be transcoded in reality):
Aug 27 17:36:07 ov001 systemd[1]:
rhev-data\x2dcenter-mnt-nas.mydomain.com:_volume1_ov__nas.mount:
Succeeded.
The NAS is running the 'new' DSM 7, /etc/exports looks like this:
/volume1/ov_nas x.x.x.x(rw,async,no_root_squash,anonuid=36,anongid=36)
(reloaded with exportfs -ra)
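Would a manual write test as the vdsm user be a meaningful check here, since that is effectively what the permission check complains about? Something like this is what I had in mind (the mount point name is arbitrary):

mkdir -p /mnt/ov_nas_test
mount -t nfs nas.mydomain.com:/volume1/ov_nas /mnt/ov_nas_test
sudo -u vdsm touch /mnt/ov_nas_test/permcheck && echo "vdsm can write"
sudo -u vdsm rm -f /mnt/ov_nas_test/permcheck
umount /mnt/ov_nas_test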
Any suggestions appreciated.
Regards,
Brett
2 years, 8 months
VM hanging at sustained high throughput
by David Johnson
Hi ovirt gurus,
This is an interesting issue, one I never expected to have.
When I push high volumes of writes to my NAS, I can cause VMs to go into a paused state. I'm looking at this from a number of angles, including upgrades on the NAS appliance.
I can reproduce this problem at will running a CentOS 7.9 VM on oVirt 4.5.
*Questions:*
1. Is my analysis of the failure (below) reasonable/correct?
2. What am I looking for to validate this?
3. Is there a configuration that I can set to make it a little more robust
while I acquire the hardware to improve the NAS?
*Reproduction:*
Standard test of file write speed:
[root@cen-79-pgsql-01 ~]# dd if=/dev/zero of=./test bs=512k count=4096
oflag=direct
4096+0 records in
4096+0 records out
2147483648 bytes (2.1 GB) copied, 1.68431 s, 1.3 GB/s
Give it more data
[root@cen-79-pgsql-01 ~]# dd if=/dev/zero of=./test bs=512k count=12228
oflag=direct
12228+0 records in
12228+0 records out
6410993664 bytes (6.4 GB) copied, 7.22078 s, 888 MB/s
The odds are about 50/50 that 6 GB will kill the VM, but 100% when I hit 8
GB.
*Analysis:*
What I think is happening is that the intent cache on the NAS is on an SSD, and my VMs are pushing data about three times as fast as the SSD can handle. When the SSD queue builds up beyond a certain point, the NAS (which places reliability over speed) says "Whoah Nellie!", and the VM chokes.
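As a possible stopgap while I wait on better NAS hardware, would capping the VM's write throughput below what the SSD can absorb be a reasonable mitigation? My understanding is that oVirt exposes this as storage QoS / disk profiles; the raw libvirt equivalent from the host, purely as a sketch (the domain name, the device name vda, and the 500 MB/s cap are placeholders), would be something like:

virsh blkdeviotune cen-79-pgsql-01 vda --total-bytes-sec 524288000 --live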
*David Johnson*
2 years, 8 months
Hosted Engine deployment looks stuck in startup during the deployment
by Eugène Ngontang
Hi,
I'm using an AWS EC2 bare-metal instance to deploy RHV-M in order to create and test NVIDIA GPU VMs.
I'm trying to deploy a self-hosted engine, version 4.4.
I've set everything up to the point of the hosted-engine deployment, and the Hosted Engine deployment appears to be stuck at engine host startup, timing out many hours later.
I suspect a network startup issue but can't really and clearly identify it. During all this time the deployment process waits for the hosted engine to come up before it finishes; the hosted engine itself is up and running, and is still running now, but is not reachable.
Attached you will find:
- A screenshot before the timeout
- A screenshot after the timeout (fail)
- The answer file I appended to the hosted-engine command
> hosted-engine --deploy --4 --config-append=hosted-engine.conf
- The deployment log output
- The resulting answer file after the deployment.
I think the problem is at the network startup step, but as I don't have any explicit error/failure message, I can't tell.
Please can someone here advise?
Please let me know if you need any more information from me.
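In case it helps narrow this down, would output from something like the following on the host be useful? (A sketch; the bridge name assumes the deployment is still on the temporary NAT network, and the engine FQDN placeholder is whatever was given to the setup.)

virsh -r list --all                # is the local engine VM still running?
virsh -r net-dhcp-leases default   # which IP did it get on the NAT network?
ip a s virbr0                      # address of the NAT bridge on the host
ping -c 3 <engine FQDN>            # is the engine resolvable and reachable from the host?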
Best regards,
Eugène NG
Best regards,
Eugène NG
--
LesCDN <http://lescdn.com>
engontang(a)lescdn.com
------------------------------------------------------------
*Men need a leader, and a leader needs men! The habit does not make the monk, but when people see you, they judge you!*
2 years, 8 months
Multiple dependencies unresolved
by Andrea Chierici
Dear all,
lately the engine started notifying me about some "errors":
"Failed to check for available updates on host XYZ with message 'Task
Ensure Python3 is installed for CentOS/RHEL8 hosts failed to execute.
Please check logs for more details"
I do understand this is not something that can impact my cluster's stability, since it's only a matter of checking for updates, but it annoys me a lot anyway.
I checked the logs and apparently the issue is related to some repos
that are missing/unresolved.
Right now on my hosts I have these repos:
ovirt-release44-4.4.8.3-1.el8.noarch
epel-release-8-13.el8.noarch
centos-stream-release-8.6-1.el8.noarch
puppet5-release-5.0.0-5.el8.noarch
The problems come from:
Error: Failed to download metadata for repo 'ovirt-4.4-centos-gluster8':
Cannot prepare internal mirrorlist: No URLs in mirrorlist
Error: Failed to download metadata for repo 'ovirt-4.4-centos-opstools':
Cannot prepare internal mirrorlist: No URLs in mirrorlist
Error: Failed to download metadata for repo
'ovirt-4.4-openstack-victoria': Cannot download repomd.xml: Cannot
download repodata/repomd.xml: All mirrors were tried
If I disable these repos "yum update" can finish but then I get a large
number of unresolved dependencies and "problems":
Error:
Problem 1: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
requires ansible, but none of the providers can be installed
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.25-1.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.27-1.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.17-1.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.18-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.20-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.21-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.23-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.24-2.el8.noarch
- package ansible-2.9.27-2.el8.noarch conflicts with ansible-core >
2.11.0 provided by ansible-core-2.12.2-2.el8.x86_64
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.27-2.el8.noarch
- cannot install the best update candidate for package
cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
- cannot install the best update candidate for package
ansible-2.9.25-1.el8.noarch
- package ansible-2.9.20-1.el8.noarch is filtered out by exclude
filtering
Problem 2: package fence-agents-ibm-powervs-4.2.1-84.el8.noarch
requires fence-agents-common = 4.2.1-84.el8, but none of the providers
can be installed
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-84.el8.noarch
- cannot install the best update candidate for package
fence-agents-ibm-powervs-4.2.1-77.el8.noarch
- cannot install the best update candidate for package
fence-agents-common-4.2.1-77.el8.noarch
Problem 3: cannot install both fence-agents-common-4.2.1-88.el8.noarch
and fence-agents-common-4.2.1-84.el8.noarch
- package fence-agents-ibm-vpc-4.2.1-84.el8.noarch requires
fence-agents-common = 4.2.1-84.el8, but none of the providers can be
installed
- package fence-agents-amt-ws-4.2.1-88.el8.noarch requires
fence-agents-common >= 4.2.1-88.el8, but none of the providers can be
installed
- cannot install the best update candidate for package
fence-agents-ibm-vpc-4.2.1-77.el8.noarch
- cannot install the best update candidate for package
fence-agents-amt-ws-4.2.1-77.el8.noarch
Problem 4: problem with installed package
fence-agents-ibm-vpc-4.2.1-77.el8.noarch
- package fence-agents-ibm-vpc-4.2.1-77.el8.noarch requires
fence-agents-common = 4.2.1-77.el8, but none of the providers can be
installed
- package fence-agents-ibm-vpc-4.2.1-78.el8.noarch requires
fence-agents-common = 4.2.1-78.el8, but none of the providers can be
installed
- package fence-agents-ibm-vpc-4.2.1-82.el8.noarch requires
fence-agents-common = 4.2.1-82.el8, but none of the providers can be
installed
- package fence-agents-ibm-vpc-4.2.1-83.el8.noarch requires
fence-agents-common = 4.2.1-83.el8, but none of the providers can be
installed
- package fence-agents-ibm-vpc-4.2.1-84.el8.noarch requires
fence-agents-common = 4.2.1-84.el8, but none of the providers can be
installed
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-77.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-78.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-82.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-83.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-84.el8.noarch
- package fence-agents-apc-4.2.1-88.el8.noarch requires
fence-agents-common >= 4.2.1-88.el8, but none of the providers can be
installed
- cannot install the best update candidate for package
fence-agents-apc-4.2.1-77.el8.noarch
Problem 5: problem with installed package
fence-agents-ibm-powervs-4.2.1-77.el8.noarch
- package fence-agents-ibm-powervs-4.2.1-77.el8.noarch requires
fence-agents-common = 4.2.1-77.el8, but none of the providers can be
installed
- package fence-agents-ibm-powervs-4.2.1-78.el8.noarch requires
fence-agents-common = 4.2.1-78.el8, but none of the providers can be
installed
- package fence-agents-ibm-powervs-4.2.1-82.el8.noarch requires
fence-agents-common = 4.2.1-82.el8, but none of the providers can be
installed
- package fence-agents-ibm-powervs-4.2.1-83.el8.noarch requires
fence-agents-common = 4.2.1-83.el8, but none of the providers can be
installed
- package fence-agents-ibm-powervs-4.2.1-84.el8.noarch requires
fence-agents-common = 4.2.1-84.el8, but none of the providers can be
installed
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-77.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-78.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-82.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-83.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-84.el8.noarch
- package fence-agents-apc-snmp-4.2.1-88.el8.noarch requires
fence-agents-common >= 4.2.1-88.el8, but none of the providers can be
installed
- cannot install the best update candidate for package
fence-agents-apc-snmp-4.2.1-77.el8.noarch
My question is: since I am not willing to update anything right now and just want to get rid of the python3 error, should I disable these repos, even if I then get some dependency issues, or is there another solution?
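If disabling them is the way to go, would something like this be the clean way to do it (repo ids taken from the errors above), rather than editing the .repo files by hand?

dnf config-manager --set-disabled ovirt-4.4-centos-gluster8 ovirt-4.4-centos-opstools ovirt-4.4-openstack-victoria

# or only for a single run:
dnf update --disablerepo=ovirt-4.4-centos-gluster8 \
           --disablerepo=ovirt-4.4-centos-opstools \
           --disablerepo=ovirt-4.4-openstack-victoria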
Any suggestion?
Thanks,
Andrea
--
Andrea Chierici - INFN-CNAF
Viale Berti Pichat 6/2, 40127 BOLOGNA
Office Tel: +39 051 2095463
SkypeID ataruz
--
2 years, 8 months