Engine update error from 4.4.2 to 4.4.3
by Gianluca Cecchi
Hello,
is this still the correct command for updating between minor releases in
4.4.x?
yum update ovirt\*setup\*
I ask because my standalone external engine is currently at
ovirt-engine-4.4.2.6-1.el8.noarch
with the following repositories enabled:
[root@ovmgr1 ~]# yum repolist
repo id                                                                      repo name
AppStream                                                                    CentOS-8 - AppStream
BaseOS                                                                       CentOS-8 - Base
PowerTools                                                                   CentOS-8 - PowerTools
extras                                                                       CentOS-8 - Extras
ovirt-4.4                                                                    Latest oVirt 4.4 Release
ovirt-4.4-advanced-virtualization                                            Advanced Virtualization packages for x86_64
ovirt-4.4-centos-gluster7                                                    CentOS-8 - Gluster 7
ovirt-4.4-centos-opstools                                                    CentOS-8 - OpsTools - collectd
ovirt-4.4-centos-ovirt44                                                     CentOS-8 - oVirt 4.4
ovirt-4.4-copr:copr.fedorainfracloud.org:mdbarroso:ovsdbapp                  Copr repo for ovsdbapp owned by mdbarroso
ovirt-4.4-copr:copr.fedorainfracloud.org:networkmanager:NetworkManager-1.22  Copr repo for NetworkManager-1.22 owned by networkmanager
ovirt-4.4-copr:copr.fedorainfracloud.org:nmstate:nmstate-0.2                 Copr repo for nmstate-stable owned by nmstate
ovirt-4.4-copr:copr.fedorainfracloud.org:sac:gluster-ansible                 Copr repo for gluster-ansible owned by sac
ovirt-4.4-copr:copr.fedorainfracloud.org:sbonazzo:EL8_collection             Copr repo for EL8_collection owned by sbonazzo
ovirt-4.4-epel                                                               Extra Packages for Enterprise Linux 8 - x86_64
ovirt-4.4-virtio-win-latest                                                  virtio-win builds roughly matching what will be shipped in upcoming RHEL
[root@ovmgr1 ~]#
Contents of /etc/yum.repos.d/ovirt-4.4.repo:
[ovirt-4.4]
name=Latest oVirt 4.4 Release
#baseurl=https://resources.ovirt.org/pub/ovirt-4.4/rpm/el$releasever/
mirrorlist=https://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-4.4-el$releasever
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.4
But if I run the command above I get:
[root@ovmgr1 ~]# yum update ovirt\*setup\*
Last metadata expiration check: 0:39:59 ago on Wed 11 Nov 2020 09:02:05 AM
CET.
Error:
Problem 1: package
ovirt-engine-setup-plugin-ovirt-engine-4.4.3.11-1.el8.noarch requires
ovirt-engine >= 4.4.0, but none of the providers can be installed
- package ovirt-engine-4.4.2.6-1.el8.noarch requires ovirt-ansible-roles
>= 1.2.0, but none of the providers can be installed
- package ovirt-ansible-collection-1.2.1-1.el8.noarch obsoletes
ovirt-ansible-roles provided by ovirt-ansible-roles-1.2.3-1.el8.noarch
- cannot install the best update candidate for package
ovirt-engine-setup-plugin-ovirt-engine-4.4.2.6-1.el8.noarch
- cannot install the best update candidate for package
ovirt-ansible-cluster-upgrade-1.2.3-1.el8.noarch
- package ovirt-engine-4.4.0.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.1-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.10-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.2-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.4-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.5-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.6-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.7-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.8-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.10-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.11-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.4-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.5-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.6-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.7-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.8-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.9-1.el8.noarch is filtered out by exclude
filtering
Problem 2: package
ovirt-engine-setup-plugin-ovirt-engine-4.4.3.11-1.el8.noarch requires
ovirt-engine >= 4.4.0, but none of the providers can be installed
- package ovirt-engine-4.4.2.6-1.el8.noarch requires ovirt-ansible-roles
>= 1.2.0, but none of the providers can be installed
- package ovirt-engine-setup-4.4.3.11-1.el8.noarch requires
ovirt-engine-setup-plugin-ovirt-engine = 4.4.3.11-1.el8, but none of the
providers can be installed
- package ovirt-ansible-collection-1.2.1-1.el8.noarch obsoletes
ovirt-ansible-roles provided by ovirt-ansible-roles-1.2.3-1.el8.noarch
- cannot install the best update candidate for package
ovirt-engine-setup-4.4.2.6-1.el8.noarch
- cannot install the best update candidate for package
ovirt-ansible-disaster-recovery-1.3.0-1.el8.noarch
- package ovirt-engine-4.4.0.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.1-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.10-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.2-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.4-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.5-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.6-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.7-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.8-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.10-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.11-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.4-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.5-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.6-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.7-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.8-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.9-1.el8.noarch is filtered out by exclude
filtering
Problem 3: package
ovirt-engine-setup-plugin-ovirt-engine-4.4.3.11-1.el8.noarch requires
ovirt-engine >= 4.4.0, but none of the providers can be installed
- package ovirt-engine-4.4.2.6-1.el8.noarch requires ovirt-ansible-roles
>= 1.2.0, but none of the providers can be installed
- package ovirt-engine-setup-plugin-cinderlib-4.4.3.11-1.el8.noarch
requires ovirt-engine-setup-plugin-ovirt-engine = 4.4.3.11-1.el8, but none
of the providers can be installed
- package ovirt-ansible-collection-1.2.1-1.el8.noarch obsoletes
ovirt-ansible-roles provided by ovirt-ansible-roles-1.2.3-1.el8.noarch
- cannot install the best update candidate for package
ovirt-engine-setup-plugin-cinderlib-4.4.2.6-1.el8.noarch
- cannot install the best update candidate for package
ovirt-ansible-engine-setup-1.2.4-1.el8.noarch
- package ovirt-engine-4.4.0.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.1-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.10-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.2-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.4-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.5-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.6-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.7-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.8-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.10-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.11-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.4-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.5-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.6-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.7-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.8-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.9-1.el8.noarch is filtered out by exclude
filtering
Problem 4: package
ovirt-engine-setup-plugin-ovirt-engine-4.4.3.11-1.el8.noarch requires
ovirt-engine >= 4.4.0, but none of the providers can be installed
- package ovirt-engine-4.4.2.6-1.el8.noarch requires ovirt-ansible-roles
>= 1.2.0, but none of the providers can be installed
- package ovirt-engine-setup-plugin-imageio-4.4.3.11-1.el8.noarch
requires ovirt-engine-setup-plugin-ovirt-engine = 4.4.3.11-1.el8, but none
of the providers can be installed
- package ovirt-ansible-collection-1.2.1-1.el8.noarch obsoletes
ovirt-ansible-roles provided by ovirt-ansible-roles-1.2.3-1.el8.noarch
- cannot install the best update candidate for package
ovirt-engine-setup-plugin-imageio-4.4.2.6-1.el8.noarch
- cannot install the best update candidate for package
ovirt-ansible-hosted-engine-setup-1.1.8-1.el8.noarch
- package ovirt-engine-4.4.0.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.1-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.10-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.2-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.4-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.5-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.6-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.7-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.8-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.10-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.11-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.4-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.5-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.6-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.7-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.8-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.9-1.el8.noarch is filtered out by exclude
filtering
Problem 5: package
ovirt-engine-setup-plugin-ovirt-engine-4.4.3.11-1.el8.noarch requires
ovirt-engine >= 4.4.0, but none of the providers can be installed
- package ovirt-engine-4.4.2.6-1.el8.noarch requires ovirt-ansible-roles
>= 1.2.0, but none of the providers can be installed
- package
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.3.11-1.el8.noarch
requires ovirt-engine-setup-plugin-ovirt-engine = 4.4.3.11-1.el8, but none
of the providers can be installed
- package ovirt-ansible-collection-1.2.1-1.el8.noarch obsoletes
ovirt-ansible-roles provided by ovirt-ansible-roles-1.2.3-1.el8.noarch
- cannot install the best update candidate for package
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.2.6-1.el8.noarch
- cannot install the best update candidate for package
ovirt-ansible-image-template-1.2.2-1.el8.noarch
- package ovirt-engine-4.4.0.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.1-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.10-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.2-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.4-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.5-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.6-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.7-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.8-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.10-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.11-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.4-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.5-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.6-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.7-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.8-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.9-1.el8.noarch is filtered out by exclude
filtering
Problem 6: problem with installed package ovirt-engine-4.4.2.6-1.el8.noarch
- package ovirt-engine-4.4.2.6-1.el8.noarch requires ovirt-ansible-roles
>= 1.2.0, but none of the providers can be installed
- package ovirt-ansible-collection-1.2.1-1.el8.noarch obsoletes
ovirt-ansible-roles provided by ovirt-ansible-roles-1.2.3-1.el8.noarch
- cannot install the best update candidate for package
ovirt-ansible-infra-1.2.2-1.el8.noarch
Problem 7: problem with installed package
ovirt-engine-ui-extensions-1.2.3-1.el8.noarch
- package ovirt-engine-ui-extensions-1.2.3-1.el8.noarch requires
ovirt-ansible-cluster-upgrade >= 1.1.12, but none of the providers can be
installed
- package ovirt-ansible-collection-1.2.1-1.el8.noarch obsoletes
ovirt-ansible-cluster-upgrade provided by
ovirt-ansible-cluster-upgrade-1.2.3-1.el8.noarch
- package ovirt-ansible-collection-1.2.1-1.el8.noarch obsoletes
ovirt-ansible-cluster-upgrade provided by
ovirt-ansible-cluster-upgrade-1.2.2-1.el8.noarch
- cannot install the best update candidate for package
ovirt-ansible-manageiq-1.2.1-1.el8.noarch
Problem 8: problem with installed package
ovirt-engine-setup-plugin-ovirt-engine-4.4.2.6-1.el8.noarch
- package ovirt-engine-setup-plugin-ovirt-engine-4.4.3.11-1.el8.noarch
requires ovirt-engine >= 4.4.0, but none of the providers can be installed
- package ovirt-engine-setup-plugin-ovirt-engine-4.4.2.6-1.el8.noarch
requires ovirt-engine-setup-plugin-ovirt-engine-common = 4.4.2.6-1.el8, but
none of the providers can be installed
- package ovirt-engine-setup-plugin-ovirt-engine-4.4.3.10-1.el8.noarch
requires ovirt-engine-setup-plugin-ovirt-engine-common = 4.4.3.10-1.el8,
but none of the providers can be installed
- package ovirt-engine-setup-plugin-ovirt-engine-4.4.3.3-1.el8.noarch
requires ovirt-engine-setup-plugin-ovirt-engine-common = 4.4.3.3-1.el8, but
none of the providers can be installed
- package ovirt-engine-setup-plugin-ovirt-engine-4.4.3.4-1.el8.noarch
requires ovirt-engine-setup-plugin-ovirt-engine-common = 4.4.3.4-1.el8, but
none of the providers can be installed
- package ovirt-engine-setup-plugin-ovirt-engine-4.4.3.5-1.el8.noarch
requires ovirt-engine-setup-plugin-ovirt-engine-common = 4.4.3.5-1.el8, but
none of the providers can be installed
- package ovirt-engine-setup-plugin-ovirt-engine-4.4.3.6-1.el8.noarch
requires ovirt-engine-setup-plugin-ovirt-engine-common = 4.4.3.6-1.el8, but
none of the providers can be installed
- package ovirt-engine-setup-plugin-ovirt-engine-4.4.3.7-1.el8.noarch
requires ovirt-engine-setup-plugin-ovirt-engine-common = 4.4.3.7-1.el8, but
none of the providers can be installed
- package ovirt-engine-setup-plugin-ovirt-engine-4.4.3.8-1.el8.noarch
requires ovirt-engine-setup-plugin-ovirt-engine-common = 4.4.3.8-1.el8, but
none of the providers can be installed
- package ovirt-engine-setup-plugin-ovirt-engine-4.4.3.9-1.el8.noarch
requires ovirt-engine-setup-plugin-ovirt-engine-common = 4.4.3.9-1.el8, but
none of the providers can be installed
- package ovirt-engine-4.4.2.6-1.el8.noarch requires ovirt-ansible-roles
>= 1.2.0, but none of the providers can be installed
- cannot install both
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.11-1.el8.noarch and
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.2.6-1.el8.noarch
- cannot install both
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.11-1.el8.noarch and
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.10-1.el8.noarch
- cannot install both
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.3-1.el8.noarch and
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.11-1.el8.noarch
- cannot install both
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.4-1.el8.noarch and
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.11-1.el8.noarch
- cannot install both
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.5-1.el8.noarch and
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.11-1.el8.noarch
- cannot install both
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.6-1.el8.noarch and
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.11-1.el8.noarch
- cannot install both
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.7-1.el8.noarch and
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.11-1.el8.noarch
- cannot install both
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.8-1.el8.noarch and
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.11-1.el8.noarch
- cannot install both
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.9-1.el8.noarch and
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.3.11-1.el8.noarch
- package ovirt-ansible-collection-1.2.1-1.el8.noarch obsoletes
ovirt-ansible-roles provided by ovirt-ansible-roles-1.2.3-1.el8.noarch
- cannot install the best update candidate for package
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.2.6-1.el8.noarch
- cannot install the best update candidate for package
ovirt-ansible-repositories-1.2.5-1.el8.noarch
- package ovirt-engine-4.4.0.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.1-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.10-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.2-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.4-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.5-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.6-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.7-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.1.8-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.10-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.11-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.3-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.4-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.5-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.6-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.7-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.8-1.el8.noarch is filtered out by exclude
filtering
- package ovirt-engine-4.4.3.9-1.el8.noarch is filtered out by exclude
filtering
Problem 9: problem with installed package
ovirt-engine-webadmin-portal-4.4.2.6-1.el8.noarch
- package ovirt-engine-webadmin-portal-4.4.2.6-1.el8.noarch requires
ovirt-engine = 4.4.2.6-1.el8, but none of the providers can be installed
- package ovirt-engine-4.4.2.6-1.el8.noarch requires ovirt-ansible-roles
>= 1.2.0, but none of the providers can be installed
- package ovirt-ansible-collection-1.2.1-1.el8.noarch obsoletes
ovirt-ansible-roles provided by ovirt-ansible-roles-1.2.3-1.el8.noarch
- cannot install the best update candidate for package
ovirt-ansible-roles-1.2.3-1.el8.noarch
Problem 10: problem with installed package
ovirt-engine-tools-4.4.2.6-1.el8.noarch
- package ovirt-engine-tools-4.4.2.6-1.el8.noarch requires ovirt-engine =
4.4.2.6-1.el8, but none of the providers can be installed
- package ovirt-engine-4.4.2.6-1.el8.noarch requires ovirt-ansible-roles
>= 1.2.0, but none of the providers can be installed
- package ovirt-ansible-collection-1.2.1-1.el8.noarch obsoletes
ovirt-ansible-roles provided by ovirt-ansible-roles-1.2.3-1.el8.noarch
- cannot install the best update candidate for package
ovirt-ansible-shutdown-env-1.1.0-1.el8.noarch
Problem 11: problem with installed package
ovirt-engine-restapi-4.4.2.6-1.el8.noarch
- package ovirt-engine-restapi-4.4.2.6-1.el8.noarch requires ovirt-engine
= 4.4.2.6-1.el8, but none of the providers can be installed
- package ovirt-engine-4.4.2.6-1.el8.noarch requires ovirt-ansible-roles
>= 1.2.0, but none of the providers can be installed
- package ovirt-ansible-collection-1.2.1-1.el8.noarch obsoletes
ovirt-ansible-roles provided by ovirt-ansible-roles-1.2.3-1.el8.noarch
- cannot install the best update candidate for package
ovirt-ansible-vm-infra-1.2.3-1.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting
packages or '--skip-broken' to skip uninstallable packages or '--nobest' to
use not only best candidate packages)
[root@ovmgr1 ~]#
Thanks,
Gianluca
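For what it's worth, the dependency problems above all trace back to
ovirt-ansible-collection-1.2.1 obsoleting the separate ovirt-ansible-* role
packages that ovirt-engine-4.4.2.6 still requires. A hedged command sketch
following the solver's own hints (whether --allowerasing or --nobest is the
right choice here is an assumption to verify against the 4.4.3 release notes):

```shell
# Sketch only, based on the dnf solver hints printed above; not a confirmed fix.
yum clean all
yum update ovirt\*setup\* --allowerasing  # let the obsoleting collection replace the old role packages
engine-setup                              # standard minor-release upgrade step
yum update                                # then update the remaining engine packages
```

These commands only make sense on the engine machine itself, so no standalone
expected output is shown.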
[ANN] oVirt 4.4.4 First Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.4 First Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.4
First Release Candidate for testing, as of November 12th, 2020.
This update is the fourth in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps, if already performed while upgrading from 4.4.1 to 4.4.2 GA. These
are only required to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enter emergency mode after upgrading to latest build
If the root file system on your hosts is on a multipath device, be aware
that after upgrading from 4.4.1 to 4.4.4 the host may enter emergency mode.
To prevent this, be sure to upgrade the oVirt Engine first, then on your
hosts:
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode
   (if rebooted).
2. Reboot.
3. Upgrade to 4.4.4 (redeploy in case of already being on 4.4.4).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
   place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to
   rebuild the initramfs with the correct filter configuration.
6. Reboot.
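On a plain EL8 host (not oVirt Node), those steps might look roughly like
the following; the lvm.conf edit and the exact filter line are
environment-specific assumptions, so treat this as a sketch:

```shell
# Hedged sketch of the host-side steps above; adjust for your environment.
# 1. While still on 4.4.1 (or in emergency mode), remove the current LVM
#    filter, e.g. by commenting out the filter= line in /etc/lvm/lvm.conf.
# 2. Reboot:
reboot
# 3. Upgrade the host to 4.4.4, then
# 4. confirm a new filter is in place:
vdsm-tool config-lvm-filter
# 5. Only if not using oVirt Node: rebuild the initramfs so it carries the
#    correct multipath/filter configuration:
dracut --force --add multipath
# 6. Reboot again to boot with the rebuilt initramfs:
reboot
```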
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
  on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
  the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
  <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
  <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.4 release highlights:
http://www.ovirt.org/release/4.4.4/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.4/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Re: sshd_config AuthorizedKeysFile
by Yedidyah Bar David
On Thu, Nov 12, 2020 at 11:44 AM Angus Clarke <angus(a)charworth.com> wrote:
>
> Righto
>
> For sure the ssh-copy-id is not happy either - in 20something years I've never used this before, I've always just manipulated files.
>
> I did start raising a bug ... missed some field, had to click back and lost all the previously inputted information - I forgot about this ...
>
>
> Overall, tracing the issue is the thing that took too long, there was some message in the hosted-engine-deployment logs (one of the logs from the engine VM itself) which said something like "check you ssh host keys" (sorry, I've lost that information now) - it would have been useful to see it say something about editing the file /root/.ssh/authorized_keys on the KVM host prior to this.
>
> I didn't spend too long debugging the kvm-add-host-to-cluster issue - I didn't find too much in the way of obvious errors in logs on the 2nd KVM host for that either; rather - having just realised and resolved the hosted-engine deployment issue - I guessed the same response to the add-kvm-host to cluster issue which resolved that too.
>
> So maybe a bit of extra logging on the matter would be a great way forwards, or with such few use cases (am I really the only one to manipulate AuthorizedKeysFile? - wow!) then no action at all might be appropriate.
If you still have the logs, please file a bug and attach them, and
perhaps point out what you'd expect there that is missing (or just let
us look at them, and if we decide that the information there is
enough, we'll ask you, after finding the most relevant line).
Generally speaking, and assuming your only problem was in "Add host to
engine" flow (specifically also during hosted-engine deploy), you
should find these logs in the _engine_ machine, in
/var/log/ovirt-engine (and subdirs). The deploy process also tries to
copy these logs to the _host_, under
/var/log/ovirt-hosted-engine-setup, and this part was enhanced
somewhat in 4.4.3 (see https://bugzilla.redhat.com/1844965 ).
Best regards,
>
> Cheers
> Angus
>
> # ls .ssh
> ls: cannot access .ssh: No such file or directory
> # ssh-copy-id server2
>
> /usr/bin/ssh-copy-id: ERROR: failed to open ID file '/root/.pub': No such file or directory
> (to install the contents of '/root/.pub' anyway, look at the -f option)
>
> ________________________________
> From: Martin Perina <mperina(a)redhat.com>
> Sent: 12 November 2020 09:43
> To: Angus Clarke <angus(a)charworth.com>
> Cc: users(a)ovirt.org <users(a)ovirt.org>; Dana Elfassy <delfassy(a)redhat.com>; Yedidyah Bar David <didi(a)redhat.com>
> Subject: Re: [ovirt-users] sshd_config AuthorizedKeysFile
>
> Hi,
>
> could you please try if ssh-copy-id works with your non-standard sshd configuration? Because last time I've checked I haven't noticed that behavior and keys were always added to $HOME/.ssh/authorized_keys
>
> So feel free to create a bug for that, but up until now you are the first user using this non-standard configuration ...
>
> Regards,
> Martin
>
> On Thu, Nov 12, 2020 at 9:00 AM Angus Clarke <angus(a)charworth.com> wrote:
>
> Hello
>
> Sharing for anyone who needs it, this was carried out on OL7, they use ovirt 4.3
>
> In short: both the hosted-engine deployment routine and the host add to cluster routine distribute public ssh keys to /root/.ssh/authorized_keys regardless of the AuthorizedKeysFile setting in /etc/ssh/sshd_config. Both routines fail if AuthorizedKeysfile is not default.
>
>
> The hosted-engine setup assumes AuthorizedKeysFile to be default (~/.ssh/authorized_keys) and creates a public key there, instead of following the sshd_config directive. The setup fails on the back of this.
>
> Once I commented this out of sshd_config file (assumes default) and restarted sshd on the KVM host that was running the hosted-engine deployment, the hosted-engine setup completed successfully.
>
>
> Similarly, I could not deploy a second KVM host to the compute cluster until I had altered this setting on that 2nd KVM host - presumably that process has some similar routine that unwittingly writes keys to ~/.ssh/authorized_keys.
>
> HTH
> Angus
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/UMJ4Y622RAL...
>
>
>
> --
> Martin Perina
> Manager, Software Engineering
> Red Hat Czech s.r.o.
--
Didi
Re: sshd_config AuthorizedKeysFile
by Martin Perina
Hi,
could you please check whether ssh-copy-id works with your non-standard sshd
configuration? The last time I checked I didn't notice that behavior, and
keys were always added to $HOME/.ssh/authorized_keys.
So feel free to create a bug for that, but up until now you are the first
user we have seen using this non-standard configuration ...
Regards,
Martin
On Thu, Nov 12, 2020 at 9:00 AM Angus Clarke <angus(a)charworth.com> wrote:
> Hello
>
> Sharing for anyone who needs it, this was carried out on OL7, they use
> ovirt 4.3
>
> In short: both the hosted-engine deployment routine and the host add to
> cluster routine distribute public ssh keys to /root/.ssh/authorized_keys
> regardless of the AuthorizedKeysFile setting in /etc/ssh/sshd_config. Both
> routines fail if AuthorizedKeysFile is not at its default.
>
>
> The hosted-engine setup assumes AuthorizedKeysFile to be default
> (~/.ssh/authorized_keys) and creates a public key there, instead of
> following the sshd_config directive. The setup fails on the back of this.
>
> Once I commented this out of the sshd_config file (so sshd assumes the default) and
> restarted sshd on the KVM host that was running the hosted-engine
> deployment, the hosted-engine setup completed successfully.
>
>
> Similarly, I could not deploy a second KVM host to the compute cluster
> until I had altered this setting on that 2nd KVM host - presumably that
> process has some similar routine that unwittingly writes keys to
> ~/.ssh/authorized_keys.
>
> HTH
> Angus
>
--
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
4 years
Re: Cluster compatibility version 4.5 on oVirt 4.4
by Christopher Law
I should have pointed out that in my case the hosted engine failed to install. After the Local VM came up, the host was marked as non-operational because the cluster (which I had given a custom name via ansible input) was set to compatibility 4.5 and was not changeable. I have no idea why the cluster was created at 4.5 if that is not supposed to be available yet, or whether it is marked for some sort of future release. This caused the host to be non-operational with a warning that the “host” needed upgrading, so perhaps it’s a slightly different issue to the one faced below.
If you have faced this issue, though, I would appreciate some advice on how to resolve it. I noticed the default cluster was marked as 4.4. My plan is therefore to re-install the hosted engine, use the “Default” datacentre and cluster names, and change these later.
From: Christopher Law <chris(a)chrislaw.me>
Sent: 12 November 2020 09:21
To: Ritesh Chikatwar <rchikatw(a)redhat.com>; shadow emy <shadow.emy1(a)gmail.com>
Cc: users <users(a)ovirt.org>
Subject: [ovirt-users] Re: Cluster compatibility version 4.5 on oVirt 4.4
I’ve also faced this issue, which is surprising since I used an ovirt node download for 4.4.
How did you resolve it? How can you upgrade to CentOS 8.3 or downgrade the cluster?
Thanks,
Chris.
From: Ritesh Chikatwar <rchikatw(a)redhat.com<mailto:rchikatw@redhat.com>>
Sent: 12 November 2020 05:10
To: shadow emy <shadow.emy1(a)gmail.com<mailto:shadow.emy1@gmail.com>>
Cc: users <users(a)ovirt.org<mailto:users@ovirt.org>>
Subject: [ovirt-users] Re: Cluster compatibility version 4.5 on oVirt 4.4
Hello,
To upgrade to cluster 4.5 you need an 8.3 host.
I guess CentOS 8.3 is not available yet; I am not sure about its availability.
If it is available, please make sure your host is upgraded to 8.3.
On Thu, Nov 12, 2020, 5:38 AM shadow emy <shadow.emy1(a)gmail.com<mailto:shadow.emy1@gmail.com>> wrote:
Just to confirm, I face a similar problem.
Yes, I saw that warning too: "Upgrade Cluster Compatibility Level" to upgrade the cluster to version 4.5.
Though when I try to do that, there are a lot of errors.
In GUI :
Error while executing action: Cannot change Cluster Compatibility Version to higher version when there are active Hosts with lower version.
-Please move Host host1, host2, host3 with lower version to maintenance first.
In engine.log :
WARN [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-163) [2c681b74-8666-4f2f-b2e0-6b20e98f417e] Validation of action 'UpdateCluster' failed for user admin@internal-authz. Reasons: VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,$host host1, host2, host3,CLUSTER_CANNOT_UPDATE_COMPATIBILITY_VERSION_WITH_LOWER_HOSTS
I did not find any documentation for 4.5 cluster compatibility, so I would also like to understand why that option is present there.
Will it be used when oVirt 4.5.x is released?
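For reference, the validation error above says the hosts must go to maintenance first; that can also be done through the REST API before retrying the cluster upgrade. A hedged sketch (engine URL, credentials and host ID are all placeholders):

```shell
# Put one host into maintenance via the engine REST API, then repeat
# for each host before raising the cluster compatibility version.
curl -s -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' -H 'Version: 4' \
  -X POST -d '<action/>' \
  'https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/deactivate'
```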
_______________________________________________
Users mailing list -- users(a)ovirt.org<mailto:users@ovirt.org>
To unsubscribe send an email to users-leave(a)ovirt.org<mailto:users-leave@ovirt.org>
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3UGQ6HPT2HT...
4 years
Cluster compatibility version 4.5 on oVirt 4.4
by tferic@swissonline.ch
Hi
Today, I upgraded oVirt from 4.3 to 4.4.3.
After the upgrade, I upgraded the compatibility from 4.3 to 4.4.
I noticed that the cluster config is offering me another upgrade of the
compatibility to version 4.5.
Up until now, I was under the impression that the compatibility version
must match the oVirt version.
I am now reluctant to upgrade the compatibility to 4.5, while my oVirt
version is still at 4.4 (there is no oVirt 4.5 at this time).
Is it safe to upgrade the compatibility in any case, or are there
certain circumstances where we should refrain from upgrading it?
I wasn't able to find anything in the documentation.
Kind regards
Toni Feric
4 years
Re: sshd_config AuthorizedKeysFile
by Yedidyah Bar David
On Thu, Nov 12, 2020 at 10:01 AM Angus Clarke <angus(a)charworth.com> wrote:
>
> Hello
>
> Sharing for anyone who needs it, this was carried out on OL7, they use ovirt 4.3
>
> In short: both the hosted-engine deployment routine and the host-add-to-cluster routine distribute public ssh keys to /root/.ssh/authorized_keys regardless of the AuthorizedKeysFile setting in /etc/ssh/sshd_config. Both routines fail if AuthorizedKeysFile is not at its default.
>
>
> The hosted-engine setup assumes AuthorizedKeysFile to be default (~/.ssh/authorized_keys) and creates a public key there, instead of following the sshd_config directive. The setup fails on the back of this.
>
> Once I commented this out of the sshd_config file (so sshd assumes the default) and
>
>
> Similarly, I could not deploy a second KVM host to the compute cluster until I had altered this setting on that 2nd KVM host - presumably that process has some similar routine that unwittingly writes keys to ~/.ssh/authorized_keys.
Thanks for the report.
Would you like to open one or two bugs about this?
I think it's just a bug, though - from searching the relevant source - in the
code that adds a host to the engine. This code is also used during hosted-engine
deploy.
We also have code there to add lines to this file on the appliance (engine
vm image), but I do not believe users will work so hard as to update the
image before deploy.
So one bug is probably enough. To make sure, please include there all
relevant details about how "they" (your customer?) configure their
machines - e.g. is it only during their installation (image/PXE/etc.)
or also routinely (puppet etc.).
I admit I am not sure what the expected behavior should be, though:
An admin can run sshd with a custom file. So should we also check that?
Perhaps it's enough if we allow the admin to set a custom location for
oVirt as well, instead of trying to guess. And make sure that the failure
error message is clear and unique enough that people searching the
net for it find your bug, and so can find how to configure it :-)
Best regards,
--
Didi
4 years
sshd_config AuthorizedKeysFile
by Angus Clarke
Hello
Sharing for anyone who needs it, this was carried out on OL7, they use ovirt 4.3
In short: both the hosted-engine deployment routine and the host-add-to-cluster routine distribute public ssh keys to /root/.ssh/authorized_keys regardless of the AuthorizedKeysFile setting in /etc/ssh/sshd_config. Both routines fail if AuthorizedKeysFile is not at its default.
The hosted-engine setup assumes AuthorizedKeysFile to be default (~/.ssh/authorized_keys) and creates a public key there, instead of following the sshd_config directive. The setup fails on the back of this.
Once I commented this out of the sshd_config file (so sshd assumes the default) and restarted sshd on the KVM host that was running the hosted-engine deployment, the hosted-engine setup completed successfully.
Similarly, I could not deploy a second KVM host to the compute cluster until I had altered this setting on that 2nd KVM host - presumably that process has some similar routine that unwittingly writes keys to ~/.ssh/authorized_keys.
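As a sketch, the workaround amounts to the following (paths assumed to be the stock ones; back up sshd_config first):

```shell
# Comment out any custom AuthorizedKeysFile directive so sshd falls back
# to the default ~/.ssh/authorized_keys, then restart sshd.
sed -i.bak 's/^[[:space:]]*AuthorizedKeysFile/#&/' /etc/ssh/sshd_config
systemctl restart sshd
# Verify the effective setting:
sshd -T | grep -i authorizedkeysfile
```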
HTH
Angus
4 years
Upgrade OVIRT from 3.6 to 4.3
by Miguel Angel Costas
Hi Guys!
I need to upgrade from 3.6 to 4.3 and I have a doubt.
Do I need to restart the VMs for each upgrade (DC Compatibility 1° 4.0 - 2° 4.1 - 3° 4.2 and 4° 4.3), or can I modify the compatibility from 3.6 to 4.2 and restart the VMs in a single step?
Best regards
4 years
Adding iscsi issue 4.3
by thilburn@generalpacific.com
When connecting an iSCSI connection from an array I get the following popup: "A database error occurred. Please contact your system administrator." I have checked everything, and the only device connected right now is the single host in the datacenter. If I check the "Approve operation" box and click OK, it errors out with the same "A database error occurred. Please contact your system administrator." message. This is a new volume and no other machine is connected to this LUN or array. Below is from the vdsm log:
2020-11-11 10:42:26,087-0800 INFO (jsonrpc/0) [vdsm.api] START getDeviceList(storageType=3, guids=[u'32021001378a6ddad'], checkStatus=True, options={}) from=::ffff:XXXXXX,46208, flow_id=d09dbfd6-2256-48ad-8597-b6c581def1fd, task_id=67c17255-3ffb-4e8b-b6d3-a4e40836d12c (api:48)
2020-11-11 10:42:26,650-0800 INFO (jsonrpc/0) [storage.LVM] Overriding read_only mode current=True override=False (lvm:398)
2020-11-11 10:42:26,822-0800 INFO (jsonrpc/0) [vdsm.api] FINISH getDeviceList return={'devList': [{'status': 'used', 'vendorID': 'ETIUSA', 'capacity': '3999688294400', 'fwrev': '10E', 'discard_zeroes_data': 0, 'vgUUID': '', 'pvsize': '', 'pathlist': [{'connection': u'10.87.172.100', 'iqn': u'XXXXXX.XXXXXX.XXXXXX:storage3', 'portal': '0', 'port': '3260', 'initiatorname': u'default'}], 'logicalblocksize': '512', 'discard_max_bytes': 0, 'pathstatus': [{'type': 'iSCSI', 'physdev': 'sdc', 'capacity': '3999688294400', 'state': 'active', 'lun': '0'}], 'devtype': 'iSCSI', 'physicalblocksize': '512', 'pvUUID': '', 'serial': 'SETIUSA_UltraStorRS8IP4_2021001378A6DDAD', 'GUID': '32021001378a6ddad', 'productID': 'UltraStorRS8IP4'}]} from=::ffff:XXXXX,46208, flow_id=d09dbfd6-2256-48ad-8597-b6c581def1fd, task_id=67c17255-3ffb-4e8b-b6d3-a4e40836d12c (api:54)
2020-11-11 10:42:26,823-0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getDeviceList succeeded in 0.74 seconds (__init__:312)
2020-11-11 10:42:27,158-0800 INFO (jsonrpc/2) [api.host] START getAllVmStats() from=::1,48480 (api:48)
2020-11-11 10:42:27,159-0800 INFO (jsonrpc/2) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,48480 (api:54)
2020-11-11 10:42:27,159-0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
2020-11-11 10:42:27,393-0800 INFO (jsonrpc/1) [api.host] START getAllVmStats() from=::ffff:XXXXX,46208 (api:48)
2020-11-11 10:42:27,394-0800 INFO (jsonrpc/1) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:XXXXX,46208 (api:54)
2020-11-11 10:42:27,394-0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
2020-11-11 10:43:12,323-0800 INFO (jsonrpc/6) [vdsm.api] START createVG(vgname=u'35e95355-75db-4640-bf99-e576b9ce39aa', devlist=[u'32021001378a6ddad'], force=True, options=None) from=::ffff:XXXXX,46208, flow_id=2c973734, task_id=91f2ab5b-017a-4c88-aa45-ece3bd920ce0 (api:48)
2020-11-11 10:43:12,348-0800 INFO (jsonrpc/6) [storage.LVM] Overriding read_only mode current=True override=False (lvm:398)
2020-11-11 10:43:12,409-0800 INFO (jsonrpc/4) [api.host] START getAllVmStats() from=::ffff:XXXXXX,46208 (api:48)
2020-11-11 10:43:12,410-0800 INFO (jsonrpc/4) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:XXXXXXX,46208 (api:54)
2020-11-11 10:43:12,410-0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
2020-11-11 10:43:12,520-0800 ERROR (jsonrpc/6) [storage.LVM] pvcreate failed with rc=5 (lvm:988)
2020-11-11 10:43:12,520-0800 ERROR (jsonrpc/6) [storage.LVM] [], [' Device /dev/mapper/32021001378a6ddad excluded by a filter.'] (lvm:989)
2020-11-11 10:43:12,520-0800 INFO (jsonrpc/6) [vdsm.api] FINISH createVG error=Failed to initialize physical device: ("[u'/dev/mapper/32021001378a6ddad']",) from=::ffff:XXXXXX,46208, flow_id=2c973734, task_id=91f2ab5b-017a-4c88-aa45-ece3bd920ce0 (api:52)
2020-11-11 10:43:12,520-0800 ERROR (jsonrpc/6) [storage.TaskManager.Task] (Task='91f2ab5b-017a-4c88-aa45-ece3bd920ce0') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "<string>", line 2, in createVG
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2146, in createVG
force=force)
File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1256, in createVG
_initpvs(pvs, metadataSize, force)
File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 990, in _initpvs
raise se.PhysDevInitializationError(str(devices))
PhysDevInitializationError: Failed to initialize physical device: ("[u'/dev/mapper/32021001378a6ddad']",)
2020-11-11 10:43:12,520-0800 INFO (jsonrpc/6) [storage.TaskManager.Task] (Task='91f2ab5b-017a-4c88-aa45-ece3bd920ce0') aborting: Task is aborted: 'Failed to initialize physical device: ("[u\'/dev/mapper/32021001378a6ddad\']",)' - code 601 (task:1181)
2020-11-11 10:43:12,521-0800 ERROR (jsonrpc/6) [storage.Dispatcher] FINISH createVG error=Failed to initialize physical device: ("[u'/dev/mapper/32021001378a6ddad']",) (dispatcher:83)
2020-11-11 10:43:12,521-0800 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call LVMVolumeGroup.create failed (error 601) in 0.20 seconds (__init__:312)
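The "excluded by a filter" error in the log usually means the host's LVM filter is rejecting the new multipath device. A hedged sketch of how one might investigate (the device name is taken from the log above; vdsm-tool availability depends on your vdsm version):

```shell
# Show any LVM filter that may be rejecting the device:
grep -E '^[[:space:]]*filter' /etc/lvm/lvm.conf
# Check how LVM sees the multipath device with the filter bypassed:
pvs --config 'devices { filter = [ "a|.*|" ] }' /dev/mapper/32021001378a6ddad
# Recent vdsm versions ship a helper that proposes a correct filter:
vdsm-tool config-lvm-filter
```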
4 years