Problem with cloud-init (metrics install)
by Chris Adams
I am trying to set up the oVirt Metrics Store, which uses cloud-init for
network settings, so I set the info under the "Initial Run" tab.
However, it doesn't seem to actually apply the network settings unless I
use "Run Once" and enable cloud-init there.
I haven't used cloud-init before (been on my to-do list to check out) -
am I missing something?
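For what it's worth, cloud-init normally applies its configuration only on the instance's first boot, which may be why "Run Once" behaves differently. A sketch of how to check from inside the guest (exact subcommand availability depends on the cloud-init version shipped in the guest image):

```shell
# On the guest: check whether cloud-init ran and with what result
cloud-init status --long

# Inspect what it did on the last run
less /var/log/cloud-init.log

# Force cloud-init to run again on the next boot
# (by default it only runs on the first boot of an instance)
cloud-init clean
```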
--
Chris Adams <cma(a)cmadams.net>
5 years, 3 months
Re: Inconsistent metadata on VM's disks
by Strahil
I see only inconsistent VG errors...
Most probably it is CentOS specific.
Best Regards,
Strahil Nikolov

On Aug 13, 2019 12:03, Nardus Geldenhuys <nardusg(a)gmail.com> wrote:
>
> Hi There
>
> Hope you are all well. We got this weird issue at the moment. Let me explain.
>
> We are on oVirt 4.3.5. We use CentOS 7.6, and when we update the VMs they cannot boot into multi-user mode; it complains that the root mount can't be found. When you boot using an ISO in rescue mode you can see the disks. When you chroot into the VM and run pvscan twice, the VM can be rebooted and it is fixed. We don't know whether this is CentOS-based or maybe something in oVirt. Screenshots below. Any help would be appreciated.
>
> [screenshots not included in the archive]
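For reference, the rescue procedure described above looks roughly like this. Device, mount-point and VG names here are illustrative, not taken from the affected VMs:

```shell
# Boot the VM from a rescue ISO, then (names are illustrative):
mount /dev/mapper/centos-root /mnt/sysimage   # or let rescue mode mount it
mount --bind /dev  /mnt/sysimage/dev
mount --bind /proc /mnt/sysimage/proc
mount --bind /sys  /mnt/sysimage/sys
chroot /mnt/sysimage

pvscan --cache   # rebuild the LVM device/metadata cache
pvscan --cache   # reportedly needs to be run twice

exit
reboot
```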
LDAPS-Config
by Budur Nagaraju
Hi
Can someone help with configuring LDAPS authentication in oVirt 3.5?
Thanks,
Nagaraju
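Before wiring LDAPS into the engine, it helps to confirm the directory is actually reachable over TLS from the engine host. A sketch with ldapsearch; the host name, CA path, bind DN and base DN below are placeholders:

```shell
# Verify LDAPS connectivity and the server certificate first
# (all names here are placeholders for your environment)
LDAPTLS_CACERT=/path/to/ca.pem ldapsearch \
    -H ldaps://ldap.example.com:636 \
    -D "cn=searchuser,dc=example,dc=com" -W \
    -b "dc=example,dc=com" "(uid=testuser)" dn
```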
upgrading ovirt 4.2.8 to 4.3.5
by Cole Johnson
I am trying to upgrade ovirt on a standalone ovirt node host, and when I run
# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
I get:
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist, package_upload, product-id, search-disabled-repos, subscription-manager, vdsmupgrade
This system is not registered with an entitlement server. You can use subscription-manager to register.
Repository centos-sclo-rh-release is listed more than once in the configuration
Examining /var/tmp/yum-root-sbzUPq/ovirt-release43.rpm: ovirt-release43-4.3.5.1-1.el7.noarch
/var/tmp/yum-root-sbzUPq/ovirt-release43.rpm: does not update installed package.
Loading mirror speeds from cached hostfile
* ovirt-4.2-epel: d2lzkl7pfhq30w.cloudfront.net
* ovirt-4.3: resources.ovirt.org
* ovirt-4.3-epel: d2lzkl7pfhq30w.cloudfront.net
No package ovirt-release43.txt available.
Error: Nothing to do
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Repository centos-sclo-rh-release is listed more than once in the configuration
Cannot upload enabled repos report, is this client registered?
and then running
# yum update "ovirt-*-setup*"
I get:
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist, package_upload, product-id, search-disabled-repos, subscription-manager, vdsmupgrade
This system is not registered with an entitlement server. You can use
subscription-manager to register.
Repository centos-sclo-rh-release is listed more than once in the
configuration
Loading mirror speeds from cached hostfile
* ovirt-4.2-epel: d2lzkl7pfhq30w.cloudfront.net
* ovirt-4.3: resources.ovirt.org
* ovirt-4.3-epel: d2lzkl7pfhq30w.cloudfront.net
Resolving Dependencies
--> Running transaction check
---> Package ovirt-hosted-engine-setup.noarch 0:2.2.33-1.el7 will be
updated
---> Package ovirt-hosted-engine-setup.noarch 0:2.3.11-1.el7 will be
an update
--> Processing Dependency: otopi >= 1.8 for package:
ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
--> Processing Dependency: ovirt-ansible-engine-setup >= 1.1.9 for
package: ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
--> Processing Dependency: ovirt-ansible-hosted-engine-setup >= 1.0.21
for package: ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
--> Processing Dependency: ovirt-ansible-repositories >= 1.1.5 for
package: ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
--> Processing Dependency: ovirt-host >= 4.3 for package:
ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
--> Processing Dependency: ovirt-host-deploy >= 1.8 for package:
ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
--> Processing Dependency: ovirt-hosted-engine-ha >= 2.3.3 for
package: ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
--> Processing Dependency: vdsm-python >= 4.30 for package:
ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
--> Processing Dependency: vdsm-python >= 4.30 for package:
ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
--> Running transaction check
---> Package otopi.noarch 0:1.7.8-1.el7 will be obsoleted
---> Package ovirt-ansible-engine-setup.noarch 0:1.1.9-1.el7 will be
installed
---> Package ovirt-ansible-hosted-engine-setup.noarch 0:1.0.26-1.el7
will be installed
---> Package ovirt-ansible-repositories.noarch 0:1.1.5-1.el7 will be
installed
---> Package ovirt-host.x86_64 0:4.2.3-1.el7 will be updated
---> Package ovirt-host.x86_64 0:4.3.4-1.el7 will be an update
--> Processing Dependency: ovirt-host-dependencies = 4.3.4-1.el7 for
package: ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: aide for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: iperf3 for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: libvirt-admin for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: openscap for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: openscap-utils for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: pam_pkcs11 for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: scap-security-guide for package:
ovirt-host-4.3.4-1.el7.x86_64
---> Package ovirt-host-deploy.noarch 0:1.7.4-1.el7 will be obsoleted
---> Package ovirt-hosted-engine-ha.noarch 0:2.2.19-1.el7 will be
updated
---> Package ovirt-hosted-engine-ha.noarch 0:2.3.3-1.el7 will be an
update
--> Processing Dependency: vdsm >= 4.30.11 for package:
ovirt-hosted-engine-ha-2.3.3-1.el7.noarch
--> Processing Dependency: vdsm >= 4.30.11 for package:
ovirt-hosted-engine-ha-2.3.3-1.el7.noarch
--> Processing Dependency: vdsm-client >= 4.30.11 for package:
ovirt-hosted-engine-ha-2.3.3-1.el7.noarch
---> Package python2-otopi.noarch 0:1.8.3-1.el7 will be obsoleting
--> Processing Dependency: otopi-common = 1.8.3-1.el7 for package:
python2-otopi-1.8.3-1.el7.noarch
---> Package python2-ovirt-host-deploy.noarch 0:1.8.0-1.el7 will be
obsoleting
--> Processing Dependency: ovirt-host-deploy-common = 1.8.0-1.el7 for
package: python2-ovirt-host-deploy-1.8.0-1.el7.noarch
---> Package vdsm-python.noarch 0:4.20.46-1.el7 will be updated
--> Processing Dependency: vdsm-python = 4.20.46-1.el7 for package:
vdsm-jsonrpc-4.20.46-1.el7.noarch
--> Processing Dependency: vdsm-python = 4.20.46-1.el7 for package:
vdsm-http-4.20.46-1.el7.noarch
---> Package vdsm-python.noarch 0:4.30.24-1.el7 will be an update
--> Processing Dependency: vdsm-api = 4.30.24-1.el7 for package:
vdsm-python-4.30.24-1.el7.noarch
--> Processing Dependency: vdsm-common = 4.30.24-1.el7 for package:
vdsm-python-4.30.24-1.el7.noarch
--> Processing Dependency: vdsm-network = 4.30.24-1.el7 for package:
vdsm-python-4.30.24-1.el7.noarch
--> Running transaction check
---> Package otopi-common.noarch 0:1.8.3-1.el7 will be installed
---> Package ovirt-host.x86_64 0:4.3.4-1.el7 will be an update
--> Processing Dependency: aide for package: ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: iperf3 for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: libvirt-admin for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: openscap for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: openscap-utils for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: pam_pkcs11 for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: scap-security-guide for package:
ovirt-host-4.3.4-1.el7.x86_64
---> Package ovirt-host-dependencies.x86_64 0:4.2.3-1.el7 will be
updated
---> Package ovirt-host-dependencies.x86_64 0:4.3.4-1.el7 will be an
update
--> Processing Dependency: collectd-write_syslog for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: liblognorm for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: rsyslog-elasticsearch for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: rsyslog-mmjsonparse for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: rsyslog-mmnormalize for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: v2v-conversion-host-wrapper for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
---> Package ovirt-host-deploy-common.noarch 0:1.8.0-1.el7 will be
installed
---> Package vdsm.x86_64 0:4.20.46-1.el7 will be updated
--> Processing Dependency: vdsm = 4.20.46-1.el7 for package:
vdsm-hook-ethtool-options-4.20.46-1.el7.noarch
--> Processing Dependency: vdsm = 4.20.46-1.el7 for package:
vdsm-hook-vmfex-dev-4.20.46-1.el7.noarch
--> Processing Dependency: vdsm = 4.20.46-1.el7 for package:
vdsm-gluster-4.20.46-1.el7.x86_64
--> Processing Dependency: vdsm = 4.20.46-1.el7 for package:
vdsm-hook-fcoe-4.20.46-1.el7.noarch
---> Package vdsm.x86_64 0:4.30.24-1.el7 will be an update
--> Processing Dependency: kernel >= 3.10.0-957.12.2.el7 for package:
vdsm-4.30.24-1.el7.x86_64
--> Processing Dependency: libvirt-daemon-kvm >= 4.5.0-10.el7_6.9 for
package: vdsm-4.30.24-1.el7.x86_64
--> Processing Dependency: qemu-kvm-rhev >= 10:2.12.0-18.el7_6.5 for
package: vdsm-4.30.24-1.el7.x86_64
---> Package vdsm-api.noarch 0:4.20.46-1.el7 will be updated
---> Package vdsm-api.noarch 0:4.30.24-1.el7 will be an update
---> Package vdsm-client.noarch 0:4.20.46-1.el7 will be updated
---> Package vdsm-client.noarch 0:4.30.24-1.el7 will be an update
--> Processing Dependency: vdsm-yajsonrpc = 4.30.24-1.el7 for package:
vdsm-client-4.30.24-1.el7.noarch
---> Package vdsm-common.noarch 0:4.20.46-1.el7 will be updated
---> Package vdsm-common.noarch 0:4.30.24-1.el7 will be an update
---> Package vdsm-http.noarch 0:4.20.46-1.el7 will be updated
---> Package vdsm-http.noarch 0:4.30.24-1.el7 will be an update
---> Package vdsm-jsonrpc.noarch 0:4.20.46-1.el7 will be updated
---> Package vdsm-jsonrpc.noarch 0:4.30.24-1.el7 will be an update
---> Package vdsm-network.x86_64 0:4.20.46-1.el7 will be updated
---> Package vdsm-network.x86_64 0:4.30.24-1.el7 will be an update
--> Running transaction check
---> Package collectd-write_syslog.x86_64 0:5.8.1-4.el7 will be
installed
--> Processing Dependency: collectd(x86-64) = 5.8.1-4.el7 for package:
collectd-write_syslog-5.8.1-4.el7.x86_64
---> Package ovirt-host.x86_64 0:4.3.4-1.el7 will be an update
--> Processing Dependency: aide for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: iperf3 for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: libvirt-admin for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: openscap for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: openscap-utils for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: pam_pkcs11 for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: scap-security-guide for package:
ovirt-host-4.3.4-1.el7.x86_64
---> Package ovirt-host-dependencies.x86_64 0:4.3.4-1.el7 will be an
update
--> Processing Dependency: liblognorm for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: rsyslog-elasticsearch for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: rsyslog-mmjsonparse for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: rsyslog-mmnormalize for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
---> Package qemu-kvm-ev.x86_64 10:2.12.0-18.el7_6.3.1 will be updated
---> Package qemu-kvm-ev.x86_64 10:2.12.0-18.el7_6.7.1 will be an
update
--> Processing Dependency: qemu-kvm-common-ev = 10:2.12.0-18.el7_6.7.1
for package: 10:qemu-kvm-ev-2.12.0-18.el7_6.7.1.x86_64
--> Processing Dependency: qemu-img-ev = 10:2.12.0-18.el7_6.7.1 for
package: 10:qemu-kvm-ev-2.12.0-18.el7_6.7.1.x86_64
---> Package v2v-conversion-host-wrapper.noarch 0:1.14.2-1.el7 will be
installed
--> Processing Dependency: libcgroup-tools for package:
v2v-conversion-host-wrapper-1.14.2-1.el7.noarch
---> Package vdsm.x86_64 0:4.30.24-1.el7 will be an update
--> Processing Dependency: kernel >= 3.10.0-957.12.2.el7 for package:
vdsm-4.30.24-1.el7.x86_64
--> Processing Dependency: libvirt-daemon-kvm >= 4.5.0-10.el7_6.9 for
package: vdsm-4.30.24-1.el7.x86_64
---> Package vdsm-gluster.x86_64 0:4.20.46-1.el7 will be updated
---> Package vdsm-gluster.x86_64 0:4.30.24-1.el7 will be an update
---> Package vdsm-hook-ethtool-options.noarch 0:4.20.46-1.el7 will be
updated
---> Package vdsm-hook-ethtool-options.noarch 0:4.30.24-1.el7 will be
an update
---> Package vdsm-hook-fcoe.noarch 0:4.20.46-1.el7 will be updated
---> Package vdsm-hook-fcoe.noarch 0:4.30.24-1.el7 will be an update
---> Package vdsm-hook-vmfex-dev.noarch 0:4.20.46-1.el7 will be
updated
---> Package vdsm-hook-vmfex-dev.noarch 0:4.30.24-1.el7 will be an
update
---> Package vdsm-yajsonrpc.noarch 0:4.20.46-1.el7 will be updated
---> Package vdsm-yajsonrpc.noarch 0:4.30.24-1.el7 will be an update
--> Running transaction check
---> Package collectd.x86_64 0:5.8.1-2.el7 will be updated
--> Processing Dependency: collectd(x86-64) = 5.8.1-2.el7 for package:
collectd-virt-5.8.1-2.el7.x86_64
--> Processing Dependency: collectd(x86-64) = 5.8.1-2.el7 for package:
collectd-write_http-5.8.1-2.el7.x86_64
--> Processing Dependency: collectd(x86-64) = 5.8.1-2.el7 for package:
collectd-disk-5.8.1-2.el7.x86_64
--> Processing Dependency: collectd(x86-64) = 5.8.1-2.el7 for package:
collectd-netlink-5.8.1-2.el7.x86_64
---> Package collectd.x86_64 0:5.8.1-4.el7 will be an update
---> Package ovirt-host.x86_64 0:4.3.4-1.el7 will be an update
--> Processing Dependency: aide for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: iperf3 for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: libvirt-admin for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: openscap for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: openscap-utils for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: pam_pkcs11 for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: scap-security-guide for package:
ovirt-host-4.3.4-1.el7.x86_64
---> Package ovirt-host-dependencies.x86_64 0:4.3.4-1.el7 will be an
update
--> Processing Dependency: liblognorm for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: rsyslog-elasticsearch for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: rsyslog-mmjsonparse for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: rsyslog-mmnormalize for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
---> Package qemu-img-ev.x86_64 10:2.12.0-18.el7_6.3.1 will be updated
---> Package qemu-img-ev.x86_64 10:2.12.0-18.el7_6.7.1 will be an
update
---> Package qemu-kvm-common-ev.x86_64 10:2.12.0-18.el7_6.3.1 will be
updated
---> Package qemu-kvm-common-ev.x86_64 10:2.12.0-18.el7_6.7.1 will be
an update
---> Package v2v-conversion-host-wrapper.noarch 0:1.14.2-1.el7 will be
installed
--> Processing Dependency: libcgroup-tools for package:
v2v-conversion-host-wrapper-1.14.2-1.el7.noarch
---> Package vdsm.x86_64 0:4.30.24-1.el7 will be an update
--> Processing Dependency: kernel >= 3.10.0-957.12.2.el7 for package:
vdsm-4.30.24-1.el7.x86_64
--> Processing Dependency: libvirt-daemon-kvm >= 4.5.0-10.el7_6.9 for
package: vdsm-4.30.24-1.el7.x86_64
--> Running transaction check
---> Package collectd-disk.x86_64 0:5.8.1-2.el7 will be updated
---> Package collectd-disk.x86_64 0:5.8.1-4.el7 will be an update
---> Package collectd-netlink.x86_64 0:5.8.1-2.el7 will be updated
---> Package collectd-netlink.x86_64 0:5.8.1-4.el7 will be an update
---> Package collectd-virt.x86_64 0:5.8.1-2.el7 will be updated
---> Package collectd-virt.x86_64 0:5.8.1-4.el7 will be an update
---> Package collectd-write_http.x86_64 0:5.8.1-2.el7 will be updated
---> Package collectd-write_http.x86_64 0:5.8.1-4.el7 will be an
update
---> Package ovirt-host.x86_64 0:4.3.4-1.el7 will be an update
--> Processing Dependency: aide for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: iperf3 for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: libvirt-admin for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: openscap for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: openscap-utils for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: pam_pkcs11 for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: scap-security-guide for package:
ovirt-host-4.3.4-1.el7.x86_64
--> Processing Dependency: liblognorm for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: rsyslog-elasticsearch for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: rsyslog-mmjsonparse for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
--> Processing Dependency: rsyslog-mmnormalize for package:
ovirt-host-dependencies-4.3.4-1.el7.x86_64
---> Package v2v-conversion-host-wrapper.noarch 0:1.14.2-1.el7 will be
installed
--> Processing Dependency: libcgroup-tools for package:
v2v-conversion-host-wrapper-1.14.2-1.el7.noarch
---> Package vdsm.x86_64 0:4.30.24-1.el7 will be an update
--> Processing Dependency: kernel >= 3.10.0-957.12.2.el7 for package:
vdsm-4.30.24-1.el7.x86_64
--> Processing Dependency: libvirt-daemon-kvm >= 4.5.0-10.el7_6.9 for
package: vdsm-4.30.24-1.el7.x86_64
--> Finished Dependency Resolution
Error: Package: v2v-conversion-host-wrapper-1.14.2-1.el7.noarch
(ovirt-4.3)
Requires: libcgroup-tools
Error: Package: ovirt-host-dependencies-4.3.4-1.el7.x86_64 (ovirt-4.3)
Requires: rsyslog-mmjsonparse
Error: Package: ovirt-host-dependencies-4.3.4-1.el7.x86_64 (ovirt-4.3)
Requires: rsyslog-elasticsearch
Error: Package: ovirt-host-4.3.4-1.el7.x86_64 (ovirt-4.3)
Requires: openscap
Error: Package: ovirt-host-4.3.4-1.el7.x86_64 (ovirt-4.3)
Requires: libvirt-admin
Error: Package: ovirt-host-4.3.4-1.el7.x86_64 (ovirt-4.3)
Requires: scap-security-guide
Error: Package: ovirt-host-4.3.4-1.el7.x86_64 (ovirt-4.3)
Requires: aide
Error: Package: ovirt-host-dependencies-4.3.4-1.el7.x86_64 (ovirt-4.3)
Requires: rsyslog-mmnormalize
Error: Package: ovirt-host-4.3.4-1.el7.x86_64 (ovirt-4.3)
Requires: pam_pkcs11
Error: Package: ovirt-host-dependencies-4.3.4-1.el7.x86_64 (ovirt-4.3)
Requires: liblognorm
Error: Package: ovirt-host-4.3.4-1.el7.x86_64 (ovirt-4.3)
Requires: openscap-utils
Error: Package: ovirt-host-4.3.4-1.el7.x86_64 (ovirt-4.3)
Requires: iperf3
Error: Package: vdsm-4.30.24-1.el7.x86_64 (ovirt-4.3)
Requires: libvirt-daemon-kvm >= 4.5.0-10.el7_6.9
Installed: libvirt-daemon-kvm-4.5.0-10.el7_6.4.x86_64
(installed)
libvirt-daemon-kvm = 4.5.0-10.el7_6.4
Error: Package: vdsm-4.30.24-1.el7.x86_64 (ovirt-4.3)
Requires: kernel >= 3.10.0-957.12.2.el7
Installed: kernel-3.10.0-957.5.1.el7.x86_64 (installed)
kernel = 3.10.0-957.5.1.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered with an entitlement server. You can use
subscription-manager to register.
Repository centos-sclo-rh-release is listed more than once in the
configuration
Cannot upload enabled repos report, is this client registered?
Does anyone have any idea about the dependencies that are not being found?
Thanks for the help
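The unresolved packages (iperf3, libcgroup-tools, the rsyslog-* and *scap* packages) normally come from the CentOS base/extras/EPEL repositories, so a first step is to check whether any enabled repository can still provide them. A sketch; note also that on oVirt Node the supported upgrade path is usually an image update rather than updating individual packages with yum:

```shell
# Which enabled repo (if any) provides the unresolved dependencies?
yum provides iperf3 libcgroup-tools rsyslog-mmnormalize

# List the repositories yum is actually consulting
yum repolist enabled
```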
oVirt 4.3.5 potential issue with NFS storage
by Vrgotic, Marko
Dear oVirt,
This is my third oVirt platform in the company, but it is the first time I am seeing the following logs:
“2019-08-07 16:00:16,099Z INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-51) [1b85e637] Lock freed to object 'EngineLock:{exclusiveLocks='[2350ee82-94ed-4f90-9366-451e0104d1d6=PROVIDER]', sharedLocks=''}'
2019-08-07 16:00:25,618Z WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-37723) [] domain 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' in problem 'PROBLEMATIC'. vds: 'ovirt-sj-05.ictv.com'
2019-08-07 16:00:40,630Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-37735) [] Domain 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from problem. vds: 'ovirt-sj-05.ictv.com'
2019-08-07 16:00:40,652Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-37737) [] Domain 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from problem. vds: 'ovirt-sj-01.ictv.com'
2019-08-07 16:00:40,652Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-37737) [] Domain 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' has recovered from problem. No active host in the DC is reporting it as problematic, so clearing the domain recovery timer.”
Can you help me understand why this is being reported?
This setup is:
5HOSTS, 3 in HA
SelfHostedEngine
Version 4.3.5
NFS based Netapp storage, version 4.1
“10.210.13.64:/ovirt_hosted_engine on /rhev/data-center/mnt/10.210.13.64:_ovirt__hosted__engine type nfs4 (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
10.210.13.64:/ovirt_production on /rhev/data-center/mnt/10.210.13.64:_ovirt__production type nfs4 (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=9878396k,mode=700)”
The first mount is the SHE dedicated storage.
The second mount, “ovirt_production”, is for the other VM guests.
Kindly awaiting your reply.
Marko Vrgotic
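The "in problem" / "recovered from problem" pair usually means the host's storage monitor briefly saw slow responses from the NFS server. On the host named in the warning, client-side RPC statistics can show whether requests are being retransmitted; a sketch (tool availability depends on the installed nfs-utils version):

```shell
# Client-side RPC stats: a growing "retrans" counter hints at a flaky path
nfsstat -rc

# Per-mount NFS latency, 3 samples at 5-second intervals
nfsiostat 5 3

# Kernel messages about the NFS server flapping
grep 'nfs: server' /var/log/messages
```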
VM --- is not responding.
by Edoardo Mazza
Hi all,
For several days now I have been receiving this error for the same VM, but I
don't understand why. The VM's traffic is not excessive, nor are CPU and RAM,
but for a few minutes the VM is not responding, and in the VM's messages log
file I get the error below. Can you help me?
thanks
Edoardo
kernel: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 25s!
[kworker/2:0:26227]
Aug 8 02:51:11 vmmysql kernel: Modules linked in: binfmt_misc
ip6t_rpfilter ipt_REJECT nf_reject_ipv4 ip6t_REJECT nf_reject_
ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp
llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_
nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat
nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_con
ntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables
ip6table_filter ip6_tables iptable_filter snd_hda_c
odec_generic iosf_mbi crc32_pclmul ppdev ghash_clmulni_intel snd_hda_intel
snd_hda_codec aesni_intel snd_hda_core lrw gf128mul
glue_helper ablk_helper snd_hwdep cryptd snd_seq snd_seq_device snd_pcm
snd_timer snd soundcore virtio_rng sg virtio_balloon
i2c_piix4 parport_pc parport joydev pcspkr ip_tables xfs libcrc32c sd_mod
Aug 8 02:51:14 vmmysql kernel: crc_t10dif crct10dif_generic sr_mod cdrom
virtio_net virtio_console virtio_scsi ata_generic p
ata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw qxl
floppy drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm
ata_piix libata virtio_pci drm_panel_orientation_quirks virtio_ring virtio
dm_mirror dm_region_hash dm_log dm_mod
Aug 8 02:51:14 vmmysql kernel: CPU: 2 PID: 26227 Comm: kworker/2:0 Kdump:
loaded Tainted: G L ------------ 3.10.0-957.12.1.el7.x86_64 #1
Aug 8 02:51:14 vmmysql kernel: Hardware name: oVirt oVirt Node, BIOS
1.11.0-2.el7 04/01/2014
Aug 8 02:51:14 vmmysql kernel: Workqueue: events_freezable
disk_events_workfn
Aug 8 02:51:14 vmmysql kernel: task: ffff9e25b6609040 ti: ffff9e27b1610000
task.ti: ffff9e27b1610000
Aug 8 02:51:14 vmmysql kernel: RIP: 0010:[<ffffffffb8b6b355>]
[<ffffffffb8b6b355>] _raw_spin_unlock_irqrestore+0x15/0x20
Aug 8 02:51:14 vmmysql kernel: RSP: 0000:ffff9e27b1613a68 EFLAGS: 00000286
Aug 8 02:51:14 vmmysql kernel: RAX: 0000000000000001 RBX: ffff9e27b1613a10
RCX: ffff9e27b72a3d05
Aug 8 02:51:14 vmmysql kernel: RDX: ffff9e27b729a420 RSI: 0000000000000286
RDI: 0000000000000286
Aug 8 02:51:14 vmmysql kernel: RBP: ffff9e27b1613a68 R08: 0000000000000001
R09: ffff9e25b67fc198
Aug 8 02:51:14 vmmysql kernel: R10: ffff9e27b45bd8d8 R11: 0000000000000000
R12: ffff9e25b67fde80
Aug 8 02:51:14 vmmysql kernel: R13: ffff9e25b67fc000 R14: ffff9e25b67fc158
R15: ffffffffc032f8e0
Aug 8 02:51:14 vmmysql kernel: FS: 0000000000000000(0000)
GS:ffff9e27b7280000(0000) knlGS:0000000000000000
Aug 8 02:51:14 vmmysql kernel: CS: 0010 DS: 0000 ES: 0000 CR0:
0000000080050033
Aug 8 02:51:14 vmmysql kernel: CR2: 00007f0c9e9b6008 CR3: 0000000232480000
CR4: 00000000003606e0
Aug 8 02:51:14 vmmysql kernel: DR0: 0000000000000000 DR1: 0000000000000000
DR2: 0000000000000000
Aug 8 02:51:14 vmmysql kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0
DR7: 0000000000000400
Aug 8 02:51:14 vmmysql kernel: Call Trace:
Aug 8 02:51:14 vmmysql kernel: [<ffffffffc0323d65>]
ata_scsi_queuecmd+0x155/0x450 [libata]
Aug 8 02:51:14 vmmysql kernel: [<ffffffffc031fdb0>] ?
ata_scsiop_inq_std+0xf0/0xf0 [libata]
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb88d14f0>]
scsi_dispatch_cmd+0xb0/0x240
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb88daa8c>]
scsi_request_fn+0x4cc/0x680
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb8743679>]
__blk_run_queue+0x39/0x50
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb874b6d5>]
blk_execute_rq_nowait+0xb5/0x170
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb874b81b>]
blk_execute_rq+0x8b/0x150
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb867d369>] ?
bio_phys_segments+0x19/0x20
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb8746cb1>] ?
blk_rq_bio_prep+0x31/0xb0
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb874b537>] ?
blk_rq_map_kern+0xc7/0x180
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb88d75b3>] scsi_execute+0xd3/0x170
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb88d929e>]
scsi_execute_req_flags+0x8e/0x100
Aug 8 02:51:14 vmmysql kernel: [<ffffffffc041431c>]
sr_check_events+0xbc/0x2d0 [sr_mod]
Aug 8 02:51:14 vmmysql kernel: [<ffffffffc036905e>]
cdrom_check_events+0x1e/0x40 [cdrom]
Aug 8 02:51:14 vmmysql kernel: [<ffffffffc04150b1>]
sr_block_check_events+0xb1/0x120 [sr_mod]
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb8759276>]
disk_check_events+0x66/0x190
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb87593b6>]
disk_events_workfn+0x16/0x20
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb84b9d8f>]
process_one_work+0x17f/0x440
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb84bae26>]
worker_thread+0x126/0x3c0
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb84bad00>] ?
manage_workers.isra.25+0x2a0/0x2a0
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb84c1c71>] kthread+0xd1/0xe0
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb84c1ba0>] ?
insert_kthread_work+0x40/0x40
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb8b75bf7>]
ret_from_fork_nospec_begin+0x21/0x21
Aug 8 02:51:14 vmmysql kernel: [<ffffffffb84c1ba0>] ?
insert_kthread_work+0x40/0x40
Aug 8 02:51:14 vmmysql kernel: Code: 14 25 10 43 03 b9 5d c3 0f 1f 40 00
66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 ff 14 25 10 43 03
b9 48 89 f7 57 9d <0f> 1f 44 00 00 5d c3 0f 1f 40 00 0f 1f 44 00 00 55 48
89 e5 48
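The stack trace is in the virtual CD-ROM media-polling path (disk_events_workfn -> sr_check_events), so one hedged experiment is to stop the kernel from polling that device. The device name sr0 is an assumption; replace it with the guest's actual CD-ROM device:

```shell
# Current polling interval for the CD-ROM (-1 = use the global default)
cat /sys/block/sr0/events_poll_msecs

# Disable media-change polling for this device as a test
echo 0 > /sys/block/sr0/events_poll_msecs
```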
Re: oVirt 4.3.5.1 failed to configure management network on the host
by Strahil
You can't bond a bond, nor layer a teaming device on top of a bond ... there was an article about that.
Even if it appears to work, it most probably won't behave as you wish.
LACP can do active-backup based on the aggregation group, but you need all NICs in the same LACP bond.
Best Regards,
Strahil Nikolov

On Aug 9, 2019 10:22, Mitja Pirih <mitja(a)pirih.si> wrote:
>
> On 08. 08. 2019 11:42, Strahil wrote:
> >
> > LACP works with 2 switches, but if you wish to aggregate all
> > links - you need switch support (high-end hardware).
> >
> > Best Regards,
> > Strahil Nikolov
> >
>
> I am aware of that. That's why my idea was to use bond1 (LACP) on eth1+2
> on switch1 and bond2 (LACP) on eth3+4 on switch2 and then team together
> bond1 + bond2. With this config theoretically I should get bonding
> spanned over two switches. Technically it worked, redundancy and
> aggregation.
>
> The problem was deploying self-hosted engine, because the script was
> unable to configure management network.
>
> If I use bonding spanned over two switches as you suggest, based on
> documentation
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/...
> my options are:
> - Mode 2 (XOR policy)
> - Mode 5 (adaptive transmit load-balancing policy): no use of bridges
> - Mode 6 (adaptive load-balancing policy): same limitation of mode 5
>
> Basically only Mode 2 looks usable for us.
>
>
> Regards,
> Mitja
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/CCFVROZHDJ5...
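When experimenting with modes like balance-xor, the state a bond actually negotiated on CentOS 7 can be inspected via /proc and sysfs; the bond name below is illustrative:

```shell
# Per-slave state and the mode actually in use
cat /proc/net/bonding/bond0

# Switch an (inactive) bond to mode 2 / balance-xor via sysfs
echo balance-xor > /sys/class/net/bond0/bonding/mode
```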
Re: RFE: Add the ability to the engine to serve as a fencing proxy
by Strahil
I think poison pill-based fencing is easier to implement, but it requires either network-based (iSCSI or NFS) or FC-based shared storage.
It is what corosync/pacemaker clusters use.
Best Regards,
Strahil Nikolov
On Aug 8, 2019 11:29, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>
>
>
> Il giorno ven 2 ago 2019 alle ore 10:50 Sandro E <feeds.sandro(a)gmail.com> ha scritto:
>>
>> Hi,
>>
>> I hope that this hits the right people. I found an RFE (Bug 1373957) which would be a really nice feature for my company, as we have to request firewall rules for every new host and this ends up in a lot of mess and work. Is there any chance that this RFE gets implemented?
>>
>> Thanks for any help or tips
>
>
> This RFE was filed in 2016 and didn't get much interest so far. Can you elaborate a bit on the user story for this?
>
>
>
>>
>>
>> BR,
>> Sandro
>> _______________________________________________
>> Users mailing list -- users(a)ovirt.org
>> To unsubscribe send an email to users-leave(a)ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/UP7NZWXZBNH...
>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbonazzo(a)redhat.com
>
> Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.
Re: oVirt 4.3.5.1 failed to configure management network on the host
by Strahil
LACP works with 2 switches, but if you wish to aggregate all links you need switch support (high-end hardware).
Best Regards,
Strahil Nikolov

On Aug 6, 2019 18:08, Vincent Royer <vincent(a)epicenergy.ca> wrote:
>
> My setup is also spanned over two switches. You can use bonding, you just can't use 802.3ad (LACP) mode.
>
> I have MGMT bonded to two gig switches and storage bonded to two 10g switches for Gluster. Each switch has its own fw/router in HA. So we can lose either switch, either router, or any single interface or cable without interruption.
>
>
> On Tue, Aug 6, 2019, 12:33 AM Mitja Pirih <mitja(a)pirih.si> wrote:
>>
>> On 05. 08. 2019 21:20, Vincent Royer wrote:
>> > I tried deployment of 4.3.5.1 using teams and it didn't work. I did
>> > get into the engine using the temp url on the host, but the teams
>> > showed up as individual nics. Any changes made, like assigning a new
>> > logical network to the nic, failed and I lost connectivity.
>> >
>> > Setup as a bond instead of team before deployment worked as expected,
>> > and the bonds showed up properly in the engine.
>> >
>> > ymmv
>> >
>>
>> I can't use bonding, spanned over two switches.
>> Maybe there is another way to do it, but I am burned out, anybody with
>> an idea?
>>
>> The server has 4x 10Gbps nics. I need up to 20Gbps throughput in HA mode.
>>
>>
>> Thanks.
>>
>>
>>
>> Br,
>> Mitja