Apologies, this should read...
"I put the cluster into global maintenance, then installed the 4.3 repo,
then "engine-upgrade-check" then "yum update ovirt\*setup\*" and then
"engine-setup"..."
On Tue, Jul 9, 2019 at 4:08 PM Neil <nwilson123(a)gmail.com> wrote:
Hi Strahil,
Thanks for the quick reply.
I put the cluster into global maintenance, then installed the 4.3 repo,
then ran "yum update ovirt\*setup\*", then "engine-upgrade-check", then
"engine-setup", then "yum update". Once that completed, I rebooted the
hosted-engine VM and took the cluster out of global maintenance.
Thinking back to the upgrade from 4.1 to 4.2, I don't recall doing a "yum
update" after running engine-setup; I'm not sure if that could be the
cause?
Thank you.
Regards.
Neil Wilson.
On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov <hunter86_bg(a)yahoo.com>
wrote:
> Hi Neil,
>
> for "Could not fetch data needed for VM migrate operation" - there was a
> bug and it was fixed.
> Are you sure you have fully updated ?
> What procedure did you use ?
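>
> A quick sanity check for the first question would be something like:
>
>   rpm -q ovirt-engine
>   yum list updates 'ovirt*'
>
> If the second command still shows pending ovirt packages, the update
> didn't fully complete.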
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, 9 July 2019 at 07:26:21 GMT-4, Neil <
> nwilson123(a)gmail.com> wrote:
>
>
> Hi guys.
>
> I have two problems since upgrading from 4.2.x to 4.3.4
>
> The first issue is that I can no longer manually migrate VMs between hosts;
> I get an error in the oVirt GUI that says "Could not fetch data needed for
> VM migrate operation", and nothing gets logged in either my engine.log or
> my vdsm.log.
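>
> The only other place I can think to check is the webadmin UI log, assuming
> dialog errors land there:
>
>   tail -f /var/log/ovirt-engine/ui.log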
>
> The other issue is that my Dashboard says the following: "Error! Could not
> fetch dashboard data. Please ensure that data warehouse is properly
> installed and configured."
>
> If I look at my ovirt-engine-dwhd.log, I see the following when I try to
> restart the dwh service...
>
> 2019-07-09 11:48:04|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
> hoursToKeepDaily|0
> hoursToKeepHourly|720
> ovirtEngineDbPassword|**********************
> runDeleteTime|3
> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
> runInterleave|60
> limitRows|limit 1000
> ovirtEngineHistoryDbUser|ovirt_engine_history
> ovirtEngineDbUser|engine
> deleteIncrement|10
> timeBetweenErrorEvents|300000
> hoursToKeepSamples|24
> deleteMultiplier|1000
> lastErrorSent|2011-07-03 12:46:47.000000
> etlVersion|4.3.0
> dwhAggregationDebug|false
> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbPassword|**********************
> 2019-07-09 11:48:10|ETL Service Stopped
> 2019-07-09 11:49:59|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
> hoursToKeepDaily|0
> hoursToKeepHourly|720
> ovirtEngineDbPassword|**********************
> runDeleteTime|3
> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
> runInterleave|60
> limitRows|limit 1000
> ovirtEngineHistoryDbUser|ovirt_engine_history
> ovirtEngineDbUser|engine
> deleteIncrement|10
> timeBetweenErrorEvents|300000
> hoursToKeepSamples|24
> deleteMultiplier|1000
> lastErrorSent|2011-07-03 12:46:47.000000
> etlVersion|4.3.0
> dwhAggregationDebug|false
> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbPassword|**********************
> 2019-07-09 11:52:56|ETL Service Stopped
> 2019-07-09 11:52:57|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
> hoursToKeepDaily|0
> hoursToKeepHourly|720
> ovirtEngineDbPassword|**********************
> runDeleteTime|3
> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
> runInterleave|60
> limitRows|limit 1000
> ovirtEngineHistoryDbUser|ovirt_engine_history
> ovirtEngineDbUser|engine
> deleteIncrement|10
> timeBetweenErrorEvents|300000
> hoursToKeepSamples|24
> deleteMultiplier|1000
> lastErrorSent|2011-07-03 12:46:47.000000
> etlVersion|4.3.0
> dwhAggregationDebug|false
> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbPassword|**********************
> 2019-07-09 12:16:01|ETL Service Stopped
> 2019-07-09 12:16:45|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
> hoursToKeepDaily|0
> hoursToKeepHourly|720
> ovirtEngineDbPassword|**********************
> runDeleteTime|3
> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
> runInterleave|60
> limitRows|limit 1000
> ovirtEngineHistoryDbUser|ovirt_engine_history
> ovirtEngineDbUser|engine
> deleteIncrement|10
> timeBetweenErrorEvents|300000
> hoursToKeepSamples|24
> deleteMultiplier|1000
> lastErrorSent|2011-07-03 12:46:47.000000
> etlVersion|4.3.0
> dwhAggregationDebug|false
> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbPassword|**********************
>
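> For reference, these are the commands I'm using to restart and inspect
> the dwh service (unit name taken from the log file name, so correct me
> if that's wrong):
>
>   systemctl restart ovirt-engine-dwhd
>   systemctl status ovirt-engine-dwhd
>   journalctl -u ovirt-engine-dwhd --since today
>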
> I have a hosted engine, two hosts, and FC-based storage. The hosts are
> still running 4.2 because I'm unable to migrate VMs off them.
>
> I have plenty of resources available in terms of CPU and memory on the
> destination host, and my cluster compatibility version is set to 4.2
> because my hosts are still on 4.2.
>
> I recently upgraded from 4.1 to 4.2 and then upgraded my hosts to 4.2 as
> well, but I can't take the hosts to 4.3 because of the migration issue
> above.
>
> Below are my installed oVirt packages...
>
> ovirt-ansible-cluster-upgrade-1.1.13-1.el7.noarch
> ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
> ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
> ovirt-ansible-hosted-engine-setup-1.0.20-1.el7.noarch
> ovirt-ansible-image-template-1.1.11-1.el7.noarch
> ovirt-ansible-infra-1.1.12-1.el7.noarch
> ovirt-ansible-manageiq-1.1.14-1.el7.noarch
> ovirt-ansible-repositories-1.1.5-1.el7.noarch
> ovirt-ansible-roles-1.1.6-1.el7.noarch
> ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch
> ovirt-ansible-vm-infra-1.1.18-1.el7.noarch
> ovirt-cockpit-sso-0.1.1-1.el7.noarch
> ovirt-engine-4.3.4.3-1.el7.noarch
> ovirt-engine-api-explorer-0.0.5-1.el7.noarch
> ovirt-engine-backend-4.3.4.3-1.el7.noarch
> ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
> ovirt-engine-dbscripts-4.3.4.3-1.el7.noarch
> ovirt-engine-dwh-4.3.0-1.el7.noarch
> ovirt-engine-dwh-setup-4.3.0-1.el7.noarch
> ovirt-engine-extension-aaa-jdbc-1.1.10-1.el7.noarch
> ovirt-engine-extensions-api-impl-4.3.4.3-1.el7.noarch
> ovirt-engine-metrics-1.3.3.1-1.el7.noarch
> ovirt-engine-restapi-4.3.4.3-1.el7.noarch
> ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
> ovirt-engine-setup-4.3.4.3-1.el7.noarch
> ovirt-engine-setup-base-4.3.4.3-1.el7.noarch
> ovirt-engine-setup-plugin-cinderlib-4.3.4.3-1.el7.noarch
> ovirt-engine-setup-plugin-ovirt-engine-4.3.4.3-1.el7.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-4.3.4.3-1.el7.noarch
> ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.3.4.3-1.el7.noarch
> ovirt-engine-setup-plugin-websocket-proxy-4.3.4.3-1.el7.noarch
> ovirt-engine-tools-4.3.4.3-1.el7.noarch
> ovirt-engine-tools-backup-4.3.4.3-1.el7.noarch
> ovirt-engine-ui-extensions-1.0.5-1.el7.noarch
> ovirt-engine-vmconsole-proxy-helper-4.3.4.3-1.el7.noarch
> ovirt-engine-webadmin-portal-4.3.4.3-1.el7.noarch
> ovirt-engine-websocket-proxy-4.3.4.3-1.el7.noarch
> ovirt-engine-wildfly-15.0.1-1.el7.x86_64
> ovirt-engine-wildfly-overlay-15.0.1-1.el7.noarch
> ovirt-guest-agent-common-1.0.16-1.el7.noarch
> ovirt-guest-tools-iso-4.3-3.el7.noarch
> ovirt-host-deploy-common-1.8.0-1.el7.noarch
> ovirt-host-deploy-java-1.8.0-1.el7.noarch
> ovirt-imageio-common-1.5.1-0.el7.x86_64
> ovirt-imageio-proxy-1.5.1-0.el7.noarch
> ovirt-imageio-proxy-setup-1.5.1-0.el7.noarch
> ovirt-iso-uploader-4.3.1-1.el7.noarch
> ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch
> ovirt-provider-ovn-1.2.22-1.el7.noarch
> ovirt-release41-4.1.9-1.el7.centos.noarch
> ovirt-release42-4.2.8-1.el7.noarch
> ovirt-release43-4.3.4-1.el7.noarch
> ovirt-vmconsole-1.0.7-2.el7.noarch
> ovirt-vmconsole-proxy-1.0.7-2.el7.noarch
> ovirt-web-ui-1.5.2-1.el7.noarch
> python2-ovirt-engine-lib-4.3.4.3-1.el7.noarch
> python2-ovirt-host-deploy-1.8.0-1.el7.noarch
> python2-ovirt-setup-lib-1.2.0-1.el7.noarch
> python-ovirt-engine-sdk4-4.3.1-2.el7.x86_64
>
> [root@dell-ovirt ~]# rpm -qa | grep postgre
> rh-postgresql10-postgresql-contrib-10.6-1.el7.x86_64
> rh-postgresql10-postgresql-10.6-1.el7.x86_64
> postgresql-libs-9.2.24-1.el7_5.x86_64
> collectd-postgresql-5.8.1-4.el7.x86_64
> postgresql-server-9.2.24-1.el7_5.x86_64
> rh-postgresql10-postgresql-server-10.6-1.el7.x86_64
> rh-postgresql95-postgresql-9.5.14-1.el7.x86_64
> rh-postgresql95-postgresql-contrib-9.5.14-1.el7.x86_64
> postgresql-jdbc-9.2.1002-6.el7_5.noarch
> rh-postgresql10-runtime-3.1-1.el7.x86_64
> rh-postgresql95-postgresql-libs-9.5.14-1.el7.x86_64
> rh-postgresql10-postgresql-libs-10.6-1.el7.x86_64
> postgresql-9.2.24-1.el7_5.x86_64
> rh-postgresql95-runtime-2.2-2.el7.x86_64
> rh-postgresql95-postgresql-server-9.5.14-1.el7.x86_64
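>
> With three PostgreSQL versions installed, maybe it's worth confirming
> which one the engine actually points at - assuming the default
> engine-setup config location and SCL unit name, something like:
>
>   grep ENGINE_DB_ /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
>   systemctl status rh-postgresql10-postgresql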
>
> I'm also seeing a strange error on my hosts when I log in, showing...
>
> node status: DEGRADED
> Please check the status manually using `nodectl check`
>
> [root@host-a ~]# nodectl check
> Status: FAILED
> Bootloader ... OK
> Layer boot entries ... OK
> Valid boot entries ... OK
> Mount points ... OK
> Separate /var ... OK
> Discard is used ... OK
> Basic storage ... OK
> Initialized VG ... OK
> Initialized Thin Pool ... OK
> Initialized LVs ... OK
> Thin storage ... FAILED - It looks like the LVM layout is not correct.
> The reason could be an incorrect installation.
> Checking available space in thinpool ... OK
> Checking thinpool auto-extend ... FAILED - In order to enable thinpool
> auto-extend, activation/thin_pool_autoextend_threshold needs to be set
> below 100 in lvm.conf
> vdsmd ... OK
>
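> If I'm reading that last FAILED item right, the fix it wants is just this
> in /etc/lvm/lvm.conf (the 80 is my guess; the check only says "below 100"):
>
>   activation {
>       thin_pool_autoextend_threshold = 80
>   }
>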
> I'm running CentOS Linux release 7.6.1810 (Core)
>
> These are my package versions on my hosts...
>
> [root@host-a ~]# rpm -qa | grep -i ovirt
> ovirt-release41-4.1.9-1.el7.centos.noarch
> ovirt-hosted-engine-ha-2.2.19-1.el7.noarch
> ovirt-host-deploy-1.7.4-1.el7.noarch
> ovirt-node-ng-nodectl-4.2.0-0.20190121.0.el7.noarch
> ovirt-vmconsole-host-1.0.6-2.el7.noarch
> ovirt-provider-ovn-driver-1.2.18-1.el7.noarch
> ovirt-engine-appliance-4.2-20190121.1.el7.noarch
> ovirt-release42-4.2.8-1.el7.noarch
> ovirt-release43-4.3.4-1.el7.noarch
> python-ovirt-engine-sdk4-4.2.9-2.el7.x86_64
> cockpit-ovirt-dashboard-0.11.38-1.el7.noarch
> ovirt-imageio-daemon-1.4.6-1.el7.noarch
> ovirt-host-4.2.3-1.el7.x86_64
> ovirt-setup-lib-1.1.5-1.el7.noarch
> ovirt-node-ng-image-update-4.2.8-1.el7.noarch
> ovirt-imageio-common-1.4.6-1.el7.x86_64
> ovirt-vmconsole-1.0.6-2.el7.noarch
> ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
> ovirt-host-dependencies-4.2.3-1.el7.x86_64
> ovirt-release-host-node-4.2.8-1.el7.noarch
> cockpit-machines-ovirt-193-2.el7.noarch
> ovirt-hosted-engine-setup-2.2.33-1.el7.noarch
>
> [root@host-a ~]# rpm -qa | grep -i vdsm
> vdsm-http-4.20.46-1.el7.noarch
> vdsm-common-4.20.46-1.el7.noarch
> vdsm-network-4.20.46-1.el7.x86_64
> vdsm-jsonrpc-4.20.46-1.el7.noarch
> vdsm-4.20.46-1.el7.x86_64
> vdsm-hook-ethtool-options-4.20.46-1.el7.noarch
> vdsm-hook-vhostmd-4.20.46-1.el7.noarch
> vdsm-python-4.20.46-1.el7.noarch
> vdsm-api-4.20.46-1.el7.noarch
> vdsm-yajsonrpc-4.20.46-1.el7.noarch
> vdsm-hook-fcoe-4.20.46-1.el7.noarch
> vdsm-hook-openstacknet-4.20.46-1.el7.noarch
> vdsm-client-4.20.46-1.el7.noarch
> vdsm-gluster-4.20.46-1.el7.x86_64
> vdsm-hook-vmfex-dev-4.20.46-1.el7.noarch
>
> I am also seeing the following warning in my vdsm.log every minute or
> so...
>
> 2019-07-09 12:50:31,543+0200 WARN (qgapoller/2)
> [virt.periodic.VmDispatcher] could not run <function <lambda> at
> 0x7f52b01b85f0> on ['9a6561b8-5702-43dc-9e92-1dc5dfed4eef'] (periodic:323)
>
> Then there's also this under /var/log/messages...
>
> Jul 9 12:57:48 host-a ovs-vsctl:
> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database
> connection failed (No such file or directory)
>
> I'm not using OVN, so I'm guessing this can be ignored.
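> In case it does matter after all, I assume the missing socket just means
> the Open vSwitch daemons aren't running, which something like this would
> confirm:
>
>   systemctl status openvswitch ovsdb-server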
>
> If I search for ERROR or WARN in my logs, nothing relevant comes up.
>
> Any suggestions on what to start looking for, please?
>
> Please let me know if you need further info.
>
> Thank you.
>
> Regards.
>
> Neil Wilson
>