Manual Migration not working and Dashboard broken after 4.3.4 update

Hi guys.

I have two problems since upgrading from 4.2.x to 4.3.4.

The first issue is that I can no longer manually migrate VMs between hosts. I get an error in the oVirt GUI that says "Could not fetch data needed for VM migrate operation", and nothing gets logged in either my engine.log or my vdsm.log.

The other issue is that my Dashboard says the following: "Error! Could not fetch dashboard data. Please ensure that data warehouse is properly installed and configured."

If I look at my ovirt-engine-dwhd.log I see the following when I try to restart the dwh service...

2019-07-09 11:48:04|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory hoursToKeepDaily|0 hoursToKeepHourly|720 ovirtEngineDbPassword|********************** runDeleteTime|3 ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory runInterleave|60 limitRows|limit 1000 ovirtEngineHistoryDbUser|ovirt_engine_history ovirtEngineDbUser|engine deleteIncrement|10 timeBetweenErrorEvents|300000 hoursToKeepSamples|24 deleteMultiplier|1000 lastErrorSent|2011-07-03 12:46:47.000000 etlVersion|4.3.0 dwhAggregationDebug|false dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316 ovirtEngineHistoryDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbPassword|**********************
2019-07-09 11:48:10|ETL Service Stopped
2019-07-09 11:49:59|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory hoursToKeepDaily|0 hoursToKeepHourly|720 ovirtEngineDbPassword|********************** runDeleteTime|3 ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory runInterleave|60 limitRows|limit 1000 ovirtEngineHistoryDbUser|ovirt_engine_history ovirtEngineDbUser|engine deleteIncrement|10 timeBetweenErrorEvents|300000 hoursToKeepSamples|24 deleteMultiplier|1000 lastErrorSent|2011-07-03 12:46:47.000000 etlVersion|4.3.0 dwhAggregationDebug|false dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316 ovirtEngineHistoryDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbPassword|**********************
2019-07-09 11:52:56|ETL Service Stopped
2019-07-09 11:52:57|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory hoursToKeepDaily|0 hoursToKeepHourly|720 ovirtEngineDbPassword|********************** runDeleteTime|3 ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory runInterleave|60 limitRows|limit 1000 ovirtEngineHistoryDbUser|ovirt_engine_history ovirtEngineDbUser|engine deleteIncrement|10 timeBetweenErrorEvents|300000 hoursToKeepSamples|24 deleteMultiplier|1000 lastErrorSent|2011-07-03 12:46:47.000000 etlVersion|4.3.0 dwhAggregationDebug|false dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316 ovirtEngineHistoryDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbPassword|**********************
2019-07-09 12:16:01|ETL Service Stopped
2019-07-09 12:16:45|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory hoursToKeepDaily|0 hoursToKeepHourly|720 ovirtEngineDbPassword|********************** runDeleteTime|3 ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory runInterleave|60 limitRows|limit 1000 ovirtEngineHistoryDbUser|ovirt_engine_history ovirtEngineDbUser|engine deleteIncrement|10 timeBetweenErrorEvents|300000 hoursToKeepSamples|24 deleteMultiplier|1000 lastErrorSent|2011-07-03 12:46:47.000000 etlVersion|4.3.0 dwhAggregationDebug|false dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316 ovirtEngineHistoryDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbPassword|**********************

I have a hosted engine, I have two hosts, and my storage is FC based. The hosts are still running 4.2 because I'm unable to migrate VMs off them.

I have plenty of resources available in terms of CPU and memory on the destination host, and my cluster version is set to 4.2 because my hosts are still on 4.2.

I recently upgraded from 4.1 to 4.2 and then upgraded my hosts to 4.2 as well, but I can't get my hosts to 4.3 because of the migration issue above.

Below are my installed oVirt packages...

ovirt-ansible-cluster-upgrade-1.1.13-1.el7.noarch ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch ovirt-ansible-engine-setup-1.1.9-1.el7.noarch ovirt-ansible-hosted-engine-setup-1.0.20-1.el7.noarch ovirt-ansible-image-template-1.1.11-1.el7.noarch ovirt-ansible-infra-1.1.12-1.el7.noarch ovirt-ansible-manageiq-1.1.14-1.el7.noarch ovirt-ansible-repositories-1.1.5-1.el7.noarch ovirt-ansible-roles-1.1.6-1.el7.noarch ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch ovirt-ansible-vm-infra-1.1.18-1.el7.noarch ovirt-cockpit-sso-0.1.1-1.el7.noarch ovirt-engine-4.3.4.3-1.el7.noarch ovirt-engine-api-explorer-0.0.5-1.el7.noarch ovirt-engine-backend-4.3.4.3-1.el7.noarch ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch ovirt-engine-dbscripts-4.3.4.3-1.el7.noarch ovirt-engine-dwh-4.3.0-1.el7.noarch ovirt-engine-dwh-setup-4.3.0-1.el7.noarch ovirt-engine-extension-aaa-jdbc-1.1.10-1.el7.noarch ovirt-engine-extensions-api-impl-4.3.4.3-1.el7.noarch ovirt-engine-metrics-1.3.3.1-1.el7.noarch ovirt-engine-restapi-4.3.4.3-1.el7.noarch ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch ovirt-engine-setup-4.3.4.3-1.el7.noarch ovirt-engine-setup-base-4.3.4.3-1.el7.noarch ovirt-engine-setup-plugin-cinderlib-4.3.4.3-1.el7.noarch ovirt-engine-setup-plugin-ovirt-engine-4.3.4.3-1.el7.noarch ovirt-engine-setup-plugin-ovirt-engine-common-4.3.4.3-1.el7.noarch ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.3.4.3-1.el7.noarch ovirt-engine-setup-plugin-websocket-proxy-4.3.4.3-1.el7.noarch ovirt-engine-tools-4.3.4.3-1.el7.noarch ovirt-engine-tools-backup-4.3.4.3-1.el7.noarch ovirt-engine-ui-extensions-1.0.5-1.el7.noarch ovirt-engine-vmconsole-proxy-helper-4.3.4.3-1.el7.noarch ovirt-engine-webadmin-portal-4.3.4.3-1.el7.noarch ovirt-engine-websocket-proxy-4.3.4.3-1.el7.noarch ovirt-engine-wildfly-15.0.1-1.el7.x86_64 ovirt-engine-wildfly-overlay-15.0.1-1.el7.noarch ovirt-guest-agent-common-1.0.16-1.el7.noarch ovirt-guest-tools-iso-4.3-3.el7.noarch ovirt-host-deploy-common-1.8.0-1.el7.noarch ovirt-host-deploy-java-1.8.0-1.el7.noarch ovirt-imageio-common-1.5.1-0.el7.x86_64 ovirt-imageio-proxy-1.5.1-0.el7.noarch ovirt-imageio-proxy-setup-1.5.1-0.el7.noarch ovirt-iso-uploader-4.3.1-1.el7.noarch ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch ovirt-provider-ovn-1.2.22-1.el7.noarch ovirt-release41-4.1.9-1.el7.centos.noarch ovirt-release42-4.2.8-1.el7.noarch ovirt-release43-4.3.4-1.el7.noarch ovirt-vmconsole-1.0.7-2.el7.noarch ovirt-vmconsole-proxy-1.0.7-2.el7.noarch ovirt-web-ui-1.5.2-1.el7.noarch python2-ovirt-engine-lib-4.3.4.3-1.el7.noarch python2-ovirt-host-deploy-1.8.0-1.el7.noarch python2-ovirt-setup-lib-1.2.0-1.el7.noarch python-ovirt-engine-sdk4-4.3.1-2.el7.x86_64

[root@dell-ovirt ~]# rpm -qa | grep postgre
rh-postgresql10-postgresql-contrib-10.6-1.el7.x86_64 rh-postgresql10-postgresql-10.6-1.el7.x86_64 postgresql-libs-9.2.24-1.el7_5.x86_64 collectd-postgresql-5.8.1-4.el7.x86_64 postgresql-server-9.2.24-1.el7_5.x86_64 rh-postgresql10-postgresql-server-10.6-1.el7.x86_64 rh-postgresql95-postgresql-9.5.14-1.el7.x86_64 rh-postgresql95-postgresql-contrib-9.5.14-1.el7.x86_64 postgresql-jdbc-9.2.1002-6.el7_5.noarch rh-postgresql10-runtime-3.1-1.el7.x86_64 rh-postgresql95-postgresql-libs-9.5.14-1.el7.x86_64 rh-postgresql10-postgresql-libs-10.6-1.el7.x86_64 postgresql-9.2.24-1.el7_5.x86_64 rh-postgresql95-runtime-2.2-2.el7.x86_64 rh-postgresql95-postgresql-server-9.5.14-1.el7.x86_64

I'm also seeing a strange error on my hosts when I log in, showing...

node status: DEGRADED
Please check the status manually using `nodectl check`

[root@host-a ~]# nodectl check
Status: FAILED
Bootloader ... OK
Layer boot entries ... OK
Valid boot entries ... OK
Mount points ... OK
Separate /var ... OK
Discard is used ... OK
Basic storage ... OK
Initialized VG ... OK
Initialized Thin Pool ... OK
Initialized LVs ... OK
Thin storage ... FAILED - It looks like the LVM layout is not correct. The reason could be an incorrect installation.
Checking available space in thinpool ... OK
Checking thinpool auto-extend ... FAILED - In order to enable thinpool auto-extend, activation/thin_pool_autoextend_threshold needs to be set below 100 in lvm.conf
vdsmd ... OK

I'm running CentOS Linux release 7.6.1810 (Core).

These are my package versions on my hosts...

[root@host-a ~]# rpm -qa | grep -i ovirt
ovirt-release41-4.1.9-1.el7.centos.noarch ovirt-hosted-engine-ha-2.2.19-1.el7.noarch ovirt-host-deploy-1.7.4-1.el7.noarch ovirt-node-ng-nodectl-4.2.0-0.20190121.0.el7.noarch ovirt-vmconsole-host-1.0.6-2.el7.noarch ovirt-provider-ovn-driver-1.2.18-1.el7.noarch ovirt-engine-appliance-4.2-20190121.1.el7.noarch ovirt-release42-4.2.8-1.el7.noarch ovirt-release43-4.3.4-1.el7.noarch python-ovirt-engine-sdk4-4.2.9-2.el7.x86_64 cockpit-ovirt-dashboard-0.11.38-1.el7.noarch ovirt-imageio-daemon-1.4.6-1.el7.noarch ovirt-host-4.2.3-1.el7.x86_64 ovirt-setup-lib-1.1.5-1.el7.noarch ovirt-node-ng-image-update-4.2.8-1.el7.noarch ovirt-imageio-common-1.4.6-1.el7.x86_64 ovirt-vmconsole-1.0.6-2.el7.noarch ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch ovirt-host-dependencies-4.2.3-1.el7.x86_64 ovirt-release-host-node-4.2.8-1.el7.noarch cockpit-machines-ovirt-193-2.el7.noarch ovirt-hosted-engine-setup-2.2.33-1.el7.noarch

[root@host-a ~]# rpm -qa | grep -i vdsm
vdsm-http-4.20.46-1.el7.noarch vdsm-common-4.20.46-1.el7.noarch vdsm-network-4.20.46-1.el7.x86_64 vdsm-jsonrpc-4.20.46-1.el7.noarch vdsm-4.20.46-1.el7.x86_64 vdsm-hook-ethtool-options-4.20.46-1.el7.noarch vdsm-hook-vhostmd-4.20.46-1.el7.noarch vdsm-python-4.20.46-1.el7.noarch vdsm-api-4.20.46-1.el7.noarch vdsm-yajsonrpc-4.20.46-1.el7.noarch vdsm-hook-fcoe-4.20.46-1.el7.noarch vdsm-hook-openstacknet-4.20.46-1.el7.noarch vdsm-client-4.20.46-1.el7.noarch vdsm-gluster-4.20.46-1.el7.x86_64 vdsm-hook-vmfex-dev-4.20.46-1.el7.noarch

I am also seeing the following error every minute or so in my vdsm.log...

2019-07-09 12:50:31,543+0200 WARN (qgapoller/2) [virt.periodic.VmDispatcher] could not run <function <lambda> at 0x7f52b01b85f0> on ['9a6561b8-5702-43dc-9e92-1dc5dfed4eef'] (periodic:323)

Then also under /var/log/messages...

Jul 9 12:57:48 host-a ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)

I'm not using OVN, so I'm guessing this can be ignored.

If I search for ERROR or WARN in my logs, nothing relevant is logged.

Any suggestions on what to start looking for please? Please let me know if you need further info.

Thank you.
Regards.
Neil Wilson
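For the dashboard error specifically, a quick sanity check is whether the DWH service is running and whether the engine still sees it as connected - roughly along these lines (a sketch only; the "engine" database name and the rh-postgresql10 collection are taken from the log and package list above, and the table/variable names are from memory, so verify them before relying on this):

  systemctl status ovirt-engine-dwhd
  su - postgres -c 'scl enable rh-postgresql10 -- psql engine -c "select var_name, var_value from dwh_history_timekeeping;"'

If the service is up but values such as DwhCurrentlyRunning or lastSync look wrong or stale, that would point at the engine/DWH link rather than at the ETL process itself.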

Hi Neil, for "Could not fetch data needed for VM migrate operation" - there was a bug and it was fixed.Are you sure you have fully updated ?What procedure did you use ? Best Regards,Strahil Nikolov В вторник, 9 юли 2019 г., 7:26:21 ч. Гринуич-4, Neil <nwilson123@gmail.com> написа: Hi guys. I have two problems since upgrading from 4.2.x to 4.3.4 First issue is I can no longer manually migrate VM's between hosts, I get an error in the ovirt GUI that says "Could not fetch data needed for VM migrate operation" and nothing gets logged either in my engine.log or my vdsm.log Then the other issue is my Dashboard says the following "Error! Could not fetch dashboard data. Please ensure that data warehouse is properly installed and configured." If I look at my ovirt-engine-dwhd.log I see the following if I try restart the dwh service... 2019-07-09 11:48:04|ETL Service Started ovirtEngineDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory hoursToKeepDaily|0 hoursToKeepHourly|720 ovirtEngineDbPassword|********************** runDeleteTime|3 ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory runInterleave|60 limitRows|limit 1000 ovirtEngineHistoryDbUser|ovirt_engine_history ovirtEngineDbUser|engine deleteIncrement|10 timeBetweenErrorEvents|300000 hoursToKeepSamples|24 deleteMultiplier|1000 lastErrorSent|2011-07-03 12:46:47.000000 etlVersion|4.3.0 dwhAggregationDebug|false dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316 ovirtEngineHistoryDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbPassword|********************** 2019-07-09 11:48:10|ETL Service Stopped 2019-07-09 11:49:59|ETL Service Started ovirtEngineDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory hoursToKeepDaily|0 hoursToKeepHourly|720 ovirtEngineDbPassword|********************** runDeleteTime|3 ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory runInterleave|60 limitRows|limit 1000 ovirtEngineHistoryDbUser|ovirt_engine_history ovirtEngineDbUser|engine deleteIncrement|10 timeBetweenErrorEvents|300000 hoursToKeepSamples|24 deleteMultiplier|1000 lastErrorSent|2011-07-03 12:46:47.000000 etlVersion|4.3.0 dwhAggregationDebug|false dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316 ovirtEngineHistoryDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbPassword|********************** 2019-07-09 11:52:56|ETL Service Stopped 2019-07-09 11:52:57|ETL Service Started ovirtEngineDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory hoursToKeepDaily|0 hoursToKeepHourly|720 ovirtEngineDbPassword|********************** runDeleteTime|3 ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory runInterleave|60 limitRows|limit 1000 ovirtEngineHistoryDbUser|ovirt_engine_history ovirtEngineDbUser|engine deleteIncrement|10 timeBetweenErrorEvents|300000 hoursToKeepSamples|24 deleteMultiplier|1000 lastErrorSent|2011-07-03 12:46:47.000000 etlVersion|4.3.0 dwhAggregationDebug|false dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316 ovirtEngineHistoryDbDriverClass|org.postgresql.Driver 
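To rule out a partially applied update on the engine side, something along these lines should show whether anything oVirt-related is still pending (a sketch, assuming the standard repos are enabled):

  rpm -q ovirt-engine ovirt-engine-webadmin-portal
  engine-upgrade-check
  yum list updates 'ovirt-*'

If yum still lists pending ovirt-* packages, or engine-upgrade-check reports an available upgrade, re-running the update and engine-setup would be the first thing to try.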

Hi Strahil,

Thanks for the quick reply. I put the cluster into global maintenance, then installed the 4.3 repo, then ran "yum update ovirt\*setup\*", then "engine-upgrade-check" and "engine-setup", and then "yum update". Once that completed, I rebooted the hosted-engine VM and took the cluster out of global maintenance.

Thinking back to the upgrade from 4.1 to 4.2, I don't recall doing a "yum update" after running engine-setup - not sure if this could perhaps have caused it?

Thank you.
Regards.
Neil Wilson.

On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Hi Neil,
for "Could not fetch data needed for VM migrate operation" - there was a bug and it was fixed. Are you sure you have fully updated ? What procedure did you use ?
Best Regards, Strahil Nikolov
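On the "Checking thinpool auto-extend ... FAILED" result from nodectl check in the original message: the check itself names the fix - setting activation/thin_pool_autoextend_threshold below 100 in /etc/lvm/lvm.conf on the host, for example (the values here are only an illustration, not a recommendation):

  activation {
      thin_pool_autoextend_threshold = 80
      thin_pool_autoextend_percent = 20
  }

and then re-running nodectl check to confirm the warning clears.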

Apologies, this should read... "I put the cluster into global maintenance, then installed the 4.3 repo, then "engine-upgrade-check", then "yum update ovirt\*setup\*" and then "engine-setup"..."

On Tue, Jul 9, 2019 at 4:08 PM Neil <nwilson123@gmail.com> wrote:
Hi Strahil,
Thanks for the quick reply. I put the cluster into global maintenance, then installed the 4.3 repo, then "yum update ovirt\*setup\*" then "engine-upgrade-check", "engine-setup", then "yum update", once completed, I rebooted the hosted-engine VM, and took the cluster out of global maintenance.
Thinking back to the upgrade from 4.1 to 4.2 I don't recall doing a "yum update" after doing the engine-setup, not sure if this would cause it perhaps?
Thank you. Regards. Neil Wilson.
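For reference, that corrected order matches the usual engine minor-upgrade flow, which is roughly the following (a sketch based on the 4.3 upgrade documentation; the release RPM URL is quoted from memory, so double-check it):

  yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
  engine-upgrade-check
  yum update ovirt\*setup\*
  engine-setup
  yum update
  # reboot the engine VM afterwards if the kernel or core libraries were updated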

Shouldn't cause that problem.

You have to find the bug in bugzilla and report a regression (if it's not closed), or open a new one and report the regression. As far as I remember, only the dashboard was affected, due to new features about vdo disk savings.

About the VM - this should be another issue. What agent are you using in the VMs (ovirt or qemu)?

Best Regards,
Strahil Nikolov

I remember seeing the bug earlier, but because it was closed I thought it was unrelated. This appears to be it: https://bugzilla.redhat.com/show_bug.cgi?id=1670701

Perhaps I'm not understanding your question about the VM guest agent, but I don't have any guest agent currently installed on the VM. I'm not sure whether the output of my qemu-kvm process answers this question:

/usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1 -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2019-07-09T10:26:53,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/59831b91-00a5-01e4-0294-000000000018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,fd=35,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,fd=36,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on

Please shout if you need further info. Thanks.

On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Shouldn't cause that problem.
You have to find the bug in bugzilla and report a regression (if it's not closed), or open a new one and report the regression. As far as I remember, only the dashboard was affected, due to new features about VDO disk savings.
About the VM - this should be another issue. What agent are you using in the VMs (oVirt or QEMU)?
Best Regards, Strahil Nikolov
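To make the agent question above concrete: the qemu-kvm command line higher up only shows that a virtio-serial channel named org.qemu.guest_agent.0 is defined, not that anything is listening on it. A minimal way to check, assuming an EL7 guest and the stock agent package/service names (adjust for other guest OSes):

# inside the guest
rpm -q ovirt-guest-agent-common qemu-guest-agent
systemctl status ovirt-guest-agent qemu-guest-agent

# on the host, read-only query of the running domain's channels
virsh -r dumpxml Headoffice.cbl-ho.local | grep -A3 '<channel'

If neither agent is installed in the guest, the qgapoller warning that shows up in vdsm.log later in the thread is probably just that polling failing, rather than a separate fault.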
On Tuesday, 9 July 2019, 10:09:05 GMT-4, Neil <nwilson123@gmail.com> wrote:
Hi Strahil,
Thanks for the quick reply. I put the cluster into global maintenance, then installed the 4.3 repo, then ran "yum update ovirt\*setup\*", "engine-upgrade-check", "engine-setup", and finally "yum update". Once that completed, I rebooted the hosted-engine VM and took the cluster out of global maintenance.
Thinking back to the upgrade from 4.1 to 4.2, I don't recall doing a "yum update" after running engine-setup; could that perhaps be the cause?
Thank you. Regards. Neil Wilson.
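For reference, the procedure described above as a command sequence - a minimal sketch, assuming the standard ovirt-release43 RPM location and that the engine steps run on the hosted-engine VM; it also adds a quick check for anything still pending, in case the final "yum update" was missed:

hosted-engine --set-maintenance --mode=global    # from a host
# on the engine VM:
yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
yum update "ovirt-*-setup*"
engine-upgrade-check
engine-setup
yum update
# verify nothing oVirt-related is still outstanding
rpm -q ovirt-engine ovirt-engine-dwh
yum check-update 'ovirt-*'
# reboot the engine VM, then from a host:
hosted-engine --set-maintenance --mode=none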
On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Hi Neil,
for "Could not fetch data needed for VM migrate operation" - there was a bug and it was fixed. Are you sure you have fully updated ? What procedure did you use ?
Best Regards, Strahil Nikolov
On Tuesday, 9 July 2019, 7:26:21 GMT-4, Neil <nwilson123@gmail.com> wrote:
Hi guys.
I have two problems since upgrading from 4.2.x to 4.3.4
First issue is I can no longer manually migrate VM's between hosts, I get an error in the ovirt GUI that says "Could not fetch data needed for VM migrate operation" and nothing gets logged either in my engine.log or my vdsm.log
Then the other issue is my Dashboard says the following "Error! Could not fetch dashboard data. Please ensure that data warehouse is properly installed and configured."
If I look at my ovirt-engine-dwhd.log I see the following if I try restart the dwh service...
2019-07-09 11:48:04|ETL Service Started ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory hoursToKeepDaily|0 hoursToKeepHourly|720 ovirtEngineDbPassword|********************** runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory runInterleave|60 limitRows|limit 1000 ovirtEngineHistoryDbUser|ovirt_engine_history ovirtEngineDbUser|engine deleteIncrement|10 timeBetweenErrorEvents|300000 hoursToKeepSamples|24 deleteMultiplier|1000 lastErrorSent|2011-07-03 12:46:47.000000 etlVersion|4.3.0 dwhAggregationDebug|false dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316 ovirtEngineHistoryDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbPassword|********************** 2019-07-09 11:48:10|ETL Service Stopped 2019-07-09 11:49:59|ETL Service Started ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory hoursToKeepDaily|0 hoursToKeepHourly|720 ovirtEngineDbPassword|********************** runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory runInterleave|60 limitRows|limit 1000 ovirtEngineHistoryDbUser|ovirt_engine_history ovirtEngineDbUser|engine deleteIncrement|10 timeBetweenErrorEvents|300000 hoursToKeepSamples|24 deleteMultiplier|1000 lastErrorSent|2011-07-03 12:46:47.000000 etlVersion|4.3.0 dwhAggregationDebug|false dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316 ovirtEngineHistoryDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbPassword|********************** 2019-07-09 11:52:56|ETL Service Stopped 2019-07-09 11:52:57|ETL Service Started ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory hoursToKeepDaily|0 hoursToKeepHourly|720 ovirtEngineDbPassword|********************** runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory runInterleave|60 limitRows|limit 1000 ovirtEngineHistoryDbUser|ovirt_engine_history ovirtEngineDbUser|engine deleteIncrement|10 timeBetweenErrorEvents|300000 hoursToKeepSamples|24 deleteMultiplier|1000 lastErrorSent|2011-07-03 12:46:47.000000 etlVersion|4.3.0 dwhAggregationDebug|false dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316 ovirtEngineHistoryDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbPassword|********************** 2019-07-09 12:16:01|ETL Service Stopped 2019-07-09 12:16:45|ETL Service Started ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory hoursToKeepDaily|0 hoursToKeepHourly|720 ovirtEngineDbPassword|********************** runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory runInterleave|60 limitRows|limit 1000 ovirtEngineHistoryDbUser|ovirt_engine_history ovirtEngineDbUser|engine deleteIncrement|10 timeBetweenErrorEvents|300000 hoursToKeepSamples|24 deleteMultiplier|1000 lastErrorSent|2011-07-03 12:46:47.000000 etlVersion|4.3.0 dwhAggregationDebug|false dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316 ovirtEngineHistoryDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbPassword|**********************
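The dwhd log above only shows the ETL start/stop banner and its configuration dump, with no exception, so it may help to restart the service while following the full log, and to confirm what the engine database thinks about DWH. A sketch, assuming the default el7 service name, log path, and the rh-postgresql10 SCL shown in the package list (the table name is assumed from a standard 4.3 engine DB):

systemctl restart ovirt-engine-dwhd
tail -f /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
# heartbeat rows the dashboard relies on
su - postgres -c "scl enable rh-postgresql10 -- psql engine -c 'select var_name, var_value from dwh_history_timekeeping;'"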
I have a hosted engine, and I have two hosts and my storage is FC based. The hosts are still running on 4.2 because I'm unable to migrate VM's off.
I have plenty resources available in terms of CPU and Memory on the destination host, and my Cluster version is set to 4.2 because my hosts are still on 4.2
I have recently upgraded from 4.1 to 4.2 and then I upgraded my hosts to 4.2 as well, but I can't get my hosts to 4.3 because of the above migration issue.
Below my ovirt packages installed...
ovirt-ansible-cluster-upgrade-1.1.13-1.el7.noarch ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch ovirt-ansible-engine-setup-1.1.9-1.el7.noarch ovirt-ansible-hosted-engine-setup-1.0.20-1.el7.noarch ovirt-ansible-image-template-1.1.11-1.el7.noarch ovirt-ansible-infra-1.1.12-1.el7.noarch ovirt-ansible-manageiq-1.1.14-1.el7.noarch ovirt-ansible-repositories-1.1.5-1.el7.noarch ovirt-ansible-roles-1.1.6-1.el7.noarch ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch ovirt-ansible-vm-infra-1.1.18-1.el7.noarch ovirt-cockpit-sso-0.1.1-1.el7.noarch ovirt-engine-4.3.4.3-1.el7.noarch ovirt-engine-api-explorer-0.0.5-1.el7.noarch ovirt-engine-backend-4.3.4.3-1.el7.noarch ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch ovirt-engine-dbscripts-4.3.4.3-1.el7.noarch ovirt-engine-dwh-4.3.0-1.el7.noarch ovirt-engine-dwh-setup-4.3.0-1.el7.noarch ovirt-engine-extension-aaa-jdbc-1.1.10-1.el7.noarch ovirt-engine-extensions-api-impl-4.3.4.3-1.el7.noarch ovirt-engine-metrics-1.3.3.1-1.el7.noarch ovirt-engine-restapi-4.3.4.3-1.el7.noarch ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch ovirt-engine-setup-4.3.4.3-1.el7.noarch ovirt-engine-setup-base-4.3.4.3-1.el7.noarch ovirt-engine-setup-plugin-cinderlib-4.3.4.3-1.el7.noarch ovirt-engine-setup-plugin-ovirt-engine-4.3.4.3-1.el7.noarch ovirt-engine-setup-plugin-ovirt-engine-common-4.3.4.3-1.el7.noarch ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.3.4.3-1.el7.noarch ovirt-engine-setup-plugin-websocket-proxy-4.3.4.3-1.el7.noarch ovirt-engine-tools-4.3.4.3-1.el7.noarch ovirt-engine-tools-backup-4.3.4.3-1.el7.noarch ovirt-engine-ui-extensions-1.0.5-1.el7.noarch ovirt-engine-vmconsole-proxy-helper-4.3.4.3-1.el7.noarch ovirt-engine-webadmin-portal-4.3.4.3-1.el7.noarch ovirt-engine-websocket-proxy-4.3.4.3-1.el7.noarch ovirt-engine-wildfly-15.0.1-1.el7.x86_64 ovirt-engine-wildfly-overlay-15.0.1-1.el7.noarch ovirt-guest-agent-common-1.0.16-1.el7.noarch ovirt-guest-tools-iso-4.3-3.el7.noarch ovirt-host-deploy-common-1.8.0-1.el7.noarch ovirt-host-deploy-java-1.8.0-1.el7.noarch ovirt-imageio-common-1.5.1-0.el7.x86_64 ovirt-imageio-proxy-1.5.1-0.el7.noarch ovirt-imageio-proxy-setup-1.5.1-0.el7.noarch ovirt-iso-uploader-4.3.1-1.el7.noarch ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch ovirt-provider-ovn-1.2.22-1.el7.noarch ovirt-release41-4.1.9-1.el7.centos.noarch ovirt-release42-4.2.8-1.el7.noarch ovirt-release43-4.3.4-1.el7.noarch ovirt-vmconsole-1.0.7-2.el7.noarch ovirt-vmconsole-proxy-1.0.7-2.el7.noarch ovirt-web-ui-1.5.2-1.el7.noarch python2-ovirt-engine-lib-4.3.4.3-1.el7.noarch python2-ovirt-host-deploy-1.8.0-1.el7.noarch python2-ovirt-setup-lib-1.2.0-1.el7.noarch python-ovirt-engine-sdk4-4.3.1-2.el7.x86_64
[root@dell-ovirt ~]# rpm -qa | grep postgre rh-postgresql10-postgresql-contrib-10.6-1.el7.x86_64 rh-postgresql10-postgresql-10.6-1.el7.x86_64 postgresql-libs-9.2.24-1.el7_5.x86_64 collectd-postgresql-5.8.1-4.el7.x86_64 postgresql-server-9.2.24-1.el7_5.x86_64 rh-postgresql10-postgresql-server-10.6-1.el7.x86_64 rh-postgresql95-postgresql-9.5.14-1.el7.x86_64 rh-postgresql95-postgresql-contrib-9.5.14-1.el7.x86_64 postgresql-jdbc-9.2.1002-6.el7_5.noarch rh-postgresql10-runtime-3.1-1.el7.x86_64 rh-postgresql95-postgresql-libs-9.5.14-1.el7.x86_64 rh-postgresql10-postgresql-libs-10.6-1.el7.x86_64 postgresql-9.2.24-1.el7_5.x86_64 rh-postgresql95-runtime-2.2-2.el7.x86_64 rh-postgresql95-postgresql-server-9.5.14-1.el7.x86_64
I'm also seeing a strange error on my hosts when I log in showing...
node status: DEGRADED Please check the status manually using `nodectl check`
[root@host-a ~]# nodectl check Status: FAILED Bootloader ... OK Layer boot entries ... OK Valid boot entries ... OK Mount points ... OK Separate /var ... OK Discard is used ... OK Basic storage ... OK Initialized VG ... OK Initialized Thin Pool ... OK Initialized LVs ... OK Thin storage ... FAILED - It looks like the LVM layout is not correct. The reason could be an incorrect installation. Checking available space in thinpool ... OK Checking thinpool auto-extend ... FAILED - In order to enable thinpool auto-extend,activation/thin_pool_autoextend_threshold needs to be set below 100 in lvm.conf vdsmd ... OK
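For the failed thinpool auto-extend check, this is roughly the change nodectl is asking for - a sketch, assuming a stock oVirt Node LVM layout; the threshold/percent values are only examples:

lvmconfig activation/thin_pool_autoextend_threshold
# edit /etc/lvm/lvm.conf and set, for example:
#   thin_pool_autoextend_threshold = 80
#   thin_pool_autoextend_percent = 20
nodectl check    # re-run to confirm the check passes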
I'm running CentOS Linux release 7.6.1810 (Core)
These are my package versions on my hosts...
[root@host-a ~]# rpm -qa | grep -i ovirt ovirt-release41-4.1.9-1.el7.centos.noarch ovirt-hosted-engine-ha-2.2.19-1.el7.noarch ovirt-host-deploy-1.7.4-1.el7.noarch ovirt-node-ng-nodectl-4.2.0-0.20190121.0.el7.noarch ovirt-vmconsole-host-1.0.6-2.el7.noarch ovirt-provider-ovn-driver-1.2.18-1.el7.noarch ovirt-engine-appliance-4.2-20190121.1.el7.noarch ovirt-release42-4.2.8-1.el7.noarch ovirt-release43-4.3.4-1.el7.noarch python-ovirt-engine-sdk4-4.2.9-2.el7.x86_64 cockpit-ovirt-dashboard-0.11.38-1.el7.noarch ovirt-imageio-daemon-1.4.6-1.el7.noarch ovirt-host-4.2.3-1.el7.x86_64 ovirt-setup-lib-1.1.5-1.el7.noarch ovirt-node-ng-image-update-4.2.8-1.el7.noarch ovirt-imageio-common-1.4.6-1.el7.x86_64 ovirt-vmconsole-1.0.6-2.el7.noarch ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch ovirt-host-dependencies-4.2.3-1.el7.x86_64 ovirt-release-host-node-4.2.8-1.el7.noarch cockpit-machines-ovirt-193-2.el7.noarch ovirt-hosted-engine-setup-2.2.33-1.el7.noarch
[root@host-a ~]# rpm -qa | grep -i vdsm vdsm-http-4.20.46-1.el7.noarch vdsm-common-4.20.46-1.el7.noarch vdsm-network-4.20.46-1.el7.x86_64 vdsm-jsonrpc-4.20.46-1.el7.noarch vdsm-4.20.46-1.el7.x86_64 vdsm-hook-ethtool-options-4.20.46-1.el7.noarch vdsm-hook-vhostmd-4.20.46-1.el7.noarch vdsm-python-4.20.46-1.el7.noarch vdsm-api-4.20.46-1.el7.noarch vdsm-yajsonrpc-4.20.46-1.el7.noarch vdsm-hook-fcoe-4.20.46-1.el7.noarch vdsm-hook-openstacknet-4.20.46-1.el7.noarch vdsm-client-4.20.46-1.el7.noarch vdsm-gluster-4.20.46-1.el7.x86_64 vdsm-hook-vmfex-dev-4.20.46-1.el7.noarch
I am seeing the following error every minute or so in my vdsm.log as follows....
2019-07-09 12:50:31,543+0200 WARN (qgapoller/2) [virt.periodic.VmDispatcher] could not run <function <lambda> at 0x7f52b01b85f0> on ['9a6561b8-5702-43dc-9e92-1dc5dfed4eef'] (periodic:323)
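That UUID is the same one the qemu-kvm command line earlier in the thread was started with, so the warning appears to be the QEMU guest-agent poller failing against that guest. If you want to confirm the mapping from the host, vdsm-client (already in the installed package list) can do it - a sketch:

vdsm-client Host getVMList
vdsm-client VM getStats vmID=9a6561b8-5702-43dc-9e92-1dc5dfed4eef | grep -i vmname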
Then also under /var/log/messages..
Jul 9 12:57:48 host-a ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
I'm not using ovn so I'm guessing this can be ignored.
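That ovs-vsctl message only says the OVS database socket isn't there, which is consistent with openvswitch simply not running; if OVN isn't used it is likely harmless, but it is easy to confirm - a sketch, assuming the standard el7 unit names:

systemctl status openvswitch ovsdb-server
ls -l /var/run/openvswitch/db.sock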
If I search for ERROR or WARN in my logs nothing relevant is logged
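For the log search, a slightly narrower sweep that limits engine.log and vdsm.log to warnings/errors in the window when the migrate button was pressed might surface something - a sketch, assuming the default log paths:

# on the engine
grep -E 'ERROR|WARN' /var/log/ovirt-engine/engine.log | grep '2019-07-09 1[12]:'
# on each host
grep -E 'ERROR|WARN' /var/log/vdsm/vdsm.log | grep '2019-07-09 1[12]:'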
Any suggestions on what to start looking for please?
Please let me know if you need further info.
Thank you.
Regards.
Neil Wilson

Can you share the engine.log please? And highlight the exact time when you attempt that migrate action.
Thanks, michal
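One low-effort way to pin down that exact window is to follow the engine logs while clicking Migrate - a sketch, assuming the default paths (ui.log is included because webadmin-side failures that never reach the backend sometimes only show up there):

tail -f /var/log/ovirt-engine/engine.log /var/log/ovirt-engine/ui.log
# note the wall-clock time, click Migrate in the webadmin, then stop with Ctrl-C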

Hi Michal, Thanks for assisting. I've just done as requested however nothing is logged in the engine.log at the time I click Migrate, below is the log and I hit the Migrate button about 4 times between 09:35 and 09:36 and nothing was logged about this... 2019-07-10 09:35:57,967+02 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-14) [] User trouble@internal successfully logged in with scopes: ovirt-app-admin ovirt-app-api ovirt-app-portal ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access 2019-07-10 09:35:58,012+02 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-14) [2997034] Running command: CreateUserSessionCommand internal: false. 2019-07-10 09:35:58,021+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-14) [2997034] EVENT_ID: USER_VDC_LOGIN(30), User trouble@internal-authz connecting from '160.128.20.85' using session 'bv55G0wZznETUiQwjgjfUNje7wOsG4UDCuFunSslVeAFQkhdY2zzTY7du36ynTF5nW5U7JiPyr7gl9QDHfWuig==' logged in. 2019-07-10 09:36:58,304+02 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1, 5 threads waiting for tasks. 2019-07-10 09:36:58,305+02 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 16 threads waiting for tasks and 0 tasks in queue. 2019-07-10 09:36:58,305+02 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 0 threads out of 100, 100 threads waiting for tasks. 2019-07-10 09:36:58,305+02 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads waiting for tasks. 2019-07-10 09:36:58,305+02 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5, 2 threads waiting for tasks. The same is observed in the vdsm.log too, below is the log during the attempted migration.... 
2019-07-10 09:39:57,034+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:573) 2019-07-10 09:39:57,994+0200 INFO (jsonrpc/2) [api.host] START getStats() from=::ffff:10.0.1.1,57934 (api:46) 2019-07-10 09:39:57,994+0200 INFO (jsonrpc/2) [vdsm.api] START repoStats(domains=()) from=::ffff:10.0.1.1,57934, task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:46) 2019-07-10 09:39:57,994+0200 INFO (jsonrpc/2) [vdsm.api] FINISH repoStats return={u'8a607f8a-542a-473c-bb18-25c05fe2a3d4': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000194846', 'lastCheck': '2.4', 'valid': True}, u'37b1a5d7-4e29-4763-9337-63c51dbc5fc8': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000277154', 'lastCheck': '6.0', 'valid': True}, u'2558679a-2214-466b-8f05-06fdda9146e5': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000421988', 'lastCheck': '2.4', 'valid': True}, u'640a5875-3d82-43c0-860f-7bb3e4a7e6f0': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000228443', 'lastCheck': '2.4', 'valid': True}} from=::ffff:10.0.1.1,57934, task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:52) 2019-07-10 09:39:57,995+0200 INFO (jsonrpc/2) [vdsm.api] START multipath_health() from=::ffff:10.0.1.1,57934, task_id=fd7ad703-5096-4f09-99fa-54672cb4aad9 (api:46) 2019-07-10 09:39:57,995+0200 INFO (jsonrpc/2) [vdsm.api] FINISH multipath_health return={} from=::ffff:10.0.1.1,57934, task_id=fd7ad703-5096-4f09-99fa-54672cb4aad9 (api:52) 2019-07-10 09:39:58,002+0200 INFO (jsonrpc/2) [api.host] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics': {'42': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '43': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '24': {'cpuUser': '0.73', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.20'}, '25': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '99.93'}, '26': {'cpuUser': '5.59', 'nodeIndex': 0, 'cpuSys': '1.20', 'cpuIdle': '93.21'}, '27': {'cpuUser': '0.87', 'nodeIndex': 1, 'cpuSys': '0.60', 'cpuIdle': '98.53'}, '20': {'cpuUser': '0.53', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.34'}, '21': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '22': {'cpuUser': '0.40', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.40'}, '23': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '46': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '99.87'}, '47': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '44': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '45': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '28': {'cpuUser': '0.60', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.33'}, '29': {'cpuUser': '1.07', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '98.73'}, '40': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '41': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '1': {'cpuUser': '1.07', 'nodeIndex': 1, 'cpuSys': '1.13', 'cpuIdle': '97.80'}, '0': {'cpuUser': '0.60', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.20'}, '3': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.73'}, '2': {'cpuUser': '3.00', 'nodeIndex': 0, 'cpuSys': '0.53', 'cpuIdle': '96.47'}, '5': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.13', 
'cpuIdle': '99.67'}, '4': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.33'}, '7': {'cpuUser': '0.40', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.40'}, '6': {'cpuUser': '0.67', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.13'}, '9': {'cpuUser': '0.47', 'nodeIndex': 1, 'cpuSys': '0.40', 'cpuIdle': '99.13'}, '8': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.80'}, '39': {'cpuUser': '0.33', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.54'}, '38': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '99.93'}, '11': {'cpuUser': '0.67', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.06'}, '10': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.74'}, '13': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '12': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '15': {'cpuUser': '0.27', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.60'}, '14': {'cpuUser': '0.27', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.66'}, '17': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '16': {'cpuUser': '0.53', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.40'}, '19': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '18': {'cpuUser': '1.00', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '98.73'}, '31': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '30': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '37': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '36': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '35': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.33', 'cpuIdle': '99.47'}, '34': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '33': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '32': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}}, 'numaNodeMemFree': {'1': {'memPercent': 5, 'memFree': '94165'}, '0': {'memPercent': 22, 'memFree': '77122'}}, 'memShared': 0, 'haScore': 3400, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 2, 'memUsed': '11', 'storageDomains': {u'8a607f8a-542a-473c-bb18-25c05fe2a3d4': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000194846', 'lastCheck': '2.4', 'valid': True}, u'37b1a5d7-4e29-4763-9337-63c51dbc5fc8': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000277154', 'lastCheck': '6.0', 'valid': True}, u'2558679a-2214-466b-8f05-06fdda9146e5': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000421988', 'lastCheck': '2.4', 'valid': True}, u'640a5875-3d82-43c0-860f-7bb3e4a7e6f0': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000228443', 'lastCheck': '2.4', 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'em4': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'em4', 'tx': '2160', 'txDropped': '0', 'rx': '261751836', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '1'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'ovirtmgmt', 'tx': '193005142', 'txDropped': '0', 'rx': '4300879104', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '478'}, 'restores': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'restores', 'tx': '1362', 'txDropped': '0', 'rx': '226442665', 'rxErrors': '0', 'speed': 
'1000', 'rxDropped': '478'}, 'em2': {'txErrors': '0', 'state': 'down', 'sampleTime': 1562744396.40508, 'name': 'em2', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'vnet0': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'vnet0', 'tx': '2032610435', 'txDropped': '686', 'rx': '4287479548', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': 1562744396.40508, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'em1': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'em1', 'tx': '4548433238', 'txDropped': '0', 'rx': '6476729588', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '1'}, 'em3': {'txErrors': '0', 'state': 'down', 'sampleTime': 1562744396.40508, 'name': 'em3', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'lo', 'tx': '397962377', 'txDropped': '0', 'rx': '397962377', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'vnet1': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'vnet1', 'tx': '526185708', 'txDropped': '0', 'rx': '118512222', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}}, 'txDropped': '686', 'anonHugePages': '18532', 'ksmPages': 100, 'elapsedTime': '85176.64', 'cpuLoad': '0.06', 'cpuSys': '0.17', 'diskStats': {'/var/log': {'free': '6850'}, '/var/run/vdsm/': {'free': '96410'}, '/tmp': {'free': '1825'}}, 'cpuUserVdsmd': '1.07', 'netConfigDirty': 'False', 'memCommitted': 24706, 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 166010, 'bootTime': '1562659184', 'haStats': {'active': True, 'configured': True, 'score': 3400, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': 'active', 'multipathHealth': {}, 'rxDropped': '958', 'outgoingVmMigrations': 0, 'swapTotal': 4095, 'swapFree': 4095, 'hugepages': defaultdict(<type 'dict'>, {1048576: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, 2048: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': '2019-07-10T07:39:57 GMT', 'cpuUser': '0.44', 'memFree': 172451, 'cpuIdle': '99.39', 'vmActive': 2, 'v2vJobs': {}, 'cpuSysVdsmd': '0.60'}} from=::ffff:10.0.1.1,57934 (api:52) 2019-07-10 09:39:58,004+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:573) Please let me know if you need further info. Thank you. Regards. Neil Wilson On Tue, Jul 9, 2019 at 5:52 PM Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
Can you share the engine.log please? And highlight the exact time when you attempt that migrate action
Thanks, michal

To provide a slight update on this: I put one of my hosts into maintenance, it migrated the two VMs off of it, and I then upgraded that host to 4.3.

I have 12 VMs running on the remaining host. If I put it into maintenance, will it try to migrate all 12 VMs at once, or will it stagger them until they are all migrated?

Thank you.

Regards.
Neil Wilson.

On Wed, Jul 10, 2019 at 9:44 AM Neil <nwilson123@gmail.com> wrote:
Hi Michal,
Thanks for assisting.
I've just done as requested however nothing is logged in the engine.log at the time I click Migrate, below is the log and I hit the Migrate button about 4 times between 09:35 and 09:36 and nothing was logged about this...
2019-07-10 09:35:57,967+02 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-14) [] User trouble@internal successfully logged in with scopes: ovirt-app-admin ovirt-app-api ovirt-app-portal ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access 2019-07-10 09:35:58,012+02 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-14) [2997034] Running command: CreateUserSessionCommand internal: false. 2019-07-10 09:35:58,021+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-14) [2997034] EVENT_ID: USER_VDC_LOGIN(30), User trouble@internal-authz connecting from '160.128.20.85' using session 'bv55G0wZznETUiQwjgjfUNje7wOsG4UDCuFunSslVeAFQkhdY2zzTY7du36ynTF5nW5U7JiPyr7gl9QDHfWuig==' logged in. 2019-07-10 09:36:58,304+02 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1, 5 threads waiting for tasks. 2019-07-10 09:36:58,305+02 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 16 threads waiting for tasks and 0 tasks in queue. 2019-07-10 09:36:58,305+02 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 0 threads out of 100, 100 threads waiting for tasks. 2019-07-10 09:36:58,305+02 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads waiting for tasks. 2019-07-10 09:36:58,305+02 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5, 2 threads waiting for tasks.
The same is observed in the vdsm.log too, below is the log during the attempted migration....
2019-07-10 09:39:57,034+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:573) 2019-07-10 09:39:57,994+0200 INFO (jsonrpc/2) [api.host] START getStats() from=::ffff:10.0.1.1,57934 (api:46) 2019-07-10 09:39:57,994+0200 INFO (jsonrpc/2) [vdsm.api] START repoStats(domains=()) from=::ffff:10.0.1.1,57934, task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:46) 2019-07-10 09:39:57,994+0200 INFO (jsonrpc/2) [vdsm.api] FINISH repoStats return={u'8a607f8a-542a-473c-bb18-25c05fe2a3d4': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000194846', 'lastCheck': '2.4', 'valid': True}, u'37b1a5d7-4e29-4763-9337-63c51dbc5fc8': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000277154', 'lastCheck': '6.0', 'valid': True}, u'2558679a-2214-466b-8f05-06fdda9146e5': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000421988', 'lastCheck': '2.4', 'valid': True}, u'640a5875-3d82-43c0-860f-7bb3e4a7e6f0': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000228443', 'lastCheck': '2.4', 'valid': True}} from=::ffff:10.0.1.1,57934, task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:52) 2019-07-10 09:39:57,995+0200 INFO (jsonrpc/2) [vdsm.api] START multipath_health() from=::ffff:10.0.1.1,57934, task_id=fd7ad703-5096-4f09-99fa-54672cb4aad9 (api:46) 2019-07-10 09:39:57,995+0200 INFO (jsonrpc/2) [vdsm.api] FINISH multipath_health return={} from=::ffff:10.0.1.1,57934, task_id=fd7ad703-5096-4f09-99fa-54672cb4aad9 (api:52) 2019-07-10 09:39:58,002+0200 INFO (jsonrpc/2) [api.host] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics': {'42': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '43': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '24': {'cpuUser': '0.73', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.20'}, '25': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '99.93'}, '26': {'cpuUser': '5.59', 'nodeIndex': 0, 'cpuSys': '1.20', 'cpuIdle': '93.21'}, '27': {'cpuUser': '0.87', 'nodeIndex': 1, 'cpuSys': '0.60', 'cpuIdle': '98.53'}, '20': {'cpuUser': '0.53', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.34'}, '21': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '22': {'cpuUser': '0.40', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.40'}, '23': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '46': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '99.87'}, '47': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '44': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '45': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '28': {'cpuUser': '0.60', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.33'}, '29': {'cpuUser': '1.07', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '98.73'}, '40': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '41': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '1': {'cpuUser': '1.07', 'nodeIndex': 1, 'cpuSys': '1.13', 'cpuIdle': '97.80'}, '0': {'cpuUser': '0.60', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.20'}, '3': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.73'}, '2': {'cpuUser': '3.00', 'nodeIndex': 0, 'cpuSys': '0.53', 'cpuIdle': '96.47'}, '5': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.13', 
'cpuIdle': '99.67'}, '4': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.33'}, '7': {'cpuUser': '0.40', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.40'}, '6': {'cpuUser': '0.67', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.13'}, '9': {'cpuUser': '0.47', 'nodeIndex': 1, 'cpuSys': '0.40', 'cpuIdle': '99.13'}, '8': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.80'}, '39': {'cpuUser': '0.33', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.54'}, '38': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '99.93'}, '11': {'cpuUser': '0.67', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.06'}, '10': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.74'}, '13': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '12': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '15': {'cpuUser': '0.27', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.60'}, '14': {'cpuUser': '0.27', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.66'}, '17': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '16': {'cpuUser': '0.53', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.40'}, '19': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '18': {'cpuUser': '1.00', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '98.73'}, '31': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '30': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '37': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '36': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '35': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.33', 'cpuIdle': '99.47'}, '34': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '33': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '32': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}}, 'numaNodeMemFree': {'1': {'memPercent': 5, 'memFree': '94165'}, '0': {'memPercent': 22, 'memFree': '77122'}}, 'memShared': 0, 'haScore': 3400, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 2, 'memUsed': '11', 'storageDomains': {u'8a607f8a-542a-473c-bb18-25c05fe2a3d4': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000194846', 'lastCheck': '2.4', 'valid': True}, u'37b1a5d7-4e29-4763-9337-63c51dbc5fc8': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000277154', 'lastCheck': '6.0', 'valid': True}, u'2558679a-2214-466b-8f05-06fdda9146e5': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000421988', 'lastCheck': '2.4', 'valid': True}, u'640a5875-3d82-43c0-860f-7bb3e4a7e6f0': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000228443', 'lastCheck': '2.4', 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'em4': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'em4', 'tx': '2160', 'txDropped': '0', 'rx': '261751836', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '1'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'ovirtmgmt', 'tx': '193005142', 'txDropped': '0', 'rx': '4300879104', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '478'}, 'restores': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'restores', 'tx': '1362', 'txDropped': '0', 'rx': '226442665', 'rxErrors': '0', 'speed': 
'1000', 'rxDropped': '478'}, 'em2': {'txErrors': '0', 'state': 'down', 'sampleTime': 1562744396.40508, 'name': 'em2', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'vnet0': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'vnet0', 'tx': '2032610435', 'txDropped': '686', 'rx': '4287479548', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': 1562744396.40508, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'em1': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'em1', 'tx': '4548433238', 'txDropped': '0', 'rx': '6476729588', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '1'}, 'em3': {'txErrors': '0', 'state': 'down', 'sampleTime': 1562744396.40508, 'name': 'em3', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'lo', 'tx': '397962377', 'txDropped': '0', 'rx': '397962377', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'vnet1': {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'vnet1', 'tx': '526185708', 'txDropped': '0', 'rx': '118512222', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}}, 'txDropped': '686', 'anonHugePages': '18532', 'ksmPages': 100, 'elapsedTime': '85176.64', 'cpuLoad': '0.06', 'cpuSys': '0.17', 'diskStats': {'/var/log': {'free': '6850'}, '/var/run/vdsm/': {'free': '96410'}, '/tmp': {'free': '1825'}}, 'cpuUserVdsmd': '1.07', 'netConfigDirty': 'False', 'memCommitted': 24706, 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 166010, 'bootTime': '1562659184', 'haStats': {'active': True, 'configured': True, 'score': 3400, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': 'active', 'multipathHealth': {}, 'rxDropped': '958', 'outgoingVmMigrations': 0, 'swapTotal': 4095, 'swapFree': 4095, 'hugepages': defaultdict(<type 'dict'>, {1048576: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, 2048: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': '2019-07-10T07:39:57 GMT', 'cpuUser': '0.44', 'memFree': 172451, 'cpuIdle': '99.39', 'vmActive': 2, 'v2vJobs': {}, 'cpuSysVdsmd': '0.60'}} from=::ffff:10.0.1.1,57934 (api:52) 2019-07-10 09:39:58,004+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:573)
Please let me know if you need further info.
Thank you.
Regards.
Neil Wilson
On Tue, Jul 9, 2019 at 5:52 PM Michal Skrivanek < michal.skrivanek@redhat.com> wrote:
Can you share the engine.log please? And highlight the exact time when you attempt that migrate action
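A minimal way to capture that, assuming the default oVirt log locations, is just to follow both logs while you click Migrate:

    # on the hosted-engine VM
    tail -f /var/log/ovirt-engine/engine.log
    # on the source host
    tail -f /var/log/vdsm/vdsm.log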
Thanks, michal
On 9 Jul 2019, at 16:42, Neil <nwilson123@gmail.com> wrote:
I remember seeing the bug earlier, but because it was closed I thought it was unrelated. This appears to be it:
https://bugzilla.redhat.com/show_bug.cgi?id=1670701
Perhaps I'm not understanding your question about the VM guest agent, but I don't have any guest agent currently installed on the VM. I'm not sure if the output of my qemu-kvm process perhaps answers this question...
/usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on -S
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
-cpu Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
-m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
-numa node,nodeid=0,cpus=0-7,mem=8192 -uuid 9a6561b8-5702-43dc-9e92-1dc5dfed4eef
-smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef
-no-user-config -nodefaults
-chardev socket,id=charmonitor,fd=31,server,nowait -mon chardev=charmonitor,id=monitor,mode=control
-rtc base=2019-07-09T10:26:53,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
-device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive file=/rhev/data-center/59831b91-00a5-01e4-0294-000000000018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,fd=35,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,fd=36,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
-incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
-object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
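For what it's worth, the way I'd verify the agent situation from inside the guest (assuming a CentOS/RHEL guest) is roughly:

    # run inside the VM; package/service names assumed for EL7 guests
    rpm -q ovirt-guest-agent-common qemu-guest-agent
    systemctl status ovirt-guest-agent qemu-guest-agent

The virtio-serial channels (com.redhat.rhevm.vdsm and org.qemu.guest_agent.0) appear in the command line above whether or not an agent is actually installed in the guest, so they don't prove much on their own.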
Please shout if you need further info.
Thanks.
On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Shouldn't cause that problem.
You have to find the bug in bugzilla and report a regression (if it's not closed), or open a new one and report the regression. As far as I remember, only the dashboard was affected, due to the new features around VDO disk savings.
About the VM - this should be another issue. What agent are you using in the VMs (ovirt or qemu)?
Best Regards, Strahil Nikolov
On Tuesday, 9 July 2019 at 10:09:05 GMT-4, Neil <nwilson123@gmail.com> wrote:
Hi Strahil,
Thanks for the quick reply. I put the cluster into global maintenance, then installed the 4.3 repo, then ran "yum update ovirt\*setup\*", "engine-upgrade-check", "engine-setup", and then "yum update". Once that completed, I rebooted the hosted-engine VM and took the cluster out of global maintenance.
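In other words, roughly this sequence (the release rpm URL is from memory, so please double-check it):

    hosted-engine --set-maintenance --mode=global        # run on one of the hosts
    yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
    yum update ovirt\*setup\*
    engine-upgrade-check
    engine-setup
    yum update
    # reboot the hosted-engine VM, then on a host:
    hosted-engine --set-maintenance --mode=none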
Thinking back to the upgrade from 4.1 to 4.2, I don't recall doing a "yum update" after running engine-setup; I'm not sure if that could be the cause?
Thank you. Regards. Neil Wilson.
On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Hi Neil,
for "Could not fetch data needed for VM migrate operation" - there was a bug and it was fixed. Are you sure you have fully updated ? What procedure did you use ?
Best Regards, Strahil Nikolov
On Wed, Jul 10, 2019, 14:57 Neil <nwilson123@gmail.com> wrote:
To provide a slight update on this.
I put one of my hosts into maintenance and it then migrated the two VMs off of it; I then upgraded the host to 4.3.
I have 12 VMs running on the remaining host. If I put it into maintenance, will it try to migrate all 12 VMs at once, or will it stagger them until they are all migrated?
If you have a good migration network (at least 10Gbps) then it should be fine. You could also just manually migrate them one by one.
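As far as I recall it staggers them anyway; vdsm caps concurrent outgoing migrations with a vdsm.conf setting, roughly like the below, though I'm not certain of the exact key name on 4.2/4.3, so please verify before relying on it:

    # /etc/vdsm/vdsm.conf on the source host
    [vars]
    max_outgoing_migrations = 2    # how many VMs are evacuated at the same time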
Thank you.
Regards.
Neil Wilson.
participants (4):
- Alex K
- Michal Skrivanek
- Neil
- Strahil Nikolov