
Can you share the engine.log please? And highlight the exact time when you attempt that migrate action. Thanks, Michal
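For example, assuming the default engine log location and that the attempt was around 16:40 (adjust the timestamp to the actual one), something like this would be enough to narrow it down:

  # on the engine VM: pull the minutes around the migrate attempt out of engine.log
  grep '2019-07-09 16:4' /var/log/ovirt-engine/engine.log > engine-migrate.log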
On 9 Jul 2019, at 16:42, Neil <nwilson123@gmail.com> wrote:
I remember seeing the bug earlier, but because it was closed I thought it was unrelated. This appears to be it:
https://bugzilla.redhat.com/show_bug.cgi?id=1670701
Perhaps I'm not understanding your question about the VM guest agent, but I don't have any guest agent currently installed on the VM. I'm not sure if the output of my qemu-kvm process answers this question...
/usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1 -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2019-07-09T10:26:53,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/59831b91-00a5-01e4-0294-000000000018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,fd=35,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,fd=36,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
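The virtserialport entry with name=org.qemu.guest_agent.0 above only means the channel is exposed to the guest, not that an agent is actually running inside it. As a rough check (just a sketch, assuming virsh is available on the host and the guest is EL-based), I can verify the channel and the package separately:

  # on the host: confirm the guest-agent channel is defined for this VM (name taken from the -name option above)
  virsh -r dumpxml Headoffice.cbl-ho.local | grep -A2 guest_agent

  # inside the guest: check whether an agent package is installed at all
  rpm -q qemu-guest-agent ovirt-guest-agent-common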
Please shout if you need further info.
Thanks.
On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Shouldn't cause that problem.
You have to find the bug in Bugzilla and report a regression (if it's not closed), or open a new one and report the regression. As far as I remember, only the dashboard was affected, due to new features about VDO disk savings.
About the VM - this should be another issue. What agent are you using in the VMs (oVirt or QEMU)?
Best Regards, Strahil Nikolov
On Tuesday, 9 July 2019 at 10:09:05 GMT-4, Neil <nwilson123@gmail.com> wrote:
Hi Strahil,
Thanks for the quick reply. I put the cluster into global maintenance, installed the 4.3 repo, then ran "yum update ovirt\*setup\*", "engine-upgrade-check", "engine-setup", and then "yum update". Once that completed, I rebooted the hosted-engine VM and took the cluster out of global maintenance.
Thinking back to the upgrade from 4.1 to 4.2, I don't recall doing a "yum update" after running engine-setup; I'm not sure if that could be the cause?
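In case it helps to see it laid out, this is roughly the sequence I followed (a sketch from memory; the release rpm URL is an assumption, I installed whatever the 4.3 upgrade docs pointed at):

  # on a host, before starting
  hosted-engine --set-maintenance --mode=global

  # on the hosted-engine VM
  yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
  yum update ovirt\*setup\*
  engine-upgrade-check
  engine-setup
  yum update    # the step I'm not sure I did when going from 4.1 to 4.2

  # then reboot the hosted-engine VM and leave global maintenance
  hosted-engine --set-maintenance --mode=none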
Thank you. Regards. Neil Wilson.
On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Hi Neil,
for "Could not fetch data needed for VM migrate operation" - there was a bug and it was fixed. Are you sure you have fully updated ? What procedure did you use ?
Best Regards, Strahil Nikolov
On Tuesday, 9 July 2019 at 07:26:21 GMT-4, Neil <nwilson123@gmail.com> wrote:
Hi guys.
I have two problems since upgrading from 4.2.x to 4.3.4.
The first issue is that I can no longer manually migrate VMs between hosts. I get an error in the oVirt GUI that says "Could not fetch data needed for VM migrate operation", and nothing gets logged in either my engine.log or my vdsm.log.
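To confirm that nothing is logged, I tailed both logs while clicking Migrate (default paths, on the engine VM and on the source host respectively), and neither shows anything at the moment I click the button:

  # on the engine VM
  tail -f /var/log/ovirt-engine/engine.log

  # on the source host
  tail -f /var/log/vdsm/vdsm.log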
The other issue is that my Dashboard shows the following: "Error! Could not fetch dashboard data. Please ensure that data warehouse is properly installed and configured."
If I look at my ovirt-engine-dwhd.log, I see the following when I try to restart the dwh service...
2019-07-09 11:48:04|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**********************
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|300000
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.000000
etlVersion|4.3.0
dwhAggregationDebug|false
dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbPassword|**********************
2019-07-09 11:48:10|ETL Service Stopped
2019-07-09 11:49:59|ETL Service Started
[same configuration values as above]
2019-07-09 11:52:56|ETL Service Stopped
2019-07-09 11:52:57|ETL Service Started
[same configuration values as above]
2019-07-09 12:16:01|ETL Service Stopped
2019-07-09 12:16:45|ETL Service Started
[same configuration values as above]
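For reference, this is simply how I restarted the service and watched the log (assuming the standard unit name and log path); as far as I can tell, the Started/Stopped entries above just correspond to those restarts:

  systemctl restart ovirt-engine-dwhd
  tail -f /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log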
I have a hosted engine, two hosts, and FC-based storage. The hosts are still running 4.2 because I'm unable to migrate VMs off