Re: Manual Migration not working and Dashboard broken after 4.3.4 update

I'm not sure, but I always thought that you need an agent for live migrations. You can always try installing either qemu-guest-agent or ovirt-guest-agent and check if live migration between hosts is possible. Have you set the new cluster/DC version?

Best Regards,
Strahil Nikolov

On Jul 9, 2019 17:42, Neil <nwilson123@gmail.com> wrote:
I remember seeing the bug earlier, but because it was closed I thought it was unrelated. This appears to be it:
https://bugzilla.redhat.com/show_bug.cgi?id=1670701
Perhaps I'm not understanding your question about the VM guest agent, but I don't have any guest agent currently installed on the VM. Not sure if the output of my qemu-kvm process answers this question:
/usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1 -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2019-07-09T10:26:53,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/59831b91-00a5-01e4-0294-000000000018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,fd=35,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,fd=36,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
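For what it's worth, the command line above only shows that a guest-agent channel (org.qemu.guest_agent.0) is defined for the VM; it doesn't tell you whether an agent is actually installed and answering inside the guest. A rough check, assuming the VM name shown above and that virsh is usable on that host (on an oVirt host libvirt is locked down by vdsm, so it may ask for SASL credentials):

  # on the host: ping the guest agent; an error here usually means no agent is running in the guest
  virsh qemu-agent-command Headoffice.cbl-ho.local '{"execute":"guest-ping"}'

  # inside the guest: see which agent packages, if any, are installed
  rpm -qa | grep -i guest-agent
  systemctl status qemu-guest-agent

This is only a sketch; package and service names can differ depending on the guest OS.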
Please shout if you need further info.
Thanks.
On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Shouldn't cause that problem.
You have to find the bug in Bugzilla and report a regression (if it's not closed), or open a new one and report the regression. As far as I remember, only the dashboard was affected, due to new features about VDO disk savings.
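As a footnote to the guest-agent suggestion above: for a RHEL/CentOS 7 guest, a minimal sketch of installing and enabling qemu-guest-agent would be the following (qemu-guest-agent comes from the standard repos there; ovirt-guest-agent would need the EPEL or oVirt guest-agent repos instead):

  # inside the guest
  yum install -y qemu-guest-agent
  systemctl enable --now qemu-guest-agent

Once the agent is running, the engine should start showing the guest IP/FQDN for the VM, which is an easy way to confirm it is being picked up.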

On 9 Jul 2019, at 17:16, Strahil <hunter86_bg@yahoo.com> wrote:

I'm not sure, but I always thought that you need an agent for live migrations.

You don’t. For snapshots, and other less important stuff like reporting IPs you do. In 4.3 you should be fine with qemu-ga only

On Tue, Jul 9, 2019, 19:10 Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 9 Jul 2019, at 17:16, Strahil <hunter86_bg@yahoo.com> wrote:
I'm not sure, but I always thought that you need an agent for live migrations.
You don’t. For snapshots, and other less important stuff like reporting IPs you do. In 4.3 you should be fine with qemu-ga only
I've seen live migration issues resolved by installing newer versions of the oVirt GA.

On 11 Jul 2019, at 06:34, Alex K <rightkicktech@gmail.com> wrote:

On Tue, Jul 9, 2019, 19:10 Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 9 Jul 2019, at 17:16, Strahil <hunter86_bg@yahoo.com> wrote:
I'm not sure, but I always thought that you need an agent for live migrations.
You don’t. For snapshots, and other less important stuff like reporting IPs you do. In 4.3 you should be fine with qemu-ga only
I've seen live migration issues resolved by installing newer versions of the oVirt GA.

Hm, it shouldn’t make any difference whatsoever. Do you have any concrete data? That would help.
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off [...] -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32 [...]

It’s 7.3, likely oVirt 4.1. Please upgrade...

On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 11 Jul 2019, at 06:34, Alex K <rightkicktech@gmail.com> wrote:
On Tue, Jul 9, 2019, 19:10 Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 9 Jul 2019, at 17:16, Strahil <hunter86_bg@yahoo.com> wrote:
I'm not sure, but I always thought that you need an agent for live migrations.
You don’t. For snapshots, and other less important stuff like reporting IPs you do. In 4.3 you should be fine with qemu-ga only
I've seen live migration issues resolved by installing newer versions of the oVirt GA.

Hm, it shouldn’t make any difference whatsoever. Do you have any concrete data? That would help.

That was some time ago, when running 4.1. No data unfortunately. I also did not expect the oVirt GA to affect migration, but experience showed me that it did. The only observation is that it affected only Windows VMs; Linux VMs never had an issue, regardless of the oVirt GA.

Hi everyone,

Just an update. I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to 4.3 and I'm still faced with the same problems.

1.) My Dashboard says the following "Error! Could not fetch dashboard data. Please ensure that data warehouse is properly installed and configured."

2.) When I click the Migrate button I get the error "Could not fetch data needed for VM migrate operation"

Upgrading my hosts resolved the "node status: DEGRADED" issue so at least it's one issue down.

I've done an engine-upgrade-check and a yum update on all my hosts and engine and there are no further updates or patches waiting. Nothing is logged in my engine.log when I click the Migrate button either.

Any ideas what to do or try for 1 and 2 above?

Thank you.

Regards.

Neil Wilson.

Hi,

Regarding issue 1 (Dashboard): Did you upgrade the engine to 4.3.5? There was a bug fixed in version 4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may be the same issue.

Regarding issue 2 (Manual Migrate dialog): Can you please attach your browser console log and engine.log snippet when you have the problem? If you could take from the console log the actual REST API response, that would be great. The request will be something like <engine>/api/hosts?migration_target_of=...

Thanks,
Sharon

On Thu, Jul 11, 2019 at 10:04 AM Neil <nwilson123@gmail.com> wrote:
Hi everyone, Just an update.
I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to 4.3 and I'm still faced with the same problems.
1.) My Dashboard says the following "Error! Could not fetch dashboard data. Please ensure that data warehouse is properly installed and configured."
2.) When I click the Migrate button I get the error "Could not fetch data needed for VM migrate operation"
Upgrading my hosts resolved the "node status: DEGRADED" issue so at least it's one issue down.
I've done an engine-upgrade-check and a yum update on all my hosts and engine and there are no further updates or patches waiting. Nothing is logged in my engine.log when I click the Migrate button either.
Any ideas what to do or try for 1 and 2 above?
Thank you.
Regards.
Neil Wilson.
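On the data warehouse side of issue 1, one quick sanity check on the engine machine, assuming DWH runs locally (the default for a standard engine-setup), is whether the ovirt-engine-dwhd service is up and what its log says:

  systemctl status ovirt-engine-dwhd
  tail -n 50 /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log

If the service is stopped or the log shows errors, the dashboard message points at the data warehouse installation rather than at the dashboard UI itself.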

Hi Sharon,

Thanks for the assistance.

On Thu, Jul 11, 2019 at 11:58 AM Sharon Gratch <sgratch@redhat.com> wrote:
Hi,
Regarding issue 1 (Dashboard): Did you upgrade the engine to 4.3.5? There was a bug fixed in version 4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may be the same issue.
No, I wasn't aware that there were updates. How do I obtain 4.3.4-5? Is there another repo available?

Regarding issue 2 (Manual Migrate dialog):
Can you please attach your browser console log and engine.log snippet when you have the problem? If you could take from the console log the actual REST API response, that would be great. The request will be something like <engine>/api/hosts?migration_target_of=...
Please see the attached text log for the browser console; I don't see any REST API response being logged, just a stack trace error. The engine.log literally doesn't get updated when I click the Migrate button, so there isn't anything to share unfortunately. Please shout if you need further info.

Thank you!
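If it helps to narrow this down, the same request the Migrate dialog makes can be issued directly against the REST API while watching the engine-side logs. A rough sketch, assuming admin@internal credentials, the engine FQDN, and the id of the VM being migrated (the -uuid value from the qemu-kvm line earlier, if that is the VM in question):

  # on the engine, watch what gets logged while the dialog or the request runs
  tail -f /var/log/ovirt-engine/engine.log /var/log/ovirt-engine/ui.log

  # from any machine that can reach the engine
  curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
    'https://ENGINE_FQDN/ovirt-engine/api/hosts?migration_target_of=9a6561b8-5702-43dc-9e92-1dc5dfed4eef'

PASSWORD and ENGINE_FQDN are placeholders, and migration_target_of is the query parameter Sharon mentions above; if the call fails, the response body should say why.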

Hi Neil,

Regarding issue 1 (Dashboard): I recommend upgrading to the latest oVirt version, 4.3.5, for this fix as well as other enhancements and bug fixes. For oVirt 4.3.5 installation / upgrade instructions: http://www.ovirt.org/release/4.3.5/

Regarding issue 2 (Manual Migrate dialog): if it is still reproduced after upgrading, then please try clearing your browser cache before running the admin portal. It might help.

Regards,
Sharon

On Thu, Jul 11, 2019 at 1:24 PM Neil <nwilson123@gmail.com> wrote:
Hi Sharon,
Thanks for the assistance.

On Thu, Jul 11, 2019 at 11:58 AM Sharon Gratch <sgratch@redhat.com> wrote:
Hi,
Regarding issue 1 (Dashboard): Did you upgrade the engine to 4.3.5? There was a bug fixed in version 4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may be the same issue.
No, I wasn't aware that there were updates. How do I obtain 4.3.4-5? Is there another repo available?
Regarding issue 2 (Manual Migrate dialog):
Can you please attach your browser console log and engine.log snippet when you have the problem? If you could take from the console log the actual REST API response, that would be great. The request will be something like <engine>/api/hosts?migration_target_of=...
Please see the attached text log for the browser console; I don't see any REST API response being logged, just a stack trace error. The engine.log literally doesn't get updated when I click the Migrate button, so there isn't anything to share unfortunately.
Please shout if you need further info.
Thank you!
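For reference, the usual minor-upgrade sequence on the engine machine, roughly as described behind the instructions link above (the release rpm URL is assumed to be the standard 4.3 one):

  # on the engine
  yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
  yum update "ovirt-*-setup*"
  engine-setup
  yum update

  # hosts are then upgraded one at a time: move to maintenance, update, activate

This is only a sketch; the linked release page has the authoritative steps.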

Hi Sharon,

Thank you for coming back to me.

Unfortunately I've upgraded to 4.3.5 today and both issues still persist. I have also tried clearing all data out of my browser and re-logged back in.

I see a new error in my engine.log, as below; however, I still don't see anything logged when I click the Migrate button...

2019-07-16 15:01:19,600+02 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'balloonEnabled' can not be updated when status is 'Up'
2019-07-16 15:01:19,601+02 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'watchdog' can not be updated when status is 'Up'
2019-07-16 15:01:19,602+02 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'rngDevice' can not be updated when status is 'Up'
2019-07-16 15:01:19,602+02 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'soundDeviceEnabled' can not be updated when status is 'Up'
2019-07-16 15:01:19,603+02 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'consoleEnabled' can not be updated when status is 'Up'

Then in my vdsm.log I'm seeing the following warnings...

2019-07-16 15:05:59,038+0200 WARN (qgapoller/3) [virt.periodic.VmDispatcher] could not run <function <lambda> at 0x7f00a00476e0> on ['ded20d05-f558-4e17-bf2d-e4907e1bbcde', '8c93b301-b50d-4d3d-b6cb-54abb3d7f0bb', '8d8571bf-a7ce-4e73-8d3e-fe1a2aab9b4b', '2489c75f-2758-4d82-8338-12f02ff78afa', '9a6561b8-5702-43dc-9e92-1dc5dfed4eef', '523ad9ee-5738-42f2-9ee1-50727207e93b', '84f4685b-39e1-4bc8-b8ab-755a2c325cb0', '43c06f86-2e37-410b-84be-47e83052344a', '6f44a02c-5de6-4002-992f-2c2c5feb2ee5', '19844323-b3cc-441a-8d70-e45326848b10', '77872f3d-c69f-48ab-992b-1d2765a38481'] (periodic:289)
2019-07-16 15:06:09,036+0200 WARN (qgapoller/2) [virt.periodic.VmDispatcher] could not run <function <lambda> at 0x7f00a00476e0> on ['ded20d05-f558-4e17-bf2d-e4907e1bbcde', '8c93b301-b50d-4d3d-b6cb-54abb3d7f0bb', '8d8571bf-a7ce-4e73-8d3e-fe1a2aab9b4b', '2489c75f-2758-4d82-8338-12f02ff78afa', '9a6561b8-5702-43dc-9e92-1dc5dfed4eef', '523ad9ee-5738-42f2-9ee1-50727207e93b', '84f4685b-39e1-4bc8-b8ab-755a2c325cb0', '43c06f86-2e37-410b-84be-47e83052344a', '6f44a02c-5de6-4002-992f-2c2c5feb2ee5', '19844323-b3cc-441a-8d70-e45326848b10', '77872f3d-c69f-48ab-992b-1d2765a38481'] (periodic:289)

I'm not sure if this is related to either of the above issues, but I can attach the full log if needed.

Please shout if there is anything else you think I can try doing.

Thank you.
Regards.
Neil Wilson

On Mon, Jul 15, 2019 at 11:29 AM Sharon Gratch <sgratch@redhat.com> wrote:
Hi Neil,
Regarding issue 1 (Dashboard): I recommend upgrading to the latest oVirt version, 4.3.5, for this fix as well as other enhancements and bug fixes. For oVirt 4.3.5 installation / upgrade instructions see: http://www.ovirt.org/release/4.3.5/
Regarding issue 2 (Manual Migrate dialog): If it is still reproducible after upgrading, please try clearing your browser cache before opening the admin portal. It might help.
Regards, Sharon
On Thu, Jul 11, 2019 at 1:24 PM Neil <nwilson123@gmail.com> wrote:
Hi Sharon,
Thanks for the assistance. On Thu, Jul 11, 2019 at 11:58 AM Sharon Gratch <sgratch@redhat.com> wrote:
Hi,
Regarding issue 1 (Dashboard): Did you upgrade the engine to 4.3.5? There was a bug fixed in version 4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may be the same issue.
No, I wasn't aware that there were updates. How do I obtain 4.3.4-5? Is there another repo available?
Regarding issue 2 (Manual Migrate dialog):
Can you please attach your browser console log and engine.log snippet when you have the problem? If you could take from the console log the actual REST API response, that would be great. The request will be something like <engine>/api/hosts?migration_target_of=...
Please see the attached text log for the browser console; I don't see any REST API request being logged, just a stack trace error. The engine.log literally doesn't get updated when I click the Migrate button, so there isn't anything to share, unfortunately.
Please shout if you need further info.
Thank you!
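As a side note on the REST request mentioned above: the same call the dialog makes can be reproduced from the command line, which sometimes surfaces an error body that the UI swallows. A minimal sketch, assuming the engine FQDN and admin@internal credentials are filled in; the VM UUID is just the example one from the qemu-kvm command line earlier in the thread:

  # Reproduce the Migrate dialog's host query (request form taken from Sharon's note above)
  VM_ID=9a6561b8-5702-43dc-9e92-1dc5dfed4eef   # replace with the VM you are trying to migrate
  curl -k -u admin@internal \
    "https://ENGINE_FQDN/ovirt-engine/api/hosts?migration_target_of=${VM_ID}" \
    -H "Accept: application/xml"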
On Thu, Jul 11, 2019 at 10:04 AM Neil <nwilson123@gmail.com> wrote:
Hi everyone, Just an update.
I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to 4.3, and I'm still faced with the same problems.
1.) My Dashboard says the following "Error! Could not fetch dashboard data. Please ensure that data warehouse is properly installed and configured."
2.) When I click the Migrate button I get the error "Could not fetch data needed for VM migrate operation"
Upgrading my hosts resolved the "node status: DEGRADED" issue so at least it's one issue down.
I've done an engine-upgrade-check and a yum update on all my hosts and engine and there are no further updates or patches waiting. Nothing is logged in my engine.log when I click the Migrate button either.
Any ideas what to do or try for 1 and 2 above?
Thank you.
Regards.
Neil Wilson.
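One quick way to confirm whether the Migrate click reaches the backend at all is to watch the engine-side logs while reproducing it. A minimal sketch, assuming the default log locations on the engine host:

  # Run on the engine host, then click Migrate in the admin portal
  tail -f /var/log/ovirt-engine/engine.log /var/log/ovirt-engine/ui.log
  # ui.log is where uncaught admin portal (UI) exceptions normally end up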
On Thu, Jul 11, 2019 at 8:27 AM Alex K <rightkicktech@gmail.com> wrote:
On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek < michal.skrivanek@redhat.com> wrote:
On 11 Jul 2019, at 06:34, Alex K <rightkicktech@gmail.com> wrote:
On Tue, Jul 9, 2019, 19:10 Michal Skrivanek < michal.skrivanek@redhat.com> wrote:
> > On 9 Jul 2019, at 17:16, Strahil <hunter86_bg@yahoo.com> wrote:
> > I'm not sure, but I always thought that you need an agent for live migrations.
>
> You don’t. For snapshots, and other less important stuff like reporting IPs you do. In 4.3 you should be fine with qemu-ga only.
>
I've seen live migration issues resolved by installing newer versions of the oVirt GA.

Hm, it shouldn’t make any difference whatsoever. Do you have any concrete data? That would help.

That was some time ago, when running 4.1. No data unfortunately. I also did not expect the oVirt GA to affect migration, but experience showed me that it did. The only observation is that it affected only Windows VMs; Linux VMs never had an issue, regardless of the oVirt GA.
> -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef

It’s 7.3, likely oVirt 4.1. Please upgrade...
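Since the guest agent question keeps coming up in this thread, here is a minimal sketch of installing and verifying qemu-guest-agent, assuming an EL-family guest (package names differ on other distributions); the domain name is the one from the qemu-kvm command line above:

  # Inside the guest
  yum install -y qemu-guest-agent
  systemctl enable --now qemu-guest-agent

  # On the host, check that the agent channel answers
  # (virsh on an oVirt host will prompt for the libvirt SASL credentials)
  virsh qemu-agent-command Headoffice.cbl-ho.local '{"execute":"guest-ping"}'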

Hi,

For the dashboard: If ovirt-engine-dwh is still installed and running after upgrade (service ovirt-engine-dwhd restart), then can you please re-check the ovirt-engine-dwh.log file for errors? @Shirly Radco <sradco@redhat.com> anything else to check?

For the Migrate option, please attach again your browser console log snippet when you have the problem and also a screenshot of the error. Please also attach the engine log (the warnings you mentioned are not related to those issues).

Thanks,
Sharon

On Tue, Jul 16, 2019 at 4:14 PM Neil <nwilson123@gmail.com> wrote:
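A minimal sketch of that DWH check using systemctl, assuming the default log path on the engine host:

  systemctl restart ovirt-engine-dwhd
  systemctl status ovirt-engine-dwhd
  tail -n 100 /var/log/ovirt-engine-dwh/ovirt-engine-dwh.log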

Hi Sharon,

Thank you for the info and apologies for the very late reply.

I've done the service ovirt-engine-dwhd restart and unfortunately there's no difference; below is the log...

2019-07-24 03:00:00|3lI186|A138nf|XhBMpJ|OVIRT_ENGINE_DWH|DeleteTimeKeepingJob|Default|6|Java Exception|tJDBCInput_10|org.postgresql.util.PSQLException:This connection has been closed.|1
Exception in component tJDBCInput_10
org.postgresql.util.PSQLException: This connection has been closed.
        at org.postgresql.jdbc2.AbstractJdbc2Connection.checkClosed(AbstractJdbc2Connection.java:822)
        at org.postgresql.jdbc3.AbstractJdbc3Connection.createStatement(AbstractJdbc3Connection.java:229)
        at org.postgresql.jdbc2.AbstractJdbc2Connection.createStatement(AbstractJdbc2Connection.java:294)
        at ovirt_engine_dwh.deletetimekeepingjob_4_3.DeleteTimeKeepingJob.tJDBCInput_10Process(DeleteTimeKeepingJob.java:1493)
        at ovirt_engine_dwh.deletetimekeepingjob_4_3.DeleteTimeKeepingJob.tPostjob_2Process(DeleteTimeKeepingJob.java:1232)
        at ovirt_engine_dwh.deletetimekeepingjob_4_3.DeleteTimeKeepingJob.runJobInTOS(DeleteTimeKeepingJob.java:11707)
        at ovirt_engine_dwh.deletetimekeepingjob_4_3.DeleteTimeKeepingJob.runJob(DeleteTimeKeepingJob.java:11308)
        at ovirt_engine_dwh.parallelrun_4_3.ParallelRun.tInfiniteLoop_6Process(ParallelRun.java:4174)
        at ovirt_engine_dwh.parallelrun_4_3.ParallelRun.tJava_5Process(ParallelRun.java:3716)
        at ovirt_engine_dwh.parallelrun_4_3.ParallelRun$5.run(ParallelRun.java:5758)
2019-07-24 03:01:15|z7VVUn|A138nf|XhBMpJ|OVIRT_ENGINE_DWH|DeleteTimeKeepingJob|Default|6|Java Exception|tJDBCInput_10|org.postgresql.util.PSQLException:This connection has been closed.|1
2019-07-24 15:05:50|ETL Service Stopped
2019-07-24 15:05:51|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**********************
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|300000
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.000000
etlVersion|4.3.5
dwhAggregationDebug|false
dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbPassword|**********************

I've also attached a screenshot, the browser console log, as well as the engine.log, although I excluded (grep -v ObjectIdentityChecker | grep -v ThreadPoolMonitoringService) from the engine.log because it was flooded with those warnings.

Please let me know if there is anything else I can try or if you need further info.

Thank you.
Regards.
Neil Wilson.

On Tue, Jul 16, 2019 at 6:24 PM Sharon Gratch <sgratch@redhat.com> wrote:
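One thing worth checking alongside the ETL log is whether the DWH heartbeat rows in the engine database are still being refreshed. This is only a sketch, assuming a local PostgreSQL and the default "engine" database name; on 4.3 the engine database runs under the rh-postgresql10 software collection, so plain psql may need the scl wrapper shown here, and the table name is given from memory:

  systemctl status ovirt-engine-dwhd
  su - postgres -c 'scl enable rh-postgresql10 -- psql engine -c "SELECT var_name, var_value FROM dwh_history_timekeeping;"'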

Hi Sharon,

This issue still persists, and when I saw that 4.3.5 was released I tried to upgrade, but yum says there are no packages available; however, I see I have 11 updates that are version locked.

Could this be the reason that updating to 4.3.5 (while it was still in "pre") didn't resolve the dashboard problem?

[root@ovirt]# yum update "ovirt-*-setup*"
Loaded plugins: fastestmirror, versionlock
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository ovirt-4.3-epel is listed more than once in the configuration
Repository ovirt-4.3-centos-gluster6 is listed more than once in the configuration
Repository ovirt-4.3-virtio-win-latest is listed more than once in the configuration
Repository ovirt-4.3-centos-qemu-ev is listed more than once in the configuration
Repository ovirt-4.3-centos-ovirt43 is listed more than once in the configuration
Repository ovirt-4.3-centos-opstools is listed more than once in the configuration
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository sac-gluster-ansible is listed more than once in the configuration
Repository ovirt-4.3 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
ovirt-4.3-epel/x86_64/metalink | 46 kB 00:00:00
 * base: mirror.pcsp.co.za
 * extras: mirror.pcsp.co.za
 * ovirt-4.1: mirror.slu.cz
 * ovirt-4.1-epel: ftp.uni-bayreuth.de
 * ovirt-4.2: mirror.slu.cz
 * ovirt-4.2-epel: ftp.uni-bayreuth.de
 * ovirt-4.3-epel: ftp.uni-bayreuth.de
 * updates: mirror.bitco.co.za
ovirt-4.3-centos-gluster6 | 2.9 kB 00:00:00
ovirt-4.3-centos-opstools | 2.9 kB 00:00:00
ovirt-4.3-centos-ovirt43 | 2.9 kB 00:00:00
ovirt-4.3-centos-qemu-ev | 2.9 kB 00:00:00
ovirt-4.3-virtio-win-latest | 3.0 kB 00:00:00
sac-gluster-ansible | 3.3 kB 00:00:00
Excluding 11 updates due to versionlock (use "yum versionlock status" to show them)
No packages marked for update

[root@ovirt yum.repos.d]# yum versionlock status
Loaded plugins: fastestmirror, versionlock
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository ovirt-4.3-epel is listed more than once in the configuration
Repository ovirt-4.3-centos-gluster6 is listed more than once in the configuration
Repository ovirt-4.3-virtio-win-latest is listed more than once in the configuration
Repository ovirt-4.3-centos-qemu-ev is listed more than once in the configuration
Repository ovirt-4.3-centos-ovirt43 is listed more than once in the configuration
Repository ovirt-4.3-centos-opstools is listed more than once in the configuration
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository sac-gluster-ansible is listed more than once in the configuration
Repository ovirt-4.3 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: mirror.pcsp.co.za
 * extras: mirror.pcsp.co.za
 * ovirt-4.1: mirror.slu.cz
 * ovirt-4.1-epel: ftp.uni-bayreuth.de
 * ovirt-4.2: mirror.slu.cz
 * ovirt-4.2-epel: ftp.uni-bayreuth.de
 * ovirt-4.3-epel: ftp.uni-bayreuth.de
 * updates: mirror.bitco.co.za
0:ovirt-engine-webadmin-portal-4.2.8.2-1.el7.*
0:ovirt-engine-dwh-4.2.4.3-1.el7.*
0:ovirt-engine-tools-backup-4.2.8.2-1.el7.*
0:ovirt-engine-restapi-4.2.8.2-1.el7.*
0:ovirt-engine-dbscripts-4.2.8.2-1.el7.*
0:ovirt-engine-4.2.8.2-1.el7.*
0:ovirt-engine-backend-4.2.8.2-1.el7.*
0:ovirt-engine-wildfly-14.0.1-3.el7.*
0:ovirt-engine-wildfly-overlay-14.0.1-3.el7.*
0:ovirt-engine-tools-4.2.8.2-1.el7.*
0:ovirt-engine-extension-aaa-jdbc-1.1.7-1.el7.centos.*
versionlock status done

Any ideas?

Thank you.
Regards.
Neil Wilson.

On Wed, Jul 24, 2019 at 3:46 PM Neil <nwilson123@gmail.com> wrote:
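The 11 locked packages are the interesting part here: versionlock is pinning the 4.2.8.2 engine packages, which is why yum reports nothing to update even with the 4.3 repos enabled. To see exactly what is pinned and where the pins live (on a stock engine these entries are normally maintained by engine-setup, if I'm not mistaken):

  yum versionlock list
  cat /etc/yum/pluginconf.d/versionlock.list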

On 31 Jul 2019, at 16:16, Neil <nwilson123@gmail.com> wrote:
Hi Sharon,
This issue still persists. When I saw that 4.3.5 was released I tried to upgrade, but yum says there are no packages available, while 11 updates are excluded by versionlock.
you probably upgraded setup files already, but didn’t run engine-setup, did you?
Could this be why updating to 4.3.5 while it was still in "pre" didn't resolve the dashboard problem?
[root@ovirt]# yum update "ovirt-*-setup*"
Loaded plugins: fastestmirror, versionlock
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository ovirt-4.3-epel is listed more than once in the configuration
Repository ovirt-4.3-centos-gluster6 is listed more than once in the configuration
Repository ovirt-4.3-virtio-win-latest is listed more than once in the configuration
Repository ovirt-4.3-centos-qemu-ev is listed more than once in the configuration
Repository ovirt-4.3-centos-ovirt43 is listed more than once in the configuration
Repository ovirt-4.3-centos-opstools is listed more than once in the configuration
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository sac-gluster-ansible is listed more than once in the configuration
Repository ovirt-4.3 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
ovirt-4.3-epel/x86_64/metalink | 46 kB 00:00:00
 * base: mirror.pcsp.co.za
 * extras: mirror.pcsp.co.za
 * ovirt-4.1: mirror.slu.cz
 * ovirt-4.1-epel: ftp.uni-bayreuth.de
 * ovirt-4.2: mirror.slu.cz
 * ovirt-4.2-epel: ftp.uni-bayreuth.de
 * ovirt-4.3-epel: ftp.uni-bayreuth.de
 * updates: mirror.bitco.co.za
ovirt-4.3-centos-gluster6 | 2.9 kB 00:00:00
ovirt-4.3-centos-opstools | 2.9 kB 00:00:00
ovirt-4.3-centos-ovirt43 | 2.9 kB 00:00:00
ovirt-4.3-centos-qemu-ev | 2.9 kB 00:00:00
ovirt-4.3-virtio-win-latest | 3.0 kB 00:00:00
sac-gluster-ansible | 3.3 kB 00:00:00
Excluding 11 updates due to versionlock (use "yum versionlock status" to show them)
No packages marked for update
[root@ovirt yum.repos.d]# yum versionlock status
Loaded plugins: fastestmirror, versionlock
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository ovirt-4.3-epel is listed more than once in the configuration
Repository ovirt-4.3-centos-gluster6 is listed more than once in the configuration
Repository ovirt-4.3-virtio-win-latest is listed more than once in the configuration
Repository ovirt-4.3-centos-qemu-ev is listed more than once in the configuration
Repository ovirt-4.3-centos-ovirt43 is listed more than once in the configuration
Repository ovirt-4.3-centos-opstools is listed more than once in the configuration
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository sac-gluster-ansible is listed more than once in the configuration
Repository ovirt-4.3 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: mirror.pcsp.co.za
 * extras: mirror.pcsp.co.za
 * ovirt-4.1: mirror.slu.cz
 * ovirt-4.1-epel: ftp.uni-bayreuth.de
 * ovirt-4.2: mirror.slu.cz
 * ovirt-4.2-epel: ftp.uni-bayreuth.de
 * ovirt-4.3-epel: ftp.uni-bayreuth.de
 * updates: mirror.bitco.co.za
0:ovirt-engine-webadmin-portal-4.2.8.2-1.el7.*
0:ovirt-engine-dwh-4.2.4.3-1.el7.*
0:ovirt-engine-tools-backup-4.2.8.2-1.el7.*
0:ovirt-engine-restapi-4.2.8.2-1.el7.*
0:ovirt-engine-dbscripts-4.2.8.2-1.el7.*
0:ovirt-engine-4.2.8.2-1.el7.*
0:ovirt-engine-backend-4.2.8.2-1.el7.*
0:ovirt-engine-wildfly-14.0.1-3.el7.*
0:ovirt-engine-wildfly-overlay-14.0.1-3.el7.*
0:ovirt-engine-tools-4.2.8.2-1.el7.*
0:ovirt-engine-extension-aaa-jdbc-1.1.7-1.el7.centos.*
versionlock status done
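Those 4.2.8 locks are normally managed by engine-setup rather than removed by hand; a rough sketch of the usual flow, assuming the 4.3 repos are enabled and the setup packages are already at 4.3.5:

  engine-upgrade-check
  yum update "ovirt-*-setup*"
  engine-setup
  yum update

engine-setup should refresh the versionlock entries (the versionlock.list used by the yum versionlock plugin) to the new 4.3.x packages, after which the remaining 11 updates are no longer excluded.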
Any ideas?
Thank you. Regards. Neil Wilson.
On Wed, Jul 24, 2019 at 3:46 PM Neil <nwilson123@gmail.com> wrote: Hi Sharon,
Thank you for the info and apologies for the very late reply.
I've done the service ovirt-engine-dwhd restart, and unfortunately there's no difference, below is the log....
2019-07-24 03:00:00|3lI186|A138nf|XhBMpJ|OVIRT_ENGINE_DWH|DeleteTimeKeepingJob|Default|6|Java Exception|tJDBCInput_10|org.postgresql.util.PSQLException:This connection has been closed.|1
Exception in component tJDBCInput_10
org.postgresql.util.PSQLException: This connection has been closed.
        at org.postgresql.jdbc2.AbstractJdbc2Connection.checkClosed(AbstractJdbc2Connection.java:822)
        at org.postgresql.jdbc3.AbstractJdbc3Connection.createStatement(AbstractJdbc3Connection.java:229)
        at org.postgresql.jdbc2.AbstractJdbc2Connection.createStatement(AbstractJdbc2Connection.java:294)
        at ovirt_engine_dwh.deletetimekeepingjob_4_3.DeleteTimeKeepingJob.tJDBCInput_10Process(DeleteTimeKeepingJob.java:1493)
        at ovirt_engine_dwh.deletetimekeepingjob_4_3.DeleteTimeKeepingJob.tPostjob_2Process(DeleteTimeKeepingJob.java:1232)
        at ovirt_engine_dwh.deletetimekeepingjob_4_3.DeleteTimeKeepingJob.runJobInTOS(DeleteTimeKeepingJob.java:11707)
        at ovirt_engine_dwh.deletetimekeepingjob_4_3.DeleteTimeKeepingJob.runJob(DeleteTimeKeepingJob.java:11308)
        at ovirt_engine_dwh.parallelrun_4_3.ParallelRun.tInfiniteLoop_6Process(ParallelRun.java:4174)
        at ovirt_engine_dwh.parallelrun_4_3.ParallelRun.tJava_5Process(ParallelRun.java:3716)
        at ovirt_engine_dwh.parallelrun_4_3.ParallelRun$5.run(ParallelRun.java:5758)
2019-07-24 03:01:15|z7VVUn|A138nf|XhBMpJ|OVIRT_ENGINE_DWH|DeleteTimeKeepingJob|Default|6|Java Exception|tJDBCInput_10|org.postgresql.util.PSQLException:This connection has been closed.|1
2019-07-24 15:05:50|ETL Service Stopped
2019-07-24 15:05:51|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**********************
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|300000
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.000000
etlVersion|4.3.5
dwhAggregationDebug|false
dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbPassword|**********************
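The repeated "This connection has been closed" errors suggest ovirt-engine-dwhd lost its PostgreSQL connection at some point. A quick sanity check, sketched here on the assumption of a default local install where the engine database tracks DWH state in the dwh_history_timekeeping table, is to ask the engine database whether it thinks DWH is running:

  su - postgres -c "psql engine -c 'SELECT var_name, var_value FROM dwh_history_timekeeping;'"

If DwhCurrentlyRunning shows 0 there while the dwhd service is up, the engine will keep reporting that the data warehouse is not properly installed and configured.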
I've also attached a screenshot, the browser console log, as well as the engine.log although I excluded (grep -v ObjectIdentityChecker | grep -v ThreadPoolMonitoringService) from the engine.log because it was flooded with those warnings.
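For reference, the filtering mentioned above amounts to roughly the following (assuming the default engine log location; the output file name is just an example):

  grep -v ObjectIdentityChecker /var/log/ovirt-engine/engine.log | grep -v ThreadPoolMonitoringService > engine-filtered.log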
Please let me know if there is anything else I can try or if you need further info.
Thank you.
Regards.
Neil Wilson.
On Tue, Jul 16, 2019 at 6:24 PM Sharon Gratch <sgratch@redhat.com> wrote: Hi,
For the dashboard: If ovirt-engine-dwh is still installed and running after the upgrade (service ovirt-engine-dwhd restart), can you please re-check the ovirt-engine-dwh.log file for errors? @Shirly Radco, anything else to check?
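On the engine host that boils down to roughly the following (log path assumed to be the default one):

  service ovirt-engine-dwhd restart
  tail -n 200 /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log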
For the Migrate option, please attach again your browser console log snippet when you have the problem and also a screenshot of the error.
Please also attach the engine log (the warnings you mentioned are not related to those issues).
Thanks, Sharon
On Tue, Jul 16, 2019 at 4:14 PM Neil <nwilson123@gmail.com> wrote: Hi Sharon,
Thank you for coming back to me.
Unfortunately I've upgraded to 4.3.5 today and both issues still persist. I have also tried clearing all data out of my browser and re-logged back in.
I see a new error though in my engine.log as below, however I still don't see anything logged when I click the migrate button...
2019-07-16 15:01:19,600+02 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'balloonEnabled' can not be updated when status is 'Up'
2019-07-16 15:01:19,601+02 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'watchdog' can not be updated when status is 'Up'
2019-07-16 15:01:19,602+02 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'rngDevice' can not be updated when status is 'Up'
2019-07-16 15:01:19,602+02 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'soundDeviceEnabled' can not be updated when status is 'Up'
2019-07-16 15:01:19,603+02 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'consoleEnabled' can not be updated when status is 'Up'
Then in my vdsm.log I'm seeing the following error....
2019-07-16 15:05:59,038+0200 WARN (qgapoller/3) [virt.periodic.VmDispatcher] could not run <function <lambda> at 0x7f00a00476e0> on ['ded20d05-f558-4e17-bf2d-e4907e1bbcde', '8c93b301-b50d-4d3d-b6cb-54abb3d7f0bb', '8d8571bf-a7ce-4e73-8d3e-fe1a2aab9b4b', '2489c75f-2758-4d82-8338-12f02ff78afa', '9a6561b8-5702-43dc-9e92-1dc5dfed4eef', '523ad9ee-5738-42f2-9ee1-50727207e93b', '84f4685b-39e1-4bc8-b8ab-755a2c325cb0', '43c06f86-2e37-410b-84be-47e83052344a', '6f44a02c-5de6-4002-992f-2c2c5feb2ee5', '19844323-b3cc-441a-8d70-e45326848b10', '77872f3d-c69f-48ab-992b-1d2765a38481'] (periodic:289)
2019-07-16 15:06:09,036+0200 WARN (qgapoller/2) [virt.periodic.VmDispatcher] could not run <function <lambda> at 0x7f00a00476e0> on ['ded20d05-f558-4e17-bf2d-e4907e1bbcde', '8c93b301-b50d-4d3d-b6cb-54abb3d7f0bb', '8d8571bf-a7ce-4e73-8d3e-fe1a2aab9b4b', '2489c75f-2758-4d82-8338-12f02ff78afa', '9a6561b8-5702-43dc-9e92-1dc5dfed4eef', '523ad9ee-5738-42f2-9ee1-50727207e93b', '84f4685b-39e1-4bc8-b8ab-755a2c325cb0', '43c06f86-2e37-410b-84be-47e83052344a', '6f44a02c-5de6-4002-992f-2c2c5feb2ee5', '19844323-b3cc-441a-8d70-e45326848b10', '77872f3d-c69f-48ab-992b-1d2765a38481'] (periodic:289)
I'm not sure if this is related to either of the above issues though, but I can attach the full log if needed.
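Those qgapoller warnings come from vdsm's QEMU guest agent poller and usually just mean the listed VMs have no reachable qemu-guest-agent. A minimal sketch of enabling the agent inside a guest, assuming a CentOS/RHEL 7 guest OS, would be:

  yum install -y qemu-guest-agent
  systemctl enable --now qemu-guest-agent

Windows guests would need the agent from the virtio-win / oVirt guest tools ISO instead.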
Please shout if there is anything else you think I can try doing.
Thank you.
Regards.
Neil Wilson
On Mon, Jul 15, 2019 at 11:29 AM Sharon Gratch <sgratch@redhat.com> wrote: Hi Neil,
Regarding issue 1 (Dashboard): I recommend upgrading to the latest oVirt version, 4.3.5, for this fix as well as other enhancements and bug fixes. For oVirt 4.3.5 installation / upgrade instructions see: http://www.ovirt.org/release/4.3.5/
Regarding issue 2 (Manual Migrate dialog): If it is still reproducible after upgrading, please try clearing your browser cache before opening the admin portal. It might help.
Regards, Sharon
On Thu, Jul 11, 2019 at 1:24 PM Neil <nwilson123@gmail.com> wrote:
Hi Sharon,
Thanks for the assistance. On Thu, Jul 11, 2019 at 11:58 AM Sharon Gratch <sgratch@redhat.com> wrote: Hi,
Regarding issue 1 (Dashboard): Did you upgrade the engine to 4.3.5? There was a bug fixed in version 4.3.4-5 (https://bugzilla.redhat.com/show_bug.cgi?id=1713967) and it may be the same issue.
No, I wasn't aware that there were updates. How do I obtain 4.3.4-5? Is there another repo available?
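To check which engine build is actually installed before hunting for extra repos, something like the following should do (package names assumed from a default engine install):

  rpm -q ovirt-engine ovirt-engine-dashboard ovirt-engine-dwh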
Regarding issue 2 (Manual Migrate dialog): Can you please attach your browser console log and engine.log snippet when you have the problem? If you could take from the console log the actual REST API response, that would be great. The request will be something like <engine>/api/hosts?migration_target_of=...
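If it is easier than digging it out of the browser console, that request can also be reproduced from the command line; a rough sketch with placeholder FQDN, credentials and VM id (certificate checking disabled only for the test):

  curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' 'https://ENGINE_FQDN/ovirt-engine/api/hosts?migration_target_of=VM_ID'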
Please see the attached text log for the browser console; I don't see any REST API request being logged, just a stack trace error. The engine.log literally doesn't get updated when I click the Migrate button, so there isn't anything to share, unfortunately.
Please shout if you need further info.
Thank you!
On Thu, Jul 11, 2019 at 10:04 AM Neil <nwilson123@gmail.com> wrote: Hi everyone, Just an update.
I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to 4.3 and I'm still faced with the same problems.
1.) My Dashboard says the following "Error! Could not fetch dashboard data. Please ensure that data warehouse is properly installed and configured."
2.) When I click the Migrate button I get the error "Could not fetch data needed for VM migrate operation"
Upgrading my hosts resolved the "node status: DEGRADED" issue so at least it's one issue down.
I've done an engine-upgrade-check and a yum update on all my hosts and engine and there are no further updates or patches waiting. Nothing is logged in my engine.log when I click the Migrate button either.
Any ideas what to do or try for 1 and 2 above?
Thank you.
Regards.
Neil Wilson.
On Thu, Jul 11, 2019 at 8:27 AM Alex K <rightkicktech@gmail.com> wrote:
On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 11 Jul 2019, at 06:34, Alex K <rightkicktech@gmail.com> wrote:
On Tue, Jul 9, 2019, 19:10 Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 9 Jul 2019, at 17:16, Strahil <hunter86_bg@yahoo.com> wrote:
I'm not sure, but I always thought that you need an agent for live migrations.
You don’t. For snapshots, and other less important stuff like reporting IPs, you do. In 4.3 you should be fine with qemu-ga only.
I've seen live migration issues resolved by installing newer versions of ovirt-ga.
Hm, it shouldn’t make any difference whatsoever. Do you have any concrete data? That would help.
That was some time ago, when running 4.1. No data unfortunately. I also did not expect ovirt-ga to affect migration, but experience showed me that it did. The only observation is that it affected only Windows VMs. Linux VMs never had an issue, regardless of ovirt-ga.
You can always try installing either qemu-guest-agent or ovirt-guest-agent and check if live migration between hosts is possible.
Have you set the new cluster/dc version ?
Best Regards Strahil Nikolov
On Jul 9, 2019 17:42, Neil <nwilson123@gmail.com> wrote: I remember seeing the bug earlier but because it was closed thought it was unrelated, this appears to be it....
https://bugzilla.redhat.com/show_bug.cgi?id=1670701
Perhaps I'm not understanding your question about the VM guest agent, but I don't have any guest agent currently installed on the VM, not sure if the output of my qemu-kvm process maybe answers this question?....
/usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1 -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-
It’s 7.3, likely oVirt 4.1. Please upgrade...
C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2019-07-09T10:26:53,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/59831b91-00a5-01e4-0294-000000000018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,fd=35,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,fd=36,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
Please shout if you need further info.
Thanks.
On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote: Shouldn't cause that problem.
You have to find the bug in bugzilla and report a regression (if it's not closed), or open a new one and report the regression. As far as I remember, only the dashboard was affected due to new features about vdo disk savings.
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/A6CWKQRS4AIWEF...
participants (5)
- Alex K
- Michal Skrivanek
- Neil
- Sharon Gratch
- Strahil