Hi Sharon,
Thanks for the assistance.
No, I wasn't aware that there were updates. How do I obtain 4.3.4-5? Is there another repo available?
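
For reference, the checks I've run so far were just:

  engine-upgrade-check
  yum update

Would something like the following be enough to see the newer build, or is
there an extra repo to enable? (a sketch; --showduplicates should list every
version yum can see)

  yum clean metadata
  yum list ovirt-engine --showduplicates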
On Thu, Jul 11, 2019 at 11:58 AM Sharon Gratch <sgratch(a)redhat.com> wrote:
> Regarding issue 2 (Manual Migrate dialog):
> Can you please attach your browser console log and an engine.log snippet
> from when you have the problem?
> If you could take the actual REST API response from the console log, that
> would be great. The request will be something like:
> <engine>/api/hosts?migration_target_of=...

Please see the attached text log for the browser console. I don't see any
REST API response being logged, just a stack trace error.
The engine.log literally doesn't get updated when I click the Migrate
button, so unfortunately there isn't anything to share there.
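
If it helps, I can also try the request manually from the shell. A minimal
sketch, assuming my engine's FQDN, the admin@internal user, and the VM id
taken from the UI (all placeholders below):

  # ask the engine which hosts are valid migration targets for the VM
  curl -k -u 'admin@internal:PASSWORD' \
    'https://engine.example.com/ovirt-engine/api/hosts?migration_target_of=<VM_ID>'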
Please shout if you need further info.
Thank you!
On Thu, Jul 11, 2019 at 10:04 AM Neil <nwilson123(a)gmail.com> wrote:
> Hi everyone,
> Just an update.
>
> I have both hosts upgraded to 4.3, and I've upgraded my DC and cluster to
> 4.3, but I'm still faced with the same problems.
>
> 1.) My Dashboard says the following "Error! Could not fetch dashboard
> data. Please ensure that data warehouse is properly installed and
> configured."
>
> 2.) When I click the Migrate button I get the error "Could not fetch
> data needed for VM migrate operation"
>
> Upgrading my hosts resolved the "node status: DEGRADED" issue, so at least
> that's one issue down.
>
> I've run engine-upgrade-check and a yum update on all my hosts and on the
> engine, and there are no further updates or patches waiting.
> Nothing is logged in my engine.log when I click the Migrate button either.
>
> Any ideas what to do or try for 1 and 2 above?
>
> Thank you.
>
> Regards.
>
> Neil Wilson.
>
> On Thu, Jul 11, 2019 at 8:27 AM Alex K <rightkicktech(a)gmail.com> wrote:
>
>>
>>
>> On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <michal.skrivanek(a)redhat.com> wrote:
>>
>>>
>>>
>>> On 11 Jul 2019, at 06:34, Alex K <rightkicktech(a)gmail.com> wrote:
>>>
>>>
>>>
>>> On Tue, Jul 9, 2019, 19:10 Michal Skrivanek <michal.skrivanek(a)redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On 9 Jul 2019, at 17:16, Strahil <hunter86_bg(a)yahoo.com> wrote:
>>>>
>>>> I'm not sure, but I always thought that you need an agent for live
>>>> migrations.
>>>>
>>>>
>>>> You don't. For snapshots, and other less important stuff like
>>>> reporting IPs, you do. In 4.3 you should be fine with qemu-ga only.
>>>>
>>> I've seen live migration issues get resolved by installing newer
>>> versions of the oVirt guest agent.
>>>
>>>
>>> Hm, it shouldn't make any difference whatsoever. Do you have any
>>> concrete data? That would help.
>>>
>> That was some time ago, when running 4.1. No data, unfortunately. I also
>> did not expect the oVirt GA to affect migration, but experience showed me
>> that it did. The only observation is that it affected only Windows VMs;
>> Linux VMs never had an issue, regardless of the oVirt GA.
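>> If it shows up again I'll grab concrete data; I assume the useful bits
>> would be the migration entries in vdsm.log on the source and destination
>> hosts, along the lines of:
>>
>>   grep -i migration /var/log/vdsm/vdsm.log | tail -n 50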
>>
>>>> You can always try installing either qemu-guest-agent or
>>>> ovirt-guest-agent and check if live migration between hosts is
>>>> possible.
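>>>>
>>>> On a CentOS 7 guest that would be something along these lines (a
>>>> sketch; package and service names per the standard distro repos):
>>>>
>>>>   yum install qemu-guest-agent
>>>>   systemctl enable --now qemu-guest-agent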
>>>>
>>>> Have you set the new cluster/DC version?
>>>>
>>>> Best Regards
>>>> Strahil Nikolov
>>>> On Jul 9, 2019 17:42, Neil <nwilson123(a)gmail.com> wrote:
>>>>
>>>> I remember seeing the bug earlier, but because it was closed I thought
>>>> it was unrelated. This appears to be it:
>>>>
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>>>>
>>>> Perhaps I'm not understanding your question about the VM guest agent,
>>>> but I don't have any guest agent currently installed on the VM. Not sure
>>>> if the output of my qemu-kvm process answers this question:
>>>>
>>>> /usr/libexec/qemu-kvm -name
>>>> guest=Headoffice.cbl-ho.local,debug-threads=on -S -object
>>>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
>>>> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>>>> Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
>>>> -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
>>>> -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
>>>> 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
>>>> type=1,manufacturer=oVirt,product=oVirt
>>>> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-
>>>>
>>>>
>>> It’s 7.3, likely oVirt 4.1. Please upgrade...
>>>
>>>> C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config
>>>> -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon
>>>> chardev=charmonitor,id=monitor,mode=control -rtc
>>>> base=2019-07-09T10:26:53,driftfix=slew -global
>>>> kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
>>>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>>>> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
>>>> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
>>>> if=none,id=drive-ide0-1-0,readonly=on -device
>>>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
>>>> file=/rhev/data-center/59831b91-00a5-01e4-0294-000000000018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
>>>> -device
>>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
>>>> -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device
>>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
>>>> -chardev socket,id=charchannel0,fd=35,server,nowait -device
>>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>>>> -chardev socket,id=charchannel1,fd=36,server,nowait -device
>>>> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>>>> -chardev spicevmc,id=charchannel2,name=vdagent -device
>>>> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>>>> -spice tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
>>>> -device
>>>> qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
>>>> -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
>>>> -object rng-random,id=objrng0,filename=/dev/urandom -device
>>>> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox
>>>> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
>>>> -msg timestamp=on
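>>>>
>>>> From the flags above I can at least see the org.qemu.guest_agent.0
>>>> virtserialport is defined. If it's useful I can also check the channel
>>>> from the host side, along the lines of (a read-only sketch; the VM name
>>>> is taken from the -name flag above):
>>>>
>>>>   virsh -r dumpxml Headoffice.cbl-ho.local | grep -A3 guest_agent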
>>>>
>>>> Please shout if you need further info.
>>>>
>>>> Thanks.
>>>>
>>>> On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>>>
>>>> Shouldn't cause that problem.
>>>>
>>>> You have to find the bug in Bugzilla and report a regression (if it's
>>>> not closed), or open a new one and report the regression.
>>>> As far as I remember, only the dashboard was affected, due to new
>>>> features around VDO disk savings.
>>>>