[Users] VM running on two hosts somehow
Neil
nwilson123 at gmail.com
Fri May 31 08:50:20 UTC 2013
Hi guys,
Sorry for the late reply; just to update everyone, I managed to
resolve this issue by speaking to Juan Hernandez on the #ovirt
channel, as it was extremely critical.
I had to manually kill the kvm processes on both hosts they were
running on and then start the VM through oVirt. On boot the guest gave
fsck warnings (I'm so glad it was a Linux VM that had this issue!) and
forced me into running a manual fsck, which fixed all the errors it
found. The guest then booted up successfully, somehow without any data
loss or other issues.
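For anyone who hits the same thing, the repair step was along these
lines, run from the guest's maintenance shell (the device path here is
only an example; use whichever device fsck complains about):

    # -y answers "yes" to every repair prompt
    fsck -y /dev/vda1
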
Lesson learnt: if your engine somehow says a VM is off even though
it's actually on, shut the VM down, check that the kvm PID of that VM
has in fact stopped on all of your hosts, and only then power it back
on, ensuring there is exactly one PID running for that VM.
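Something like the following on each host will show any lingering
processes for a VM ("zimbra" is the VM name from this thread;
substitute your own):

    # the [q] stops grep from matching its own process
    ps axww | grep '[q]emu-kvm' | grep zimbra
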
Huge thanks to Juan for his help here.
Regards.
Neil Wilson.
On Sun, May 26, 2013 at 9:15 AM, Omer Frenkel <ofrenkel at redhat.com> wrote:
>
>
> ----- Original Message -----
>> From: "Neil" <nwilson123 at gmail.com>
>> To: users at ovirt.org
>> Sent: Friday, May 24, 2013 10:43:27 AM
>> Subject: [Users] VM running on two hosts somehow
>>
>> Hi guys,
>>
>> Sorry, I thought I'd start a new thread for this issue, as it's now
>> a different problem from my original post "Migration failed due to
>> Error: novm".
>>
>> After my VM failed to migrate from one host to the other, the VM
>> was still responsive but showed as powered off in oVirt, so I logged
>> into the console on the Linux guest and rebooted it. That appears to
>> have resolved the issue, as the engine now shows the VM as on, but
>> I've just noticed that I've got two instances of the VM running on
>> two separate hosts...
>>
>
> What is the status of the hosts (migration source and destination) in the engine UI?
> When the VM was down in the engine UI, you didn't start it again from there, just restarted it from within the guest?
>
> Looking at the logs from the other thread, it seems that for some reason the VM had a balloon device with no spec-params.
> This caused VDSM to fail to respond to the engine's monitoring. I still need to understand the engine's behaviour in this case;
> I believe the VM stays UP but the engine can't get any info from VDSM.
> Can you find when these errors started in the VDSM log?
> (I assume this is the migration destination's VDSM; maybe it started when VMs migrated to this host?)
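>
> A rough way to find that is to grep the default VDSM log on the host, e.g. (the pattern is only a guess at the error text):
>
>     grep -n 'ERROR\|Traceback' /var/log/vdsm/vdsm.log | head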
>
> Can you please share the engine.log covering the migration and the restart time? (The log from the other thread is too short; the migrate command info is not there.)
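>
> Something like this should pull the relevant entries from the default engine log location ('migratevm' is an educated guess at the command name that appears in the log):
>
>     grep -i 'migratevm' /var/log/ovirt-engine/engine.log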
>
> thanks!
>
>> On host 10.0.2.22
>>
>> 15407 ? Sl 223:35 /usr/libexec/qemu-kvm -name zimbra -S -M
>> rhel6.4.0 -cpu Westmere -enable-kvm -m 8192 -smp
>> 4,sockets=1,cores=4,threads=1 -uuid
>> 179c293b-e6a3-4ec6-a54c-2f92f875bc5e -smbios
>> type=1,manufacturer=oVirt,product=oVirt
>> Node,version=6-4.el6.centos.10,serial=4C4C4544-0038-5310-8050-C4C04F34354A,uuid=179c293b-e6a3-4ec6-a54c-2f92f875bc5e
>> -nodefconfig -nodefaults -chardev
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait
>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>> base=2013-05-23T15:07:39,driftfix=slew -no-shutdown -device
>> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4
>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
>> file=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/446921d9-cbd1-42b1-919f-88d6ae310fd9/2ff8ba31-7397-41e7-8a60-7ef9eec23d1a,if=none,id=drive-virtio-disk0,format=raw,serial=446921d9-cbd1-42b1-919f-88d6ae310fd9,cache=none,werror=stop,rerror=stop,aio=native
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>> -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=
>> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
>> -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device
>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:7a:01,bus=pci.0,addr=0x3
>> -chardev
>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/zimbra.com.redhat.rhevm.vdsm,server,nowait
>> -device
>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>> -chardev
>> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/zimbra.org.qemu.guest_agent.0,server,nowait
>> -device
>> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>> -device usb-tablet,id=input0 -vnc 0:10,password -k en-us -vga cirrus
>>
>>
>>
>> On host 10.0.2.21
>>
>> 17594 ? Sl 449:39 /usr/libexec/qemu-kvm -name zimbra -S -M
>> rhel6.2.0 -cpu Westmere -enable-kvm -m 8192 -smp
>> 4,sockets=1,cores=4,threads=1 -uuid
>> 179c293b-e6a3-4ec6-a54c-2f92f875bc5e -smbios type=1,manufacturer=Red
>> Hat,product=RHEV
>> Hypervisor,version=6-2.el6.centos.7,serial=4C4C4544-0038-5310-8050-C4C04F34354A_BC:30:5B:E4:19:C2,uuid=179c293b-e6a3-4ec6-a54c-2f92f875bc5e
>> -nodefconfig -nodefaults -chardev
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait
>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>> base=2013-05-23T10:19:47,driftfix=slew -no-shutdown -device
>> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4
>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
>> file=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/446921d9-cbd1-42b1-919f-88d6ae310fd9/2ff8ba31-7397-41e7-8a60-7ef9eec23d1a,if=none,id=drive-virtio-disk0,format=raw,serial=446921d9-cbd1-42b1-919f-88d6ae310fd9,cache=none,werror=stop,rerror=stop,aio=native
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>> -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=
>> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
>> -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=31 -device
>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:7a:01,bus=pci.0,addr=0x3
>> -chardev
>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/zimbra.com.redhat.rhevm.vdsm,server,nowait
>> -device
>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>> -chardev pty,id=charconsole0 -device
>> virtconsole,chardev=charconsole0,id=console0 -device
>> usb-tablet,id=input0 -vnc 0:2,password -k en-us -vga cirrus -incoming
>> tcp:[::]:49153
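>>
>> Note that both instances carry the same UUID
>> (179c293b-e6a3-4ec6-a54c-2f92f875bc5e), and the one on 10.0.2.21 was
>> started with -incoming, i.e. as a migration destination. Grepping
>> each host for the UUID is a quick way to spot the duplicate:
>>
>>     ps axww | grep '[1]79c293b'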
>>
>> This sounds like a very serious problem, considering it's most
>> likely been like this for more than 12 hours before I noticed it. By
>> doing a tcpdump I can see traffic from the users passing to both
>> VMs, so I'm very worried.
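>>
>> The capture was along these lines on each host (the bridge name and
>> guest IP are placeholders):
>>
>>     tcpdump -nn -i <bridge> host <guest-ip>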
>>
>> Please could someone assist me? I'm desperate!
>>
>> Thank you!!
>>
>> Regards.
>>
>> Neil Wilson.
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>