<div dir="ltr"><div><br></div>selinux issue?<br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On 13 January 2014 12:48, Michal Skrivanek <span dir="ltr"><<a href="mailto:michal.skrivanek@redhat.com" target="_blank">michal.skrivanek@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im"><br>
On Jan 13, 2014, at 11:37 , Neil <<a href="mailto:nwilson123@gmail.com">nwilson123@gmail.com</a>> wrote:<br>
<br>
> Good morning everyone,<br>
><br>
> Sorry to trouble you again, anyone have any ideas on what to try next?<br>
<br>
Hi Neil,
hm, other than noise I don't really see any failures in the migration.
Can you attach both the src and dst vdsm logs, with a hint which VM it was and approximately what time it failed for you?
There are errors for one volume recurring all the time, but that doesn't look related to the migration.

Thanks,
michal
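
(For reference, the vdsm log normally lives at /var/log/vdsm/vdsm.log on each hypervisor, so the window around the failed migration from both node03 and node01 is what's needed - a sketch, assuming the default log location:

grep -i migrat /var/log/vdsm/vdsm.log | tail -n 100

or simply attach the whole vdsm.log from both hosts covering that time.)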
<div class="HOEnZb"><div class="h5"><br>
><br>
> Thank you so much,<br>
><br>
> Regards.<br>
><br>
> Neil Wilson.<br>
><br>
> On Fri, Jan 10, 2014 at 8:31 AM, Neil <<a href="mailto:nwilson123@gmail.com">nwilson123@gmail.com</a>> wrote:<br>
>> Hi Dafna,<br>
>><br>
>> Apologies for the late reply, I was out of my office yesterday.<br>
>><br>
>> Just to get back to you on your questions.<br>
>><br>
>> can you look at the vm dialogue and see what boot devices the vm has?
>> Sorry, I'm not sure where you want me to get this info from - inside
>> the oVirt GUI or on the VM itself?
>> The VM has one 2TB LUN assigned. Inside the VM these are the fstab
>> parameters:
>>
>> [root@tux ~]# cat /etc/fstab
>> /dev/VolGroup00/LogVol00 / ext3 defaults 1 0
>> /dev/vda1 /boot ext3 defaults 1 0
>> tmpfs /dev/shm tmpfs defaults 0 0
>> devpts /dev/pts devpts gid=5,mode=620 0 0
>> sysfs /sys sysfs defaults 0 0
>> proc /proc proc defaults 0 0
>> /dev/VolGroup00/LogVol01 swap swap defaults 0 0
>> /dev/VolGroup00/LogVol02 /homes xfs defaults,usrquota,grpquota 1 0
>>
>>
>> can you write to the vm?
>> Yes, the machine is fully functioning; it's their main PDC and hosts
>> all of their files.
>>
>>
>> can you please dump the vm xml from libvirt? (it's one of the commands
>> that you have in virsh)
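
(For reference, a dump like the one below can be produced on whichever host is currently running the guest, using the read-only connection so it doesn't interfere with vdsm, and with the VM name as used in this thread, something like:

virsh -r dumpxml tux
)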
>>
>> Below is the xml
>>
>> <domain type='kvm' id='7'>
>> <name>tux</name>
>> <uuid>2736197b-6dc3-4155-9a29-9306ca64881d</uuid>
>> <memory unit='KiB'>8388608</memory>
>> <currentMemory unit='KiB'>8388608</currentMemory>
>> <vcpu placement='static'>4</vcpu>
>> <cputune>
>> <shares>1020</shares>
>> </cputune>
>> <sysinfo type='smbios'>
>> <system>
>> <entry name='manufacturer'>oVirt</entry>
>> <entry name='product'>oVirt Node</entry>
>> <entry name='version'>6-4.el6.centos.10</entry>
>> <entry name='serial'>4C4C4544-0038-5310-8050-C6C04F34354A</entry>
>> <entry name='uuid'>2736197b-6dc3-4155-9a29-9306ca64881d</entry>
>> </system>
>> </sysinfo>
>> <os>
>> <type arch='x86_64' machine='rhel6.4.0'>hvm</type>
>> <smbios mode='sysinfo'/>
>> </os>
>> <features>
>> <acpi/>
>> </features>
>> <cpu mode='custom' match='exact'>
>> <model fallback='allow'>Westmere</model>
>> <topology sockets='1' cores='4' threads='1'/>
>> </cpu>
>> <clock offset='variable' adjustment='0' basis='utc'>
>> <timer name='rtc' tickpolicy='catchup'/>
>> </clock>
>> <on_poweroff>destroy</on_poweroff>
>> <on_reboot>restart</on_reboot>
>> <on_crash>destroy</on_crash>
>> <devices>
>> <emulator>/usr/libexec/qemu-kvm</emulator>
>> <disk type='file' device='cdrom'>
>> <driver name='qemu' type='raw'/>
>> <source startupPolicy='optional'/>
>> <target dev='hdc' bus='ide'/>
>> <readonly/>
>> <serial></serial>
>> <alias name='ide0-1-0'/>
>> <address type='drive' controller='0' bus='1' target='0' unit='0'/>
>> </disk>
>> <disk type='block' device='disk' snapshot='no'>
>> <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
>> <source dev='/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/fd1a562a-3ba5-4ddb-a643-37912a6ae86f/f747ba2b-98e1-47f5-805b-6bb173bfd6ff'/>
>> <target dev='vda' bus='virtio'/>
>> <serial>fd1a562a-3ba5-4ddb-a643-37912a6ae86f</serial>
>> <boot order='1'/>
>> <alias name='virtio-disk0'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
>> </disk>
>> <controller type='ide' index='0'>
>> <alias name='ide0'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
>> </controller>
>> <controller type='virtio-serial' index='0'>
>> <alias name='virtio-serial0'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
>> </controller>
>> <controller type='usb' index='0'>
>> <alias name='usb0'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
>> </controller>
>> <interface type='bridge'>
>> <mac address='00:1a:4a:a8:7a:00'/>
>> <source bridge='ovirtmgmt'/>
>> <target dev='vnet5'/>
>> <model type='virtio'/>
>> <filterref filter='vdsm-no-mac-spoofing'/>
>> <link state='up'/>
>> <alias name='net0'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>> </interface>
>> <channel type='unix'>
>> <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.com.redhat.rhevm.vdsm'/>
>> <target type='virtio' name='com.redhat.rhevm.vdsm'/>
>> <alias name='channel0'/>
>> <address type='virtio-serial' controller='0' bus='0' port='1'/>
>> </channel>
>> <channel type='unix'>
>> <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.org.qemu.guest_agent.0'/>
>> <target type='virtio' name='org.qemu.guest_agent.0'/>
>> <alias name='channel1'/>
>> <address type='virtio-serial' controller='0' bus='0' port='2'/>
>> </channel>
>> <channel type='spicevmc'>
>> <target type='virtio' name='com.redhat.spice.0'/>
>> <alias name='channel2'/>
>> <address type='virtio-serial' controller='0' bus='0' port='3'/>
>> </channel>
>> <input type='mouse' bus='ps2'/>
>> <graphics type='spice' port='5912' tlsPort='5913' autoport='yes' listen='0' keymap='en-us' passwdValidTo='2013-09-20T07:56:54' connected='disconnect'>
>> <listen type='address' address='0'/>
>> <channel name='main' mode='secure'/>
>> <channel name='display' mode='secure'/>
>> <channel name='inputs' mode='secure'/>
>> <channel name='cursor' mode='secure'/>
>> <channel name='playback' mode='secure'/>
>> <channel name='record' mode='secure'/>
>> <channel name='smartcard' mode='secure'/>
>> <channel name='usbredir' mode='secure'/>
>> </graphics>
>> <video>
>> <model type='qxl' ram='65536' vram='65536' heads='1'/>
>> <alias name='video0'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
>> </video>
>> <memballoon model='virtio'>
>> <alias name='balloon0'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
>> </memballoon>
>> </devices>
>> <seclabel type='none'/>
>> </domain>
>>
>> Thank you very much for your help.
>>
>> Regards.
>>
>> Neil Wilson.
>>
>>
>>
>> On Wed, Jan 8, 2014 at 4:55 PM, Dafna Ron <dron@redhat.com> wrote:
>>> Hi Neil,
>>>
>>> the error in the log suggests that the vm is missing a disk...
>>> can you look at the vm dialogue and see what boot devices the vm has?
>>> can you write to the vm?
>>> can you please dump the vm xml from libvirt? (it's one of the commands that
>>> you have in virsh)
>>>
>>> Thanks,
>>>
>>> Dafna
>>>
>>>
>>>
>>> On 01/08/2014 02:42 PM, Neil wrote:
>>>>
>>>> Hi guys,
>>>>
>>>> Apologies for the late reply.
>>>>
>>>> The VM (Tux) was created about 2 years ago; it was converted from a
>>>> physical machine using Clonezilla. It's been migrated a number of
>>>> times in the past, and only now, when trying to move it off node03,
>>>> is it giving this error.
>>>>
>>>> I've looked for any attached images/CDs and found none, unfortunately.
>>>>
>>>> Thank you so much for your assistance so far.
>>>>
>>>> Regards.
>>>>
>>>> Neil Wilson.
>>>>
>>>>
>>>>
>>>> On Wed, Jan 8, 2014 at 12:23 PM, Dafna Ron <dron@redhat.com> wrote:
>>>>>
>>>>> Thread-847747::INFO::2014-01-07 14:30:32,353::logUtils::44::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='63da7faa-f92a-4652-90f2-b6660a4fb7b3')
>>>>> Thread-847747::INFO::2014-01-07 14:30:32,354::logUtils::47::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
>>>>>
>>>>> Please check if the VMs were booted with a CD...
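
(A quick way to see which block devices, including any CD, a running guest currently has attached - a sketch, assuming this libvirt version already ships domblklist; otherwise the <disk device='cdrom'> element in the dumpxml above shows the same thing:

virsh -r domblklist tux

An empty Source column for hdc would mean no CD is actually loaded.)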
>>>>>
>>>>>
>>>>> bject at 0x7fb1f00cbbd0>> log:<logUtils.SimpleLogAdapter instance at 0x7fb1f00be7e8> name:hdc networkDev:False path: readonly:True reqsize:0 serial: truesize:0 *type:cdrom* volExtensionChunk:1024 watermarkLimit:536870912
>>>>>
>>>>> Traceback (most recent call last):
>>>>> File "/usr/share/vdsm/clientIF.py", line 356, in teardownVolumePath
>>>>> res = self.irs.teardownImage(drive['domainID'],
>>>>> File "/usr/share/vdsm/vm.py", line 1386, in __getitem__
>>>>> raise KeyError(key)
>>>>> KeyError: 'domainID'
>>>>> Thread-847747::WARNING::2014-01-07 14:30:32,351::clientIF::362::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:True _checkIoTuneCategories:<bound method Drive._checkIoTuneCategories of <vm.Drive object at 0x7fb1f00cbc10>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7fb1f00cbc10>> _deviceXML:<disk device="disk" snapshot="no" type="block">
>>>>> <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/>
>>>>> <source dev="/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/9f16f896-1da3-4f9a-a305-ac9c4f51a482/e04c6600-abb9-4ebc-a9b3-77b6c536e258"/>
>>>>> <target bus="ide" dev="hda"/>
>>>>> <serial>9f16f896-1da3-4f9a-a305-ac9c4f51a482</serial>
>>>>> <alias name="ide0-0-0"/>
>>>>> <address bus="0" controller="0" target="0" type="drive" unit="0"/>
>>>>>
>>>>>
>>>>>
>>>>> On 01/08/2014 06:28 AM, Neil wrote:
>>>>>>
>>>>>> Hi Dafna,
>>>>>>
>>>>>> Thanks for the reply.
>>>>>>
>>>>>> Attached is the log from the source server (node03).
>>>>>>
>>>>>> I'll reply to your other questions as soon as I'm back in the office
>>>>>> this afternoon; I have to run off to a meeting.
>>>>>>
>>>>>> Regards.
>>>>>>
>>>>>> Neil Wilson.
>>>>>>
>>>>>>
>>>>>> On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron <dron@redhat.com> wrote:
>>>>>>>
>>>>>>> Ok... several things :)
>>>>>>>
>>>>>>> 1. For migration we need to see the vdsm logs from both src and dst.
>>>>>>>
>>>>>>> 2. Is it possible that the vm has an ISO attached? Because I see that
>>>>>>> you are having problems with the ISO domain:
>>>>>>>
>>>>>>> 2014-01-07 14:26:27,714 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso was reported with error code 358
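
(The StorageDomainDoesNotExist traceback further down is for the same UUID, so it does look like the host has lost sight of that ISO domain rather than the guest disk itself being broken. One hedged way to confirm this from the host, assuming vdsClient in secure mode as used elsewhere in this thread:

vdsClient -s 0 getStorageDomainInfo e9ab725d-69c1-4a59-b225-b995d095c289

If that fails with the same "does not exist" error, reactivating or reattaching the ISO domain, or ejecting any CD from the VM, would be the next thing to try.)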
>>>>>>>
>>>>>>>
>>>>>>> Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
>>>>>>>
>>>>>>> Thread-19::ERROR::2014-01-07 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) domain e9ab725d-69c1-4a59-b225-b995d095c289 not found
>>>>>>> Traceback (most recent call last):
>>>>>>> File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
>>>>>>> dom = findMethod(sdUUID)
>>>>>>> File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
>>>>>>> raise se.StorageDomainDoesNotExist(sdUUID)
>>>>>>> StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
>>>>>>> Thread-19::ERROR::2014-01-07 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289 monitoring information
>>>>>>> Traceback (most recent call last):
>>>>>>> File "/usr/share/vdsm/storage/domainMonitor.py", line 190, in _monitorDomain
>>>>>>> self.domain = sdCache.produce(self.sdUUID)
>>>>>>> File "/usr/share/vdsm/storage/sdc.py", line 98, in produce
>>>>>>> domain.getRealDomain()
>>>>>>> File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
>>>>>>> return self._cache._realProduce(self._sdUUID)
>>>>>>> File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
>>>>>>> domain = self._findDomain(sdUUID)
>>>>>>> File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
>>>>>>> dom = findMethod(sdUUID)
>>>>>>> File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
>>>>>>> raise se.StorageDomainDoesNotExist(sdUUID)
>>>>>>> StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
>>>>>>> Dummy-29013::DEBUG::2014-01-07 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
>>>>>>>
>>>>>>> 3. The migration fails with a libvirt error, but we need the trace from
>>>>>>> the second log:
>>>>>>>
>>>>>>> Thread-1165153::DEBUG::2014-01-07 13:39:42,451::sampling::292::vm.Vm::(stop) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection
>>>>>>> Thread-1163583::DEBUG::2014-01-07 13:39:42,452::sampling::323::vm.Vm::(run) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished
>>>>>>> Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
>>>>>>>
>>>>>>>
>>>>>>> 4. But I am worried about this and would like more info about this vm...
>>>>>>>
>>>>>>> Thread-247::ERROR::2014-01-07 15:35:14,868::sampling::355::vm.Vm::(collect) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed: <AdvancedStatsFunction _highWrite at 0x2ce0998>
>>>>>>> Traceback (most recent call last):
>>>>>>> File "/usr/share/vdsm/sampling.py", line 351, in collect
>>>>>>> statsFunction()
>>>>>>> File "/usr/share/vdsm/sampling.py", line 226, in __call__
>>>>>>> retValue = self._function(*args, **kwargs)
>>>>>>> File "/usr/share/vdsm/vm.py", line 509, in _highWrite
>>>>>>> if not vmDrive.blockDev or vmDrive.format != 'cow':
>>>>>>> AttributeError: 'Drive' object has no attribute 'format'
>>>>>>>
>>>>>>> How did you create this vm? Was it from the UI? Was it from a script?
>>>>>>> What are the parameters you used?
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Dafna
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 01/07/2014 04:34 PM, Neil wrote:
>>>>>>>>
>>>>>>>> Hi Elad,
>>>>>>>>
>>>>>>>> Thanks for assisting me. Yes, the same condition exists; if I try to
>>>>>>>> migrate Tux it says "The VM Tux is being migrated".
>>>>>>>>
>>>>>>>>
>>>>>>>> Below are the details requested.
>>>>>>>>
>>>>>>>>
>>>>>>>> [root@node01 ~]# virsh -r list
>>>>>>>>  Id    Name                           State
>>>>>>>> ----------------------------------------------------
>>>>>>>>  1     adam                           running
>>>>>>>>
>>>>>>>> [root@node01 ~]# pgrep qemu
>>>>>>>> 11232
>>>>>>>> [root@node01 ~]# vdsClient -s 0 list table
>>>>>>>> 63da7faa-f92a-4652-90f2-b6660a4fb7b3 11232 adam Up
>>>>>>>>
>>>>>>>>
>>>>>>>> [root@node03 ~]# virsh -r list
>>>>>>>>  Id    Name                           State
>>>>>>>> ----------------------------------------------------
>>>>>>>>  7     tux                            running
>>>>>>>>
>>>>>>>> [root@node03 ~]# pgrep qemu
>>>>>>>> 32333
>>>>>>>> [root@node03 ~]# vdsClient -s 0 list table
>>>>>>>> 2736197b-6dc3-4155-9a29-9306ca64881d 32333 tux Up
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks.
>>>>>>>>
>>>>>>>> Regards.
>>>>>>>>
>>>>>>>> Neil Wilson.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon <ebenahar@redhat.com> wrote:
>>>>>>>>>
>>>>>>>>> Is it still in the same condition?
>>>>>>>>> If yes, please add the outputs from both hosts for:
>>>>>>>>>
>>>>>>>>> # virsh -r list
>>>>>>>>> # pgrep qemu
>>>>>>>>> # vdsClient -s 0 list table (or 'vdsClient 0 list table' if you
>>>>>>>>> are working in insecure mode)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>>
>>>>>>>>> Elad Ben Aharon
>>>>>>>>> RHEV-QE storage team
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> ----- Original Message -----
>>>>>>>>> From: "Neil" <nwilson123@gmail.com>
>>>>>>>>> To: users@ovirt.org
>>>>>>>>> Sent: Tuesday, January 7, 2014 4:21:43 PM
>>>>>>>>> Subject: [Users] Migration Failed
>>>>>>>>>
>>>>>>>>> Hi guys,
>>>>>>>>>
>>>>>>>>> I've tried to migrate a VM from one host (node03) to another (node01),
>>>>>>>>> and it failed to migrate, and the VM (tux) remained on the original
>>>>>>>>> host. I've now tried to migrate the same VM again, and it picks up
>>>>>>>>> that the previous migration is still in progress and refuses to
>>>>>>>>> migrate.
>>>>>>>>>
>>>>>>>>> I've checked for the KVM process on each of the hosts and the VM is
>>>>>>>>> definitely still running on node03, so there doesn't appear to be any
>>>>>>>>> chance of the VM trying to run on both hosts (which I've had before,
>>>>>>>>> and which is very scary).
>>>>>>>>>
>>>>>>>>> These are my versions... and attached are my engine.log and my
>>>>>>>>> vdsm.log
>>>>>>>>>
>>>>>>>>> CentOS 6.5
>>>>>>>>> ovirt-iso-uploader-3.3.1-1.el6.noarch
>>>>>>>>> ovirt-host-deploy-1.1.2-1.el6.noarch
>>>>>>>>> ovirt-release-el6-9-1.noarch
>>>>>>>>> ovirt-engine-setup-3.3.1-2.el6.noarch
>>>>>>>>> ovirt-engine-3.3.1-2.el6.noarch
>>>>>>>>> ovirt-host-deploy-java-1.1.2-1.el6.noarch
>>>>>>>>> ovirt-image-uploader-3.3.1-1.el6.noarch
>>>>>>>>> ovirt-engine-dbscripts-3.3.1-2.el6.noarch
>>>>>>>>> ovirt-engine-cli-3.3.0.6-1.el6.noarch
>>>>>>>>> ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch
>>>>>>>>> ovirt-engine-userportal-3.3.1-2.el6.noarch
>>>>>>>>> ovirt-log-collector-3.3.1-1.el6.noarch
>>>>>>>>> ovirt-engine-tools-3.3.1-2.el6.noarch
>>>>>>>>> ovirt-engine-lib-3.3.1-2.el6.noarch
>>>>>>>>> ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch
>>>>>>>>> ovirt-engine-backend-3.3.1-2.el6.noarch
>>>>>>>>> ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
>>>>>>>>> ovirt-engine-restapi-3.3.1-2.el6.noarch
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> vdsm-python-4.13.0-11.el6.x86_64
>>>>>>>>> vdsm-cli-4.13.0-11.el6.noarch
>>>>>>>>> vdsm-xmlrpc-4.13.0-11.el6.noarch
>>>>>>>>> vdsm-4.13.0-11.el6.x86_64
>>>>>>>>> vdsm-python-cpopen-4.13.0-11.el6.x86_64
>>>>>>>>>
>>>>>>>>> I've had a few issues with this particular installation in the past,
>>>>>>>>> as it's from a very old pre-release of oVirt, then upgrading to the
>>>>>>>>> dreyou repo, then finally moving to the official CentOS oVirt repo.
>>>>>>>>>
>>>>>>>>> Thanks, any help is greatly appreciated.
>>>>>>>>>
>>>>>>>>> Regards.
>>>>>>>>>
>>>>>>>>> Neil Wilson.
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Users mailing list
>>>>>>>>> Users@ovirt.org
>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Users mailing list
>>>>>>>> Users@ovirt.org
>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Dafna Ron
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Dafna Ron
>>>
>>>
>>>
>>> --
>>> Dafna Ron
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users