[Users] can't import vms from 3.1 ovirt to 3.2

Maor Lipchuk mlipchuk at redhat.com
Mon May 27 15:22:04 UTC 2013


Hi Vadim,
From the messages log it seems that your host is under heavy load and
processes are blocked.
The connection to the export domain times out, and because of that VDSM
gets restarted.

 from the messages log:
 INFO: task sanlock:3201 blocked for more than 120 seconds.
May 27 15:20:26 kvm02 kernel: "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
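
For reference, these warnings can be spotted quickly on the host; this is
just an example, assuming the default syslog location:

  grep -E 'blocked for more than|sanlock' /var/log/messages
  nfsstat -rc    # RPC retransmits/timeouts hint at NFS latency toward the export domain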

I tried to reproduce your scenario on my env with no luck (VM was
imported successfully).

What is curious to me is that you noted the VMs exported from 3.2 were
imported successfully; perhaps those VMs were smaller than the 3.1 VM.
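
If you want to compare, you could check the image sizes in the export
domain on the host, e.g. with du; the path below is only an illustration
and depends on where the export domain is mounted:

  du -sh /rhev/data-center/mnt/<export_server>/<export_domain_uuid>/images/*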

Did you manage to reproduce it with other VMs from 3.1?
Does it reproduce every time?

Regards,
Maor

On 05/27/2013 02:47 PM, ovirt at qip.ru wrote:
> Hi, Maor
> 
>  + messages
> 
> Thanks,
> Vadim
> 
> Mon 27 May 2013 12:56:50 +0400, Maor Lipchuk <mlipchuk at redhat.com> wrote:
>> Hi,
>> Can you please also attach /var/log/messages?
>> There could be storage I/O issues with the export domain that hit the
>> timeout, and those would show up in /var/log/messages.
>>
>> Thanks,
>> Maor
>>
>> On 05/27/2013 07:42 AM, ovirt at qip.ru wrote:
>>> Hi, Maor
>>>
>>> Attached are the vdsm + engine logs.
>>>
>>> Thanks,
>>> Vadim
>>>
>>> Sun 26 May 2013 21:41:54 +0400, Maor Lipchuk <mlipchuk at redhat.com> wrote:
>>>> Hi,
>>>> Can you please also attach the engine log and the full VDSM log?
>>>>
>>>> In the log you sent I see you got the message
>>>> supervdsm::190::SuperVdsmProxy::(_connect) Connect to svdsm failed
>>>> This behaviour could be related to https://bugzilla.redhat.com/910005,
>>>> which was fixed in a later version of VDSM.
>>>> But to be sure, we need to see the full logs.
>>>>
>>>> Regards,
>>>> Maor
>>>>
>>>> On 05/24/2013 04:58 PM, ovirt at qip.ru wrote:
>>>>> I have an export/NFS domain created in oVirt 3.1 that holds copies of VMs. When I try to import a VM from this domain into oVirt 3.2, the VDSM host loses SPM status during the import and the process fails, see attach. (DC 3.2 with one VDSM host)
>>>>>
>>>>> The engine and VDSM are on different hosts and were installed on CentOS 6.4 from the dreyou repo.
>>>>>
>>>>> On the VDSM host:
>>>>>
>>>>> vdsm-xmlrpc-4.10.3-0.36.23.el6.noarch
>>>>> vdsm-4.10.3-0.36.23.el6.x86_64
>>>>> vdsm-cli-4.10.3-0.36.23.el6.noarch
>>>>> vdsm-python-4.10.3-0.36.23.el6.x86_64
>>>>> [root@kvm02 rhev]# rpm -qa | fgrep sanlock
>>>>> sanlock-lib-2.6-2.el6.x86_64
>>>>> sanlock-2.6-2.el6.x86_64
>>>>> sanlock-python-2.6-2.el6.x86_64
>>>>> libvirt-lock-sanlock-0.10.2-18.el6_4.4.x86_64
>>>>>
>>>>>
>>>>> I also tried to import VMs into a DC with the engine on Fedora 18 and VDSM on Fedora 18, installed from the oVirt 3.2 stable repo, but the result was the same (the VDSM host loses SPM status).
>>>>>
>>>>> With an export/NFS domain created in 3.2, I can export and import VMs to/from it.
>>>>>
>>>>>
>>>>>
>>>>>