[Users] creating and running a vm fails

Rami Vaknin rvaknin at redhat.com
Thu Mar 29 08:55:50 UTC 2012


On 03/29/2012 10:49 AM, Rene Rosenberger wrote:
> Hi,
>
> here the creation process:
>
> Thread-1688::DEBUG::2012-03-29 10:43:19,814::vm::577::vm.Vm::(_startUnderlyingVm) vmId=`eda8737f-204f-4a33-8d24-49c4276309b5`::_ongoingCreations released
> Thread-1688::ERROR::2012-03-29 10:43:19,849::vm::601::vm.Vm::(_startUnderlyingVm) vmId=`eda8737f-204f-4a33-8d24-49c4276309b5`::The vm start process failed
> Traceback (most recent call last):
>    File "/usr/share/vdsm/vm.py", line 567, in _startUnderlyingVm
>      self._run()
>    File "/usr/share/vdsm/libvirtvm.py", line 1315, in _run
>      self._connection.createXML(domxml, flags),
>    File "/usr/share/vdsm/libvirtconnection.py", line 82, in wrapper
>      ret = f(*args, **kwargs)
>    File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2087, in createXML
>      if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
> libvirtError: internal error Process exited while reading console log output: Supported machines are:
> pc         RHEL 6.2.0 PC (alias of rhel6.2.0)
> rhel6.2.0  RHEL 6.2.0 PC (default)
> rhel6.1.0  RHEL 6.1.0 PC
> rhel6.0.0  RHEL 6.0.0 PC
> rhel5.5.0  RHEL 5.5.0 PC
> rhel5.4.4  RHEL 5.4.4 PC
> rhel5.4.0  RHEL 5.4.0 PC
Your emulated machine type (pc-0.14, per the engine log quoted below) does not 
exist in libvirt's list. You can work around that by changing the 
EmulatedMachine value in the vdc_options table to one of the types in that list.

It should be something like:
update vdc_options set option_value = 'rhel6.2.0' where option_name = 'EmulatedMachine' and version = '3.0';
or
update vdc_options set option_value = 'rhel6.2.0' where option_name = 'EmulatedMachine' and version = '3.1';
depending on your version.
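
To double-check before touching the database, you can list the machine types 
qemu actually supports on the host and see what the engine currently passes 
down. A minimal sketch, assuming the stock RHEL qemu-kvm path and an engine 
database named 'engine' (adjust both to your setup):

# On the host: list the machine types qemu supports
/usr/libexec/qemu-kvm -M ?

# On the engine: show the current EmulatedMachine value per version
psql -U postgres engine -c "select option_name, option_value, version from vdc_options where option_name = 'EmulatedMachine';"

The engine reads vdc_options at startup, so you will most likely need to 
restart the engine service after the update for the new value to take effect.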
>
> Thread-1688::DEBUG::2012-03-29 10:43:19,876::vm::919::vm.Vm::(setDownStatus) vmId=`eda8737f-204f-4a33-8d24-49c4276309b5`::Changed state to Down: internal error Process exited while reading console log output: Supported machines are:
> pc         RHEL 6.2.0 PC (alias of rhel6.2.0)
> rhel6.2.0  RHEL 6.2.0 PC (default)
> rhel6.1.0  RHEL 6.1.0 PC
> rhel6.0.0  RHEL 6.0.0 PC
> rhel5.5.0  RHEL 5.5.0 PC
> rhel5.4.4  RHEL 5.4.4 PC
> rhel5.4.0  RHEL 5.4.0 PC
>
> Thread-1694::DEBUG::2012-03-29 10:43:20,231::BindingXMLRPC::869::vds::(wrapper) client [192.168.100.10]::call vmGetStats with ('eda8737f-204f-4a33-8d24-49c4276309b5',) {} flowID [4dfa400f]
> Thread-1694::DEBUG::2012-03-29 10:43:20,278::BindingXMLRPC::875::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Down', 'hash': '0', 'exitMessage': 'internal error Process exited while reading console log output: Supported machines are:\npc         RHEL 6.2.0 PC (alias of rhel6.2.0)\nrhel6.2.0  RHEL 6.2.0 PC (default)\nrhel6.1.0  RHEL 6.1.0 PC\nrhel6.0.0  RHEL 6.0.0 PC\nrhel5.5.0  RHEL 5.5.0 PC\nrhel5.4.4  RHEL 5.4.4 PC\nrhel5.4.0  RHEL 5.4.0 PC\n', 'vmId': 'eda8737f-204f-4a33-8d24-49c4276309b5', 'timeOffset': '0', 'exitCode': 1}]}
> Thread-1695::DEBUG::2012-03-29 10:43:20,294::BindingXMLRPC::869::vds::(wrapper) client [192.168.100.10]::call vmDestroy with ('eda8737f-204f-4a33-8d24-49c4276309b5',) {} flowID [4dfa400f]
> Thread-1695::INFO::2012-03-29 10:43:20,296::API::300::vds::(destroy) vmContainerLock acquired by vm eda8737f-204f-4a33-8d24-49c4276309b5
> Thread-1695::DEBUG::2012-03-29 10:43:20,298::libvirtvm::2016::vm.Vm::(destroy) vmId=`eda8737f-204f-4a33-8d24-49c4276309b5`::destroy Called
> Thread-1695::INFO::2012-03-29 10:43:20,300::libvirtvm::1978::vm.Vm::(releaseVm) vmId=`eda8737f-204f-4a33-8d24-49c4276309b5`::Release VM resources
> Thread-1695::WARNING::2012-03-29 10:43:20,301::vm::327::vm.Vm::(_set_lastStatus) vmId=`eda8737f-204f-4a33-8d24-49c4276309b5`::trying to set state to Powering down when already Down
> Thread-1695::DEBUG::2012-03-29 10:43:20,304::utils::602::Storage.Misc.excCmd::(execCmd) '/usr/bin/sudo -n /sbin/service ksmtuned retune' (cwd None)
> Thread-1695::DEBUG::2012-03-29 10:43:20,505::utils::602::Storage.Misc.excCmd::(execCmd) SUCCESS:<err>  = '';<rc>  = 0
> Thread-1695::DEBUG::2012-03-29 10:43:20,512::task::588::TaskManager.Task::(_updateState) Task=`21a70ac7-d608-4d94-9d8b-5d69da80a9d8`::moving from state init ->  state preparing
> Thread-1695::INFO::2012-03-29 10:43:20,514::logUtils::37::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='eda8737f-204f-4a33-8d24-49c4276309b5')
> Thread-1695::INFO::2012-03-29 10:43:20,527::logUtils::39::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
> Thread-1695::DEBUG::2012-03-29 10:43:20,529::task::1172::TaskManager.Task::(prepare) Task=`21a70ac7-d608-4d94-9d8b-5d69da80a9d8`::finished: None
> Thread-1695::DEBUG::2012-03-29 10:43:20,531::task::588::TaskManager.Task::(_updateState) Task=`21a70ac7-d608-4d94-9d8b-5d69da80a9d8`::moving from state preparing ->  state finished
>
> I want to create a CentOS 6.2 VM.
>
> Regards, rene
>
>
> -----Ursprüngliche Nachricht-----
> Von: Moran Goldboim [mailto:mgoldboi at redhat.com]
> Gesendet: Donnerstag, 29. März 2012 10:11
> An: Rene Rosenberger
> Cc: users at oVirt.org
> Betreff: Re: AW: [Users] creating and running a vm fails
>
> Rene, this log doesn't contain the run VM command; either it's not
> getting there or the log snippet doesn't contain it. The verb is: call vmCreate.
> Trace it when you are trying to run the VM on that host.
> Are you still getting the same logs on the engine?
>
> Moran.
>
> On 03/29/2012 09:53 AM, Rene Rosenberger wrote:
>> Hi,
>>
>> this is what vdsm.log says on the host on which I want to run the VM:
>>
>> Thread-55012::DEBUG::2012-03-29 09:51:02,514::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' for lock type 'shared'
>> Thread-55012::DEBUG::2012-03-29 09:51:02,516::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' is free. Now locking as 'shared' (1 active user)
>> Thread-55012::DEBUG::2012-03-29 09:51:02,518::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.13080edc-77ea-11e1-b6a4-525400c49d2a`ReqID=`2c8c0dd9-feef-431f-a246-d8a6ae2a6fe1`::Granted request
>> Thread-55012::DEBUG::2012-03-29 09:51:02,521::task::817::TaskManager.Task::(resourceAcquired) Task=`2f85e9fd-3ae4-4ecc-9b42-53543228fcd4`::_resourcesAcquired: Storage.13080edc-77ea-11e1-b6a4-525400c49d2a (shared)
>> Thread-55012::DEBUG::2012-03-29 09:51:02,523::task::978::TaskManager.Task::(_decref) Task=`2f85e9fd-3ae4-4ecc-9b42-53543228fcd4`::ref 1 aborting False
>> Thread-55012::INFO::2012-03-29 09:51:02,529::logUtils::39::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 1, 'master_uuid': '8ed25a57-f53a-4cf0-bb92-781f3ce36a48', 'name': 'Default', 'version': '2', 'domains': u'93e2079e-ef3b-452f-af71-5b3e7eb32ba0:Active,bb13e29f-0e5a-4f79-8bf2-f235a0dbd515:Active,8ed25a57-f53a-4cf0-bb92-781f3ce36a48:Active', 'pool_status': 'connected', 'isoprefix': u'/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/93e2079e-ef3b-452f-af71-5b3e7eb32ba0/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 1, 'lver': 1}, 'dominfo': {u'93e2079e-ef3b-452f-af71-5b3e7eb32ba0': {'status': u'Active', 'diskfree': '235838373888', 'alerts': [], 'disktotal': '257822818304'}, u'8ed25a57-f53a-4cf0-bb92-781f3ce36a48': {'status': u'Active', 'diskfree': '2141846503424', 'alerts': [], 'disktotal': '2147080994816'}, u'bb13e29f-0e5a-4f79-8bf2-f235a0dbd515': {'status': u'Active', 'diskfree': '5267626524672', 'alerts': [], 'disktotal': '5902284816384'}}}
>> Thread-55012::DEBUG::2012-03-29 09:51:02,531::task::1172::TaskManager.Task::(prepare) Task=`2f85e9fd-3ae4-4ecc-9b42-53543228fcd4`::finished: {'info': {'spm_id': 1, 'master_uuid': '8ed25a57-f53a-4cf0-bb92-781f3ce36a48', 'name': 'Default', 'version': '2', 'domains': u'93e2079e-ef3b-452f-af71-5b3e7eb32ba0:Active,bb13e29f-0e5a-4f79-8bf2-f235a0dbd515:Active,8ed25a57-f53a-4cf0-bb92-781f3ce36a48:Active', 'pool_status': 'connected', 'isoprefix': u'/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/93e2079e-ef3b-452f-af71-5b3e7eb32ba0/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 1, 'lver': 1}, 'dominfo': {u'93e2079e-ef3b-452f-af71-5b3e7eb32ba0': {'status': u'Active', 'diskfree': '235838373888', 'alerts': [], 'disktotal': '257822818304'}, u'8ed25a57-f53a-4cf0-bb92-781f3ce36a48': {'status': u'Active', 'diskfree': '2141846503424', 'alerts': [], 'disktotal': '2147080994816'}, u'bb13e29f-0e5a-4f79-8bf2-f235a0dbd515': {'status': u'Active', 'diskfree': '5267626524672', 'alerts': [], 'disktotal': '5902284816384'}}}
>> Thread-55012::DEBUG::2012-03-29 09:51:02,533::task::588::TaskManager.Task::(_updateState) Task=`2f85e9fd-3ae4-4ecc-9b42-53543228fcd4`::moving from state preparing ->   state finished
>> Thread-55012::DEBUG::2012-03-29 09:51:02,535::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a':<   ResourceRef 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a', isValid: 'True' obj: 'None'>}
>> Thread-55012::DEBUG::2012-03-29 09:51:02,536::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
>> Thread-55012::DEBUG::2012-03-29 09:51:02,539::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a'
>> Thread-55012::DEBUG::2012-03-29 09:51:02,541::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' (0 active users)
>> Thread-55012::DEBUG::2012-03-29 09:51:02,542::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' is free, finding out if anyone is waiting for it.
>> Thread-55012::DEBUG::2012-03-29 09:51:02,544::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a', Clearing records.
>> Thread-55012::DEBUG::2012-03-29 09:51:02,546::task::978::TaskManager.Task::(_decref) Task=`2f85e9fd-3ae4-4ecc-9b42-53543228fcd4`::ref 0 aborting False
>> Dummy-93::DEBUG::2012-03-29 09:51:03,920::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
>> Dummy-93::DEBUG::2012-03-29 09:51:04,372::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) SUCCESS:<err>   = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.187424 s, 5.5 MB/s\n';<rc>   = 0
>> Dummy-93::DEBUG::2012-03-29 09:51:06,420::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
>> Dummy-93::DEBUG::2012-03-29 09:51:06,895::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) SUCCESS:<err>   = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.209075 s, 4.9 MB/s\n';<rc>   = 0
>> Dummy-93::DEBUG::2012-03-29 09:51:08,953::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
>> Dummy-93::DEBUG::2012-03-29 09:51:09,403::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) SUCCESS:<err>   = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.187018 s, 5.5 MB/s\n';<rc>   = 0
>> Thread-55016::DEBUG::2012-03-29 09:51:10,110::task::588::TaskManager.Task::(_updateState) Task=`83108b7c-5bbd-4aad-bfb2-da6ebf87882f`::moving from state init ->   state preparing
>> Thread-55016::INFO::2012-03-29 09:51:10,113::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
>> Thread-55016::INFO::2012-03-29 09:51:10,116::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'93e2079e-ef3b-452f-af71-5b3e7eb32ba0': {'delay': '0.00430202484131', 'lastCheck': 1333007465.9121721, 'code': 0, 'valid': True}, '8ed25a57-f53a-4cf0-bb92-781f3ce36a48': {'delay': '0.00339603424072', 'lastCheck': 1333007467.4905169, 'code': 0, 'valid': True}, 'bb13e29f-0e5a-4f79-8bf2-f235a0dbd515': {'delay': '0.00301098823547', 'lastCheck': 1333007468.0463679, 'code': 0, 'valid': True}}
>> Thread-55016::DEBUG::2012-03-29 09:51:10,117::task::1172::TaskManager.Task::(prepare) Task=`83108b7c-5bbd-4aad-bfb2-da6ebf87882f`::finished: {'93e2079e-ef3b-452f-af71-5b3e7eb32ba0': {'delay': '0.00430202484131', 'lastCheck': 1333007465.9121721, 'code': 0, 'valid': True}, '8ed25a57-f53a-4cf0-bb92-781f3ce36a48': {'delay': '0.00339603424072', 'lastCheck': 1333007467.4905169, 'code': 0, 'valid': True}, 'bb13e29f-0e5a-4f79-8bf2-f235a0dbd515': {'delay': '0.00301098823547', 'lastCheck': 1333007468.0463679, 'code': 0, 'valid': True}}
>> Thread-55016::DEBUG::2012-03-29 09:51:10,120::task::588::TaskManager.Task::(_updateState) Task=`83108b7c-5bbd-4aad-bfb2-da6ebf87882f`::moving from state preparing ->   state finished
>> Thread-55016::DEBUG::2012-03-29 09:51:10,121::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
>> Thread-55016::DEBUG::2012-03-29 09:51:10,123::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
>> Thread-55016::DEBUG::2012-03-29 09:51:10,125::task::978::TaskManager.Task::(_decref) Task=`83108b7c-5bbd-4aad-bfb2-da6ebf87882f`::ref 0 aborting False
>> Dummy-93::DEBUG::2012-03-29 09:51:11,451::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
>> Dummy-93::DEBUG::2012-03-29 09:51:11,921::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) SUCCESS:<err>   = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.181907 s, 5.6 MB/s\n';<rc>   = 0
>> Thread-55019::DEBUG::2012-03-29 09:51:12,583::BindingXMLRPC::167::vds::(wrapper) [192.168.100.10]
>> Thread-55019::DEBUG::2012-03-29 09:51:12,587::task::588::TaskManager.Task::(_updateState) Task=`ec333fa9-0878-479f-a781-b91c236c2e50`::moving from state init ->   state preparing
>> Thread-55019::INFO::2012-03-29 09:51:12,589::logUtils::37::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='13080edc-77ea-11e1-b6a4-525400c49d2a', options=None)
>> Thread-55019::INFO::2012-03-29 09:51:12,592::logUtils::39::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 1}}
>> Thread-55019::DEBUG::2012-03-29 09:51:12,593::task::1172::TaskManager.Task::(prepare) Task=`ec333fa9-0878-479f-a781-b91c236c2e50`::finished: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 1}}
>> Thread-55019::DEBUG::2012-03-29 09:51:12,595::task::588::TaskManager.Task::(_updateState) Task=`ec333fa9-0878-479f-a781-b91c236c2e50`::moving from state preparing ->   state finished
>> Thread-55019::DEBUG::2012-03-29 09:51:12,597::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
>> Thread-55019::DEBUG::2012-03-29 09:51:12,598::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
>> Thread-55019::DEBUG::2012-03-29 09:51:12,600::task::978::TaskManager.Task::(_decref) Task=`ec333fa9-0878-479f-a781-b91c236c2e50`::ref 0 aborting False
>> Thread-55020::DEBUG::2012-03-29 09:51:12,610::BindingXMLRPC::167::vds::(wrapper) [192.168.100.10]
>> Thread-55020::DEBUG::2012-03-29 09:51:12,613::task::588::TaskManager.Task::(_updateState) Task=`67cf8ade-753b-4a9f-9f06-9991412582b8`::moving from state init ->   state preparing
>> Thread-55020::INFO::2012-03-29 09:51:12,615::logUtils::37::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='13080edc-77ea-11e1-b6a4-525400c49d2a', options=None)
>> Thread-55020::DEBUG::2012-03-29 09:51:12,619::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.13080edc-77ea-11e1-b6a4-525400c49d2a`ReqID=`4a7a5857-9d8c-4f4c-8177-2386473f98d7`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2037' at 'getStoragePoolInfo'
>> Thread-55020::DEBUG::2012-03-29 09:51:12,620::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' for lock type 'shared'
>> Thread-55020::DEBUG::2012-03-29 09:51:12,622::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' is free. Now locking as 'shared' (1 active user)
>> Thread-55020::DEBUG::2012-03-29 09:51:12,624::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.13080edc-77ea-11e1-b6a4-525400c49d2a`ReqID=`4a7a5857-9d8c-4f4c-8177-2386473f98d7`::Granted request
>> Thread-55020::DEBUG::2012-03-29 09:51:12,632::task::817::TaskManager.Task::(resourceAcquired) Task=`67cf8ade-753b-4a9f-9f06-9991412582b8`::_resourcesAcquired: Storage.13080edc-77ea-11e1-b6a4-525400c49d2a (shared)
>> Thread-55020::DEBUG::2012-03-29 09:51:12,634::task::978::TaskManager.Task::(_decref) Task=`67cf8ade-753b-4a9f-9f06-9991412582b8`::ref 1 aborting False
>> Thread-55020::INFO::2012-03-29 09:51:12,640::logUtils::39::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 1, 'master_uuid': '8ed25a57-f53a-4cf0-bb92-781f3ce36a48', 'name': 'Default', 'version': '2', 'domains': u'93e2079e-ef3b-452f-af71-5b3e7eb32ba0:Active,bb13e29f-0e5a-4f79-8bf2-f235a0dbd515:Active,8ed25a57-f53a-4cf0-bb92-781f3ce36a48:Active', 'pool_status': 'connected', 'isoprefix': u'/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/93e2079e-ef3b-452f-af71-5b3e7eb32ba0/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 1, 'lver': 1}, 'dominfo': {u'93e2079e-ef3b-452f-af71-5b3e7eb32ba0': {'status': u'Active', 'diskfree': '235838373888', 'alerts': [], 'disktotal': '257822818304'}, u'8ed25a57-f53a-4cf0-bb92-781f3ce36a48': {'status': u'Active', 'diskfree': '2141846503424', 'alerts': [], 'disktotal': '2147080994816'}, u'bb13e29f-0e5a-4f79-8bf2-f235a0dbd515': {'status': u'Active', 'diskfree': '5267626524672', 'alerts': [], 'disktotal': '5902284816384'}}}
>> Thread-55020::DEBUG::2012-03-29 09:51:12,642::task::1172::TaskManager.Task::(prepare) Task=`67cf8ade-753b-4a9f-9f06-9991412582b8`::finished: {'info': {'spm_id': 1, 'master_uuid': '8ed25a57-f53a-4cf0-bb92-781f3ce36a48', 'name': 'Default', 'version': '2', 'domains': u'93e2079e-ef3b-452f-af71-5b3e7eb32ba0:Active,bb13e29f-0e5a-4f79-8bf2-f235a0dbd515:Active,8ed25a57-f53a-4cf0-bb92-781f3ce36a48:Active', 'pool_status': 'connected', 'isoprefix': u'/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/93e2079e-ef3b-452f-af71-5b3e7eb32ba0/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 1, 'lver': 1}, 'dominfo': {u'93e2079e-ef3b-452f-af71-5b3e7eb32ba0': {'status': u'Active', 'diskfree': '235838373888', 'alerts': [], 'disktotal': '257822818304'}, u'8ed25a57-f53a-4cf0-bb92-781f3ce36a48': {'status': u'Active', 'diskfree': '2141846503424', 'alerts': [], 'disktotal': '2147080994816'}, u'bb13e29f-0e5a-4f79-8bf2-f235a0dbd515': {'status': u'Active', 'diskfree': '5267626524672', 'alerts': [], 'disktotal': '5902284816384'}}}
>> Thread-55020::DEBUG::2012-03-29 09:51:12,644::task::588::TaskManager.Task::(_updateState) Task=`67cf8ade-753b-4a9f-9f06-9991412582b8`::moving from state preparing ->   state finished
>> Thread-55020::DEBUG::2012-03-29 09:51:12,646::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a':<   ResourceRef 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a', isValid: 'True' obj: 'None'>}
>> Thread-55020::DEBUG::2012-03-29 09:51:12,648::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
>> Thread-55020::DEBUG::2012-03-29 09:51:12,650::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a'
>> Thread-55020::DEBUG::2012-03-29 09:51:12,652::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' (0 active users)
>> Thread-55020::DEBUG::2012-03-29 09:51:12,653::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' is free, finding out if anyone is waiting for it.
>> Thread-55020::DEBUG::2012-03-29 09:51:12,655::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a', Clearing records.
>> Thread-55020::DEBUG::2012-03-29 09:51:12,657::task::978::TaskManager.Task::(_decref) Task=`67cf8ade-753b-4a9f-9f06-9991412582b8`::ref 0 aborting False
>> Dummy-93::DEBUG::2012-03-29 09:51:13,970::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
>> Dummy-93::DEBUG::2012-03-29 09:51:14,422::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) SUCCESS:<err>   = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.194021 s, 5.3 MB/s\n';<rc>   = 0
>> Dummy-93::DEBUG::2012-03-29 09:51:16,469::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
>> Dummy-93::DEBUG::2012-03-29 09:51:16,962::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) SUCCESS:<err>   = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.187019 s, 5.5 MB/s\n';<rc>   = 0
>> Dummy-93::DEBUG::2012-03-29 09:51:19,009::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
>> Dummy-93::DEBUG::2012-03-29 09:51:19,451::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) SUCCESS:<err>   = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.18092 s, 5.7 MB/s\n';<rc>   = 0
>> Thread-55024::DEBUG::2012-03-29 09:51:20,246::task::588::TaskManager.Task::(_updateState) Task=`73e39a42-ee05-459f-9d83-b1329d987665`::moving from state init ->   state preparing
>> Thread-55024::INFO::2012-03-29 09:51:20,249::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
>> Thread-55024::INFO::2012-03-29 09:51:20,252::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'93e2079e-ef3b-452f-af71-5b3e7eb32ba0': {'delay': '0.00443387031555', 'lastCheck': 1333007475.9278209, 'code': 0, 'valid': True}, '8ed25a57-f53a-4cf0-bb92-781f3ce36a48': {'delay': '0.00118398666382', 'lastCheck': 1333007477.502805, 'code': 0, 'valid': True}, 'bb13e29f-0e5a-4f79-8bf2-f235a0dbd515': {'delay': '0.00273203849792', 'lastCheck': 1333007478.0588491, 'code': 0, 'valid': True}}
>> Thread-55024::DEBUG::2012-03-29 09:51:20,253::task::1172::TaskManager.Task::(prepare) Task=`73e39a42-ee05-459f-9d83-b1329d987665`::finished: {'93e2079e-ef3b-452f-af71-5b3e7eb32ba0': {'delay': '0.00443387031555', 'lastCheck': 1333007475.9278209, 'code': 0, 'valid': True}, '8ed25a57-f53a-4cf0-bb92-781f3ce36a48': {'delay': '0.00118398666382', 'lastCheck': 1333007477.502805, 'code': 0, 'valid': True}, 'bb13e29f-0e5a-4f79-8bf2-f235a0dbd515': {'delay': '0.00273203849792', 'lastCheck': 1333007478.0588491, 'code': 0, 'valid': True}}
>> Thread-55024::DEBUG::2012-03-29 09:51:20,256::task::588::TaskManager.Task::(_updateState) Task=`73e39a42-ee05-459f-9d83-b1329d987665`::moving from state preparing ->   state finished
>> Thread-55024::DEBUG::2012-03-29 09:51:20,257::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
>> Thread-55024::DEBUG::2012-03-29 09:51:20,259::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
>> Thread-55024::DEBUG::2012-03-29 09:51:20,261::task::978::TaskManager.Task::(_decref) Task=`73e39a42-ee05-459f-9d83-b1329d987665`::ref 0 aborting False
>> Dummy-93::DEBUG::2012-03-29 09:51:21,502::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
>> Dummy-93::DEBUG::2012-03-29 09:51:21,950::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) SUCCESS:<err>   = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.179002 s, 5.7 MB/s\n';<rc>   = 0
>> Thread-55027::DEBUG::2012-03-29 09:51:22,700::BindingXMLRPC::167::vds::(wrapper) [192.168.100.10]
>> Thread-55027::DEBUG::2012-03-29 09:51:22,704::task::588::TaskManager.Task::(_updateState) Task=`f6e28785-c757-4b89-87ce-c92286bc1bc2`::moving from state init ->   state preparing
>> Thread-55027::INFO::2012-03-29 09:51:22,706::logUtils::37::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='13080edc-77ea-11e1-b6a4-525400c49d2a', options=None)
>> Thread-55027::INFO::2012-03-29 09:51:22,709::logUtils::39::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 1}}
>> Thread-55027::DEBUG::2012-03-29 09:51:22,711::task::1172::TaskManager.Task::(prepare) Task=`f6e28785-c757-4b89-87ce-c92286bc1bc2`::finished: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 1}}
>> Thread-55027::DEBUG::2012-03-29 09:51:22,712::task::588::TaskManager.Task::(_updateState) Task=`f6e28785-c757-4b89-87ce-c92286bc1bc2`::moving from state preparing ->   state finished
>> Thread-55027::DEBUG::2012-03-29 09:51:22,714::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
>> Thread-55027::DEBUG::2012-03-29 09:51:22,716::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
>> Thread-55027::DEBUG::2012-03-29 09:51:22,717::task::978::TaskManager.Task::(_decref) Task=`f6e28785-c757-4b89-87ce-c92286bc1bc2`::ref 0 aborting False
>> Thread-55028::DEBUG::2012-03-29 09:51:22,727::BindingXMLRPC::167::vds::(wrapper) [192.168.100.10]
>> Thread-55028::DEBUG::2012-03-29 09:51:22,730::task::588::TaskManager.Task::(_updateState) Task=`22afe74b-e4a8-421b-84b3-6c9295dce788`::moving from state init ->   state preparing
>> Thread-55028::INFO::2012-03-29 09:51:22,732::logUtils::37::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='13080edc-77ea-11e1-b6a4-525400c49d2a', options=None)
>> Thread-55028::DEBUG::2012-03-29 09:51:22,735::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.13080edc-77ea-11e1-b6a4-525400c49d2a`ReqID=`8e695a8f-e8b2-4ab6-a464-4447e37f0fe5`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2037' at 'getStoragePoolInfo'
>> Thread-55028::DEBUG::2012-03-29 09:51:22,737::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' for lock type 'shared'
>> Thread-55028::DEBUG::2012-03-29 09:51:22,739::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' is free. Now locking as 'shared' (1 active user)
>> Thread-55028::DEBUG::2012-03-29 09:51:22,741::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.13080edc-77ea-11e1-b6a4-525400c49d2a`ReqID=`8e695a8f-e8b2-4ab6-a464-4447e37f0fe5`::Granted request
>> Thread-55028::DEBUG::2012-03-29 09:51:22,743::task::817::TaskManager.Task::(resourceAcquired) Task=`22afe74b-e4a8-421b-84b3-6c9295dce788`::_resourcesAcquired: Storage.13080edc-77ea-11e1-b6a4-525400c49d2a (shared)
>> Thread-55028::DEBUG::2012-03-29 09:51:22,745::task::978::TaskManager.Task::(_decref) Task=`22afe74b-e4a8-421b-84b3-6c9295dce788`::ref 1 aborting False
>> Thread-55028::INFO::2012-03-29 09:51:22,752::logUtils::39::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 1, 'master_uuid': '8ed25a57-f53a-4cf0-bb92-781f3ce36a48', 'name': 'Default', 'version': '2', 'domains': u'93e2079e-ef3b-452f-af71-5b3e7eb32ba0:Active,bb13e29f-0e5a-4f79-8bf2-f235a0dbd515:Active,8ed25a57-f53a-4cf0-bb92-781f3ce36a48:Active', 'pool_status': 'connected', 'isoprefix': u'/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/93e2079e-ef3b-452f-af71-5b3e7eb32ba0/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 1, 'lver': 1}, 'dominfo': {u'93e2079e-ef3b-452f-af71-5b3e7eb32ba0': {'status': u'Active', 'diskfree': '235838373888', 'alerts': [], 'disktotal': '257822818304'}, u'8ed25a57-f53a-4cf0-bb92-781f3ce36a48': {'status': u'Active', 'diskfree': '2141846503424', 'alerts': [], 'disktotal': '2147080994816'}, u'bb13e29f-0e5a-4f79-8bf2-f235a0dbd515': {'status': u'Active', 'diskfree': '5267626524672', 'alerts': [], 'disktotal': '5902284816384'}}}
>> Thread-55028::DEBUG::2012-03-29 09:51:22,753::task::1172::TaskManager.Task::(prepare) Task=`22afe74b-e4a8-421b-84b3-6c9295dce788`::finished: {'info': {'spm_id': 1, 'master_uuid': '8ed25a57-f53a-4cf0-bb92-781f3ce36a48', 'name': 'Default', 'version': '2', 'domains': u'93e2079e-ef3b-452f-af71-5b3e7eb32ba0:Active,bb13e29f-0e5a-4f79-8bf2-f235a0dbd515:Active,8ed25a57-f53a-4cf0-bb92-781f3ce36a48:Active', 'pool_status': 'connected', 'isoprefix': u'/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/93e2079e-ef3b-452f-af71-5b3e7eb32ba0/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 1, 'lver': 1}, 'dominfo': {u'93e2079e-ef3b-452f-af71-5b3e7eb32ba0': {'status': u'Active', 'diskfree': '235838373888', 'alerts': [], 'disktotal': '257822818304'}, u'8ed25a57-f53a-4cf0-bb92-781f3ce36a48': {'status': u'Active', 'diskfree': '2141846503424', 'alerts': [], 'disktotal': '2147080994816'}, u'bb13e29f-0e5a-4f79-8bf2-f235a0dbd515': {'status': u'Active', 'diskfree': '5267626524672', 'alerts': [], 'disktotal': '5902284816384'}}}
>> Thread-55028::DEBUG::2012-03-29 09:51:22,756::task::588::TaskManager.Task::(_updateState) Task=`22afe74b-e4a8-421b-84b3-6c9295dce788`::moving from state preparing ->   state finished
>> Thread-55028::DEBUG::2012-03-29 09:51:22,757::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a':<   ResourceRef 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a', isValid: 'True' obj: 'None'>}
>> Thread-55028::DEBUG::2012-03-29 09:51:22,759::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
>> Thread-55028::DEBUG::2012-03-29 09:51:22,761::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a'
>> Thread-55028::DEBUG::2012-03-29 09:51:22,763::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' (0 active users)
>> Thread-55028::DEBUG::2012-03-29 09:51:22,764::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' is free, finding out if anyone is waiting for it.
>> Thread-55028::DEBUG::2012-03-29 09:51:22,766::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a', Clearing records.
>> Thread-55028::DEBUG::2012-03-29 09:51:22,768::task::978::TaskManager.Task::(_decref) Task=`22afe74b-e4a8-421b-84b3-6c9295dce788`::ref 0 aborting False
>> Dummy-93::DEBUG::2012-03-29 09:51:23,999::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
>> Dummy-93::DEBUG::2012-03-29 09:51:24,459::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) SUCCESS:<err>   = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.178758 s, 5.7 MB/s\n';<rc>   = 0
>> Dummy-93::DEBUG::2012-03-29 09:51:26,507::storage_mailbox::640::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
>>
>> Regards, rene
>>
>>
>> -----Ursprüngliche Nachricht-----
>> Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im Auftrag von Rene Rosenberger
>> Gesendet: Donnerstag, 29. März 2012 09:32
>> An: Moran Goldboim
>> Cc: users at oVirt.org
>> Betreff: Re: [Users] creating and running a vm fails
>>
>> Hi,
>>
>> I have just retried starting the VM, which fails again. Looking in /var/log/vdsm/vdsm.log shows the last entry on 27.03.2012.
>>
>> MainThread::INFO::2012-03-27 10:58:13,525::vdsm::70::vds::(run) I am the actual vdsm 4.9-0
>> MainThread::DEBUG::2012-03-27 10:58:15,120::resourceManager::379::ResourceManager::(registerNamespace) Registering namespace 'Storage'
>> MainThread::DEBUG::2012-03-27 10:58:15,122::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
>> MainThread::DEBUG::2012-03-27 10:58:15,393::multipath::109::Storage.Multipath::(isEnabled) multipath Defaulting to False
>> MainThread::DEBUG::2012-03-27 10:58:15,398::multipath::123::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /tmp/tmpkart5V /etc/multipath.conf' (cwd None)
>> MainThread::DEBUG::2012-03-27 10:58:15,505::multipath::123::Storage.Misc.excCmd::(setupMultipath) SUCCESS:<err>   = '';<rc>   = 0
>> MainThread::DEBUG::2012-03-27 10:58:15,508::multipath::131::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/multipath -F' (cwd None)
>> MainThread::DEBUG::2012-03-27 10:58:15,591::multipath::131::Storage.Misc.excCmd::(setupMultipath) FAILED:<err>   = '';<rc>   = 1
>> MainThread::DEBUG::2012-03-27 10:58:15,600::multipath::134::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None)
>> MainThread::DEBUG::2012-03-27 10:58:16,123::multipath::134::Storage.Misc.excCmd::(setupMultipath) SUCCESS:<err>   = '';<rc>   = 0
>> MainThread::DEBUG::2012-03-27 10:58:16,130::hsm::334::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
>> MainThread::DEBUG::2012-03-27 10:58:16,424::hsm::334::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS:<err>   = '';<rc>   = 0
>> MainThread::DEBUG::2012-03-27 10:58:16,428::lvm::316::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
>> MainThread::DEBUG::2012-03-27 10:58:16,434::lvm::284::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
>> MainThread::DEBUG::2012-03-27 10:58:16,708::lvm::284::Storage.Misc.excCmd::(cmd) SUCCESS:<err>   = '';<rc>   = 0
>> MainThread::DEBUG::2012-03-27 10:58:16,711::lvm::339::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
>> MainThread::DEBUG::2012-03-27 10:58:16,712::lvm::349::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
>> MainThread::DEBUG::2012-03-27 10:58:16,713::lvm::284::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
>> MainThread::DEBUG::2012-03-27 10:58:16,906::lvm::284::Storage.Misc.excCmd::(cmd) SUCCESS:<err>   = '  No volume groups found\n';<rc>   = 0
>> MainThread::DEBUG::2012-03-27 10:58:16,909::lvm::376::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
>> MainThread::DEBUG::2012-03-27 10:58:16,910::lvm::284::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
>> MainThread::DEBUG::2012-03-27 10:58:17,119::lvm::284::Storage.Misc.excCmd::(cmd) SUCCESS:<err>   = '  No volume groups found\n';<rc>   = 0
>> Thread-12::DEBUG::2012-03-27 10:58:17,123::misc::1022::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
>> MainThread::INFO::2012-03-27 10:58:17,130::dispatcher::118::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
>> Thread-12::DEBUG::2012-03-27 10:58:17,132::misc::1024::SamplingMethod::(__call__) Got in to sampling method
>> Thread-12::DEBUG::2012-03-27 10:58:17,141::misc::1022::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
>> Thread-12::DEBUG::2012-03-27 10:58:17,142::misc::1024::SamplingMethod::(__call__) Got in to sampling method
>> Thread-12::DEBUG::2012-03-27 10:58:17,147::iscsiadm::48::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
>> MainThread::ERROR::2012-03-27 10:58:17,164::netinfo::126::root::(speed) cannot read eth0 speed Traceback (most recent call last):
>>     File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 113, in speed
>>       s = int(file('/sys/class/net/%s/speed' % dev).read())
>> IOError: [Errno 22] Invalid argument
>> MainThread::DEBUG::2012-03-27 10:58:17,198::utils::602::Storage.Misc.excCmd::(execCmd) '/usr/bin/pgrep -xf ksmd' (cwd None)
>> Thread-12::DEBUG::2012-03-27 10:58:17,297::iscsiadm::48::Storage.Misc.excCmd::(_runCmd) FAILED:<err>   = 'iscsiadm: No session found.\n';<rc>   = 21
>> Thread-12::DEBUG::2012-03-27 10:58:17,304::misc::1032::SamplingMethod::(__call__) Returning last result
>> Thread-12::DEBUG::2012-03-27 10:58:17,306::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could not kill old Super Vdsm [Errno 2] No such file or directory: '/var/run/vdsm/svdsm.pid'
>> Thread-12::DEBUG::2012-03-27 10:58:17,308::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) Launching Super Vdsm
>> Thread-12::DEBUG::2012-03-27 10:58:17,309::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) '/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.pyc f90ef831-7bff-4eb4-a9b5-890538959b22 2858' (cwd None)
>> MainThread::DEBUG::2012-03-27 10:58:17,390::utils::602::Storage.Misc.excCmd::(execCmd) SUCCESS:<err>   = '';<rc>   = 0
>> MainThread::INFO::2012-03-27 10:58:17,438::vmChannels::139::vds::(settimeout) Setting channels' timeout to 30 seconds.
>> VM Channels Listener::INFO::2012-03-27 10:58:17,524::vmChannels::127::vds::(run) Starting VM channels listener thread.
>> MainThread::DEBUG::2012-03-27 10:58:17,864::supervdsmServer::201::SuperVdsm.Server::(main) Making sure I'm root
>> MainThread::DEBUG::2012-03-27 10:58:17,866::supervdsmServer::205::SuperVdsm.Server::(main) Parsing cmd args
>> MainThread::DEBUG::2012-03-27 10:58:17,867::supervdsmServer::208::SuperVdsm.Server::(main) Creating PID file
>> MainThread::DEBUG::2012-03-27 10:58:17,867::supervdsmServer::212::SuperVdsm.Server::(main) Cleaning old socket
>> MainThread::DEBUG::2012-03-27 10:58:17,868::supervdsmServer::216::SuperVdsm.Server::(main) Setting up keep alive thread
>> MainThread::DEBUG::2012-03-27 10:58:17,887::supervdsmServer::221::SuperVdsm.Server::(main) Creating remote object manager
>> MainThread::DEBUG::2012-03-27 10:58:17,894::supervdsmServer::232::SuperVdsm.Server::(main) Started serving super vdsm object
>> Thread-12::DEBUG::2012-03-27 10:58:19,378::supervdsm::92::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm
>> Thread-12::DEBUG::2012-03-27 10:58:19,431::supervdsm::64::SuperVdsmProxy::(__init__) Connected to Super Vdsm
>> Thread-12::DEBUG::2012-03-27 10:58:19,737::multipath::71::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
>> Thread-12::DEBUG::2012-03-27 10:58:19,838::multipath::71::Storage.Misc.excCmd::(rescan) FAILED:<err>   = '';<rc>   = 1
>> Thread-12::DEBUG::2012-03-27 10:58:19,841::lvm::457::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
>> Thread-12::DEBUG::2012-03-27 10:58:19,842::lvm::459::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
>> Thread-12::DEBUG::2012-03-27 10:58:19,843::lvm::469::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
>> Thread-12::DEBUG::2012-03-27 10:58:19,844::lvm::471::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
>> Thread-12::DEBUG::2012-03-27 10:58:19,844::lvm::490::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
>> Thread-12::DEBUG::2012-03-27 10:58:19,845::lvm::492::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
>> Thread-12::DEBUG::2012-03-27 10:58:19,846::misc::1032::SamplingMethod::(__call__) Returning last result
>> Thread-12::DEBUG::2012-03-27 10:58:19,847::hsm::358::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
>> Thread-12::DEBUG::2012-03-27 10:58:19,853::hsm::390::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
>> Thread-12::DEBUG::2012-03-27 10:58:19,857::hsm::391::Storage.HSM::(__cleanStorageRepository) Mount list: []
>> Thread-12::DEBUG::2012-03-27 10:58:19,857::hsm::393::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
>> Thread-12::DEBUG::2012-03-27 10:58:19,858::hsm::436::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
>>
>> Regards, rene
>>
>>
>> -----Ursprüngliche Nachricht-----
>> Von: Moran Goldboim [mailto:mgoldboi at redhat.com]
>> Gesendet: Donnerstag, 29. März 2012 09:06
>> An: Rene Rosenberger
>> Cc: Itamar Heim; users at oVirt.org
>> Betreff: Re: [Users] creating and running a vm fails
>>
>> I suspect something happened to vdsm - can you attach the vdsm log
>> (/var/log/vdsm/vdsm.log) from that time and maybe sneak a peek at /var/log/core to see if there is something there.
>> Moran.
>>
>> On 03/29/2012 08:07 AM, Rene Rosenberger wrote:
>>> Hi,
>>>
>>> the host on which the VM should run is up. It has a green arrow.
>>>
>>> Regards, rene
>>>
>>>
>>> -----Ursprüngliche Nachricht-----
>>> Von: Itamar Heim [mailto:iheim at redhat.com]
>>> Gesendet: Mittwoch, 28. März 2012 22:07
>>> An: Rene Rosenberger
>>> Cc: users at oVirt.org
>>> Betreff: Re: [Users] creating and running a vm fails
>>>
>>> On 03/28/2012 02:33 PM, Rene Rosenberger wrote:
>>>> Hi,
>>>>
>>>> when I create a VM with its virtual disk and I want to run it once to
>>>> install from an imported ISO file, I get the following error message:
>>>>
>>>> 2012-03-28 14:31:35,488
>>>> INFO[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand]
>>>> (http--0.0.0.0-8443-5) [cf5d84e] START, CreateVmVDSCommand(vdsId =
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a,
>>>> vmId=93220e0a-610a-4b20-986e-3c8b0d39e35f,
>>>> vm=org.ovirt.engine.core.common.businessentities.VM at 6e69e491), log id:
>>>> 3fc82aad
>>>>
>>>> 2012-03-28 14:31:35,498
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
>>>> (http--0.0.0.0-8443-5) [cf5d84e] START, CreateVDSCommand(vdsId =
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a,
>>>> vmId=93220e0a-610a-4b20-986e-3c8b0d39e35f,
>>>> vm=org.ovirt.engine.core.common.businessentities.VM at 6e69e491), log id:
>>>> 7e63d56b
>>>>
>>>> 2012-03-28 14:31:35,620
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
>>>> (http--0.0.0.0-8443-5) [cf5d84e]
>>>> org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand
>>>> spiceSslCipherSuite=DEFAULT,memSize=512,kvmEnable=true,boot=dc,smp=1,
>>>> e
>>>> mulatedMachine=pc-0.14,vmType=kvm,keyboardLayout=en-us,pitReinjection
>>>> =
>>>> false,nice=0,display=vnc,tabletEnable=true,smpCoresPerSocket=1,spiceS
>>>> e
>>>> cureChannels=,spiceMonitors=1,cdrom=/rhev/data-center/13080edc-77ea-1
>>>> 1
>>>> e1-b6a4-525400c49d2a/93e2079e-ef3b-452f-af71-5b3e7eb32ba0/images/1111
>>>> 1
>>>> 111-1111-1111-1111-111111111111/CentOS-6.2-x86_64-bin-DVD1.iso,timeOf
>>>> f
>>>> set=0,transparentHugePages=true,drives=[Ljava.util.Map;@7f0ba1c1,vmId
>>>> =
>>>> 93220e0a-610a-4b20-986e-3c8b0d39e35f,acpiEnable=true,vmName=test,cpuT
>>>> y
>>>> pe=Opteron_G3,custom={}
>>>>
>>>> 2012-03-28 14:31:35,620
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
>>>> (http--0.0.0.0-8443-5) [cf5d84e] FINISH, CreateVDSCommand, log id:
>>>> 7e63d56b
>>>>
>>>> 2012-03-28 14:31:35,625
>>>> INFO[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand]
>>>> (http--0.0.0.0-8443-5) [cf5d84e]
>>>> IncreasePendingVms::CreateVmIncreasing
>>>> vds KVM-DMZ-04 pending vcpu count, now 1. Vm: test
>>>>
>>>> 2012-03-28 14:31:35,631
>>>> INFO[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand]
>>>> (http--0.0.0.0-8443-5) [cf5d84e] FINISH, CreateVmVDSCommand, return:
>>>> WaitForLaunch, log id: 3fc82aad
>>>>
>>>> 2012-03-28 14:31:39,316
>>>> WARN[org.ovirt.engine.core.vdsbroker.VdsManager]
>>>> (QuartzScheduler_Worker-35)
>>>> ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a : KVM-DMZ-04, VDS Network Error,
>>>> continuing.
>>>>
>>>> VDSNetworkException:
>>>>
>>>> 2012-03-28 14:31:41,328
>>>> WARN[org.ovirt.engine.core.vdsbroker.VdsManager]
>>>> (QuartzScheduler_Worker-29)
>>>> ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a : KVM-DMZ-04, VDS Network Error,
>>>> continuing.
>>>>
>>>> VDSNetworkException:
>>>>
>>>> 2012-03-28 14:31:43,340
>>>> WARN[org.ovirt.engine.core.vdsbroker.VdsManager]
>>>> (QuartzScheduler_Worker-45)
>>>> ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a : KVM-DMZ-04, VDS Network Error,
>>>> continuing.
>>>>
>>>> VDSNetworkException:
>>>>
>>>> 2012-03-28 14:31:43,519
>>>> INFO[org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
>>>> (QuartzScheduler_Worker-38) [7ed2f412] Running command:
>>>> SetStoragePoolStatusCommand internal: true. Entities affected :ID:
>>>> 13080edc-77ea-11e1-b6a4-525400c49d2a Type: StoragePool
>>>>
>>>> 2012-03-28 14:31:43,543 ERROR
>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
>>>> (QuartzScheduler_Worker-38) [7ed2f412]
>>>> IrsBroker::Failed::GetStoragePoolInfoVDS due to: ConnectException:
>>>> Connection refused
>>>>
>>>> 2012-03-28 14:31:45,351
>>>> WARN[org.ovirt.engine.core.vdsbroker.VdsManager]
>>>> (QuartzScheduler_Worker-47)
>>>> ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a : KVM-DMZ-04, VDS Network Error,
>>>> continuing.
>>>>
>>>> VDSNetworkException:
>>>>
>>>> 2012-03-28 14:31:47,363
>>>> WARN[org.ovirt.engine.core.vdsbroker.VdsManager]
>>>> (QuartzScheduler_Worker-71)
>>>> ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a : KVM-DMZ-04, VDS Network Error,
>>>> continuing.
>>>>
>>>> VDSNetworkException:
>>>>
>>>> 2012-03-28 14:31:51,091
>>>> INFO[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>>> (QuartzScheduler_Worker-55) vm test running in db and not running in
>>>> vds
>>>> - add to rerun treatment. vds KVM-DMZ-04
>>>>
>>>> 2012-03-28 14:31:51,104
>>>> INFO[org.ovirt.engine.core.bll.InitVdsOnUpCommand]
>>>> (QuartzScheduler_Worker-55) [6df61eca] Running command:
>>>> InitVdsOnUpCommand internal: true.
>>>>
>>>> 2012-03-28 14:31:51,182
>>>> INFO[org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServer
>>>> s
>>>> Command]
>>>> (QuartzScheduler_Worker-55) [48a331d9] Running command:
>>>> ConnectHostToStoragePoolServersCommand internal: true. Entities
>>>> affected
>>>> :ID: 13080edc-77ea-11e1-b6a4-525400c49d2a Type: StoragePool
>>>>
>>>> 2012-03-28 14:31:51,185
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVD
>>>> S
>>>> Command]
>>>> (QuartzScheduler_Worker-55) [48a331d9] START,
>>>> ConnectStorageServerVDSCommand(vdsId =
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a, storagePoolId =
>>>> 13080edc-77ea-11e1-b6a4-525400c49d2a, storageType = ISCSI,
>>>> connectionList = [{ id: e25e5093-9240-42ac-a21a-f0c216294944,
>>>> connection: 192.168.200.32 };]), log id: 11a3b069
>>>>
>>>> 2012-03-28 14:31:51,488
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVD
>>>> S
>>>> Command]
>>>> (QuartzScheduler_Worker-55) [48a331d9] FINISH,
>>>> ConnectStorageServerVDSCommand, return:
>>>> {e25e5093-9240-42ac-a21a-f0c216294944=0}, log id: 11a3b069
>>>>
>>>> 2012-03-28 14:31:51,489
>>>> INFO[org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServer
>>>> s
>>>> Command]
>>>> (QuartzScheduler_Worker-55) [48a331d9] Host KVM-DMZ-04 storage
>>>> connection was succeeded
>>>>
>>>> 2012-03-28 14:31:51,492
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVD
>>>> S
>>>> Command]
>>>> (QuartzScheduler_Worker-55) [48a331d9] START,
>>>> ConnectStorageServerVDSCommand(vdsId =
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a, storagePoolId =
>>>> 13080edc-77ea-11e1-b6a4-525400c49d2a, storageType = NFS,
>>>> connectionList = [{ id: 3dd798da-77ea-11e1-969c-525400c49d2a, connection:
>>>> oVirt.dynetic.de:/mnt/iso };]), log id: 1d73fc4e
>>>>
>>>> 2012-03-28 14:31:51,535
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVD
>>>> S
>>>> Command]
>>>> (QuartzScheduler_Worker-55) [48a331d9] FINISH,
>>>> ConnectStorageServerVDSCommand, return:
>>>> {3dd798da-77ea-11e1-969c-525400c49d2a=0}, log id: 1d73fc4e
>>>>
>>>> 2012-03-28 14:31:51,535
>>>> INFO[org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServer
>>>> s
>>>> Command]
>>>> (QuartzScheduler_Worker-55) [48a331d9] Host KVM-DMZ-04 storage
>>>> connection was succeeded
>>>>
>>>> 2012-03-28 14:31:51,538
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVD
>>>> S
>>>> Command]
>>>> (QuartzScheduler_Worker-55) [48a331d9] START,
>>>> ConnectStorageServerVDSCommand(vdsId =
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a, storagePoolId =
>>>> 13080edc-77ea-11e1-b6a4-525400c49d2a, storageType = NFS,
>>>> connectionList = [{ id: 57d6595f-1109-49f9-a7f3-f8fe255c34bd, connection:
>>>> 192.168.200.32:/nfsexport };]), log id: 618343ee
>>>>
>>>> 2012-03-28 14:31:51,579
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVD
>>>> S
>>>> Command]
>>>> (QuartzScheduler_Worker-55) [48a331d9] FINISH,
>>>> ConnectStorageServerVDSCommand, return:
>>>> {57d6595f-1109-49f9-a7f3-f8fe255c34bd=0}, log id: 618343ee
>>>>
>>>> 2012-03-28 14:31:51,579
>>>> INFO[org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServer
>>>> s
>>>> Command]
>>>> (QuartzScheduler_Worker-55) [48a331d9] Host KVM-DMZ-04 storage
>>>> connection was succeeded
>>>>
>>>> 2012-03-28 14:31:51,594
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSC
>>>> o
>>>> mmand]
>>>> (QuartzScheduler_Worker-55) [48a331d9] START,
>>>> ConnectStoragePoolVDSCommand(vdsId =
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a, storagePoolId =
>>>> 13080edc-77ea-11e1-b6a4-525400c49d2a, vds_spm_id = 1, masterDomainId
>>>> = 8ed25a57-f53a-4cf0-bb92-781f3ce36a48, masterVersion = 1), log id:
>>>> 190d7dea
>>>>
>>>> 2012-03-28 14:32:02,207
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSC
>>>> o
>>>> mmand]
>>>> (QuartzScheduler_Worker-55) [48a331d9] FINISH,
>>>> ConnectStoragePoolVDSCommand, log id: 190d7dea
>>>>
>>>> 2012-03-28 14:32:02,231
>>>> INFO[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDire
>>>> c
>>>> tor]
>>>> (QuartzScheduler_Worker-55) [48a331d9] No string for UNASSIGNED type.
>>>> Use default Log
>>>>
>>>> 2012-03-28 14:32:02,234
>>>> INFO[org.ovirt.engine.core.bll.MultipleActionsRunner]
>>>> (pool-5-thread-47) [48a331d9] MultipleActionsRunner of type
>>>> MigrateVmToServer invoked with no actions
>>>>
>>>> 2012-03-28 14:32:02,264
>>>> INFO[org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedComma
>>>> n
>>>> d]
>>>> (QuartzScheduler_Worker-55) [75839ca5] Running command:
>>>> HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities
>>>> affected :ID: 0e0403a4-78ae-11e1-9c19-525400c49d2a Type: VDS
>>>>
>>>> 2012-03-28 14:32:02,272
>>>> INFO[org.ovirt.engine.core.bll.HandleVdsVersionCommand]
>>>> (QuartzScheduler_Worker-55) [2dd7e7a] Running command:
>>>> HandleVdsVersionCommand internal: true. Entities affected :ID:
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a Type: VDS
>>>>
>>>> 2012-03-28 14:32:02,276 ERROR
>>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>>> (QuartzScheduler_Worker-55) [2dd7e7a] Rerun vm
>>>> 93220e0a-610a-4b20-986e-3c8b0d39e35f. Called from vds KVM-DMZ-04
>>>>
>>>> 2012-03-28 14:32:02,280
>>>> INFO[org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand]
>>>> (pool-5-thread-47) [2dd7e7a] START,
>>>> UpdateVdsDynamicDataVDSCommand(vdsId
>>>> = 0e0403a4-78ae-11e1-9c19-525400c49d2a,
>>>> vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic at 2
>>>> f
>>>> 1f5a9f),
>>>> log id: 2901421c
>>>>
>>>> 2012-03-28 14:32:02,282
>>>> INFO[org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand]
>>>> (pool-5-thread-47) [2dd7e7a] FINISH, UpdateVdsDynamicDataVDSCommand,
>>>> log
>>>> id: 2901421c
>>>>
>>>> 2012-03-28 14:32:02,299
>>>> INFO[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
>>>> (pool-5-thread-47) [2dd7e7a] START, IsValidVDSCommand(storagePoolId =
>>>> 13080edc-77ea-11e1-b6a4-525400c49d2a, ignoreFailoverLimit = false,
>>>> compatabilityVersion = null), log id: 2d73522e
>>>>
>>>> 2012-03-28 14:32:02,300
>>>> INFO[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
>>>> (pool-5-thread-47) [2dd7e7a] FINISH, IsValidVDSCommand, return:
>>>> false, log id: 2d73522e
>>>>
>>>> 2012-03-28 14:32:02,301
>>>> WARN[org.ovirt.engine.core.bll.RunVmOnceCommand]
>>>> (pool-5-thread-47) [2dd7e7a] CanDoAction of action RunVmOnce failed.
>>>> Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM
>>>> , ACTION_TYPE_FAILED_IMAGE_REPOSITORY_NOT_FOUND
>>>>
>>>> 2012-03-28 14:32:04,591
>>>> INFO[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
>>>> (QuartzScheduler_Worker-62) hostFromVds::selectedVds - KVM-DMZ-04,
>>>> spmStatus Free, storage pool Default
>>>>
>>>> 2012-03-28 14:32:05,430
>>>> INFO[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
>>>> (QuartzScheduler_Worker-62) SpmStatus on vds
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a: Free
>>>>
>>>> 2012-03-28 14:32:05,436
>>>> INFO[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
>>>> (QuartzScheduler_Worker-62) starting spm on vds KVM-DMZ-04, storage
>>>> pool Default, prevId 1, LVER 0
>>>>
>>>> 2012-03-28 14:32:05,439
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
>>>> (QuartzScheduler_Worker-62) START, SpmStartVDSCommand(vdsId =
>>>> 0e0403a4-78ae-11e1-9c19-525400c49d2a, storagePoolId =
>>>> 13080edc-77ea-11e1-b6a4-525400c49d2a, prevId=1, prevLVER=0,
>>>> storagePoolFormatType=V2, recoveryMode=Manual, SCSIFencing=false),
>>>> log
>>>> id: 25d38cf4
>>>>
>>>> 2012-03-28 14:32:05,490
>>>> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
>>>> (QuartzScheduler_Worker-62) spmStart polling started: taskId =
>>>> c2a0d9a2-e356-4aa9-bb29-f66a0899d8cc
>>>>
>>>> Any idea what to do?
>>>>
>>> looks like the host is not up?
>>> _______________________________________________
>>> Users mailing list
>>> Users at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


-- 

Thanks,

Rami Vaknin, QE @ Red Hat, TLV, IL.




