Hi,
I have just retried starting the VM, which fails again. Looking at /var/log/vdsm/vdsm.log
shows the last entry on 27.03.2012.
MainThread::INFO::2012-03-27 10:58:13,525::vdsm::70::vds::(run) I am the actual vdsm
4.9-0
MainThread::DEBUG::2012-03-27
10:58:15,120::resourceManager::379::ResourceManager::(registerNamespace) Registering
namespace 'Storage'
MainThread::DEBUG::2012-03-27 10:58:15,122::threadPool::45::Misc.ThreadPool::(__init__)
Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-03-27 10:58:15,393::multipath::109::Storage.Multipath::(isEnabled)
multipath Defaulting to False
MainThread::DEBUG::2012-03-27
10:58:15,398::multipath::123::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n
/bin/cp /tmp/tmpkart5V /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-03-27
10:58:15,505::multipath::123::Storage.Misc.excCmd::(setupMultipath) SUCCESS: <err> =
''; <rc> = 0
MainThread::DEBUG::2012-03-27
10:58:15,508::multipath::131::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n
/sbin/multipath -F' (cwd None)
MainThread::DEBUG::2012-03-27
10:58:15,591::multipath::131::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> =
''; <rc> = 1
MainThread::DEBUG::2012-03-27
10:58:15,600::multipath::134::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n
/sbin/service multipathd restart' (cwd None)
MainThread::DEBUG::2012-03-27
10:58:16,123::multipath::134::Storage.Misc.excCmd::(setupMultipath) SUCCESS: <err> =
''; <rc> = 0
MainThread::DEBUG::2012-03-27
10:58:16,130::hsm::334::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo
-n /sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-03-27
10:58:16,424::hsm::334::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS:
<err> = ''; <rc> = 0
MainThread::DEBUG::2012-03-27 10:58:16,428::lvm::316::OperationMutex::(_reloadpvs)
Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-03-27 10:58:16,434::lvm::284::Storage.Misc.excCmd::(cmd)
'/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1
prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0
} " --noheadings --units b --nosuffix --separator | -o
uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size'
(cwd None)
MainThread::DEBUG::2012-03-27 10:58:16,708::lvm::284::Storage.Misc.excCmd::(cmd) SUCCESS:
<err> = ''; <rc> = 0
MainThread::DEBUG::2012-03-27 10:58:16,711::lvm::339::OperationMutex::(_reloadpvs)
Operation 'lvm reload operation' released the operation mutex
MainThread::DEBUG::2012-03-27 10:58:16,712::lvm::349::OperationMutex::(_reloadvgs)
Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-03-27 10:58:16,713::lvm::284::Storage.Misc.excCmd::(cmd)
'/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1
prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0
} " --noheadings --units b --nosuffix --separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free'
(cwd None)
MainThread::DEBUG::2012-03-27 10:58:16,906::lvm::284::Storage.Misc.excCmd::(cmd) SUCCESS:
<err> = ' No volume groups found\n'; <rc> = 0
MainThread::DEBUG::2012-03-27 10:58:16,909::lvm::376::OperationMutex::(_reloadvgs)
Operation 'lvm reload operation' released the operation mutex
MainThread::DEBUG::2012-03-27 10:58:16,910::lvm::284::Storage.Misc.excCmd::(cmd)
'/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1
prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0
} " --noheadings --units b --nosuffix --separator | -o
uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
MainThread::DEBUG::2012-03-27 10:58:17,119::lvm::284::Storage.Misc.excCmd::(cmd) SUCCESS:
<err> = ' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-03-27 10:58:17,123::misc::1022::SamplingMethod::(__call__) Trying
to enter sampling method (storage.sdc.refreshStorage)
MainThread::INFO::2012-03-27 10:58:17,130::dispatcher::118::Storage.Dispatcher::(__init__)
Starting StorageDispatcher...
Thread-12::DEBUG::2012-03-27 10:58:17,132::misc::1024::SamplingMethod::(__call__) Got in
to sampling method
Thread-12::DEBUG::2012-03-27 10:58:17,141::misc::1022::SamplingMethod::(__call__) Trying
to enter sampling method (storage.iscsi.rescan)
Thread-12::DEBUG::2012-03-27 10:58:17,142::misc::1024::SamplingMethod::(__call__) Got in
to sampling method
Thread-12::DEBUG::2012-03-27 10:58:17,147::iscsiadm::48::Storage.Misc.excCmd::(_runCmd)
'/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
MainThread::ERROR::2012-03-27 10:58:17,164::netinfo::126::root::(speed) cannot read eth0
speed
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 113, in speed
    s = int(file('/sys/class/net/%s/speed' % dev).read())
IOError: [Errno 22] Invalid argument
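(Side note: that IOError is what reading /sys/class/net/&lt;dev&gt;/speed does when the interface has no negotiated link speed, e.g. the link is down or the device is virtual, so it is usually harmless noise. A defensive read would look something like this; `read_nic_speed` is a hypothetical helper sketched for illustration, not vdsm's actual fix:)

```python
def read_nic_speed(dev):
    """Return the NIC speed in Mb/s, or None if the kernel cannot
    report it (link down, virtual device, missing sysfs entry)."""
    path = '/sys/class/net/%s/speed' % dev
    try:
        with open(path) as f:
            return int(f.read())
    except (OSError, IOError, ValueError):
        # The 'speed' attribute raises EINVAL when the interface has
        # no negotiated link speed (bridge, bond, down interface).
        return None

if __name__ == '__main__':
    # A device name that does not exist simply yields None.
    print(read_nic_speed('definitely-no-such-nic0'))
```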
MainThread::DEBUG::2012-03-27 10:58:17,198::utils::602::Storage.Misc.excCmd::(execCmd)
'/usr/bin/pgrep -xf ksmd' (cwd None)
Thread-12::DEBUG::2012-03-27 10:58:17,297::iscsiadm::48::Storage.Misc.excCmd::(_runCmd)
FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21
Thread-12::DEBUG::2012-03-27 10:58:17,304::misc::1032::SamplingMethod::(__call__)
Returning last result
Thread-12::DEBUG::2012-03-27 10:58:17,306::supervdsm::83::SuperVdsmProxy::(_killSupervdsm)
Could not kill old Super Vdsm [Errno 2] No such file or directory:
'/var/run/vdsm/svdsm.pid'
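(The "Could not kill old Super Vdsm" line is also benign: a missing PID file just means there is no stale supervdsm process to clean up. The pattern looks roughly like this sketch; `kill_from_pidfile` is a hypothetical helper, not vdsm's code:)

```python
import os
import errno
import signal

def kill_from_pidfile(pidfile):
    """Terminate the process recorded in pidfile.

    Returns False when the pidfile is absent (nothing to kill, as in
    the ENOENT log line above); ignores an already-dead process."""
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
    except (OSError, IOError):
        return False  # no pidfile -> no old process to clean up
    try:
        os.kill(pid, signal.SIGTERM)
    except OSError as e:
        if e.errno != errno.ESRCH:  # ESRCH: process already gone
            raise
    return True
```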
Thread-12::DEBUG::2012-03-27
10:58:17,308::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) Launching Super Vdsm
Thread-12::DEBUG::2012-03-27
10:58:17,309::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) '/usr/bin/sudo -n
/usr/bin/python /usr/share/vdsm/supervdsmServer.pyc f90ef831-7bff-4eb4-a9b5-890538959b22
2858' (cwd None)
MainThread::DEBUG::2012-03-27 10:58:17,390::utils::602::Storage.Misc.excCmd::(execCmd)
SUCCESS: <err> = ''; <rc> = 0
MainThread::INFO::2012-03-27 10:58:17,438::vmChannels::139::vds::(settimeout) Setting
channels' timeout to 30 seconds.
VM Channels Listener::INFO::2012-03-27 10:58:17,524::vmChannels::127::vds::(run) Starting
VM channels listener thread.
MainThread::DEBUG::2012-03-27 10:58:17,864::supervdsmServer::201::SuperVdsm.Server::(main)
Making sure I'm root
MainThread::DEBUG::2012-03-27 10:58:17,866::supervdsmServer::205::SuperVdsm.Server::(main)
Parsing cmd args
MainThread::DEBUG::2012-03-27 10:58:17,867::supervdsmServer::208::SuperVdsm.Server::(main)
Creating PID file
MainThread::DEBUG::2012-03-27 10:58:17,867::supervdsmServer::212::SuperVdsm.Server::(main)
Cleaning old socket
MainThread::DEBUG::2012-03-27 10:58:17,868::supervdsmServer::216::SuperVdsm.Server::(main)
Setting up keep alive thread
MainThread::DEBUG::2012-03-27 10:58:17,887::supervdsmServer::221::SuperVdsm.Server::(main)
Creating remote object manager
MainThread::DEBUG::2012-03-27 10:58:17,894::supervdsmServer::232::SuperVdsm.Server::(main)
Started serving super vdsm object
Thread-12::DEBUG::2012-03-27 10:58:19,378::supervdsm::92::SuperVdsmProxy::(_connect)
Trying to connect to Super Vdsm
Thread-12::DEBUG::2012-03-27 10:58:19,431::supervdsm::64::SuperVdsmProxy::(__init__)
Connected to Super Vdsm
Thread-12::DEBUG::2012-03-27 10:58:19,737::multipath::71::Storage.Misc.excCmd::(rescan)
'/usr/bin/sudo -n /sbin/multipath' (cwd None)
Thread-12::DEBUG::2012-03-27 10:58:19,838::multipath::71::Storage.Misc.excCmd::(rescan)
FAILED: <err> = ''; <rc> = 1
Thread-12::DEBUG::2012-03-27 10:58:19,841::lvm::457::OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-03-27 10:58:19,842::lvm::459::OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-03-27 10:58:19,843::lvm::469::OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-03-27 10:58:19,844::lvm::471::OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-03-27 10:58:19,844::lvm::490::OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-03-27 10:58:19,845::lvm::492::OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-03-27 10:58:19,846::misc::1032::SamplingMethod::(__call__)
Returning last result
Thread-12::DEBUG::2012-03-27
10:58:19,847::hsm::358::Storage.HSM::(__cleanStorageRepository) Started cleaning storage
repository at '/rhev/data-center'
Thread-12::DEBUG::2012-03-27
10:58:19,853::hsm::390::Storage.HSM::(__cleanStorageRepository) White list:
['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*',
'/rhev/data-center/mnt']
Thread-12::DEBUG::2012-03-27
10:58:19,857::hsm::391::Storage.HSM::(__cleanStorageRepository) Mount list: []
Thread-12::DEBUG::2012-03-27
10:58:19,857::hsm::393::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
Thread-12::DEBUG::2012-03-27
10:58:19,858::hsm::436::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage
repository at '/rhev/data-center'
Regards, rene
-----Original Message-----
From: Moran Goldboim [mailto:mgoldboi@redhat.com]
Sent: Thursday, 29 March 2012 09:06
To: Rene Rosenberger
Cc: Itamar Heim; users(a)oVirt.org
Subject: Re: [Users] creating and running a vm fails
I suspect something happened to vdsm - can you attach the vdsm logs
(/var/log/vdsm/vdsm.log) from that time, and maybe sneak a peek at /var/log/core in case we
have something there.
Moran.
On 03/29/2012 08:07 AM, Rene Rosenberger wrote:
Hi,
the host on which the vm should run is up. It has a green arrow.
Regards, rene
-----Original Message-----
From: Itamar Heim [mailto:iheim@redhat.com]
Sent: Wednesday, 28 March 2012 22:07
To: Rene Rosenberger
Cc: users(a)oVirt.org
Subject: Re: [Users] creating and running a vm fails
On 03/28/2012 02:33 PM, Rene Rosenberger wrote:
> Hi,
>
> when I create a VM with its virtual disk and want to run it once to
> install from an imported ISO file, I get the following error message:
>
> 2012-03-28 14:31:35,488
> INFO[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand]
> (http--0.0.0.0-8443-5) [cf5d84e] START, CreateVmVDSCommand(vdsId =
> 0e0403a4-78ae-11e1-9c19-525400c49d2a,
> vmId=93220e0a-610a-4b20-986e-3c8b0d39e35f,
> vm=org.ovirt.engine.core.common.businessentities.VM@6e69e491), log id:
> 3fc82aad
>
> 2012-03-28 14:31:35,498
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
> (http--0.0.0.0-8443-5) [cf5d84e] START, CreateVDSCommand(vdsId =
> 0e0403a4-78ae-11e1-9c19-525400c49d2a,
> vmId=93220e0a-610a-4b20-986e-3c8b0d39e35f,
> vm=org.ovirt.engine.core.common.businessentities.VM@6e69e491), log id:
> 7e63d56b
>
> 2012-03-28 14:31:35,620
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
> (http--0.0.0.0-8443-5) [cf5d84e]
> org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand
> spiceSslCipherSuite=DEFAULT,memSize=512,kvmEnable=true,boot=dc,smp=1,
> emulatedMachine=pc-0.14,vmType=kvm,keyboardLayout=en-us,
> pitReinjection=false,nice=0,display=vnc,tabletEnable=true,
> smpCoresPerSocket=1,spiceSecureChannels=,spiceMonitors=1,
> cdrom=/rhev/data-center/13080edc-77ea-11e1-b6a4-525400c49d2a/93e2079e-ef3b-452f-af71-5b3e7eb32ba0/images/11111111-1111-1111-1111-111111111111/CentOS-6.2-x86_64-bin-DVD1.iso,
> timeOffset=0,transparentHugePages=true,drives=[Ljava.util.Map;@7f0ba1c1,
> vmId=93220e0a-610a-4b20-986e-3c8b0d39e35f,acpiEnable=true,vmName=test,
> cpuType=Opteron_G3,custom={}
>
> 2012-03-28 14:31:35,620
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
> (http--0.0.0.0-8443-5) [cf5d84e] FINISH, CreateVDSCommand, log id:
> 7e63d56b
>
> 2012-03-28 14:31:35,625
> INFO[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand]
> (http--0.0.0.0-8443-5) [cf5d84e]
> IncreasePendingVms::CreateVmIncreasing
> vds KVM-DMZ-04 pending vcpu count, now 1. Vm: test
>
> 2012-03-28 14:31:35,631
> INFO[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand]
> (http--0.0.0.0-8443-5) [cf5d84e] FINISH, CreateVmVDSCommand, return:
> WaitForLaunch, log id: 3fc82aad
>
> 2012-03-28 14:31:39,316
> WARN[org.ovirt.engine.core.vdsbroker.VdsManager]
> (QuartzScheduler_Worker-35)
> ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
> 0e0403a4-78ae-11e1-9c19-525400c49d2a : KVM-DMZ-04, VDS Network Error,
> continuing.
>
> VDSNetworkException:
>
> 2012-03-28 14:31:41,328
> WARN[org.ovirt.engine.core.vdsbroker.VdsManager]
> (QuartzScheduler_Worker-29)
> ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
> 0e0403a4-78ae-11e1-9c19-525400c49d2a : KVM-DMZ-04, VDS Network Error,
> continuing.
>
> VDSNetworkException:
>
> 2012-03-28 14:31:43,340
> WARN[org.ovirt.engine.core.vdsbroker.VdsManager]
> (QuartzScheduler_Worker-45)
> ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
> 0e0403a4-78ae-11e1-9c19-525400c49d2a : KVM-DMZ-04, VDS Network Error,
> continuing.
>
> VDSNetworkException:
>
> 2012-03-28 14:31:43,519
> INFO[org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
> (QuartzScheduler_Worker-38) [7ed2f412] Running command:
> SetStoragePoolStatusCommand internal: true. Entities affected :ID:
> 13080edc-77ea-11e1-b6a4-525400c49d2a Type: StoragePool
>
> 2012-03-28 14:31:43,543 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (QuartzScheduler_Worker-38) [7ed2f412]
> IrsBroker::Failed::GetStoragePoolInfoVDS due to: ConnectException:
> Connection refused
>
> 2012-03-28 14:31:45,351
> WARN[org.ovirt.engine.core.vdsbroker.VdsManager]
> (QuartzScheduler_Worker-47)
> ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
> 0e0403a4-78ae-11e1-9c19-525400c49d2a : KVM-DMZ-04, VDS Network Error,
> continuing.
>
> VDSNetworkException:
>
> 2012-03-28 14:31:47,363
> WARN[org.ovirt.engine.core.vdsbroker.VdsManager]
> (QuartzScheduler_Worker-71)
> ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
> 0e0403a4-78ae-11e1-9c19-525400c49d2a : KVM-DMZ-04, VDS Network Error,
> continuing.
>
> VDSNetworkException:
>
> 2012-03-28 14:31:51,091
> INFO[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (QuartzScheduler_Worker-55) vm test running in db and not running in
> vds
> - add to rerun treatment. vds KVM-DMZ-04
>
> 2012-03-28 14:31:51,104
> INFO[org.ovirt.engine.core.bll.InitVdsOnUpCommand]
> (QuartzScheduler_Worker-55) [6df61eca] Running command:
> InitVdsOnUpCommand internal: true.
>
> 2012-03-28 14:31:51,182
> INFO[org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand]
> (QuartzScheduler_Worker-55) [48a331d9] Running command:
> ConnectHostToStoragePoolServersCommand internal: true. Entities
> affected
> :ID: 13080edc-77ea-11e1-b6a4-525400c49d2a Type: StoragePool
>
> 2012-03-28 14:31:51,185
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (QuartzScheduler_Worker-55) [48a331d9] START,
> ConnectStorageServerVDSCommand(vdsId =
> 0e0403a4-78ae-11e1-9c19-525400c49d2a, storagePoolId =
> 13080edc-77ea-11e1-b6a4-525400c49d2a, storageType = ISCSI,
> connectionList = [{ id: e25e5093-9240-42ac-a21a-f0c216294944,
> connection: 192.168.200.32 };]), log id: 11a3b069
>
> 2012-03-28 14:31:51,488
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (QuartzScheduler_Worker-55) [48a331d9] FINISH,
> ConnectStorageServerVDSCommand, return:
> {e25e5093-9240-42ac-a21a-f0c216294944=0}, log id: 11a3b069
>
> 2012-03-28 14:31:51,489
> INFO[org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand]
> (QuartzScheduler_Worker-55) [48a331d9] Host KVM-DMZ-04 storage
> connection was succeeded
>
> 2012-03-28 14:31:51,492
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (QuartzScheduler_Worker-55) [48a331d9] START,
> ConnectStorageServerVDSCommand(vdsId =
> 0e0403a4-78ae-11e1-9c19-525400c49d2a, storagePoolId =
> 13080edc-77ea-11e1-b6a4-525400c49d2a, storageType = NFS,
> connectionList = [{ id: 3dd798da-77ea-11e1-969c-525400c49d2a, connection:
> oVirt.dynetic.de:/mnt/iso };]), log id: 1d73fc4e
>
> 2012-03-28 14:31:51,535
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (QuartzScheduler_Worker-55) [48a331d9] FINISH,
> ConnectStorageServerVDSCommand, return:
> {3dd798da-77ea-11e1-969c-525400c49d2a=0}, log id: 1d73fc4e
>
> 2012-03-28 14:31:51,535
> INFO[org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand]
> (QuartzScheduler_Worker-55) [48a331d9] Host KVM-DMZ-04 storage
> connection was succeeded
>
> 2012-03-28 14:31:51,538
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (QuartzScheduler_Worker-55) [48a331d9] START,
> ConnectStorageServerVDSCommand(vdsId =
> 0e0403a4-78ae-11e1-9c19-525400c49d2a, storagePoolId =
> 13080edc-77ea-11e1-b6a4-525400c49d2a, storageType = NFS,
> connectionList = [{ id: 57d6595f-1109-49f9-a7f3-f8fe255c34bd, connection:
> 192.168.200.32:/nfsexport };]), log id: 618343ee
>
> 2012-03-28 14:31:51,579
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (QuartzScheduler_Worker-55) [48a331d9] FINISH,
> ConnectStorageServerVDSCommand, return:
> {57d6595f-1109-49f9-a7f3-f8fe255c34bd=0}, log id: 618343ee
>
> 2012-03-28 14:31:51,579
> INFO[org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand]
> (QuartzScheduler_Worker-55) [48a331d9] Host KVM-DMZ-04 storage
> connection was succeeded
>
> 2012-03-28 14:31:51,594
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
> (QuartzScheduler_Worker-55) [48a331d9] START,
> ConnectStoragePoolVDSCommand(vdsId =
> 0e0403a4-78ae-11e1-9c19-525400c49d2a, storagePoolId =
> 13080edc-77ea-11e1-b6a4-525400c49d2a, vds_spm_id = 1, masterDomainId
> = 8ed25a57-f53a-4cf0-bb92-781f3ce36a48, masterVersion = 1), log id:
> 190d7dea
>
> 2012-03-28 14:32:02,207
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
> (QuartzScheduler_Worker-55) [48a331d9] FINISH,
> ConnectStoragePoolVDSCommand, log id: 190d7dea
>
> 2012-03-28 14:32:02,231
> INFO[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (QuartzScheduler_Worker-55) [48a331d9] No string for UNASSIGNED type.
> Use default Log
>
> 2012-03-28 14:32:02,234
> INFO[org.ovirt.engine.core.bll.MultipleActionsRunner]
> (pool-5-thread-47) [48a331d9] MultipleActionsRunner of type
> MigrateVmToServer invoked with no actions
>
> 2012-03-28 14:32:02,264
> INFO[org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand]
> (QuartzScheduler_Worker-55) [75839ca5] Running command:
> HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities
> affected :ID: 0e0403a4-78ae-11e1-9c19-525400c49d2a Type: VDS
>
> 2012-03-28 14:32:02,272
> INFO[org.ovirt.engine.core.bll.HandleVdsVersionCommand]
> (QuartzScheduler_Worker-55) [2dd7e7a] Running command:
> HandleVdsVersionCommand internal: true. Entities affected :ID:
> 0e0403a4-78ae-11e1-9c19-525400c49d2a Type: VDS
>
> 2012-03-28 14:32:02,276 ERROR
> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (QuartzScheduler_Worker-55) [2dd7e7a] Rerun vm
> 93220e0a-610a-4b20-986e-3c8b0d39e35f. Called from vds KVM-DMZ-04
>
> 2012-03-28 14:32:02,280
> INFO[org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand]
> (pool-5-thread-47) [2dd7e7a] START,
> UpdateVdsDynamicDataVDSCommand(vdsId
> = 0e0403a4-78ae-11e1-9c19-525400c49d2a,
> vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@2f1f5a9f),
> log id: 2901421c
>
> 2012-03-28 14:32:02,282
> INFO[org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand]
> (pool-5-thread-47) [2dd7e7a] FINISH, UpdateVdsDynamicDataVDSCommand,
> log
> id: 2901421c
>
> 2012-03-28 14:32:02,299
> INFO[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
> (pool-5-thread-47) [2dd7e7a] START, IsValidVDSCommand(storagePoolId =
> 13080edc-77ea-11e1-b6a4-525400c49d2a, ignoreFailoverLimit = false,
> compatabilityVersion = null), log id: 2d73522e
>
> 2012-03-28 14:32:02,300
> INFO[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
> (pool-5-thread-47) [2dd7e7a] FINISH, IsValidVDSCommand, return:
> false, log id: 2d73522e
>
> 2012-03-28 14:32:02,301
> WARN[org.ovirt.engine.core.bll.RunVmOnceCommand]
> (pool-5-thread-47) [2dd7e7a] CanDoAction of action RunVmOnce failed.
> Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM
> , ACTION_TYPE_FAILED_IMAGE_REPOSITORY_NOT_FOUND
>
> 2012-03-28 14:32:04,591
> INFO[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (QuartzScheduler_Worker-62) hostFromVds::selectedVds - KVM-DMZ-04,
> spmStatus Free, storage pool Default
>
> 2012-03-28 14:32:05,430
> INFO[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (QuartzScheduler_Worker-62) SpmStatus on vds
> 0e0403a4-78ae-11e1-9c19-525400c49d2a: Free
>
> 2012-03-28 14:32:05,436
> INFO[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (QuartzScheduler_Worker-62) starting spm on vds KVM-DMZ-04, storage
> pool Default, prevId 1, LVER 0
>
> 2012-03-28 14:32:05,439
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> (QuartzScheduler_Worker-62) START, SpmStartVDSCommand(vdsId =
> 0e0403a4-78ae-11e1-9c19-525400c49d2a, storagePoolId =
> 13080edc-77ea-11e1-b6a4-525400c49d2a, prevId=1, prevLVER=0,
> storagePoolFormatType=V2, recoveryMode=Manual, SCSIFencing=false),
> log
> id: 25d38cf4
>
> 2012-03-28 14:32:05,490
> INFO[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> (QuartzScheduler_Worker-62) spmStart polling started: taskId =
> c2a0d9a2-e356-4aa9-bb29-f66a0899d8cc
>
> Any idea what to do?
>
Looks like the host is not up?
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users