[Users] Failed to initialize storage

зоррыч zorro at megatrone.ru
Sun May 20 12:48:59 UTC 2012


Thanks

-----Original Message-----
From: Haim Ateya [mailto:hateya at redhat.com] 
Sent: Sunday, May 20, 2012 1:29 PM
To: зоррыч
Cc: users at ovirt.org
Subject: Re: [Users] Failed to initialize storage

vdsm now requires a higher version of lvm2:

Requires: lvm2 >= 2.02.95

Please install the required version and try again.

We introduced this requirement in commit aa709c48778de1aadfe8331160280e51e2a83587.

Thanks, 

Haim
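
The floor can also be checked programmatically on a host before retrying the
install. A minimal sketch, assuming the rpm Python bindings that ship with
EL6; the REQUIRED tuple below simply mirrors the new dependency and is not
taken from vdsm itself:

# Minimal sketch, assuming the rpm Python bindings available on EL6.
# REQUIRED mirrors vdsm's new "Requires: lvm2 >= 2.02.95".
import rpm

REQUIRED = ("0", "2.02.95", "0")  # (epoch, version, release) floor

ts = rpm.TransactionSet()
for hdr in ts.dbMatch("name", "lvm2"):
    installed = (str(hdr["epoch"] or 0), hdr["version"], hdr["release"])
    # rpm.labelCompare() returns -1, 0 or 1, like cmp()
    if rpm.labelCompare(installed, REQUIRED) < 0:
        print("lvm2 %s-%s is older than 2.02.95; upgrade before retrying"
              % (hdr["version"], hdr["release"]))

On the hosts shown below, this would flag lvm2-2.02.87-6.el6 as too old.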


----- Original Message -----
> From: "зоррыч" <zorro at megatrone.ru>
> To: "Haim Ateya" <hateya at redhat.com>
> Cc: users at ovirt.org
> Sent: Sunday, May 20, 2012 12:14:43 PM
> Subject: RE: [Users] Failed to initialize storage
> 
> Host:
> 
> [root@noc-3-synt ~]# rpm -qa | grep lvm2
> lvm2-libs-2.02.87-6.el6.x86_64
> lvm2-2.02.87-6.el6.x86_64
> 
> ovirt:
> 
> [root@noc-2 ~]# rpm -qa | grep lvm2
> lvm2-libs-2.02.87-6.el6.x86_64
> lvm2-2.02.87-6.el6.x86_64
> 
> From: Haim Ateya [mailto:hateya at redhat.com]
> Sent: Sunday, May 20, 2012 8:03 AM
> To: зоррыч
> Cc: users at ovirt.org
> Subject: Re: [Users] Failed to initialize storage
> 
> Hi,
> 
> What version of lvm2 are you using?
> 
> Haim
> 
> On May 20, 2012, at 1:16, зоррыч <zorro at megatrone.ru> wrote:
> 
> Hi.
> 
> I installed the following ovirt and vdsm versions:
> 
> [root@noc-2 vds]# rpm -qa | grep ovirt-engine
> ovirt-engine-image-uploader-3.1.0_0001-1.8.el6.x86_64
> ovirt-engine-3.1.0_0001-1.8.el6.noarch
> ovirt-engine-restapi-3.1.0_0001-1.8.el6.noarch
> ovirt-engine-notification-service-3.1.0_0001-1.8.el6.noarch
> ovirt-engine-jboss-deps-3.1.0_0001-1.8.el6.noarch
> ovirt-engine-userportal-3.1.0_0001-1.8.el6.noarch
> ovirt-engine-tools-common-3.1.0_0001-1.8.el6.noarch
> ovirt-engine-setup-plugin-allinone-3.1.0_0001-1.8.el6.noarch
> ovirt-engine-jbossas-1.2-2.fc16.x86_64
> ovirt-engine-log-collector-3.1.0_0001-1.8.el6.x86_64
> ovirt-engine-setup-3.1.0_0001-1.8.el6.noarch
> ovirt-engine-iso-uploader-3.1.0_0001-1.8.el6.x86_64
> ovirt-engine-dbscripts-3.1.0_0001-1.8.el6.noarch
> ovirt-engine-sdk-1.3-1.el6.noarch
> ovirt-engine-backend-3.1.0_0001-1.8.el6.noarch
> ovirt-engine-config-3.1.0_0001-1.8.el6.noarch
> ovirt-engine-genericapi-3.1.0_0001-1.8.el6.noarch
> ovirt-engine-webadmin-portal-3.1.0_0001-1.8.el6.noarch
> 
> [root@noc-2 vds]# rpm -qa | grep vdsm
> vdsm-python-4.9.6-0.223.gitb3c6b0c.el6.x86_64
> vdsm-bootstrap-4.9.6-0.223.gitb3c6b0c.el6.noarch
> vdsm-4.9.6-0.223.gitb3c6b0c.el6.x86_64
> 
> Installing a new host succeeds, and the host then reboots.
> 
> However, after rebooting the host's status is:
> 
> Host 10.1.20.7 is initializing. Message: Failed to initialize storage
> 
> In the logs:
> 
> Engine.log:
> 
> 2012-05-19 17:36:45,183 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-88) Command org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand return value org.ovirt.engine.core.vdsbroker.vdsbroker.VDSInfoReturnForXmlRpc@6082848f
> 2012-05-19 17:36:45,183 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-88) Vds: 10.1.20.7
> 2012-05-19 17:36:45,183 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (QuartzScheduler_Worker-88) Command GetCapabilitiesVDS execution failed. Error: VDSRecoveringException: Failed to initialize storage
> 2012-05-19 17:36:47,203 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-91) Command org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand return value org.ovirt.engine.core.vdsbroker.vdsbroker.VDSInfoReturnForXmlRpc@70db29ad
> 2012-05-19 17:36:47,203 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-91) Vds: 10.1.20.7
> 2012-05-19 17:36:47,203 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (QuartzScheduler_Worker-91) Command GetCapabilitiesVDS execution failed. Error: VDSRecoveringException: Failed to initialize storage
> 
> Vdsm.log (host):
> 
> MainThread::INFO::2012-05-19 17:21:54,938::vdsm::78::vds::(run) <_MainThread(MainThread, started 140055851738880)>
> MainThread::INFO::2012-05-19 17:21:54,938::vdsm::78::vds::(run) <Thread(libvirtEventLoop, started daemon 140055763654400)>
> MainThread::INFO::2012-05-19 17:21:54,938::vdsm::78::vds::(run) <WorkerThread(Thread-5, started daemon 140055620335360)>
> MainThread::INFO::2012-05-19 17:21:54,939::vdsm::78::vds::(run) <WorkerThread(Thread-8, started daemon 140055249151744)>
> MainThread::INFO::2012-05-19 17:21:54,939::vdsm::78::vds::(run) <WorkerThread(Thread-10, started daemon 140055228172032)>
> MainThread::INFO::2012-05-19 17:21:54,939::vdsm::78::vds::(run) <KsmMonitorThread(KsmMonitor, started daemon 140054789879552)>
> MainThread::INFO::2012-05-19 17:21:54,939::vdsm::78::vds::(run) <WorkerThread(Thread-3, started daemon 140055641315072)>
> MainThread::INFO::2012-05-19 17:21:54,939::vdsm::78::vds::(run) <WorkerThread(Thread-6, started daemon 140055609845504)>
> MainThread::INFO::2012-05-19 17:21:54,940::vdsm::78::vds::(run) <WorkerThread(Thread-2, started daemon 140055651804928)>
> MainThread::INFO::2012-05-19 17:21:54,940::vdsm::78::vds::(run) <WorkerThread(Thread-1, started daemon 140055662294784)>
> MainThread::INFO::2012-05-19 17:21:54,940::vdsm::78::vds::(run) <WorkerThread(Thread-7, started daemon 140055259641600)>
> MainThread::INFO::2012-05-19 17:21:54,940::vmChannels::135::vds::(stop) VM channels listener was stopped.
> MainThread::INFO::2012-05-19 17:21:54,940::vdsm::78::vds::(run) <Listener(VM Channels Listener, started daemon 140054768899840)>
> MainThread::INFO::2012-05-19 17:21:54,940::vdsm::78::vds::(run) <WorkerThread(Thread-9, started daemon 140055238661888)>
> MainThread::INFO::2012-05-19 17:21:54,941::vdsm::78::vds::(run) <WorkerThread(Thread-4, started daemon 140055630825216)>
> MainThread::INFO::2012-05-19 17:21:54,941::vdsm::78::vds::(run) <Thread(Thread-11, started daemon 140055217682176)>
> 
> MainThread::INFO::2012-05-19 17:30:55,476::vdsm::70::vds::(run) I am the actual vdsm 4.9.6-0.223.gitb3c6b0c
> MainThread::DEBUG::2012-05-19 17:30:56,219::resourceManager::379::ResourceManager::(registerNamespace) Registering namespace 'Storage'
> MainThread::DEBUG::2012-05-19 17:30:56,220::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
> MainThread::DEBUG::2012-05-19 17:30:56,234::sp::359::Storage.StoragePool::(cleanupMasterMount) master `/rhev/data-center/mnt/blockSD/e5a63624-716e-4bb4-ae60-cd4d7aae9ed2/master` is not mounted, skipping
> MainThread::DEBUG::2012-05-19 17:30:56,302::supervdsm::103::SuperVdsmProxy::(_killSupervdsm) Could not kill old Super Vdsm [Errno 2] No such file or directory: '/var/run/vdsm/svdsm.pid'
> MainThread::DEBUG::2012-05-19 17:30:56,302::supervdsm::91::SuperVdsmProxy::(_launchSupervdsm) Launching Super Vdsm
> MainThread::DEBUG::2012-05-19 17:30:56,302::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.py 1f16883e-d8b7-45ab-b527-bcee38e5fc87 2994' (cwd None)
> MainThread::DEBUG::2012-05-19 17:30:56,529::supervdsmServer::279::SuperVdsm.Server::(main) Making sure I'm root
> MainThread::DEBUG::2012-05-19 17:30:56,529::supervdsmServer::283::SuperVdsm.Server::(main) Parsing cmd args
> MainThread::DEBUG::2012-05-19 17:30:56,529::supervdsmServer::286::SuperVdsm.Server::(main) Creating PID file
> MainThread::DEBUG::2012-05-19 17:30:56,529::supervdsmServer::290::SuperVdsm.Server::(main) Cleaning old socket
> MainThread::DEBUG::2012-05-19 17:30:56,530::supervdsmServer::294::SuperVdsm.Server::(main) Setting up keep alive thread
> MainThread::DEBUG::2012-05-19 17:30:56,530::supervdsmServer::300::SuperVdsm.Server::(main) Creating remote object manager
> MainThread::DEBUG::2012-05-19 17:30:56,531::supervdsmServer::311::SuperVdsm.Server::(main) Started serving super vdsm object
> MainThread::DEBUG::2012-05-19 17:30:58,309::supervdsm::113::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm
> MainThread::DEBUG::2012-05-19 17:30:58,314::supervdsm::84::SuperVdsmProxy::(__init__) Connected to Super Vdsm
> MainThread::DEBUG::2012-05-19 17:30:58,315::multipath::102::Storage.Multipath::(isEnabled) Current revision of multipath.conf detected, preserving
> MainThread::DEBUG::2012-05-19 17:30:58,315::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
> MainThread::DEBUG::2012-05-19 17:30:58,464::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
> MainThread::DEBUG::2012-05-19 17:30:58,465::lvm::316::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
> MainThread::DEBUG::2012-05-19 17:30:58,467::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%35000c50001770ea3%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
> MainThread::DEBUG::2012-05-19 17:30:58,613::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = " Couldn't find device with uuid jbH4vV-SWm9-NI0q-Apmd-12qW-KBPX-Rgg2lK.\n"; <rc> = 0
> MainThread::DEBUG::2012-05-19 17:30:58,615::lvm::339::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
> MainThread::DEBUG::2012-05-19 17:30:58,615::lvm::349::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
> MainThread::DEBUG::2012-05-19 17:30:58,616::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%35000c50001770ea3%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
> MainThread::DEBUG::2012-05-19 17:30:58,764::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = " Couldn't find device with uuid jbH4vV-SWm9-NI0q-Apmd-12qW-KBPX-Rgg2lK.\n"; <rc> = 0
> MainThread::DEBUG::2012-05-19 17:30:58,766::lvm::376::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
> MainThread::DEBUG::2012-05-19 17:30:58,766::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%35000c50001770ea3%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
> MainThread::DEBUG::2012-05-19 17:30:58,907::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = " Couldn't find device with uuid jbH4vV-SWm9-NI0q-Apmd-12qW-KBPX-Rgg2lK.\n"; <rc> = 0
> 
> MainThread::ERROR::2012-05-19 17:30:58,909::clientIF::201::vds::(_initIRS) Error initializing IRS
> Traceback (most recent call last):
>   File "/usr/share/vdsm/clientIF.py", line 199, in _initIRS
>     self.irs = Dispatcher(HSM())
>   File "/usr/share/vdsm/storage/hsm.py", line 300, in __init__
>     lvm._lvminfo.bootstrap()
>   File "/usr/share/vdsm/storage/lvm.py", line 309, in bootstrap
>     self._reloadAllLvs()
>   File "/usr/share/vdsm/storage/lvm.py", line 435, in _reloadAllLvs
>     lv = makeLV(*fields)
>   File "/usr/share/vdsm/storage/lvm.py", line 218, in makeLV
>     attrs = _attr2NamedTuple(args[LV._fields.index("attr")], LV_ATTR_BITS, "LV_ATTR")
>   File "/usr/share/vdsm/storage/lvm.py", line 188, in _attr2NamedTuple
>     attrs = Attrs(*values)
> TypeError: __new__() takes exactly 9 arguments (7 given)
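
The arity mismatch comes straight from the width of the lv_attr column:
judging from the counts in the traceback (namedtuple's __new__ also counts
cls), vdsm expects eight attribute characters per LV while lvm2 2.02.87
reports only six. A minimal repro sketch; the field names below are
illustrative stand-ins patterned on vdsm's LV_ATTR_BITS, not copied from it:

# Minimal repro sketch; field names are illustrative.
from collections import namedtuple

Attrs = namedtuple("LV_ATTR",
                   ("voltype", "permission", "allocations", "fixedminor",
                    "state", "devopen", "target", "zero"))

# A six-character lv_attr value such as "-wi-a-" unpacks into six
# arguments where eight are expected, so on Python 2 this raises:
#   TypeError: __new__() takes exactly 9 arguments (7 given)
Attrs(*"-wi-a-")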
> 
> MainThread::DEBUG::2012-05-19 17:30:58,918::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/pgrep -xf ksmd' (cwd None)
> MainThread::DEBUG::2012-05-19 17:30:58,945::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
> MainThread::INFO::2012-05-19 17:30:58,946::vmChannels::139::vds::(settimeout) Setting channels' timeout to 30 seconds.
> MainThread::ERROR::2012-05-19 17:30:58,951::clientIF::142::vds::(_prepareBindings) Unable to load the rest server module. Please make sure it is installed.
> VM Channels Listener::INFO::2012-05-19 17:30:58,952::vmChannels::127::vds::(run) Starting VM channels listener thread.
> 
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



