[Users] How to change storage domain ip address

Haim Ateya hateya at redhat.com
Tue May 29 19:21:56 UTC 2012



Haim

On May 29, 2012, at 21:34, T-Sinjon <tscbj1989 at gmail.com> wrote:

> After doing this, the master domain still can't be activated.

Can you provide the vdsm.log after this operation?
I will also try this case in our labs.
> 
> On 30 May, 2012, at 2:15 AM, Haim Ateya wrote:
> 
>> I'm missing connectStorageServer here (the API command that makes the connection between host and storage — in our case, a mount).
>> The following errors come from the fact that connectStoragePool is sent, but the host fails to read the domain metadata, and the function fails with an attribute error.
>> Please try the following: 
>> 
>> - put both hosts into maintenance state 
>> - activate only one of the hosts 
>> - go to the data center and activate the master domain 
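The metadata failure described above can be sketched in a few lines. This is not vdsm's actual code, just a minimal stand-in for the getMetaParam/isMaster path seen in the tracebacks below: vdsm reads the domain metadata into a dict and looks up the 'ROLE' key to decide whether the domain is the master; if the host never mounted the storage (connectStorageServer was skipped), the metadata comes back empty and the lookup raises the KeyError: 'ROLE' shown in the logs.

```python
# Minimal sketch (hypothetical, not vdsm's real implementation) of why
# connectStoragePool aborts with KeyError: 'ROLE' when the domain
# metadata cannot be read.

def get_meta_param(metadata, key):
    # stand-in for StorageDomain.getMetaParam(): a plain dict lookup,
    # which raises KeyError when the key is absent
    return metadata[key]

def is_master(metadata):
    # stand-in for StorageDomain.isMaster()
    return get_meta_param(metadata, 'ROLE') == 'Master'

healthy = {'ROLE': 'Master', 'VERSION': '1'}
unmounted = {}  # metadata unreadable -> effectively an empty dict

print(is_master(healthy))  # True
try:
    is_master(unmounted)
except KeyError as exc:
    print('fails like the log: KeyError:', exc)
```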
>> 
>> 
>> 
>> 
>> ----- Original Message -----
>>> From: "T-Sinjon" <tscbj1989 at gmail.com>
>>> To: "Haim Ateya" <hateya at redhat.com>
>>> Cc: users at ovirt.org
>>> Sent: Tuesday, May 29, 2012 9:06:30 PM
>>> Subject: Re: [Users] How to change storage domain ip address
>>> 
>>> I have 2 hosts: one is up and the other is in Non-Operational status
>>> 
>>> node1 vdsm.log:
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,927::clientIF::261::Storage.Dispatcher.Protect::(wrapper)
>>> [172.30.0.229]
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,929::task::588::TaskManager.Task::(_updateState)
>>> Task=`9e61d75e-5673-4c01-a8ad-99fc737398de`::moving from state init
>>> -> state preparing
>>> Thread-77492::INFO::2012-05-29
>>> 17:58:39,930::logUtils::37::dispatcher::(wrapper) Run and protect:
>>> getSpmStatus(spUUID='524a7003-edec-4f52-a38e-b15cadfbe3ef',
>>> options=None)
>>> Thread-77492::ERROR::2012-05-29
>>> 17:58:39,930::task::855::TaskManager.Task::(_setError)
>>> Task=`9e61d75e-5673-4c01-a8ad-99fc737398de`::Unexpected error
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/storage/task.py", line 863, in _run
>>> File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
>>> File "/usr/share/vdsm/storage/hsm.py", line 438, in getSpmStatus
>>> File "/usr/share/vdsm/storage/hsm.py", line 186, in getPool
>>> StoragePoolUnknown: Unknown pool id, pool not connected:
>>> ('524a7003-edec-4f52-a38e-b15cadfbe3ef',)
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,931::task::874::TaskManager.Task::(_run)
>>> Task=`9e61d75e-5673-4c01-a8ad-99fc737398de`::Task._run:
>>> 9e61d75e-5673-4c01-a8ad-99fc737398de
>>> ('524a7003-edec-4f52-a38e-b15cadfbe3ef',) {} failed - stopping task
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,932::task::1201::TaskManager.Task::(stop)
>>> Task=`9e61d75e-5673-4c01-a8ad-99fc737398de`::stopping in state
>>> preparing (force False)
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,933::task::980::TaskManager.Task::(_decref)
>>> Task=`9e61d75e-5673-4c01-a8ad-99fc737398de`::ref 1 aborting True
>>> Thread-77492::INFO::2012-05-29
>>> 17:58:39,933::task::1159::TaskManager.Task::(prepare)
>>> Task=`9e61d75e-5673-4c01-a8ad-99fc737398de`::aborting: Task is
>>> aborted: 'Unknown pool id, pool not connected' - code 309
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,934::task::1164::TaskManager.Task::(prepare)
>>> Task=`9e61d75e-5673-4c01-a8ad-99fc737398de`::Prepare: aborted:
>>> Unknown pool id, pool not connected
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,935::task::980::TaskManager.Task::(_decref)
>>> Task=`9e61d75e-5673-4c01-a8ad-99fc737398de`::ref 0 aborting True
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,935::task::915::TaskManager.Task::(_doAbort)
>>> Task=`9e61d75e-5673-4c01-a8ad-99fc737398de`::Task._doAbort: force
>>> False
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,936::resourceManager::841::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,937::task::588::TaskManager.Task::(_updateState)
>>> Task=`9e61d75e-5673-4c01-a8ad-99fc737398de`::moving from state
>>> preparing -> state aborting
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,937::task::537::TaskManager.Task::(__state_aborting)
>>> Task=`9e61d75e-5673-4c01-a8ad-99fc737398de`::_aborting: recover
>>> policy none
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,938::task::588::TaskManager.Task::(_updateState)
>>> Task=`9e61d75e-5673-4c01-a8ad-99fc737398de`::moving from state
>>> aborting -> state failed
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,939::resourceManager::806::ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>> Thread-77492::DEBUG::2012-05-29
>>> 17:58:39,939::resourceManager::841::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-77492::ERROR::2012-05-29
>>> 17:58:39,940::dispatcher::90::Storage.Dispatcher.Protect::(run)
>>> {'status': {'message': "Unknown pool id, pool not connected:
>>> ('524a7003-edec-4f52-a38e-b15cadfbe3ef',)", 'code': 309}}
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,964::clientIF::261::Storage.Dispatcher.Protect::(wrapper)
>>> [172.30.0.229]
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,965::task::588::TaskManager.Task::(_updateState)
>>> Task=`47053840-dcdd-4c64-8788-e730ca0e87cb`::moving from state init
>>> -> state preparing
>>> Thread-77493::INFO::2012-05-29
>>> 17:58:39,966::logUtils::37::dispatcher::(wrapper) Run and protect:
>>> connectStoragePool(spUUID='524a7003-edec-4f52-a38e-b15cadfbe3ef',
>>> hostID=1, scsiKey='524a7003-edec-4f52-a38e-b15cadfbe3ef',
>>> msdUUID='5e2ac537-6a73-4faf-8379-68f3ff26a75d', masterVersion=1,
>>> options=None)
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,967::resourceManager::175::ResourceManager.Request::(__init__)
>>> ResName=`Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef`ReqID=`64a4665d-f6e7-4f70-980d-035020fed461`::Request
>>> was made in '/usr/share/vdsm/storage/hsm.py' line '747' at
>>> '_connectStoragePool'
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,968::resourceManager::483::ResourceManager::(registerResource)
>>> Trying to register resource
>>> 'Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef' for lock type
>>> 'exclusive'
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,969::resourceManager::525::ResourceManager::(registerResource)
>>> Resource 'Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef' is free. Now
>>> locking as 'exclusive' (1 active user)
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,970::resourceManager::212::ResourceManager.Request::(grant)
>>> ResName=`Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef`ReqID=`64a4665d-f6e7-4f70-980d-035020fed461`::Granted
>>> request
>>> Thread-77493::INFO::2012-05-29
>>> 17:58:39,971::sp::608::Storage.StoragePool::(connect) Connect host
>>> #1 to the storage pool 524a7003-edec-4f52-a38e-b15cadfbe3ef with
>>> master domain: 5e2ac537-6a73-4faf-8379-68f3ff26a75d (ver = 1)
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,972::lvm::460::OperationMutex::(_invalidateAllPvs)
>>> Operation 'lvm invalidate operation' got the operation mutex
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,973::lvm::462::OperationMutex::(_invalidateAllPvs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,974::lvm::472::OperationMutex::(_invalidateAllVgs)
>>> Operation 'lvm invalidate operation' got the operation mutex
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,975::lvm::474::OperationMutex::(_invalidateAllVgs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,976::lvm::493::OperationMutex::(_invalidateAllLvs)
>>> Operation 'lvm invalidate operation' got the operation mutex
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,977::lvm::495::OperationMutex::(_invalidateAllLvs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,978::resourceManager::535::ResourceManager::(releaseResource)
>>> Trying to release resource
>>> 'Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef'
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,978::resourceManager::550::ResourceManager::(releaseResource)
>>> Released resource 'Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef' (0
>>> active users)
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,979::resourceManager::555::ResourceManager::(releaseResource)
>>> Resource 'Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef' is free,
>>> finding out if anyone is waiting for it.
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,980::resourceManager::562::ResourceManager::(releaseResource)
>>> No one is waiting for resource
>>> 'Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef', Clearing records.
>>> Thread-77493::ERROR::2012-05-29
>>> 17:58:39,981::task::855::TaskManager.Task::(_setError)
>>> Task=`47053840-dcdd-4c64-8788-e730ca0e87cb`::Unexpected error
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/storage/task.py", line 863, in _run
>>> File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
>>> File "/usr/share/vdsm/storage/hsm.py", line 721, in
>>> connectStoragePool
>>> File "/usr/share/vdsm/storage/hsm.py", line 763, in
>>> _connectStoragePool
>>> File "/usr/share/vdsm/storage/sp.py", line 624, in connect
>>> File "/usr/share/vdsm/storage/sp.py", line 1097, in __rebuild
>>> File "/usr/share/vdsm/storage/sp.py", line 1437, in getMasterDomain
>>> File "/usr/share/vdsm/storage/sd.py", line 656, in isMaster
>>> File "/usr/share/vdsm/storage/sd.py", line 616, in getMetaParam
>>> File "/usr/share/vdsm/storage/persistentDict.py", line 75, in
>>> __getitem__
>>> File "/usr/share/vdsm/storage/persistentDict.py", line 185, in
>>> __getitem__
>>> KeyError: 'ROLE'
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,981::task::874::TaskManager.Task::(_run)
>>> Task=`47053840-dcdd-4c64-8788-e730ca0e87cb`::Task._run:
>>> 47053840-dcdd-4c64-8788-e730ca0e87cb
>>> ('524a7003-edec-4f52-a38e-b15cadfbe3ef', 1,
>>> '524a7003-edec-4f52-a38e-b15cadfbe3ef',
>>> '5e2ac537-6a73-4faf-8379-68f3ff26a75d', 1) {} failed - stopping task
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,981::task::1201::TaskManager.Task::(stop)
>>> Task=`47053840-dcdd-4c64-8788-e730ca0e87cb`::stopping in state
>>> preparing (force False)
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,982::task::980::TaskManager.Task::(_decref)
>>> Task=`47053840-dcdd-4c64-8788-e730ca0e87cb`::ref 1 aborting True
>>> Thread-77493::INFO::2012-05-29
>>> 17:58:39,982::task::1159::TaskManager.Task::(prepare)
>>> Task=`47053840-dcdd-4c64-8788-e730ca0e87cb`::aborting: Task is
>>> aborted: "'ROLE'" - code 100
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,982::task::1164::TaskManager.Task::(prepare)
>>> Task=`47053840-dcdd-4c64-8788-e730ca0e87cb`::Prepare: aborted:
>>> 'ROLE'
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,982::task::980::TaskManager.Task::(_decref)
>>> Task=`47053840-dcdd-4c64-8788-e730ca0e87cb`::ref 0 aborting True
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,982::task::915::TaskManager.Task::(_doAbort)
>>> Task=`47053840-dcdd-4c64-8788-e730ca0e87cb`::Task._doAbort: force
>>> False
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,983::resourceManager::841::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,983::task::588::TaskManager.Task::(_updateState)
>>> Task=`47053840-dcdd-4c64-8788-e730ca0e87cb`::moving from state
>>> preparing -> state aborting
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,983::task::537::TaskManager.Task::(__state_aborting)
>>> Task=`47053840-dcdd-4c64-8788-e730ca0e87cb`::_aborting: recover
>>> policy none
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,983::task::588::TaskManager.Task::(_updateState)
>>> Task=`47053840-dcdd-4c64-8788-e730ca0e87cb`::moving from state
>>> aborting -> state failed
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,983::resourceManager::806::ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>> Thread-77493::DEBUG::2012-05-29
>>> 17:58:39,984::resourceManager::841::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-77493::ERROR::2012-05-29
>>> 17:58:39,984::dispatcher::93::Storage.Dispatcher.Protect::(run)
>>> 'ROLE'
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/storage/dispatcher.py", line 85, in run
>>> File "/usr/share/vdsm/storage/task.py", line 1166, in prepare
>>> KeyError: 'ROLE'
>>> 
>>> node2 vdsm.log:
>>> Thread-163903::DEBUG::2012-05-29
>>> 17:58:51,732::clientIF::76::vds::(wrapper) [172.30.0.229]::call
>>> getVdsCapabilities with () {}
>>> Thread-163903::DEBUG::2012-05-29
>>> 17:58:51,774::utils::595::Storage.Misc.excCmd::(execCmd) '/bin/rpm
>>> -q --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" qemu-kvm'
>>> (cwd None)
>>> Thread-163903::DEBUG::2012-05-29
>>> 17:58:51,816::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS:
>>> <err> = ''; <rc> = 0
>>> Thread-163903::DEBUG::2012-05-29
>>> 17:58:51,818::utils::595::Storage.Misc.excCmd::(execCmd) '/bin/rpm
>>> -q --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" qemu-img'
>>> (cwd None)
>>> Thread-163903::DEBUG::2012-05-29
>>> 17:58:51,863::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS:
>>> <err> = ''; <rc> = 0
>>> Thread-163903::DEBUG::2012-05-29
>>> 17:58:51,865::utils::595::Storage.Misc.excCmd::(execCmd) '/bin/rpm
>>> -q --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" vdsm' (cwd
>>> None)
>>> Thread-163903::DEBUG::2012-05-29
>>> 17:58:51,912::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS:
>>> <err> = ''; <rc> = 0
>>> Thread-163903::DEBUG::2012-05-29
>>> 17:58:51,913::utils::595::Storage.Misc.excCmd::(execCmd) '/bin/rpm
>>> -q --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n"
>>> spice-server' (cwd None)
>>> Thread-163903::DEBUG::2012-05-29
>>> 17:58:51,948::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS:
>>> <err> = ''; <rc> = 0
>>> Thread-163903::DEBUG::2012-05-29
>>> 17:58:51,950::utils::595::Storage.Misc.excCmd::(execCmd) '/bin/rpm
>>> -q --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" libvirt'
>>> (cwd None)
>>> Thread-163903::DEBUG::2012-05-29
>>> 17:58:51,989::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS:
>>> <err> = ''; <rc> = 0
>>> Thread-163903::DEBUG::2012-05-29
>>> 17:58:51,992::clientIF::81::vds::(wrapper) return getVdsCapabilities
>>> with {'status': {'message': 'Done', 'code': 0}, 'info':
>>> {'HBAInventory': {'iSCSI': [{'InitiatorName':
>>> 'iqn.1994-05.com.redhat:80e221f0efc2'}], 'FC': []}, 'packages2':
>>> {'kernel': {'release': '2.fc16.x86_64', 'buildtime': 1328299688.0,
>>> 'version': '3.2.3'}, 'spice-server': {'release': '1.fc16',
>>> 'buildtime': '1327339129', 'version': '0.10.1'}, 'vdsm': {'release':
>>> '0.fc16', 'buildtime': '1327521056', 'version': '4.9.3.2'},
>>> 'qemu-kvm': {'release': '3.fc16', 'buildtime': '1321651456',
>>> 'version': '0.15.1'}, 'libvirt': {'release': '4.fc16', 'buildtime':
>>> '1324326688', 'version': '0.9.6'}, 'qemu-img': {'release': '3.fc16',
>>> 'buildtime': '1321651456', 'version': '0.15.1'}}, 'cpuModel':
>>> 'Six-Core AMD Opteron(tm) Processor 2435', 'hooks': {}, 'vmTypes':
>>> ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks':
>>> {'ovirtmgmt': {'addr': 'xxx.xxx.xxx.xxx', 'cfg': {'IPV6FORWARDING':
>>> 'no', 'IPV6INIT': 'no', 'SKIPLIBVIRT': 'True', 'IPADDR':
>>> 'xxx.xxx.xxx.xxx', 'PEERDNS': 'no', 'GATEWAY': 'xxx.xxx.xxx.xxx',
>>> 'DELAY': '0', 'IPV6_AUTOCONF': 'no', 'NETMASK': '255.255.254.0',
>>> 'BOOTPROTO': 'static', 'DEVICE': 'ovirtmgmt', 'PEERNTP': 'yes',
>>> 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ports': ['eth0'], 'netmask':
>>> 'xxx.xxx.xxx.xxx', 'stp': 'off', 'gateway': 'xxx.xxx.xxx.xxx'}, 'em2':
>>> {'addr': 'xxx.xx.xxx.xxx', 'cfg': {'IPADDR':
>>> 'xxx.xxx.xxx.xxx', 'DELAY': '0', 'NETMASK': '255.255.255.0', 'STP':
>>> 'no', 'DEVICE': 'em2', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ports':
>>> ['eth1'], 'netmask': '255.255.255.0', 'stp': 'off', 'gateway':
>>> '0.0.0.0'}}, 'uuid':
>>> '44454C4C-5900-105A-804B-B7C04F563258_00:1D:09:FD:8B:80',
>>> 'lastClientIface': 'ovirtmgmt', 'nics': {'eth1': {'hwaddr':
>>> '00:1D:09:FD:8B:82', 'netmask': '', 'speed': 1000, 'addr': ''},
>>> 'eth0': {'hwaddr': '00:1D:09:FD:8B:80', 'netmask': '', 'speed':
>>> 1000, 'addr': ''}}, 'software_revision': '0', 'management_ip': '',
>>> 'clusterLevels': ['3.0'], 'cpuFlags':
>>> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,npt,lbrv,svm_lock,nrip_save,pausefilter,model_486,model_pentium,model_pentium2,model_pentium3,model_pentiumpro,model_qemu32,model_coreduo,model_qemu64,model_phenom,model_athlon,model_Opteron_G1,model_Opteron_G2,model_Opteron_G3',
>>> 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:80e221f0efc2',
>>> 'memSize': '32109', 'reservedMem': '321', 'bondings': {'bond4':
>>> {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr':
>>> '', 'slaves': []}, 'bond0': {'hwaddr': '00:00:00:00:00:00', 'cfg':
>>> {}, 'netmask': '', 'addr': '', 'slaves': []}, 'bond1': {'hwaddr':
>>> '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves':
>>> []}, 'bond2': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask':
>>> '', 'addr': '', 'slaves': []}, 'bond3': {'hwaddr':
>>> '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves':
>>> []}}, 'software_version': '4.9', 'cpuSpeed': '800.000',
>>> 'cpuSockets': '2', 'vlans': {}, 'cpuCores': '12', 'kvmEnabled':
>>> 'true', 'guestOverhead': '65', 'supportedRHEVMs': ['3.0'],
>>> 'version_name': 'Snow Man', 'emulatedMachines': [u'pc-0.14', u'pc',
>>> u'fedora-13', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10',
>>> u'isapc', u'pc-0.14', u'pc', u'fedora-13', u'pc-0.13', u'pc-0.12',
>>> u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem': {'release':
>>> '1', 'version': '16', 'name': 'oVirt Node'}, 'lastClient':
>>> '172.30.0.229'}}
>>> Thread-163904::DEBUG::2012-05-29
>>> 17:58:52,549::clientIF::76::vds::(wrapper) [172.30.0.229]::call
>>> getVdsCapabilities with () {}
>>> Thread-163904::DEBUG::2012-05-29
>>> 17:58:52,586::utils::595::Storage.Misc.excCmd::(execCmd) '/bin/rpm
>>> -q --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" qemu-kvm'
>>> (cwd None)
>>> Thread-163904::DEBUG::2012-05-29
>>> 17:58:52,633::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS:
>>> <err> = ''; <rc> = 0
>>> Thread-163904::DEBUG::2012-05-29
>>> 17:58:52,635::utils::595::Storage.Misc.excCmd::(execCmd) '/bin/rpm
>>> -q --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" qemu-img'
>>> (cwd None)
>>> Thread-163904::DEBUG::2012-05-29
>>> 17:58:52,682::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS:
>>> <err> = ''; <rc> = 0
>>> Thread-163904::DEBUG::2012-05-29
>>> 17:58:52,684::utils::595::Storage.Misc.excCmd::(execCmd) '/bin/rpm
>>> -q --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" vdsm' (cwd
>>> None)
>>> Thread-163904::DEBUG::2012-05-29
>>> 17:58:52,725::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS:
>>> <err> = ''; <rc> = 0
>>> Thread-163904::DEBUG::2012-05-29
>>> 17:58:52,726::utils::595::Storage.Misc.excCmd::(execCmd) '/bin/rpm
>>> -q --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n"
>>> spice-server' (cwd None)
>>> Thread-163904::DEBUG::2012-05-29
>>> 17:58:52,773::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS:
>>> <err> = ''; <rc> = 0
>>> Thread-163904::DEBUG::2012-05-29
>>> 17:58:52,774::utils::595::Storage.Misc.excCmd::(execCmd) '/bin/rpm
>>> -q --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" libvirt'
>>> (cwd None)
>>> Thread-163904::DEBUG::2012-05-29
>>> 17:58:52,823::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS:
>>> <err> = ''; <rc> = 0
>>> Thread-163904::DEBUG::2012-05-29
>>> 17:58:52,826::clientIF::81::vds::(wrapper) return getVdsCapabilities
>>> with {'status': {'message': 'Done', 'code': 0}, 'info':
>>> {'HBAInventory': {'iSCSI': [{'InitiatorName':
>>> 'iqn.1994-05.com.redhat:80e221f0efc2'}], 'FC': []}, 'packages2':
>>> {'kernel': {'release': '2.fc16.x86_64', 'buildtime': 1328299688.0,
>>> 'version': '3.2.3'}, 'spice-server': {'release': '1.fc16',
>>> 'buildtime': '1327339129', 'version': '0.10.1'}, 'vdsm': {'release':
>>> '0.fc16', 'buildtime': '1327521056', 'version': '4.9.3.2'},
>>> 'qemu-kvm': {'release': '3.fc16', 'buildtime': '1321651456',
>>> 'version': '0.15.1'}, 'libvirt': {'release': '4.fc16', 'buildtime':
>>> '1324326688', 'version': '0.9.6'}, 'qemu-img': {'release': '3.fc16',
>>> 'buildtime': '1321651456', 'version': '0.15.1'}}, 'cpuModel':
>>> 'Six-Core AMD Opteron(tm) Processor 2435', 'hooks': {}, 'vmTypes':
>>> ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks':
>>> {'ovirtmgmt': {'addr': 'xxx.xxx.xxx.xxx', 'cfg': {'IPV6FORWARDING':
>>> 'no', 'IPV6INIT': 'no', 'SKIPLIBVIRT': 'True', 'IPADDR':
>>> 'xxx.xxx.xxx.xxx', 'PEERDNS': 'no', 'GATEWAY': 'xxx.xxx.xxx.xxx',
>>> 'DELAY': '0', 'IPV6_AUTOCONF': 'no', 'NETMASK': '255.255.254.0',
>>> 'BOOTPROTO': 'static', 'DEVICE': 'ovirtmgmt', 'PEERNTP': 'yes',
>>> 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ports': ['eth0'], 'netmask':
>>> '255.255.254.0', 'stp': 'off', 'gateway': 'xxx.xxx.xxx.xxx'}, 'em2':
>>> {'addr': 'xxx.xxx.xxx.xxx', 'cfg': {'IPADDR': 'xxx.xxx.xxx.xxx',
>>> 'DELAY': '0', 'NETMASK': '255.255.255.0', 'STP': 'no', 'DEVICE':
>>> 'em2', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ports': ['eth1'],
>>> 'netmask': '255.255.255.0', 'stp': 'off', 'gateway': '0.0.0.0'}},
>>> 'uuid': '44454C4C-5900-105A-804B-B7C04F563258_00:1D:09:FD:8B:80',
>>> 'lastClientIface': 'ovirtmgmt', 'nics': {'eth1': {'hwaddr':
>>> '00:1D:09:FD:8B:82', 'netmask': '', 'speed': 1000, 'addr': ''},
>>> 'eth0': {'hwaddr': '00:1D:09:FD:8B:80', 'netmask': '', 'speed':
>>> 1000, 'addr': ''}}, 'software_revision': '0', 'management_ip': '',
>>> 'clusterLevels': ['3.0'], 'cpuFlags':
>>> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,npt,lbrv,svm_lock,nrip_save,pausefilter,model_486,model_pentium,model_pentium2,model_pentium3,model_pentiumpro,model_qemu32,model_coreduo,model_qemu64,model_phenom,model_athlon,model_Opteron_G1,model_Opteron_G2,model_Opteron_G3',
>>> 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:80e221f0efc2',
>>> 'memSize': '32109', 'reservedMem': '321', 'bondings': {'bond4':
>>> {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr':
>>> '', 'slaves': []}, 'bond0': {'hwaddr': '00:00:00:00:00:00', 'cfg':
>>> {}, 'netmask': '', 'addr': '', 'slaves': []}, 'bond1': {'hwaddr':
>>> '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves':
>>> []}, 'bond2': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask':
>>> '', 'addr': '', 'slaves': []}, 'bond3': {'hwaddr':
>>> '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves':
>>> []}}, 'software_version': '4.9', 'cpuSpeed': '800.000',
>>> 'cpuSockets': '2', 'vlans': {}, 'cpuCores': '12', 'kvmEnabled':
>>> 'true', 'guestOverhead': '65', 'supportedRHEVMs': ['3.0'],
>>> 'version_name': 'Snow Man', 'emulatedMachines': [u'pc-0.14', u'pc',
>>> u'fedora-13', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10',
>>> u'isapc', u'pc-0.14', u'pc', u'fedora-13', u'pc-0.13', u'pc-0.12',
>>> u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem': {'release':
>>> '1', 'version': '16', 'name': 'oVirt Node'}, 'lastClient':
>>> '172.30.0.229'}}
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,014::clientIF::261::Storage.Dispatcher.Protect::(wrapper)
>>> [172.30.0.229]
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,015::task::588::TaskManager.Task::(_updateState)
>>> Task=`844eca1f-ec6f-4e3b-ad97-e31939cb96d3`::moving from state init
>>> -> state preparing
>>> Thread-163906::INFO::2012-05-29
>>> 17:58:53,016::logUtils::37::dispatcher::(wrapper) Run and protect:
>>> connectStoragePool(spUUID='524a7003-edec-4f52-a38e-b15cadfbe3ef',
>>> hostID=2, scsiKey='524a7003-edec-4f52-a38e-b15cadfbe3ef',
>>> msdUUID='5e2ac537-6a73-4faf-8379-68f3ff26a75d', masterVersion=1,
>>> options=None)
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,018::resourceManager::175::ResourceManager.Request::(__init__)
>>> ResName=`Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef`ReqID=`466be789-4790-457e-a752-751b4c75f9b4`::Request
>>> was made in '/usr/share/vdsm/storage/hsm.py' line '747' at
>>> '_connectStoragePool'
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,018::resourceManager::483::ResourceManager::(registerResource)
>>> Trying to register resource
>>> 'Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef' for lock type
>>> 'exclusive'
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,019::resourceManager::525::ResourceManager::(registerResource)
>>> Resource 'Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef' is free. Now
>>> locking as 'exclusive' (1 active user)
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,020::resourceManager::212::ResourceManager.Request::(grant)
>>> ResName=`Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef`ReqID=`466be789-4790-457e-a752-751b4c75f9b4`::Granted
>>> request
>>> Thread-163906::INFO::2012-05-29
>>> 17:58:53,021::sp::608::Storage.StoragePool::(connect) Connect host
>>> #2 to the storage pool 524a7003-edec-4f52-a38e-b15cadfbe3ef with
>>> master domain: 5e2ac537-6a73-4faf-8379-68f3ff26a75d (ver = 1)
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,021::lvm::460::OperationMutex::(_invalidateAllPvs)
>>> Operation 'lvm invalidate operation' got the operation mutex
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,021::lvm::462::OperationMutex::(_invalidateAllPvs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,022::lvm::472::OperationMutex::(_invalidateAllVgs)
>>> Operation 'lvm invalidate operation' got the operation mutex
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,022::lvm::474::OperationMutex::(_invalidateAllVgs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,022::lvm::493::OperationMutex::(_invalidateAllLvs)
>>> Operation 'lvm invalidate operation' got the operation mutex
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,022::lvm::495::OperationMutex::(_invalidateAllLvs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,023::resourceManager::535::ResourceManager::(releaseResource)
>>> Trying to release resource
>>> 'Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef'
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,023::resourceManager::550::ResourceManager::(releaseResource)
>>> Released resource 'Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef' (0
>>> active users)
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,023::resourceManager::555::ResourceManager::(releaseResource)
>>> Resource 'Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef' is free,
>>> finding out if anyone is waiting for it.
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,023::resourceManager::562::ResourceManager::(releaseResource)
>>> No one is waiting for resource
>>> 'Storage.524a7003-edec-4f52-a38e-b15cadfbe3ef', Clearing records.
>>> Thread-163906::ERROR::2012-05-29
>>> 17:58:53,024::task::855::TaskManager.Task::(_setError)
>>> Task=`844eca1f-ec6f-4e3b-ad97-e31939cb96d3`::Unexpected error
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/storage/task.py", line 863, in _run
>>> File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
>>> File "/usr/share/vdsm/storage/hsm.py", line 721, in
>>> connectStoragePool
>>> File "/usr/share/vdsm/storage/hsm.py", line 763, in
>>> _connectStoragePool
>>> File "/usr/share/vdsm/storage/sp.py", line 624, in connect
>>> File "/usr/share/vdsm/storage/sp.py", line 1097, in __rebuild
>>> File "/usr/share/vdsm/storage/sp.py", line 1437, in getMasterDomain
>>> File "/usr/share/vdsm/storage/sd.py", line 656, in isMaster
>>> File "/usr/share/vdsm/storage/sd.py", line 616, in getMetaParam
>>> File "/usr/share/vdsm/storage/persistentDict.py", line 75, in
>>> __getitem__
>>> File "/usr/share/vdsm/storage/persistentDict.py", line 185, in
>>> __getitem__
>>> KeyError: 'ROLE'
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,024::task::874::TaskManager.Task::(_run)
>>> Task=`844eca1f-ec6f-4e3b-ad97-e31939cb96d3`::Task._run:
>>> 844eca1f-ec6f-4e3b-ad97-e31939cb96d3
>>> ('524a7003-edec-4f52-a38e-b15cadfbe3ef', 2,
>>> '524a7003-edec-4f52-a38e-b15cadfbe3ef',
>>> '5e2ac537-6a73-4faf-8379-68f3ff26a75d', 1) {} failed - stopping task
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,024::task::1201::TaskManager.Task::(stop)
>>> Task=`844eca1f-ec6f-4e3b-ad97-e31939cb96d3`::stopping in state
>>> preparing (force False)
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,024::task::980::TaskManager.Task::(_decref)
>>> Task=`844eca1f-ec6f-4e3b-ad97-e31939cb96d3`::ref 1 aborting True
>>> Thread-163906::INFO::2012-05-29
>>> 17:58:53,025::task::1159::TaskManager.Task::(prepare)
>>> Task=`844eca1f-ec6f-4e3b-ad97-e31939cb96d3`::aborting: Task is
>>> aborted: "'ROLE'" - code 100
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,025::task::1164::TaskManager.Task::(prepare)
>>> Task=`844eca1f-ec6f-4e3b-ad97-e31939cb96d3`::Prepare: aborted:
>>> 'ROLE'
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,025::task::980::TaskManager.Task::(_decref)
>>> Task=`844eca1f-ec6f-4e3b-ad97-e31939cb96d3`::ref 0 aborting True
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,025::task::915::TaskManager.Task::(_doAbort)
>>> Task=`844eca1f-ec6f-4e3b-ad97-e31939cb96d3`::Task._doAbort: force
>>> False
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,025::resourceManager::841::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,026::task::588::TaskManager.Task::(_updateState)
>>> Task=`844eca1f-ec6f-4e3b-ad97-e31939cb96d3`::moving from state
>>> preparing -> state aborting
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,026::task::537::TaskManager.Task::(__state_aborting)
>>> Task=`844eca1f-ec6f-4e3b-ad97-e31939cb96d3`::_aborting: recover
>>> policy none
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,026::task::588::TaskManager.Task::(_updateState)
>>> Task=`844eca1f-ec6f-4e3b-ad97-e31939cb96d3`::moving from state
>>> aborting -> state failed
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,026::resourceManager::806::ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>> Thread-163906::DEBUG::2012-05-29
>>> 17:58:53,026::resourceManager::841::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-163906::ERROR::2012-05-29
>>> 17:58:53,026::dispatcher::93::Storage.Dispatcher.Protect::(run)
>>> 'ROLE'
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/storage/dispatcher.py", line 85, in run
>>> File "/usr/share/vdsm/storage/task.py", line 1166, in prepare
>>> KeyError: 'ROLE'
>>> 
>>> On 30 May, 2012, at 1:34 AM, Haim Ateya wrote:
>>> 
>>>> 
>>>> 
>>>> ----- Original Message -----
>>>>> From: "T-Sinjon" <tscbj1989 at gmail.com>
>>>>> To: "Haim Ateya" <hateya at redhat.com>
>>>>> Cc: users at ovirt.org
>>>>> Sent: Tuesday, May 29, 2012 8:31:01 PM
>>>>> Subject: Re: [Users] How to change storage domain ip address
>>>>> 
>>>>> I guess you mean engine.log, because I can't find any new log
>>>>> entries when I do this action.
>>>> 
>>>> vdsm.log can be found on your host (hypervisor) under
>>>> /var/log/vdsm/vdsm.log.
>>>> How many hosts do you have in your pool? What's the status of the
>>>> hosts?
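When digging through a vdsm.log like the ones pasted above, something like the following filter helps isolate the failing task. This is a hypothetical helper, not a vdsm tool; the sample text stands in for /var/log/vdsm/vdsm.log so the sketch is self-contained, and on a real host you would read that file instead.

```python
# Hypothetical helper: pull the ERROR lines out of vdsm.log output.
# The sample below mimics the log format in this thread.

SAMPLE_LOG = """\
Thread-77493::INFO::2012-05-29 17:58:39,966::logUtils::37::dispatcher::(wrapper) Run and protect: connectStoragePool(...)
Thread-77493::ERROR::2012-05-29 17:58:39,981::task::855::TaskManager.Task::(_setError) Unexpected error
Thread-77493::DEBUG::2012-05-29 17:58:39,982::task::980::TaskManager.Task::(_decref) ref 1 aborting True
"""

def error_lines(log_text):
    # vdsm log lines embed the level (DEBUG/INFO/ERROR) between '::' separators
    return [line for line in log_text.splitlines() if '::ERROR::' in line]

for line in error_lines(SAMPLE_LOG):
    print(line)
```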
>>>> 
>>>>> 
>>>>> here is the full log:
>>>>> 2012-05-30 01:28:24,852 INFO
>>>>> [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
>>>>> (pool-5-thread-44) [18001bda] Lock Acquired to object EngineLock
>>>>> [exclusiveLocks= key:
>>>>> org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand
>>>>> value: 5e2ac537-6a73-4faf-8379-68f3ff26a75d
>>>>> , sharedLocks= ]
>>>>> 2012-05-30 01:28:24,864 INFO
>>>>> [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
>>>>> (pool-5-thread-44) [18001bda] Running command:
>>>>> ActivateStorageDomainCommand internal: false. Entities affected :
>>>>> ID: 5e2ac537-6a73-4faf-8379-68f3ff26a75d Type: Storage
>>>>> 2012-05-30 01:28:24,876 INFO
>>>>> [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
>>>>> (pool-5-thread-44) [18001bda] Lock freed to object EngineLock
>>>>> [exclusiveLocks= key:
>>>>> org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand
>>>>> value: 5e2ac537-6a73-4faf-8379-68f3ff26a75d
>>>>> , sharedLocks= ]
>>>>> 2012-05-30 01:28:24,876 INFO
>>>>> [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
>>>>> (pool-5-thread-44) [18001bda] ActivateStorage Domain. Before
>>>>> Connect
>>>>> all hosts to pool. Time:5/30/12 1:28 AM
>>>>> 2012-05-30 01:28:24,901 INFO
>>>>> [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
>>>>> (pool-5-thread-44) [18001bda] ActivateStorage Domain. After
>>>>> Connect
>>>>> all hosts to pool. Time:5/30/12 1:28 AM
>>>>> 2012-05-30 01:28:24,902 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
>>>>> (pool-5-thread-44) [18001bda] START,
>>>>> ActivateStorageDomainVDSCommand(storagePoolId =
>>>>> 524a7003-edec-4f52-a38e-b15cadfbe3ef, ignoreFailoverLimit = false,
>>>>> compatabilityVersion = null, storageDomainId =
>>>>> 5e2ac537-6a73-4faf-8379-68f3ff26a75d), log id: 49e134ff
>>>>> 2012-05-30 01:28:24,906 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
>>>>> (pool-5-thread-44) [18001bda] FINISH,
>>>>> ActivateStorageDomainVDSCommand, log id: 49e134ff
>>>>> 2012-05-30 01:28:24,907 ERROR
>>>>> [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
>>>>> (pool-5-thread-44) [18001bda] Command
>>>>> org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand
>>>>> throw
>>>>> Vdc Bll exception. With error message VdcBLLException: Cannot
>>>>> allocate IRS server
>>>>> 2012-05-30 01:28:24,914 INFO
>>>>> [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
>>>>> (pool-5-thread-44) [18001bda] Command
>>>>> [id=3c916f08-7fb3-44ea-882e-4f56bc9716a2]: Compensating
>>>>> CHANGED_STATUS_ONLY of
>>>>> org.ovirt.engine.core.common.businessentities.storage_pool_iso_map;
>>>>> snapshot: EntityStatusSnapshot [id=storagePoolId =
>>>>> 524a7003-edec-4f52-a38e-b15cadfbe3ef, storageId =
>>>>> 5e2ac537-6a73-4faf-8379-68f3ff26a75d, status=Maintenance]
>>>>> 
>>>>> On 30 May, 2012, at 1:14 AM, Haim Ateya wrote:
>>>>> 
>>>>>> 
>>>>>> 
>>>>>> ----- Original Message -----
>>>>>>> From: "T-Sinjon" <tscbj1989 at gmail.com>
>>>>>>> To: "Haim Ateya" <hateya at redhat.com>
>>>>>>> Cc: users at ovirt.org
>>>>>>> Sent: Tuesday, May 29, 2012 8:09:38 PM
>>>>>>> Subject: Re: [Users] How to change storage domain ip address
>>>>>>> 
>>>>>>> After I updated PostgreSQL, the IP changed correctly.
>>>>>>> Then I tried to activate my VMDomain, but it throws this error:
>>>>>>> 
>>>>>>> 2012-05-30 01:05:39,699 ERROR
>>>>>>> [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
>>>>>>> (pool-5-thread-46) [277fd6c5] Command
>>>>>>> org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand
>>>>>>> throw
>>>>>>> Vdc Bll exception. With error message VdcBLLException: Cannot
>>>>>>> allocate IRS server
>>>>>>> 
>>>>>>> What does "IRS server" mean, and how do I resolve this?
>>>>>> 
>>>>>> IRS = Image Repository Server.
>>>>>> 
>>>>>> Please attach the full vdsm.log so we can examine the
>>>>>> connectStorageServer command and the corresponding mount point.
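[Editor's note] Before attaching the log, something like the following can pull out the relevant entries. This is a sketch; it assumes the plain-text vdsm.log format shown earlier in this thread, and the sample line below is fabricated for illustration:

```python
# Sketch: filter vdsm.log lines mentioning connectStorageServer so the
# mount arguments (old vs. new NFS address) are easy to spot.

def find_connect_lines(log_text):
    """Return log lines that mention connectStorageServer."""
    return [line for line in log_text.splitlines()
            if "connectStorageServer" in line]

# Fabricated sample in the vdsm.log style seen above in this thread.
sample = (
    "Thread-1::INFO::2012-05-29 17:58:39,930::logUtils::37::dispatcher::"
    "(wrapper) Run and protect: connectStorageServer(domType=1, "
    "conList=[{'connection': '172.16.0.10:/export/vmdomain'}])\n"
    "Thread-2::DEBUG::2012-05-29 17:58:39,931::task::588::"
    "TaskManager.Task::(_updateState) unrelated line"
)
for line in find_connect_lines(sample):
    print(line)
```

In practice you would read `/var/log/vdsm/vdsm.log` into `log_text` instead of the sample string.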
>>>>>>> 
>>>>>>> On 30 May, 2012, at 12:14 AM, Haim Ateya wrote:
>>>>>>> 
>>>>>>>> I'm not familiar with a conventional way of making such a
>>>>>>>> change; the only way I can think of is altering the
>>>>>>>> storage-related tables in the database.
>>>>>>>> I would start with the following table:
>>>>>>>> 
>>>>>>>> SELECT * from storage_server_connections;
>>>>>>>> 
>>>>>>>> then write a query that replaces the current IP address with
>>>>>>>> the new one.
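[Editor's note] As a sketch of that follow-up query: the column name `connection` below is an assumption about the oVirt engine schema, so verify it against your own `SELECT` output and back up the database before changing anything. The string rewrite itself looks like this:

```python
# Sketch only: swap the host part of an "ip:/path" NFS connection string.
# The SQL at the bottom assumes storage_server_connections has a
# 'connection' column holding "ip:/path" -- verify with the SELECT first
# and take a database backup before running any UPDATE.

def rewrite_connection(conn, old_ip, new_ip):
    """Replace the host part of an "ip:/path" connection string."""
    host, sep, path = conn.partition(":")
    if host != old_ip:
        return conn                      # leave unrelated rows untouched
    return new_ip + sep + path

assert rewrite_connection("192.168.0.5:/export/vmdomain",
                          "192.168.0.5",
                          "172.16.0.5") == "172.16.0.5:/export/vmdomain"

# Equivalent SQL (hypothetical column name, same caveats):
# UPDATE storage_server_connections
#    SET connection = replace(connection, '192.168.0.5', '172.16.0.5')
#  WHERE connection LIKE '192.168.0.5:%';
```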
>>>>>>>> 
>>>>>>>> Haim
>>>>>>>> 
>>>>>>>> 
>>>>>>>> ----- Original Message -----
>>>>>>>>> From: "T-Sinjon" <tscbj1989 at gmail.com>
>>>>>>>>> To: users at ovirt.org
>>>>>>>>> Sent: Tuesday, May 29, 2012 6:47:20 PM
>>>>>>>>> Subject: [Users] How to change storage domain ip address
>>>>>>>>> 
>>>>>>>>> For some reason, the IP address of my NFS storage domain
>>>>>>>>> server has changed from 192.168.x.x to 172.16.x.x, and my
>>>>>>>>> VMDomain became inactive.
>>>>>>>>> 
>>>>>>>>> The VMDomain NFS Export Path should change to
>>>>>>>>> 172.16.x.x:/Path/To/VMDomain. Where can I change this so the
>>>>>>>>> domain becomes active again?
>>>>>>>>> _______________________________________________
>>>>>>>>> Users mailing list
>>>>>>>>> Users at ovirt.org
>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>> 
>>>>> 
>>> 
>>> 
> 