Re: [Users] could not add local storage domain

2012-11-03 19:19:22::DEBUG::engine-setup::1747::root:: storage-path: /data/ovirt/vdsm
2012-11-03 19:19:22::DEBUG::engine-setup::1747::root:: superuser-pass: ********
2012-11-03 19:19:22::ERROR::engine-setup::2376::root:: Traceback (most recent call last):
  File "/bin/engine-setup", line 2370, in <module>
    main(confFile)
  File "/bin/engine-setup", line 2159, in main
    runSequences()
  File "/bin/engine-setup", line 2105, in runSequences
    controller.runAllSequences()
  File "/usr/share/ovirt-engine/scripts/setup_controller.py", line 54, in runAllSequences
    sequence.run()
  File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 154, in run
    step.run()
  File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
    function()
  File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 290, in addStorageDomain
    raise Exception(ERROR_ADD_LOCAL_DOMAIN)
Exception: Error: could not add local storage domain
The message "XMLSyntaxError: Space required after the Public Identifier, line 1, column 47" also looks somewhat strange to me.
Any hint as to what causes this error?
Thanks,
Christian
P.S.: The installation failed several times before that, until I figured out that engine-setup needs to log in via ssh; we had configured sshd to allow only public-key authentication, and this raised an error.
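For anyone hitting the same thing: the sshd settings that broke engine-setup's login looked roughly like this (reconstructed from memory, so treat it as illustrative rather than a verbatim copy of our config; the path is the Fedora default):

```
# /etc/ssh/sshd_config -- what we had when engine-setup failed:
PubkeyAuthentication yes
PasswordAuthentication no     # engine-setup could not authenticate with this

# temporarily allowing password auth let the setup proceed:
PasswordAuthentication yes
```

After changing it we restarted sshd (service sshd restart) before re-running engine-setup.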
Did this get resolved?
-
I'm not the original submitter of this issue, but I have exactly the same problem with the latest nightly all-in-one installation. We don't use public key auth for sshd on this machine so that's not the problem. This is what I see in the vdsm.log: MainThread::INFO::2012-11-14 12:45:51,444::vdsm::88::vds::(run) I am the actual vdsm 4.10-0.129 demo.netbulae.eu (3.6.6-1.fc17.x86_64) MainThread::DEBUG::2012-11-14 12:45:51,812::resourceManager::379::ResourceManager::(registerNamespace) Registering namespace 'Storage' MainThread::DEBUG::2012-11-14 12:45:51,813::threadPool::33::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 MainThread::DEBUG::2012-11-14 12:45:51,856::multipath::115::Storage.Multipath::(isEnabled) multipath Defaulting to False MainThread::DEBUG::2012-11-14 12:45:51,857::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /bin/cp /tmp/tmpVVMg7O /etc/multipath.conf' (cwd None) MainThread::DEBUG::2012-11-14 12:45:51,942::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 MainThread::DEBUG::2012-11-14 12:45:51,944::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/multipath -F' (cwd None) MainThread::DEBUG::2012-11-14 12:45:51,975::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = ''; <rc> = 1 MainThread::DEBUG::2012-11-14 12:45:51,976::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/service multipathd restart' (cwd None) MainThread::DEBUG::2012-11-14 12:45:52,179::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = 'Redirecting to /bin/systemctl restart multipathd.service\n'; <rc> = 0 MainThread::DEBUG::2012-11-14 12:45:52,179::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None) MainThread::DEBUG::2012-11-14 12:45:52,241::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 MainThread::DEBUG::2012-11-14 12:45:52,242::hsm::407::Storage.HSM::(__cleanStorageRepository) Started 
cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2012-11-14 12:45:52,253::hsm::439::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
MainThread::DEBUG::2012-11-14 12:45:52,253::hsm::440::Storage.HSM::(__cleanStorageRepository) Mount list: []
MainThread::DEBUG::2012-11-14 12:45:52,254::hsm::442::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
MainThread::DEBUG::2012-11-14 12:45:52,258::hsm::485::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
Thread-12::DEBUG::2012-11-14 12:45:52,259::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
MainThread::INFO::2012-11-14 12:45:52,260::dispatcher::95::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
Thread-12::DEBUG::2012-11-14 12:45:52,266::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
MainThread::WARNING::2012-11-14 12:45:52,300::clientIF::197::vds::(_prepareMOM) MOM initialization failed and fall back to KsmMonitor
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 194, in _prepareMOM
    self.mom = MomThread(momconf)
  File "/usr/share/vdsm/momIF.py", line 34, in __init__
    raise Exception("MOM is not available")
Exception: MOM is not available
MainThread::DEBUG::2012-11-14 12:45:52,304::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/pgrep -xf ksmd' (cwd None)
Thread-12::DEBUG::2012-11-14
12:45:52,340::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-12::DEBUG::2012-11-14 12:45:52,341::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14 12:45:52,342::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-12::DEBUG::2012-11-14 12:45:52,343::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
MainThread::DEBUG::2012-11-14 12:45:52,353::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
MainThread::INFO::2012-11-14 12:45:52,354::ksm::43::vds::(__init__) starting ksm monitor thread, ksm pid is 40
KsmMonitor::DEBUG::2012-11-14 12:45:52,355::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/service ksmtuned start' (cwd None)
MainThread::INFO::2012-11-14 12:45:52,367::vmChannels::139::vds::(settimeout) Setting channels' timeout to 30 seconds.
VM Channels Listener::INFO::2012-11-14 12:45:52,368::vmChannels::127::vds::(run) Starting VM channels listener thread.
MainThread::WARNING::2012-11-14 12:45:52,375::clientIF::182::vds::(_prepareBindings) Unable to load the rest server module. Please make sure it is installed.
MainThread::WARNING::2012-11-14 12:45:52,376::clientIF::188::vds::(_prepareBindings) Unable to load the json rpc server module. Please make sure it is installed.
Thread-12::DEBUG::2012-11-14 12:45:52,398::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14 12:45:52,399::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14 12:45:52,401::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
KsmMonitor::DEBUG::2012-11-14 12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = 'Redirecting to /bin/systemctl start ksmtuned.service\n'; <rc> = 0
KsmMonitor::DEBUG::2012-11-14 12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/service ksm start' (cwd None)
Thread-12::DEBUG::2012-11-14 12:45:52,457::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14 12:45:52,458::misc::1026::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-12::DEBUG::2012-11-14 12:45:52,458::misc::1028::SamplingMethod::(__call__) Got in to sampling method
Thread-12::DEBUG::2012-11-14 12:45:52,458::misc::1026::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-12::DEBUG::2012-11-14 12:45:52,458::misc::1028::SamplingMethod::(__call__) Got in to sampling method
Thread-12::DEBUG::2012-11-14 12:45:52,458::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
Thread-12::DEBUG::2012-11-14 12:45:52,477::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = 'iscsiadm: No session
found.\n'; <rc> = 21 Thread-12::DEBUG::2012-11-14 12:45:52,477::misc::1036::SamplingMethod::(__call__) Returning last result Thread-12::DEBUG::2012-11-14 12:45:52,478::supervdsm::107::SuperVdsmProxy::(_start) Launching Super Vdsm Thread-12::DEBUG::2012-11-14 12:45:52,478::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /bin/python /usr/share/vdsm/supervdsmServer.py c9c732a0-065b-4634-8bb4-fbcd2081de16 11360' (cwd None) KsmMonitor::DEBUG::2012-11-14 12:45:52,486::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = 'Redirecting to /bin/systemctl start ksm.service\n'; <rc> = 0 MainThread::DEBUG::2012-11-14 12:45:52,669::supervdsmServer::324::SuperVdsm.Server::(main) Making sure I'm root MainThread::DEBUG::2012-11-14 12:45:52,670::supervdsmServer::328::SuperVdsm.Server::(main) Parsing cmd args MainThread::DEBUG::2012-11-14 12:45:52,670::supervdsmServer::331::SuperVdsm.Server::(main) Creating PID file MainThread::DEBUG::2012-11-14 12:45:52,670::supervdsmServer::338::SuperVdsm.Server::(main) Cleaning old socket MainThread::DEBUG::2012-11-14 12:45:52,670::supervdsmServer::342::SuperVdsm.Server::(main) Setting up keep alive thread MainThread::DEBUG::2012-11-14 12:45:52,670::supervdsmServer::348::SuperVdsm.Server::(main) Creating remote object manager MainThread::DEBUG::2012-11-14 12:45:52,671::supervdsmServer::360::SuperVdsm.Server::(main) Started serving super vdsm object Thread-14::DEBUG::2012-11-14 12:45:53,732::BindingXMLRPC::903::vds::(wrapper) client [192.168.122.1]::call getCapabilities with () {} Thread-14::DEBUG::2012-11-14 12:45:53,902::BindingXMLRPC::910::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64', 'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server': {'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release': 
'0.129.git2c2c228.fc17', 'buildtime': 1352759542L, 'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17', 'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid': '4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0', 'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1', 'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '', 
'hwaddr': '00:24:8c:9e:af:d5', 'speed': 1000}}, 'software_revision': '0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'], 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize': '7734', 'cpuSpeed': '3200.000', 'cpuSockets': '1', 'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'}, 'lastClient': '0.0.0.0'}} Thread-15::DEBUG::2012-11-14 12:45:54,148::BindingXMLRPC::903::vds::(wrapper) client [192.168.122.1]::call getCapabilities with () {} Thread-15::DEBUG::2012-11-14 12:45:54,173::BindingXMLRPC::910::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64', 
'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server': {'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release': '0.129.git2c2c228.fc17', 'buildtime': 1352759542L, 'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17', 'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid': '4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0', 'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1', 'UUID': 
'9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:24:8c:9e:af:d5', 'speed': 1000}}, 'software_revision': '0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'], 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize': '7734', 'cpuSpeed': '800.000', 'cpuSockets': '1', 'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'}, 'lastClient': '192.168.122.1'}} MainThread::INFO::2012-11-14 12:45:55,916::vdsm::88::vds::(run) I am the actual vdsm 4.10-0.129 demo.netbulae.eu (3.6.6-1.fc17.x86_64) MainThread::DEBUG::2012-11-14 12:46:08,422::resourceManager::379::ResourceManager::(registerNamespace) Registering namespace 'Storage' 
MainThread::DEBUG::2012-11-14 12:46:08,423::threadPool::33::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 MainThread::WARNING::2012-11-14 12:46:08,431::fileUtils::184::fileUtils::(createdir) Dir /rhev/data-center/mnt already exists MainThread::DEBUG::2012-11-14 12:46:08,467::supervdsm::107::SuperVdsmProxy::(_start) Launching Super Vdsm MainThread::DEBUG::2012-11-14 12:46:08,467::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /bin/python /usr/share/vdsm/supervdsmServer.py d5652547-2838-4900-8e62-5191bf37c460 11918' (cwd None) MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::324::SuperVdsm.Server::(main) Making sure I'm root MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::328::SuperVdsm.Server::(main) Parsing cmd args MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::331::SuperVdsm.Server::(main) Creating PID file MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::338::SuperVdsm.Server::(main) Cleaning old socket MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::342::SuperVdsm.Server::(main) Setting up keep alive thread MainThread::DEBUG::2012-11-14 12:46:08,635::supervdsmServer::348::SuperVdsm.Server::(main) Creating remote object manager MainThread::DEBUG::2012-11-14 12:46:08,636::supervdsmServer::360::SuperVdsm.Server::(main) Started serving super vdsm object MainThread::DEBUG::2012-11-14 12:46:10,475::supervdsm::161::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm MainThread::DEBUG::2012-11-14 12:46:10,549::multipath::106::Storage.Multipath::(isEnabled) Current revision of multipath.conf detected, preserving MainThread::DEBUG::2012-11-14 12:46:10,549::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None) MainThread::DEBUG::2012-11-14 12:46:10,621::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 MainThread::DEBUG::2012-11-14 
12:46:10,623::hsm::407::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2012-11-14 12:46:10,634::hsm::439::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
MainThread::DEBUG::2012-11-14 12:46:10,634::hsm::440::Storage.HSM::(__cleanStorageRepository) Mount list: []
MainThread::DEBUG::2012-11-14 12:46:10,634::hsm::442::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
MainThread::DEBUG::2012-11-14 12:46:10,636::hsm::485::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
MainThread::INFO::2012-11-14 12:46:10,638::dispatcher::95::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
Thread-12::DEBUG::2012-11-14 12:46:10,638::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
Thread-12::DEBUG::2012-11-14 12:46:10,643::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
MainThread::WARNING::2012-11-14 12:46:10,688::clientIF::197::vds::(_prepareMOM) MOM initialization failed and fall back to KsmMonitor
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 194, in _prepareMOM
    self.mom = MomThread(momconf)
  File "/usr/share/vdsm/momIF.py", line 34, in __init__
    raise Exception("MOM is not available")
Exception: MOM is not available
MainThread::DEBUG::2012-11-14 12:46:10,690::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/pgrep -xf ksmd' (cwd None) Thread-12::DEBUG::2012-11-14 12:46:10,710::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 Thread-12::DEBUG::2012-11-14 12:46:10,710::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex Thread-12::DEBUG::2012-11-14 12:46:10,711::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-12::DEBUG::2012-11-14 12:46:10,711::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None) MainThread::DEBUG::2012-11-14 12:46:10,712::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 MainThread::INFO::2012-11-14 12:46:10,721::ksm::43::vds::(__init__) starting ksm monitor thread, ksm pid is 40 KsmMonitor::DEBUG::2012-11-14 12:46:10,722::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/service ksmtuned start' (cwd None) MainThread::INFO::2012-11-14 12:46:10,724::vmChannels::139::vds::(settimeout) Setting channels' timeout to 30 seconds. VM Channels Listener::INFO::2012-11-14 12:46:10,738::vmChannels::127::vds::(run) Starting VM channels listener thread. MainThread::WARNING::2012-11-14 12:46:10,747::clientIF::182::vds::(_prepareBindings) Unable to load the rest server module. Please make sure it is installed. MainThread::WARNING::2012-11-14 12:46:10,747::clientIF::188::vds::(_prepareBindings) Unable to load the json rpc server module. Please make sure it is installed. 
Thread-12::DEBUG::2012-11-14 12:46:10,767::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0 Thread-12::DEBUG::2012-11-14 12:46:10,768::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-12::DEBUG::2012-11-14 12:46:10,770::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) KsmMonitor::DEBUG::2012-11-14 12:46:10,782::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = 'Redirecting to /bin/systemctl start ksmtuned.service\n'; <rc> = 0 KsmMonitor::DEBUG::2012-11-14 12:46:10,783::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/service ksm start' (cwd None) Thread-12::DEBUG::2012-11-14 12:46:10,823::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0 Thread-12::DEBUG::2012-11-14 12:46:10,824::misc::1026::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-12::DEBUG::2012-11-14 12:46:10,824::misc::1028::SamplingMethod::(__call__) Got in to sampling method Thread-12::DEBUG::2012-11-14 12:46:10,825::misc::1026::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-12::DEBUG::2012-11-14 12:46:10,825::misc::1028::SamplingMethod::(__call__) Got in to sampling method Thread-12::DEBUG::2012-11-14 12:46:10,826::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None) KsmMonitor::DEBUG::2012-11-14 12:46:10,840::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = 'Redirecting to 
/bin/systemctl start ksm.service\n'; <rc> = 0 Thread-12::DEBUG::2012-11-14 12:46:10,852::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21 Thread-12::DEBUG::2012-11-14 12:46:10,852::misc::1036::SamplingMethod::(__call__) Returning last result MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,858::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host0/scan' (cwd None) MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,882::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host1/scan' (cwd None) MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,891::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host2/scan' (cwd None) MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,898::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host3/scan' (cwd None) MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,905::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host4/scan' (cwd None) MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,913::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host5/scan' (cwd None) MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,922::iscsi::388::Storage.ISCSI::(forceIScsiScan) Performing SCSI scan, this will take up to 30 seconds Thread-14::DEBUG::2012-11-14 12:46:12,615::BindingXMLRPC::903::vds::(wrapper) client [192.168.122.1]::call getCapabilities with () {} Thread-14::DEBUG::2012-11-14 12:46:12,777::BindingXMLRPC::910::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64', 'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server': {'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release': '0.129.git2c2c228.fc17', 'buildtime': 
1352759542L, 'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17', 'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid': '4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0', 'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1', 'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:24:8c:9e:af:d5', 'speed': 
1000}}, 'software_revision': '0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'], 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize': '7734', 'cpuSpeed': '800.000', 'cpuSockets': '1', 'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'}, 'lastClient': '0.0.0.0'}} Thread-12::DEBUG::2012-11-14 12:46:12,926::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/multipath' (cwd None) Thread-12::DEBUG::2012-11-14 12:46:12,990::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 Thread-12::DEBUG::2012-11-14 12:46:12,990::lvm::477::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got 
the operation mutex Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::479::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::488::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::490::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::508::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::510::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-12::DEBUG::2012-11-14 12:46:12,991::misc::1036::SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2012-11-14 12:46:14,043::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1] Thread-16::DEBUG::2012-11-14 12:46:14,044::task::568::TaskManager.Task::(_updateState) Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state init -> state preparing Thread-16::INFO::2012-11-14 12:46:14,045::logUtils::37::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=4, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None) Thread-16::INFO::2012-11-14 12:46:14,045::logUtils::39::dispatcher::(wrapper) Run and protect: validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-16::DEBUG::2012-11-14 12:46:14,045::task::1151::TaskManager.Task::(prepare) Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::finished: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-16::DEBUG::2012-11-14 
12:46:14,045::task::568::TaskManager.Task::(_updateState) Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state preparing -> state finished Thread-16::DEBUG::2012-11-14 12:46:14,045::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-16::DEBUG::2012-11-14 12:46:14,045::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-16::DEBUG::2012-11-14 12:46:14,045::task::957::TaskManager.Task::(_decref) Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::ref 0 aborting False Thread-17::DEBUG::2012-11-14 12:46:14,128::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1] Thread-17::DEBUG::2012-11-14 12:46:14,129::task::568::TaskManager.Task::(_updateState) Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state init -> state preparing Thread-17::INFO::2012-11-14 12:46:14,129::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=4, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-17::ERROR::2012-11-14 12:46:14,212::hsm::2057::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2054, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 462, in connect
    if not self.checkTarget():
  File "/usr/share/vdsm/storage/storageServer.py", line 449, in checkTarget
    fileSD.validateDirAccess(self._path))
  File "/usr/share/vdsm/storage/fileSD.py", line 51, in validateDirAccess
    getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 274, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 180, in callCrabRPCFunction
    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 149, in _recvAll
    timeLeft):
  File "/usr/lib64/python2.7/contextlib.py", line 84, in helper
    return GeneratorContextManager(func(*args, **kwds))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 136, in _poll
    raise Timeout()
Timeout
Thread-17::INFO::2012-11-14 12:46:14,231::logUtils::39::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-17::DEBUG::2012-11-14 12:46:14,231::task::1151::TaskManager.Task::(prepare) Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::finished: {'statuslist': [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-17::DEBUG::2012-11-14 12:46:14,232::task::568::TaskManager.Task::(_updateState) Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state preparing -> state finished Thread-17::DEBUG::2012-11-14 12:46:14,232::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-17::DEBUG::2012-11-14 12:46:14,233::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Kind regards, Jorick Astrego Netbulae B.V.
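P.S. for anyone debugging the same failure: the traceback dies inside fileSD.validateDirAccess, i.e. while a vdsm helper process was checking access to the storage path ('/data' here), and the poll on that helper timed out. As a rough illustration only (this is not vdsm's actual code — the real check runs in a separate remote-file-handler process with a poll timeout, and the function name here is just a stand-in), the access test amounts to something like:

```python
import os

def validate_dir_access(path):
    # Hypothetical sketch of the kind of check vdsm delegates to its
    # remote file handler process; names and details are assumptions,
    # not vdsm's real implementation.
    if not os.path.isdir(path):
        raise OSError("not a directory: %r" % path)
    # Readable, writable and searchable by the current user?
    return os.access(path, os.R_OK | os.W_OK | os.X_OK)
```

If a check like this hangs or fails when run as the vdsm user against the storage path, the path itself (permissions, or an unresponsive underlying mount) is the likely culprit rather than the engine.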
--------------070401060008050901000001 Content-Type: text/html; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit <html> <head> <meta content="text/html; charset=ISO-8859-1" http-equiv="Content-Type"> </head> <body bgcolor="#FFFFFF" text="#000000"> <blockquote type="cite"><br> <pre wrap="">2012-11-03 19:19:22::DEBUG::engine-setup::1747::root:: storage-path: /data/ovirt/vdsm 2012-11-03 19:19:22::DEBUG::engine-setup::1747::root:: superuser-pass: ******** 2012-11-03 19:19:22::ERROR::engine-setup::2376::root:: Traceback (most recent call last): File "/bin/engine-setup", line 2370, in <module> main(confFile) File "/bin/engine-setup", line 2159, in main runSequences() File "/bin/engine-setup", line 2105, in runSequences controller.runAllSequences() File "/usr/share/ovirt-engine/scripts/setup_controller.py", line 54, in runAllSequences sequence.run() File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 154, in run step.run() File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run function() File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 290, in addStorageDomain raise Exception(ERROR_ADD_LOCAL_DOMAIN) Exception: Error: could not add local storage domain XMLSyntaxError: Space required after the Public Identifier, line 1, column 47 looks somewhat strange to me. Any hint what causes this error? Thanks, Christian P.S.: The installation failed several times before that, until i figured out that the engine-setup needs to login in via ssh; we had configured sshd to allow only public key auth, and this raised an error. </pre> </blockquote> <blockquote cite="mid:mailman.6600.1352797990.6397.users@ovirt.org" type="cite"> <blockquote type="cite"> <pre wrap=""> </pre> </blockquote> <pre wrap=""> did this get resolved? -</pre> </blockquote> I'm not the original submitter of this issue, but I have exactly the same problem with the latest nightly all-in-one installation. 
<br> <br> We don't use public key auth for sshd on this machine so that's not the problem. This is what I see in the vdsm.log:<br> <br> MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a> 12:45:51,444::vdsm::88::vds::(run) I am the actual vdsm 4.10-0.129 demo.netbulae.eu (3.6.6-1.fc17.x86_64)<br> MainThread::DEBUG::2012-11-14 12:45:51,812::resourceManager::379::ResourceManager::(registerNamespace) Registering namespace 'Storage'<br> MainThread::DEBUG::2012-11-14 12:45:51,813::threadPool::33::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0<br> MainThread::DEBUG::2012-11-14 12:45:51,856::multipath::115::Storage.Multipath::(isEnabled) multipath Defaulting to False<br> MainThread::DEBUG::2012-11-14 12:45:51,857::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /bin/cp /tmp/tmpVVMg7O /etc/multipath.conf' (cwd None)<br> MainThread::DEBUG::2012-11-14 12:45:51,942::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0<br> MainThread::DEBUG::2012-11-14 12:45:51,944::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/multipath -F' (cwd None)<br> MainThread::DEBUG::2012-11-14 12:45:51,975::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = ''; <rc> = 1<br> MainThread::DEBUG::2012-11-14 12:45:51,976::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/service multipathd restart' (cwd None)<br> MainThread::DEBUG::2012-11-14 12:45:52,179::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = 'Redirecting to /bin/systemctl restart multipathd.service\n'; <rc> = 0<br> MainThread::DEBUG::2012-11-14 12:45:52,179::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)<br> MainThread::DEBUG::2012-11-14 12:45:52,241::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0<br> MainThread::DEBUG::2012-11-14 12:45:52,242::hsm::407::Storage.HSM::(__cleanStorageRepository) Started cleaning 
storage repository at '/rhev/data-center'<br> MainThread::DEBUG::2012-11-14 12:45:52,253::hsm::439::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']<br> MainThread::DEBUG::2012-11-14 12:45:52,253::hsm::440::Storage.HSM::(__cleanStorageRepository) Mount list: []<br> MainThread::DEBUG::2012-11-14 12:45:52,254::hsm::442::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers<br> MainThread::DEBUG::2012-11-14 12:45:52,258::hsm::485::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'<br> Thread-12::DEBUG::2012-11-14 12:45:52,259::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex<br> MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a> 12:45:52,260::dispatcher::95::Storage.Dispatcher::(__init__) Starting StorageDispatcher...<br> Thread-12::DEBUG::2012-11-14 12:45:52,266::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o <b>uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)</b><b><br> </b><b>MainThread::WARNING::2012-11-14 12:45:52,300::clientIF::197::vds::(_prepareMOM) MOM initialization failed and fall back to KsmMonitor</b><b><br> </b><b>Traceback (most recent call last):</b><b><br> </b><b> File "/usr/share/vdsm/clientIF.py", line 194, in _prepareMOM</b><b><br> </b><b> self.mom = MomThread(momconf)</b><b><br> </b><b> File "/usr/share/vdsm/momIF.py", line 34, in __init__</b><b><br> </b><b> raise Exception("MOM is not available")</b><b><br> </b><b>Exception: MOM 
is not available</b><br> MainThread::DEBUG::2012-11-14 12:45:52,304::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/pgrep -xf ksmd' (cwd None)<br> Thread-12::DEBUG::2012-11-14 12:45:52,340::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0<br> Thread-12::DEBUG::2012-11-14 12:45:52,341::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex<br> Thread-12::DEBUG::2012-11-14 12:45:52,342::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex<br> Thread-12::DEBUG::2012-11-14 12:45:52,343::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)<br> MainThread::DEBUG::2012-11-14 12:45:52,353::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0<br> MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a> 12:45:52,354::ksm::43::vds::(__init__) starting ksm monitor thread, ksm pid is 40<br> KsmMonitor::DEBUG::2012-11-14 12:45:52,355::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/service ksmtuned start' (cwd None)<br> MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a> 12:45:52,367::vmChannels::139::vds::(settimeout) Setting channels' timeout to 30 seconds.<br> VM Channels Listener::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a> 12:45:52,368::vmChannels::127::vds::(run) Starting VM channels listener thread.<br> <b>MainThread::WARNING::2012-11-14 
12:45:52,375::clientIF::182::vds::(_prepareBindings) Unable to load the rest server module. Please make sure it is installed.</b><b><br> </b><b>MainThread::WARNING::2012-11-14 12:45:52,376::clientIF::188::vds::(_prepareBindings) Unable to load the json rpc server module. Please make sure it is installed.</b><br> Thread-12::DEBUG::2012-11-14 12:45:52,398::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0<br> Thread-12::DEBUG::2012-11-14 12:45:52,399::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex<br> Thread-12::DEBUG::2012-11-14 12:45:52,401::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)<br> KsmMonitor::DEBUG::2012-11-14 12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = 'Redirecting to /bin/systemctl start ksmtuned.service\n'; <rc> = 0<br> KsmMonitor::DEBUG::2012-11-14 12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/service ksm start' (cwd None)<br> Thread-12::DEBUG::2012-11-14 12:45:52,457::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0<br> Thread-12::DEBUG::2012-11-14 12:45:52,458::misc::1026::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)<br> Thread-12::DEBUG::2012-11-14 12:45:52,458::misc::1028::SamplingMethod::(__call__) Got in to sampling method<br> Thread-12::DEBUG::2012-11-14 12:45:52,458::misc::1026::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)<br> Thread-12::DEBUG::2012-11-14 12:45:52,458::misc::1028::SamplingMethod::(__call__) Got in to sampling method<br> Thread-12::DEBUG::2012-11-14 12:45:52,458::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)<br> Thread-12::DEBUG::2012-11-14 
12:45:52,477::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21<br> Thread-12::DEBUG::2012-11-14 12:45:52,477::misc::1036::SamplingMethod::(__call__) Returning last result<br> Thread-12::DEBUG::2012-11-14 12:45:52,478::supervdsm::107::SuperVdsmProxy::(_start) Launching Super Vdsm<br> Thread-12::DEBUG::2012-11-14 12:45:52,478::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /bin/python /usr/share/vdsm/supervdsmServer.py c9c732a0-065b-4634-8bb4-fbcd2081de16 11360' (cwd None)<br> KsmMonitor::DEBUG::2012-11-14 12:45:52,486::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = 'Redirecting to /bin/systemctl start ksm.service\n'; <rc> = 0<br> MainThread::DEBUG::2012-11-14 12:45:52,669::supervdsmServer::324::SuperVdsm.Server::(main) Making sure I'm root<br> MainThread::DEBUG::2012-11-14 12:45:52,670::supervdsmServer::328::SuperVdsm.Server::(main) Parsing cmd args<br> MainThread::DEBUG::2012-11-14 12:45:52,670::supervdsmServer::331::SuperVdsm.Server::(main) Creating PID file<br> MainThread::DEBUG::2012-11-14 12:45:52,670::supervdsmServer::338::SuperVdsm.Server::(main) Cleaning old socket<br> MainThread::DEBUG::2012-11-14 12:45:52,670::supervdsmServer::342::SuperVdsm.Server::(main) Setting up keep alive thread<br> MainThread::DEBUG::2012-11-14 12:45:52,670::supervdsmServer::348::SuperVdsm.Server::(main) Creating remote object manager<br> MainThread::DEBUG::2012-11-14 12:45:52,671::supervdsmServer::360::SuperVdsm.Server::(main) Started serving super vdsm object<br> Thread-14::DEBUG::2012-11-14 12:45:53,732::BindingXMLRPC::903::vds::(wrapper) client [192.168.122.1]::call getCapabilities with () {}<br> Thread-14::DEBUG::2012-11-14 12:45:53,902::BindingXMLRPC::910::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64', 
'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server': {'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release': '0.129.git2c2c228.fc17', 'buildtime': 1352759542L, 'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17', 'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid': '4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0', 'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1', 'UUID': 
'9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:24:8c:9e:af:d5', 'speed': 1000}}, 'software_revision': '0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'], 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize': '7734', 'cpuSpeed': '3200.000', 'cpuSockets': '1', 'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'}, 'lastClient': '0.0.0.0'}}<br> Thread-15::DEBUG::2012-11-14 12:45:54,148::BindingXMLRPC::903::vds::(wrapper) client [192.168.122.1]::call getCapabilities with () {}<br> Thread-15::DEBUG::2012-11-14 12:45:54,173::BindingXMLRPC::910::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 
'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64', 'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server': {'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release': '0.129.git2c2c228.fc17', 'buildtime': 1352759542L, 'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17', 'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid': '4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0', 'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 
'p15p1', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1', 'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:24:8c:9e:af:d5', 'speed': 1000}}, 'software_revision': '0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'], 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize': '7734', 'cpuSpeed': '800.000', 'cpuSockets': '1', 'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'}, 'lastClient': '192.168.122.1'}}<br> MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a> 
12:45:55,916::vdsm::88::vds::(run) I am the actual vdsm 4.10-0.129 demo.netbulae.eu (3.6.6-1.fc17.x86_64)
MainThread::DEBUG::2012-11-14 12:46:08,422::resourceManager::379::ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2012-11-14 12:46:08,423::threadPool::33::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::WARNING::2012-11-14 12:46:08,431::fileUtils::184::fileUtils::(createdir) Dir /rhev/data-center/mnt already exists
MainThread::DEBUG::2012-11-14 12:46:08,467::supervdsm::107::SuperVdsmProxy::(_start) Launching Super Vdsm
MainThread::DEBUG::2012-11-14 12:46:08,467::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /bin/python /usr/share/vdsm/supervdsmServer.py d5652547-2838-4900-8e62-5191bf37c460 11918' (cwd None)
MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::324::SuperVdsm.Server::(main) Making sure I'm root
MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::328::SuperVdsm.Server::(main) Parsing cmd args
MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::331::SuperVdsm.Server::(main) Creating PID file
MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::338::SuperVdsm.Server::(main) Cleaning old socket
MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::342::SuperVdsm.Server::(main) Setting up keep alive thread
MainThread::DEBUG::2012-11-14 12:46:08,635::supervdsmServer::348::SuperVdsm.Server::(main) Creating remote object manager
MainThread::DEBUG::2012-11-14 12:46:08,636::supervdsmServer::360::SuperVdsm.Server::(main) Started serving super vdsm object
MainThread::DEBUG::2012-11-14 12:46:10,475::supervdsm::161::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm
MainThread::DEBUG::2012-11-14 12:46:10,549::multipath::106::Storage.Multipath::(isEnabled) Current revision of multipath.conf detected, preserving
MainThread::DEBUG::2012-11-14 12:46:10,549::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-11-14 12:46:10,621::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-11-14 12:46:10,623::hsm::407::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2012-11-14 12:46:10,634::hsm::439::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
MainThread::DEBUG::2012-11-14 12:46:10,634::hsm::440::Storage.HSM::(__cleanStorageRepository) Mount list: []
MainThread::DEBUG::2012-11-14 12:46:10,634::hsm::442::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
MainThread::DEBUG::2012-11-14 12:46:10,636::hsm::485::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
MainThread::INFO::2012-11-14 12:46:10,638::dispatcher::95::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
Thread-12::DEBUG::2012-11-14 12:46:10,638::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
Thread-12::DEBUG::2012-11-14 12:46:10,643::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
MainThread::WARNING::2012-11-14 12:46:10,688::clientIF::197::vds::(_prepareMOM) MOM initialization failed and fall back to KsmMonitor
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 194, in _prepareMOM
    self.mom = MomThread(momconf)
  File "/usr/share/vdsm/momIF.py", line 34, in __init__
    raise Exception("MOM is not available")
Exception: MOM is not available
MainThread::DEBUG::2012-11-14 12:46:10,690::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/pgrep -xf ksmd' (cwd None)
Thread-12::DEBUG::2012-11-14 12:46:10,710::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-12::DEBUG::2012-11-14 12:46:10,710::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14 12:46:10,711::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-12::DEBUG::2012-11-14 12:46:10,711::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
MainThread::DEBUG::2012-11-14 12:46:10,712::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
MainThread::INFO::2012-11-14 12:46:10,721::ksm::43::vds::(__init__) starting ksm monitor thread, ksm pid is 40
KsmMonitor::DEBUG::2012-11-14 12:46:10,722::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/service ksmtuned start' (cwd None)
MainThread::INFO::2012-11-14 12:46:10,724::vmChannels::139::vds::(settimeout) Setting channels' timeout to 30 seconds.
VM Channels Listener::INFO::2012-11-14 12:46:10,738::vmChannels::127::vds::(run) Starting VM channels listener thread.
MainThread::WARNING::2012-11-14 12:46:10,747::clientIF::182::vds::(_prepareBindings) Unable to load the rest server module. Please make sure it is installed.
MainThread::WARNING::2012-11-14 12:46:10,747::clientIF::188::vds::(_prepareBindings) Unable to load the json rpc server module. Please make sure it is installed.
Thread-12::DEBUG::2012-11-14 12:46:10,767::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14 12:46:10,768::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14 12:46:10,770::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
KsmMonitor::DEBUG::2012-11-14 12:46:10,782::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = 'Redirecting to /bin/systemctl start ksmtuned.service\n'; <rc> = 0
KsmMonitor::DEBUG::2012-11-14 12:46:10,783::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/service ksm start' (cwd None)
Thread-12::DEBUG::2012-11-14 12:46:10,823::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14 12:46:10,824::misc::1026::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-12::DEBUG::2012-11-14 12:46:10,824::misc::1028::SamplingMethod::(__call__) Got in to sampling method
Thread-12::DEBUG::2012-11-14 12:46:10,825::misc::1026::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-12::DEBUG::2012-11-14 12:46:10,825::misc::1028::SamplingMethod::(__call__) Got in to sampling method
Thread-12::DEBUG::2012-11-14 12:46:10,826::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
KsmMonitor::DEBUG::2012-11-14 12:46:10,840::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = 'Redirecting to /bin/systemctl start ksm.service\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14 12:46:10,852::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21
Thread-12::DEBUG::2012-11-14 12:46:10,852::misc::1036::SamplingMethod::(__call__) Returning last result
MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,858::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host0/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,882::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host1/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,891::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host2/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,898::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host3/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,905::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host4/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,913::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host5/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,922::iscsi::388::Storage.ISCSI::(forceIScsiScan) Performing SCSI scan, this will take up to 30 seconds
Thread-14::DEBUG::2012-11-14 12:46:12,615::BindingXMLRPC::903::vds::(wrapper) client [192.168.122.1]::call getCapabilities with () {}
Thread-14::DEBUG::2012-11-14 12:46:12,777::BindingXMLRPC::910::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64', 'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server': {'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release': '0.129.git2c2c228.fc17', 'buildtime': 1352759542L, 'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17', 'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid': '4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0', 'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1', 'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:24:8c:9e:af:d5', 'speed': 1000}}, 'software_revision': '0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'], 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize': '7734', 'cpuSpeed': '800.000', 'cpuSockets': '1', 'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'}, 'lastClient': '0.0.0.0'}}
Thread-12::DEBUG::2012-11-14 12:46:12,926::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/multipath' (cwd None)
Thread-12::DEBUG::2012-11-14 12:46:12,990::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-12::DEBUG::2012-11-14 12:46:12,990::lvm::477::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::479::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::488::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::490::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::508::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::510::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14 12:46:12,991::misc::1036::SamplingMethod::(__call__) Returning last result
Thread-16::DEBUG::2012-11-14 12:46:14,043::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1]
Thread-16::DEBUG::2012-11-14 12:46:14,044::task::568::TaskManager.Task::(_updateState) Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state init -> state preparing
Thread-16::INFO::2012-11-14 12:46:14,045::logUtils::37::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=4, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-16::INFO::2012-11-14 12:46:14,045::logUtils::39::dispatcher::(wrapper) Run and protect: validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-16::DEBUG::2012-11-14 12:46:14,045::task::1151::TaskManager.Task::(prepare) Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::finished: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-16::DEBUG::2012-11-14 12:46:14,045::task::568::TaskManager.Task::(_updateState) Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state preparing -> state finished
Thread-16::DEBUG::2012-11-14 12:46:14,045::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-16::DEBUG::2012-11-14 12:46:14,045::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-16::DEBUG::2012-11-14 12:46:14,045::task::957::TaskManager.Task::(_decref) Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::ref 0 aborting False
Thread-17::DEBUG::2012-11-14 12:46:14,128::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1]
Thread-17::DEBUG::2012-11-14 12:46:14,129::task::568::TaskManager.Task::(_updateState) Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state init -> state preparing
Thread-17::INFO::2012-11-14 12:46:14,129::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=4, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-17::ERROR::2012-11-14 12:46:14,212::hsm::2057::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2054, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 462, in connect
    if not self.checkTarget():
  File "/usr/share/vdsm/storage/storageServer.py", line 449, in checkTarget
    fileSD.validateDirAccess(self._path))
  File "/usr/share/vdsm/storage/fileSD.py", line 51, in validateDirAccess
    getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 274, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 180, in callCrabRPCFunction
    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 149, in _recvAll
    timeLeft):
  File "/usr/lib64/python2.7/contextlib.py", line 84, in helper
    return GeneratorContextManager(func(*args, **kwds))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 136, in _poll
    raise Timeout()
Timeout
Thread-17::INFO::2012-11-14 12:46:14,231::logUtils::39::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-17::DEBUG::2012-11-14 12:46:14,231::task::1151::TaskManager.Task::(prepare) Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::finished: {'statuslist': [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-17::DEBUG::2012-11-14 12:46:14,232::task::568::TaskManager.Task::(_updateState) Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state preparing -> state finished
Thread-17::DEBUG::2012-11-14 12:46:14,232::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-17::DEBUG::2012-11-14 12:46:14,233::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}

Kind regards,

Jorick Astrego
Netbulae B.V.
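[Editor's note] The Timeout in the traceback above is raised while vdsm's remote file handler checks access to the local storage path ('/data'). Before rerunning engine-setup it is worth confirming that the directory exists and is readable, writable, and searchable by the vdsm user. The sketch below is only a rough, hypothetical stand-in for that check (the function name is invented; vdsm's real check lives in storage/fileSD.py and runs in a separate process as the vdsm user):

```python
import os

def validate_dir_access(path):
    """Rough approximation of the rwx check vdsm performs on a
    local storage domain path before connecting it."""
    if not os.path.isdir(path):
        raise OSError("%s is not an accessible directory" % path)
    if not os.access(path, os.R_OK | os.W_OK | os.X_OK):
        raise OSError("missing rwx permissions on %s" % path)
    return True

# Example: check the storage path from the log above
# (run as the vdsm user, since os.access uses the caller's uid):
# validate_dir_access("/data")
```

Note that passing this check as root proves little; the permissions have to hold for the vdsm user itself.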
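[Editor's note] As for the "Space required after the Public Identifier" message from the earlier engine-setup log: that string is a libxml2 parse error, surfaced by lxml as XMLSyntaxError, and it generally means the parser was handed malformed or unexpected markup (for instance a mangled DOCTYPE, or an HTML error page where XML was expected). A minimal sketch that provokes the same message; the input document here is invented purely for illustration:

```python
from lxml import etree

# A DOCTYPE with no whitespace between the PUBLIC identifier and
# the system literal is malformed XML; libxml2 reports
# "Space required after the Public Identifier" for it.
broken = '<!DOCTYPE root PUBLIC "-//EXAMPLE//DTD X//EN""root.dtd"><root/>'
try:
    etree.fromstring(broken)
except etree.XMLSyntaxError as exc:
    print(exc)
```

So the error by itself does not point at storage at all; it suggests some XML consumed during setup was not what the parser expected.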

Same here. And the db upgrade from 3.1 to nightly leaves ovirt-engine unable to connect to the database, so we are stuck with the beta only. On Wed, Nov 14, 2012 at 2:45 PM, Jorick Astrego <jorick@netbulae.eu> wrote:
MainThread::DEBUG::2012-11-14 12:46:08,423::threadPool::33::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 MainThread::WARNING::2012-11-14 12:46:08,431::fileUtils::184::fileUtils::(createdir) Dir /rhev/data-center/mnt already exists MainThread::DEBUG::2012-11-14 12:46:08,467::supervdsm::107::SuperVdsmProxy::(_start) Launching Super Vdsm MainThread::DEBUG::2012-11-14 12:46:08,467::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /bin/python /usr/share/vdsm/supervdsmServer.py d5652547-2838-4900-8e62-5191bf37c460 11918' (cwd None) MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::324::SuperVdsm.Server::(main) Making sure I'm root MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::328::SuperVdsm.Server::(main) Parsing cmd args MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::331::SuperVdsm.Server::(main) Creating PID file MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::338::SuperVdsm.Server::(main) Cleaning old socket MainThread::DEBUG::2012-11-14 12:46:08,634::supervdsmServer::342::SuperVdsm.Server::(main) Setting up keep alive thread MainThread::DEBUG::2012-11-14 12:46:08,635::supervdsmServer::348::SuperVdsm.Server::(main) Creating remote object manager MainThread::DEBUG::2012-11-14 12:46:08,636::supervdsmServer::360::SuperVdsm.Server::(main) Started serving super vdsm object MainThread::DEBUG::2012-11-14 12:46:10,475::supervdsm::161::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm MainThread::DEBUG::2012-11-14 12:46:10,549::multipath::106::Storage.Multipath::(isEnabled) Current revision of multipath.conf detected, preserving MainThread::DEBUG::2012-11-14 12:46:10,549::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None) MainThread::DEBUG::2012-11-14 12:46:10,621::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 MainThread::DEBUG::2012-11-14 
12:46:10,623::hsm::407::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center' MainThread::DEBUG::2012-11-14 12:46:10,634::hsm::439::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt'] MainThread::DEBUG::2012-11-14 12:46:10,634::hsm::440::Storage.HSM::(__cleanStorageRepository) Mount list: [] MainThread::DEBUG::2012-11-14 12:46:10,634::hsm::442::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers MainThread::DEBUG::2012-11-14 12:46:10,636::hsm::485::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center' MainThread::INFO::2012-11-14 12:46:10,638::dispatcher::95::Storage.Dispatcher::(__init__) Starting StorageDispatcher... Thread-12::DEBUG::2012-11-14 12:46:10,638::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex Thread-12::DEBUG::2012-11-14 12:46:10,643::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
MainThread::WARNING::2012-11-14 12:46:10,688::clientIF::197::vds::(_prepareMOM) MOM initialization failed and fall back to KsmMonitor
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 194, in _prepareMOM
    self.mom = MomThread(momconf)
  File "/usr/share/vdsm/momIF.py", line 34, in __init__
    raise Exception("MOM is not available")
Exception: MOM is not available
MainThread::DEBUG::2012-11-14 12:46:10,690::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/pgrep -xf ksmd' (cwd None) Thread-12::DEBUG::2012-11-14 12:46:10,710::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 Thread-12::DEBUG::2012-11-14 12:46:10,710::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex Thread-12::DEBUG::2012-11-14 12:46:10,711::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-12::DEBUG::2012-11-14 12:46:10,711::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None) MainThread::DEBUG::2012-11-14 12:46:10,712::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 MainThread::INFO::2012-11-14 12:46:10,721::ksm::43::vds::(__init__) starting ksm monitor thread, ksm pid is 40 KsmMonitor::DEBUG::2012-11-14 12:46:10,722::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/service ksmtuned start' (cwd None) MainThread::INFO::2012-11-14 12:46:10,724::vmChannels::139::vds::(settimeout) Setting channels' timeout to 30 seconds. VM Channels Listener::INFO::2012-11-14 12:46:10,738::vmChannels::127::vds::(run) Starting VM channels listener thread. MainThread::WARNING::2012-11-14 12:46:10,747::clientIF::182::vds::(_prepareBindings) Unable to load the rest server module. Please make sure it is installed. MainThread::WARNING::2012-11-14 12:46:10,747::clientIF::188::vds::(_prepareBindings) Unable to load the json rpc server module. Please make sure it is installed.
Thread-12::DEBUG::2012-11-14 12:46:10,767::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0 Thread-12::DEBUG::2012-11-14 12:46:10,768::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-12::DEBUG::2012-11-14 12:46:10,770::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) KsmMonitor::DEBUG::2012-11-14 12:46:10,782::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = 'Redirecting to /bin/systemctl start ksmtuned.service\n'; <rc> = 0 KsmMonitor::DEBUG::2012-11-14 12:46:10,783::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/service ksm start' (cwd None) Thread-12::DEBUG::2012-11-14 12:46:10,823::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0 Thread-12::DEBUG::2012-11-14 12:46:10,824::misc::1026::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-12::DEBUG::2012-11-14 12:46:10,824::misc::1028::SamplingMethod::(__call__) Got in to sampling method Thread-12::DEBUG::2012-11-14 12:46:10,825::misc::1026::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-12::DEBUG::2012-11-14 12:46:10,825::misc::1028::SamplingMethod::(__call__) Got in to sampling method Thread-12::DEBUG::2012-11-14 12:46:10,826::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None) KsmMonitor::DEBUG::2012-11-14 12:46:10,840::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = 'Redirecting to 
/bin/systemctl start ksm.service\n'; <rc> = 0 Thread-12::DEBUG::2012-11-14 12:46:10,852::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21 Thread-12::DEBUG::2012-11-14 12:46:10,852::misc::1036::SamplingMethod::(__call__) Returning last result MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,858::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host0/scan' (cwd None) MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,882::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host1/scan' (cwd None) MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,891::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host2/scan' (cwd None) MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,898::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host3/scan' (cwd None) MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,905::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host4/scan' (cwd None) MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,913::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host5/scan' (cwd None) MainProcess|Thread-12::DEBUG::2012-11-14 12:46:10,922::iscsi::388::Storage.ISCSI::(forceIScsiScan) Performing SCSI scan, this will take up to 30 seconds Thread-14::DEBUG::2012-11-14 12:46:12,615::BindingXMLRPC::903::vds::(wrapper) client [192.168.122.1]::call getCapabilities with () {} Thread-14::DEBUG::2012-11-14 12:46:12,777::BindingXMLRPC::910::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64', 'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server': {'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release': '0.129.git2c2c228.fc17', 'buildtime': 
1352759542L, 'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17', 'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release': '19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid': '4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0', 'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1', 'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:24:8c:9e:af:d5', 'speed': 
1000}}, 'software_revision': '0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'], 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize': '7734', 'cpuSpeed': '800.000', 'cpuSockets': '1', 'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'}, 'lastClient': '0.0.0.0'}} Thread-12::DEBUG::2012-11-14 12:46:12,926::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n /sbin/multipath' (cwd None) Thread-12::DEBUG::2012-11-14 12:46:12,990::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 Thread-12::DEBUG::2012-11-14 12:46:12,990::lvm::477::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got 
the operation mutex Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::479::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::488::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::490::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::508::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-12::DEBUG::2012-11-14 12:46:12,991::lvm::510::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-12::DEBUG::2012-11-14 12:46:12,991::misc::1036::SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2012-11-14 12:46:14,043::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1] Thread-16::DEBUG::2012-11-14 12:46:14,044::task::568::TaskManager.Task::(_updateState) Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state init -> state preparing Thread-16::INFO::2012-11-14 12:46:14,045::logUtils::37::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=4, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None) Thread-16::INFO::2012-11-14 12:46:14,045::logUtils::39::dispatcher::(wrapper) Run and protect: validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-16::DEBUG::2012-11-14 12:46:14,045::task::1151::TaskManager.Task::(prepare) Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::finished: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-16::DEBUG::2012-11-14
12:46:14,045::task::568::TaskManager.Task::(_updateState) Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state preparing -> state finished Thread-16::DEBUG::2012-11-14 12:46:14,045::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-16::DEBUG::2012-11-14 12:46:14,045::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-16::DEBUG::2012-11-14 12:46:14,045::task::957::TaskManager.Task::(_decref) Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::ref 0 aborting False Thread-17::DEBUG::2012-11-14 12:46:14,128::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1] Thread-17::DEBUG::2012-11-14 12:46:14,129::task::568::TaskManager.Task::(_updateState) Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state init -> state preparing Thread-17::INFO::2012-11-14 12:46:14,129::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=4, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-17::ERROR::2012-11-14 12:46:14,212::hsm::2057::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2054, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 462, in connect
    if not self.checkTarget():
  File "/usr/share/vdsm/storage/storageServer.py", line 449, in checkTarget
    fileSD.validateDirAccess(self._path))
  File "/usr/share/vdsm/storage/fileSD.py", line 51, in validateDirAccess
    getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 274, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 180, in callCrabRPCFunction
    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 149, in _recvAll
    timeLeft):
  File "/usr/lib64/python2.7/contextlib.py", line 84, in helper
    return GeneratorContextManager(func(*args, **kwds))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 136, in _poll
    raise Timeout()
Timeout
Thread-17::INFO::2012-11-14 12:46:14,231::logUtils::39::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-17::DEBUG::2012-11-14 12:46:14,231::task::1151::TaskManager.Task::(prepare) Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::finished: {'statuslist': [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-17::DEBUG::2012-11-14 12:46:14,232::task::568::TaskManager.Task::(_updateState) Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state preparing -> state finished Thread-17::DEBUG::2012-11-14 12:46:14,232::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-17::DEBUG::2012-11-14 12:46:14,233::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
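
The Timeout above is raised while vdsm's remote file handler runs validateDirAccess on the local storage path. A quick manual sanity check can be done outside vdsm; this is a hedged sketch, where the '/data' path and the 'vdsm' service user are taken from the log above and may differ on your host:

```shell
# Hedged sanity check for the storage path vdsm timed out on.
# STORAGE_PATH defaults to '/data' (from the log above); override as needed.
STORAGE_PATH=${STORAGE_PATH:-/data}

# Show owner, permissions and SELinux context (-Z column) of the directory
ls -ldZ "$STORAGE_PATH" 2>/dev/null || echo "$STORAGE_PATH does not exist on this host"

# Repeat the kind of access test vdsm performs, as the vdsm service user
if id vdsm >/dev/null 2>&1; then
    sudo -n -u vdsm test -r "$STORAGE_PATH" -a -x "$STORAGE_PATH" \
        && echo "vdsm can read and traverse $STORAGE_PATH" \
        || echo "vdsm cannot access $STORAGE_PATH"
else
    echo "no vdsm user on this host"
fi
```

If the second check fails while root succeeds, the problem is ownership, mode bits, or SELinux labeling on the path rather than vdsm itself.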
Kind regards,
Jorick Astrego Netbulae B.V.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Hi Jorick and Cristian,

the error you posted here looks like an issue we recently fixed (vdsm on nfs with kernel 3.6). Anyway it's quite difficult to make a comprehensive list of things to report and tests to execute. For this particular issue (and not as a general rule) I suggest you contact me on IRC (fsimonce on #ovirt OFTC) so that we can sort out the issue together. We will report back to the ML our findings so that it will be helpful for everyone else.

--
Federico

----- Original Message -----
From: "Jorick Astrego" <jorick@netbulae.eu>
To: users@ovirt.org
Sent: Wednesday, November 14, 2012 1:45:21 PM
Subject: Re: [Users] could not add local storage domain
- I'm not the original submitter of this issue, but I have exactly the same problem with the latest nightly all-in-one installation.
We don't use public key auth for sshd on this machine so that's not the problem. This is what I see in the vdsm.log:
[...]
Thread-17::INFO::2012-11-14 12:46:14,129::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=4, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-17::ERROR::2012-11-14 12:46:14,212::hsm::2057::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2054, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 462, in connect
    if not self.checkTarget():
  File "/usr/share/vdsm/storage/storageServer.py", line 449, in checkTarget
    fileSD.validateDirAccess(self._path))
  File "/usr/share/vdsm/storage/fileSD.py", line 51, in validateDirAccess
    getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 274, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 180, in callCrabRPCFunction
    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 149, in _recvAll
    timeLeft):
  File "/usr/lib64/python2.7/contextlib.py", line 84, in helper
    return GeneratorContextManager(func(*args, **kwds))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 136, in _poll
    raise Timeout()
Timeout

it's working for me with the latest files. Current issues:
- You need to create the db user as superuser
- disable selinux

On Thu, Nov 15, 2012 at 3:05 PM, Federico Simoncelli <fsimonce@redhat.com> wrote:
Hi Jorick and Cristian, the error you posted here looks like an issue we recently fixed (vdsm on nfs with kernel 3.6). Anyway it's quite difficult to make a comprehensive list of things to report and tests to execute. For this particular issue (and not as a general rule) I suggest you contact me on IRC (fsimonce on #ovirt OFTC) so that we can sort out the issue together.
We will report back to the ML our findings so that it will be helpful for everyone else.
-- Federico
----- Original Message -----
From: "Jorick Astrego" <jorick@netbulae.eu>
To: users@ovirt.org
Sent: Wednesday, November 14, 2012 1:45:21 PM
Subject: Re: [Users] could not add local storage domain
- I'm not the original submitter of this issue, but I have exactly the same problem with the latest nightly all-in-one installation.
We don't use public key auth for sshd on this machine so that's not the problem. This is what I see in the vdsm.log:
[...]
Thread-17::INFO::2012-11-14 12:46:14,129::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=4, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-17::ERROR::2012-11-14 12:46:14,212::hsm::2057::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2054, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 462, in connect
    if not self.checkTarget():
  File "/usr/share/vdsm/storage/storageServer.py", line 449, in checkTarget
    fileSD.validateDirAccess(self._path))
  File "/usr/share/vdsm/storage/fileSD.py", line 51, in validateDirAccess
    getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 274, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 180, in callCrabRPCFunction
    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 149, in _recvAll
    timeLeft):
  File "/usr/lib64/python2.7/contextlib.py", line 84, in helper
    return GeneratorContextManager(func(*args, **kwds))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 136, in _poll
    raise Timeout()
Timeout
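
On the "create the db user as superuser" point mentioned earlier in this message, the workaround boils down to granting the engine's database role the SUPERUSER attribute. A hedged sketch; the role name 'engine' is an assumption, not from the thread — use whatever database user engine-setup was configured with:

```shell
# Hedged sketch of the db-user workaround; 'engine' role name is assumed.
DB_USER=${DB_USER:-engine}
if command -v psql >/dev/null 2>&1; then
    # Grant the role superuser rights (run on the engine's PostgreSQL host)
    sudo -u postgres psql -c "ALTER ROLE $DB_USER SUPERUSER;"
else
    echo "psql not installed on this host; skipping"
fi
```

Note this is a workaround, not a recommended permanent configuration: a superuser database role is broader than the engine normally needs.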

----- Original Message -----
From: "Cristian Falcas" <cristi.falcas@gmail.com>
To: "Federico Simoncelli" <fsimonce@redhat.com>
Cc: "Jorick Astrego" <jorick@netbulae.eu>, users@ovirt.org
Sent: Friday, November 16, 2012 6:47:50 PM
Subject: Re: [Users] could not add local storage domain
it's working for me with the latest files.
Current issues:
- You need to create the db user as superuser
- disable selinux.
Can you grab the relevant AVC errors and report them in a bug? Thanks, -- Federico

Please let me know how to do this, or if the info below is enough.

In the logs I found this when trying to activate the storage:

Nov 17 16:57:58 localhost sanlock[11899]: 2012-11-17 16:57:58+0200 29123 [13385]: open error -13 /rhev/data-center/mnt/_media_ceva2_Ovirt_Storage/f021f6dd-0f88-4d5e-842f-b54e8cb5f846/dom_md/ids
Nov 17 16:57:58 localhost sanlock[11899]: 2012-11-17 16:57:58+0200 29123 [13385]: s1956 open_disk /rhev/data-center/mnt/_media_ceva2_Ovirt_Storage/f021f6dd-0f88-4d5e-842f-b54e8cb5f846/dom_md/ids error -13
Nov 17 16:57:59 localhost setroubleshoot: SELinux is preventing /usr/sbin/sanlock from search access on the directory Storage. For complete SELinux messages. run sealert -l 026bd86b-153c-403a-ab2d-043e381be6cc
Nov 17 16:58:01 localhost vdsm TaskManager.Task ERROR Task=`eb4b34ff-04a8-4d12-9338-ebce08f554ca`::Unexpected error

Running the sealert command:

[root@localhost log]# sealert -l 026bd86b-153c-403a-ab2d-043e381be6cc
SELinux is preventing /usr/sbin/sanlock from search access on the directory Storage.

***** Plugin catchall (100. confidence) suggests ***************************

If you believe that sanlock should be allowed search access on the Storage directory by default. Then you should report this as a bug. You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# grep sanlock /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp

Additional Information:
Source Context                system_u:system_r:sanlock_t:s0-s0:c0.c1023
Target Context                unconfined_u:object_r:public_content_rw_t:s0
Target Objects                Storage [ dir ]
Source                        sanlock
Source Path                   /usr/sbin/sanlock
Port                          <Unknown>
Host                          localhost.localdomain
Source RPM Packages           sanlock-2.4-2.fc17.x86_64
Target RPM Packages
Policy RPM                    selinux-policy-3.10.0-159.fc17.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     localhost.localdomain
Platform                      Linux localhost.localdomain 3.6.6-1.fc17.x86_64 #1 SMP Mon Nov 5 21:59:35 UTC 2012 x86_64 x86_64
Alert Count                   1980
First Seen                    2012-11-16 11:03:19 EET
Last Seen                     2012-11-17 16:58:18 EET
Local ID                      026bd86b-153c-403a-ab2d-043e381be6cc

Raw Audit Messages
type=AVC msg=audit(1353164298.898:5507): avc: denied { search } for pid=13449 comm="sanlock" name="Storage" dev="dm-12" ino=4456450 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:public_content_rw_t:s0 tclass=dir
type=SYSCALL msg=audit(1353164298.898:5507): arch=x86_64 syscall=open success=no exit=EACCES a0=7f50b80009c8 a1=105002 a2=0 a3=0 items=0 ppid=1 pid=13449 auid=4294967295 uid=179 gid=179 euid=179 suid=179 fsuid=179 egid=179 sgid=179 fsgid=179 tty=(none) ses=4294967295 comm=sanlock exe=/usr/sbin/sanlock subj=system_u:system_r:sanlock_t:s0-s0:c0.c1023 key=(null)

Hash: sanlock,sanlock_t,public_content_rw_t,dir,search

audit2allow
#============= sanlock_t ==============
allow sanlock_t public_content_rw_t:dir search;

audit2allow -R
#============= sanlock_t ==============
allow sanlock_t public_content_rw_t:dir search;

On Fri, Nov 16, 2012 at 7:51 PM, Federico Simoncelli <fsimonce@redhat.com> wrote:
----- Original Message -----
From: "Cristian Falcas" <cristi.falcas@gmail.com>
To: "Federico Simoncelli" <fsimonce@redhat.com>
Cc: "Jorick Astrego" <jorick@netbulae.eu>, users@ovirt.org
Sent: Friday, November 16, 2012 6:47:50 PM
Subject: Re: [Users] could not add local storage domain
it's working for me with the latest files.
Current issues:
- You need to create the db user as superuser
- disable selinux.
Can you grab the relevant AVC errors and report them in a bug?
Thanks, -- Federico

Cristian,

This is the link for bug reports:

https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt

Regards,

Jorick

On 11/17/2012 06:16 PM, Cristian Falcas wrote:
Please let me know how to do this, or if the info below is enough.
In the logs I found this when trying to activate the storage:
Nov 17 16:57:58 localhost sanlock[11899]: 2012-11-17 16:57:58+0200 29123 [13385]: open error -13 /rhev/data-center/mnt/_media_ceva2_Ovirt_Storage/f021f6dd-0f88-4d5e-842f-b54e8cb5f846/dom_md/ids
Nov 17 16:57:58 localhost sanlock[11899]: 2012-11-17 16:57:58+0200 29123 [13385]: s1956 open_disk /rhev/data-center/mnt/_media_ceva2_Ovirt_Storage/f021f6dd-0f88-4d5e-842f-b54e8cb5f846/dom_md/ids error -13
Nov 17 16:57:59 localhost setroubleshoot: SELinux is preventing /usr/sbin/sanlock from search access on the directory Storage. For complete SELinux messages. run sealert -l 026bd86b-153c-403a-ab2d-043e381be6cc
Nov 17 16:58:01 localhost vdsm TaskManager.Task ERROR Task=`eb4b34ff-04a8-4d12-9338-ebce08f554ca`::Unexpected error
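[Editor's note, not part of the original message: the `-13` in the sanlock open error is a kernel-style negated errno, i.e. errno 13 = EACCES (permission denied), which matches the SELinux denial reported below. A quick check in Python:]

```python
import errno
import os

# sanlock logs "open error -13": a negated errno value.
code = -13
print(errno.errorcode[-code])   # symbolic name for errno 13 -> EACCES
print(os.strerror(-code))       # human-readable message -> Permission denied
```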
Running the sealert command :
[root@localhost log]# sealert -l 026bd86b-153c-403a-ab2d-043e381be6cc
SELinux is preventing /usr/sbin/sanlock from search access on the directory Storage.
***** Plugin catchall (100. confidence) suggests ***************************
If you believe that sanlock should be allowed search access on the Storage directory by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# grep sanlock /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp
Additional Information:
Source Context                system_u:system_r:sanlock_t:s0-s0:c0.c1023
Target Context                unconfined_u:object_r:public_content_rw_t:s0
Target Objects                Storage [ dir ]
Source                        sanlock
Source Path                   /usr/sbin/sanlock
Port                          <Unknown>
Host                          localhost.localdomain
Source RPM Packages           sanlock-2.4-2.fc17.x86_64
Target RPM Packages
Policy RPM                    selinux-policy-3.10.0-159.fc17.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     localhost.localdomain
Platform                      Linux localhost.localdomain 3.6.6-1.fc17.x86_64 #1 SMP Mon Nov 5 21:59:35 UTC 2012 x86_64 x86_64
Alert Count                   1980
First Seen                    2012-11-16 11:03:19 EET
Last Seen                     2012-11-17 16:58:18 EET
Local ID                      026bd86b-153c-403a-ab2d-043e381be6cc
Raw Audit Messages type=AVC msg=audit(1353164298.898:5507): avc: denied { search } for pid=13449 comm="sanlock" name="Storage" dev="dm-12" ino=4456450 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:public_content_rw_t:s0 tclass=dir
type=SYSCALL msg=audit(1353164298.898:5507): arch=x86_64 syscall=open success=no exit=EACCES a0=7f50b80009c8 a1=105002 a2=0 a3=0 items=0 ppid=1 pid=13449 auid=4294967295 uid=179 gid=179 euid=179 suid=179 fsuid=179 egid=179 sgid=179 fsgid=179 tty=(none) ses=4294967295 comm=sanlock exe=/usr/sbin/sanlock subj=system_u:system_r:sanlock_t:s0-s0:c0.c1023 key=(null)
Hash: sanlock,sanlock_t,public_content_rw_t,dir,search
audit2allow
#============= sanlock_t ==============
allow sanlock_t public_content_rw_t:dir search;
audit2allow -R
#============= sanlock_t ==============
allow sanlock_t public_content_rw_t:dir search;
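[Editor's note, not part of the original message: the fields that matter in the raw AVC record above — source type, target type, object class, and denied permission — can be extracted mechanically; the sketch below (my own, not from audit2allow's source) rebuilds the same allow rule audit2allow prints:]

```python
import re

# The raw AVC record from the sealert output above.
avc = ('type=AVC msg=audit(1353164298.898:5507): avc: denied { search } '
       'for pid=13449 comm="sanlock" name="Storage" dev="dm-12" ino=4456450 '
       'scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 '
       'tcontext=unconfined_u:object_r:public_content_rw_t:s0 tclass=dir')

# Pull out the denied permission, the source/target SELinux types
# (third field of each context), and the object class.
m = re.search(r'denied\s+\{\s*(?P<perm>\w+)\s*\}.*'
              r'scontext=\w+:\w+:(?P<stype>\w+):.*'
              r'tcontext=\w+:\w+:(?P<ttype>\w+):.*'
              r'tclass=(?P<tclass>\w+)', avc)

rule = f"allow {m['stype']} {m['ttype']}:{m['tclass']} {m['perm']};"
print(rule)  # allow sanlock_t public_content_rw_t:dir search;
```

This reproduces exactly the rule shown in the audit2allow output above.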
On Fri, Nov 16, 2012 at 7:51 PM, Federico Simoncelli <fsimonce@redhat.com> wrote:
----- Original Message -----
> From: "Cristian Falcas" <cristi.falcas@gmail.com>
> To: "Federico Simoncelli" <fsimonce@redhat.com>
> Cc: "Jorick Astrego" <jorick@netbulae.eu>, users@ovirt.org
> Sent: Friday, November 16, 2012 6:47:50 PM
> Subject: Re: [Users] could not add local storage domain
>
> it's working for me with the latest files.
>
> Current issues:
> - You need to create the db user as superuser
> - disable selinux.
Can you grab the relevant AVC errors and report them in a bug?
Thanks, -- Federico
--
Met vriendelijke groet,

Jorick Astrego

Netbulae B.V.
Staalsteden 4-13
7547 TA Enschede

Tel. +31 (0)53 - 20 30 270
Email: jorick@netbulae.eu
Site: http://www.netbulae.eu

I opened https://bugzilla.redhat.com/show_bug.cgi?id=877715 on vdsm

On Sun, Nov 18, 2012 at 11:44 AM, Jorick Astrego <jorick@netbulae.eu> wrote:
Cristian,
This is the link for bug reports:
https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt
Regards,
Jorick
participants (3)
- Cristian Falcas
- Federico Simoncelli
- Jorick Astrego