[Users] NFS ISO Domain
by suporte@logicworks.pt
Hi,
I just installed the latest version of the engine and host using Fedora 18. I noticed that on the engine the NFS server is not up and running, so it is not possible to attach the ISO domain.
How can I start the NFS server on the engine?
Is this the correct thing to do?
vi /etc/idmapd.conf
# line 5: uncomment and change to your domain name
Domain = server.world
[root@dlp ~]# systemctl start rpcbind.service
[root@dlp ~]# systemctl start nfs-server.service
[root@dlp ~]# systemctl start nfs-lock.service
[root@dlp ~]# systemctl start nfs-idmap.service
[root@dlp ~]# systemctl enable rpcbind.service
[root@dlp ~]# systemctl enable nfs-server.service
[root@dlp ~]# systemctl enable nfs-lock.service
[root@dlp ~]# systemctl enable nfs-idmap.service
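For completeness, a minimal sketch of the export side, assuming a hypothetical /var/lib/exports/iso directory for the ISO domain and the vdsm:kvm (uid/gid 36:36) ownership oVirt expects; adjust the path and export options to your setup:
# /etc/exports (hypothetical entry for the ISO domain)
/var/lib/exports/iso    *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)
# ownership and permissions VDSM expects on the export
chown -R 36:36 /var/lib/exports/iso
chmod 0755 /var/lib/exports/iso
# publish the export and verify it is visible
exportfs -ra
showmount -e localhost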
--
Jose Ferradeira
http://www.logicworks.pt
[Users] Wrong Master domain or its version
by Jonas Israelsson
Greetings.
This post is related to my previous post
(http://lists.ovirt.org/pipermail/users/2013-April/013912.html) but with
some more investigation and findings.
When trying to activate my host/node it goes back to non-operational
since it's unable to attach the master storage domain.
A few snippets from the log (the whole log is attached):
Thread-12399::ERROR::2013-04-22
11:02:17,627::sp::1519::Storage.StoragePool::(getMasterDomain) Requested
master domain ea5643f7-f41f-4c44-a27f-f31240cd0f19 does not have
expected version 110 it is version 1
Thread-12399::INFO::2013-04-22
11:02:17,630::task::1134::TaskManager.Task::(prepare)
Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::aborting: Task is aborted:
'Wrong Master domain or its version' - code 324
Thread-12399::DEBUG::2013-04-22
11:02:17,630::task::1139::TaskManager.Task::(prepare)
Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::Prepare: aborted: Wrong
Master domain or its version
I'm not sure whether "does not have expected version 110 it is version 1" is a
red herring or the actual problem. I get the following output
from vdsClient, where no version seems to match the versions from the log.
vdsClient -s 0 getStorageDomainInfo ea5643f7-f41f-4c44-a27f-f31240cd0f19
uuid = ea5643f7-f41f-4c44-a27f-f31240cd0f19
vguuid = 5Qox85-NQCz-pyVM-aFRN-KJ2P-jB1T-IiETQr
lver = -1
state = OK
version = 3
role = Master
pool = ['dae1ba5e-548a-473e-94b6-7d08d3a6229a']
spm_id = -1
type = ISCSI
class = Data
master_ver = 0
name = hinken
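As a hedged cross-check (the UUIDs are taken from the log above; treat the exact metadata tag names as an assumption), the master version recorded on the domain itself can be compared with the 110 that the engine passes to connectStoragePool:
# show the domain metadata tags kept on the VG (on block domains the master
# version is stored as a tag, e.g. MDT_MASTER_VERSION=...)
vgs -o +vg_tags ea5643f7-f41f-4c44-a27f-f31240cd0f19
# if a host can still connect to the pool, ask VDSM what it reports as master_ver
vdsClient -s 0 getStoragePoolInfo dae1ba5e-548a-473e-94b6-7d08d3a6229a
The attached log already shows MASTER_VERSION=1 in the metadata read back from the domain, so the mismatch with the engine's expected version 110 looks real rather than a logging artifact.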
Found these that seem somewhat similar:
https://bugzilla.redhat.com/show_bug.cgi?id=784038
http://lists.ovirt.org/pipermail/users/2012-December/010982.html
My whole site is dead now due to this, so any help to resolve the
issue would be greatly appreciated.
With kind regards
Jonas
vdsm.log
Thread-12388::DEBUG::2013-04-22 11:02:09,292::BindingXMLRPC::913::vds::(wrapper) client [46.22.124.43]::call getCapabilities with () {}
Thread-12388::DEBUG::2013-04-22 11:02:09,349::BindingXMLRPC::920::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:a07da5fa539'}], 'FC': []}, 'packages2': {'kernel': {'release': '205.fc18.x86_64', 'buildtime': 1361736602.0, 'version': '3.7.9'}, 'spice-server': {'release': '3.fc18', 'buildtime': 1358784016L, 'version': '0.12.2'}, 'vdsm': {'release': '9.fc18', 'buildtime': 1362046220L, 'version': '4.10.3'}, 'qemu-kvm': {'release': '6.fc18', 'buildtime': 1359843954L, 'version': '1.2.2'}, 'libvirt': {'release': '1.fc18', 'buildtime': 1359405439L, 'version': '0.10.2.3'}, 'qemu-img': {'release': '6.fc18', 'buildtime': 1359843954L, 'version': '1.2.2'}, 'mom': {'release': '1.fc18', 'buildtime': 1349470214L, 'version': '0.3.0'}}, 'cpuModel': 'AMD Opteron(TM) Processor 6276', 'hooks': {}, 'cpuSockets': '4', 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '46.22.124.37', 'cfg': {'IPADDR': '46.22.124.37', 'PEERDNS': 'no', 'GATEWAY': '46.22.124.36', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.224', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.224', 'stp': 'off', 'bridged': True, 'gateway': '46.22.124.36', 'ports': ['em1']}, 'san': {'iface': 'san', 'addr': '192.168.43.11', 'cfg': {'IPADDR': '192.168.43.11', 'PEERDNS': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'san', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '9000', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '0.0.0.0', 'ports': ['em2']}, 'vminternet': {'iface': 'vminternet', 'addr': '', 'cfg': {'PEERDNS': 'no', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'STP': 'no', 'DEVICE': 'vminternet', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'stp': 'off', 'bridged': True, 'gateway': '0.0.0.0', 'ports': ['em4']}}, 'bridges': {'ovirtmgmt': {'addr': '46.22.124.37', 'cfg': {'IPADDR': '46.22.124.37', 'PEERDNS': 'no', 'GATEWAY': '46.22.124.36', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.224', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.224', 'stp': 'off', 'ports': ['em1']}, 'san': {'addr': '192.168.43.11', 'cfg': {'IPADDR': '192.168.43.11', 'PEERDNS': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'san', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '9000', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['em2']}, 'vminternet': {'addr': '', 'cfg': {'PEERDNS': 'no', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'STP': 'no', 'DEVICE': 'vminternet', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ports': ['em4']}}, 'uuid': '4C4C4544-0042-4710-8030-B3C04F37354A', 'lastClientIface': 'ovirtmgmt', 'nics': {'em4': {'addr': '', 'cfg': {'BRIDGE': 'vminternet', 'NM_CONTROLLED': 'no', 'PEERDNS': 'no', 'HWADDR': 'd0:67:e5:f9:2e:20', 'STP': 'no', 'DEVICE': 'em4', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': 'd0:67:e5:f9:2e:20', 'speed': 100}, 'em1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'NM_CONTROLLED': 'no', 'PEERDNS': 'no', 'HWADDR': 'd0:67:e5:f9:2e:1a', 'STP': 'no', 'DEVICE': 'em1', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': 
'd0:67:e5:f9:2e:1a', 'speed': 1000}, 'em3': {'addr': '', 'cfg': {'NM_CONTROLLED': 'no', 'PEERDNS': 'no', 'HWADDR': 'd0:67:e5:f9:2e:1e', 'STP': 'no', 'DEVICE': 'em3', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': 'd0:67:e5:f9:2e:1e', 'speed': 0}, 'em2': {'addr': '', 'cfg': {'BRIDGE': 'san', 'NM_CONTROLLED': 'no', 'PEERDNS': 'no', 'MTU': '9000', 'HWADDR': 'd0:67:e5:f9:2e:1c', 'STP': 'no', 'DEVICE': 'em2', 'ONBOOT': 'yes'}, 'mtu': '9000', 'netmask': '', 'hwaddr': 'd0:67:e5:f9:2e:1c', 'speed': 1000}}, 'software_revision': '9', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,amd_dcm,aperfmperf,pni,pclmulqdq,monitor,ssse3,cx16,sse4_1,sse4_2,popcnt,aes,xsave,avx,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,xop,skinit,wdt,lwp,fma4,nodeid_msr,topoext,perfctr_core,arat,cpb,hw_pstate,npt,lbrv,svm_lock,nrip_save,tsc_scale,vmcb_clean,flushbyasid,decodeassists,pausefilter,pfthreshold,model_Opteron_G3,model_Opteron_G1,model_Opteron_G4,model_Opteron_G2', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:a07da5fa539', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'], 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize': '64210', 'cpuSpeed': '2299.953', 'version_name': 'Snow Man', 'vlans': {}, 'cpuCores': '32', 'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '', 'cpuThreads': '32', 'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '20120228.fc18', 'version': '2.6.1', 'name': 'oVirt Node'}, 'lastClient': '46.22.124.43'}}
Thread-12389::DEBUG::2013-04-22 11:02:09,364::BindingXMLRPC::913::vds::(wrapper) client [46.22.124.43]::call getHardwareInfo with () {}
Thread-12389::DEBUG::2013-04-22 11:02:09,365::BindingXMLRPC::920::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'PowerEdge R715', 'systemSerialNumber': '3BG075J', 'systemFamily': 'Not Specified', 'systemVersion': 'Not Specified', 'systemUUID': '4C4C4544-0042-4710-8030-B3C04F37354A', 'systemManufacturer': 'Dell Inc.'}}
Thread-12390::DEBUG::2013-04-22 11:02:11,238::BindingXMLRPC::913::vds::(wrapper) client [46.22.124.43]::call getCapabilities with () {}
Thread-12390::DEBUG::2013-04-22 11:02:11,296::BindingXMLRPC::920::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:a07da5fa539'}], 'FC': []}, 'packages2': {'kernel': {'release': '205.fc18.x86_64', 'buildtime': 1361736602.0, 'version': '3.7.9'}, 'spice-server': {'release': '3.fc18', 'buildtime': 1358784016L, 'version': '0.12.2'}, 'vdsm': {'release': '9.fc18', 'buildtime': 1362046220L, 'version': '4.10.3'}, 'qemu-kvm': {'release': '6.fc18', 'buildtime': 1359843954L, 'version': '1.2.2'}, 'libvirt': {'release': '1.fc18', 'buildtime': 1359405439L, 'version': '0.10.2.3'}, 'qemu-img': {'release': '6.fc18', 'buildtime': 1359843954L, 'version': '1.2.2'}, 'mom': {'release': '1.fc18', 'buildtime': 1349470214L, 'version': '0.3.0'}}, 'cpuModel': 'AMD Opteron(TM) Processor 6276', 'hooks': {}, 'cpuSockets': '4', 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '46.22.124.37', 'cfg': {'IPADDR': '46.22.124.37', 'PEERDNS': 'no', 'GATEWAY': '46.22.124.36', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.224', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.224', 'stp': 'off', 'bridged': True, 'gateway': '46.22.124.36', 'ports': ['em1']}, 'san': {'iface': 'san', 'addr': '192.168.43.11', 'cfg': {'IPADDR': '192.168.43.11', 'PEERDNS': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'san', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '9000', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '0.0.0.0', 'ports': ['em2']}, 'vminternet': {'iface': 'vminternet', 'addr': '', 'cfg': {'PEERDNS': 'no', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'STP': 'no', 'DEVICE': 'vminternet', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'stp': 'off', 'bridged': True, 'gateway': '0.0.0.0', 'ports': ['em4']}}, 'bridges': {'ovirtmgmt': {'addr': '46.22.124.37', 'cfg': {'IPADDR': '46.22.124.37', 'PEERDNS': 'no', 'GATEWAY': '46.22.124.36', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.224', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.224', 'stp': 'off', 'ports': ['em1']}, 'san': {'addr': '192.168.43.11', 'cfg': {'IPADDR': '192.168.43.11', 'PEERDNS': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'san', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '9000', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['em2']}, 'vminternet': {'addr': '', 'cfg': {'PEERDNS': 'no', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'STP': 'no', 'DEVICE': 'vminternet', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ports': ['em4']}}, 'uuid': '4C4C4544-0042-4710-8030-B3C04F37354A', 'lastClientIface': 'ovirtmgmt', 'nics': {'em4': {'addr': '', 'cfg': {'BRIDGE': 'vminternet', 'NM_CONTROLLED': 'no', 'PEERDNS': 'no', 'HWADDR': 'd0:67:e5:f9:2e:20', 'STP': 'no', 'DEVICE': 'em4', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': 'd0:67:e5:f9:2e:20', 'speed': 100}, 'em1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'NM_CONTROLLED': 'no', 'PEERDNS': 'no', 'HWADDR': 'd0:67:e5:f9:2e:1a', 'STP': 'no', 'DEVICE': 'em1', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': 
'd0:67:e5:f9:2e:1a', 'speed': 1000}, 'em3': {'addr': '', 'cfg': {'NM_CONTROLLED': 'no', 'PEERDNS': 'no', 'HWADDR': 'd0:67:e5:f9:2e:1e', 'STP': 'no', 'DEVICE': 'em3', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': 'd0:67:e5:f9:2e:1e', 'speed': 0}, 'em2': {'addr': '', 'cfg': {'BRIDGE': 'san', 'NM_CONTROLLED': 'no', 'PEERDNS': 'no', 'MTU': '9000', 'HWADDR': 'd0:67:e5:f9:2e:1c', 'STP': 'no', 'DEVICE': 'em2', 'ONBOOT': 'yes'}, 'mtu': '9000', 'netmask': '', 'hwaddr': 'd0:67:e5:f9:2e:1c', 'speed': 1000}}, 'software_revision': '9', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,amd_dcm,aperfmperf,pni,pclmulqdq,monitor,ssse3,cx16,sse4_1,sse4_2,popcnt,aes,xsave,avx,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,xop,skinit,wdt,lwp,fma4,nodeid_msr,topoext,perfctr_core,arat,cpb,hw_pstate,npt,lbrv,svm_lock,nrip_save,tsc_scale,vmcb_clean,flushbyasid,decodeassists,pausefilter,pfthreshold,model_Opteron_G3,model_Opteron_G1,model_Opteron_G4,model_Opteron_G2', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:a07da5fa539', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'], 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize': '64210', 'cpuSpeed': '2299.953', 'version_name': 'Snow Man', 'vlans': {}, 'cpuCores': '32', 'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '', 'cpuThreads': '32', 'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '20120228.fc18', 'version': '2.6.1', 'name': 'oVirt Node'}, 'lastClient': '46.22.124.43'}}
Thread-12391::DEBUG::2013-04-22 11:02:11,306::BindingXMLRPC::913::vds::(wrapper) client [46.22.124.43]::call getHardwareInfo with () {}
Thread-12391::DEBUG::2013-04-22 11:02:11,307::BindingXMLRPC::920::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'PowerEdge R715', 'systemSerialNumber': '3BG075J', 'systemFamily': 'Not Specified', 'systemVersion': 'Not Specified', 'systemUUID': '4C4C4544-0042-4710-8030-B3C04F37354A', 'systemManufacturer': 'Dell Inc.'}}
Thread-12393::DEBUG::2013-04-22 11:02:11,366::BindingXMLRPC::161::vds::(wrapper) [46.22.124.43]
Thread-12393::DEBUG::2013-04-22 11:02:11,367::task::568::TaskManager.Task::(_updateState) Task=`bf358e1a-079c-4d29-8a9b-99cd5267d4f8`::moving from state init -> state preparing
Thread-12393::INFO::2013-04-22 11:02:11,367::logUtils::41::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=3, spUUID='dae1ba5e-548a-473e-94b6-7d08d3a6229a', conList=[{'connection': '192.168.43.10', 'iqn': 'iqn.2012-02.se.elementary:ovirt', 'portal': '1', 'user': '', 'password': '******', 'id': '906d9ac3-4ea0-4c5c-bb74-207f768412c4', 'port': '3260'}], options=None)
Thread-12393::INFO::2013-04-22 11:02:11,368::logUtils::44::dispatcher::(wrapper) Run and protect: validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 'id': '906d9ac3-4ea0-4c5c-bb74-207f768412c4'}]}
Thread-12393::DEBUG::2013-04-22 11:02:11,368::task::1151::TaskManager.Task::(prepare) Task=`bf358e1a-079c-4d29-8a9b-99cd5267d4f8`::finished: {'statuslist': [{'status': 0, 'id': '906d9ac3-4ea0-4c5c-bb74-207f768412c4'}]}
Thread-12393::DEBUG::2013-04-22 11:02:11,368::task::568::TaskManager.Task::(_updateState) Task=`bf358e1a-079c-4d29-8a9b-99cd5267d4f8`::moving from state preparing -> state finished
Thread-12393::DEBUG::2013-04-22 11:02:11,368::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-12393::DEBUG::2013-04-22 11:02:11,368::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-12393::DEBUG::2013-04-22 11:02:11,368::task::957::TaskManager.Task::(_decref) Task=`bf358e1a-079c-4d29-8a9b-99cd5267d4f8`::ref 0 aborting False
Thread-12394::DEBUG::2013-04-22 11:02:11,378::BindingXMLRPC::161::vds::(wrapper) [46.22.124.43]
Thread-12394::DEBUG::2013-04-22 11:02:11,379::task::568::TaskManager.Task::(_updateState) Task=`e6d93206-9c45-4326-b1f5-bc74775605a1`::moving from state init -> state preparing
Thread-12394::INFO::2013-04-22 11:02:11,379::logUtils::41::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=1, spUUID='dae1ba5e-548a-473e-94b6-7d08d3a6229a', conList=[{'connection': '192.168.43.10:/storage/ovirt/nfs/iso', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '480ce9bb-4299-43f8-9fe0-8fbac71e54be', 'port': ''}], options=None)
Thread-12394::INFO::2013-04-22 11:02:11,379::logUtils::44::dispatcher::(wrapper) Run and protect: validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 'id': '480ce9bb-4299-43f8-9fe0-8fbac71e54be'}]}
Thread-12394::DEBUG::2013-04-22 11:02:11,379::task::1151::TaskManager.Task::(prepare) Task=`e6d93206-9c45-4326-b1f5-bc74775605a1`::finished: {'statuslist': [{'status': 0, 'id': '480ce9bb-4299-43f8-9fe0-8fbac71e54be'}]}
Thread-12394::DEBUG::2013-04-22 11:02:11,379::task::568::TaskManager.Task::(_updateState) Task=`e6d93206-9c45-4326-b1f5-bc74775605a1`::moving from state preparing -> state finished
Thread-12394::DEBUG::2013-04-22 11:02:11,380::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-12394::DEBUG::2013-04-22 11:02:11,380::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-12394::DEBUG::2013-04-22 11:02:11,380::task::957::TaskManager.Task::(_decref) Task=`e6d93206-9c45-4326-b1f5-bc74775605a1`::ref 0 aborting False
Thread-12395::DEBUG::2013-04-22 11:02:11,387::BindingXMLRPC::161::vds::(wrapper) [46.22.124.43]
Thread-12395::DEBUG::2013-04-22 11:02:11,387::task::568::TaskManager.Task::(_updateState) Task=`5348f964-3322-4250-a78a-d35e682664dd`::moving from state init -> state preparing
Thread-12395::INFO::2013-04-22 11:02:11,388::logUtils::41::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=1, spUUID='dae1ba5e-548a-473e-94b6-7d08d3a6229a', conList=[{'connection': '192.168.43.10:/storage/ovirt/nfs/export_domain', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': 'cbec69e2-884b-4904-89ec-4b947e1f33ba', 'port': ''}], options=None)
Thread-12395::INFO::2013-04-22 11:02:11,388::logUtils::44::dispatcher::(wrapper) Run and protect: validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 'id': 'cbec69e2-884b-4904-89ec-4b947e1f33ba'}]}
Thread-12395::DEBUG::2013-04-22 11:02:11,388::task::1151::TaskManager.Task::(prepare) Task=`5348f964-3322-4250-a78a-d35e682664dd`::finished: {'statuslist': [{'status': 0, 'id': 'cbec69e2-884b-4904-89ec-4b947e1f33ba'}]}
Thread-12395::DEBUG::2013-04-22 11:02:11,388::task::568::TaskManager.Task::(_updateState) Task=`5348f964-3322-4250-a78a-d35e682664dd`::moving from state preparing -> state finished
Thread-12395::DEBUG::2013-04-22 11:02:11,388::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-12395::DEBUG::2013-04-22 11:02:11,388::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-12395::DEBUG::2013-04-22 11:02:11,389::task::957::TaskManager.Task::(_decref) Task=`5348f964-3322-4250-a78a-d35e682664dd`::ref 0 aborting False
Thread-12396::DEBUG::2013-04-22 11:02:11,396::BindingXMLRPC::161::vds::(wrapper) [46.22.124.43]
Thread-12396::DEBUG::2013-04-22 11:02:11,396::task::568::TaskManager.Task::(_updateState) Task=`f5d1e7c3-a014-4d63-983b-6dc088a56f49`::moving from state init -> state preparing
Thread-12396::INFO::2013-04-22 11:02:11,397::logUtils::41::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=3, spUUID='dae1ba5e-548a-473e-94b6-7d08d3a6229a', conList=[{'connection': '192.168.43.10', 'iqn': 'iqn.2012-02.se.elementary:ovirt', 'portal': '1', 'user': '', 'password': '******', 'id': '906d9ac3-4ea0-4c5c-bb74-207f768412c4', 'port': '3260'}], options=None)
Thread-12396::DEBUG::2013-04-22 11:02:11,397::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2012-02.se.elementary:ovirt -I default -p 192.168.43.10:3260 --op=new' (cwd None)
Thread-12396::DEBUG::2013-04-22 11:02:11,420::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-12396::DEBUG::2013-04-22 11:02:11,421::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2012-02.se.elementary:ovirt -I default -p 192.168.43.10:3260 -l' (cwd None)
Thread-12396::DEBUG::2013-04-22 11:02:11,441::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-12396::DEBUG::2013-04-22 11:02:11,442::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2012-02.se.elementary:ovirt -I default -p 192.168.43.10:3260 -n node.startup -v manual --op=update' (cwd None)
Thread-12396::DEBUG::2013-04-22 11:02:11,460::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-12396::INFO::2013-04-22 11:02:11,460::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '906d9ac3-4ea0-4c5c-bb74-207f768412c4'}]}
Thread-12396::DEBUG::2013-04-22 11:02:11,461::task::1151::TaskManager.Task::(prepare) Task=`f5d1e7c3-a014-4d63-983b-6dc088a56f49`::finished: {'statuslist': [{'status': 0, 'id': '906d9ac3-4ea0-4c5c-bb74-207f768412c4'}]}
Thread-12396::DEBUG::2013-04-22 11:02:11,461::task::568::TaskManager.Task::(_updateState) Task=`f5d1e7c3-a014-4d63-983b-6dc088a56f49`::moving from state preparing -> state finished
Thread-12396::DEBUG::2013-04-22 11:02:11,461::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-12396::DEBUG::2013-04-22 11:02:11,461::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-12396::DEBUG::2013-04-22 11:02:11,462::task::957::TaskManager.Task::(_decref) Task=`f5d1e7c3-a014-4d63-983b-6dc088a56f49`::ref 0 aborting False
Thread-12397::DEBUG::2013-04-22 11:02:11,470::BindingXMLRPC::161::vds::(wrapper) [46.22.124.43]
Thread-12397::DEBUG::2013-04-22 11:02:11,470::task::568::TaskManager.Task::(_updateState) Task=`768743d8-d35d-4a1d-b861-b8ef0d74000c`::moving from state init -> state preparing
Thread-12397::INFO::2013-04-22 11:02:11,471::logUtils::41::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='dae1ba5e-548a-473e-94b6-7d08d3a6229a', conList=[{'connection': '192.168.43.10:/storage/ovirt/nfs/iso', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '480ce9bb-4299-43f8-9fe0-8fbac71e54be', 'port': ''}], options=None)
Thread-12397::INFO::2013-04-22 11:02:12,695::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '480ce9bb-4299-43f8-9fe0-8fbac71e54be'}]}
Thread-12397::DEBUG::2013-04-22 11:02:12,696::task::1151::TaskManager.Task::(prepare) Task=`768743d8-d35d-4a1d-b861-b8ef0d74000c`::finished: {'statuslist': [{'status': 0, 'id': '480ce9bb-4299-43f8-9fe0-8fbac71e54be'}]}
Thread-12397::DEBUG::2013-04-22 11:02:12,696::task::568::TaskManager.Task::(_updateState) Task=`768743d8-d35d-4a1d-b861-b8ef0d74000c`::moving from state preparing -> state finished
Thread-12397::DEBUG::2013-04-22 11:02:12,696::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-12397::DEBUG::2013-04-22 11:02:12,696::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-12397::DEBUG::2013-04-22 11:02:12,696::task::957::TaskManager.Task::(_decref) Task=`768743d8-d35d-4a1d-b861-b8ef0d74000c`::ref 0 aborting False
Thread-12398::DEBUG::2013-04-22 11:02:12,705::BindingXMLRPC::161::vds::(wrapper) [46.22.124.43]
Thread-12398::DEBUG::2013-04-22 11:02:12,705::task::568::TaskManager.Task::(_updateState) Task=`cd8323a6-b1fe-4fc3-a457-c352bb0712f7`::moving from state init -> state preparing
Thread-12398::INFO::2013-04-22 11:02:12,706::logUtils::41::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='dae1ba5e-548a-473e-94b6-7d08d3a6229a', conList=[{'connection': '192.168.43.10:/storage/ovirt/nfs/export_domain', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': 'cbec69e2-884b-4904-89ec-4b947e1f33ba', 'port': ''}], options=None)
Thread-12398::INFO::2013-04-22 11:02:13,902::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': 'cbec69e2-884b-4904-89ec-4b947e1f33ba'}]}
Thread-12398::DEBUG::2013-04-22 11:02:13,903::task::1151::TaskManager.Task::(prepare) Task=`cd8323a6-b1fe-4fc3-a457-c352bb0712f7`::finished: {'statuslist': [{'status': 0, 'id': 'cbec69e2-884b-4904-89ec-4b947e1f33ba'}]}
Thread-12398::DEBUG::2013-04-22 11:02:13,903::task::568::TaskManager.Task::(_updateState) Task=`cd8323a6-b1fe-4fc3-a457-c352bb0712f7`::moving from state preparing -> state finished
Thread-12398::DEBUG::2013-04-22 11:02:13,903::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-12398::DEBUG::2013-04-22 11:02:13,903::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-12398::DEBUG::2013-04-22 11:02:13,903::task::957::TaskManager.Task::(_decref) Task=`cd8323a6-b1fe-4fc3-a457-c352bb0712f7`::ref 0 aborting False
Thread-12399::DEBUG::2013-04-22 11:02:13,917::BindingXMLRPC::161::vds::(wrapper) [46.22.124.43]
Thread-12399::DEBUG::2013-04-22 11:02:13,917::task::568::TaskManager.Task::(_updateState) Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::moving from state init -> state preparing
Thread-12399::INFO::2013-04-22 11:02:13,918::logUtils::41::dispatcher::(wrapper) Run and protect: connectStoragePool(spUUID='dae1ba5e-548a-473e-94b6-7d08d3a6229a', hostID=1, scsiKey='dae1ba5e-548a-473e-94b6-7d08d3a6229a', msdUUID='ea5643f7-f41f-4c44-a27f-f31240cd0f19', masterVersion=110, options=None)
Thread-12399::DEBUG::2013-04-22 11:02:13,918::resourceManager::190::ResourceManager.Request::(__init__) ResName=`Storage.dae1ba5e-548a-473e-94b6-7d08d3a6229a`ReqID=`15473924-edce-43f6-a4dc-04b7fa1aff61`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at '__init__'
Thread-12399::DEBUG::2013-04-22 11:02:13,918::resourceManager::504::ResourceManager::(registerResource) Trying to register resource 'Storage.dae1ba5e-548a-473e-94b6-7d08d3a6229a' for lock type 'exclusive'
Thread-12399::DEBUG::2013-04-22 11:02:13,919::resourceManager::547::ResourceManager::(registerResource) Resource 'Storage.dae1ba5e-548a-473e-94b6-7d08d3a6229a' is free. Now locking as 'exclusive' (1 active user)
Thread-12399::DEBUG::2013-04-22 11:02:13,919::resourceManager::227::ResourceManager.Request::(grant) ResName=`Storage.dae1ba5e-548a-473e-94b6-7d08d3a6229a`ReqID=`15473924-edce-43f6-a4dc-04b7fa1aff61`::Granted request
Thread-12399::INFO::2013-04-22 11:02:13,920::sp::625::Storage.StoragePool::(connect) Connect host #1 to the storage pool dae1ba5e-548a-473e-94b6-7d08d3a6229a with master domain: ea5643f7-f41f-4c44-a27f-f31240cd0f19 (ver = 110)
Thread-12399::DEBUG::2013-04-22 11:02:13,920::lvm::477::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:13,921::lvm::479::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:13,921::lvm::488::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:13,921::lvm::490::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:13,921::lvm::508::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:13,922::lvm::510::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:13,922::misc::1054::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-12399::DEBUG::2013-04-22 11:02:13,922::misc::1056::SamplingMethod::(__call__) Got in to sampling method
Thread-12399::DEBUG::2013-04-22 11:02:13,922::misc::1054::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-12399::DEBUG::2013-04-22 11:02:13,923::misc::1056::SamplingMethod::(__call__) Got in to sampling method
Thread-12399::DEBUG::2013-04-22 11:02:13,923::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
Thread-12399::DEBUG::2013-04-22 11:02:13,947::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-12399::DEBUG::2013-04-22 11:02:13,948::misc::1064::SamplingMethod::(__call__) Returning last result
MainProcess|Thread-12399::DEBUG::2013-04-22 11:02:13,950::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd of=/sys/class/scsi_host/host0/scan' (cwd None)
MainProcess|Thread-12399::DEBUG::2013-04-22 11:02:13,955::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd of=/sys/class/scsi_host/host1/scan' (cwd None)
MainProcess|Thread-12399::DEBUG::2013-04-22 11:02:13,959::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd of=/sys/class/scsi_host/host2/scan' (cwd None)
MainProcess|Thread-12399::DEBUG::2013-04-22 11:02:13,964::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd of=/sys/class/scsi_host/host3/scan' (cwd None)
MainProcess|Thread-12399::DEBUG::2013-04-22 11:02:13,968::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd of=/sys/class/scsi_host/host4/scan' (cwd None)
MainProcess|Thread-12399::DEBUG::2013-04-22 11:02:13,972::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd of=/sys/class/scsi_host/host5/scan' (cwd None)
MainProcess|Thread-12399::DEBUG::2013-04-22 11:02:13,976::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd of=/sys/class/scsi_host/host6/scan' (cwd None)
MainProcess|Thread-12399::DEBUG::2013-04-22 11:02:13,980::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd of=/sys/class/scsi_host/host7/scan' (cwd None)
MainProcess|Thread-12399::DEBUG::2013-04-22 11:02:13,985::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd of=/sys/class/scsi_host/host8/scan' (cwd None)
MainProcess|Thread-12399::DEBUG::2013-04-22 11:02:13,989::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd of=/sys/class/scsi_host/host10/scan' (cwd None)
MainProcess|Thread-12399::DEBUG::2013-04-22 11:02:13,993::iscsi::402::Storage.ISCSI::(forceIScsiScan) Performing SCSI scan, this will take up to 30 seconds
Thread-12399::DEBUG::2013-04-22 11:02:15,997::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
Thread-12399::DEBUG::2013-04-22 11:02:16,048::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-12399::DEBUG::2013-04-22 11:02:16,048::lvm::477::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:16,049::lvm::479::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:16,049::lvm::488::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:16,049::lvm::490::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:16,049::lvm::508::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:16,050::lvm::510::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:16,050::misc::1064::SamplingMethod::(__call__) Returning last result
Thread-12399::DEBUG::2013-04-22 11:02:16,050::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:16,057::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1IET_00020001|1IET_00020002|1IET_00020003|3500000e019ee26e0|3500000e019ee27f0|3500000e019ee2830|3500000e019efff80|3600508e000000000fdc993aef946ac0f%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free ea5643f7-f41f-4c44-a27f-f31240cd0f19' (cwd None)
Thread-12399::DEBUG::2013-04-22 11:02:16,243::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-12399::DEBUG::2013-04-22 11:02:16,245::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:16,245::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with LvMetadataRW backend
Thread-12399::DEBUG::2013-04-22 11:02:17,380::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd iflag=direct skip=0 bs=2048 if=/dev/ea5643f7-f41f-4c44-a27f-f31240cd0f19/metadata count=1' (cwd None)
Thread-12399::DEBUG::2013-04-22 11:02:17,390::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n2048 bytes (2.0 kB) copied, 0.000642052 s, 3.2 MB/s\n'; <rc> = 0
Thread-12399::DEBUG::2013-04-22 11:02:17,390::misc::325::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '2048 bytes (2.0 kB) copied, 0.000642052 s, 3.2 MB/s'], size: 2048
Thread-12399::DEBUG::2013-04-22 11:02:17,390::persistentDict::234::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=[]
Thread-12399::WARNING::2013-04-22 11:02:17,391::persistentDict::256::Storage.PersistentDict::(refresh) data has no embedded checksum - trust it as it is
Thread-12399::DEBUG::2013-04-22 11:02:17,392::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with VGTagMetadataRW backend
Thread-12399::DEBUG::2013-04-22 11:02:17,392::lvm::483::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:17,393::lvm::485::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:17,393::lvm::493::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:17,393::lvm::505::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:17,394::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:17,394::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1IET_00020001|1IET_00020002|1IET_00020003|3500000e019ee26e0|3500000e019ee27f0|3500000e019ee2830|3500000e019efff80|3600508e000000000fdc993aef946ac0f%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free ea5643f7-f41f-4c44-a27f-f31240cd0f19' (cwd None)
Thread-12399::DEBUG::2013-04-22 11:02:17,504::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-12399::DEBUG::2013-04-22 11:02:17,506::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:17,507::persistentDict::234::Storage.PersistentDict::(refresh) read lines (VGTagMetadataRW)=['PHYBLKSIZE=4096', 'SDUUID=ea5643f7-f41f-4c44-a27f-f31240cd0f19', 'VERSION=3', 'TYPE=ISCSI', u'PV0=pv:1IET_00020001,uuid:0RBLsc-FB0t-RA6I-d12o-jtUx-lIx3-eQwv8Z,pestart:0,pecount:1951,mapoffset:0', 'LOGBLKSIZE=512', 'LEASERETRIES=3', 'DESCRIPTION=hinken', 'LOCKRENEWALINTERVALSEC=5', 'LOCKPOLICY=', 'VGUUID=5Qox85-NQCz-pyVM-aFRN-KJ2P-jB1T-IiETQr', 'CLASS=Data', 'LEASETIMESEC=60', 'IOOPTIMEOUTSEC=10', 'MASTER_VERSION=1', 'ROLE=Master', 'POOL_DESCRIPTION=Elementary', 'POOL_UUID=dae1ba5e-548a-473e-94b6-7d08d3a6229a', 'POOL_SPM_ID=1', u'POOL_DOMAINS=ea5643f7-f41f-4c44-a27f-f31240cd0f19:Active,c5c68e94-14c6-4b29-8f83-2daa680dc6c0:Active,fecb861a-5cf6-4f1e-8bfe-b7a71e122c2f:Active,9f296bdb-2903-4e8c-a019-f10bc3ae4d17:Active,c7452b13-e72e-4ce8-83e1-79c22f76dabf:Active', 'POOL_SPM_LVER=12', '_SHA_CKSUM=0924d492075d543bc008063efa0cd0239f60920e']
Thread-12399::DEBUG::2013-04-22 11:02:17,508::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
Thread-12399::DEBUG::2013-04-22 11:02:17,508::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1IET_00020001|1IET_00020002|1IET_00020003|3500000e019ee26e0|3500000e019ee27f0|3500000e019ee2830|3500000e019efff80|3600508e000000000fdc993aef946ac0f%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
Thread-12399::DEBUG::2013-04-22 11:02:17,624::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-12399::DEBUG::2013-04-22 11:02:17,625::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
Thread-12399::WARNING::2013-04-22 11:02:17,626::sd::361::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace ea5643f7-f41f-4c44-a27f-f31240cd0f19_imageNS already registered
Thread-12399::WARNING::2013-04-22 11:02:17,626::sd::369::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace ea5643f7-f41f-4c44-a27f-f31240cd0f19_volumeNS already registered
Thread-12399::WARNING::2013-04-22 11:02:17,626::blockSD::432::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace ea5643f7-f41f-4c44-a27f-f31240cd0f19_lvmActivationNS already registered
Thread-12399::ERROR::2013-04-22 11:02:17,627::sp::1519::Storage.StoragePool::(getMasterDomain) Requested master domain ea5643f7-f41f-4c44-a27f-f31240cd0f19 does not have expected version 110 it is version 1
Thread-12399::DEBUG::2013-04-22 11:02:17,627::resourceManager::557::ResourceManager::(releaseResource) Trying to release resource 'Storage.dae1ba5e-548a-473e-94b6-7d08d3a6229a'
Thread-12399::DEBUG::2013-04-22 11:02:17,628::resourceManager::573::ResourceManager::(releaseResource) Released resource 'Storage.dae1ba5e-548a-473e-94b6-7d08d3a6229a' (0 active users)
Thread-12399::DEBUG::2013-04-22 11:02:17,628::resourceManager::578::ResourceManager::(releaseResource) Resource 'Storage.dae1ba5e-548a-473e-94b6-7d08d3a6229a' is free, finding out if anyone is waiting for it.
Thread-12399::DEBUG::2013-04-22 11:02:17,628::resourceManager::585::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.dae1ba5e-548a-473e-94b6-7d08d3a6229a', Clearing records.
Thread-12399::ERROR::2013-04-22 11:02:17,628::task::833::TaskManager.Task::(_setError) Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 840, in _run
File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
File "/usr/share/vdsm/storage/hsm.py", line 926, in connectStoragePool
File "/usr/share/vdsm/storage/hsm.py", line 973, in _connectStoragePool
File "/usr/share/vdsm/storage/sp.py", line 642, in connect
File "/usr/share/vdsm/storage/sp.py", line 1166, in __rebuild
File "/usr/share/vdsm/storage/sp.py", line 1520, in getMasterDomain
StoragePoolWrongMaster: Wrong Master domain or its version: 'SD=ea5643f7-f41f-4c44-a27f-f31240cd0f19, pool=dae1ba5e-548a-473e-94b6-7d08d3a6229a'
Thread-12399::DEBUG::2013-04-22 11:02:17,629::task::852::TaskManager.Task::(_run) Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::Task._run: 086e8dc5-7bbd-499e-a12d-736cb14fe79b ('dae1ba5e-548a-473e-94b6-7d08d3a6229a', 1, 'dae1ba5e-548a-473e-94b6-7d08d3a6229a', 'ea5643f7-f41f-4c44-a27f-f31240cd0f19', 110) {} failed - stopping task
Thread-12399::DEBUG::2013-04-22 11:02:17,629::task::1177::TaskManager.Task::(stop) Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::stopping in state preparing (force False)
Thread-12399::DEBUG::2013-04-22 11:02:17,630::task::957::TaskManager.Task::(_decref) Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::ref 1 aborting True
Thread-12399::INFO::2013-04-22 11:02:17,630::task::1134::TaskManager.Task::(prepare) Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::aborting: Task is aborted: 'Wrong Master domain or its version' - code 324
Thread-12399::DEBUG::2013-04-22 11:02:17,630::task::1139::TaskManager.Task::(prepare) Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::Prepare: aborted: Wrong Master domain or its version
Thread-12399::DEBUG::2013-04-22 11:02:17,630::task::957::TaskManager.Task::(_decref) Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::ref 0 aborting True
Thread-12399::DEBUG::2013-04-22 11:02:17,630::task::892::TaskManager.Task::(_doAbort) Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::Task._doAbort: force False
Thread-12399::DEBUG::2013-04-22 11:02:17,631::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-12399::DEBUG::2013-04-22 11:02:17,631::task::568::TaskManager.Task::(_updateState) Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::moving from state preparing -> state aborting
Thread-12399::DEBUG::2013-04-22 11:02:17,631::task::523::TaskManager.Task::(__state_aborting) Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::_aborting: recover policy none
Thread-12399::DEBUG::2013-04-22 11:02:17,631::task::568::TaskManager.Task::(_updateState) Task=`086e8dc5-7bbd-499e-a12d-736cb14fe79b`::moving from state aborting -> state failed
Thread-12399::DEBUG::2013-04-22 11:02:17,631::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-12399::DEBUG::2013-04-22 11:02:17,632::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-12399::ERROR::2013-04-22 11:02:17,632::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': {'message': "Wrong Master domain or its version: 'SD=ea5643f7-f41f-4c44-a27f-f31240cd0f19, pool=dae1ba5e-548a-473e-94b6-7d08d3a6229a'", 'code': 324}}
[Users] virt-v2v
by suporte@logicworks.pt
Hi,
I'm trying to migrate a VM from a qemu/KVM server. I'm not sure of the correct syntax:
virt-v2v -ic qemu+ssh://root@server_ip_addr/system -op export_domain --bridge ovirtmgmt kvm_vm_name
I have set up NFS export storage on the manager for the export domain. How can I find the correct path of the export_domain?
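For reference, a hedged sketch of the invocation used for exporting straight into an oVirt/RHEV export domain (hostnames and the VM name are placeholders; the NFS path is whatever was entered as the Export Path when the export domain was created in the Storage tab, and virt-v2v mounts it itself, so no engine-side path lookup is needed):
virt-v2v -ic qemu+ssh://root@server_ip_addr/system \
         -o rhev -os nfs_server:/path/to/export_domain \
         --bridge ovirtmgmt kvm_vm_name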
regards
--
Jose Ferradeira
http://www.logicworks.pt
[Users] I can't start a virtual machine after I stop it
by 882406701
I have been using oVirt for half a year already. Now oVirt has an error that I can't solve.
The virtual machine returns a permission error when I try to start it.
The permission error is:
"VM kaoqin is down. Exit message: cannot open file '/rhev/data-center/78e07dee-36bc-439b-b71e-72cfd87a2bb3/d69d6576-511c-4b98-b5f3-69c7252c5738/images/dd4c9d3e-8d37-4c92-bcc7-ca7e17fc1ee0/b9d038a0-e3eb-447d-9780-55e2af35f361': Permission denied."
"VM kaoqin is down. Exit message: operation failed: domain 'kaoqin' already exists with uuid 0a558c62-6268-f27e-3437-d7b2da4a42b4."
I can start this virtual machine again after a while,
but ovirt-engine keeps giving me the error:
"VM kaoqin is down. Exit message: cannot open file '/rhev/data-center/78e07dee-36bc-439b-b71e-72cfd87a2bb3/d69d6576-511c-4b98-b5f3-69c7252c5738/images/dd4c9d3e-8d37-4c92-bcc7-ca7e17fc1ee0/b9d038a0-e3eb-447d-9780-55e2af35f361': Permission denied."
And now, when I stop this virtual machine, I can't start it at all.
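A hedged first check for this kind of failure (the path is taken from the error message above; uid/gid 36:36 is the vdsm:kvm user VDSM runs as, so the image files it opens need to be readable by that user):
IMGDIR=/rhev/data-center/78e07dee-36bc-439b-b71e-72cfd87a2bb3/d69d6576-511c-4b98-b5f3-69c7252c5738/images/dd4c9d3e-8d37-4c92-bcc7-ca7e17fc1ee0
ls -ln "$IMGDIR"
# volumes should be owned by 36:36 (vdsm:kvm); if the ownership was lost,
# restore it on the storage side:
chown 36:36 "$IMGDIR"/b9d038a0-e3eb-447d-9780-55e2af35f361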
[Users] users quota and limit ips
by Andrej Bagon
Hi all,
we are wondering how we can limit a user to the IPs we give them and not
others.
This is best understood with an example:
- we give a user a quota (with x CPUs, y memory and z disk space)
- the user can create one virtual machine with all the resources, or several
virtual machines with smaller resources.
- we want to give the user a pool of IPs. They should not use other IPs; if
they use another IP, it should not be routable.
Is there a solution for this problem?
Thank you.
Best Regards,
Andrej Bagon
Arnes
[Users] 1 node cluster problem
by Jakub Bittner
Hi,
I have a one-node cluster (oVirt version 3.2) and this node went down
because of a kernel panic. In the WUI I can see it is down, but the VMs are still
reported to be running (they don't, of course). I manually rebooted that
host and now it is up. But the problem is that I cannot activate it in the WUI
(it is still red).
If I press "Confirm 'Host has been Rebooted'" it says:
Manual fence did not revoke the selected SPM (node1.site.com) since the
master storage domain was not active or could not use another host for
the fence operation.
If I try to put the host into maintenance it says:
Error while executing action: Cannot switch the following Hosts to
Maintenance mode: node1.site.com.
One or more running VMs are indicated as non-migratable. The
non-migratable VMs are: vm2.site.com.
I removed "HA mode" from vm2.site.com, but that did not do the job.
I cannot even power off the running hosts.
Is there anything I can do?
Thank you,
Jakub Bittner
[Users] how to limit each guest to only have access to its own USB device?
by bigclouds
Hi all,
I use a thin client to access a guest VM (through SPICE). If I attach a USB device to the thin client, how can I make sure that this USB device can only be accessed by the guest VM that corresponds to that thin client?
thanks
[Users] High Availability OVirt
by victor nunes
Is there any way to create redundancy for oVirt-engine?
We install oVirt-engine on a single machine; what happens if that machine
burns down?
Is there anything that can be done to remedy this?
Regards,
--
"Viewed from the standpoint of youth, life seems an indefinitely long
future, whereas in old age it looks like a very short past. Thus, at its
beginning, life presents itself the way things appear when we look at them
through binoculars held the wrong way around; but at its end, it looks like
things seen through the binoculars used the normal way. A man must have
grown old and lived long to realize how short life is."
(Poem by Arthur Schopenhauer)
[Users] Configure spice plugin for wan
by Gianluca Cecchi
I remember someone talking about an option for setting SPICE to operate
over WAN, in the sense that this would automatically disable
desktop effects in a Windows 7 VM, etc.
I don't remember if this is a VM parameter, a hook or something else...
Or if this is an option available only in RHEV-M
Any pointer?
Thanks,
Gianluca
[Users] video performance
by Ryan Wilkinson
I'm not getting very good quality playback when watching videos on a VM through
the SPICE console. It works, it's just very jittery. I have installed the client tools
and verified that the display driver is using the Red Hat driver. Any other ideas
for improving video performance?