<html><body>
<p><tt><font size="2">Michal Skrivanek <michal.skrivanek@redhat.com> wrote on 20.09.2012 16:23:31:<br>
<br>
> From: Michal Skrivanek <michal.skrivanek@redhat.com></font></tt><br>
<tt><font size="2">> To: Dmitriy A Pyryakov <DPyryakov@ekb.beeline.ru></font></tt><br>
<tt><font size="2">> Cc: users@ovirt.org</font></tt><br>
<tt><font size="2">> Date: 20.09.2012 16:24</font></tt><br>
<tt><font size="2">> Subject: Re: [Users] Fatal error during migration</font></tt><br>
<tt><font size="2">> <br>
> <br>
> On Sep 20, 2012, at 12:19 , Dmitriy A Pyryakov wrote:<br>
> <br>
> > Michal Skrivanek <michal.skrivanek@redhat.com> wrote on 20.09.2012 16:13:16:<br>
> > <br>
> > > From: Michal Skrivanek <michal.skrivanek@redhat.com><br>
> > > To: Dmitriy A Pyryakov <DPyryakov@ekb.beeline.ru><br>
> > > Cc: users@ovirt.org<br>
> > > Date: 20.09.2012 16:13<br>
> > > Subject: Re: [Users] Fatal error during migration<br>
> > > <br>
> > > <br>
> > > On Sep 20, 2012, at 12:07 , Dmitriy A Pyryakov wrote:<br>
> > > <br>
> > > > Michal Skrivanek <michal.skrivanek@redhat.com> wrote on 20.09.2012 16:02:11:<br>
> > > > <br>
> > > > > From: Michal Skrivanek <michal.skrivanek@redhat.com><br>
> > > > > To: Dmitriy A Pyryakov <DPyryakov@ekb.beeline.ru><br>
> > > > > Cc: users@ovirt.org<br>
> > > > > Date: 20.09.2012 16:02<br>
> > > > > Subject: Re: [Users] Fatal error during migration<br>
> > > > > <br>
> > > > > Hi,<br>
> > > > > well, so what is the other side saying? Maybe some connectivity <br>
> > > > > problems between those 2 hosts? firewall? <br>
> > > > > <br>
> > > > > Thanks,<br>
> > > > > michal<br>
> > > > <br>
> > > > Yes, the firewall is not configured properly by default. If I stop it,<br>
> > > the migration completes.<br>
> > > > Thanks.<br>
> > > The default is supposed to be:<br>
> > > <br>
> > > # oVirt default firewall configuration. Automatically generated by <br>
> > > vdsm bootstrap script.<br>
> > > *filter<br>
> > > :INPUT ACCEPT [0:0]<br>
> > > :FORWARD ACCEPT [0:0]<br>
> > > :OUTPUT ACCEPT [0:0]<br>
> > > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT<br>
> > > -A INPUT -p icmp -j ACCEPT<br>
> > > -A INPUT -i lo -j ACCEPT<br>
> > > # vdsm<br>
> > > -A INPUT -p tcp --dport 54321 -j ACCEPT<br>
> > > # libvirt tls<br>
> > > -A INPUT -p tcp --dport 16514 -j ACCEPT<br>
> > > # SSH<br>
> > > -A INPUT -p tcp --dport 22 -j ACCEPT<br>
> > > # guest consoles<br>
> > > -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT<br>
> > > # migration<br>
> > > -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT<br>
> > > # snmp<br>
> > > -A INPUT -p udp --dport 161 -j ACCEPT<br>
> > > # Reject any other input traffic<br>
> > > -A INPUT -j REJECT --reject-with icmp-host-prohibited<br>
> > > -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with<br>
> > > icmp-host-prohibited<br>
> > > COMMIT<br>
> > <br>
> > my default is:<br>
> > <br>
> > # cat /etc/sysconfig/iptables<br>
> > # oVirt automatically generated firewall configuration<br>
> > *filter<br>
> > :INPUT ACCEPT [0:0]<br>
> > :FORWARD ACCEPT [0:0]<br>
> > :OUTPUT ACCEPT [0:0]<br>
> > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT<br>
> > -A INPUT -p icmp -j ACCEPT<br>
> > -A INPUT -i lo -j ACCEPT<br>
> > #vdsm<br>
> > -A INPUT -p tcp --dport 54321 -j ACCEPT<br>
> > # SSH<br>
> > -A INPUT -p tcp --dport 22 -j ACCEPT<br>
> > # guest consoles<br>
> > -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT<br>
> > # migration<br>
> > -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT<br>
> > # snmp<br>
> > -A INPUT -p udp --dport 161 -j ACCEPT<br>
> > #<br>
> > -A INPUT -j REJECT --reject-with icmp-host-prohibited<br>
> > -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-<br>
> with icmp-host-prohibited<br>
> > COMMIT<br>
> > <br>
> > > <br>
> > > did you change it manually or is the default missing anything?<br>
> > <br>
> > The default is missing the "libvirt tls" rule.<br>
> was it an upgrade of some sort?</font></tt><br>
<tt><font size="2">No.</font></tt><br>
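As a reference, the difference between the two files quoted above is only the libvirt TLS entry; a minimal manual fix, assuming the stock oVirt layout, is to add these two lines to /etc/sysconfig/iptables on each node (before the final REJECT rule) and restart iptables:

```
# libvirt tls
-A INPUT -p tcp --dport 16514 -j ACCEPT
```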
<br>
<tt><font size="2">> These are installed at node setup <br>
> from ovirt-engine. Check the engine version and/or the <br>
> IPTablesConfig in vdc_options table on engine<br>
</font></tt><br>
<tt><font size="2">oVirt engine version: 3.1.0-2.fc17</font></tt><br>
<br>
<tt><font size="2">engine=# select * from vdc_options where option_id=100;</font></tt><br>
<tt><font size="2"> option_id | option_name | option_value | version</font></tt><br>
<tt><font size="2">-----------+----------------+-------------------------------------------------------------------------------------------+---------</font></tt><br>
<tt><font size="2"> 100 | IPTablesConfig | # oVirt default firewall configuration. Automatically generated by vdsm bootstrap script.+| general</font></tt><br>
<tt><font size="2"> | | *filter +|</font></tt><br>
<tt><font size="2"> | | :INPUT ACCEPT [0:0] +|</font></tt><br>
<tt><font size="2"> | | :FORWARD ACCEPT [0:0] +|</font></tt><br>
<tt><font size="2"> | | :OUTPUT ACCEPT [0:0] +|</font></tt><br>
<tt><font size="2"> | | -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT +|</font></tt><br>
<tt><font size="2"> | | -A INPUT -p icmp -j ACCEPT +|</font></tt><br>
<tt><font size="2"> | | -A INPUT -i lo -j ACCEPT +|</font></tt><br>
<tt><font size="2"> | | # vdsm +|</font></tt><br>
<tt><font size="2"> | | -A INPUT -p tcp --dport 54321 -j ACCEPT +|</font></tt><br>
<tt><font size="2"> | | # libvirt tls +|</font></tt><br>
<tt><font size="2"> | | -A INPUT -p tcp --dport 16514 -j ACCEPT +|</font></tt><br>
<tt><font size="2"> | | # SSH +|</font></tt><br>
<tt><font size="2"> | | -A INPUT -p tcp --dport 22 -j ACCEPT +|</font></tt><br>
<tt><font size="2"> | | # guest consoles +|</font></tt><br>
<tt><font size="2"> | | -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT +|</font></tt><br>
<tt><font size="2"> | | # migration +|</font></tt><br>
<tt><font size="2"> | | -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT +|</font></tt><br>
<tt><font size="2"> | | # snmp +|</font></tt><br>
<tt><font size="2"> | | -A INPUT -p udp --dport 161 -j ACCEPT +|</font></tt><br>
<tt><font size="2"> | | # Reject any other input traffic +|</font></tt><br>
<tt><font size="2"> | | -A INPUT -j REJECT --reject-with icmp-host-prohibited +|</font></tt><br>
<tt><font size="2"> | | -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited+|</font></tt><br>
<tt><font size="2"> | | COMMIT +|</font></tt><br>
<tt><font size="2"> | | |</font></tt><br>
<br>
<tt><font size="2">So the IPTablesConfig stored on the engine is correct.</font></tt><br>
<br>
<tt><font size="2">When I add my nodes to engine, I just approve it. I don't have an "Automatically configure host firewall" option.</font></tt><br>
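Before retrying the migration, it can help to confirm from the source host that the destination accepts the two TCP ports migration needs: 54321 (vdsm) and 16514 (libvirt TLS). A minimal sketch; the address 192.168.10.12 is the destination host taken from the log above:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Destination host from the vdsm log above.
for port in (54321, 16514):
    state = "open" if port_open("192.168.10.12", port) else "blocked"
    print(port, state)
```

If 16514 reports blocked, the qemu+tls connection in the traceback above will keep failing; running `virsh -c qemu+tls://192.168.10.12/system list` from the source node exercises the same path.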
<tt><font size="2"><br>
> > <br>
> > > thanks,<br>
> > > michal<br>
> > > > > On Sep 20, 2012, at 11:55 , Dmitriy A Pyryakov wrote:<br>
> > > > > <br>
> > > > > > Hello,<br>
> > > > > > <br>
> > > > > > I have two oVirt nodes ovirt-node-iso-2.5.0-2.0.fc17.<br>
> > > > > > <br>
> > > > > > When I try to migrate VM from one host to another, I have an <br>
> > > > > error: Migration failed due to Error: Fatal error during migration.<br>
> > > > > > <br>
> > > > > > vdsm.log:<br>
> > > > > > Thread-3797::DEBUG::2012-09-20 09:42:56,439::BindingXMLRPC::<br>
> > > > > 859::vds::(wrapper) client [192.168.10.10]::call vmMigrate with <br>
> > > > > ({'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId': <br>
> > > > > '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'},) {} <br>
> > > > > flowID [180ad979]<br>
> > > > > > Thread-3797::DEBUG::2012-09-20 09:42:56,439::API::441::vds::<br>
> > > > > (migrate) {'src': '192.168.10.13', 'dst': '192.168.10.12:54321', <br>
> > > > > 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'}<br>
> > > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::122::vm.Vm::<br>
> > > > > (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-<br>
> > > > > fc2aeeae2e86`::Destination server is: 192.168.10.12:54321<br>
> > > > > > Thread-3797::DEBUG::2012-09-20 09:42:56,441::BindingXMLRPC::<br>
> > > > > 865::vds::(wrapper) return vmMigrate with {'status': {'message': <br>
> > > > > 'Migration process starting', 'code': 0}}<br>
> > > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::124::vm.Vm::<br>
> > > > > (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-<br>
> > > > > fc2aeeae2e86`::Initiating connection with destination<br>
> > > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,452::libvirtvm::<br>
> > > > > 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-<br>
> > > > > fc2aeeae2e86`::Disk hdc stats not available<br>
> > > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,457::vm::170::vm.Vm::<br>
> > > > > (_prepareGuest) vmId=`2bf3e6eb-49e4-42c7-8188-<br>
> > > > > fc2aeeae2e86`::migration Process begins<br>
> > > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,475::vm::217::vm.Vm::(run)<br>
> > > > > vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration <br>
> semaphore acquired<br>
> > > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,888::libvirtvm::<br>
> > > > > 427::vm.Vm::(_startUnderlyingMigration) <br>
> > > > > vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration to <br>
> > > > > qemu+tls://192.168.10.12/system<br>
> > > > > > Thread-3799::DEBUG::2012-09-20 09:42:56,889::libvirtvm::<br>
> > > > > 325::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-<br>
> > > > > fc2aeeae2e86`::migration downtime thread started<br>
> > > > > > Thread-3800::DEBUG::2012-09-20 09:42:56,890::libvirtvm::<br>
> > > > > 353::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-<br>
> > > > > fc2aeeae2e86`::starting migration monitor thread<br>
> > > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,903::libvirtvm::<br>
> > > > > 340::vm.Vm::(cancel) vmId=`2bf3e6eb-49e4-42c7-8188-<br>
> > > > > fc2aeeae2e86`::canceling migration downtime thread<br>
> > > > > > Thread-3798::DEBUG::2012-09-20 09:42:56,904::libvirtvm::<br>
> > > > > 390::vm.Vm::(stop) vmId=`2bf3e6eb-49e4-42c7-8188-<br>
> > > > > fc2aeeae2e86`::stopping migration monitor thread<br>
> > > > > > Thread-3799::DEBUG::2012-09-20 09:42:56,904::libvirtvm::<br>
> > > > > 337::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-<br>
> > > > > fc2aeeae2e86`::migration downtime thread exiting<br>
> > > > > > Thread-3798::ERROR::2012-09-20 09:42:56,905::vm::176::vm.Vm::<br>
> > > > > (_recover) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::operation <br>
> > > > > failed: Failed to connect to remote libvirt URI qemu+tls://192.168.<br>
> > > > > 10.12/system<br>
> > > > > > Thread-3798::ERROR::2012-09-20 09:42:56,977::vm::240::vm.Vm::(run)<br>
> > > > > vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Failed to migrate<br>
> > > > > > Traceback (most recent call last):<br>
> > > > > > File "/usr/share/vdsm/vm.py", line 223, in run<br>
> > > > > > File "/usr/share/vdsm/libvirtvm.py", line 451, in <br>
> > > _startUnderlyingMigration<br>
> > > > > > File "/usr/share/vdsm/libvirtvm.py", line 491, in f<br>
> > > > > > File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",<br>
> > > > > line 82, in wrapper<br>
> > > > > > File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1034, <br>
> > > > > in migrateToURI2<br>
> > > > > > libvirtError: operation failed: Failed to connect to remote <br>
> > > > > libvirt URI qemu+tls://192.168.10.12/system<br>
> > > > > > <br>
> > > > > > Thread-3802::DEBUG::2012-09-20 09:42:57,793::BindingXMLRPC::<br>
> > > > > 859::vds::(wrapper) client [192.168.10.10]::call vmGetStats with <br>
> > > > > ('2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',) {}<br>
> > > > > > Thread-3802::DEBUG::2012-09-20 09:42:57,793::libvirtvm::<br>
> > > > > 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-<br>
> > > > > fc2aeeae2e86`::Disk hdc stats not available<br>
> > > > > > Thread-3802::DEBUG::2012-09-20 09:42:57,794::BindingXMLRPC::<br>
> > > > > 865::vds::(wrapper) return vmGetStats with {'status': {'message': <br>
> > > > > 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username': <br>
> > > > > 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '22047', <br>
> > > > > 'displayIp': '192.168.10.13', 'displayPort': u'5912', 'session': <br>
> > > > > 'Unknown', 'displaySecurePort': u'5913', 'timeOffset': '0', 'hash': <br>
> > > > > '3018874162324753083', 'pauseCode': 'NOERR', 'clientIp': '', <br>
> > > > > 'kvmEnable': 'true', 'network': {u'vnet6': {'macAddr': '00:1a:4a:a8:<br>
> > > > > 0a:08', 'rxDropped': '0', 'rxErrors': '0', 'txDropped': '0', <br>
> > > > > 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': <br>
> > > > > 'unknown', 'speed': '1000', 'name': u'vnet6'}}, 'vmId': <br>
> > > > > '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'displayType': 'qxl', <br>
> > > > > 'cpuUser': '13.27', 'disks': {u'hdc': {'flushLatency': '0', <br>
> > > > > 'readLatency': '0', 'writeLatency': '0'}, u'hda': {'readLatency': <br>
> > > > > '6183805', 'apparentsize': '11811160064', 'writeLatency': '0', <br>
> > > > > 'imageID': 'd96d19f6-5a28-4fef-892f-4a04549d4e38', 'flushLatency': <br>
> > > > > '0', 'readRate': '271.87', 'truesize': '11811160064', 'writeRate': <br>
> > > > > '0.00'}}, 'monitorResponse': '0', 'statsAge': '0.77', 'cpuIdle': <br>
> > > > > '86.73', 'elapsedTime': '3941', 'vmType': 'kvm', 'cpuSys': '0.00', <br>
> > > > > 'appsList': [], 'guestIPs': '', 'nice': ''}]}<br>
> > > > > > Thread-3803::DEBUG::2012-09-20 09:42:57,869::BindingXMLRPC::<br>
> > > > > 859::vds::(wrapper) client [192.168.10.10]::call <br>
> > > > > vmGetMigrationStatus with ('2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',) {}<br>
> > > > > > Thread-3803::DEBUG::2012-09-20 09:42:57,870::BindingXMLRPC::<br>
> > > > > 865::vds::(wrapper) return vmGetMigrationStatus with {'status': <br>
> > > > > {'message': 'Fatal error during migration', 'code': 12}}<br>
> > > > > > Dummy-1264::DEBUG::2012-09-20 09:42:58,172::__init__::<br>
> > > > > 1249::Storage.Misc.excCmd::(_log) 'dd if=/rhev/data-center/<br>
> > > > > 332694bb-364a-434e-b23f-5fef985d3cbd/mastersd/dom_md/inbox <br>
> > > > > iflag=direct,fullblock count=1 bs=1024000' (cwd None)<br>
> > > > > > Dummy-1264::DEBUG::2012-09-20 09:42:58,262::__init__::<br>
> > > > > 1249::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in<br>
> > > > > \n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.0515109 s, 19.9 <br>
> > > > > MB/s\n'; <rc> = 0<br>
> > > > > > Dummy-1264::DEBUG::2012-09-20 09:43:00,271::__init__::<br>
> > > > > 1249::Storage.Misc.excCmd::(_log) 'dd if=/rhev/data-center/<br>
> > > > > 332694bb-364a-434e-b23f-5fef985d3cbd/mastersd/dom_md/inbox <br>
> > > > > iflag=direct,fullblock count=1 bs=1024000' (cwd None)<br>
> > > > > > Dummy-1264::DEBUG::2012-09-20 09:43:00,362::__init__::<br>
> > > > > 1249::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in<br>
> > > > > \n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.0530171 s, 19.3 <br>
> > > > > MB/s\n'; <rc> = 0<br>
> > > > > > Thread-21::DEBUG::2012-09-20 09:43:00,612::__init__::<br>
> > > > > 1249::Storage.Misc.excCmd::(_log) '/usr/bin/dd iflag=direct if=/dev/<br>
> > > > > 26187d25-bfcb-40c7-97d1-667705ad2223/metadata bs=4096 <br>
> count=1' (cwd None)<br>
> > > > > > Thread-21::DEBUG::2012-09-20 09:43:00,629::__init__::<br>
> > > > > 1249::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in<br>
> > > > > \n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000937698 s, 4.4 <br>
> > > > > MB/s\n'; <rc> = 0<br>
> > > > > > Thread-3805::DEBUG::2012-09-20 09:43:01,901::task::<br>
> > > > > 588::TaskManager.Task::(_updateState) Task=`ff134ecc-5597-4a83-81d6-<br>
> > > > > e4f9804871ff`::moving from state init -> state preparing<br>
> > > > > > Thread-3805::INFO::2012-09-20 09:43:01,902::logUtils::<br>
> > > > > 37::dispatcher::(wrapper) Run and protect: repoStats(options=None)<br>
> > > > > > Thread-3805::INFO::2012-09-20 09:43:01,902::logUtils::<br>
> > > > > 39::dispatcher::(wrapper) Run and protect: repoStats, Return <br>
> > > > > response: {'26187d25-bfcb-40c7-97d1-667705ad2223': {'delay': '0.<br>
> > > > > 0180931091309', 'lastCheck': 1348134180.825892, 'code': 0, 'valid': <br>
> > > > > True}, '90104c3d-837b-47dd-8c82-dda92eec30d9': {'delay': '0.<br>
> > > > > 000955820083618', 'lastCheck': 1348134175.493277, 'code': 0, <br>
> > > 'valid': True}}<br>
> > > > > > Thread-3805::DEBUG::2012-09-20 09:43:01,902::task::<br>
> > > > > 1172::TaskManager.Task::(prepare) Task=`ff134ecc-5597-4a83-81d6-<br>
> > > > > e4f9804871ff`::finished: {'26187d25-bfcb-40c7-97d1-667705ad2223': <br>
> > > > > {'delay': '0.0180931091309', 'lastCheck': 1348134180.825892, 'code':<br>
> > > > > 0, 'valid': True}, '90104c3d-837b-47dd-8c82-dda92eec30d9': {'delay':<br>
> > > > > '0.000955820083618', 'lastCheck': 1348134175.493277, 'code': 0, <br>
> > > > > 'valid': True}}<br>
> > > > > > Thread-3805::DEBUG::2012-09-20 09:43:01,902::task::<br>
> > > > > 588::TaskManager.Task::(_updateState) Task=`ff134ecc-5597-4a83-81d6-<br>
> > > > > e4f9804871ff`::moving from state preparing -> state finished<br>
> > > > > > Thread-3805::DEBUG::2012-09-20 09:43:01,903::resourceManager::<br>
> > > > > 809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests <br>
> > > > > {} resources {}<br>
> > > > > > Thread-3805::DEBUG::2012-09-20 09:43:01,903::resourceManager::<br>
> > > > > 844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}<br>
> > > > > > Thread-3805::DEBUG::2012-09-20 09:43:01,903::task::<br>
> > > > > 978::TaskManager.Task::(_decref) Task=`ff134ecc-5597-4a83-81d6-<br>
> > > > > e4f9804871ff`::ref 0 aborting False<br>
> > > > > > Thread-3806::DEBUG::2012-09-20 09:43:01,931::libvirtvm::<br>
> > > > > 240::vm.Vm::(_getDiskStats) vmId=`540335f0-2269-4bc4-<br>
> > > > > aaf4-11bf5990013f`::Disk hdc stats not available<br>
> > > > > > Thread-3806::DEBUG::2012-09-20 09:43:01,931::libvirtvm::<br>
> > > > > 240::vm.Vm::(_getDiskStats) vmId=`2c3af5f5-<br>
> > > > > f877-4e6b-8a34-05bbe78b3c82`::Disk hdc stats not available<br>
> > > > > > Thread-3806::DEBUG::2012-09-20 09:43:01,932::libvirtvm::<br>
> > > > > 240::vm.Vm::(_getDiskStats) vmId=`0ac0dd3a-ae2a-4963-<br>
> > > > > adf1-918993031f6b`::Disk hdc stats not available<br>
> > > > > > Thread-3806::DEBUG::2012-09-20 09:43:01,932::libvirtvm::<br>
> > > > > 240::vm.Vm::(_getDiskStats) vmId=`35a65bb8-cbca-4049-<br>
> > > > > a428-28914bcb094a`::Disk hdc stats not available<br>
> > > > > > Thread-3806::DEBUG::2012-09-20 09:43:01,933::libvirtvm::<br>
> > > > > 240::vm.Vm::(_getDiskStats) vmId=`4ef3258c-0380-4919-991f-<br>
> > > > > ee7be7e9f7fa`::Disk hdc stats not available<br>
> > > > > > Thread-3806::DEBUG::2012-09-20 09:43:01,933::libvirtvm::<br>
> > > > > 240::vm.Vm::(_getDiskStats) vmId=`252e6d46-f362-46aa-a7ed-<br>
> > > > > dd00a86af6f0`::Disk hdc stats not available<br>
> > > > > > Thread-3806::DEBUG::2012-09-20 09:43:01,933::libvirtvm::<br>
> > > > > 240::vm.Vm::(_getDiskStats) vmId=`509e608c-e657-473a-b031-<br>
> > > > > f0811da96bde`::Disk hdc stats not available<br>
> > > > > > Thread-3806::DEBUG::2012-09-20 09:43:01,934::libvirtvm::<br>
> > > > > 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-<br>
> > > > > fc2aeeae2e86`::Disk hdc stats not available<br>
> > > > > > Dummy-1264::DEBUG::2012-09-20 09:43:02,371::__init__::<br>
> > > > > 1249::Storage.Misc.excCmd::(_log) 'dd if=/rhev/data-center/<br>
> > > > > 332694bb-364a-434e-b23f-5fef985d3cbd/mastersd/dom_md/inbox <br>
> > > > > iflag=direct,fullblock count=1 bs=1024000' (cwd None)<br>
> > > > > > Dummy-1264::DEBUG::2012-09-20 09:43:02,462::__init__::<br>
> > > > > 1249::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in<br>
> > > > > \n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.0525183 s, 19.5 <br>
> > > > > MB/s\n'; <rc> = 0<br>
> > > > > > <br>
> > > > > > - -<br>
> > > > > > _______________________________________________<br>
> > > > > > Users mailing list<br>
> > > > > > Users@ovirt.org<br>
> > > > > > <a href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a><br>
> > > > > <br>
> > > > <br>
> > > <br>
> > <br>
> <br>
</font></tt></body></html>