Michal Skrivanek <michal.skrivanek(a)redhat.com> wrote on
20.09.2012 16:13:16:
> From: Michal Skrivanek <michal.skrivanek(a)redhat.com>
> To: Dmitriy A Pyryakov <DPyryakov(a)ekb.beeline.ru>
> Cc: users(a)ovirt.org
> Date: 20.09.2012 16:13
> Subject: Re: [Users] Fatal error during migration
>
>
> On Sep 20, 2012, at 12:07 , Dmitriy A Pyryakov wrote:
>
> > Michal Skrivanek <michal.skrivanek(a)redhat.com> wrote on
> > 20.09.2012 16:02:11:
> >
> > > From: Michal Skrivanek <michal.skrivanek(a)redhat.com>
> > > To: Dmitriy A Pyryakov <DPyryakov(a)ekb.beeline.ru>
> > > Cc: users(a)ovirt.org
> > > Date: 20.09.2012 16:02
> > > Subject: Re: [Users] Fatal error during migration
> > >
> > > Hi,
> > > well, so what is the other side saying? Maybe some connectivity
> > > problems between those 2 hosts? firewall?
> > >
> > > Thanks,
> > > michal
> >
> > Yes, the firewall is not configured properly by default. If I stop it,
> > the migration completes.
> > Thanks.
> The default is supposed to be:
>
> # oVirt default firewall configuration. Automatically generated by
> vdsm bootstrap script.
> *filter
> :INPUT ACCEPT [0:0]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [0:0]
> -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> -A INPUT -p icmp -j ACCEPT
> -A INPUT -i lo -j ACCEPT
> # vdsm
> -A INPUT -p tcp --dport 54321 -j ACCEPT
> # libvirt tls
> -A INPUT -p tcp --dport 16514 -j ACCEPT
> # SSH
> -A INPUT -p tcp --dport 22 -j ACCEPT
> # guest consoles
> -A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
> # migration
> -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
> # snmp
> -A INPUT -p udp --dport 161 -j ACCEPT
> # Reject any other input traffic
> -A INPUT -j REJECT --reject-with icmp-host-prohibited
> -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with
> icmp-host-prohibited
> COMMIT
My default is:
# cat /etc/sysconfig/iptables
# oVirt automatically generated firewall configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
#vdsm
-A INPUT -p tcp --dport 54321 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# guest consoles
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
# migration
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
# snmp
-A INPUT -p udp --dport 161 -j ACCEPT
#
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with
icmp-host-prohibited
COMMIT
>
> did you change it manually or is the default missing anything?
The default is missing the "libvirt tls" rule (tcp port 16514).
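For reference, a minimal workaround sketch until the generated config is
corrected: add the missing rule by hand on each node. This assumes the stock
rule order shown above (the rule must land before the catch-all REJECT, hence
-I rather than -A); the persist step applies to oVirt Node specifically, and
command availability may vary by release:

# insert the libvirt TLS rule ahead of the final REJECT rule
iptables -I INPUT -p tcp --dport 16514 -j ACCEPT
# save the running rules so they survive a reboot
service iptables save
# on oVirt Node, additionally persist the config file
persist /etc/sysconfig/iptables
# optional: from the source host, check the port is now reachable
# (192.168.10.12 is the destination host from the log below)
nc -zv 192.168.10.12 16514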
Was it an upgrade of some sort?
These rules are installed at node setup from ovirt-engine. Check the engine
version and/or the IPTablesConfig value in the vdc_options table on the engine.
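To see what the engine would push at host bootstrap, something like the
following on the engine host should print the template (a sketch, assuming
the default "engine" database name and local postgres access):

# dump the iptables template stored in the engine's vdc_options table
su - postgres -c "psql engine -c \"select option_value from vdc_options where option_name = 'IPTablesConfig';\""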
> thanks,
> michal
> > > On Sep 20, 2012, at 11:55 , Dmitriy A Pyryakov wrote:
> > >
> > > > Hello,
> > > >
> > > > I have two oVirt nodes ovirt-node-iso-2.5.0-2.0.fc17.
> > > >
> > > > When I try to migrate VM from one host to another, I have an
> > > error: Migration failed due to Error: Fatal error during migration.
> > > >
> > > > vdsm.log:
> > > > Thread-3797::DEBUG::2012-09-20 09:42:56,439::BindingXMLRPC::
> > > 859::vds::(wrapper) client [192.168.10.10]::call vmMigrate with
> > > ({'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId':
> > > '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'},) {}
> > > flowID [180ad979]
> > > > Thread-3797::DEBUG::2012-09-20 09:42:56,439::API::441::vds::
> > > (migrate) {'src': '192.168.10.13', 'dst': '192.168.10.12:54321',
> > > 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'}
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::122::vm.Vm::
> > > (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::Destination server is: 192.168.10.12:54321
> > > > Thread-3797::DEBUG::2012-09-20 09:42:56,441::BindingXMLRPC::
> > > 865::vds::(wrapper) return vmMigrate with {'status': {'message':
> > > 'Migration process starting', 'code': 0}}
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::124::vm.Vm::
> > > (_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::Initiating connection with destination
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,452::libvirtvm::
> > > 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::Disk hdc stats not available
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,457::vm::170::vm.Vm::
> > > (_prepareGuest) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::migration Process begins
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,475::vm::217::vm.Vm::(run)
> > > vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration semaphore acquired
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,888::libvirtvm::
> > > 427::vm.Vm::(_startUnderlyingMigration)
> > > vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration to
> > > qemu+tls://192.168.10.12/system
> > > > Thread-3799::DEBUG::2012-09-20 09:42:56,889::libvirtvm::
> > > 325::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::migration downtime thread started
> > > > Thread-3800::DEBUG::2012-09-20 09:42:56,890::libvirtvm::
> > > 353::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::starting migration monitor thread
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,903::libvirtvm::
> > > 340::vm.Vm::(cancel) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::canceling migration downtime thread
> > > > Thread-3798::DEBUG::2012-09-20 09:42:56,904::libvirtvm::
> > > 390::vm.Vm::(stop) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::stopping migration monitor thread
> > > > Thread-3799::DEBUG::2012-09-20 09:42:56,904::libvirtvm::
> > > 337::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::migration downtime thread exiting
> > > > Thread-3798::ERROR::2012-09-20 09:42:56,905::vm::176::vm.Vm::
> > > (_recover) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::operation
> > > failed: Failed to connect to remote libvirt URI qemu+tls://192.168.
> > > 10.12/system
> > > > Thread-3798::ERROR::2012-09-20 09:42:56,977::vm::240::vm.Vm::(run)
> > > vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Failed to migrate
> > > > Traceback (most recent call last):
> > > >   File "/usr/share/vdsm/vm.py", line 223, in run
> > > >   File "/usr/share/vdsm/libvirtvm.py", line 451, in _startUnderlyingMigration
> > > >   File "/usr/share/vdsm/libvirtvm.py", line 491, in f
> > > >   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
> > > >   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1034, in migrateToURI2
> > > > libvirtError: operation failed: Failed to connect to remote
> > > libvirt URI qemu+tls://192.168.10.12/system
> > > >
> > > > Thread-3802::DEBUG::2012-09-20 09:42:57,793::BindingXMLRPC::
> > > 859::vds::(wrapper) client [192.168.10.10]::call vmGetStats with
> > > ('2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',) {}
> > > > Thread-3802::DEBUG::2012-09-20 09:42:57,793::libvirtvm::
> > > 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::Disk hdc stats not available
> > > > Thread-3802::DEBUG::2012-09-20 09:42:57,794::BindingXMLRPC::
> > > 865::vds::(wrapper) return vmGetStats with {'status': {'message':
> > > 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username':
> > > 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '22047',
> > > 'displayIp': '192.168.10.13', 'displayPort': u'5912', 'session':
> > > 'Unknown', 'displaySecurePort': u'5913', 'timeOffset': '0', 'hash':
> > > '3018874162324753083', 'pauseCode': 'NOERR', 'clientIp': '',
> > > 'kvmEnable': 'true', 'network': {u'vnet6': {'macAddr':
> > > '00:1a:4a:a8:0a:08', 'rxDropped': '0', 'rxErrors': '0',
> > > 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors':
> > > '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet6'}},
> > > 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'displayType':
> > > 'qxl', 'cpuUser': '13.27', 'disks': {u'hdc': {'flushLatency': '0',
> > > 'readLatency': '0', 'writeLatency': '0'}, u'hda': {'readLatency':
> > > '6183805', 'apparentsize': '11811160064', 'writeLatency': '0',
> > > 'imageID': 'd96d19f6-5a28-4fef-892f-4a04549d4e38', 'flushLatency':
> > > '0', 'readRate': '271.87', 'truesize': '11811160064', 'writeRate':
> > > '0.00'}}, 'monitorResponse': '0', 'statsAge': '0.77', 'cpuIdle':
> > > '86.73', 'elapsedTime': '3941', 'vmType': 'kvm', 'cpuSys': '0.00',
> > > 'appsList': [], 'guestIPs': '', 'nice': ''}]}
> > > > Thread-3803::DEBUG::2012-09-20 09:42:57,869::BindingXMLRPC::
> > > 859::vds::(wrapper) client [192.168.10.10]::call
> > > vmGetMigrationStatus with ('2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',) {}
> > > > Thread-3803::DEBUG::2012-09-20 09:42:57,870::BindingXMLRPC::
> > > 865::vds::(wrapper) return vmGetMigrationStatus with {'status':
> > > {'message': 'Fatal error during migration', 'code': 12}}
> > > > Dummy-1264::DEBUG::2012-09-20 09:42:58,172::__init__::
> > > 1249::Storage.Misc.excCmd::(_log) 'dd if=/rhev/data-center/
> > > 332694bb-364a-434e-b23f-5fef985d3cbd/mastersd/dom_md/inbox
> > > iflag=direct,fullblock count=1 bs=1024000' (cwd None)
> > > > Dummy-1264::DEBUG::2012-09-20 09:42:58,262::__init__::
> > > 1249::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in
> > > \n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.0515109 s, 19.9
> > > MB/s\n'; <rc> = 0
> > > > Dummy-1264::DEBUG::2012-09-20 09:43:00,271::__init__::
> > > 1249::Storage.Misc.excCmd::(_log) 'dd if=/rhev/data-center/
> > > 332694bb-364a-434e-b23f-5fef985d3cbd/mastersd/dom_md/inbox
> > > iflag=direct,fullblock count=1 bs=1024000' (cwd None)
> > > > Dummy-1264::DEBUG::2012-09-20 09:43:00,362::__init__::
> > > 1249::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in
> > > \n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.0530171 s, 19.3
> > > MB/s\n'; <rc> = 0
> > > > Thread-21::DEBUG::2012-09-20 09:43:00,612::__init__::
> > > 1249::Storage.Misc.excCmd::(_log) '/usr/bin/dd iflag=direct
> > > if=/dev/26187d25-bfcb-40c7-97d1-667705ad2223/metadata bs=4096
> > > count=1' (cwd None)
> > > > Thread-21::DEBUG::2012-09-20 09:43:00,629::__init__::
> > > 1249::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in
> > > \n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000937698 s, 4.4
> > > MB/s\n'; <rc> = 0
> > > > Thread-3805::DEBUG::2012-09-20 09:43:01,901::task::
> > > 588::TaskManager.Task::(_updateState) Task=`ff134ecc-5597-4a83-81d6-
> > > e4f9804871ff`::moving from state init -> state preparing
> > > > Thread-3805::INFO::2012-09-20 09:43:01,902::logUtils::
> > > 37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
> > > > Thread-3805::INFO::2012-09-20 09:43:01,902::logUtils::
> > > 39::dispatcher::(wrapper) Run and protect: repoStats, Return
> > > response: {'26187d25-bfcb-40c7-97d1-667705ad2223': {'delay':
> > > '0.0180931091309', 'lastCheck': 1348134180.825892, 'code': 0,
> > > 'valid': True}, '90104c3d-837b-47dd-8c82-dda92eec30d9': {'delay':
> > > '0.000955820083618', 'lastCheck': 1348134175.493277, 'code': 0,
> > > 'valid': True}}
> > > > Thread-3805::DEBUG::2012-09-20 09:43:01,902::task::
> > > 1172::TaskManager.Task::(prepare) Task=`ff134ecc-5597-4a83-81d6-
> > > e4f9804871ff`::finished: {'26187d25-bfcb-40c7-97d1-667705ad2223':
> > > {'delay': '0.0180931091309', 'lastCheck': 1348134180.825892,
> > > 'code': 0, 'valid': True}, '90104c3d-837b-47dd-8c82-dda92eec30d9':
> > > {'delay': '0.000955820083618', 'lastCheck': 1348134175.493277,
> > > 'code': 0, 'valid': True}}
> > > > Thread-3805::DEBUG::2012-09-20 09:43:01,902::task::
> > > 588::TaskManager.Task::(_updateState) Task=`ff134ecc-5597-4a83-81d6-
> > > e4f9804871ff`::moving from state preparing -> state finished
> > > > Thread-3805::DEBUG::2012-09-20 09:43:01,903::resourceManager::
> > > 809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests
> > > {} resources {}
> > > > Thread-3805::DEBUG::2012-09-20 09:43:01,903::resourceManager::
> > > 844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> > > > Thread-3805::DEBUG::2012-09-20 09:43:01,903::task::
> > > 978::TaskManager.Task::(_decref) Task=`ff134ecc-5597-4a83-81d6-
> > > e4f9804871ff`::ref 0 aborting False
> > > > Thread-3806::DEBUG::2012-09-20 09:43:01,931::libvirtvm::
> > > 240::vm.Vm::(_getDiskStats) vmId=`540335f0-2269-4bc4-
> > > aaf4-11bf5990013f`::Disk hdc stats not available
> > > > Thread-3806::DEBUG::2012-09-20 09:43:01,931::libvirtvm::
> > > 240::vm.Vm::(_getDiskStats) vmId=`2c3af5f5-
> > > f877-4e6b-8a34-05bbe78b3c82`::Disk hdc stats not available
> > > > Thread-3806::DEBUG::2012-09-20 09:43:01,932::libvirtvm::
> > > 240::vm.Vm::(_getDiskStats) vmId=`0ac0dd3a-ae2a-4963-
> > > adf1-918993031f6b`::Disk hdc stats not available
> > > > Thread-3806::DEBUG::2012-09-20 09:43:01,932::libvirtvm::
> > > 240::vm.Vm::(_getDiskStats) vmId=`35a65bb8-cbca-4049-
> > > a428-28914bcb094a`::Disk hdc stats not available
> > > > Thread-3806::DEBUG::2012-09-20 09:43:01,933::libvirtvm::
> > > 240::vm.Vm::(_getDiskStats) vmId=`4ef3258c-0380-4919-991f-
> > > ee7be7e9f7fa`::Disk hdc stats not available
> > > > Thread-3806::DEBUG::2012-09-20 09:43:01,933::libvirtvm::
> > > 240::vm.Vm::(_getDiskStats) vmId=`252e6d46-f362-46aa-a7ed-
> > > dd00a86af6f0`::Disk hdc stats not available
> > > > Thread-3806::DEBUG::2012-09-20 09:43:01,933::libvirtvm::
> > > 240::vm.Vm::(_getDiskStats) vmId=`509e608c-e657-473a-b031-
> > > f0811da96bde`::Disk hdc stats not available
> > > > Thread-3806::DEBUG::2012-09-20 09:43:01,934::libvirtvm::
> > > 240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-
> > > fc2aeeae2e86`::Disk hdc stats not available
> > > > Dummy-1264::DEBUG::2012-09-20 09:43:02,371::__init__::
> > > 1249::Storage.Misc.excCmd::(_log) 'dd if=/rhev/data-center/
> > > 332694bb-364a-434e-b23f-5fef985d3cbd/mastersd/dom_md/inbox
> > > iflag=direct,fullblock count=1 bs=1024000' (cwd None)
> > > > Dummy-1264::DEBUG::2012-09-20 09:43:02,462::__init__::
> > > 1249::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in
> > > \n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.0525183 s, 19.5
> > > MB/s\n'; <rc> = 0
> > > >
> > > > - -
> > > > _______________________________________________
> > > > Users mailing list
> > > > Users(a)ovirt.org
> > > > http://lists.ovirt.org/mailman/listinfo/users
> > >
> >
>