[ovirt-users] getVdsCapabilites unexpected exception [was: Re: AIO 3.4 on fedora 19 initial errors before coming up]
Gianluca Cecchi
gianluca.cecchi at gmail.com
Sun May 11 21:49:06 UTC 2014
On Sun, May 11, 2014 at 8:41 PM, Gianluca Cecchi
<gianluca.cecchi at gmail.com> wrote:
>
> On Sun, May 11, 2014 at 8:18 PM, Gianluca Cecchi
> <gianluca.cecchi at gmail.com> wrote:
>
>> On Sun, May 11, 2014 at 4:10 PM, Roy Golan <rgolan at redhat.com> wrote:
>>
>>>
>>> The VM will stay in "Waiting..." as getVdsCaps is failing, so the
>>> monitoring of VMs will not take place.
>>> We need to fix this "Unexpected error" first. Is it a matter of
>>> SSL-enabled configuration for host communication? I.e., can you try
>>> vdsClient -s 0 getVdsCaps ?
>>>
>>
>> I didn't change anything on this server.
>> It is an all-in-one config, so both the engine and the vdsm host are on it.
>> Yesterday and every previous day I was able to start the system and start
>> the VM; yesterday I only ran the yum update command
>> (with --exclude=sos, due to the open bug) and then shut the system down.
>> Today after startup I got this problem.
>>
>> [root at tekkaman vdsm]# vdsClient -s 0 getVdsCaps
>> Unexpected exception
>>
>> The error that shows up in vdsm.log when I run the command above is of
>> this type:
>>
>> Thread-25::ERROR::2014-05-11 20:18:02,202::BindingXMLRPC::1086::vds::(wrapper) unexpected error
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/BindingXMLRPC.py", line 1070, in wrapper
>>     res = f(*args, **kwargs)
>>   File "/usr/share/vdsm/BindingXMLRPC.py", line 393, in getCapabilities
>>     ret = api.getCapabilities()
>>   File "/usr/share/vdsm/API.py", line 1185, in getCapabilities
>>     c = caps.get()
>>   File "/usr/share/vdsm/caps.py", line 369, in get
>>     caps.update(netinfo.get())
>>   File "/usr/lib64/python2.7/site-packages/vdsm/netinfo.py", line 557, in get
>>     netAttr.get('qosOutbound'))
>>   File "/usr/lib64/python2.7/site-packages/vdsm/netinfo.py", line 487, in _getNetInfo
>>     ipv4addr, ipv4netmask, ipv6addrs = getIpInfo(iface)
>>   File "/usr/lib64/python2.7/site-packages/vdsm/netinfo.py", line 317, in getIpInfo
>>     ipv6addrs = devInfo.get_ipv6_addresses()
>> SystemError: error return without exception set
>>
>>
> Based on the errors above, I think that these two Python-related
> packages, updated yesterday, are for some reason causing problems with
> vdsm.
> Can you confirm that 3.4 vdsm runs OK with those?
>
> vdsm-4.14.6-0.fc19.x86_64
>
>
> May 10 21:24:23 Updated: python-ethtool-0.9-2.fc19.x86_64
> May 10 21:24:23 Updated: python-lxml-3.3.5-1.fc19.x86_64
>
> I can also try to roll back and see...
>
>
I was right.
Against which component should I file the bugzilla?
This is a show stopper for Fedora 19 oVirt users...
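By the way, if you want to check whether your python-ethtool build is
affected without going through vdsm, a minimal check (my sketch, based
on the traceback above: it calls the same get_ipv6_addresses() that
vdsm's netinfo.getIpInfo uses, so on a broken build it should fail with
the same SystemError) is something like:

python -c "
import ethtool
# query the IPv6 addresses of every device, as vdsm's netinfo does;
# on the broken python-ethtool this raises:
#   SystemError: error return without exception set
for dev in ethtool.get_interfaces_info(ethtool.get_devices()):
    print dev.device, [a.address for a in dev.get_ipv6_addresses()]
"

Anyway, here is the downgrade: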
[root at tekkaman log]# vdsClient -s 0 getVdsCaps
Unexpected exception
[root at tekkaman log]# yum downgrade python-lxml python-ethtool
Loaded plugins: fastestmirror, langpacks, refresh-packagekit, versionlock
Dropbox                                                   | 951 B   00:00:00
adobe-linux-x86_64                                        | 951 B   00:00:00
fedora-virt-preview                                       | 2.9 kB  00:00:00
google-chrome                                             | 951 B   00:00:00
livna                                                     | 1.3 kB  00:00:00
ovirt-3.3.3                                               | 2.9 kB  00:00:00
ovirt-stable                                              | 2.9 kB  00:00:00
rpmfusion-free-updates                                    | 3.3 kB  00:00:00
rpmfusion-nonfree-updates                                 | 3.3 kB  00:00:00
updates/19/x86_64/metalink                                |  28 kB  00:00:00
Loading mirror speeds from cached hostfile
* fedora: mirror.netcologne.de
* livna: rpm.livna.org
* rpmfusion-free: mirror.switch.ch
* rpmfusion-free-updates: mirror.switch.ch
* rpmfusion-nonfree: mirror.switch.ch
* rpmfusion-nonfree-updates: mirror.switch.ch
* updates: mirror.netcologne.de
Resolving Dependencies
--> Running transaction check
---> Package python-ethtool.x86_64 0:0.8-1.fc19 will be a downgrade
---> Package python-ethtool.x86_64 0:0.9-2.fc19 will be erased
---> Package python-lxml.x86_64 0:3.2.1-1.fc19 will be a downgrade
---> Package python-lxml.x86_64 0:3.3.5-1.fc19 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
======================================================================================================================================================
 Package                  Arch          Version              Repository     Size
======================================================================================================================================================
Downgrading:
 python-ethtool           x86_64        0.8-1.fc19           fedora         32 k
 python-lxml              x86_64        3.2.1-1.fc19         fedora        752 k

Transaction Summary
======================================================================================================================================================
Downgrade  2 Packages

Total download size: 785 k
Is this ok [y/d/N]:
Downloading packages:
(1/2): python-ethtool-0.8-1.fc19.x86_64.rpm               |  32 kB  00:00:00
(2/2): python-lxml-3.2.1-1.fc19.x86_64.rpm                | 752 kB  00:00:02
------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                          317 kB/s   | 785 kB  00:00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python-lxml-3.2.1-1.fc19.x86_64                          1/4
  Installing : python-ethtool-0.8-1.fc19.x86_64                         2/4
  Cleanup    : python-lxml-3.3.5-1.fc19.x86_64                          3/4
  Cleanup    : python-ethtool-0.9-2.fc19.x86_64                         4/4
  Verifying  : python-ethtool-0.8-1.fc19.x86_64                         1/4
  Verifying  : python-lxml-3.2.1-1.fc19.x86_64                          2/4
  Verifying  : python-ethtool-0.9-2.fc19.x86_64                         3/4
  Verifying  : python-lxml-3.3.5-1.fc19.x86_64                          4/4
Removed:
  python-ethtool.x86_64 0:0.9-2.fc19        python-lxml.x86_64 0:3.3.5-1.fc19

Installed:
  python-ethtool.x86_64 0:0.8-1.fc19        python-lxml.x86_64 0:3.2.1-1.fc19
Complete!
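Note that the downgrade alone is not enough: the running vdsm daemon
still has the newer modules loaded in memory, which is why the next
getVdsCaps below keeps failing until I restart vdsmd. To double-check
what is now on disk you can run, for example:

rpm -q python-ethtool python-lxml

which after the transaction above should report 0.8-1.fc19 and
3.2.1-1.fc19 respectively.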
[root at tekkaman log]# vdsClient -s 0 getVdsCaps
Unexpected exception
[root at tekkaman log]# systemctl restart vdsmd
[root at tekkaman log]# systemctl status vdsmd
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Sun 2014-05-11 23:44:52 CEST; 16s ago
  Process: 13935 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
  Process: 13939 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 14003 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
├─14003 /usr/bin/python /usr/share/vdsm/vdsm
           ├─14074 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 32 30
           ├─14081 rpc.statd --no-notify
           ├─14103 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 34 32
           ├─14105 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 41 40
           ├─14106 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 48 46
           ├─14107 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 54 48
           ├─14108 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 41 40
           ├─14121 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 50 41
           ├─14123 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 66 65
           ├─14125 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 71 50
           ├─14129 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 75 71
           ├─14131 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 81 75
           ├─14135 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 92 89
           ├─14137 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 75 65
           └─14138 /usr/bin/python /usr/share/vdsm/storage/remoteFileHandler.pyc 89 75
May 11 23:44:53 tekkaman.localdomain.local vdsm[14003]: vdsm vds WARNING Unable to load the json rpc server module. Please make sure it is installed.
May 11 23:44:53 tekkaman.localdomain.local python[14003]: DIGEST-MD5 client step 2
May 11 23:44:53 tekkaman.localdomain.local python[14003]: DIGEST-MD5 parse_server_challenge()
May 11 23:44:53 tekkaman.localdomain.local python[14003]: DIGEST-MD5 ask_user_info()
May 11 23:44:53 tekkaman.localdomain.local python[14003]: DIGEST-MD5 client step 2
May 11 23:44:53 tekkaman.localdomain.local python[14003]: DIGEST-MD5 ask_user_info()
May 11 23:44:53 tekkaman.localdomain.local python[14003]: DIGEST-MD5 make_client_response()
May 11 23:44:53 tekkaman.localdomain.local python[14003]: DIGEST-MD5 client step 3
May 11 23:44:57 tekkaman.localdomain.local rpc.statd[14081]: Version 1.2.7 starting
May 11 23:44:57 tekkaman.localdomain.local rpc.statd[14081]: Flags: TI-RPC
[root at tekkaman log]# vdsClient -s 0 getVdsCaps
HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:e6aa759a959'}]}
ISCSIInitiatorName = 'iqn.1994-05.com.redhat:e6aa759a959'
bondings = {'bond0': {'addr': '',
'cfg': {},
'hwaddr': '9a:0f:68:19:0f:87',
'ipv6addrs': [],
'mtu': '1500',
'netmask': '',
'slaves': []}}
bridges = {';vdsmdummy;': {'addr': '',
'cfg': {},
'gateway': '',
'ipv6addrs': [],
'ipv6gateway': '::',
'mtu': '1500',
'netmask': '',
'ports': [],
'stp': 'off'},
'ovirtmgmt': {'addr': '192.168.1.101',
'cfg': {'BOOTPROTO': 'none',
'DELAY': '0',
'DEVICE': 'ovirtmgmt',
'GATEWAY': '192.168.1.1',
'IPADDR': '192.168.1.101',
'NETMASK': '255.255.255.0',
'NM_CONTROLLED': 'no',
'ONBOOT': 'yes',
'TYPE': 'Bridge'},
'gateway': '192.168.1.1',
'ipv6addrs': ['fe80::92e6:baff:fec9:f1e1/64'],
'ipv6gateway': '::',
'mtu': '1500',
'netmask': '255.255.255.0',
'ports': ['p10p1'],
'stp': 'off'}}
clusterLevels = ['3.0', '3.1', '3.2', '3.3', '3.4']
cpuCores = '4'
cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2'
cpuModel = 'AMD Athlon(tm) II X4 630 Processor'
cpuSockets = '1'
cpuSpeed = '2800.000'
cpuThreads = '4'
emulatedMachines = ['pc',
'pc-q35-1.4',
'pc-q35-1.5',
'q35',
'isapc',
'pc-0.10',
'pc-0.11',
'pc-0.12',
'pc-0.13',
'pc-0.14',
'pc-0.15',
'pc-1.0',
'pc-1.1',
'pc-1.2',
'pc-1.3',
'pc-i440fx-1.4',
'pc-i440fx-1.5',
'none']
guestOverhead = '65'
hooks = {}
kvmEnabled = 'true'
lastClient = '192.168.1.101'
lastClientIface = 'ovirtmgmt'
management_ip = '0.0.0.0'
memSize = '16048'
netConfigDirty = 'False'
networks = {'ovirtmgmt': {'addr': '192.168.1.101',
'bridged': True,
'cfg': {'BOOTPROTO': 'none',
'DELAY': '0',
'DEVICE': 'ovirtmgmt',
'GATEWAY': '192.168.1.1',
'IPADDR': '192.168.1.101',
'NETMASK': '255.255.255.0',
'NM_CONTROLLED': 'no',
'ONBOOT': 'yes',
'TYPE': 'Bridge'},
'gateway': '192.168.1.1',
'iface': 'ovirtmgmt',
'ipv6addrs': ['fe80::92e6:baff:fec9:f1e1/64'],
'ipv6gateway': '::',
'mtu': '1500',
'netmask': '255.255.255.0',
'ports': ['p10p1'],
'stp': 'off'}}
nics = {'p10p1': {'addr': '',
'cfg': {'BRIDGE': 'ovirtmgmt',
'DEVICE': 'p10p1',
'HWADDR': '90:e6:ba:c9:f1:e1',
'NM_CONTROLLED': 'no',
'ONBOOT': 'yes'},
'hwaddr': '90:e6:ba:c9:f1:e1',
'ipv6addrs': ['fe80::92e6:baff:fec9:f1e1/64'],
'mtu': '1500',
'netmask': '',
'speed': 100}}
operatingSystem = {'name': 'Fedora', 'release': '8', 'version': '19'}
packages2 = {'kernel': {'buildtime': 1398276657.0,
'release': '100.fc19.x86_64',
'version': '3.13.11'},
'libvirt': {'buildtime': 1387094943,
'release': '1.fc19',
'version': '1.1.3.2'},
'mom': {'buildtime': 1391183561, 'release': '1.fc19',
'version': '0.4.0'},
'qemu-img': {'buildtime': 1384762225,
'release': '2.fc19',
'version': '1.6.1'},
'qemu-kvm': {'buildtime': 1384762225,
'release': '2.fc19',
'version': '1.6.1'},
'spice-server': {'buildtime': 1383130020,
'release': '3.fc19',
'version': '0.12.4'},
'vdsm': {'buildtime': 1395804204, 'release': '0.fc19',
'version': '4.14.6'}}
reservedMem = '321'
rngSources = ['random']
software_revision = '0'
software_version = '4.14'
supportedENGINEs = ['3.0', '3.1', '3.2', '3.3', '3.4']
supportedProtocols = ['2.2', '2.3']
uuid = 'E0E1001E-8C00-002A-6F9A-90E6BAC9F1E1'
version_name = 'Snow Man'
vlans = {}
vmTypes = ['kvm']
And I'm now able to start and connect to my VM again.
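To keep a future "yum update" from pulling the broken versions back in
until fixed builds land, the versionlock plugin (already loaded,
judging from the yum output above) should do; a sketch, assuming the
usual yum-plugin-versionlock behaviour of locking the currently
installed versions:

yum versionlock add python-ethtool python-lxml

Alternatively, just add --exclude=python-ethtool --exclude=python-lxml
to the update command, as I already do for sos.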
HIH,
Gianluca