On Tue, Jan 15, 2013 at 12:14:45PM +0100, Gianluca Cecchi wrote:
On Tue, Jan 15, 2013 at 11:49 AM, Dan Kenigsberg wrote:
> vdsClient -s 0 getVdsCaps
>
> would access localhost vdsm (assuming you are using the default ssl). If it
> barfs, we have a problem.
>
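As a quick sanity check, independent of vdsClient, something along these lines shows whether vdsm is running and listening at all (54321 is the default vdsm port; just a sketch):

  systemctl is-active vdsmd.service   # prints "active" when the daemon is running
  ss -tlnp | grep 54321               # is anything listening on the vdsm port?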
# systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: inactive (dead)
CGroup: name=systemd:/system/vdsmd.service
So vdsm is enabled; it should have been up after reboot.
But it has died. Maybe you can find out when that happened and
correlate it with vdsm.log or libvirtd.log for clues.
I'm out of guesses right now.
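Something along these lines should narrow down when it died (the log paths below are the usual defaults and may differ on your setup):

  journalctl -u vdsmd.service | tail -n 50   # when the unit last started/stopped and why
  tail -n 100 /var/log/vdsm/vdsm.log         # what vdsm itself logged around that time
  tail -n 100 /var/log/libvirt/libvirtd.log  # anything from libvirtd in the same window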
# vdsClient -s 0 getVdsCaps
Connection to 10.4.4.59:54321 refused
# systemctl start vdsmd.service
#
# systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue, 2013-01-15 12:09:11 CET; 15s ago
Process: 6566 ExecStart=/lib/systemd/systemd-vdsmd start (code=exited, status=0/SUCCESS)
Main PID: 6834 (respawn)
CGroup: name=systemd:/system/vdsmd.service
├ 6834 /bin/bash -e /usr/share/vdsm/respawn --minlifetime 10 --daemon --masterpid /var/run/vdsm/respawn.pid ...
├ 6837 /usr/bin/python /usr/share/vdsm/vdsm
├ 6855 /usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.py d1587fa5-b439-4a5e-bca3-e3971dc08...
└ 6856 /usr/bin/python /usr/share/vdsm/supervdsmServer.py d1587fa5-b439-4a5e-bca3-e3971dc08627 6837 /var/run...
Jan 15 12:09:11 f18ovn03 runuser[6831]: pam_unix(runuser:session): session closed for user vdsm
Jan 15 12:09:11 f18ovn03 systemd-vdsmd[6566]: [27B blob data]
Jan 15 12:09:11 f18ovn03 systemd[1]: Started Virtual Desktop Server Manager.
Jan 15 12:09:11 f18ovn03 python[6837]: DIGEST-MD5 client step 2
Jan 15 12:09:11 f18ovn03 python[6837]: DIGEST-MD5 client step 2
Jan 15 12:09:11 f18ovn03 python[6837]: DIGEST-MD5 client step 3
Jan 15 12:09:11 f18ovn03 vdsm[6837]: vdsm fileUtils WARNING Dir /rhev/data-center/mnt already exists
Jan 15 12:09:14 f18ovn03 vdsm[6837]: vdsm Storage.LVM WARNING lvm pvs failed: 5 [] [' Skipping clustered vo...T01']
Jan 15 12:09:14 f18ovn03 vdsm[6837]: vdsm Storage.LVM WARNING lvm vgs failed: 5 [] [' Skipping clustered vo...T01']
Jan 15 12:09:14 f18ovn03 vdsm[6837]: vdsm vds WARNING Unable to load the json rpc server module. Please make...lled.
Is the following message normal?
WARNING Dir /rhev/data-center/mnt already exists
Yeah, just more log noise, I suppose :-(
Now
# vdsClient -s 0 getVdsCaps
HBAInventory = {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:3563e5612db4'}],
                'FC': [{'wwpn': '50014380011bf958', 'wwnn': '50014380011bf959',
                        'model': 'QMH2462 - PCI-Express Dual Channel 4Gb Fibre Channel Mezzanine HBA'},
                       {'wwpn': '50014380011bf95a', 'wwnn': '50014380011bf95b',
                        'model': 'QMH2462 - PCI-Express Dual Channel 4Gb Fibre Channel Mezzanine HBA'}]}
ISCSIInitiatorName = iqn.1994-05.com.redhat:3563e5612db4
bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '',
                      'slaves': [], 'hwaddr': '00:00:00:00:00:00'},
            'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '',
                      'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}
bridges = {'ovirtmgmt': {'addr': '10.4.4.59',
                         'cfg': {'IPV6INIT': 'no', 'IPADDR': '10.4.4.59', 'ONBOOT': 'yes',
                                 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0',
                                 'BOOTPROTO': 'none', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge',
                                 'GATEWAY': '10.4.4.250'},
                         'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['em3']}}
clusterLevels = ['3.0', '3.1', '3.2']
cpuCores = 8
cpuFlags =
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,rdtscp,lm,3dnowext,3dnow,rep_good,nopl,extd_apicid,pni,cx16,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,model_athlon,model_Opteron_G1,model_Opteron_G2
cpuModel = Dual-Core AMD Opteron(tm) Processor 8222
cpuSockets = 4
cpuSpeed = 3013.706
cpuThreads = 8
emulatedMachines = ['pc-1.2', 'none', 'pc', 'pc-1.1', 'pc-1.0', 'pc-0.15',
                    'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc']
guestOverhead = 65
hooks = {}
kvmEnabled = true
lastClient = 10.4.4.60
lastClientIface = ovirtmgmt
management_ip =
memSize = 32176
netConfigDirty = False
networks = {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '10.4.4.59',
                          'cfg': {'IPV6INIT': 'no', 'IPADDR': '10.4.4.59', 'ONBOOT': 'yes',
                                  'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0',
                                  'BOOTPROTO': 'none', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge',
                                  'GATEWAY': '10.4.4.250'},
                          'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off',
                          'bridged': True, 'gateway': '10.4.4.250', 'ports': ['em3']}}
nics = {'em4': {'addr': '', 'cfg': {'DEVICE': 'em4', 'HWADDR': '00:1c:c4:ab:3a:de',
                                    'ONBOOT': 'yes', 'NM_CONTROLLED': 'no'},
                'mtu': '1500', 'netmask': '', 'hwaddr': '00:1c:c4:ab:3a:de', 'speed': 1000},
        'em1': {'addr': '', 'cfg': {'DEVICE': 'em1', 'NM_CONTROLLED': 'no', 'TYPE': 'Ethernet',
                                    'ONBOOT': 'yes', 'HWADDR': '00:1E:0B:21:B8:C4'},
                'mtu': '1500', 'netmask': '', 'hwaddr': '00:1e:0b:21:b8:c4', 'speed': 1000},
        'em3': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'IPV6INIT': 'no', 'NM_CONTROLLED': 'no',
                                    'DEVICE': 'em3', 'HWADDR': '00:1c:c4:ab:3a:dd', 'ONBOOT': 'yes'},
                'mtu': '1500', 'netmask': '', 'hwaddr': '00:1c:c4:ab:3a:dd', 'speed': 1000},
        'em2': {'addr': '', 'cfg': {'DEVICE': 'em2', 'NM_CONTROLLED': 'no', 'TYPE': 'Ethernet',
                                    'ONBOOT': 'yes', 'HWADDR': '00:1E:0B:21:B8:C6'},
                'mtu': '1500', 'netmask': '', 'hwaddr': '00:1e:0b:21:b8:c6', 'speed': 1000}}
operatingSystem = {'release': '1', 'version': '18', 'name': 'Fedora'}
packages2 = {'kernel': {'release': '3.fc18.x86_64', 'buildtime': 1355776539.0, 'version': '3.6.11'},
             'spice-server': {'release': '1.fc18', 'buildtime': 1356035501, 'version': '0.12.2'},
             'vdsm': {'release': '0.78.gitb005b54.fc18', 'buildtime': 1358090637, 'version': '4.10.3'},
             'qemu-kvm': {'release': '1.fc18', 'buildtime': 1355702442, 'version': '1.2.2'},
             'libvirt': {'release': '3.fc18', 'buildtime': 1355788803, 'version': '0.10.2.2'},
             'qemu-img': {'release': '1.fc18', 'buildtime': 1355702442, 'version': '1.2.2'},
             'mom': {'release': '1.fc18', 'buildtime': 1349470214, 'version': '0.3.0'}}
reservedMem = 321
software_revision = 0.78
software_version = 4.10
supportedENGINEs = ['3.0', '3.1']
supportedProtocols = ['2.2', '2.3']
uuid = 34353439-3036-435A-4A38-303330393338
version_name = Snow Man
vlans = {}
vmTypes = ['kvm']
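If only a couple of fields are of interest, grepping the output is often easier than reading the whole dump; a throwaway example:

  vdsClient -s 0 getVdsCaps | egrep 'software_version|clusterLevels|supportedENGINEs'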
and from webadmin I see the host up...
Is it repeatable that vdsm is down after boot?
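A quick way to check after the next boot (just a sketch):

  systemctl is-active vdsmd.service   # should print "active" if vdsm survived the boot
  journalctl -b -u vdsmd.service      # everything vdsmd logged during the current boot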
Dan.