Please make sure the sanlock service is running properly.
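From the "systemctl status" output further down, sanlock started and then exited with status 255 right after "wdmd connect failed", so the wdmd watchdog daemon that comes with sanlock is worth checking as well. A minimal sequence to try, assuming the stock Fedora service names (wdmd needs to be up before sanlock):

  # systemctl start wdmd
  # systemctl start sanlock
  # systemctl enable wdmd.service sanlock.service

If sanlock still exits, its log (normally /var/log/sanlock.log) should show the reason.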
----- Original Message -----
From: "Juan Jose" <jj197005(a)gmail.com>
To: users(a)ovirt.org
Sent: Monday, January 21, 2013 7:18:51 AM
Subject: [Users] Error run once VM
Hello everybody,
I'm following "http://www.ovirt.org/Quick_Start_Guide#Create_a_Fedora_Virtual_Machine", and when I click the OK button after entering all the parameters in "Run Virtual Machine", I receive the error below in the events pane and in the vdsm.log file on my host:
Thread-352921::DEBUG::2013-01-21 15:55:40,709::task::978::TaskManager.Task::(_decref) Task=`8bb281a1-434b-4506-b4a8-2d6665bb382f`::ref 0 aborting False
Thread-352921::INFO::2013-01-21 15:55:40,709::clientIF::274::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/57d184a0-908b-49b5-926f-cd413b9e6526/images/c77ff9d7-6280-4454-b342-faa206989d2a/bf973de9-d344-455d-a628-3dbfbf2693d9
Thread-352921::DEBUG::2013-01-21 15:55:40,717::libvirtvm::1338::vm.Vm::(_run) vmId=`51738dae-c758-4e77-bad7-281f56c4d61d`::
<?xml version="1.0" encoding="utf-8"?>
<domain type="kvm">
  <name>Fedora17</name>
  <uuid>51738dae-c758-4e77-bad7-281f56c4d61d</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <devices>
    <channel type="unix">
      <target name="com.redhat.rhevm.vdsm" type="virtio"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/Fedora17.com.redhat.rhevm.vdsm"/>
    </channel>
    <input bus="ps2" type="mouse"/>
    <channel type="spicevmc">
      <target name="com.redhat.spice.0" type="virtio"/>
    </channel>
    <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
      <channel mode="secure" name="main"/>
      <channel mode="secure" name="inputs"/>
      <channel mode="secure" name="cursor"/>
      <channel mode="secure" name="playback"/>
      <channel mode="secure" name="record"/>
      <channel mode="secure" name="display"/>
    </graphics>
    <console type="pty">
      <target port="0" type="virtio"/>
    </console>
    <video>
      <model heads="1" type="qxl" vram="65536"/>
    </video>
    <interface type="bridge">
      <mac address="00:1a:4a:6d:ca:00"/>
      <model type="virtio"/>
      <source bridge="ovirtmgmt"/>
      <boot order="3"/>
    </interface>
    <memballoon model="virtio"/>
    <disk device="cdrom" snapshot="no" type="file">
      <source file="/rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/cd9b45e6-2150-44d9-af1a-a557840fde9e/images/11111111-1111-1111-1111-111111111111/Fedora-17-x86_64-Live-XFCE.iso" startupPolicy="optional"/>
      <target bus="ide" dev="hdc"/>
      <readonly/>
      <serial></serial>
      <boot order="1"/>
    </disk>
    <disk device="disk" snapshot="no" type="file">
      <source file="/rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/57d184a0-908b-49b5-926f-cd413b9e6526/images/c77ff9d7-6280-4454-b342-faa206989d2a/bf973de9-d344-455d-a628-3dbfbf2693d9"/>
      <target bus="virtio" dev="vda"/>
      <serial>c77ff9d7-6280-4454-b342-faa206989d2a</serial>
      <boot order="2"/>
      <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
    </disk>
  </devices>
  <os>
    <type arch="x86_64" machine="pc-0.14">hvm</type>
    <smbios mode="sysinfo"/>
  </os>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">Red Hat</entry>
      <entry name="product">RHEV Hypervisor</entry>
      <entry name="version">17-1</entry>
      <entry name="serial">36303030-3139-3236-3800-00199935CC54_00:19:99:35:cc:54</entry>
      <entry name="uuid">51738dae-c758-4e77-bad7-281f56c4d61d</entry>
    </system>
  </sysinfo>
  <clock adjustment="0" offset="variable">
    <timer name="rtc" tickpolicy="catchup"/>
  </clock>
  <features>
    <acpi/>
  </features>
  <cpu match="exact">
    <model>Conroe</model>
    <topology cores="1" sockets="1" threads="1"/>
  </cpu>
</domain>
Thread-352921::DEBUG::2013-01-21 15:55:41,258::vm::580::vm.Vm::(_startUnderlyingVm) vmId=`51738dae-c758-4e77-bad7-281f56c4d61d`::_ongoingCreations released
Thread-352921::ERROR::2013-01-21 15:55:41,259::vm::604::vm.Vm::(_startUnderlyingVm) vmId=`51738dae-c758-4e77-bad7-281f56c4d61d`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1364, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2420, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error Failed to open socket to sanlock daemon: No such file or directory
Thread-352921::DEBUG::2013-01-21 15:55:41,262::vm::920::vm.Vm::(setDownStatus) vmId=`51738dae-c758-4e77-bad7-281f56c4d61d`::Changed state to Down: internal error Failed to open socket to sanlock daemon: No such file or directory
In the VMs tree, when I click on my "Fedora17" VM, the events in the details pane show:
Failed to run VM Fedora17 (User: admin@internal).
Failed to run VM Fedora17 on Host host1.
VM Fedora17 is down. Exit message: internal error Failed to open socket to sanlock daemon: No such file or directory.
In /var/log/vdsm/libvirt.log:
2013-01-21 14:55:41.258+0000: 10619: error : virNetClientProgramDispatchError:174 : internal error Failed to open socket to sanlock daemon: No such file or directory
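Since the error says "No such file or directory", I suppose the sanlock control socket simply does not exist; a quick way to confirm (assuming the default socket path /var/run/sanlock/sanlock.sock) would be:

  # ls -l /var/run/sanlock/sanlock.sock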
If I run "systemctl status sanlock", I see the error message below:
sanlock.service - Shared Storage Lease Manager
	  Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled)
	  Active: failed (Result: exit-code) since Mon, 21 Jan 2013 13:17:32 +0100; 2h 57min ago
	 Process: 23911 ExecStop=/lib/systemd/systemd-sanlock stop (code=exited, status=0/SUCCESS)
	 Process: 23898 ExecStart=/lib/systemd/systemd-sanlock start (code=exited, status=0/SUCCESS)
	Main PID: 23904 (code=exited, status=255)
	  CGroup: name=systemd:/system/sanlock.service

Jan 21 13:17:32 ovirt-host systemd-sanlock[23898]: Starting sanlock: [ OK ]
Jan 21 13:17:32 ovirt-host sanlock[23904]: 2013-01-21 13:17:32+0100 2854380 [23904]: sanlock daemon started 2.4 aio...70652
Jan 21 13:17:32 ovirt-host sanlock[23904]: 2013-01-21 13:17:32+0100 2854380 [23904]: wdmd connect failed for watchd...dling
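The truncated last line seems to say that sanlock could not connect to wdmd for watchdog handling, so maybe the watchdog daemon is the real problem; checking it (assuming the wdmd service that is packaged alongside sanlock) would be something like:

  # systemctl status wdmd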
Could someone guide me on what the problem could be, please?
Many thanks in advance,
Juanjo.
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users