[Users] Error run once VM
by Juan Jose
Hello everybody,
I'm following the "
http://www.ovirt.org/Quick_Start_Guide#Create_a_Fedora_Virtual_Machine" and
when I click OK buttom after put all parameters in "Run Virtual Machine", I
receive bellow error in events and in vdsm.log file from my host:
Thread-352921::DEBUG::2013-01-21 15:55:40,709::task::978::TaskManager.Task::(_decref)
Task=`8bb281a1-434b-4506-b4a8-2d6665bb382f`::ref 0 aborting False
Thread-352921::INFO::2013-01-21 15:55:40,709::clientIF::274::vds::(prepareVolumePath)
prepared volume path: /rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/57d184a0-908b-49b5-926f-cd413b9e6526/images/c77ff9d7-6280-4454-b342-faa206989d2a/bf973de9-d344-455d-a628-3dbfbf2693d9
Thread-352921::DEBUG::2013-01-21 15:55:40,717::libvirtvm::1338::vm.Vm::(_run)
vmId=`51738dae-c758-4e77-bad7-281f56c4d61d`::<?xml version="1.0" encoding="utf-8"?>
<domain type="kvm">
  <name>Fedora17</name>
  <uuid>51738dae-c758-4e77-bad7-281f56c4d61d</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <devices>
    <channel type="unix">
      <target name="com.redhat.rhevm.vdsm" type="virtio"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/Fedora17.com.redhat.rhevm.vdsm"/>
    </channel>
    <input bus="ps2" type="mouse"/>
    <channel type="spicevmc">
      <target name="com.redhat.spice.0" type="virtio"/>
    </channel>
    <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****"
              passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
      <channel mode="secure" name="main"/>
      <channel mode="secure" name="inputs"/>
      <channel mode="secure" name="cursor"/>
      <channel mode="secure" name="playback"/>
      <channel mode="secure" name="record"/>
      <channel mode="secure" name="display"/>
    </graphics>
    <console type="pty">
      <target port="0" type="virtio"/>
    </console>
    <video>
      <model heads="1" type="qxl" vram="65536"/>
    </video>
    <interface type="bridge">
      <mac address="00:1a:4a:6d:ca:00"/>
      <model type="virtio"/>
      <source bridge="ovirtmgmt"/>
      <boot order="3"/>
    </interface>
    <memballoon model="virtio"/>
    <disk device="cdrom" snapshot="no" type="file">
      <source file="/rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/cd9b45e6-2150-44d9-af1a-a557840fde9e/images/11111111-1111-1111-1111-111111111111/Fedora-17-x86_64-Live-XFCE.iso" startupPolicy="optional"/>
      <target bus="ide" dev="hdc"/>
      <readonly/>
      <serial></serial>
      <boot order="1"/>
    </disk>
    <disk device="disk" snapshot="no" type="file">
      <source file="/rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/57d184a0-908b-49b5-926f-cd413b9e6526/images/c77ff9d7-6280-4454-b342-faa206989d2a/bf973de9-d344-455d-a628-3dbfbf2693d9"/>
      <target bus="virtio" dev="vda"/>
      <serial>c77ff9d7-6280-4454-b342-faa206989d2a</serial>
      <boot order="2"/>
      <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
    </disk>
  </devices>
  <os>
    <type arch="x86_64" machine="pc-0.14">hvm</type>
    <smbios mode="sysinfo"/>
  </os>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">Red Hat</entry>
      <entry name="product">RHEV Hypervisor</entry>
      <entry name="version">17-1</entry>
      <entry name="serial">36303030-3139-3236-3800-00199935CC54_00:19:99:35:cc:54</entry>
      <entry name="uuid">51738dae-c758-4e77-bad7-281f56c4d61d</entry>
    </system>
  </sysinfo>
  <clock adjustment="0" offset="variable">
    <timer name="rtc" tickpolicy="catchup"/>
  </clock>
  <features>
    <acpi/>
  </features>
  <cpu match="exact">
    <model>Conroe</model>
    <topology cores="1" sockets="1" threads="1"/>
  </cpu>
</domain>
Thread-352921::DEBUG::2013-01-21 15:55:41,258::vm::580::vm.Vm::(_startUnderlyingVm)
vmId=`51738dae-c758-4e77-bad7-281f56c4d61d`::_ongoingCreations released
Thread-352921::ERROR::2013-01-21 15:55:41,259::vm::604::vm.Vm::(_startUnderlyingVm)
vmId=`51738dae-c758-4e77-bad7-281f56c4d61d`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1364, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2420, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error Failed to open socket to sanlock daemon: No such file or directory
Thread-352921::DEBUG::2013-01-21 15:55:41,262::vm::920::vm.Vm::(setDownStatus)
vmId=`51738dae-c758-4e77-bad7-281f56c4d61d`::Changed state to Down: internal error
Failed to open socket to sanlock daemon: No such file or directory
In the VM tree, when I click on my "Fedora17" VM, the Events detail pane shows:
Failed to run VM Fedora17 (User: admin@internal).
Failed to run VM Fedora17 on Host host1.
VM Fedora17 is down. Exit message: internal error Failed to open socket to
sanlock daemon: No such file or directory.
In /var/log/vdsm/libvirt.log:
2013-01-21 14:55:41.258+0000: 10619: error :
virNetClientProgramDispatchError:174 : internal error Failed to open socket
to sanlock daemon: No such file or directory
if I make a "systemctl status sanlock" I see below error message:
sanlock.service - Shared Storage Lease Manager
Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled)
Active: failed (Result: exit-code) since Mon, 21 Jan 2013 13:17:32 +0100; 2h 57min ago
Process: 23911 ExecStop=/lib/systemd/systemd-sanlock stop (code=exited, status=0/SUCCESS)
Process: 23898 ExecStart=/lib/systemd/systemd-sanlock start (code=exited, status=0/SUCCESS)
Main PID: 23904 (code=exited, status=255)
CGroup: name=systemd:/system/sanlock.service
Jan 21 13:17:32 ovirt-host systemd-sanlock[23898]: Starting sanlock: [ OK ]
Jan 21 13:17:32 ovirt-host sanlock[23904]: 2013-01-21 13:17:32+0100 2854380 [23904]: sanlock daemon started 2.4 aio...70652
Jan 21 13:17:32 ovirt-host sanlock[23904]: 2013-01-21 13:17:32+0100 2854380 [23904]: wdmd connect failed for watchd...dling
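In case it helps, this is what I am assuming I should check next on the host (I
have not tried it yet, so please correct me if this is the wrong direction):
whether the wdmd watchdog daemon is enabled and running, since sanlock seems to
exit right after failing to connect to it:

systemctl status wdmd           # sanlock needs the watchdog multiplexing daemon
systemctl enable wdmd sanlock   # so both come up after a reboot
systemctl start wdmd            # start wdmd first...
systemctl start sanlock         # ...then sanlock
systemctl status sanlock        # should now stay "active (running)"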
Could someone give me some guidance on what the problem might be, please?
Many thanks in advance,
Juanjo.
[Users] SLOW I/O performance
by Alex Leonhardt
Hi All,
This is my current setup:
HV1 has:
storage_domain_1
is SPM master
HV2 has:
storage_domain_2
is normal (not master)
HV1 has storage_domain_1 mounted via 127.0.0.1 (network name, but hosts
entry sends it to loopback)
HV2 has storage_domain_2 mounted via 127.0.0.1 (network name, but hosts
entry sends it to loopback)
All VMs on HV1 have its storage set to storage_domain_1 and all VMs on HV2
have their storage set to storage_domain_2
My problem now is that, after I finally created all the disks on HV2 over the
very slow management network (ovirtmgmt, which is only 100 Mbit), I'm trying to
kickstart all the VMs I created. However, formatting a disk is taking forever:
around 20-30 minutes for 12 GB, which is roughly how long it took to create the
disks over the 100 Mbit link.
The weirdness really starts with HV2: all VMs on HV1 with disks on
storage_domain_1 have "good" I/O throughput, while all VMs on HV2 are awfully
slow at reading from / writing to disk.
I've tried some network settings to increase throughput, but they didn't
help / had no effect at all.
Has anyone come across this issue? Is it something to do with the ovirtmgmt
interface only being 100 Mbit?
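For reference, this is the kind of quick comparison I am planning to run
directly on HV2 to confirm where the bottleneck is (the storage domain mount
point below is just a placeholder for whatever shows up under
/rhev/data-center/mnt on that host):

# write 1 GB straight to the NFS-mounted storage domain, bypassing the page cache
dd if=/dev/zero of=/rhev/data-center/mnt/<storage_domain_2_mount>/dd.test bs=1M count=1024 oflag=direct
# same test against a local filesystem for comparison
dd if=/dev/zero of=/var/tmp/dd.test bs=1M count=1024 oflag=direct
rm -f /rhev/data-center/mnt/<storage_domain_2_mount>/dd.test /var/tmp/dd.test

If the NFS run is an order of magnitude slower than the local one, that would
point at the 100 Mbit link rather than at the hypervisor itself.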
Alex
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
[Users] more useful repo to use right now?
by Gianluca Cecchi
Hello,
I see that under
http://resources.ovirt.org/releases/beta/rpm/Fedora/18
there are populated RPMs.
Does this mean the 3.2 beta has been released? Where can I find the
announcements for this kind of thing in the future?
I presume that, if this is the case, it would be better to use them instead
of the nightly builds at this stage, correct?
If I install the beta, can I then "regularly" update to the final 3.2 with the
usual "engine-update" and so on?
I'm going to test on a Dell R310 (engine) and an R815 (node) tomorrow, so
I'm available to test whatever is most useful for the project.
I'm going to test both local storage and an iSCSI EqualLogic PS5000X.
thanks
Gianluca
Re: [Users] mark VM to start on boot
by Roy Golan
On 01/20/2013 07:25 PM, Jim Kinney wrote:
>
> They are marked "highly available" but I thought that was for
> migration only. I saw a database boolean but other than an update
> command, I see no other way to boot on powerup.
>
Your VMs probably failed to restart on your hosts; maybe the storage wasn't
connected yet at the time. Check the engine.log.
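For example, something like this on the engine machine should show whether the
restart was attempted and why it failed (the log path is the default one, and
<vm name> is a placeholder for your domain controller VM):

grep -i '<vm name>' /var/log/ovirt-engine/engine.log | tail -n 50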
> On Jan 20, 2013 1:38 AM, "Roy Golan" <rgolan(a)redhat.com> wrote:
>
> On 01/18/2013 08:27 PM, Jim Kinney wrote:
>> How do I mark a VM to startup if a node fails?
>>
>> 2 hosts in cluster, windows domain controller on one host, backup
>> on second host. Both are marked high priority.
>> "Bad Things" happen and both hosts get rebooted. I want those
>> domain controllers to automatically restart.
>>
> are those VMs also marked as "Highly available" under High
> Availability tab?
>> I'm assuming the failure of the hosts did not knock down the
>> manager. (I have them on separate floors, power, etc).
>> --
>> --
>> James P. Kinney III
>>
>> Every time you stop a school, you will have to build a jail.
>> What you gain at one end you lose at the other. It's like feeding
>> a dog on his own tail. It won't fatten the dog.
>> - Speech 11/23/1900 Mark Twain
>>
>> http://electjimkinney.org
>> http://heretothereideas.blogspot.com/
>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org <mailto:Users@ovirt.org>
>> http://lists.ovirt.org/mailman/listinfo/users
>
[Users] Planned outage :: resources.ovirt.org/lists.ovirt.org :: 2013-01-21 01:00 UTC
by Karsten 'quaid' Wade
There will be an outage of resources.ovirt.org and lists.ovirt.org for
approximately 45 minutes.
The outage will occur at 2013-01-21 01:00 UTC. To view it in your local time:
date -d '2013-01-21 01:00 UTC'
I may start part of the outage 15 minutes before that. If you anticipate
needing services until the outage window, reply back to me with details
ASAP.
== Details ==
We need to resize the Linode instance to get another 15 GB for storage
until we can move services off the Linode permanently, as planned. This
resizing should give us some breathing room.
The account resize is estimated to take 30 minutes, during which time
the Linode VM will be offline. After that, there will be a few minutes
for reboot and restart of services, including the manual starting of the
IRC bot 'ovirtbot'.
The time window chosen coincides with the lowest CPU usage typically
seen on any given day - 01:00 UTC tends to be very quiet for about an
hour. Hopefully no one will even notice the downtime.
If you have any services, such as Jenkins jobs or the Gerrit backup, that may
run during that window, you may want to reschedule them or be prepared for an
error.
== Affected services ==
* resources.ovirt.org
** meeting logs
** packages
* lists.ovirt.org (MailMan)
* ovirtbot
* Gerrit backup (anacron may pick this up)
* Other cronjobs (anacron may pick this up)
== Not-affected services ==
* www.ovirt.org (MediaWiki)
* jenkins.ovirt.org
* gerrit.ovirt.org
* alterway{01,02}.ovirt.org
-- 
Karsten 'quaid' Wade, Sr. Analyst - Community Growth
http://TheOpenSourceWay.org .^\ http://community.redhat.com
@quaid (identi.ca/twitter/IRC) \v' gpg: AD0E0C41
[Users] Attaching an existing KVM installation to oVirt
by Eric_E_Smith@DELL.com
Hello - I'm new to the list and thought I would send my first email. Is there a
way to attach an existing KVM installation (non-Fedora, non-node-based; say,
CentOS or Ubuntu) to oVirt?
Thanks in advance,
Eric
[Users] Storage domain weirdness (or design)
by Alex Leonhardt
Hi,
I see a strange behaviour -
Setup:
1 ovirtmgmt / ovirt-engine host
2 ovirt nodes / HVs
2 storage domains in same cluster & DC
HV1 => storage domain 1 (master)
HV2 => storage domain 2
Issue:
When I create a VM with, say, a ~40 GB disk that should live on
HV2/storage_domain_2, the disk gets created via HV1. Why is that? I realize a
storage domain is "attached to the cluster/DC"; however, when creating the VM I
explicitly selected it to run only on HV2, so why would it still create the
disk via HV1? The mgmt network is (for now) only just that, a mgmt network, not
meant for creating 40 GB disks (dd from /dev/zero) over NFS; it's currently
only a 100 Mbit switch.
Question:
How, if at all, can I make oVirt create the disk from the host the VM is meant
to run on?
Thanks,
Alex
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
[Users] mark VM to start on boot
by Jim Kinney
How do I mark a VM to startup if a node fails?
2 hosts in cluster, windows domain controller on one host, backup on second
host. Both are marked high priority.
"Bad Things" happen and both hosts get rebooted. I want those domain
controllers to automatically restart.
I'm assuming the failure of the hosts did not knock down the manager. (I
have them on separate floors, power, etc).
--
--
James P. Kinney III
Every time you stop a school, you will have to build a jail. What you gain
at one end you lose at the other. It's like feeding a dog on his own tail.
It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://electjimkinney.org
http://heretothereideas.blogspot.com/