[Users] Compiling ovirt-guest-agent on FreeBSD
by Karli Sjöberg
Hi!

As part of a template I'm preparing, I'm wondering how to compile the agent with just the basics included. I've tried running like:

OPTIONS='
--without-gdm-plugin --without-gdm2-plugin --without-kdm-plugin --without-pam-ovirt-cred
--with-gdm-plugin=no --with-gdm2-plugin=no --with-kdm-plugin=no --with-pam-ovirt-cred=no
--disable-gdm-plugin --disable-gdm2-plugin --disable-kdm-plugin --disable-pam-ovirt-cred
--enable-gdm-plugin=no --enable-gdm2-plugin=no --enable-kdm-plugin=no --enable-pam-ovirt-cred=no'

# ./configure ${OPTIONS}

Regardless of how I try, it just responds:
configure: WARNING: unrecognized options: ${OPTIONS}

I took the package from the "official" oVirt.org repo, src file:
ovirt-guest-agent-1.0.6.tar.bz2

What am I doing wrong?
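A quick way to check which of these switches the script actually recognizes (the grep pattern is only a guess at the relevant option names):

# ./configure --help | grep -E 'gdm|kdm|pam'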
--

With kind regards
------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg@slu.se
[Users] A mobile monitoring application for oVirt
by Martin Betak
Hello oVirt users,
I'm in the process of developing a simple monitoring application for oVirt on the Android platform.
This is still under heavy development, but the first usable version can be found at [1].
Please note that this is still a development preview, so it can be a little unstable and the UI design is not yet perfect
(well ... design by programmer :-)), but I hope it could be useful. All comments, remarks,
feature requests and general feedback are very welcome. You can file any issues directly at [2].
Below follow the details of using and configuring the app.
Description:
The goal of this project was to create a simple Android app that would enable oVirt admins to configure conditions on VMs, clusters,
or the whole data center upon which they want to be notified. At the moment you can configure 3 types of "Triggers":
- when VM CPU usage is over a given level
- when VM memory usage is over a given level
- when a VM enters a given state (Down, Unknown ...)
You can choose whether you want just a simple standard Android notification or also want the device to vibrate.
You can also define all these triggers on a per-VM, per-cluster or "global" level.
Configuration:
On first run, the app will prompt you to enter the connection parameters of your running oVirt engine instance.
The API URL should be in the form http://host:port/ovirt-engine/api
Username is user@domain, e.g. admin@internal
Password is ... well, the above user's password :-)
Sadly, only HTTP (not HTTPS) is supported so far for the endpoint URL.
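A quick way to sanity-check those parameters from a shell before typing them into the app (host, port and password below are placeholders):

$ curl -u 'admin@internal:password' http://engine.example.com:8080/ovirt-engine/api

A successful call returns the API entry point as XML; anything else points at the URL or the credentials.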
If you have any more questions feel free to use this thread and I'll do my best to answer them :-)
Best regards,
Martin
[1] https://github.com/matobet/moVirt/blob/master/moVirt/moVirt.apk
[2] https://github.com/matobet/moVirt/issues
[Users] connecting to oVirt/RHEV VMs through proxy and the oVirt API
by i iordanov
Hello,
My apologies for cross-posting, but this discussion concerns both mailing
lists, I think.
In Opaque, I recently started setting the proxy property of SpiceSession
from a console file, as is done in remote-viewer, in order to support
installations where the nodes are not "visible" from the point of view of
the client.
However, then I remembered that I have not seen the proxy property being
available through the oVirt/RHEV API. I looked at the latest remote-viewer
code, and I looked through the oVirt/RHEV API documentation, and indeed, I
see no way to get proxy information other than through the console.vv file.
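For reference, this is roughly what the relevant section of a console.vv file looks like (all values made up; the last line is the one in question):

[virt-viewer]
type=spice
host=10.0.0.5
port=5900
password=XYZ
proxy=http://proxy.example.com:3128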
If I am correct, then this means that both Opaque and remote-viewer cannot
connect to VMs behind a proxy unless a console.vv file is used. And since
console.vv files are unobtainable through the User Portal on mobile
devices, mobile devices effectively cannot connect to such VMs at all if
the user is not an Administrator.
Would it be possible to expose the proxy parameter through the API? I
couldn't find a bug opened about this, but it doesn't mean it isn't there
:).
Thanks!
iordan
--
The conscious mind has only one thread of execution.
[Users] oVirt Weekly Meeting Minutes -- 2013-10-30
by Mike Burns
Minutes: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-30-14.00.html
Minutes (text):
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-30-14.00.txt
Log: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-30-14.00.log.html
============================
#ovirt: oVirt Weekly Meeting
============================
Meeting started by mburns at 14:00:43 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-30-14.00.log.html
.
Meeting summary
---------------
* Agenda and roll call (mburns, 14:00:59)
* 3.3 update releases (mburns, 14:01:06)
* 3.4 planning (mburns, 14:01:11)
* conferences and workshops (mburns, 14:01:19)
* infra update (mburns, 14:01:23)
* other topics (mburns, 14:01:28)
* oVirt 3.3 updates (mburns, 14:07:09)
* 3.3.1 beta was posted last week (mburns, 14:07:26)
* all pending bugs have patches that are merged (mburns, 14:08:08)
* new build coming today (mburns, 14:08:15)
* ACTION: ybronhei to build vdsm with one additional fix (for selinux
issues) (mburns, 14:09:41)
* ACTION: fabiand_ to build new ovirt-node images once vdsm is ready
(mburns, 14:09:46)
* ACTION: sbonazzo to build new ovirt-engine (mburns, 14:10:00)
* tentative plan is to have some testing this week, and release early
next week (assuming no issues) (mburns, 14:10:31)
* ACTION: mburns to send notice of new packages out once they're
available (mburns, 14:12:08)
* LINK:
http://lists.ovirt.org/pipermail/users/2013-October/017263.html do
you want me to do bug report for this? (samppah, 14:16:16)
* ACTION: sahina to follow up on gluster domains using fuse instead of
native gluster (mburns, 14:20:14)
* ovirt 3.4 release (mburns, 14:28:35)
* code freeze is at end of December (mburns, 14:28:49)
* Release set for end of January (mburns, 14:28:56)
* still need to do some work to get a list of features committed for
this release (mburns, 14:29:35)
* ACTION: sbonazzo to create 3.4 release management page (mburns,
14:31:27)
* with links to itamar's feature planning doc (mburns, 14:31:49)
* more details on exact dates, builds, beta, test days, etc to come in
the next few weeks (mburns, 14:33:31)
* Conferences and Workshops (mburns, 14:35:24)
* big developer meetup last week during KVM Forum/LinuxCon EU
(mburns, 14:35:41)
* many talks, many sessions across all the
conferences/workshops/meetings (mburns, 14:35:59)
* itamar sent a writeup already detailing the event to the ovirt
mailing lists (mburns, 14:36:53)
* planning for future presentations and workshops is underway, but no
details just yet (mburns, 14:37:34)
* there will be a devroom at FOSDEM and it's open for CFP now
(mburns, 14:38:18)
* plan is to have at least a few oVirt related talks there (mburns,
14:38:42)
* and hopefully a booth as well (mburns, 14:39:48)
* Infra updates (mburns, 14:41:02)
* no updates from infra team this week, please see their meeting
minutes for updates (mburns, 14:44:22)
* Other topics (mburns, 14:44:26)
* LINK:
https://en.wikipedia.org/wiki/Posting_style#Choosing_the_proper_posting_s...
(SvenKieske, 14:47:13)
Meeting ended at 14:52:41 UTC.
Action Items
------------
* ybronhei to build vdsm with one additional fix (for selinux issues)
* fabiand_ to build new ovirt-node images once vdsm is ready
* sbonazzo to build new ovirt-engine
* mburns to send notice of new packages out once they're available
* sahina to follow up on gluster domains using fuse instead of native
gluster
* sbonazzo to create 3.4 release management page
Action Items, by person
-----------------------
* fabiand_
* fabiand_ to build new ovirt-node images once vdsm is ready
* mburns
* mburns to send notice of new packages out once they're available
* sahina
* sahina to follow up on gluster domains using fuse instead of native
gluster
* sbonazzo
* sbonazzo to build new ovirt-engine
* sbonazzo to create 3.4 release management page
* ybronhei
* ybronhei to build vdsm with one additional fix (for selinux issues)
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (79)
* danken (23)
* itamar (18)
* sbonazzo (10)
* samppah (6)
* ovirtbot (6)
* SvenKieske (5)
* fabiand_ (4)
* JosueDelgado (3)
* backblue (3)
* sahina (2)
* YamakasY (1)
* lvernia (1)
* apuimedo (1)
* ybronhei (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
[Users] How to See Ovirt Console From a Remote Windows Host
by Jon Forrest
I know that this is simple for some of you, but I also
know from Googling around that lots of people have had
trouble seeing their ovirt console from a remote Windows
host. Below I describe what finally worked for me. I hope
this helps somebody avoid wasting as much time as I did
today.
I'm a fairly experienced VMware user who's learning oVirt.
I just installed an all-in-one oVirt server and copied a
CentOS 6.5 ISO into it. I then tried to boot a new VM
but soon learned that console access is different in oVirt
than in VMware.
I then spent over an hour trying the various documented
ways to view a remote console using SPICE on my Windows 7
desktop. I even tried using a Linux VM to see if the Firefox
plugin for SPICE would work. Nothing worked.
What finally worked was installing the virt-viewer Windows
client (http://virt-manager.org/download/). Then, I opened
the oVirt Administration Portal in Firefox running on my
Windows 7 desktop. I created a new VM and configured it
the way I wanted. Then, from the "Virtual Machines" tab, I started
the new VM. Pretty soon the little console icon turned green so
I clicked on it. I got the prompt from Firefox asking me what
app I wanted to associate with the ".vv" file that was downloaded when
I clicked on the console icon. I browsed around and selected
\Program Files\VirtViewer\bin\remote-viewer.exe
which is from the virt-viewer client package I installed above.
I told Firefox to always use this app for this kind of file.
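For what it's worth, the same association can also be made from an elevated command prompt instead of through the browser dialog (the ProgID name "VirtViewer.File" is arbitrary):

assoc .vv=VirtViewer.File
ftype VirtViewer.File="C:\Program Files\VirtViewer\bin\remote-viewer.exe" "%1"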
This works great! I was able to boot the CentOS system and
install it with no problems.
Good luck!
Jon Forrest
oVirt 3.2 - iSCSI offload (broadcom - bnx2i)
by Ricardo Esteves
Hi,
I've put my host into maintenance, then I configured iSCSI offload for my
Broadcom cards, changing the target files (192.168.12.2,3260 and
192.168.12.4,3260) in my node
iqn.1986-03.com.hp:storage.msa2324i.1226151a6 to use interface
bnx2i.d8:d3:85:67:e3:bb, but after activating the host, the configuration
is back to default.
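For reference, a binding like this is normally created with iscsiadm along these lines (target and portal taken from the listings below; whether VDSM preserves such a record is exactly my question):

# iscsiadm -m node -T iqn.1986-03.com.hp:storage.msa2324i.1226151a6 -p 192.168.12.2:3260 -I bnx2i.d8:d3:85:67:e3:bb -o new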
[root@blade6 iscsi]# ll ifaces/
total 16
-rw-------. 1 root root 248 Mai 13 2013 bnx2i.d8:d3:85:67:e3:b9
-rw-------. 1 root root 282 Abr 10 22:10 bnx2i.d8:d3:85:67:e3:bb
-rw-------. 1 root root 247 Ago 15 2012 bnx2i.d8:d3:85:bf:e9:b1
-rw-------. 1 root root 247 Ago 15 2012 bnx2i.d8:d3:85:bf:e9:b5
[root@blade6 iscsi]# ll
nodes/iqn.1986-03.com.hp\:storage.msa2324i.1226151a60/
total 16
-rw-------. 1 root root 1782 Abr 10 22:34 192.168.11.1,3260
-rw-------. 1 root root 1782 Abr 10 22:34 192.168.11.3,3260
-rw-------. 1 root root 1782 Abr 10 22:34 192.168.12.2,3260
-rw-------. 1 root root 1782 Abr 10 22:34 192.168.12.4,3260
Anyone know how to configure iscsi offload for ovirt?
[Users] Migrate cluster 3.3 -> 3.4 hosted on existing hosts
by Ted Miller
Current setup:
* 3 identical hosts running on HP GL180 g5 servers
o gluster running 5 volumes in replica 3
* engine running on VMWare Server on another computer (that computer is NOT
available to convert to a host)
Where I want to end up:
* 3 identical hosted-engine hosts running on HP GL180 g5 servers
o gluster running 6 volumes in replica 3
+ new volume will be nfs storage for engine VM
* hosted engine in oVirt VM
* as few changes to current setup as possible
The two pages I found on the wiki are: Hosted Engine Howto
<http://www.ovirt.org/Hosted_Engine_Howto> and Migrate to Hosted Engine
<http://www.ovirt.org/Migrate_to_Hosted_Engine>. Both were written during
the testing process, and have not been updated to reflect production status.
I don't know if anything in the process has changed since they were written.
Process outlined in above two pages (as I understand it):
have nfs file store ready to hold VM
Do a minimal install (not clear if oVirt Node, CentOS, or Fedora was
used; I am CentOS-based)
# yum install ovirt-hosted-engine-setup
# hosted-engine --deploy
Install OS on VM
return to host console
at "Please install the engine in the VM" prompt on host
on VM console
# yum install ovirt-engine
on old engine:
service ovirt-engine stop
chkconfig ovirt-engine off
set up dns for new engine
# engine-backup --mode=backup --file=backup1 --log=backup1.log
scp backup file to new engine VM
on new VM:
# engine-backup --mode=restore --file=backup1 --log=backup1-restore.log
--change-db-credentials --db-host=didi-lap --db-user=engine --db-password
--db-name=engine
# engine-setup
on host:
run script until: "The system will wait until the VM is down."
on new VM:
# reboot
on Host: finish script
My questions:
1. Is the above still the recommended way to do a hosted-engine install?
2. Will it blow up at me if I use my existing host (with glusterfs all set
up, etc) as the starting point, instead of a clean install?
Thank you for letting me benefit from your experience,
Ted Miller
Elkhart, IN, USA
[Users] Unable to delete a snapshot
by Nicolas Ecarnot
Hi,
With our oVirt 3.3, I created a snapshot 3 weeks ago on a VM I had
properly shut down.
It has run fine so far.
Today, after having shut the VM down properly again, I tried to delete the
snapshot and I got an error:
"Failed to delete snapshot 'blahblahbla' for VM 'myVM'."
The disk is thin provisioned, accessed via virtIO, nothing special.
The log below comes from the manager.
I hope someone can help us, because this server is quite important.
Thank you.
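For completeness, the SPM task state can also be dumped on the host with vdsClient (verb names from memory for VDSM of this era, so they may need adjusting):

# vdsClient -s 0 getAllTasksStatuses
# vdsClient -s 0 getAllTasksInfo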
2014-01-06 10:10:58,826 INFO
[org.ovirt.engine.core.bll.RemoveSnapshotCommand]
(ajp--127.0.0.1-8702-8) Lock Acquired to object EngineLock [exclu
siveLocks= key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM
, sharedLocks= ]
2014-01-06 10:10:58,837 INFO
[org.ovirt.engine.core.bll.RemoveSnapshotCommand]
(ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotCommand internal:
false. Entities affected : ID: cb953dc1-c796-457a-99a1-0e54f1c0c338
Type: VM
2014-01-06 10:10:58,840 INFO
[org.ovirt.engine.core.bll.RemoveSnapshotCommand]
(ajp--127.0.0.1-8702-8) Lock freed to object EngineLock [exclusiveLocks=
key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM
, sharedLocks= ]
2014-01-06 10:10:58,844 INFO
[org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand]
(ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotSingleDiskCommand
internal: true. Entities affected : ID:
00000000-0000-0000-0000-000000000000 Type: Storage
2014-01-06 10:10:58,848 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand]
(ajp--127.0.0.1-8702-8) START, MergeSnapshotsVDSCommand( storagePoolId =
5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false,
storageDomainId = 11a077c7-658b-49bb-8596-a785109c24c9, imageGroupId =
69220da6-eeed-4435-aad0-7aa33f3a0d21, imageId =
506085b6-40e0-4176-a4df-9102857f51f2, imageId2 =
c50561d9-c3ba-4366-b2bc-49bbfaa4cd23, vmId =
cb953dc1-c796-457a-99a1-0e54f1c0c338, postZero = false), log id: 22d6503b
2014-01-06 10:10:59,511 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand]
(ajp--127.0.0.1-8702-8) FINISH, MergeSnapshotsVDSCommand, log id: 22d6503b
2014-01-06 10:10:59,518 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-8)
CommandAsyncTask::Adding CommandMultiAsyncTasks object for command
b402868f-b7f9-4c0e-a6fd-bdc51ff49952
2014-01-06 10:10:59,519 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(ajp--127.0.0.1-8702-8) CommandMultiAsyncTasks::AttachTask: Attaching
task 6caec3bc-fc66-42be-a642-7733fc033103 to command
b402868f-b7f9-4c0e-a6fd-bdc51ff49952.
2014-01-06 10:10:59,525 INFO
[org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-8)
Adding task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command
RemoveSnapshot, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling
hasn't started yet..
2014-01-06 10:10:59,530 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-8) Correlation ID: 3b3e6fb1, Job ID:
53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID:
-1, Message: Snapshot 'Maj 47 60 vers 5.2.3' deletion for VM 'uc-674'
was initiated by necarnot.
2014-01-06 10:10:59,532 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(ajp--127.0.0.1-8702-8) BaseAsyncTask::StartPollingTask: Starting to
poll task 6caec3bc-fc66-42be-a642-7733fc033103.
2014-01-06 10:11:01,811 INFO
[org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-20) Polling and updating Async Tasks: 2
tasks, 1 tasks to poll now
2014-01-06 10:11:01,824 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(DefaultQuartzScheduler_Worker-20) Failed in HSMGetAllTasksStatusesVDS
method
2014-01-06 10:11:01,825 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(DefaultQuartzScheduler_Worker-20) Error code GeneralException and error
message VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2'
2014-01-06 10:11:01,826 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-20) SPMAsyncTask::PollTask: Polling task
6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot,
Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned
status finished, result 'cleanSuccess'.
2014-01-06 10:11:01,829 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-20) BaseAsyncTask::LogEndTaskFailure:
Task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command
RemoveSnapshot, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with
failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2',
-- Exception: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2'
2014-01-06 10:11:01,832 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-20)
CommandAsyncTask::EndActionIfNecessary: All tasks of command
b402868f-b7f9-4c0e-a6fd-bdc51ff49952 has ended -> executing EndAction
2014-01-06 10:11:01,833 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-20) CommandAsyncTask::EndAction: Ending
action for 1 tasks (command ID: b402868f-b7f9-4c0e-a6fd-bdc51ff49952):
calling EndAction .
2014-01-06 10:11:01,834 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27)
CommandAsyncTask::EndCommandAction [within thread] context: Attempting
to EndAction RemoveSnapshot, executionIndex: 0
2014-01-06 10:11:01,839 ERROR
[org.ovirt.engine.core.bll.RemoveSnapshotCommand] (pool-6-thread-27)
[3b3e6fb1] Ending command with failure:
org.ovirt.engine.core.bll.RemoveSnapshotCommand
2014-01-06 10:11:01,844 ERROR
[org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand]
(pool-6-thread-27) [33fa2a5d] Ending command with failure:
org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand
2014-01-06 10:11:01,848 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(pool-6-thread-27) Correlation ID: 3b3e6fb1, Job ID:
53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID:
-1, Message: Failed to delete snapshot 'Maj 47 60 vers 5.2.3' for VM
'uc-674'.
2014-01-06 10:11:01,850 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27)
CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for
action type RemoveSnapshot completed, handling the result.
2014-01-06 10:11:01,851 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27)
CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for
action type RemoveSnapshot succeeded, clearing tasks.
2014-01-06 10:11:01,853 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(pool-6-thread-27) SPMAsyncTask::ClearAsyncTask: Attempting to clear
task 6caec3bc-fc66-42be-a642-7733fc033103
2014-01-06 10:11:01,853 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(pool-6-thread-27) START, SPMClearTaskVDSCommand( storagePoolId =
5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false,
taskId = 6caec3bc-fc66-42be-a642-7733fc033103), log id: 424e7cf
2014-01-06 10:11:01,873 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(pool-6-thread-27) START, HSMClearTaskVDSCommand(HostName =
serv-vm-adm9, HostId = ba48edd4-c528-4832-bda4-4ab66245df24,
taskId=6caec3bc-fc66-42be-a642-7733fc033103), log id: 12eec929
2014-01-06 10:11:01,884 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(pool-6-thread-27) FINISH, HSMClearTaskVDSCommand, log id: 12eec929
2014-01-06 10:11:01,885 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(pool-6-thread-27) FINISH, SPMClearTaskVDSCommand, log id: 424e7cf
2014-01-06 10:11:01,886 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(pool-6-thread-27) BaseAsyncTask::RemoveTaskFromDB: Removed task
6caec3bc-fc66-42be-a642-7733fc033103 from DataBase
2014-01-06 10:11:01,887 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27)
CommandAsyncTask::HandleEndActionResult [within thread]: Removing
CommandMultiAsyncTasks object for entity
b402868f-b7f9-4c0e-a6fd-bdc51ff49952
2014-01-06 10:11:07,703 INFO
[org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-9) Setting new tasks map. The map
contains now 1 tasks
2014-01-06 10:12:07,703 INFO
[org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-99) Setting new tasks map. The map
contains now 0 tasks
2014-01-06 10:12:07,704 INFO
[org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-99) Cleared all tasks of pool
5849b030-626e-47cb-ad90-3ce782d831b3.
--
Nicolas Ecarnot
"Could not connect host to Data Center" after rebooting, how to resolve?
by Boudewijn Ector ICT
Hi list,
I had to do some fsck-related things last week on my oVirt box (CentOS,
single node, and NFS on localhost).
Afterwards, oVirt refused to start VMs, because according to the web
interface it can't connect the host to the Data Center.
On the OS itself everything works fine (I just replaced the IP by 'IP'):
[root@server ovirt-engine]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_leiden-lv_root
51606140 6872592 42112108 15% /
tmpfs 3978120 0 3978120 0% /dev/shm
/dev/sdb1 495844 99542 370702 22% /boot
/dev/mapper/vg_server-lv_home
62990260 2584272 57206196 5% /home
/dev/sda1 5814366992 4629615452 1184751540 80% /raid
IP:/raid/ovirt/data
5814367232 4629615616 1184751616 80%
/rhev/data-center/mnt/IP:_raid_ovirt_data
IP:/raid/ovirt/iso
5814367232 4629615616 1184751616 80%
/rhev/data-center/mnt/IP:_raid_ovirt_iso
In the event log in the web interface I found correlation ID 7a735111,
so I grepped my logs for that one:
engine.log:2014-04-30 15:59:22,851 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(ajp--127.0.0.1-8702-8) [7a735111] Lock Acquired to object EngineLock
[exclusiveLocks= key: 6bee0e2d-961c-453d-a266-e4623f91e162 value: STORAGE
engine.log:2014-04-30 15:59:22,888 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-6-thread-49) [7a735111] Running command:
ActivateStorageDomainCommand internal: false. Entities affected : ID:
6bee0e2d-961c-453d-a266-e4623f91e162 Type: Storage
engine.log:2014-04-30 15:59:22,894 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-6-thread-49) [7a735111] Lock freed to object
EngineLock [exclusiveLocks= key: 6bee0e2d-961c-453d-a266-e4623f91e162
value: STORAGE
engine.log:2014-04-30 15:59:22,895 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-6-thread-49) [7a735111] ActivateStorage Domain.
Before Connect all hosts to pool. Time:4/30/14 3:59 PM
engine.log:2014-04-30 15:59:22,945 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-6-thread-49) [7a735111] START,
ActivateStorageDomainVDSCommand( storagePoolId =
00000002-0002-0002-0002-0000000000ec, ignoreFailoverLimit = false,
storageDomainId = 6bee0e2d-961c-453d-a266-e4623f91e162), log id: 5fecb439
engine.log:2014-04-30 15:59:23,011 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(org.ovirt.thread.pool-6-thread-49) [7a735111] hostFromVds::selectedVds
- server, spmStatus Unknown_Pool, storage pool Default
engine.log:2014-04-30 15:59:23,015 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
(org.ovirt.thread.pool-6-thread-49) [7a735111] START,
ConnectStoragePoolVDSCommand(HostName = server, HostId =
ff23de79-f17c-439d-939e-d8f3d9672367, storagePoolId =
00000002-0002-0002-0002-0000000000ec, vds_spm_id = 1, masterDomainId =
6bee0e2d-961c-453d-a266-e4623f91e162, masterVersion = 1), log id: 15866a95
engine.log:2014-04-30 15:59:23,151 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
(org.ovirt.thread.pool-6-thread-49) [7a735111] FINISH,
ConnectStoragePoolVDSCommand, log id: 15866a95
engine.log:2014-04-30 15:59:23,152 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(org.ovirt.thread.pool-6-thread-49) [7a735111]
IrsBroker::Failed::ActivateStorageDomainVDS due to:
IRSNonOperationalException: IRSGenericException: IRSErrorException:
IRSNonOperationalException: Could not connect host to Data
Center(Storage issue)
engine.log:2014-04-30 15:59:23,156 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-6-thread-49) [7a735111] FINISH,
ActivateStorageDomainVDSCommand, log id: 5fecb439
engine.log:2014-04-30 15:59:23,157 ERROR
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-6-thread-49) [7a735111] Command
org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand throw Vdc
Bll exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.irsbroker.IRSNonOperationalException:
IRSGenericException: IRSErrorException: IRSNonOperationalException:
Could not connect host to Data Center(Storage issue) (Failed with error
ENGINE and code 5001)
engine.log:2014-04-30 15:59:23,162 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-6-thread-49) [7a735111] Command
[id=c2d8b433-f203-4d9e-b241-222eebf3dbae]: Compensating
CHANGED_STATUS_ONLY of
org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap;
snapshot: EntityStatusSnapshot [id=storagePoolId =
00000002-0002-0002-0002-0000000000ec, storageId =
6bee0e2d-961c-453d-a266-e4623f91e162, status=InActive].
engine.log:2014-04-30 15:59:23,170 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-49) [7a735111] Correlation ID: 7a735111,
Job ID: 75dcda45-e0a3-46e3-b79f-8df9f0ed9d85, Call Stack: null, Custom
Event ID: -1, Message: Failed to activate Storage Domain data (Data
Center Default) by admin
I guess the most relevant error is:
engine.log:2014-04-30 15:59:23,152 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(org.ovirt.thread.pool-6-thread-49) [7a735111]
IrsBroker::Failed::ActivateStorageDomainVDS due to:
IRSNonOperationalException: IRSGenericException: IRSErrorException:
IRSNonOperationalException: Could not connect host to Data
Center(Storage issue)
Okay, so it can't connect to the Data Center, yet the NFS storage is
mounted and looks fine.
The directory also looks fine:
[root@leiden data]# ls -al
total 12
drwxr-xr-x. 3 vdsm kvm 4096 Mar 30 03:27 .
drwxr-xr-x. 4 vdsm kvm 4096 Mar 30 03:26 ..
drwxr-xr-x. 5 vdsm kvm 4096 Mar 30 03:27
6bee0e2d-961c-453d-a266-e4623f91e162
-rwxr-xr-x. 1 vdsm kvm 0 Mar 30 03:27 __DIRECT_IO_TEST__
(and the UUID directory does indeed contain my VMs).
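One more probe worth running, mirroring the direct-I/O check that VDSM itself performs on a domain (path taken from the mount above; it only rewrites the __DIRECT_IO_TEST__ file, which exists for this purpose):

# sudo -u vdsm dd if=/dev/zero of='/rhev/data-center/mnt/IP:_raid_ovirt_data/__DIRECT_IO_TEST__' oflag=direct bs=4096 count=1

If this fails with an I/O or permission error, the problem is at the NFS layer rather than in the engine.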
How should I solve this? To me everything looks just fine.
Cheers
Boudewijn Ector