[QE][ACTION REQUIRED] oVirt 3.5.1 RC status
by Sandro Bonazzola
Hi,
According to the new oVirt 3.5.1 schedule, we're going to start composing the oVirt 3.5.1 RC on *2015-01-07 08:00 UTC* from the 3.5 branch.
The new GA release date is now targeted for 2015-01-14.
The VDSM team has decided to release a new vdsm package as an async release for 3.5.0. Due to issues with the Fedora Koji build system, the packages are not yet
ready to be released.
ACTION: VDSM team to follow up when the packages are ready to be released.
The bug tracker [1] shows 1 open blocker:
Bug ID Whiteboard Status Summary
1160846 sla NEW Can't add disk to VM without specifying disk profile when the storage domain has more than one disk profile
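For scripting similar checks, the open bugs blocking a tracker can be fetched over Bugzilla's REST API. A rough sketch follows; the endpoint shape and the `blocks` search field are assumptions to verify against the Bugzilla REST documentation:

```python
# Sketch: build a Bugzilla REST query for open bugs blocking the 3.5.1
# tracker bug. The "/rest/bug" endpoint and the "blocks" parameter are
# assumptions; check your Bugzilla instance's REST docs before relying on them.
from urllib.parse import urlencode

BUGZILLA = "https://bugzilla.redhat.com/rest/bug"

def tracker_query(tracker_id, statuses=("NEW", "ASSIGNED", "POST")):
    """Return a REST URL listing open bugs that block the given tracker."""
    params = [("blocks", str(tracker_id))]
    params += [("bug_status", s) for s in statuses]
    params.append(("include_fields", "id,status,whiteboard,summary"))
    return BUGZILLA + "?" + urlencode(params)

print(tracker_query(1155170))
```

Feeding the resulting URL to any HTTP client returns the blocker list as JSON.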
In order to stabilize the release, a new branch ovirt-engine-3.5.1 will be created from the same git hash used for composing the RC.
- ACTION: assignee, please provide an ETA on the above blocker
Maintainers:
- Please be sure that the 3.5 snapshot allows creating VMs
- Please be sure that no pending patches are going to block the release
- If any patch must block the RC release, please raise the issue as soon as possible.
There are still 62 bugs [2] targeted to 3.5.1.
Excluding node and documentation bugs, we still have 41 bugs [3] targeted to 3.5.1.
Maintainers / Assignee:
- Please add bugs to the tracker if you think that 3.5.1 should not be released without them fixed.
- ACTION: please update the target to 3.5.2 or later for bugs that won't be in 3.5.1:
  this will ease gathering the blocking bugs for the next releases.
- ACTION: please fill in the release notes; the page has been created here [4]
Community:
- If you're testing oVirt 3.5 nightly snapshot, please add yourself to the test page [5]
[1] http://bugzilla.redhat.com/1155170
[2] http://goo.gl/7G0PDV
[3] http://goo.gl/6gUbVr
[4] http://www.ovirt.org/OVirt_3.5.1_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.5.1_Testing
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
FQDN for vm creating with hosted-engine
by Yue, Cong
Hi
Now I am trying to confirm KVM's HA with oVirt, following the walkthrough in this guide:
http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
During the VM setup an FQDN is requested; what does that FQDN refer to? Does it mean the hostname of the VM host? In my case that is compute2-2.
The following is my hosts file; my VM host and storage is 10.0.0.92,
and I am trying to assign 10.0.0.95 to the hosted VM.
---
[root@compute2-2 ~]# cat /etc/hosts
10.0.0.93 compute2-2 nfs2-2
10.0.0.95 ovrit-test
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
----
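In hosted-engine setups, the FQDN the setup asks for normally names the engine VM itself, not the host, and that name must resolve from the host. A small sketch that checks the mapping in the hosts file quoted above (entries, including the `ovrit-test` spelling, copied from the message):

```python
# Sketch: map hostnames to IPs from an /etc/hosts-style file to verify that
# the engine VM's name (here presumably "ovrit-test" -> 10.0.0.95) resolves.
HOSTS = """\
10.0.0.93 compute2-2 nfs2-2
10.0.0.95 ovrit-test
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
"""

def parse_hosts(text):
    """Return {name: ip} for every hostname alias in the file."""
    mapping = {}
    for line in text.splitlines():
        fields = line.split("#", 1)[0].split()
        if len(fields) >= 2:
            ip, names = fields[0], fields[1:]
            for name in names:
                mapping.setdefault(name, ip)  # first entry wins, like resolvers
    return mapping

print(parse_hosts(HOSTS)["ovrit-test"])  # 10.0.0.95
```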
Also, how can I remove the VM I installed? When I try to run hosted-engine --deploy, it shows:
---
[root@compute2-2 ~]# hosted-engine --deploy
[ INFO ] Stage: Initializing
         Continuing will configure this host for serving as hypervisor and create a VM where you have to install oVirt Engine afterwards.
         Are you sure you want to continue? (Yes, No)[Yes]: Yes
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
         Configuration files: []
         Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20141216144036-30j0wk.log
         Version: otopi-1.3.0 (otopi-1.3.0-1.el7)
[ INFO ] Hardware supports virtualization
[ INFO ] Bridge ovirtmgmt already created
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ ERROR ] The following VMs has been found: ac4c8d35-ca47-4394-afa8-1180c768128c
[ ERROR ] Failed to execute stage 'Environment setup': Cannot setup Hosted Engine with other VMs running
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO ] Answer file '/etc/ovirt-hosted-engine/answers.conf' has been updated
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
---
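Setup refuses to continue while another VM is running, and the offending VM ID is in the ERROR line of the output above. A small helper to pull it out; the follow-up step of actually stopping that VM (e.g. with vdsClient or virsh) is left manual and is an assumption about the usual cleanup:

```python
# Sketch: extract the VM UUID from the hosted-engine setup error line so it
# can be shut down before re-running --deploy. The sample line is copied
# verbatim from the setup output above.
import re

ERROR_LINE = ("[ ERROR ] The following VMs has been found: "
              "ac4c8d35-ca47-4394-afa8-1180c768128c")

# Standard 8-4-4-4-12 lowercase-hex UUID pattern.
UUID_RE = re.compile(
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}")

def find_vm_ids(text):
    """Return all VM UUIDs mentioned in setup output."""
    return UUID_RE.findall(text)

print(find_vm_ids(ERROR_LINE))  # ['ac4c8d35-ca47-4394-afa8-1180c768128c']
```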
--
Thanks,
Cong
________________________________
This e-mail message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. If you are the intended recipient, please be advised that the content of this message is subject to access, review and disclosure by the sender's e-mail System Administrator.
Ovirt Node Heartbeat Settings
by Punit Dambiwal
Hi,
I want to know more about the oVirt engine and host node heartbeat settings
and intervals: how and when does the engine consider a host node dead and
issue the command to fence it for a reboot?
Is there any way to modify those intervals and settings to prevent
false positives?
Thanks,
Punit
About translation of oVirt-engine-reports on Zanata.
by 张亚琪
hi all,
I have been focusing on ovirt-engine-reports recently and have found some
small problems on Zanata: the content of this project is not synchronized
with the latest code. Would you update the oVirt Engine Reports project on
Zanata? Thank you for your time. :)
Re: [ovirt-users] [RFI] oVirt 3.6 Planning
by Itamar Heim
On Dec 13, 2014 7:07 AM, Jason Greene <jason.greene(a)redhat.com> wrote:
>
> > On 12.09.2014 14:22, Itamar Heim wrote:
> >
> > With oVirt 3.5 nearing GA, time to ask for "what do you want to see in
> > oVirt 3.6"?
>
> + Windows HV Support: https://bugzilla.redhat.com/show_bug.cgi?id=1125297
>
Just to note, you can do this today by either:
- vdsm custom hook
- change of engine config if you know your hosts are only 7.0
- iirc, also possible by editing the specific osinfo config file to add the flags there
> Without these flags, my testing shows a completely idle 4 vcpu win slave
> uses ~15% of a host core, which limits overcommit ability. With them it
> goes down to 3.6% in my testing. hv_relaxed on its own shows no
> improvement over idle time.
>
> Unfortunately there is a KVM kernel bug that leads to win hangs with
> these flags, and so until RHEL gets 3.16, which looks like 7.1, only Fedora works:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1091818
>
> + Ability to add local storage without putting the host in maintenance mode
>
This should work today. Only the 'configure local storage' wizard requires it, as that wizard performs: create DC, create cluster, move host to the new cluster, add storage domain.
The 'move host to new cluster' step requires the host to be in maintenance. Adding another local domain should not require it.
> + Some out-of-the-box option for self-hosted engine without shared storage
> (e.g. gluster, ceph, drbd, application-directed replication, etc)
>
Focus here will be on gluster for 3.6
Thanks,
Itamar
>
> Thanks!
>
> --
> Jason T. Greene
> WildFly Lead / JBoss EAP Platform Architect
> JBoss, a division of Red Hat
>
Complete CentOS 7 environment
by Aslam, Usman
We are upgrading hardware, and I'm upgrading/rebuilding our oVirt infrastructure.
Are CentOS 7 host nodes supported? And can the engine be installed on CentOS 7? (The 3.5 repo isn't working for me.)
Thanks,
Usman
Re: [ovirt-users] 3. vdsm losing connection to libvirt (Chris Adams)
by Nikolai Sednev
Hi,
Can I get the engine, libvirt, vdsm and mom logs from host8, plus the connectivity log?
Have you tried installing a clean OS on the hosts, especially on the problematic host?
I'd also try disabling JSON-RPC on the hosts: put them into maintenance and clear the JSON-RPC checkbox on each host, just to see whether that resolves the issue.
Thanks in advance.
Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501
Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev
----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Tuesday, December 16, 2014 5:50:28 PM
Subject: Users Digest, Vol 39, Issue 98
Send Users mailing list submissions to
users(a)ovirt.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
users-request(a)ovirt.org
You can reach the person managing the list at
users-owner(a)ovirt.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."
Today's Topics:
1. Re: Free Ovirt Powered Cloud (Lior Vernia)
2. gluster rpms not found (Pat Pierson)
3. vdsm losing connection to libvirt (Chris Adams)
4. Re: Creating new users on oVirt 3.5 (Donny Davis)
5. Re: gfapi, 3.5.1 (Alex Crow)
----------------------------------------------------------------------
Message: 1
Date: Tue, 16 Dec 2014 15:55:02 +0200
From: Lior Vernia <lvernia(a)redhat.com>
To: Donny Davis <donny(a)cloudspin.me>
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Free Ovirt Powered Cloud
Message-ID: <549039B6.2010804(a)redhat.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi Donny,
On 15/12/14 18:24, Donny Davis wrote:
> Hi guys, I'm providing a free public cloud solution entirely based on
> vanilla oVirt called cloudspin.me <http://cloudspin.me>
>
This looks great! :)
> It runs on IPv6, and I am looking for people to use the system, host
> services and report back to me with their results.
>
Do you also use IPv6 internally in your deployment? e.g. assign IPv6
addresses to your hosts, storage domain, power management etc.? We'd be
very interested to hear what works and what doesn't. And perhaps help
push forward what doesn't, if you need it :)
> Data I am looking for
>
> Connection Speed - Is it comparable to other services
>
> User experience - Are there any changes recommended
>
> Does it work for you - What does, and does not work for you.
>
>
>
> I am trying to get funding to keep this a free resource for everyone to
> use. (not from here:)
>
> I am completely open to any and all suggestions, and or help with
> things. I am a one man show at the moment.
>
> If anyone has any questions please email me back
>
> Donny D
>
>
>
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
------------------------------
Message: 2
Date: Tue, 16 Dec 2014 09:08:57 -0500
From: Pat Pierson <ihasn2004(a)gmail.com>
To: nathan(a)robotics.net
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: [ovirt-users] gluster rpms not found
Message-ID:
<CAMRYiEiKL1MEGoHWjKtnhW3DXjouU0w3hs5zFx75sfBL8M4JaQ(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Nathan,
Did you find a workaround for this? I am running into the same issue.
Is there a way to force vdsm to see gluster, or a way to manually run the
search so I can see why it fails?
><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 | www.broadsoft.com

On Fri, Jun 20, 2014 at 11:01 AM, Nathan Stratton <nathan at robotics.net> wrote:
> Actually I have vdsm-gluster, that is why vdsm tries to find the gluster
> packages. Is there a way I can run the vdsm gluster rpm search manually to
> see what is going wrong?
>
> [root at virt01a ~]# yum list installed |grep vdsm
> vdsm.x86_64 4.14.9-0.el6 @ovirt-3.4-stable
> vdsm-cli.noarch 4.14.9-0.el6 @ovirt-3.4-stable
> vdsm-gluster.noarch 4.14.9-0.el6 @ovirt-3.4-stable
> vdsm-python.x86_64 4.14.9-0.el6 @ovirt-3.4-stable
> vdsm-python-zombiereaper.noarch
> vdsm-xmlrpc.noarch 4.14.9-0.el6 @ovirt-3.4-stable
>
> ><>
> nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 | www.broadsoft.com
>
> On Thu, Jun 19, 2014 at 8:39 PM, Andrew Lau <andrew at andrewklau.com> wrote:
>> You're missing vdsm-gluster
>>
>> yum install vdsm-gluster
>>
>> On Fri, Jun 20, 2014 at 6:24 AM, Nathan Stratton <nathan at robotics.net> wrote:
>> > I am running ovirt 3.4 and have gluster installed:
>> >
>> > [root at virt01a]# yum list installed |grep gluster
>> > glusterfs.x86_64 3.5.0-2.el6 @ovirt-glusterfs-epel
>> > glusterfs-api.x86_64 3.5.0-2.el6 @ovirt-glusterfs-epel
>> > glusterfs-cli.x86_64 3.5.0-2.el6 @ovirt-glusterfs-epel
>> > glusterfs-fuse.x86_64 3.5.0-2.el6 @ovirt-glusterfs-epel
>> > glusterfs-libs.x86_64 3.5.0-2.el6 @ovirt-glusterfs-epel
>> > glusterfs-rdma.x86_64 3.5.0-2.el6 @ovirt-glusterfs-epel
>> > glusterfs-server.x86_64 3.5.0-2.el6 @ovirt-glusterfs-epel
>> >
>> > However vdsm can't seem to find them:
>> >
>> > Thread-13::DEBUG::2014-06-19 16:15:57,250::caps::458::root::(_getKeyPackages) rpm package glusterfs-rdma not found
>> > Thread-13::DEBUG::2014-06-19 16:15:57,250::caps::458::root::(_getKeyPackages) rpm package glusterfs-fuse not found
>> > Thread-13::DEBUG::2014-06-19 16:15:57,251::caps::458::root::(_getKeyPackages) rpm package gluster-swift not found
>> > Thread-13::DEBUG::2014-06-19 16:15:57,252::caps::458::root::(_getKeyPackages) rpm package gluster-swift-object not found
>> > Thread-13::DEBUG::2014-06-19 16:15:57,252::caps::458::root::(_getKeyPackages) rpm package glusterfs not found
>> > Thread-13::DEBUG::2014-06-19 16:15:57,252::caps::458::root::(_getKeyPackages) rpm package gluster-swift-plugin not found
>> > Thread-13::DEBUG::2014-06-19 16:15:57,254::caps::458::root::(_getKeyPackages) rpm package gluster-swift-account not found
>> > Thread-13::DEBUG::2014-06-19 16:15:57,254::caps::458::root::(_getKeyPackages) rpm package gluster-swift-proxy not found
>> > Thread-13::DEBUG::2014-06-19 16:15:57,254::caps::458::root::(_getKeyPackages) rpm package gluster-swift-doc not found
>> > Thread-13::DEBUG::2014-06-19 16:15:57,255::caps::458::root::(_getKeyPackages) rpm package glusterfs-server not found
>> > Thread-13::DEBUG::2014-06-19 16:15:57,255::caps::458::root::(_getKeyPackages) rpm package gluster-swift-container not found
>> > Thread-13::DEBUG::2014-06-19 16:15:57,255::caps::458::root::(_getKeyPackages) rpm package glusterfs-geo-replication not found
>> >
>> > Any ideas?
>> >
>> > ><>
>> > nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 | www.broadsoft.com
>> >
>> > _______________________________________________
>> > Users mailing list
>> > Users at ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.ovirt.org/pipermail/users/attachments/20140621/9b14c8fe/atta...>
--
Patrick Pierson
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.ovirt.org/pipermail/users/attachments/20141216/58d14872/atta...>
------------------------------
Message: 3
Date: Tue, 16 Dec 2014 08:48:48 -0600
From: Chris Adams <cma(a)cmadams.net>
To: users(a)ovirt.org
Subject: [ovirt-users] vdsm losing connection to libvirt
Message-ID: <20141216144848.GA1708(a)cmadams.net>
Content-Type: text/plain; charset=us-ascii
I have an oVirt setup with three nodes, all running CentOS 7, and a
hosted engine running CentOS 6. Two of the nodes (node8 and node9) are
configured for hosted engine, and the third (node2) is just a "regular"
node (as you might guess from the names, more nodes are coming as I
migrate VMs to oVirt).
On one node, node8, vdsm periodically loses its connection to libvirt,
which causes vdsm to restart. There doesn't appear to be any trigger
that I can see (not time of day, load, etc. related). The engine VM is
up and running on node8 (don't know if that has anything to do with it).
I get some entries in /var/log/messages repeated continuously; the
"ovirt-ha-broker: sending ioctl 5401 to a partition" I mentioned before,
and the following:
Dec 15 20:56:23 node8 journal: User record for user '107' was not found: No such file or directory
Dec 15 20:56:23 node8 journal: Group record for user '107' was not found: No such file or directory
I don't think those have any relevance (don't know where they come
from); filtering those out, I see:
Dec 15 20:56:33 node8 journal: End of file while reading data: Input/output error
Dec 15 20:56:33 node8 journal: Tried to close invalid fd 0
Dec 15 20:56:38 node8 journal: vdsm root WARNING connection to libvirt broken. ecode: 1 edom: 7
Dec 15 20:56:38 node8 journal: vdsm root CRITICAL taking calling process down.
Dec 15 20:56:38 node8 journal: vdsm vds ERROR libvirt error
Dec 15 20:56:38 node8 journal: ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsCapabilities: Error 16 from getVdsCapabilities: Unexpected exception
Dec 15 20:56:45 node8 journal: End of file while reading data: Input/output error
Dec 15 20:56:45 node8 vdsmd_init_common.sh: vdsm: Running run_final_hooks
Dec 15 20:56:45 node8 systemd: Starting Virtual Desktop Server Manager...
<and then all the normal-looking vdsm startup>
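The "filtering those out" step above can be scripted; a minimal sketch over sample lines copied from the message:

```python
# Sketch: drop the repeating "User/Group record for user '107'" noise so the
# libvirt disconnect lines stand out. Sample input copied from the message.
SAMPLE = """\
Dec 15 20:56:23 node8 journal: User record for user '107' was not found: No such file or directory
Dec 15 20:56:23 node8 journal: Group record for user '107' was not found: No such file or directory
Dec 15 20:56:38 node8 journal: vdsm root WARNING connection to libvirt broken. ecode: 1 edom: 7
"""

NOISE = "record for user '107' was not found"

# Keep only lines that do not match the noise pattern (grep -v equivalent).
kept = [line for line in SAMPLE.splitlines() if NOISE not in line]
print("\n".join(kept))
```

On a real host the same filter is a one-line `grep -v` over /var/log/messages.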
It is happening about once a day, but not at any regular interval or
time (was 02:23 Sunday, then 20:56 Monday).
vdsm.log has this at that time:
Thread-601576::DEBUG::2014-12-15 20:56:38,715::BindingXMLRPC::1132::vds::(wrapper) client [127.0.0.1]::call getCapabilities with () {}
Thread-601576::DEBUG::2014-12-15 20:56:38,718::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None)
Thread-601576::DEBUG::2014-12-15 20:56:38,746::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-601576::WARNING::2014-12-15 20:56:38,754::libvirtconnection::135::root::(wrapper) connection to libvirt broken. ecode: 1 edom: 7
Thread-601576::CRITICAL::2014-12-15 20:56:38,754::libvirtconnection::137::root::(wrapper) taking calling process down.
MainThread::DEBUG::2014-12-15 20:56:38,754::vdsm::58::vds::(sigtermHandler) Received signal 15
Thread-601576::DEBUG::2014-12-15 20:56:38,755::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 1 edom: 7 level: 2 message: internal error: client socket is closed
MainThread::DEBUG::2014-12-15 20:56:38,755::protocoldetector::135::vds.MultiProtocolAcceptor::(stop) Stopping Acceptor
MainThread::INFO::2014-12-15 20:56:38,755::__init__::563::jsonrpc.JsonRpcServer::(stop) Stopping JsonRPC Server
Detector thread::DEBUG::2014-12-15 20:56:38,756::protocoldetector::106::vds.MultiProtocolAcceptor::(_cleanup) Cleaning Acceptor
MainThread::INFO::2014-12-15 20:56:38,757::vmchannels::188::vds::(stop) VM channels listener was stopped.
MainThread::INFO::2014-12-15 20:56:38,758::momIF::91::MOM::(stop) Shutting down MOM
MainThread::DEBUG::2014-12-15 20:56:38,759::task::595::Storage.TaskManager.Task::(_updateState) Task=`26c7680c-23e2-42bb-964c-272e778a168a`::moving from state init -> state preparing
MainThread::INFO::2014-12-15 20:56:38,759::logUtils::44::dispatcher::(wrapper) Run and protect: prepareForShutdown(options=None)
Thread-601576::ERROR::2014-12-15 20:56:38,755::BindingXMLRPC::1142::vds::(wrapper) libvirt error
Traceback (most recent call last):
File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 463, in getCapabilities
ret = api.getCapabilities()
File "/usr/share/vdsm/API.py", line 1245, in getCapabilities
c = caps.get()
File "/usr/share/vdsm/caps.py", line 615, in get
caps.update(netinfo.get())
File "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 812, in get
nets = networks()
File "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 119, in networks
allNets = ((net, net.name()) for net in conn.listAllNetworks(0))
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 129, in wrapper
__connections.get(id(target)).pingLibvirt()
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3642, in getLibVersion
if ret == -1: raise libvirtError ('virConnectGetLibVersion() failed', conn=self)
libvirtError: internal error: client socket is closed
--
Chris Adams <cma(a)cmadams.net>
------------------------------
Message: 4
Date: Tue, 16 Dec 2014 07:57:16 -0700
From: "Donny Davis" <donny(a)cloudspin.me>
To: "'Alon Bar-Lev'" <alonbl(a)redhat.com>, "'Fedele Stabile'"
<fedele.stabile(a)fis.unical.it>
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Creating new users on oVirt 3.5
Message-ID: <008801d01940$9682f2f0$c388d8d0$(a)cloudspin.me>
Content-Type: text/plain; charset="us-ascii"
Check out my write-up on AAA,
I tried my best to break it down, and make it simple
https://cloudspin.me/ovirt-simple-ldap-aaa/
-----Original Message-----
From: users-bounces(a)ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of
Alon Bar-Lev
Sent: Tuesday, December 16, 2014 1:49 AM
To: Fedele Stabile
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Creating new users on oVirt 3.5
----- Original Message -----
> From: "Fedele Stabile" <fedele.stabile(a)fis.unical.it>
> To: users(a)ovirt.org
> Sent: Monday, December 15, 2014 8:05:28 PM
> Subject: [ovirt-users] Creating new users on oVirt 3.5
>
> Hello,
> I have to create some users on my oVirt 3.5 infrastructure.
> On Friday I was following the instructions on
> http://www.ovirt.org/LDAP_Quick_Start
> so I correctly created an OpenLDAP server and a Kerberos service, but
> this morning I read that the instructions are obsolete...
> Now I'm trying to understand how to implement the new mechanism... but
> I'm in troubles:
> 1) run yum install ovirt-engine-extension-aaa-ldap
> 2) copied files in /etc/ovirt-engine/extensions.d and modified the
> name in fis.unical.it-auth(n/z).properties
> 3) copied files in /etc/ovirt-engine/aaa but now I can't do anything
>
> Can you help me with newbie instructions to install the aaa-extensions?
> Thank you very much
> Fedele Stabile
Hello,
Have you read[1]?
We of course need help in improving documentation :) Can you please send
engine.log from engine startup so I can see if there are any issues?
Please make sure that in /etc/ovirt-engine/extensions.d you set
config.profile.file.1 to an absolute path under /etc/ovirt-engine/aaa/, as we
are waiting for 3.5.1 to support relative names.
The simplest sequence is:
1. Copy /usr/share/ovirt-engine-extension-aaa-ldap/examples/simple recursively to /etc/ovirt-engine.
2. Edit /etc/ovirt-engine/extensions.d/* and replace ../aaa with /etc/ovirt-engine/aaa (pending 3.5.1).
3. Edit /etc/ovirt-engine/aaa/ldap1.properties and set vars.server, vars.user and vars.password to match your setup.
4. Restart the engine.
5. Send me engine.log.
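Concretely, the ldap1.properties edits from step 3 look roughly like this (all values below are illustrative placeholders, not values from the thread):

```properties
# /etc/ovirt-engine/aaa/ldap1.properties -- variables named in step 3.
# Server address and credentials here are placeholders.
vars.server = ldap1.example.com
vars.user = uid=engine,ou=users,dc=example,dc=com
vars.password = changeme
```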
Regards,
Alon
[1] http://gerrit.ovirt.org/gitweb?p=ovirt-engine-extension-aaa-ldap.git;a=blob;f=README;hb=HEAD
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
------------------------------
Message: 5
Date: Tue, 16 Dec 2014 15:50:23 +0000
From: Alex Crow <acrow(a)integrafin.co.uk>
To: users(a)ovirt.org
Subject: Re: [ovirt-users] gfapi, 3.5.1
Message-ID: <549054BF.2090105(a)integrafin.co.uk>
Content-Type: text/plain; charset=utf-8; format=flowed
Hi,
Anyone know if this is due to work correctly in the next iteration of 3.5?
Thanks
Alex
On 09/12/14 10:33, Alex Crow wrote:
> Hi,
>
> Will the vdsm patches to properly enable libgfapi storage for VMs (and
> matching refactored code in the hosted-engine setup scripts) for VMs
> make it into 3.5.1? It's not in the snapshots yet it seems.
>
> I notice it's in master/3.6 snapshot but something stops the HA stuff
> in self-hosted setups from connecting storage:
>
> from Master test setup:
> /var/log/ovirt-hosted-engine-ha/broker.log
>
> MainThread::INFO::2014-12-08
> 19:22:56,287::hosted_engine::222::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname)
> Found certificate common name: 172.17.10.50
> MainThread::WARNING::2014-12-08
> 19:22:56,395::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Failed to connect storage, waiting '15' seconds before the next attempt
> MainThread::WARNING::2014-12-08
> 19:23:11,501::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Failed to connect storage, waiting '15' seconds before the next attempt
> MainThread::WARNING::2014-12-08
> 19:23:26,610::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Failed to connect storage, waiting '15' seconds before the next attempt
> MainThread::WARNING::2014-12-08
> 19:23:41,717::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Failed to connect storage, waiting '15' seconds before the next attempt
> MainThread::WARNING::2014-12-08
> 19:23:56,824::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Failed to connect storage, waiting '15' seconds before the next attempt
> MainThread::ERROR::2014-12-08
> 19:24:11,840::hosted_engine::500::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Failed trying to connect storage:
> MainThread::ERROR::2014-12-08
> 19:24:11,840::agent::173::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: 'Failed trying to connect storage' - trying to restart agent
> MainThread::WARNING::2014-12-08
> 19:24:16,845::agent::176::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Restarting agent, attempt '8'
> MainThread::INFO::2014-12-08
> 19:24:16,855::hosted_engine::222::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname)
> Found certificate common name: 172.17.10.50
> MainThread::WARNING::2014-12-08
> 19:24:16,962::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Failed to connect storage, waiting '15' seconds before the next attempt
> MainThread::WARNING::2014-12-08
> 19:24:32,069::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Failed to connect storage, waiting '15' seconds before the next attempt
> MainThread::WARNING::2014-12-08
> 19:24:47,181::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Failed to connect storage, waiting '15' seconds before the next attempt
> MainThread::WARNING::2014-12-08
> 19:25:02,288::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Failed to connect storage, waiting '15' seconds before the next attempt
> MainThread::WARNING::2014-12-08
> 19:25:17,389::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Failed to connect storage, waiting '15' seconds before the next attempt
> MainThread::ERROR::2014-12-08
> 19:25:32,404::hosted_engine::500::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Failed trying to connect storage:
> MainThread::ERROR::2014-12-08
> 19:25:32,404::agent::173::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: 'Failed trying to connect storage' - trying to restart agent
> MainThread::WARNING::2014-12-08
> 19:25:37,409::agent::176::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Restarting agent, attempt '9'
> MainThread::ERROR::2014-12-08
> 19:25:37,409::agent::178::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Too many errors occurred, giving up. Please review the log and
> consider filing a bug.
> MainThread::INFO::2014-12-08
> 19:25:37,409::agent::118::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
> Agent shutting down
>
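The broker.log excerpt above shows a fixed pattern: several storage-connect attempts 15 seconds apart, an agent restart after they all fail, and finally giving up after too many restarts. A hedged sketch of that control flow (constants and names are illustrative, not ovirt-hosted-engine-ha's real code):

```python
import time

# Illustrative constants inferred from the log, not the agent's real config.
MAX_RESTARTS = 10          # log shows "attempt '9'" before giving up
ATTEMPTS_PER_RUN = 6       # five WARNINGs then an ERROR per agent run
RETRY_DELAY = 15           # "waiting '15' seconds before the next attempt"

def connect_storage():
    # Stand-in for the vdsm storage-connect call that keeps failing above.
    raise RuntimeError('Failed to connect storage')

def run_agent(sleep=time.sleep):
    for attempt in range(ATTEMPTS_PER_RUN):
        try:
            connect_storage()
            return True
        except RuntimeError:
            if attempt < ATTEMPTS_PER_RUN - 1:
                sleep(RETRY_DELAY)
    return False  # "Failed trying to connect storage"

def main(sleep=lambda s: None):
    for restart in range(MAX_RESTARTS):
        if run_agent(sleep):
            return 'up'
    return 'Too many errors occurred, giving up.'
```

The point of the pattern: each agent run retries a bounded number of times, and the supervisor loop around it is itself bounded, so a persistent storage failure ends in a clean shutdown rather than an infinite loop.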
> vdsm.log:
>
> Detector thread::DEBUG::2014-12-08
> 19:20:45,458::protocoldetector::214::vds.MultiProtocolAcceptor::(_remove_connection)
> Removing connection 127.0.0.1:53083
> Detector thread::DEBUG::2014-12-08
> 19:20:45,458::BindingXMLRPC::1193::XmlDetector::(handleSocket) xml
> over http detected from ('127.0.0.1', 53083)
> Thread-44::DEBUG::2014-12-08
> 19:20:45,459::BindingXMLRPC::318::vds::(wrapper) client [127.0.0.1]
> Thread-44::DEBUG::2014-12-08
> 19:20:45,460::task::592::Storage.TaskManager.Task::(_updateState)
> Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::moving from state init ->
> state preparing
> Thread-44::INFO::2014-12-08
> 19:20:45,460::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=1,
> spUUID='ab2b5ee7-9aa7-426f-9d58-5e7d3840ad81', conList=[{'connection':
> 'zebulon.ifa.net:/engine', 'iqn': ',', 'protocol_version': '3'
> , 'kvm': 'password', '=': 'user', ',': '='}], options=None)
> Thread-44::DEBUG::2014-12-08
> 19:20:45,461::hsm::2384::Storage.HSM::(__prefetchDomains) nfs local
> path: /rhev/data-center/mnt/zebulon.ifa.net:_engine
> Thread-44::DEBUG::2014-12-08
> 19:20:45,462::hsm::2408::Storage.HSM::(__prefetchDomains) Found SD
> uuids: (u'd3240928-dae9-4ed0-8a28-7ab552455063',)
> Thread-44::DEBUG::2014-12-08
> 19:20:45,463::hsm::2464::Storage.HSM::(connectStorageServer) knownSDs:
> {d3240928-dae9-4ed0-8a28-7ab552455063: storage.nfsSD.findDomain}
> Thread-44::ERROR::2014-12-08
> 19:20:45,463::task::863::Storage.TaskManager.Task::(_setError)
> Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Unexpected error
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/task.py", line 870, in _run
> return fn(*args, **kargs)
> File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
> res = f(*args, **kwargs)
> File "/usr/share/vdsm/storage/hsm.py", line 2466, in
> connectStorageServer
> res.append({'id': conDef["id"], 'status': status})
> KeyError: 'id'
> Thread-44::DEBUG::2014-12-08
> 19:20:45,463::task::882::Storage.TaskManager.Task::(_run)
> Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Task._run:
> b5accf8f-014a-412d-9fb8-9e9447d49b72 (1,
> 'ab2b5ee7-9aa7-426f-9d58-5e7d3840ad81', [{'kvm': 'password', ',': '=',
> 'connection': 'zebulon.ifa.net:/engine', 'iqn': ',', 'protocol_version':
> '3', '=': 'user'}]) {} failed - stopping task
> Thread-44::DEBUG::2014-12-08
> 19:20:45,463::task::1214::Storage.TaskManager.Task::(stop)
> Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::stopping in state
> preparing (force False)
> Thread-44::DEBUG::2014-12-08
> 19:20:45,463::task::990::Storage.TaskManager.Task::(_decref)
> Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::ref 1 aborting True
> Thread-44::INFO::2014-12-08
> 19:20:45,463::task::1168::Storage.TaskManager.Task::(prepare)
> Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::aborting: Task is
> aborted: u"'id'" - code 100
> Thread-44::DEBUG::2014-12-08
> 19:20:45,463::task::1173::Storage.TaskManager.Task::(prepare)
> Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Prepare: aborted: 'id'
> Thread-44::DEBUG::2014-12-08
> 19:20:45,463::task::990::Storage.TaskManager.Task::(_decref)
> Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::ref 0 aborting True
> Thread-44::DEBUG::2014-12-08
> 19:20:45,463::task::925::Storage.TaskManager.Task::(_doAbort)
> Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Task._doAbort: force False
> Thread-44::DEBUG::2014-12-08
> 19:20:45,463::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-44::DEBUG::2014-12-08
> 19:20:45,463::task::592::Storage.TaskManager.Task::(_updateState)
> Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::moving from state
> preparing -> state aborting
> Thread-44::DEBUG::2014-12-08
> 19:20:45,464::task::547::Storage.TaskManager.Task::(__state_aborting)
> Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::_aborting: recover policy
> none
> Thread-44::DEBUG::2014-12-08
> 19:20:45,464::task::592::Storage.TaskManager.Task::(_updateState)
> Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::moving from state
> aborting -> state failed
> Thread-44::DEBUG::2014-12-08
> 19:20:45,464::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> Thread-44::DEBUG::2014-12-08
> 19:20:45,464::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-44::ERROR::2014-12-08
> 19:20:45,464::dispatcher::79::Storage.Dispatcher::(wrapper) 'id'
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/dispatcher.py", line 71, in wrapper
> result = ctask.prepare(func, *args, **kwargs)
> File "/usr/share/vdsm/storage/task.py", line 103, in wrapper
> return m(self, *a, **kw)
> File "/usr/share/vdsm/storage/task.py", line 1176, in prepare
> raise self.error
> KeyError: 'id'
> clientIFinit::ERROR::2014-12-08
> 19:20:48,190::clientIF::460::vds::(_recoverExistingVms) Vm's recovery
> failed
> Traceback (most recent call last):
> File "/usr/share/vdsm/clientIF.py", line 404, in _recoverExistingVms
> caps.CpuTopology().cores())
> File "/usr/share/vdsm/caps.py", line 200, in __init__
> self._topology = _getCpuTopology(capabilities)
> File "/usr/share/vdsm/caps.py", line 232, in _getCpuTopology
> capabilities = _getFreshCapsXMLStr()
> File "/usr/share/vdsm/caps.py", line 222, in _getFreshCapsXMLStr
> return libvirtconnection.get().getCapabilities()
> File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
> line 157, in get
> passwd)
> File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
> line 102, in open_connection
> return utils.retry(libvirtOpen, timeout=10, sleep=0.2)
> File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 935, in
> retry
> return func()
> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 102, in
> openAuth
> if ret is None:raise libvirtError('virConnectOpenAuth() failed')
> libvirtError: authentication failed: polkit:
> polkit\56retains_authorization_after_challenge=1
> Authorization requires authentication but no agent is available.
>
>
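The first traceback above boils down to a dictionary lookup on a malformed connection definition: the conList vdsm received has nonsense keys ('=', ',', 'kvm': 'password') and no 'id' key at all, which is what `connectStorageServer` in hsm.py tries to read. A minimal reproduction (function and variable names are illustrative, not vdsm's actual code):

```python
# Hypothetical minimal version of the failing loop; vdsm's hsm.py does
# res.append({'id': conDef["id"], 'status': status}) per connection.
def connect_storage_server(con_list):
    results = []
    for con_def in con_list:
        results.append({'id': con_def['id'], 'status': 0})
    return results

# The garbled parameters from the log entry above -- note: no 'id' key.
bad_con = {'connection': 'zebulon.ifa.net:/engine', 'iqn': ',',
           'protocol_version': '3', 'kvm': 'password', '=': 'user', ',': '='}

try:
    connect_storage_server([bad_con])
except KeyError as err:
    print('KeyError:', err)   # prints: KeyError: 'id'
```

So the KeyError is a symptom: the interesting question is why the caller serialized the connection parameters so badly in the first place.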
--
This message is intended only for the addressee and may contain
confidential information. Unless you are that person, you may not
disclose its contents or use it in any way and are requested to delete
the message along with any attachments and notify us immediately.
"Transact" is operated by Integrated Financial Arrangements plc. 29
Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608
5300. (Registered office: as above; Registered in England and Wales
under number: 3727592). Authorised and regulated by the Financial
Conduct Authority (entered on the Financial Services Register; no. 190856).
------------------------------
End of Users Digest, Vol 39, Issue 98
*************************************
Hi,
Can I get engine, libvirt, vdsm, mom, logs from host8 and connectivity log?
Have you tried installing clean OSs on hosts, especially on problematic host?
I'd also try to disable JSONRPC on hosts, by putting them to maintenance and
then removing JSONRPC from the check box on all hosts, just to compare if it
resolves the issue.

Thanks in advance.

Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501

Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev

----------------------------------------------------------------------
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Tuesday, December 16, 2014 5:50:28 PM
Subject: Users Digest, Vol 39, Issue 98

Send Users mailing list submissions to
	users(a)ovirt.org

To subscribe or unsubscribe via the World Wide Web, visit
	http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
	users-request(a)ovirt.org

You can reach the person managing the list at
	users-owner(a)ovirt.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."

Today's Topics:

   1. Re: Free Ovirt Powered Cloud (Lior Vernia)
   2. gluster rpms not found (Pat Pierson)
   3. vdsm losing connection to libvirt (Chris Adams)
   4. Re: Creating new users on oVirt 3.5 (Donny Davis)
   5. Re: gfapi, 3.5.1 (Alex Crow)

----------------------------------------------------------------------

Message: 1
Date: Tue, 16 Dec 2014 15:55:02 +0200
From: Lior Vernia <lvernia(a)redhat.com>
To: Donny Davis <donny(a)cloudspin.me>
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Free Ovirt Powered Cloud
Message-ID: <549039B6.2010804(a)redhat.com>
Content-Type: text/plain; charset=ISO-8859-1

Hi Donny,

On 15/12/14 18:24, Donny Davis wrote:
> Hi guys, I'm providing a free public cloud solution entirely based on
> vanilla oVirt called cloudspin.me <http://cloudspin.me>

This looks great! :)

> It runs on IPv6, and I am looking for people to use the system, host
> services and report back to me with their results.

Do you also use IPv6 internally in your deployment? e.g. assign IPv6
addresses to your hosts, storage domain, power management etc.? We'd be
very interested to hear what works and what doesn't. And perhaps help
push forward what doesn't, if you need it :)

> Data I am looking for
>
> Connection Speed - Is it comparable to other services
>
> User experience - Are there any changes recommended
>
> Does it work for you - What does, and does not work for you.
>
> I am trying to get funding to keep this a free resource for everyone to
> use. (not from here:)
>
> I am completely open to any and all suggestions, and or help with
> things. I am a one man show at the moment.
>
> If anyone has any questions please email me back
>
> Donny D

------------------------------

Message: 2
Date: Tue, 16 Dec 2014 09:08:57 -0500
From: Pat Pierson <ihasn2004(a)gmail.com>
To: nathan(a)robotics.net
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: [ovirt-users] gluster rpms not found
Message-ID: <CAMRYiEiKL1MEGoHWjKtnhW3DXjouU0w3hs5zFx75sfBL8M4JaQ(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Nathan,
   Did you find a work around for this?  I am running into the same issue.

Is there a way to force vdsm to see gluster? Or a way to manually run the
search so I can see why it fails?

><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 | www.broadsoft.com

On Fri, Jun 20, 2014 at 11:01 AM, Nathan Stratton <nathan at robotics.net> wrote:

> Actually I have vdsm-gluster, that is why vdsm tries to find the gluster
> packages. Is there a way I can run the vdsm gluster rpm search manually to
> see what is going wrong?
>
> [root at virt01a ~]# yum list installed |grep vdsm
> vdsm.x86_64                      4.14.9-0.el6    @ovirt-3.4-stable
> vdsm-cli.noarch                  4.14.9-0.el6    @ovirt-3.4-stable
> vdsm-gluster.noarch              4.14.9-0.el6    @ovirt-3.4-stable
> vdsm-python.x86_64               4.14.9-0.el6    @ovirt-3.4-stable
> vdsm-python-zombiereaper.noarch
> vdsm-xmlrpc.noarch               4.14.9-0.el6    @ovirt-3.4-stable
>
> ><>
> nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 | www.broadsoft.com
>
> On Thu, Jun 19, 2014 at 8:39 PM, Andrew Lau <andrew at andrewklau.com> wrote:
>
>> You're missing vdsm-gluster
>>
>> yum install vdsm-gluster
>>
>> On Fri, Jun 20, 2014 at 6:24 AM, Nathan Stratton <nathan at robotics.net> wrote:
>>> I am running ovirt 3.4 and have gluster installed:
>>>
>>> [root at virt01a]# yum list installed |grep gluster
>>> glusterfs.x86_64           3.5.0-2.el6    @ovirt-glusterfs-epel
>>> glusterfs-api.x86_64       3.5.0-2.el6    @ovirt-glusterfs-epel
>>> glusterfs-cli.x86_64       3.5.0-2.el6    @ovirt-glusterfs-epel
>>> glusterfs-fuse.x86_64      3.5.0-2.el6    @ovirt-glusterfs-epel
>>> glusterfs-libs.x86_64      3.5.0-2.el6    @ovirt-glusterfs-epel
>>> glusterfs-rdma.x86_64      3.5.0-2.el6    @ovirt-glusterfs-epel
>>> glusterfs-server.x86_64    3.5.0-2.el6    @ovirt-glusterfs-epel
>>>
>>> However vdsm can't seem to find them:
>>>
>>> Thread-13::DEBUG::2014-06-19 16:15:57,250::caps::458::root::(_getKeyPackages) rpm package glusterfs-rdma not found
>>> Thread-13::DEBUG::2014-06-19 16:15:57,250::caps::458::root::(_getKeyPackages) rpm package glusterfs-fuse not found
>>> Thread-13::DEBUG::2014-06-19 16:15:57,251::caps::458::root::(_getKeyPackages) rpm package gluster-swift not found
>>> Thread-13::DEBUG::2014-06-19 16:15:57,252::caps::458::root::(_getKeyPackages) rpm package gluster-swift-object not found
>>> Thread-13::DEBUG::2014-06-19 16:15:57,252::caps::458::root::(_getKeyPackages) rpm package glusterfs not found
>>> Thread-13::DEBUG::2014-06-19 16:15:57,252::caps::458::root::(_getKeyPackages) rpm package gluster-swift-plugin not found
>>> Thread-13::DEBUG::2014-06-19 16:15:57,254::caps::458::root::(_getKeyPackages) rpm package gluster-swift-account not found
>>> Thread-13::DEBUG::2014-06-19 16:15:57,254::caps::458::root::(_getKeyPackages) rpm package gluster-swift-proxy not found
>>> Thread-13::DEBUG::2014-06-19 16:15:57,254::caps::458::root::(_getKeyPackages) rpm package gluster-swift-doc not found
>>> Thread-13::DEBUG::2014-06-19 16:15:57,255::caps::458::root::(_getKeyPackages) rpm package glusterfs-server not found
>>> Thread-13::DEBUG::2014-06-19 16:15:57,255::caps::458::root::(_getKeyPackages) rpm package gluster-swift-container not found
>>> Thread-13::DEBUG::2014-06-19 16:15:57,255::caps::458::root::(_getKeyPackages) rpm package glusterfs-geo-replication not found
>>>
>>> Any ideas?

--
Patrick Pierson

------------------------------

Message: 3
Date: Tue, 16 Dec 2014 08:48:48 -0600
From: Chris Adams <cma(a)cmadams.net>
To: users(a)ovirt.org
Subject: [ovirt-users] vdsm losing connection to libvirt
Message-ID: <20141216144848.GA1708(a)cmadams.net>
Content-Type: text/plain; charset=us-ascii

I have a oVirt setup that has three nodes, all running CentOS 7, with a
hosted engine running CentOS 6.  Two of the nodes (node8 and node9) are
configured for hosted engine, and the third (node2) is just a "regular"
node (as you might guess from the names, more nodes are coming as I
migrate VMs to oVirt).

On one node, node8, vdsm periodically loses its connection to libvirt,
which causes vdsm to restart.  There doesn't appear to be any trigger
that I can see (not time of day, load, etc. related).  The engine VM is
up and running on node8 (don't know if that has anything to do with it).

I get some entries in /var/log/messages repeated continuously; the
"ovirt-ha-broker: sending ioctl 5401 to a partition" I mentioned before,
and the following:

Dec 15 20:56:23 node8 journal: User record for user '107' was not found: No such file or directory
Dec 15 20:56:23 node8 journal: Group record for user '107' was not found: No such file or directory

I don't think those have any relevance (don't know where they come
from); filtering those out, I see:

Dec 15 20:56:33 node8 journal: End of file while reading data: Input/output error
Dec 15 20:56:33 node8 journal: Tried to close invalid fd 0
Dec 15 20:56:38 node8 journal: vdsm root WARNING connection to libvirt broken. ecode: 1 edom: 7
Dec 15 20:56:38 node8 journal: vdsm root CRITICAL taking calling process down.
Dec 15 20:56:38 node8 journal: vdsm vds ERROR libvirt error
Dec 15 20:56:38 node8 journal: ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsCapabilities: Error 16 from getVdsCapabilities: Unexpected exception
Dec 15 20:56:45 node8 journal: End of file while reading data: Input/output error
Dec 15 20:56:45 node8 vdsmd_init_common.sh: vdsm: Running run_final_hooks
Dec 15 20:56:45 node8 systemd: Starting Virtual Desktop Server Manager...
<and then all the normal-looking vdsm startup>

It is happening about once a day, but not at any regular interval or
time (was 02:23 Sunday, then 20:56 Monday).

vdsm.log has this at that time:

Thread-601576::DEBUG::2014-12-15 20:56:38,715::BindingXMLRPC::1132::vds::(wrapper) client [127.0.0.1]::call getCapabilities with () {}
Thread-601576::DEBUG::2014-12-15 20:56:38,718::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None)
Thread-601576::DEBUG::2014-12-15 20:56:38,746::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-601576::WARNING::2014-12-15 20:56:38,754::libvirtconnection::135::root::(wrapper) connection to libvirt broken. ecode: 1 edom: 7
Thread-601576::CRITICAL::2014-12-15 20:56:38,754::libvirtconnection::137::root::(wrapper) taking calling process down.
MainThread::DEBUG::2014-12-15 20:56:38,754::vdsm::58::vds::(sigtermHandler) Received signal 15
Thread-601576::DEBUG::2014-12-15 20:56:38,755::libvirtconnection::143::root::(wrapper) Unknown libvirt error: ecode: 1 edom: 7 level: 2 message: internal error: client socket is closed
MainThread::DEBUG::2014-12-15 20:56:38,755::protocoldetector::135::vds.MultiProtocolAcceptor::(stop) Stopping Acceptor
MainThread::INFO::2014-12-15 20:56:38,755::__init__::563::jsonrpc.JsonRpcServer::(stop) Stopping JsonRPC Server
Detector thread::DEBUG::2014-12-15 20:56:38,756::protocoldetector::106::vds.MultiProtocolAcceptor::(_cleanup) Cleaning Acceptor
MainThread::INFO::2014-12-15 20:56:38,757::vmchannels::188::vds::(stop) VM channels listener was stopped.
MainThread::INFO::2014-12-15 20:56:38,758::momIF::91::MOM::(stop) Shutting down MOM
MainThread::DEBUG::2014-12-15 20:56:38,759::task::595::Storage.TaskManager.Task::(_updateState) Task=`26c7680c-23e2-42bb-964c-272e778a168a`::moving from state init -> state preparing
MainThread::INFO::2014-12-15 20:56:38,759::logUtils::44::dispatcher::(wrapper) Run and protect: prepareForShutdown(options=None)
Thread-601576::ERROR::2014-12-15 20:56:38,755::BindingXMLRPC::1142::vds::(wrapper) libvirt error
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 463, in getCapabilities
    ret = api.getCapabilities()
  File "/usr/share/vdsm/API.py", line 1245, in getCapabilities
    c = caps.get()
  File "/usr/share/vdsm/caps.py", line 615, in get
    caps.update(netinfo.get())
  File "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 812, in get
    nets = networks()
  File "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 119, in networks
    allNets = ((net, net.name()) for net in conn.listAllNetworks(0))
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 129, in wrapper
    __connections.get(id(target)).pingLibvirt()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3642, in getLibVersion
    if ret == -1: raise libvirtError ('virConnectGetLibVersion() failed', conn=self)
libvirtError: internal error: client socket is closed

--
Chris Adams <cma(a)cmadams.net>

------------------------------

Message: 4
Date: Tue, 16 Dec 2014 07:57:16 -0700
From: "Donny Davis" <donny(a)cloudspin.me>
To: "'Alon Bar-Lev'" <alonbl(a)redhat.com>, "'Fedele Stabile'" <fedele.stabile(a)fis.unical.it>
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Creating new users on oVirt 3.5
Message-ID: <008801d01940$9682f2f0$c388d8d0$(a)cloudspin.me>
Content-Type: text/plain; charset="us-ascii"

Check out my write-up on AAA,
I tried my best to break it down, and make it simple

https://cloudspin.me/ovirt-simple-ldap-aaa/

-----Original Message-----
From: users-bounces(a)ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of Alon Bar-Lev
Sent: Tuesday, December 16, 2014 1:49 AM
To: Fedele Stabile
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Creating new users on oVirt 3.5

----- Original Message -----
> From: "Fedele Stabile" <fedele.stabile(a)fis.unical.it>
> To: users(a)ovirt.org
> Sent: Monday, December 15, 2014 8:05:28 PM
> Subject: [ovirt-users] Creating new users on oVirt 3.5
>
> Hello,
> I have to create some users on my oVirt 3.5 infrastructure.
> On Friday I was following instructions on
> http://www.ovirt.org/LDAP_Quick_Start
> LDAP Quick Start
> so I correctly created a OpenLDAP server and a Kerberos service, but
> this morning I read that the instructions are obsolete...
> Now I'm trying to understand how to implement the new mechanism... but
> I'm in trouble:
> 1) run yum install ovirt-engine-extension-aaa-ldap
> 2) copied files in /etc/ovirt-engine/extensions.d and modified the
> name in fis.unical.it-auth(n/z).properties
> 3) copied files in /etc/ovirt-engine/aaa but now I can't do anything
>
> Can you help me with newbie instructions to install the aaa-extensions?
> Thank you very much
> Fedele Stabile

Hello,

Have you read [1]?
We of course need help in improving documentation :) Can you please send
engine.log when starting up engine so I can see if there are any issues?
Please make sure that at /etc/ovirt-engine/extensions.d you set the
config.profile.file.1 to an absolute file under /etc/ovirt-engine/aaa/, as
we wait for 3.5.1 to support relative names.

The simplest sequence is:

1. copy recursive /usr/share/ovirt-engine-extension-aaa-ldap/examples/simple to /etc/ovirt-engine
2. edit /etc/ovirt-engine/extension.d/* replace ../aaa to /etc/ovirt-engine/aaa; this is pending 3.5.1.
3. edit /etc/ovirt-engine/aaa/ldap1.properties and set vars.server, vars.user, vars.password to meet your setup.
4. restart engine.
5. send me engine.log

Regards,
Alon

[1]
http://gerrit.ovirt.org/gitweb?p=ovirt-engine-extension-aaa-ldap.git;a=blob;f=README;hb=HEAD

------------------------------
<br>> 19:25:37,409::agent::118::ovirt_hosted_engine_ha.agent.agent.Agen=
t::(run) <br>> Agent shutting down<br>> (END) - Next: /var/log/ovirt-=
hosted-engine-ha/broker.log<br>><br>> vdsm.log:<br>><br>> Detec=
tor thread::DEBUG::2014-12-08 <br>> 19:20:45,458::protocoldetector::214:=
:vds.MultiProtocolAcceptor::(_remove_connection) <br>> Removing connecti=
on 127.0.0.1:53083<br>> Detector thread::DEBUG::2014-12-08 <br>> 19:2=
0:45,458::BindingXMLRPC::1193::XmlDetector::(handleSocket) xml <br>> ove=
r http detected from ('127.0.0.1', 53083)<br>> Thread-44::DEBUG::2014-12=
-08 <br>> 19:20:45,459::BindingXMLRPC::318::vds::(wrapper) client [127.0=
.0.1]<br>> Thread-44::DEBUG::2014-12-08 <br>> 19:20:45,460::task::592=
::Storage.TaskManager.Task::(_updateState) <br>> Task=3D`b5accf8f-014a-4=
12d-9fb8-9e9447d49b72`::moving from state init -> <br>> state prepari=
ng<br>> Thread-44::INFO::2014-12-08 <br>> 19:20:45,460::logUtils::48:=
:dispatcher::(wrapper) Run and protect: <br>> connectStorageServer(domTy=
pe=3D1, <br>> spUUID=3D'ab2b5ee7-9aa7-426f-9d58-5e7d3840ad81', conList=
=3D[{'connection': <br>> 'zebulon.ifa.net:/engine', 'iqn': ',', 'protoco=
l_version': '3'<br>> , 'kvm': 'password', '=3D': 'user', ',': '=3D'}], o=
ptions=3DNone)<br>> Thread-44::DEBUG::2014-12-08 <br>> 19:20:45,461::=
hsm::2384::Storage.HSM::(__prefetchDomains) nfs local <br>> path: /rhev/=
data-center/mnt/zebulon.ifa.net:_engine<br>> Thread-44::DEBUG::2014-12-0=
8 <br>> 19:20:45,462::hsm::2408::Storage.HSM::(__prefetchDomains) Found =
SD <br>> uuids: (u'd3240928-dae9-4ed0-8a28-7ab552455063',)<br>> Threa=
d-44::DEBUG::2014-12-08 <br>> 19:20:45,463::hsm::2464::Storage.HSM::(con=
nectStorageServer) knownSDs: <br>> {d3240928-dae9-4ed0-8a28-7ab552455063=
: storage.nfsSD.findDomain}<br>> Thread-44::ERROR::2014-12-08 <br>> 1=
9:20:45,463::task::863::Storage.TaskManager.Task::(_setError) <br>> Task=
=3D`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Unexpected error<br>> Traceba=
ck (most recent call last):<br>> File "/usr/share/vdsm/storage/ta=
sk.py", line 870, in _run<br>> return fn(*args, **kargs)<b=
r>> File "/usr/share/vdsm/logUtils.py", line 49, in wrapper<br>&g=
t; res =3D f(*args, **kwargs)<br>> File "/usr/share=
/vdsm/storage/hsm.py", line 2466, in <br>> connectStorageServer<br>> =
res.append({'id': conDef["id"], 'status': status})<br>> Ke=
yError: 'id'<br>> Thread-44::DEBUG::2014-12-08 <br>> 19:20:45,463::ta=
sk::882::Storage.TaskManager.Task::(_run) <br>> Task=3D`b5accf8f-014a-41=
2d-9fb8-9e9447d49b72`::Task._run: <br>> b5accf8f-014a-412d-9fb8-9e9447d4=
9b72 (1, <br>> 'ab2b5ee7-9aa7-426f-9d58-5e7d3840ad81', [{'kvm': 'passwor=
d', ',': '=3D', <br>> 'conn<br>> ection': 'zebulon.ifa.net:/engine', =
'iqn': ',', 'protocol_version': <br>> '3', '=3D': 'user'}]) {} failed - =
stopping task<br>> Thread-44::DEBUG::2014-12-08 <br>> 19:20:45,463::t=
ask::1214::Storage.TaskManager.Task::(stop) <br>> Task=3D`b5accf8f-014a-=
412d-9fb8-9e9447d49b72`::stopping in state <br>> preparing (force False)=
<br>> Thread-44::DEBUG::2014-12-08 <br>> 19:20:45,463::task::990::Sto=
rage.TaskManager.Task::(_decref) <br>> Task=3D`b5accf8f-014a-412d-9fb8-9=
e9447d49b72`::ref 1 aborting True<br>> Thread-44::INFO::2014-12-08 <br>&=
gt; 19:20:45,463::task::1168::Storage.TaskManager.Task::(prepare) <br>> =
Task=3D`b5accf8f-014a-412d-9fb8-9e9447d49b72`::aborting: Task is <br>> a=
borted: u"'id'" - code 100<br>> Thread-44::DEBUG::2014-12-08 <br>> 19=
:20:45,463::task::1173::Storage.TaskManager.Task::(prepare) <br>> Task=
=3D`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Prepare: aborted: 'id'<br>> T=
hread-44::DEBUG::2014-12-08 <br>> 19:20:45,463::task::990::Storage.TaskM=
anager.Task::(_decref) <br>> Task=3D`b5accf8f-014a-412d-9fb8-9e9447d49b7=
2`::ref 0 aborting True<br>> Thread-44::DEBUG::2014-12-08 <br>> 19:20=
:45,463::task::925::Storage.TaskManager.Task::(_doAbort) <br>> Task=3D`b=
5accf8f-014a-412d-9fb8-9e9447d49b72`::Task._doAbort: force False<br>> Th=
read-44::DEBUG::2014-12-08 <br>> 19:20:45,463::resourceManager::977::Sto=
rage.ResourceManager.Owner::(cancelAll) <br>> Owner.cancelAll requests {=
}<br>> Thread-44::DEBUG::2014-12-08 <br>> 19:20:45,463::task::592::St=
orage.TaskManager.Task::(_updateState) <br>> Task=3D`b5accf8f-014a-412d-=
9fb8-9e9447d49b72`::moving from state <br>> preparing -> state aborti=
ng<br>> Thread-44::DEBUG::2014-12-08 <br>> 19:20:45,464::task::547::S=
torage.TaskManager.Task::(__state_aborting) <br>> Task=3D`b5accf8f-014a-=
412d-9fb8-9e9447d49b72`::_aborting: recover policy <br>> none<br>> Th=
read-44::DEBUG::2014-12-08 <br>> 19:20:45,464::task::592::Storage.TaskMa=
nager.Task::(_updateState) <br>> Task=3D`b5accf8f-014a-412d-9fb8-9e9447d=
49b72`::moving from state <br>> aborting -> state failed<br>> Thre=
ad-44::DEBUG::2014-12-08 <br>> 19:20:45,464::resourceManager::940::Stora=
ge.ResourceManager.Owner::(releaseAll) <br>> Owner.releaseAll requests {=
} resources {}<br>> Thread-44::DEBUG::2014-12-08 <br>> 19:20:45,464::=
resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) <br>> O=
wner.cancelAll requests {}<br>> Thread-44::ERROR::2014-12-08 <br>> 19=
:20:45,464::dispatcher::79::Storage.Dispatcher::(wrapper) 'id'<br>> Trac=
eback (most recent call last):<br>> File "/usr/share/vdsm/storage=
/dispatcher.py", line 71, in wrapper<br>> result =3D ctask=
.prepare(func, *args, **kwargs)<br>> File "/usr/share/vdsm/storag=
e/task.py", line 103, in wrapper<br>> return m(self, *a, *=
*kw)<br>> File "/usr/share/vdsm/storage/task.py", line 1176, in p=
repare<br>> raise self.error<br>> KeyError: 'id'<br>>=
; clientIFinit::ERROR::2014-12-08 <br>> 19:20:48,190::clientIF::460::vds=
::(_recoverExistingVms) Vm's recovery <br>> failed<br>> Traceback (mo=
st recent call last):<br>> File "/usr/share/vdsm/clientIF.py", li=
ne 404, in _recoverExistingVms<br>> caps.CpuTopology().cor=
es())<br>> File "/usr/share/vdsm/caps.py", line 200, in __init__<=
br>> self._topology =3D _getCpuTopology(capabilities)<br>&=
gt; File "/usr/share/vdsm/caps.py", line 232, in _getCpuTopology<br>=
> capabilities =3D _getFreshCapsXMLStr()<br>> Fi=
le "/usr/share/vdsm/caps.py", line 222, in _getFreshCapsXMLStr<br>> &nbs=
p; return libvirtconnection.get().getCapabilities()<br>> F=
ile "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", <br>> =
line 157, in get<br>> passwd)<br>> File "/usr/li=
b/python2.7/site-packages/vdsm/libvirtconnection.py", <br>> line 102, in=
open_connection<br>> return utils.retry(libvirtOpen, time=
out=3D10, sleep=3D0.2)<br>> File "/usr/lib/python2.7/site-package=
s/vdsm/utils.py", line 935, in <br>> retry<br>> return =
func()<br>> File "/usr/lib64/python2.7/site-packages/libvirt.py",=
line 102, in <br>> openAuth<br>> if ret is None:raise =
libvirtError('virConnectOpenAuth() failed')<br>> libvirtError: authentic=
ation failed: polkit: <br>> polkit\56retains_authorization_after_challen=
ge=3D1<br>> Authorization requires authentication but no agent is availa=
ble.<br>><br>><br><div><br></div>-- <br>This message is intended only=
for the addressee and may contain<br>confidential information. Unless you =
are that person, you may not<br>disclose its contents or use it in any way =
and are requested to delete<br>the message along with any attachments and n=
otify us immediately.<br>"Transact" is operated by Integrated Financial Arr=
angements plc. 29<br>Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 =
Fax: (020) 7608<br>5300. (Registered office: as above; Registered in Englan=
d and Wales<br>under number: 3727592). Authorised and regulated by the Fina=
ncial<br>Conduct Authority (entered on the Financial Services Register; no.=
190856).<br><div><br></div><br><div><br></div>----------------------------=
--<br><div><br></div>_______________________________________________<br>Use=
rs mailing list<br>Users(a)ovirt.org<br>http://lists.ovirt.org/mailman/listin=
fo/users<br><div><br></div><br>End of Users Digest, Vol 39, Issue 98<br>***=
**********************************<br></div><div><br></div></div></body></h=
tml>
------=_Part_10957803_1008904620.1418749854056--
9 years, 11 months
Creating new users on oVirt 3.5
by Fedele Stabile
Hello,
I have to create some users on my oVirt 3.5 infrastructure.
On Friday I was following the instructions on http://www.ovirt.org/LDAP_Quick_Start
so I correctly created an OpenLDAP server and a Kerberos service, but
this morning I read that those instructions are obsolete...
Now I'm trying to understand how to implement the new mechanism... but I'm
in trouble:
1) ran yum install ovirt-engine-extension-aaa-ldap
2) copied the example files into /etc/ovirt-engine/extensions.d and renamed them to
fis.unical.it-auth(n/z).properties
3) copied the example files into /etc/ovirt-engine/aaa
but now I can't do anything.
Can you help me with newbie instructions to install the aaa-extensions?
Thank you very much
Fedele Stabile
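[For anyone hitting the same wall: a minimal authn profile for the extension
usually looks like the sketch below, based on the examples shipped with the
ovirt-engine-extension-aaa-ldap package. The file, profile, and authz names
here are illustrative, not verified against this exact setup; adapt them to
fis.unical.it and restart ovirt-engine afterwards so the profile is loaded.]

```properties
# /etc/ovirt-engine/extensions.d/fis.unical.it-authn.properties
# (sketch from the package's example files; names are illustrative)
ovirt.engine.extension.name = fis.unical.it-authn
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.ldap
ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.ldap.AuthnExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
ovirt.engine.aaa.authn.profile.name = fis.unical.it
ovirt.engine.aaa.authn.authz.plugin = fis.unical.it-authz
# LDAP connection details live in the referenced file under /etc/ovirt-engine/aaa
config.profile.file.1 = ../aaa/fis.unical.it.properties
```

A matching fis.unical.it-authz.properties is needed for the Authz extension,
and the LDAP server details (vars.server, vars.user, vars.password, include
of the appropriate LDAP-flavor profile) go in /etc/ovirt-engine/aaa/fis.unical.it.properties.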
How to update zanata's source text ?
by plysan
Hi,
When I'm trying to compile ovirt-engine-3.5 branch with pulled zanata
source files, I get webadmin compilation errors. The error message led me
to an outdated zanata translation
file: org.ovirt.engine.ui.webadmin.ApplicationMessages
The file has a source text called "{0} (VLAN {1})", but recent commit
b068ec755198c27e65f936809104ba5068cd8fd2
has changed the text to "(VLAN {0})"
So is there a way to update Zanata's source text (the text on the left)?
It seems that I don't get any option to update it; I can only update the
target text (the text on the right).
thanks.
gfapi, 3.5.1
by Alex Crow
Hi,
Will the vdsm patches to properly enable libgfapi storage for VMs (and
the matching refactored code in the hosted-engine setup scripts)
make it into 3.5.1? It's not in the snapshots yet, it seems.
I notice it's in the master/3.6 snapshot, but something stops the HA
components in self-hosted setups from connecting to storage:
from Master test setup:
/var/log/ovirt-hosted-engine-ha/broker.log
MainThread::INFO::2014-12-08
19:22:56,287::hosted_engine::222::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname)
Found certificate common name: 172.17.10.50
MainThread::WARNING::2014-12-08
19:22:56,395::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08
19:23:11,501::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08
19:23:26,610::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08
19:23:41,717::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08
19:23:56,824::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::ERROR::2014-12-08
19:24:11,840::hosted_engine::500::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Failed trying to connect storage:
MainThread::ERROR::2014-12-08
19:24:11,840::agent::173::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Error: 'Failed trying to connect storage' - trying to restart agent
MainThread::WARNING::2014-12-08
19:24:16,845::agent::176::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '8'
MainThread::INFO::2014-12-08
19:24:16,855::hosted_engine::222::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname)
Found certificate common name: 172.17.10.50
MainThread::WARNING::2014-12-08
19:24:16,962::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08
19:24:32,069::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08
19:24:47,181::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08
19:25:02,288::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::WARNING::2014-12-08
19:25:17,389::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Failed to connect storage, waiting '15' seconds before the next attempt
MainThread::ERROR::2014-12-08
19:25:32,404::hosted_engine::500::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Failed trying to connect storage:
MainThread::ERROR::2014-12-08
19:25:32,404::agent::173::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Error: 'Failed trying to connect storage' - trying to restart agent
MainThread::WARNING::2014-12-08
19:25:37,409::agent::176::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '9'
MainThread::ERROR::2014-12-08
19:25:37,409::agent::178::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Too many errors occurred, giving up. Please review the log and consider
filing a bug.
MainThread::INFO::2014-12-08
19:25:37,409::agent::118::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Agent
shutting down
(END) - Next: /var/log/ovirt-hosted-engine-ha/broker.log
vdsm.log:
Detector thread::DEBUG::2014-12-08
19:20:45,458::protocoldetector::214::vds.MultiProtocolAcceptor::(_remove_connection)
Removing connection 127.0.0.1:53083
Detector thread::DEBUG::2014-12-08
19:20:45,458::BindingXMLRPC::1193::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 53083)
Thread-44::DEBUG::2014-12-08
19:20:45,459::BindingXMLRPC::318::vds::(wrapper) client [127.0.0.1]
Thread-44::DEBUG::2014-12-08
19:20:45,460::task::592::Storage.TaskManager.Task::(_updateState)
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::moving from state init ->
state preparing
Thread-44::INFO::2014-12-08
19:20:45,460::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=1,
spUUID='ab2b5ee7-9aa7-426f-9d58-5e7d3840ad81', conList=[{'connection':
'zebulon.ifa.net:/engine', 'iqn': ',', 'protocol_version': '3'
, 'kvm': 'password', '=': 'user', ',': '='}], options=None)
Thread-44::DEBUG::2014-12-08
19:20:45,461::hsm::2384::Storage.HSM::(__prefetchDomains) nfs local
path: /rhev/data-center/mnt/zebulon.ifa.net:_engine
Thread-44::DEBUG::2014-12-08
19:20:45,462::hsm::2408::Storage.HSM::(__prefetchDomains) Found SD
uuids: (u'd3240928-dae9-4ed0-8a28-7ab552455063',)
Thread-44::DEBUG::2014-12-08
19:20:45,463::hsm::2464::Storage.HSM::(connectStorageServer) knownSDs:
{d3240928-dae9-4ed0-8a28-7ab552455063: storage.nfsSD.findDomain}
Thread-44::ERROR::2014-12-08
19:20:45,463::task::863::Storage.TaskManager.Task::(_setError)
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 870, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 2466, in connectStorageServer
res.append({'id': conDef["id"], 'status': status})
KeyError: 'id'
Thread-44::DEBUG::2014-12-08
19:20:45,463::task::882::Storage.TaskManager.Task::(_run)
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Task._run:
b5accf8f-014a-412d-9fb8-9e9447d49b72 (1,
'ab2b5ee7-9aa7-426f-9d58-5e7d3840ad81', [{'kvm': 'password', ',': '=', 'conn
ection': 'zebulon.ifa.net:/engine', 'iqn': ',', 'protocol_version': '3',
'=': 'user'}]) {} failed - stopping task
Thread-44::DEBUG::2014-12-08
19:20:45,463::task::1214::Storage.TaskManager.Task::(stop)
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::stopping in state preparing
(force False)
Thread-44::DEBUG::2014-12-08
19:20:45,463::task::990::Storage.TaskManager.Task::(_decref)
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::ref 1 aborting True
Thread-44::INFO::2014-12-08
19:20:45,463::task::1168::Storage.TaskManager.Task::(prepare)
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::aborting: Task is aborted:
u"'id'" - code 100
Thread-44::DEBUG::2014-12-08
19:20:45,463::task::1173::Storage.TaskManager.Task::(prepare)
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Prepare: aborted: 'id'
Thread-44::DEBUG::2014-12-08
19:20:45,463::task::990::Storage.TaskManager.Task::(_decref)
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::ref 0 aborting True
Thread-44::DEBUG::2014-12-08
19:20:45,463::task::925::Storage.TaskManager.Task::(_doAbort)
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Task._doAbort: force False
Thread-44::DEBUG::2014-12-08
19:20:45,463::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-44::DEBUG::2014-12-08
19:20:45,463::task::592::Storage.TaskManager.Task::(_updateState)
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::moving from state preparing
-> state aborting
Thread-44::DEBUG::2014-12-08
19:20:45,464::task::547::Storage.TaskManager.Task::(__state_aborting)
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::_aborting: recover policy none
Thread-44::DEBUG::2014-12-08
19:20:45,464::task::592::Storage.TaskManager.Task::(_updateState)
Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::moving from state aborting
-> state failed
Thread-44::DEBUG::2014-12-08
19:20:45,464::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-44::DEBUG::2014-12-08
19:20:45,464::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-44::ERROR::2014-12-08
19:20:45,464::dispatcher::79::Storage.Dispatcher::(wrapper) 'id'
Traceback (most recent call last):
File "/usr/share/vdsm/storage/dispatcher.py", line 71, in wrapper
result = ctask.prepare(func, *args, **kwargs)
File "/usr/share/vdsm/storage/task.py", line 103, in wrapper
return m(self, *a, **kw)
File "/usr/share/vdsm/storage/task.py", line 1176, in prepare
raise self.error
KeyError: 'id'
clientIFinit::ERROR::2014-12-08
19:20:48,190::clientIF::460::vds::(_recoverExistingVms) Vm's recovery failed
Traceback (most recent call last):
File "/usr/share/vdsm/clientIF.py", line 404, in _recoverExistingVms
caps.CpuTopology().cores())
File "/usr/share/vdsm/caps.py", line 200, in __init__
self._topology = _getCpuTopology(capabilities)
File "/usr/share/vdsm/caps.py", line 232, in _getCpuTopology
capabilities = _getFreshCapsXMLStr()
File "/usr/share/vdsm/caps.py", line 222, in _getFreshCapsXMLStr
return libvirtconnection.get().getCapabilities()
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
line 157, in get
passwd)
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
line 102, in open_connection
return utils.retry(libvirtOpen, timeout=10, sleep=0.2)
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 935, in retry
return func()
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 102, in
openAuth
if ret is None:raise libvirtError('virConnectOpenAuth() failed')
libvirtError: authentication failed: polkit:
polkit\56retains_authorization_after_challenge=1
Authorization requires authentication but no agent is available.
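[For what it's worth, the KeyError in the first vdsm traceback is mechanical:
connectStorageServer expects each connection definition to carry an 'id'
field, and the conList logged above (note the garbage '=' and ',' keys where
'id', 'user' and 'password' fields should be) has none. A minimal standalone
sketch of that failure mode, not vdsm code:]

```python
# Rebuild the mangled connection definition exactly as it appears in the log.
con_list = [{'connection': 'zebulon.ifa.net:/engine', 'iqn': ',',
             'protocol_version': '3', 'kvm': 'password',
             '=': 'user', ',': '='}]

status_list = []
error = None
try:
    for con_def in con_list:
        # The equivalent of hsm.py line 2466: 'id' is missing, so this raises.
        status_list.append({'id': con_def['id'], 'status': 0})
except KeyError as exc:
    error = exc  # KeyError: 'id', matching "Prepare: aborted: 'id'" in the log
```

So the agent keeps retrying and finally gives up ("Too many errors occurred")
because every connectStorageServer call aborts on the same missing key; the
mangled conList, not the storage itself, is the proximate failure.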
--
This message is intended only for the addressee and may contain
confidential information. Unless you are that person, you may not
disclose its contents or use it in any way and are requested to delete
the message along with any attachments and notify us immediately.
"Transact" is operated by Integrated Financial Arrangements plc. 29
Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608
5300. (Registered office: as above; Registered in England and Wales
under number: 3727592). Authorised and regulated by the Financial
Conduct Authority (entered on the Financial Services Register; no. 190856).