<html><body><div style="font-family: georgia,serif; font-size: 12pt; color: #000000"><div>Hi,</div><div>Could you share the engine, libvirt, vdsm, and mom logs from host8, along with the connectivity log?</div><div>Have you tried installing a clean OS on the hosts, especially on the problematic one?</div><div>I'd also try disabling JSON-RPC: put the hosts into maintenance, then clear the JSON-RPC checkbox on each of them, just to see whether that resolves the issue.</div><div><br></div><div><br></div><div><span name="x"></span><br>Thanks in advance.<br><div><br></div>Best regards,<br>Nikolai<br>____________________<br>Nikolai Sednev<br>Senior Quality Engineer at Compute team<br>Red Hat Israel<br>34 Jerusalem Road,<br>Ra'anana, Israel 43501<br><div><br></div>Tel: &nbsp; &nbsp; &nbsp; +972 &nbsp; 9 7692043<br>Mobile: +972 52 7342734<br>Email: nsednev@redhat.com<br>IRC: nsednev<span name="x"></span><br></div><div><br></div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>users-request@ovirt.org<br><b>To: </b>users@ovirt.org<br><b>Sent: </b>Tuesday, December 16, 2014 5:50:28 PM<br><b>Subject: </b>Users Digest, Vol 39, Issue 98<br><div><br></div>Send Users mailing list submissions to<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;users@ovirt.org<br><div><br></div>To subscribe or unsubscribe via the World Wide Web, visit<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;http://lists.ovirt.org/mailman/listinfo/users<br>or, via email, send a message with subject or body 'help' to<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;users-request@ovirt.org<br><div><br></div>You can reach the person managing the list at<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;users-owner@ovirt.org<br><div><br></div>When replying, please edit your Subject line so it is more specific<br>than "Re: Contents of Users digest..."<br><div><br></div><br>Today's 
Topics:<br><div><br></div>&nbsp;&nbsp; 1. Re: &nbsp;Free Ovirt Powered Cloud (Lior Vernia)<br>&nbsp;&nbsp; 2. &nbsp; gluster rpms not found (Pat Pierson)<br>&nbsp;&nbsp; 3. &nbsp;vdsm losing connection to libvirt (Chris Adams)<br>&nbsp;&nbsp; 4. Re: &nbsp;Creating new users on oVirt 3.5 (Donny Davis)<br>&nbsp;&nbsp; 5. Re: &nbsp;gfapi, 3.5.1 (Alex Crow)<br><div><br></div><br>----------------------------------------------------------------------<br><div><br></div>Message: 1<br>Date: Tue, 16 Dec 2014 15:55:02 +0200<br>From: Lior Vernia &lt;lvernia@redhat.com&gt;<br>To: Donny Davis &lt;donny@cloudspin.me&gt;<br>Cc: users@ovirt.org<br>Subject: Re: [ovirt-users] Free Ovirt Powered Cloud<br>Message-ID: &lt;549039B6.2010804@redhat.com&gt;<br>Content-Type: text/plain; charset=ISO-8859-1<br><div><br></div>Hi Donny,<br><div><br></div>On 15/12/14 18:24, Donny Davis wrote:<br>&gt; Hi guys, I'm providing a free public cloud solution entirely based on<br>&gt; vanilla oVirt called cloudspin.me &lt;http://cloudspin.me&gt;<br>&gt; <br><div><br></div>This looks great! :)<br><div><br></div>&gt; It runs on IPv6, and I am looking for people to use the system, host<br>&gt; services and report back to me with their results.<br>&gt; <br><div><br></div>Do you also use IPv6 internally in your deployment? e.g. assign IPv6<br>addresses to your hosts, storage domain, power management etc.? We'd be<br>very interested to hear what works and what doesn't. And perhaps help<br>push forward what doesn't, if you need it :)<br><div><br></div>&gt; Data I am looking for<br>&gt; <br>&gt; Connection Speed - Is it comparable to other services<br>&gt; <br>&gt; User experience - Are there any changes recommended<br>&gt; <br>&gt; Does it work for you - What does, and does not work for you.<br>&gt; <br>&gt; &nbsp;<br>&gt; <br>&gt; I am trying to get funding to keep this a free resource for everyone to<br>&gt; use. 
(not from here:)<br>&gt; <br>&gt; I am completely open to any and all suggestions, and/or help with<br>&gt; things. I am a one man show at the moment.<br>&gt; <br>&gt; If anyone has any questions please email me back<br>&gt; <br>&gt; Donny D<br>&gt; <br>&gt; _______________________________________________<br>&gt; Users mailing list<br>&gt; Users@ovirt.org<br>&gt; http://lists.ovirt.org/mailman/listinfo/users<br>&gt; <br><div><br></div><br>------------------------------<br><div><br></div>Message: 2<br>Date: Tue, 16 Dec 2014 09:08:57 -0500<br>From: Pat Pierson &lt;ihasn2004@gmail.com&gt;<br>To: nathan@robotics.net<br>Cc: "users@ovirt.org" &lt;users@ovirt.org&gt;<br>Subject: [ovirt-users] gluster rpms not found<br>Message-ID:<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;CAMRYiEiKL1MEGoHWjKtnhW3DXjouU0w3hs5zFx75sfBL8M4JaQ@mail.gmail.com&gt;<br>Content-Type: text/plain; charset="utf-8"<br><div><br></div>Nathan,<br>&nbsp;&nbsp; Did you find a workaround for this? &nbsp;I am running into the same issue.<br><div><br></div>Is there a way to force vdsm to see gluster? Or a way to manually run the<br>search so I can see why it fails?<br><div><br></div><br>&gt; &lt;&gt;<br>&gt; nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |<br>&gt; www.broadsoft.com<br><div><br></div><br>On Fri, Jun 20, 2014 at 11:01 AM, Nathan Stratton &lt;nathan@robotics.net&gt;<br>wrote:<br><div><br></div>&gt; Actually I have vdsm-gluster, that is why vdsm tries to find the gluster<br>&gt; packages. Is there a way I can run the vdsm gluster rpm search manually to<br>&gt; see what is going wrong?<br>&gt;<br>&gt; [root@virt01a ~]# yum list installed |grep vdsm<br>&gt; vdsm.x86_64 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;4.14.9-0.el6 &nbsp; &nbsp; @ovirt-3.4-stable<br>&gt; vdsm-cli.noarch &nbsp; &nbsp; &nbsp; &nbsp;4.14.9-0.el6 &nbsp; &nbsp; @ovirt-3.4-stable<br>&gt; vdsm-gluster.noarch &nbsp; &nbsp;4.14.9-0.el6 &nbsp; &nbsp; @ovirt-3.4-stable<br>&gt; vdsm-python.x86_64 &nbsp; &nbsp; 4.14.9-0.el6 &nbsp; &nbsp; @ovirt-3.4-stable<br>&gt; vdsm-python-zombiereaper.noarch<br>&gt; vdsm-xmlrpc.noarch &nbsp; &nbsp; 4.14.9-0.el6 &nbsp; &nbsp; @ovirt-3.4-stable<br>&gt;<br>&gt; &gt;&lt;&gt;<br>&gt; nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |<br>&gt; www.broadsoft.com<br>&gt;<br>&gt; On Thu, Jun 19, 2014 at 8:39 PM, Andrew Lau &lt;andrew@andrewklau.com&gt; wrote:<br>&gt;&gt; You're missing vdsm-gluster<br>&gt;&gt;<br>&gt;&gt; yum install vdsm-gluster<br>&gt;&gt;<br>&gt;&gt; On Fri, Jun 20, 2014 at 6:24 AM, Nathan Stratton &lt;nathan@robotics.net&gt;<br>&gt;&gt; wrote:<br>&gt;&gt; &gt; I am running ovirt 3.4 and have gluster installed:<br>&gt;&gt; &gt;<br>&gt;&gt; &gt; [root@virt01a ~]# yum list installed |grep gluster<br>&gt;&gt; &gt; glusterfs.x86_64 &nbsp; &nbsp; &nbsp; 3.5.0-2.el6 &nbsp; &nbsp; &nbsp;@ovirt-glusterfs-epel<br>&gt;&gt; &gt; glusterfs-api.x86_64 &nbsp; 3.5.0-2.el6 &nbsp; &nbsp; &nbsp;@ovirt-glusterfs-epel<br>&gt;&gt; &gt; glusterfs-cli.x86_64 &nbsp; 3.5.0-2.el6 &nbsp; &nbsp; &nbsp;@ovirt-glusterfs-epel<br>&gt;&gt; &gt; glusterfs-fuse.x86_64 &nbsp;3.5.0-2.el6 &nbsp; &nbsp; &nbsp;@ovirt-glusterfs-epel<br>&gt;&gt; &gt; glusterfs-libs.x86_64 &nbsp;3.5.0-2.el6 &nbsp; &nbsp; &nbsp;@ovirt-glusterfs-epel<br>&gt;&gt; &gt; glusterfs-rdma.x86_64 &nbsp;3.5.0-2.el6 &nbsp; &nbsp; &nbsp;@ovirt-glusterfs-epel<br>&gt;&gt; &gt; glusterfs-server.x86_64 &nbsp;3.5.0-2.el6 &nbsp; &nbsp;@ovirt-glusterfs-epel<br>&gt;&gt; &gt;<br>&gt;&gt; &gt; However vdsm can't seem to find them:<br>&gt;&gt; &gt;<br>&gt;&gt; &gt; Thread-13::DEBUG::2014-06-19<br>&gt;&gt; &gt; 16:15:57,250::caps::458::root::(_getKeyPackages) rpm package<br>&gt;&gt; &gt; glusterfs-rdma not found<br>&gt;&gt; &gt; Thread-13::DEBUG::2014-06-19<br>&gt;&gt; &gt; 16:15:57,250::caps::458::root::(_getKeyPackages) rpm package<br>&gt;&gt; &gt; glusterfs-fuse not found<br>&gt;&gt; &gt; Thread-13::DEBUG::2014-06-19<br>&gt;&gt; &gt; 16:15:57,251::caps::458::root::(_getKeyPackages) rpm package<br>&gt;&gt; &gt; gluster-swift not found<br>&gt;&gt; &gt; Thread-13::DEBUG::2014-06-19<br>&gt;&gt; &gt; 16:15:57,252::caps::458::root::(_getKeyPackages) rpm package<br>&gt;&gt; &gt; gluster-swift-object not found<br>&gt;&gt; &gt; Thread-13::DEBUG::2014-06-19<br>&gt;&gt; &gt; 16:15:57,252::caps::458::root::(_getKeyPackages) rpm package<br>&gt;&gt; &gt; glusterfs not found<br>&gt;&gt; &gt; Thread-13::DEBUG::2014-06-19<br>&gt;&gt; &gt; 16:15:57,252::caps::458::root::(_getKeyPackages) rpm package<br>&gt;&gt; &gt; gluster-swift-plugin not found<br>&gt;&gt; &gt; Thread-13::DEBUG::2014-06-19<br>&gt;&gt; &gt; 16:15:57,254::caps::458::root::(_getKeyPackages) rpm package<br>&gt;&gt; &gt; gluster-swift-account not found<br>&gt;&gt; &gt; Thread-13::DEBUG::2014-06-19<br>&gt;&gt; &gt; 16:15:57,254::caps::458::root::(_getKeyPackages) rpm package<br>&gt;&gt; &gt; gluster-swift-proxy not found<br>&gt;&gt; &gt; Thread-13::DEBUG::2014-06-19<br>&gt;&gt; &gt; 16:15:57,254::caps::458::root::(_getKeyPackages) rpm package<br>&gt;&gt; &gt; gluster-swift-doc not found<br>&gt;&gt; &gt; Thread-13::DEBUG::2014-06-19<br>&gt;&gt; &gt; 16:15:57,255::caps::458::root::(_getKeyPackages) rpm package<br>&gt;&gt; &gt; glusterfs-server not found<br>&gt;&gt; &gt; Thread-13::DEBUG::2014-06-19<br>&gt;&gt; &gt; 16:15:57,255::caps::458::root::(_getKeyPackages) rpm package<br>&gt;&gt; &gt; gluster-swift-container not found<br>&gt;&gt; &gt; Thread-13::DEBUG::2014-06-19<br>&gt;&gt; &gt; 16:15:57,255::caps::458::root::(_getKeyPackages) rpm package<br>&gt;&gt; &gt; glusterfs-geo-replication not found<br>&gt;&gt; &gt;<br>&gt;&gt; &gt; Any ideas?<br>&gt;&gt; &gt;<br>&gt;&gt; &gt; &gt;&lt;&gt;<br>&gt;&gt; &gt; nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |<br>&gt;&gt; &gt; www.broadsoft.com<br>&gt;&gt; &gt;<br>&gt;&gt; &gt; _______________________________________________<br>&gt;&gt; &gt; Users mailing list<br>&gt;&gt; &gt; Users@ovirt.org<br>&gt;&gt; &gt; http://lists.ovirt.org/mailman/listinfo/users<br>&gt;&gt; &gt;<br><div><br></div>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: &lt;http://lists.ovirt.org/pipermail/users/attachments/20140621/9b14c8fe/attachment.html&gt;<br><div><br></div><br>-- <br>Patrick Pierson<br>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: &lt;http://lists.ovirt.org/pipermail/users/attachments/20141216/58d14872/attachment-0001.html&gt;<br><div><br></div>------------------------------<br><div><br></div>Message: 3<br>Date: Tue, 16 Dec 2014 08:48:48 -0600<br>From: Chris Adams &lt;cma@cmadams.net&gt;<br>To: users@ovirt.org<br>Subject: [ovirt-users] vdsm losing connection to libvirt<br>Message-ID: &lt;20141216144848.GA1708@cmadams.net&gt;<br>Content-Type: 
text/plain; charset=us-ascii<br><div><br></div>I have an oVirt setup with three nodes, all running CentOS 7, with a<br>hosted engine running CentOS 6. &nbsp;Two of the nodes (node8 and node9) are<br>configured for hosted engine, and the third (node2) is just a "regular"<br>node (as you might guess from the names, more nodes are coming as I<br>migrate VMs to oVirt).<br><div><br></div>On one node, node8, vdsm periodically loses its connection to libvirt,<br>which causes vdsm to restart. &nbsp;There doesn't appear to be any trigger<br>that I can see (it isn't related to time of day, load, etc.). &nbsp;The engine VM is<br>up and running on node8 (don't know if that has anything to do with it).<br><div><br></div>I get some entries in /var/log/messages repeated continuously; the<br>"ovirt-ha-broker: sending ioctl 5401 to a partition" I mentioned before,<br>and the following:<br><div><br></div>Dec 15 20:56:23 node8 journal: User record for user '107' was not found: No such file or directory<br>Dec 15 20:56:23 node8 journal: Group record for user '107' was not found: No such file or directory<br><div><br></div>I don't think those have any relevance (don't know where they come<br>from); filtering those out, I see:<br><div><br></div>Dec 15 20:56:33 node8 journal: End of file while reading data: Input/output error<br>Dec 15 20:56:33 node8 journal: Tried to close invalid fd 0<br>Dec 15 20:56:38 node8 journal: vdsm root WARNING connection to libvirt broken. 
ecode: 1 edom: 7<br>Dec 15 20:56:38 node8 journal: vdsm root CRITICAL taking calling process down.<br>Dec 15 20:56:38 node8 journal: vdsm vds ERROR libvirt error<br>Dec 15 20:56:38 node8 journal: ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsCapabilities: Error 16 from getVdsCapabilities: Unexpected exception<br>Dec 15 20:56:45 node8 journal: End of file while reading data: Input/output error<br>Dec 15 20:56:45 node8 vdsmd_init_common.sh: vdsm: Running run_final_hooks<br>Dec 15 20:56:45 node8 systemd: Starting Virtual Desktop Server Manager...<br>&lt;and then all the normal-looking vdsm startup&gt;<br><div><br></div>It is happening about once a day, but not at any regular interval or<br>time (was 02:23 Sunday, then 20:56 Monday).<br><div><br></div>vdsm.log has this at that time:<br><div><br></div>Thread-601576::DEBUG::2014-12-15 20:56:38,715::BindingXMLRPC::1132::vds::(wrapper) client [127.0.0.1]::call getCapabilities with () {}<br>Thread-601576::DEBUG::2014-12-15 20:56:38,718::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None)<br>Thread-601576::DEBUG::2014-12-15 20:56:38,746::utils::758::root::(execCmd) SUCCESS: &lt;err&gt; = ''; &lt;rc&gt; = 0<br>Thread-601576::WARNING::2014-12-15 20:56:38,754::libvirtconnection::135::root::(wrapper) connection to libvirt broken. 
ecode: 1 edom: 7<br>Thread-601576::CRITICAL::2014-12-15 20:56:38,754::libvirtconnection::137::root::(wrapper) taking calling process down.<br>MainThread::DEBUG::2014-12-15 20:56:38,754::vdsm::58::vds::(sigtermHandler) Received signal 15<br>Thread-601576::DEBUG::2014-12-15 20:56:38,755::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 1 edom: 7 level: 2 message: internal error: client socket is closed<br>MainThread::DEBUG::2014-12-15 20:56:38,755::protocoldetector::135::vds.MultiProtocolAcceptor::(stop) Stopping Acceptor<br>MainThread::INFO::2014-12-15 20:56:38,755::__init__::563::jsonrpc.JsonRpcServer::(stop) Stopping JsonRPC Server<br>Detector thread::DEBUG::2014-12-15 20:56:38,756::protocoldetector::106::vds.MultiProtocolAcceptor::(_cleanup) Cleaning Acceptor<br>MainThread::INFO::2014-12-15 20:56:38,757::vmchannels::188::vds::(stop) VM channels listener was stopped.<br>MainThread::INFO::2014-12-15 20:56:38,758::momIF::91::MOM::(stop) Shutting down MOM<br>MainThread::DEBUG::2014-12-15 20:56:38,759::task::595::Storage.TaskManager.Task::(_updateState) Task=`26c7680c-23e2-42bb-964c-272e778a168a`::moving from state init -&gt; state preparing<br>MainThread::INFO::2014-12-15 20:56:38,759::logUtils::44::dispatcher::(wrapper) Run and protect: prepareForShutdown(options=None)<br>Thread-601576::ERROR::2014-12-15 20:56:38,755::BindingXMLRPC::1142::vds::(wrapper) libvirt error<br>Traceback (most recent call last):<br>&nbsp;&nbsp;File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper<br>&nbsp;&nbsp; &nbsp;res = f(*args, **kwargs)<br>&nbsp;&nbsp;File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 463, in getCapabilities<br>&nbsp;&nbsp; &nbsp;ret = api.getCapabilities()<br>&nbsp;&nbsp;File "/usr/share/vdsm/API.py", line 1245, in getCapabilities<br>&nbsp;&nbsp; &nbsp;c = caps.get()<br>&nbsp;&nbsp;File "/usr/share/vdsm/caps.py", line 615, in get<br>&nbsp;&nbsp; &nbsp;caps.update(netinfo.get())<br>&nbsp;&nbsp;File 
"/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 812, in get<br>&nbsp;&nbsp; &nbsp;nets = networks()<br>&nbsp;&nbsp;File "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 119, in networks<br>&nbsp;&nbsp; &nbsp;allNets = ((net, net.name()) for net in conn.listAllNetworks(0))<br>&nbsp;&nbsp;File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 129, in wrapper<br>&nbsp;&nbsp; &nbsp;__connections.get(id(target)).pingLibvirt()<br>&nbsp;&nbsp;File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3642, in getLibVersion<br>&nbsp;&nbsp; &nbsp;if ret == -1: raise libvirtError ('virConnectGetLibVersion() failed', conn=self)<br>libvirtError: internal error: client socket is closed<br><div><br></div><br>-- <br>Chris Adams &lt;cma@cmadams.net&gt;<br><div><br></div><br>------------------------------<br><div><br></div>Message: 4<br>Date: Tue, 16 Dec 2014 07:57:16 -0700<br>From: "Donny Davis" &lt;donny@cloudspin.me&gt;<br>To: "'Alon Bar-Lev'" &lt;alonbl@redhat.com&gt;,&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"'Fedele Stabile'"<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;fedele.stabile@fis.unical.it&gt;<br>Cc: users@ovirt.org<br>Subject: Re: [ovirt-users] Creating new users on oVirt 3.5<br>Message-ID: &lt;008801d01940$9682f2f0$c388d8d0$@cloudspin.me&gt;<br>Content-Type: text/plain;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;charset="us-ascii"<br><div><br></div>Check out my write-up on AAA, <br>I tried my best to break it down, and make it simple<br><div><br></div>https://cloudspin.me/ovirt-simple-ldap-aaa/<br><div><br></div>-----Original Message-----<br>From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of<br>Alon Bar-Lev<br>Sent: Tuesday, December 16, 2014 1:49 AM<br>To: Fedele Stabile<br>Cc: users@ovirt.org<br>Subject: Re: [ovirt-users] Creating new users on oVirt 3.5<br><div><br></div><br><div><br></div>----- Original Message -----<br>&gt; From: "Fedele Stabile" 
&lt;fedele.stabile@fis.unical.it&gt;<br>&gt; To: users@ovirt.org<br>&gt; Sent: Monday, December 15, 2014 8:05:28 PM<br>&gt; Subject: [ovirt-users] Creating new users on oVirt 3.5<br>&gt; <br>&gt; Hello,<br>&gt; I have to create some users on my oVirt 3.5 infrastructure.<br>&gt; On Friday I was following the instructions at<br>&gt; http://www.ovirt.org/LDAP_Quick_Start<br>&gt; LDAP Quick Start<br>&gt; so I correctly created an OpenLDAP server and a Kerberos service, but<br>&gt; this morning I read that the instructions are obsolete...<br>&gt; Now I'm trying to understand how to implement the new mechanism... but<br>&gt; I'm in trouble:<br>&gt; 1) ran yum install ovirt-engine-extension-aaa-ldap<br>&gt; 2) copied files into /etc/ovirt-engine/extensions.d and changed the<br>&gt; name in fis.unical.it-auth(n/z).properties<br>&gt; 3) copied files into /etc/ovirt-engine/aaa but now I can't do anything<br>&gt; <br>&gt; Can you help me with newbie instructions to install the aaa-extensions?<br>&gt; Thank you very much<br>&gt; Fedele Stabile<br><div><br></div>Hello,<br><div><br></div>Have you read [1]?<br>We of course need help in improving the documentation :) Can you please send<br>engine.log from engine startup so I can see if there are any issues?<br>Please make sure that in /etc/ovirt-engine/extensions.d you set<br>config.profile.file.1 to an absolute path under /etc/ovirt-engine/aaa/, as we<br>are waiting for 3.5.1 to support relative names.<br><div><br></div>The simplest sequence is:<br><div><br></div>1. copy /usr/share/ovirt-engine-extension-aaa-ldap/examples/simple<br>recursively to /etc/ovirt-engine<br>2. edit /etc/ovirt-engine/extensions.d/* and replace ../aaa with<br>/etc/ovirt-engine/aaa (this is pending 3.5.1)<br>3. edit /etc/ovirt-engine/aaa/ldap1.properties and set vars.server,<br>vars.user and vars.password to match your setup<br>4. restart the engine<br>5. 
send me engine.log<br><div><br></div>Regards,<br>Alon<br><div><br></div>[1]<br>http://gerrit.ovirt.org/gitweb?p=ovirt-engine-extension-aaa-ldap.git;a=blob;f=README;hb=HEAD<br>_______________________________________________<br>Users mailing list<br>Users@ovirt.org<br>http://lists.ovirt.org/mailman/listinfo/users<br><div><br></div><br><div><br></div>------------------------------<br><div><br></div>Message: 5<br>Date: Tue, 16 Dec 2014 15:50:23 +0000<br>From: Alex Crow &lt;acrow@integrafin.co.uk&gt;<br>To: users@ovirt.org<br>Subject: Re: [ovirt-users] gfapi, 3.5.1<br>Message-ID: &lt;549054BF.2090105@integrafin.co.uk&gt;<br>Content-Type: text/plain; charset=utf-8; format=flowed<br><div><br></div>Hi,<br><div><br></div>Does anyone know if this is expected to work correctly in the next iteration of 3.5?<br><div><br></div>Thanks<br><div><br></div>Alex<br><div><br></div>On 09/12/14 10:33, Alex Crow wrote:<br>&gt; Hi,<br>&gt;<br>&gt; Will the vdsm patches to properly enable libgfapi storage for VMs (and<br>&gt; the matching refactored code in the hosted-engine setup scripts)<br>&gt; make it into 3.5.1? 
It's not in the snapshots yet it seems.<br>&gt;<br>&gt; I notice it's in master/3.6 snapshot but something stops the HA stuff <br>&gt; in self-hosted setups from connecting storage:<br>&gt;<br>&gt; from Master test setup:<br>&gt; /var/log/ovirt-hosted-engine-ha/broker.log<br>&gt;<br>&gt; MainThread::INFO::2014-12-08 <br>&gt; 19:22:56,287::hosted_engine::222::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname) <br>&gt; Found certificate common name: 172.17.10.50<br>&gt; MainThread::WARNING::2014-12-08 <br>&gt; 19:22:56,395::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) <br>&gt; Failed to connect storage, waiting '15' seconds before the next attempt<br>&gt; MainThread::WARNING::2014-12-08 <br>&gt; 19:23:11,501::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) <br>&gt; Failed to connect storage, waiting '15' seconds before the next attempt<br>&gt; MainThread::WARNING::2014-12-08 <br>&gt; 19:23:26,610::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) <br>&gt; Failed to connect storage, waiting '15' seconds before the next attempt<br>&gt; MainThread::WARNING::2014-12-08 <br>&gt; 19:23:41,717::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) <br>&gt; Failed to connect storage, waiting '15' seconds before the next attempt<br>&gt; MainThread::WARNING::2014-12-08 <br>&gt; 19:23:56,824::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) <br>&gt; Failed to connect storage, waiting '15' seconds before the next attempt<br>&gt; MainThread::ERROR::2014-12-08 <br>&gt; 19:24:11,840::hosted_engine::500::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) <br>&gt; Failed trying to connect storage:<br>&gt; MainThread::ERROR::2014-12-08 <br>&gt; 
19:24:11,840::agent::173::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) <br>&gt; Error: 'Failed trying to connect storage' - trying to restart agent<br>&gt; MainThread::WARNING::2014-12-08 <br>&gt; 19:24:16,845::agent::176::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) <br>&gt; Restarting agent, attempt '8'<br>&gt; MainThread::INFO::2014-12-08 <br>&gt; 19:24:16,855::hosted_engine::222::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname) <br>&gt; Found certificate common name: 172.17.10.50<br>&gt; MainThread::WARNING::2014-12-08 <br>&gt; 19:24:16,962::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) <br>&gt; Failed to connect storage, waiting '15' seconds before the next attempt<br>&gt; MainThread::WARNING::2014-12-08 <br>&gt; 19:24:32,069::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) <br>&gt; Failed to connect storage, waiting '15' seconds before the next attempt<br>&gt; MainThread::WARNING::2014-12-08 <br>&gt; 19:24:47,181::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) <br>&gt; Failed to connect storage, waiting '15' seconds before the next attempt<br>&gt; MainThread::WARNING::2014-12-08 <br>&gt; 19:25:02,288::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) <br>&gt; Failed to connect storage, waiting '15' seconds before the next attempt<br>&gt; MainThread::WARNING::2014-12-08 <br>&gt; 19:25:17,389::hosted_engine::497::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) <br>&gt; Failed to connect storage, waiting '15' seconds before the next attempt<br>&gt; MainThread::ERROR::2014-12-08 <br>&gt; 19:25:32,404::hosted_engine::500::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) <br>&gt; Failed trying to connect storage:<br>&gt; MainThread::ERROR::2014-12-08 <br>&gt; 
19:25:32,404::agent::173::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) <br>&gt; Error: 'Failed trying to connect storage' - trying to restart agent<br>&gt; MainThread::WARNING::2014-12-08 <br>&gt; 19:25:37,409::agent::176::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) <br>&gt; Restarting agent, attempt '9'<br>&gt; MainThread::ERROR::2014-12-08 <br>&gt; 19:25:37,409::agent::178::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) <br>&gt; Too many errors occurred, giving up. Please review the log and <br>&gt; consider filing a bug.<br>&gt; MainThread::INFO::2014-12-08 <br>&gt; 19:25:37,409::agent::118::ovirt_hosted_engine_ha.agent.agent.Agent::(run) <br>&gt; Agent shutting down<br>&gt; (END) - Next: /var/log/ovirt-hosted-engine-ha/broker.log<br>&gt;<br>&gt; vdsm.log:<br>&gt;<br>&gt; Detector thread::DEBUG::2014-12-08 <br>&gt; 19:20:45,458::protocoldetector::214::vds.MultiProtocolAcceptor::(_remove_connection) <br>&gt; Removing connection 127.0.0.1:53083<br>&gt; Detector thread::DEBUG::2014-12-08 <br>&gt; 19:20:45,458::BindingXMLRPC::1193::XmlDetector::(handleSocket) xml <br>&gt; over http detected from ('127.0.0.1', 53083)<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,459::BindingXMLRPC::318::vds::(wrapper) client [127.0.0.1]<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,460::task::592::Storage.TaskManager.Task::(_updateState) <br>&gt; Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::moving from state init -&gt; <br>&gt; state preparing<br>&gt; Thread-44::INFO::2014-12-08 <br>&gt; 19:20:45,460::logUtils::48::dispatcher::(wrapper) Run and protect: <br>&gt; connectStorageServer(domType=1, <br>&gt; spUUID='ab2b5ee7-9aa7-426f-9d58-5e7d3840ad81', conList=[{'connection': <br>&gt; 'zebulon.ifa.net:/engine', 'iqn': ',', 'protocol_version': '3'<br>&gt; , 'kvm': 'password', '=': 'user', ',': '='}], options=None)<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,461::hsm::2384::Storage.HSM::(__prefetchDomains) nfs local <br>&gt; path: 
/rhev/data-center/mnt/zebulon.ifa.net:_engine<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,462::hsm::2408::Storage.HSM::(__prefetchDomains) Found SD <br>&gt; uuids: (u'd3240928-dae9-4ed0-8a28-7ab552455063',)<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,463::hsm::2464::Storage.HSM::(connectStorageServer) knownSDs: <br>&gt; {d3240928-dae9-4ed0-8a28-7ab552455063: storage.nfsSD.findDomain}<br>&gt; Thread-44::ERROR::2014-12-08 <br>&gt; 19:20:45,463::task::863::Storage.TaskManager.Task::(_setError) <br>&gt; Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Unexpected error<br>&gt; Traceback (most recent call last):<br>&gt; &nbsp; File "/usr/share/vdsm/storage/task.py", line 870, in _run<br>&gt; &nbsp; &nbsp; return fn(*args, **kargs)<br>&gt; &nbsp; File "/usr/share/vdsm/logUtils.py", line 49, in wrapper<br>&gt; &nbsp; &nbsp; res = f(*args, **kwargs)<br>&gt; &nbsp; File "/usr/share/vdsm/storage/hsm.py", line 2466, in <br>&gt; connectStorageServer<br>&gt; &nbsp; &nbsp; res.append({'id': conDef["id"], 'status': status})<br>&gt; KeyError: 'id'<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,463::task::882::Storage.TaskManager.Task::(_run) <br>&gt; Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Task._run: <br>&gt; b5accf8f-014a-412d-9fb8-9e9447d49b72 (1, <br>&gt; 'ab2b5ee7-9aa7-426f-9d58-5e7d3840ad81', [{'kvm': 'password', ',': '=', <br>&gt; 'conn<br>&gt; ection': 'zebulon.ifa.net:/engine', 'iqn': ',', 'protocol_version': <br>&gt; '3', '=': 'user'}]) {} failed - stopping task<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,463::task::1214::Storage.TaskManager.Task::(stop) <br>&gt; Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::stopping in state <br>&gt; preparing (force False)<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,463::task::990::Storage.TaskManager.Task::(_decref) <br>&gt; Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::ref 1 aborting True<br>&gt; Thread-44::INFO::2014-12-08 <br>&gt; 
19:20:45,463::task::1168::Storage.TaskManager.Task::(prepare) <br>&gt; Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::aborting: Task is <br>&gt; aborted: u"'id'" - code 100<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,463::task::1173::Storage.TaskManager.Task::(prepare) <br>&gt; Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Prepare: aborted: 'id'<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,463::task::990::Storage.TaskManager.Task::(_decref) <br>&gt; Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::ref 0 aborting True<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,463::task::925::Storage.TaskManager.Task::(_doAbort) <br>&gt; Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::Task._doAbort: force False<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,463::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) <br>&gt; Owner.cancelAll requests {}<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,463::task::592::Storage.TaskManager.Task::(_updateState) <br>&gt; Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::moving from state <br>&gt; preparing -&gt; state aborting<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,464::task::547::Storage.TaskManager.Task::(__state_aborting) <br>&gt; Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::_aborting: recover policy <br>&gt; none<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,464::task::592::Storage.TaskManager.Task::(_updateState) <br>&gt; Task=`b5accf8f-014a-412d-9fb8-9e9447d49b72`::moving from state <br>&gt; aborting -&gt; state failed<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,464::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) <br>&gt; Owner.releaseAll requests {} resources {}<br>&gt; Thread-44::DEBUG::2014-12-08 <br>&gt; 19:20:45,464::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) <br>&gt; Owner.cancelAll requests {}<br>&gt; Thread-44::ERROR::2014-12-08 <br>&gt; 19:20:45,464::dispatcher::79::Storage.Dispatcher::(wrapper) 
'id'<br>&gt; Traceback (most recent call last):<br>&gt; &nbsp; File "/usr/share/vdsm/storage/dispatcher.py", line 71, in wrapper<br>&gt; &nbsp; &nbsp; result = ctask.prepare(func, *args, **kwargs)<br>&gt; &nbsp; File "/usr/share/vdsm/storage/task.py", line 103, in wrapper<br>&gt; &nbsp; &nbsp; return m(self, *a, **kw)<br>&gt; &nbsp; File "/usr/share/vdsm/storage/task.py", line 1176, in prepare<br>&gt; &nbsp; &nbsp; raise self.error<br>&gt; KeyError: 'id'<br>&gt; clientIFinit::ERROR::2014-12-08 <br>&gt; 19:20:48,190::clientIF::460::vds::(_recoverExistingVms) Vm's recovery <br>&gt; failed<br>&gt; Traceback (most recent call last):<br>&gt; &nbsp; File "/usr/share/vdsm/clientIF.py", line 404, in _recoverExistingVms<br>&gt; &nbsp; &nbsp; caps.CpuTopology().cores())<br>&gt; &nbsp; File "/usr/share/vdsm/caps.py", line 200, in __init__<br>&gt; &nbsp; &nbsp; self._topology = _getCpuTopology(capabilities)<br>&gt; &nbsp; File "/usr/share/vdsm/caps.py", line 232, in _getCpuTopology<br>&gt; &nbsp; &nbsp; capabilities = _getFreshCapsXMLStr()<br>&gt; &nbsp; File "/usr/share/vdsm/caps.py", line 222, in _getFreshCapsXMLStr<br>&gt; &nbsp; &nbsp; return libvirtconnection.get().getCapabilities()<br>&gt; &nbsp; File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", <br>&gt; line 157, in get<br>&gt; &nbsp; &nbsp; passwd)<br>&gt; &nbsp; File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", <br>&gt; line 102, in open_connection<br>&gt; &nbsp; &nbsp; return utils.retry(libvirtOpen, timeout=10, sleep=0.2)<br>&gt; &nbsp; File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 935, in <br>&gt; retry<br>&gt; &nbsp; &nbsp; return func()<br>&gt; &nbsp; File "/usr/lib64/python2.7/site-packages/libvirt.py", line 102, in <br>&gt; openAuth<br>&gt; &nbsp; &nbsp; if ret is None:raise libvirtError('virConnectOpenAuth() failed')<br>&gt; libvirtError: authentication failed: polkit: <br>&gt; polkit\56retains_authorization_after_challenge=1<br>&gt; Authorization requires 
authentication but no agent is available.<br>&gt;<br>&gt;<br><div><br></div>-- <br>This message is intended only for the addressee and may contain<br>confidential information. Unless you are that person, you may not<br>disclose its contents or use it in any way and are requested to delete<br>the message along with any attachments and notify us immediately.<br>"Transact" is operated by Integrated Financial Arrangements plc. 29<br>Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608<br>5300. (Registered office: as above; Registered in England and Wales<br>under number: 3727592). Authorised and regulated by the Financial<br>Conduct Authority (entered on the Financial Services Register; no. 190856).<br><div><br></div><br><div><br></div>------------------------------<br><div><br></div>_______________________________________________<br>Users mailing list<br>Users@ovirt.org<br>http://lists.ovirt.org/mailman/listinfo/users<br><div><br></div><br>End of Users Digest, Vol 39, Issue 98<br>*************************************<br></div><div><br></div></div></body></html>