<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 12pt;
font-family:Calibri
}
--></style></head>
<body class='hmmessage'><div dir='ltr'><div>Hi all,</div><div><br></div><div>> Date: Mon, 10 Mar 2014 12:56:19 -0400<br>> From: jbrooks@redhat.com<br>> To: msivak@redhat.com<br>> CC: users@ovirt.org<br>> Subject: Re: [Users] hosted engine help<br>> <br>> <br>> <br>> ----- Original Message -----<br>> > From: "Martin Sivak" <msivak@redhat.com><br>> > To: "Dan Kenigsberg" <danken@redhat.com><br>> > Cc: users@ovirt.org<br>> > Sent: Saturday, March 8, 2014 11:52:59 PM<br>> > Subject: Re: [Users] hosted engine help<br>> > <br>> > Hi Jason,<br>> > <br>> > can you please attach the full logs? We had very similar issue before I we<br>> > need to see if is the same or not.<br>> <br>> I may have to recreate it -- I switched back to an all in one engine after my<br>> setup started refusing to run the engine at all. It's no fun losing your engine!<br>> <br>> This was a migrated-from-standalone setup, maybe that caused additional wrinkles...<br>> <br>> Jason<br>> <br>> > <br>> > Thanks<br><br></div>
<div>I experienced the exact same symptoms as Jason on a from-scratch installation on two physical nodes with CentOS 6.5 (fully up-to-date), using oVirt 3.4.0_pre (latest test-day release) and GlusterFS 3.5.0beta3 (with Gluster-provided NFS as storage for the self-hosted Engine VM only).</div><div><br></div>
<div>I roughly followed the guide from Andrew Lau:</div><div><br></div>
<div>http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/</div><div><br></div>
<div>with some variations due to newer packages (resolved bugs) and a different hardware setup (no VLANs in my setup: physically separated networks; a custom second NIC added to the Engine VM template before deploying; etc.)</div><div><br></div>
<div>The self-hosted installation on the first node + Engine VM (configured to manage both oVirt and the storage; default Datacenter set to NFS because GlusterFS was not offered) apparently went smoothly, but the HA agent failed to start at the very end (same errors in the logs as Jason: the storage domain seems "missing"), and I was only able to start everything manually with:</div><div><br></div>
<div>hosted-engine --connect-storage</div><div>hosted-engine --start-pool</div><div>hosted-engine --vm-start</div><div><br></div>
<div>Then the Engine came up and I could use it. I even registered the second node (same final error from the HA agent) and tried to add GlusterFS storage domains for further VMs and ISOs (by the way: the original Engine-VM-only NFS-on-GlusterFS domain is not shown in the Engine web UI), but activating the domains always failed (they remain "Inactive").</div><div><br></div>
<div>Furthermore, the Engine gets killed some time after starting (from 3 up to 11 hours later), and the only way to get it back is to repeat the above commands.</div><div><br></div>
<div>I always managed GlusterFS "natively" (not through oVirt) from the command line and verified that the NFS-exported Engine-VM-only volume gets replicated, but I could not test migration because the HA part remains inactive and oVirt refuses to migrate the Engine.</div><div><br></div>
<div>Since I tried many times, with variations and further manual actions in between (like manually mounting the NFS Engine domain, restarting only the HA agent, etc.), my logs are "cluttered", so I should start from scratch again and collect all the logs in one sweep.</div><div><br></div>
<div>Tell me what I should capture, and at which points in the whole process, and I will try to follow up as soon as possible.</div><div><br></div>
<div>Many thanks,</div><div>Giuseppe</div><div><br></div>
<div>> > --<br>> > Martin Sivák<br>> > msivak@redhat.com<br>> > Red Hat Czech<br>> > RHEV-M SLA / Brno, CZ<br>> > <br>> > ----- Original Message -----<br>> > > On Fri, Mar 07, 2014 at 10:17:43AM +0100, Sandro Bonazzola wrote:<br>> > > > Il 07/03/2014 01:10, Jason Brooks ha scritto:<br>> > > > > Hey everyone, I've been testing out oVirt 3.4 w/
hosted engine, and<br>> > > > > while I've managed to bring the engine up, I've only been able to do it<br>> > > > > manually, using "hosted-engine --vm-start".<br>> > > > > <br>> > > > > The ovirt-ha-agent service fails reliably for me, erroring out with<br>> > > > > "RequestError: Request failed: success."<br>> > > > > <br>> > > > > I've pasted error passages from the ha agent and vdsm logs below.<br>> > > > > <br>> > > > > Any pointers?<br>> > > > <br>> > > > looks like a VDSM bug, Dan?<br>> > > <br>> > > Why? The exception is raised from deep inside the ovirt_hosted_engine_ha<br>> > > code.<br>> > > _______________________________________________<br>> > > Users mailing list<br>> > > Users@ovirt.org<br>> > > http://lists.ovirt.org/mailman/listinfo/users<br></div></div></body>
</html>