<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, May 29, 2017 at 2:25 PM, Andy Gibbs <span dir="ltr"><<a href="mailto:andyg1001@hotmail.co.uk" target="_blank">andyg1001@hotmail.co.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 29 May 2017 08:22, Sandro Bonazzola wrote:<br>
> Hi, so if I understood correctly, you're trying to work on a single host<br>
> deployment right?<br>
> Or are you just trying to replace the bare metal all-in-one 3.6 in a context<br>
> with more hosts?<br>
> If this is the case, can you share your use case? I'm asking because for<br>
> single host installations there are other solutions that may fit better than<br>
> oVirt, like virt-manager or kimchi (<a href="https://github.com/kimchi-project/kimchi" rel="noreferrer" target="_blank">https://github.com/kimchi-<wbr>project/kimchi</a>)<br>
<br>
</span>Sandro, thank you for your reply.<br>
<br>
I hadn't heard about kimchi before. Virt-manager had been ruled out because its user interface is not really friendly enough for non-technical people, which is important for us. The simple web interface of oVirt, however, is excellent in this regard.<br>
<br>
I would say that the primary use-case is this: we want a server which individual employees can log into (using their Active Directory logins) to access company-wide "public" VMs or, if permitted, create private VMs for their own use. Users should be able to start and stop the "public" VMs but not edit or delete them; they should only have full control over the VMs they create for themselves. Very importantly, it should be possible to control which users have the ability to create their own VMs. Nice to have would be the ability for users to share their VMs with other users. Really nice to have would be a way of detecting whether a VM is in use by someone else before opening a console and stealing it away from the current user!<br>
<br>
(Actually, case in point: the user web interface for oVirt 3.6 always opens a console for a VM when the user logs in, if it is the only one running on the server which the user has access to. I don't know if this is fixed in 4.1, but our workaround is to have a dummy VM that always runs and displays a graphic with helpful text for anyone who sees it! A bit of a nuisance, but not too bad. We never found a way to disable this behaviour.)<br></blockquote><div><br></div><div>This sounds like a bug to me, if the guest agent is installed and running on the guest.</div><div>I'd appreciate it if you could open a bug report with all the relevant details.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
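<div>[Editorial note: the access rules described above map fairly naturally onto oVirt's built-in roles, e.g. UserRole for run/stop/console access and UserVmManager for full control over a user's own VMs (role names from memory, so please verify against your engine version). As a language-neutral illustration only, not oVirt code, the intended policy can be sketched as:</div>

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Vm:
    name: str
    owner: Optional[str] = None           # None means a company-wide "public" VM
    shared_with: Set[str] = field(default_factory=set)

def allowed(user: str, vm: Vm, action: str) -> bool:
    """Sketch of the intended policy: anyone may start/stop/console public VMs,
    while only the owner (or a user the VM is shared with) gets full control."""
    if vm.owner is None:                  # public VM: operate, but never modify
        return action in {"start", "stop", "console"}
    # private VM: owner and explicitly shared users have full control
    return user == vm.owner or user in vm.shared_with

# Hypothetical example VMs for illustration
public = Vm("build-server")               # public: start/stop only
private = Vm("dev-box", owner="andy")     # private: full control for andy
```

<div>In oVirt itself the equivalent would be assigning a restricted role to an AD group on the public VMs, and granting a VM-creation role only to selected users; the sharing case would be the owner adding a permission for another user on their VM.]</div>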
<br>
We started off some years ago with a server running oVirt 3.4, now running 3.6, with the all-in-one plugin and had good success with this. The hosted engine for oVirt 4.1 seemed to be the recommended "upgrade path" -- although we did also start with entirely new server hardware.<br>
<br>
Ultimately once this first server is set up we will want to convert the old server hardware to a second node so that we can balance the load (we have a number of very resource-hungry VMs). This would be our secondary use-case. More nodes may follow in future. However, we don't see the particular need to have VMs that migrate from node to node, and each node will most likely have its own storage domains for the VMs that run on it. But to have one central web interface for managing the whole lot is a huge advantage.<br>
<br>
Coming then to the storage issue from my original post: we are trying to install this first server platform keeping the node, the hosted engine, and the storage all on one physical machine. We don't (currently) want to set up a separate storage server, and don't really see the benefit of doing so. Since my first email I have actually succeeded in getting the engine to recognise the node's storage paths, though I'm not sure it really is the right way. The solution I found was to create a third path, /srv/ovirt/engine, in addition to the data and iso paths. The engine gets installed to /srv/ovirt/engine, and once the engine is started up I create a new data domain at node:/srv/ovirt/data. This adds the new path as the master data domain; after thinking a bit to itself, the hosted_storage data domain suddenly appears, and after a bit more thinking everything seems to get properly registered and works. I can then also create the ISO storage domain.<br>
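<div>[Editorial note: for anyone reproducing this single-host layout, preparing the three export paths can be sketched as below. This is an assumption-laden sketch, not an official procedure: oVirt expects NFS storage to be owned by vdsm:kvm (uid/gid 36 on a stock install), and the exports options shown are one common variant; adjust to your environment and verify with `id vdsm` on the node.</div>

```python
import os

VDSM_UID, VDSM_GID = 36, 36   # vdsm:kvm on a stock oVirt node

def prepare_storage_paths(base, uid=VDSM_UID, gid=VDSM_GID):
    """Create the engine/data/iso directories and return /etc/exports lines.

    Sketch only: run as root on the node, append the returned lines to
    /etc/exports, then apply them with `exportfs -ra`.
    """
    exports = []
    for name in ("engine", "data", "iso"):
        path = os.path.join(base, name)
        os.makedirs(path, exist_ok=True)
        os.chown(path, uid, gid)      # oVirt requires vdsm:kvm ownership
        os.chmod(path, 0o755)
        # all_squash/anonuid make NFS clients act as vdsm:kvm as well
        exports.append(
            f"{path} *(rw,sync,no_subtree_check,"
            f"anonuid={uid},anongid={gid},all_squash)"
        )
    return exports
```

<div>Called as `prepare_storage_paths("/srv/ovirt")`, this produces the /srv/ovirt/engine, /srv/ovirt/data and /srv/ovirt/iso layout described above.]</div>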
<br>
Does this seem like a viable solution, or have I achieved something "illegal"?<br></blockquote><div><br></div><div>It sounds like a bit of a hack, but I don't see a good reason why it wouldn't work - perhaps firewalling issues. It's certainly not a common or tested scenario.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
I am still not having much luck with my other problem(s) to do with restarting the server: it still hangs on shutdown, and it still takes a very long time (about ten minutes) after the node starts for the engine to start. Any help on this would be much appreciated.<br></blockquote><div><br></div><div>Logs would be appreciated - engine.log, server.log, perhaps the journal entries. Perhaps there's a race between the NFS and Engine services?</div><div>Y.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
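<div>[Editorial note: while gathering those logs, note that on the node the hosted-engine HA daemons log to /var/log/ovirt-hosted-engine-ha/agent.log and broker.log, while engine.log and server.log live on the engine VM under /var/log/ovirt-engine/. If the ten-minute delay really is a race between the local NFS export and the HA agent, a systemd drop-in ordering the agent after the NFS server is one thing to try. The unit names below (ovirt-ha-agent, nfs-server) are assumptions based on a stock EL7 hosted-engine host; verify them with `systemctl list-units` before applying.</div>

```ini
# /etc/systemd/system/ovirt-ha-agent.service.d/10-wait-for-nfs.conf
# Sketch only: make the hosted-engine HA agent wait for the local NFS export.
[Unit]
After=nfs-server.service
Wants=nfs-server.service
```

<div>After creating the drop-in, run `systemctl daemon-reload` and reboot to test. This is a guess at the cause, not a confirmed fix; the logs will tell the real story.]</div>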
<br>
Thanks<br>
<div class="HOEnZb"><div class="h5">Andy<br>
______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/<wbr>mailman/listinfo/users</a><br>
</div></div></blockquote></div><br></div></div>