Unable to add storage domains to Node Hosted Engine

Hi,

I am trying to install a new server with oVirt. I previously had relatively good success with the all-in-one plugin on oVirt 3.6, but it is time to move to newer things and I am trying to get oVirt 4.1 installed on a clean platform. I am unfortunately finding this very difficult and I hope someone here can help guide me in the right direction :o)

My intention is to use oVirt Node as the base OS on the server, install Hosted Engine on it, and also have the storage domains hosted on it too.

What I have achieved so far is...

1. Installed oVirt Node 4.1.2, generally using the default configuration.
2. Changed the performance profile to "virtual-host" inside the web interface, as the documentation suggests.
3. Created a new volume group, a pool for thin volumes, and finally an XFS-formatted volume that mounts on /srv, which will be used for the storage domain(s).
4. From the console on the node, created /srv/ovirt/data and /srv/ovirt/iso directories with ownership vdsm:kvm and access rights 0755, then added them to /etc/exports with the following options (see the command sketch at the end of this message): rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36
5. When I install Hosted Engine from the Node web interface, I select NFS4 for the storage and provide the correct host:/path. Other than that, and setting up static network and email details, I follow the default prompts of the Hosted Engine setup.
6. The Hosted Engine setup seems to run through to completion, and I get a "successfully installed" message at the end.
7. I open up the firewall on port 2049 for tcp and udp; this is done **after** the engine setup is complete, since the setup procedure alters the firewall configuration and my modifications otherwise get lost. I have confirmed that it is possible to mount these NFS shares both on the node itself and on the engine VM (once it is running).
8. I can log into the Hosted Engine web interface.

This is now where I run into problems.

The first problem is that no storage domains are registered, nor can I add any. I get warnings like "The hosted engine storage domain doesn't exist. It will be imported automatically upon data center activation, which requires adding an initial storage domain to the data center". However, attempting to do so gives errors like: "VDSM server.domain command CreateStoragePoolVDS failed: Cannot acquire host id: (u'fcdee6d8-c6fc-45fb-a60c-1df4b298f75a', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))". During Hosted Engine setup it seems that the /srv/ovirt/data storage domain is created, but I cannot import it. I also tried to create a new storage domain at /srv/ovirt/iso and met with similar results: it creates it, but won't attach to it.

The next problem is that I can't restart the server from the Node web interface. Clicking the "Restart" Power Options button simply causes the whole system to hang. It appears to shut down the Engine VM OK, and it also terminates all ssh connections, but after that the console screen on the server just goes black and sits there. You can still ping the server from the network, but you can't do or see anything. Fortunately the Dell server has a remote console feature that lets me restart it manually, but this is not ideal at all. This problem only occurs once the Hosted Engine is installed; before that, the "Restart" button works perfectly.

The final problem is that once the server has restarted, it takes about 10 minutes before the Node starts the Hosted Engine. Is this correct? I would have thought that once the Node is up, its first responsibility would be to get the Hosted Engine up, but you can see from the dmesg output that on average 530 seconds elapse before kvm is started. Is there anything I can do to speed this up?

Any help would be gratefully received. Or if anyone knows of an "oVirt for Dummies" (TM) style step-by-step installation guide that installs Node, Hosted Engine and the storage domains all on one machine, I would love to see that!!

Many thanks in advance!
Andy
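For reference, a minimal command sketch of the setup in steps 3, 4 and 7 above. The device name, thin-pool sizes, NFS client spec and firewall invocation are assumptions; the thread only gives the paths and the export option string:

    # step 3 (sketch): thin LVM pool with an XFS volume mounted on /srv
    vgcreate ovirt_vg /dev/sdb                   # device name is an assumption
    lvcreate -L 500G -T ovirt_vg/thinpool        # sizes are assumptions
    lvcreate -V 450G -T ovirt_vg/thinpool -n srv
    mkfs.xfs /dev/ovirt_vg/srv
    echo '/dev/ovirt_vg/srv /srv xfs defaults 0 0' >> /etc/fstab
    mount /srv

    # step 4: storage domain directories and NFS exports
    mkdir -p /srv/ovirt/data /srv/ovirt/iso
    chown 36:36 /srv/ovirt/data /srv/ovirt/iso   # uid/gid 36 = vdsm:kvm
    chmod 0755 /srv/ovirt/data /srv/ovirt/iso
    # the client spec (*) is an assumption; only the options were quoted
    cat >> /etc/exports <<'EOF'
    /srv/ovirt/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
    /srv/ovirt/iso *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
    EOF
    exportfs -ra

    # step 7: reopen NFS after hosted-engine setup rewrites the firewall
    firewall-cmd --add-port=2049/tcp --add-port=2049/udp
    firewall-cmd --permanent --add-port=2049/tcp --add-port=2049/udp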

On Fri, May 26, 2017 at 1:44 PM, Andy Gibbs <andyg1001@hotmail.co.uk> wrote:
> My intention is to use oVirt Node as the base OS on the server, install Hosted Engine on it and also have the storage domains hosted on it too. [...]
Hi, so if I understood correctly, you're trying to work on a single-host deployment, right? Or are you just trying to replace the bare-metal all-in-one 3.6 in a context with more hosts? If this is the case, can you share your use case? I'm asking because for single-host installations there are other solutions that may fit better than oVirt, like virt-manager or kimchi (https://github.com/kimchi-project/kimchi).
--
Sandro Bonazzola
Associate Manager, Software Engineering, EMEA ENG Virtualization R&D
Red Hat EMEA

On 29 May 2017 08:22, Sandro Bonazzola wrote:
> [...] for single host installations there are other solutions that may fit better than oVirt, like virt-manager or kimchi (https://github.com/kimchi-project/kimchi)
Sandro, thank you for your reply.

I hadn't heard about kimchi before. Virt-manager had been ruled out because the user interface is not really friendly enough for non-technical people, which is important for us. The simple web interface of oVirt, however, is excellent in this regard.

I would say that the primary use case is this: we want a server that individual employees can log into (using their Active Directory logins) to access company-wide "public" VMs or create their own private VMs for their own use (if permitted). Users should be able to start and stop the "public" VMs but not be able to edit or delete them. They should only have full control over the VMs that they create for themselves. And, very importantly, it should be possible to say which users have the ability to create their own VMs. Nice to have would be the ability for users to share their VMs with other users. Really nice to have would be a way of detecting whether a VM is in use by someone else before opening a console and stealing it away from the current user! (Actually, case in point: the user web interface for oVirt 3.6 always starts a console for a VM when the user logs in, if it is the only one running on the server that the user has access to. I don't know if this is fixed in 4.1, but our work-around is to have a dummy VM that always runs and displays a graphic with helpful text for anyone who sees it! A bit of a nuisance, but not too bad. We never found a way to disable this behaviour.)

We started off some years ago with a server running oVirt 3.4, now running 3.6, with the all-in-one plugin, and had good success with this. The hosted engine for oVirt 4.1 seemed to be the recommended "upgrade path" -- although we did also start with entirely new server hardware. Ultimately, once this first server is set up, we will want to convert the old server hardware to a second node so that we can balance the load (we have a number of very resource-hungry VMs). This would be our secondary use case. More nodes may follow in future. However, we don't see a particular need to have VMs that migrate from node to node, and each node will most likely have its own storage domains for the VMs that run on it. But having one central web interface for managing the whole lot is a huge advantage.

Coming then to the storage issue from my original post: we are trying to install this first server platform keeping the node, the hosted engine, and the storage all on one physical machine. We don't (currently) want to set up a separate storage server, and don't really see the benefit of doing so. Since my first email, I've actually succeeded in getting the engine to recognise the node's storage paths. However, I'm not sure it really is the right way. The solution I found was to create a third path, /srv/ovirt/engine, in addition to the data and iso paths. The engine gets installed to /srv/ovirt/engine, and then once the engine is started up, I create a new data domain at node:/srv/ovirt/data. This adds the new path as the master data domain; then, after thinking a bit to itself, suddenly the hosted_storage data domain appears, and after a bit more thinking, everything seems to get properly registered and works. I can then also create the ISO storage domain.

Does this seem like a viable solution, or have I achieved something "illegal"?
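For the record, the extra path is set up the same way as the data and iso exports from my first message; a sketch (the client spec is again an assumption):

    mkdir -p /srv/ovirt/engine
    chown 36:36 /srv/ovirt/engine
    chmod 0755 /srv/ovirt/engine
    echo '/srv/ovirt/engine *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)' >> /etc/exports
    exportfs -ra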
I am still not having much luck with my other problem(s) to do with restarting the server: it still hangs on shutdown, and it still takes a very long time (about ten minutes) after the node starts for the engine to start. Any help on this would be much appreciated.

Thanks
Andy

On Mon, May 29, 2017 at 2:25 PM, Andy Gibbs <andyg1001@hotmail.co.uk> wrote:
> [...] Really nice to have would be a way of detecting whether a VM is in use by someone else before opening a console and stealing it away from the current user! (Actually, case in point: the user web interface for oVirt 3.6 always starts a console for a VM when the user logs in, if it is the only one running on the server that the user has access to. I don't know if this is fixed in 4.1 [...] We never found a way to disable this behaviour.)
This sounds like a bug to me, if the guest agent is installed and running on the guest. I'd appreciate it if you could open a bug with all the relevant details.
> [...] The solution I found was to create a third path, /srv/ovirt/engine, in addition to the data and iso paths. The engine gets installed to /srv/ovirt/engine, and then once the engine is started up, I create a new data domain at node:/srv/ovirt/data. [...] Does this seem like a viable solution, or have I achieved something "illegal"?
Sounds a bit of a hack, but I don't see a good reason why it wouldn't work - perhaps firewalling issues. Certainly not a common or tested scenario.
> I am still not having much luck with my other problem(s) to do with restarting the server: it still hangs on shutdown and it still takes a very long time (about ten minutes) after the node starts for the engine to start. Any help on this would be much appreciated.
Logs would be appreciated - engine.log, server.log, perhaps journal entries. Perhaps there's a race between the NFS and Engine services?
Y.
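A short sketch of where those logs live by default (the paths are stock oVirt locations; the unit names and persistent-journal availability are assumptions):

    # on the engine VM
    less /var/log/ovirt-engine/engine.log
    less /var/log/ovirt-engine/server.log

    # on the node
    less /var/log/vdsm/vdsm.log
    journalctl -u ovirt-ha-agent -u ovirt-ha-broker -u nfs-server --since today
    journalctl -b -1   # previous boot (needs persistent journald storage)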

On 30 May 2017, at 08:31, Yaniv Kaul <ykaul@redhat.com> wrote:
> This sounds like a bug to me, if the guest agent is installed and running on the guest. I'd appreciate it if you could open a bug with all the relevant details.
Nothing to do with the agent, but rather the "connect automatically" checkbox per user. Just uncheck it for the user.
You may also check out https://github.com/oVirt/ovirt-web-ui for a modern simplified user portal. It's not fully complete; it's missing this "connect automatically" functionality, so it's perfect for you :)

Thanks,
michal
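A sketch of trying it out, assuming the ovirt-web-ui package is available in the engine's repositories and is served at the default path (both are assumptions for this release):

    # on the engine VM
    yum install ovirt-web-ui
    # then browse to https://<engine-fqdn>/ovirt-engine/web-ui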

On Fri, May 26, 2017 at 1:44 PM, Andy Gibbs <andyg1001@hotmail.co.uk> wrote:
> [...] 5. When I install Hosted Engine from the Node web interface, I select NFS4 for the storage and provide the correct host:/path.
AFAIK, loopback-mounting an NFS share could lead to deadlocks: https://lwn.net/Articles/595652/
> The first problem is that no storage domains are registered, nor can I add any. [...] "VDSM server.domain command CreateStoragePoolVDS failed: Cannot acquire host id: (u'fcdee6d8-c6fc-45fb-a60c-1df4b298f75a', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))". During Hosted Engine setup it seems that the /srv/ovirt/data storage domain is created, but I cannot import it.
Up to now, the hosted-engine storage domain can simply contain the engine VM, so you need another data storage domain to act as the master storage domain; when that other storage domain is ready, the engine will automatically import the hosted-engine SD and the engine VM.
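As an illustration, a hedged sketch of adding that initial NFS data domain through the REST API rather than the UI; the engine/node hostnames, credentials and CA path here are assumptions:

    curl --cacert /etc/pki/ovirt-engine/ca.pem \
         -u 'admin@internal:PASSWORD' \
         -H 'Content-Type: application/xml' \
         -d '<storage_domain>
               <name>data</name>
               <type>data</type>
               <storage>
                 <type>nfs</type>
                 <address>node.example.com</address>
                 <path>/srv/ovirt/data</path>
               </storage>
               <host><name>node1.example.com</name></host>
             </storage_domain>' \
         https://engine.example.com/ovirt-engine/api/storagedomains

Attaching the new domain to the data center is a separate call (a POST to /ovirt-engine/api/datacenters/{id}/storagedomains); the web UI does both steps in one go.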
> [...] The final problem is that once the server has restarted, it takes about 10 minutes before the Node starts the Hosted Engine. Is this correct? [...] on average 530 seconds elapse before kvm is started. Is there anything I can do to speed this up?
The ovirt-ha-agent systemd unit is, by design, not going to wait for nfs-server, so it can start before the storage is available, and this causes an extra delay due to retries on error. A few minutes (3-4) should be considered acceptable.
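If you want to experiment locally all the same, an untested sketch of a systemd drop-in that orders the agent after the local NFS server on a single-host setup like this one (the unit names assume the stock packages):

    mkdir -p /etc/systemd/system/ovirt-ha-agent.service.d
    cat > /etc/systemd/system/ovirt-ha-agent.service.d/10-wait-for-nfs.conf <<'EOF'
    [Unit]
    After=nfs-server.service
    Wants=nfs-server.service
    EOF
    systemctl daemon-reload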

Hi,

Simone Tiraboschi <stirabos@redhat.com> writes:
>> 5. When I install Hosted Engine from the Node web interface, I select NFS4 for the storage and provide the correct host:/path. [...]
>
> AFAIK, loopback-mounting an NFS share could lead to deadlocks: https://lwn.net/Articles/595652/
This article is from 2014. Are you saying that this issue hasn't been fixed in the intervening three years?

FWIW, I'm running with loopback NFS on my single-host oVirt 4.0.6 system and it's been running happily for a while now. I would have happily used a local filesystem, but the hosted-engine code wouldn't let me. Setting up loopback NFS was easier than trying to learn GlusterFS and figure out how to loopback Gluster. On the other hand, my system does have 256GB of RAM, with only about 50% being used, so that also might work in my favor.

-derek

--
Derek Atkins                 617-623-3745
derek@ihtfp.com              www.ihtfp.com
Computer and Internet Security Consultant

I believe the deadlock described in the article is nearly impossible when VM disk caching is disabled (by default).

Anyway, there is a solution to avoid loopback NFS: https://gerrit.ovirt.org/#/c/68822/10

On 30/05/2017, 17:55, Derek Atkins <derek@ihtfp.com> wrote:
> This article is from 2014. Are you saying that this issue hasn't been fixed in the intervening three years? [...]
participants (7): Andy Gibbs, Derek Atkins, Michal Skrivanek, Pavel Gashev, Sandro Bonazzola, Simone Tiraboschi, Yaniv Kaul