
On 23/07/2014 00:09, Itamar Heim wrote:
On 07/17/2014 11:14 AM, Maël Lavault wrote:
Hi,
I'm a student currently doing my final study project based on oVirt.
I played a lot with oVirt to get an overview of what could and could not be done, but I still have a few questions and points I'd like to clarify.
My project consists of an HA architecture using oVirt, powered by 8 reasonably powerful servers (4 x quad-core Opteron, 32 GB RAM, 2 x 10k RPM HDD) equipped with 2 NICs each.
I followed this tutorial to use oVirt with GlusterFS:
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
Worked well!
I installed the self-hosted engine on 2 different servers and GlusterFS on 2 other servers.
But since we only had 2 x 1 Gb NICs per server, we decided to go with bonding and VLANs to separate the networks, inspired by this blog post:
http://captainkvm.com/2013/04/maximizing-your-10gb-ethernet-in-rhev/
Unfortunately, it seems like oVirt 3.4 does not support installing the self-hosted engine on a bond + VLAN. I tried 3.5, but there were too many bugs for it to be usable, and the project is due to be deployed in 2 months.
should be available via: http://gerrit.ovirt.org/#/c/29730/ ?
Yes, it's available in 3.4.3 [1]. However, it looks like VDSM has some issues with the network [2][3], so in order to use it you need to downgrade vdsm to the previous version by running # yum downgrade "vdsm*". We're waiting for a new build of the vdsm packages fixing the issues.
[1] http://www.ovirt.org/OVirt_3.4.3_Release_Notes#oVirt_Hosted_Engine_Setup
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1121145
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1121643
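For reference, the bond + VLAN layout described in the blog post can be sketched with initscripts-style ifcfg files along these lines. The interface names, bond mode, and VLAN ID 100 below are assumptions for illustration, not details taken from this thread:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (assumed LACP bond of two NICs)
DEVICE=bond0
BONDING_OPTS="mode=802.3ad miimon=100"
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0.100  (assumed VLAN 100 carrying ovirtmgmt)
DEVICE=bond0.100
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
BRIDGE=ovirtmgmt
```

Each physical NIC would additionally carry MASTER=bond0 and SLAVE=yes in its own ifcfg file.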
A colleague suggested a workaround to me: using Open vSwitch between oVirt and the NIC bond to "translate tagged packets into untagged packets" and hide the bond from oVirt. Does this have a chance of working?
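The workaround the colleague describes could, in principle, be set up with ovs-vsctl: the bond is attached as a trunk port, and an internal access port with a VLAN tag hands untagged traffic to oVirt. This is only a sketch of the idea; the bridge/port names (ovsbr0, mgmt0) and VLAN IDs (100, 200) are assumptions for illustration:

```shell
# Create the bridge in front of the bond
ovs-vsctl add-br ovsbr0
# Attach the bond as a trunk carrying the tagged VLANs
ovs-vsctl add-port ovsbr0 bond0 trunks=100,200
# Internal access port on VLAN 100: OVS strips the tag on egress,
# so whatever sits on mgmt0 only ever sees untagged frames
ovs-vsctl add-port ovsbr0 mgmt0 tag=100 -- set interface mgmt0 type=internal
```

Whether vdsm tolerates an OVS bridge underneath it in 3.4 is a separate question this sketch does not answer.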
Since GlusterFS is accessed over NFS, I was able to bond the NICs on those two servers.
A few questions:
- What is the purpose of the ovirtmgmt network? I did a lot of searching but haven't found any clear explanation. Does it need to be publicly accessible, or is a private IP fine too?
It's just what the engine uses to communicate with the hosts. It's also, by default, the display network and the VM network.
- Is the display network used for SPICE/VNC connections?
yes.
- How does oVirt differentiate a VM network from a storage network? They both have the same VM role in the interface. Both networks could (should) be on a private IP range, right?
That depends on you deploying them on different networks.
- How can I add the storage network after the self-hosted engine is installed (or before, which would be better)? (Since the self-hosted engine is stored on Gluster, I will lose the connection to the engine.)
Configure it on host B, move the hosted engine to host B, then configure it on host A.
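The sequence suggested above could look roughly like this on the command line; the hosted-engine tool and its --vm-status / --set-maintenance options exist in oVirt, but the exact order and where each step is run is my reading of the advice, not a tested procedure:

```shell
hosted-engine --vm-status                      # check which host currently runs the engine VM

# 1. Configure the storage network on host B.
# 2. Move the engine VM to host B, e.g. by putting host A in local maintenance:
hosted-engine --set-maintenance --mode=local   # run on host A

# 3. Configure the storage network on host A, then bring it back:
hosted-engine --set-maintenance --mode=none    # run on host A
```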
- Using the self-hosted engine, do all my nodes need to be installed with hosted-engine --deploy, or can I have only 2 self-hosted-engine nodes and 4 classic nodes?
You can have 2 self-hosted and 50 classic nodes as well; there's no need to deploy the hosted engine across the entire cluster (nor has the hosted engine been tested when deployed on 50 hosts...).
- I'm trying to cleanly re-install the second hosted-engine host after some experiments, but the behavior is strange:
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
         To continue make a selection from the options below:
         (1) Continue setup - engine installation is complete
         (2) Power off and restart the VM
         (3) Abort setup
Isn't this supposed to give me information to connect to the VM so I can install the engine?
sandro/didi?
If this is the install on the second host, you can just answer 1 and go on. We have a bug open [4] about this issue; setup should assume here that the engine is up and running, without asking any question.
[4] https://bugzilla.redhat.com/show_bug.cgi?id=1106561
Thanks a lot for this truly well-made software!
thank you for using it.
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com