Interestingly enough, I literally just went through this same thing with a slight variation.

A note on the below: I am not sure this would be considered best practice or good for long-term support, but I made do with what I had.

I had 10Gb cards for my storage network but no 10Gb switch, so I direct-connected the nodes with some fun routing and /etc/hosts settings. I also didn't want my storage network on a routed network (we have firewalls in the way of VLANs), and I wanted it separate from my ovirtmgmt - and, as I said, I had no 10Gb switches.

Here is what you need at a bare minimum; adapt/change it as you need (example configs for each step are at the bottom of this mail, below the quote):

1 dedicated NIC on each node for ovirtmgmt. Ex: eth0
1 dedicated NIC to direct-connect node 1 and node 2 - eth1 on node1
1 dedicated NIC to direct-connect node 1 and node 3 - eth2 on node1
1 dedicated NIC to direct-connect node 2 and node 1 - eth1 on node2
1 dedicated NIC to direct-connect node 2 and node 3 - eth2 on node2
1 dedicated NIC to direct-connect node 3 and node 1 - eth1 on node3
1 dedicated NIC to direct-connect node 3 and node 2 - eth2 on node3

You'll need custom routes too:

Route to node 3 from node 1 via eth2
Route to node 3 from node 2 via eth2
Route to node 2 from node 3 via eth2

Finally, entries in your /etc/hosts which match the routes above.

Then, advisably, a dedicated NIC per box for the VM network, but you can leverage ovirtmgmt if you are just proofing this out.

At this point, if you can reach all of your nodes via these direct-connect IPs, you set up Gluster as you normally would, referencing your /etc/hosts entries when you call "gluster volume create".

In my setup, as I said, I had 2x 2-port PCIe 10Gb cards per server, so I set up LACP as well, as you can see below.

This is what my Frankenstein POC looked like: http://i.imgur.com/iURL9jv.png

You can optionally set this network up in oVirt as well (and add the NICs to each host), but don't configure it as a VM network. Then you can also, with some other minor tweaks, use these direct connects as the migration network rather than ovirtmgmt or the VM network.

On Tue, Sep 12, 2017 at 9:12 AM, Tailor, Bharat <bharat@synergysystemsindia.com> wrote:

> Hi,
>
> I am trying to deploy a 3-host hyperconverged setup. I am using CentOS and installed KVM on all hosts.
>
> Host-1
> Hostname - test1.localdomain
> eth0 - 192.168.100.15/24
> GW - 192.168.100.1
>
> Host-2
> Hostname - test2.localdomain
> eth0 - 192.168.100.16/24
> GW - 192.168.100.1
>
> Host-3
> Hostname - test3.localdomain
> eth0 - 192.168.100.17/24
> GW - 192.168.100.1
>
> I have created two Gluster volumes, "engine" and "data", with replica 3. I have added FQDN entries in /etc/hosts on all hosts for name resolution.
>
> I want to deploy the oVirt self-hosted engine OVA to manage all the hosts and the production VMs, and my ovirt-engine VM should have HA enabled.
>
> I found multiple docs on the internet for deploying the self-hosted engine OVA, but I don't know what network configuration I have to do on the CentOS NICs and in KVM, as the KVM docs suggest I have to create a bridge from the physical NIC to the virtual NICs. If I configure a bridge br0 over eth0, I can't see eth0 at the NIC choice while deploying the ovirt-engine setup.
>
> Kindly help me with the correct configuration of the CentOS hosts, KVM and the ovirt-engine VM for an HA-enabled DC.
>
> Regards
> Bharat Kumar
> G15 - Vinayak Nagar complex,
> Opp. Maa Satiya, Ayad
> Udaipur (Raj.) 313001
> Mob: +91-9950-9960-25
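As promised above, some example snippets. First, the per-link addressing: the idea is to give each direct-connect cable its own tiny subnet. All interface names, subnets, and IPs below are illustrative, not my exact values. On CentOS 7 this could look something like:

  # /etc/sysconfig/network-scripts/ifcfg-eth1 on node1 (link to node2)
  # node2's end of this link would be 172.16.12.2
  DEVICE=eth1
  BOOTPROTO=static
  IPADDR=172.16.12.1
  PREFIX=30
  ONBOOT=yes

With a /30 per link (e.g. 172.16.12.0/30 for node1-node2, 172.16.13.0/30 for node1-node3, 172.16.23.0/30 for node2-node3), each cable is its own little network and nothing leaks onto a routed segment.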
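For the custom routes, host routes pinned to an interface make it explicit which cable a peer is reached over. Hypothetical values again:

  # on node1: reach node3's storage IP over eth2
  ip route add 172.16.13.2/32 dev eth2

  # persist it on CentOS 7 by putting the same line in
  # /etc/sysconfig/network-scripts/route-eth2:
  172.16.13.2/32 dev eth2

If you use a dedicated /30 per link as sketched above, the connected routes mostly cover this already; the explicit routes matter if you address the links differently.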
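The /etc/hosts entries then give Gluster stable names that resolve to the direct-connect IPs. The trick is that each node's file points the same names at whichever IP that node can actually reach (names made up):

  # /etc/hosts on node1
  172.16.12.2  node2-storage
  172.16.13.2  node3-storage

  # /etc/hosts on node2
  172.16.12.1  node1-storage
  172.16.23.2  node3-storage

Each node also wants an entry for its own storage name, pointing at one of its local link IPs, so Gluster can resolve it during peering and volume creation.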
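Quick sanity check before touching Gluster - every node should reach the other two by their storage names:

  ping -c 3 node2-storage
  ping -c 3 node3-storage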
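Then Gluster itself is just the normal procedure, using those names. Brick paths here are placeholders:

  gluster peer probe node2-storage
  gluster peer probe node3-storage
  gluster volume create engine replica 3 \
    node1-storage:/gluster/engine/brick \
    node2-storage:/gluster/engine/brick \
    node3-storage:/gluster/engine/brick
  gluster volume start engine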
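And for the LACP piece (only relevant if, like me, you have two ports per link to burn): 802.3ad negotiates fine back-to-back between two hosts, no switch needed, since both ends speak LACP. A minimal CentOS 7 sketch, again with made-up names - the bond carries the link IP instead of the raw interface:

  # /etc/sysconfig/network-scripts/ifcfg-bond1 (the pair facing node2)
  DEVICE=bond1
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=802.3ad miimon=100"
  IPADDR=172.16.12.1
  PREFIX=30
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eth1 (one member; same for the second port)
  DEVICE=eth1
  MASTER=bond1
  SLAVE=yes
  ONBOOT=yes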
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users