Bharat,
Are you using DNS for host names or /etc/hosts?
I personally place the engine's hostname and IP address in /etc/hosts on
all the hypervisors in case my DNS service goes down.
I also put the hypervisors in /etc/hosts too.
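For example, the extra entries might be built up like this (the engine IP and the hostnames are illustrative; substitute your own):

```shell
# Engine + hypervisor entries to append to /etc/hosts on every node.
# The engine IP (192.168.100.20) is an assumption for illustration.
hosts_entries='192.168.100.20  engine.localdomain
192.168.100.15  test1.localdomain
192.168.100.16  test2.localdomain
192.168.100.17  test3.localdomain'
printf '%s\n' "$hosts_entries"                    # review first
# printf '%s\n' "$hosts_entries" >> /etc/hosts    # then append, as root
```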
Hope this helps.
On Wed, Sep 13, 2017 at 6:04 AM, Yaniv Kaul <ykaul(a)redhat.com> wrote:
On Wed, Sep 13, 2017 at 8:42 AM, Tailor, Bharat <
bharat(a)synergysystemsindia.com> wrote:
> Hi Charles & Donny,
>
> Thank you so much.
>
> @Donny, my 3rd node's IP is 192.168.100.17 (mentioned wrongly in the mail
> above). I was trying to install the oVirt OVA on test2.localdomain. I've
> completed all the steps, but at the end I got the message "Engine is still
> unreachable". Kindly help me to troubleshoot it.
>
> @Ykaul How can I register myself on the mailing list?
>
http://lists.ovirt.org/mailman/listinfo/users
Y.
>
> Regards
> Bharat Kumar
>
> G15- Vinayak Nagar complex,Opp.Maa Satiya, ayad
> Udaipur (Raj.)
> 313001
> Mob: +91-9950-9960-25
>
>
>
>
>
> On Wed, Sep 13, 2017 at 3:06 AM, Charles Kozler <ckozleriii(a)gmail.com>
> wrote:
>
>> Bharat -
>>
>> 1. Yes. You will need to configure the switch port as a trunk and set up
>> your VLANs and VLAN IDs.
>> 2. Yes.
>> 3. You can still access the hosts. The engine itself crashing or being
>> down won't stop your VMs or hosts (unless fencing kicks in). You can use
>> virsh.
>> 4. My suggestion here is to start immediately after a fresh server install
>> and yum update. The installer does a lot and checks a lot, and won't like
>> pre-existing changes: e.g., setting up the ovirtmgmt bridged network
>> yourself.
>> 5. Yes. See #1. Usually what I do is give each oVirt node an IP of .5,
>> then .6, and so on. This way I can be sure my network itself is working
>> before adding a VM and attaching that NIC to it.
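As a sketch of #1 above, a tagged VLAN sub-interface on a CentOS host can be defined with a classic network-scripts ifcfg file like this (the VLAN ID and IP are assumptions for illustration):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0.100 -- VLAN 100 tagged on eth0
VLAN=yes
DEVICE=eth0.100
BOOTPROTO=none
IPADDR=192.168.200.5
NETMASK=255.255.255.0
ONBOOT=yes
```

The switch port carrying eth0 must be a trunk that allows VLAN 100 for this to pass traffic.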
>>
>> On Tue, Sep 12, 2017 at 4:41 PM, Donny Davis <donny(a)fortnebula.com>
>> wrote:
>>
>>> 1. Yes, you can do this.
>>> 2. Yes. In Linux it's called bonding, and this can be done from the UI.
>>> 3. You can get around using the engine machine if required with virsh
>>> or virt-manager - however, I would just wait for the manager to migrate
>>> and start on another host in the cluster.
>>> 4. The deployment will take care of everything for you. You just need
>>> an IP.
>>> 5. Yes, you can use VLANs or virtual networking (NSX-ish) called OVS in
>>> oVirt.
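As a sketch of point 3, you can query libvirt directly on a host while the engine is down; on oVirt hosts libvirt is protected by SASL authentication, so read-only mode is the quickest way to look around (the VM name is a placeholder, and this requires a live hypervisor):

```shell
# Run on the hypervisor itself, not the engine.
virsh -r list --all        # list VMs known to this host, read-only
virsh -r dominfo myvm      # basic info for one VM (placeholder name)
```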
>>>
>>> I noticed in your deployment that machines 2 and 3 have the same IP. You
>>> might want to fix that before deploying.
>>>
>>> Happy trails
>>> ~D
>>>
>>>
>>> On Tue, Sep 12, 2017 at 2:00 PM, Tailor, Bharat <
>>> bharat(a)synergysystemsindia.com> wrote:
>>>
>>>> Hi Charles,
>>>>
>>>> Thank you so much for sharing this cool stuff with us.
>>>>
>>>> My doubts are still not fully cleared:
>>>>
>>>>
>>>> 1. What if I have only a single physical network adaptor? Can't I
>>>> use it for both the management network and the production network?
>>>> 2. If I have two physical network adaptors, can I configure NIC
>>>> teaming like in VMware ESXi?
>>>> 3. What if my oVirt machine fails during production? In VMware we
>>>> can access ESXi hosts and VMs without vCenter and do all the usual
>>>> tasks. Can we do the same with oVirt & KVM?
>>>> 4. To deploy the ovirt-engine VM, what kind of configuration will I
>>>> have to do on the network adaptors? (E.g., just configure an IP on the
>>>> physical network, or do I have to create br0 for it?)
>>>> 5. Can I make multiple VM networks for VLAN configuration?
>>>>
>>>>
>>>> Regards
>>>> Bharat Kumar
>>>>
>>>> G15- Vinayak Nagar complex,Opp.Maa Satiya, ayad
>>>> Udaipur (Raj.)
>>>> 313001
>>>> Mob: +91-9950-9960-25
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Sep 12, 2017 at 9:30 PM, Charles Kozler <ckozleriii(a)gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>> Interestingly enough I literally just went through this same thing
>>>>> with a slight variation.
>>>>>
>>>>> Note to the below: I am not sure if this would be considered best
>>>>> practice or good for long-term support, but I made do with what I had.
>>>>>
>>>>> I had 10Gb cards for my storage network but no 10Gb switch, so I
>>>>> direct-connected them with some fun routing and /etc/hosts settings. I
>>>>> also didn't want my storage network on a routed network (I have
>>>>> firewalls in the way of VLANs), and I wanted the network separate from
>>>>> my ovirtmgmt - and, as I said, I had no switches for 10Gb. Here is what
>>>>> you need at a bare minimum; adapt/change it as you need.
>>>>>
>>>>> 1 dedicated NIC on each node for ovirtmgmt. Ex: eth0
>>>>>
>>>>> 1 dedicated NIC to direct connect node 1 and node 2 - eth1 node1
>>>>> 1 dedicated NIC to direct connect node 1 and node 3 - eth2 node1
>>>>>
>>>>> 1 dedicated NIC to direct connect node 2 and node 1 - eth1 node2
>>>>> 1 dedicated NIC to direct connect node 2 and node 3 - eth2 node2
>>>>>
>>>>> 1 dedicated NIC to direct connect node 3 and node 1 - eth1 node3
>>>>> 1 dedicated NIC to direct connect node 3 and node 2 - eth2 node3
>>>>>
>>>>> You'll need custom routes too:
>>>>>
>>>>> Route to node 3 from node 1 via eth2
>>>>> Route to node 3 from node 2 via eth2
>>>>> Route to node 2 from node 3 via eth2
>>>>>
>>>>> Finally, entries in your /etc/hosts that match your routes above
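On CentOS, those host routes and the matching name entries could look something like this on node1 (the 10.10.10.x IPs and the "-storage" names are made up for illustration):

```
# /etc/sysconfig/network-scripts/route-eth2 on node1:
# reach node3's storage IP over the direct-connect link
10.10.10.3/32 dev eth2

# /etc/hosts on node1: names used when creating the gluster volume
10.10.10.2  node2-storage
10.10.10.3  node3-storage
```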
>>>>>
>>>>> Then, advisably, a dedicated NIC per box for the VM network, but you
>>>>> can leverage ovirtmgmt if you are just proofing this out.
>>>>>
>>>>> At this point, if you can reach all of your nodes via these direct
>>>>> connect IPs, then you set up gluster as you normally would, referencing
>>>>> your entries in /etc/hosts when you call "gluster volume create".
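That gluster step, using direct-connect names from /etc/hosts, might look like this (volume name and brick paths are illustrative; run once from any node of a live gluster cluster):

```shell
gluster volume create engine replica 3 \
  node1-storage:/gluster/engine/brick \
  node2-storage:/gluster/engine/brick \
  node3-storage:/gluster/engine/brick
gluster volume start engine
```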
>>>>>
>>>>> In my setup, as I said, I had 2x 2-port PCIe 10Gb cards per server, so
>>>>> I set up LACP as well, as you can see below.
>>>>>
>>>>> This is what my Frankenstein POC looked like:
>>>>>
>>>>> http://i.imgur.com/iURL9jv.png
>>>>>
>>>>> You can optionally choose to set up this network in oVirt as well (and
>>>>> add the NICs to each host), but don't configure it as a VM network.
>>>>> Then you can also, with some other minor tweaks, use these direct
>>>>> connects as migration networks rather than ovirtmgmt or the VM network.
>>>>>
>>>>> On Tue, Sep 12, 2017 at 9:12 AM, Tailor, Bharat <
>>>>> bharat(a)synergysystemsindia.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I am trying to deploy a 3-host hyperconverged setup.
>>>>>> I am using CentOS and have installed KVM on all hosts.
>>>>>>
>>>>>> Host-1
>>>>>> Hostname - test1.localdomain
>>>>>> eth0 - 192.168.100.15/24
>>>>>> GW - 192.168.100.1
>>>>>>
>>>>>> Host-2
>>>>>> Hostname - test2.localdomain
>>>>>> eth0 - 192.168.100.16/24
>>>>>> GW - 192.168.100.1
>>>>>>
>>>>>> Host-3
>>>>>> Hostname - test3.localdomain
>>>>>> eth0 - 192.168.100.16/24
>>>>>> GW - 192.168.100.1
>>>>>>
>>>>>> I have created two gluster volumes, "engine" & "data", with replica 3.
>>>>>> I have added FQDN entries in /etc/hosts on all hosts for DNS
>>>>>> resolution.
>>>>>>
>>>>>> I want to deploy the oVirt self-hosted engine OVA to manage all the
>>>>>> hosts and production VMs, and my ovirt-engine VM should have HA
>>>>>> enabled.
>>>>>>
>>>>>> I found multiple docs over the internet on deploying the
>>>>>> self-hosted-engine OVA, but I don't know what kind of network
>>>>>> configuration I have to do on the CentOS network card & KVM. The KVM
>>>>>> docs suggest that I have to create a bridge network to bridge the
>>>>>> physical NIC to the virtual NIC. If I configure a bridge br0 for
>>>>>> eth0, I can't see eth0 while deploying the ovirt-engine setup at the
>>>>>> NIC card choice.
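For reference, a classic CentOS bridge setup enslaving eth0 to br0 looks roughly like this (values are illustrative, taken from test2 above; note that hosted-engine deployment normally creates the ovirtmgmt bridge itself, so a hand-made br0 can confuse the installer):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.16
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
```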
>>>>>>
>>>>>> Kindly help me to do the correct configuration for the CentOS hosts,
>>>>>> KVM & ovirt-engine-vm for an HA-enabled DC.
>>>>>> Regards
>>>>>> Bharat Kumar
>>>>>>
>>>>>> G15- Vinayak Nagar complex,Opp.Maa Satiya, ayad
>>>>>> Udaipur (Raj.)
>>>>>> 313001
>>>>>> Mob: +91-9950-9960-25
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> Users mailing list
>>>>>> Users(a)ovirt.org
>>>>>>
>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>