[ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

Simone Marchioni s.marchioni at lynx2000.it
Thu Jul 13 11:00:26 UTC 2017


On 12/07/2017 10:59, knarra wrote:
> On 07/12/2017 01:43 PM, Simone Marchioni wrote:
>> On 11/07/2017 11:23, knarra wrote:
>>
>> Hi,
>>
>> reply here to both Gianluca and Kasturi.
>>
>> Gianluca: I had ovirt-4.1-dependencies.repo enabled and the gluster 3.8 
>> packages, but glusterfs-server was missing from my "yum install" 
>> command, so I added glusterfs-server to my installation.
>>
>> Kasturi: the packages ovirt-hosted-engine-setup, gdeploy and 
>> cockpit-ovirt-dashboard were already installed and updated; vdsm-gluster 
>> was missing, so I added it to my installation.
> okay, cool.

:-)
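For completeness, the install command that worked for me ended up being 
something like the following (package names as discussed above; it assumes 
the oVirt 4.1 repositories are already enabled on the hosts):

  # on each of the three hosts
  yum install ovirt-hosted-engine-setup gdeploy cockpit-ovirt-dashboard \
      glusterfs-server vdsm-gluster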

>>
>> I reran the deployment and IT WORKED! I can read the message 
>> "Successfully deployed Gluster" with the blue button "Continue to 
>> Hosted Engine Deployment". There's a minor glitch in the window: the 
>> green "V" in the circle is missing, as if there's a missing image (or a 
>> wrong path, as I had to remove "ansible" from the 
>> grafton-sanity-check.sh path...)
> There is a bug for this and it will be fixed soon. Here is the bug id 
> for your reference. https://bugzilla.redhat.com/show_bug.cgi?id=1462082

Ok, thank you!
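For anyone hitting the same glitch: the path fix I mentioned amounted to 
pointing the "file=" line of the generated gdeploy configuration at the 
script's real location on my hosts (a sketch, your paths may differ):

  # as generated by the wizard (wrong on my hosts):
  #   file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
  # what actually exists here:
  ls /usr/share/gdeploy/scripts/grafton-sanity-check.sh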

>>
>> Although the deployment worked, and the firewalld and glusterfs errors 
>> are gone, a couple of errors remain:
>>
>>
>> AFTER VG/LV CREATION, GLUSTER START/STOP/RELOAD AND FIREWALLD HANDLING:
>>
>> PLAY [gluster_servers] 
>> *********************************************************
>>
>> TASK [Run a shell script] 
>> ******************************************************
>> fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
>> conditional check 'result.rc != 0' failed. The error was: error while 
>> evaluating conditional (result.rc != 0): 'dict object' has no 
>> attribute 'rc'"}
>> fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
>> conditional check 'result.rc != 0' failed. The error was: error while 
>> evaluating conditional (result.rc != 0): 'dict object' has no 
>> attribute 'rc'"}
>> fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
>> conditional check 'result.rc != 0' failed. The error was: error while 
>> evaluating conditional (result.rc != 0): 'dict object' has no 
>> attribute 'rc'"}
>>     to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry
> Maybe you missed changing the path of the script 
> "/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh". That is 
> why this failure occurred.

You're right: I changed the path and now it's OK.
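Same class of fix as with grafton-sanity-check.sh; for the record, on my 
hosts the script lives here (worth a sanity check before rerunning):

  # verify the hooks script is where the gdeploy config now points
  ls -l /usr/share/gdeploy/scripts/disable-gluster-hooks.sh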

>>
>> PLAY RECAP 
>> *********************************************************************
>> ha1.domain.it            : ok=0    changed=0    unreachable=0 failed=1
>> ha2.domain.it            : ok=0    changed=0    unreachable=0 failed=1
>> ha3.domain.it            : ok=0    changed=0    unreachable=0 failed=1
>>
>>
>> PLAY [gluster_servers] 
>> *********************************************************
>>
>> TASK [Run a command in the shell] 
>> **********************************************
>> failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => 
>> {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
>> "0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": 
>> true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
>> "2017-07-12 00:22:46.833688", "stderr": "usermod: group 'gluster' 
>> does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
>> exist"], "stdout": "", "stdout_lines": []}
>> failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => 
>> {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
>> "0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": 
>> true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
>> "2017-07-12 00:22:46.892317", "stderr": "usermod: group 'gluster' 
>> does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
>> exist"], "stdout": "", "stdout_lines": []}
>> failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => 
>> {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
>> "0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": 
>> true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
>> "2017-07-12 00:22:47.009592", "stderr": "usermod: group 'gluster' 
>> does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
>> exist"], "stdout": "", "stdout_lines": []}
>>     to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry
>>
>> PLAY RECAP 
>> *********************************************************************
>> ha1.domain.it            : ok=0    changed=0    unreachable=0 failed=1
>> ha2.domain.it            : ok=0    changed=0    unreachable=0 failed=1
>> ha3.domain.it            : ok=0    changed=0    unreachable=0 failed=1
> This error can be safely ignored.

Ok
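If you'd rather make that step succeed than ignore it, I suppose a minimal 
workaround (assuming the "gluster" group really is just missing on the 
hosts) would be:

  # on each host: create the missing group, then retry the membership change
  groupadd gluster
  usermod -a -G gluster qemu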

>>
>>
>> Are these a problem for my installation, or can I ignore them?
> You can just manually run the script to disable hooks on all the 
> nodes. The other error you can ignore.

Done.
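In case it helps others, I ran it on all three nodes in one shot, 
something like this (host names as in this thread, root SSH between the 
hosts assumed):

  for h in ha1.domain.it ha2.domain.it ha3.domain.it; do
      ssh root@"$h" sh /usr/share/gdeploy/scripts/disable-gluster-hooks.sh
  done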

>>
>> By the way, I'm writing up and documenting this process and can 
>> prepare a tutorial if anyone is interested.
>>
>> Thank you again for your support: now I'll proceed with the Hosted 
>> Engine Deployment.
> Good to know that you can now start with Hosted Engine Deployment.

I started the Hosted Engine Deployment, but now I have a different problem.

As the installer asked, I specified some parameters, in particular a 
pingable gateway address: I gave it the host1 gateway.
Proceeding with the installer, it requires the Engine VM IP address 
(DHCP or static). I selected static and specified an IP address, but the 
IP *IS NOT* in the same subnet as host1: the VM IP addresses are all on 
a different subnet.
The installer shows a red message:

The Engine VM (aa.bb.cc.dd/SM) and the default gateway (ww.xx.yy.zz) 
will not be in the same IP subnet. Static routing configuration are not 
supported on automatic VM configuration.

I'm starting to think that BOTH the host IPs and the VM IPs MUST BE ON 
THE SAME SUBNET.

Is this a requirement, or is there a way to deal with this configuration?
Is it related only to "automatic VM configuration" or to oVirt in 
general? Once oVirt Engine is installed, can I have VMs on a different 
subnet?

Bye,
Simone

