On 07/12/2017 01:43 PM, Simone Marchioni wrote:
On 11/07/2017 11:23, knarra wrote:
> On 07/11/2017 01:32 PM, Simone Marchioni wrote:
>> On 11/07/2017 07:59, knarra wrote:
>>
>> Hi,
>>
>> I removed the partition signatures with wipefs and ran the deploy again: this
>> time the creation of the VG and LV worked correctly. The deployment
>> proceeded until some new errors... :-/
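For anyone following along, the wipefs step mentioned above would look roughly like this; the device names are only examples and are not taken from the original report:

    wipefs -a /dev/sdb   # remove all filesystem/LVM/RAID signatures from a brick disk
    wipefs -a /dev/sdc

wipefs -a is destructive, so it should only be run on the disks intended for the gluster bricks.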
>>
>>
>> PLAY [gluster_servers]
>> *********************************************************
>>
>> TASK [start/stop/restart/reload services]
>> **************************************
>> failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item":
>> "glusterd", "msg": "Could not find the requested service glusterd:
>> host"}
>> failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item":
>> "glusterd", "msg": "Could not find the requested service glusterd:
>> host"}
>> failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item":
>> "glusterd", "msg": "Could not find the requested service glusterd:
>> host"}
>> to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
>>
>> PLAY RECAP
>> *********************************************************************
>> ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
>> ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
>> ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
>>
>>
>> PLAY [gluster_servers]
>> *********************************************************
>>
>> TASK [Start firewalld if not already started]
>> **********************************
>> ok: [ha1.domain.it]
>> ok: [ha2.domain.it]
>> ok: [ha3.domain.it]
>>
>> TASK [Add/Delete services to firewalld rules]
>> **********************************
>> failed: [ha1.domain.it] (item=glusterfs) => {"failed": true, "item":
>> "glusterfs", "msg": "ERROR: Exception caught:
>> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
>> not among existing services Services are defined by port/tcp
>> relationship and named as they are in /etc/services (on most systems)"}
>> failed: [ha2.domain.it] (item=glusterfs) => {"failed": true, "item":
>> "glusterfs", "msg": "ERROR: Exception caught:
>> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
>> not among existing services Services are defined by port/tcp
>> relationship and named as they are in /etc/services (on most systems)"}
>> failed: [ha3.domain.it] (item=glusterfs) => {"failed": true, "item":
>> "glusterfs", "msg": "ERROR: Exception caught:
>> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
>> not among existing services Services are defined by port/tcp
>> relationship and named as they are in /etc/services (on most systems)"}
>> to retry, use: --limit @/tmp/tmp5Dtb2G/firewalld-service-op.retry
>>
>> PLAY RECAP
>> *********************************************************************
>> ha1.domain.it : ok=1 changed=0 unreachable=0 failed=1
>> ha2.domain.it : ok=1 changed=0 unreachable=0 failed=1
>> ha3.domain.it : ok=1 changed=0 unreachable=0 failed=1
>>
>>
>> PLAY [gluster_servers]
>> *********************************************************
>>
>> TASK [Start firewalld if not already started]
>> **********************************
>> ok: [ha1.domain.it]
>> ok: [ha2.domain.it]
>> ok: [ha3.domain.it]
>>
>> TASK [Open/Close firewalld ports]
>> **********************************************
>> changed: [ha1.domain.it] => (item=111/tcp)
>> changed: [ha2.domain.it] => (item=111/tcp)
>> changed: [ha3.domain.it] => (item=111/tcp)
>> changed: [ha1.domain.it] => (item=2049/tcp)
>> changed: [ha2.domain.it] => (item=2049/tcp)
>> changed: [ha1.domain.it] => (item=54321/tcp)
>> changed: [ha3.domain.it] => (item=2049/tcp)
>> changed: [ha2.domain.it] => (item=54321/tcp)
>> changed: [ha1.domain.it] => (item=5900/tcp)
>> changed: [ha3.domain.it] => (item=54321/tcp)
>> changed: [ha2.domain.it] => (item=5900/tcp)
>> changed: [ha1.domain.it] => (item=5900-6923/tcp)
>> changed: [ha2.domain.it] => (item=5900-6923/tcp)
>> changed: [ha3.domain.it] => (item=5900/tcp)
>> changed: [ha1.domain.it] => (item=5666/tcp)
>> changed: [ha2.domain.it] => (item=5666/tcp)
>> changed: [ha1.domain.it] => (item=16514/tcp)
>> changed: [ha3.domain.it] => (item=5900-6923/tcp)
>> changed: [ha2.domain.it] => (item=16514/tcp)
>> changed: [ha3.domain.it] => (item=5666/tcp)
>> changed: [ha3.domain.it] => (item=16514/tcp)
>>
>> TASK [Reloads the firewall]
>> ****************************************************
>> changed: [ha1.domain.it]
>> changed: [ha2.domain.it]
>> changed: [ha3.domain.it]
>>
>> PLAY RECAP
>> *********************************************************************
>> ha1.domain.it : ok=3 changed=2 unreachable=0 failed=0
>> ha2.domain.it : ok=3 changed=2 unreachable=0 failed=0
>> ha3.domain.it : ok=3 changed=2 unreachable=0 failed=0
>>
>>
>> PLAY [gluster_servers]
>> *********************************************************
>>
>> TASK [Run a shell script]
>> ******************************************************
>> fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The
>> conditional check 'result.rc != 0' failed. The error was: error
>> while evaluating conditional (result.rc != 0): 'dict object' has no
>> attribute 'rc'"}
>> fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The
>> conditional check 'result.rc != 0' failed. The error was: error
>> while evaluating conditional (result.rc != 0): 'dict object' has no
>> attribute 'rc'"}
>> fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The
>> conditional check 'result.rc != 0' failed. The error was: error
>> while evaluating conditional (result.rc != 0): 'dict object' has no
>> attribute 'rc'"}
>> to retry, use: --limit @/tmp/tmp5Dtb2G/run-script.retry
>>
>> PLAY RECAP
>> *********************************************************************
>> ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
>> ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
>> ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
>>
>>
>> PLAY [gluster_servers]
>> *********************************************************
>>
>> TASK [Run a command in the shell]
>> **********************************************
>> failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) =>
>> {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta":
>> "0:00:00.003182", "end": "2017-07-10 18:30:51.204235", "failed":
>> true, "item": "usermod -a -G gluster qemu", "rc": 6, "start":
>> "2017-07-10 18:30:51.201053", "stderr": "usermod: group 'gluster'
>> does not exist", "stderr_lines": ["usermod: group 'gluster' does not
>> exist"], "stdout": "", "stdout_lines": []}
>> failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) =>
>> {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta":
>> "0:00:00.007698", "end": "2017-07-10 18:30:51.391046", "failed":
>> true, "item": "usermod -a -G gluster qemu", "rc": 6, "start":
>> "2017-07-10 18:30:51.383348", "stderr": "usermod: group 'gluster'
>> does not exist", "stderr_lines": ["usermod: group 'gluster' does not
>> exist"], "stdout": "", "stdout_lines": []}
>> failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) =>
>> {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta":
>> "0:00:00.004120", "end": "2017-07-10 18:30:51.405640", "failed":
>> true, "item": "usermod -a -G gluster qemu", "rc": 6, "start":
>> "2017-07-10 18:30:51.401520", "stderr": "usermod: group 'gluster'
>> does not exist", "stderr_lines": ["usermod: group 'gluster' does not
>> exist"], "stdout": "", "stdout_lines": []}
>> to retry, use: --limit @/tmp/tmp5Dtb2G/shell_cmd.retry
>>
>> PLAY RECAP
>> *********************************************************************
>> ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
>> ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
>> ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
>>
>>
>> PLAY [gluster_servers]
>> *********************************************************
>>
>> TASK [start/stop/restart/reload services]
>> **************************************
>> failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item":
>> "glusterd", "msg": "Could not find the requested service glusterd:
>> host"}
>> failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item":
>> "glusterd", "msg": "Could not find the requested service glusterd:
>> host"}
>> failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item":
>> "glusterd", "msg": "Could not find the requested service glusterd:
>> host"}
>> to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
>>
>> PLAY RECAP
>> *********************************************************************
>> ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
>> ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
>> ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
>>
>> Ignoring errors...
>> Ignoring errors...
>> Ignoring errors...
>> Ignoring errors...
>> Ignoring errors...
>>
>>
>> In start/stop/restart/reload services it complains about "Could not
>> find the requested service glusterd: host". Must GlusterFS be
>> preinstalled or not? I simply installed the rpm packages manually
>> BEFORE the deployment:
>>
>> yum install glusterfs glusterfs-cli glusterfs-libs
>> glusterfs-client-xlators glusterfs-api glusterfs-fuse
>>
>> but never configured anything.
> Looks like it failed to add the 'glusterfs' service using firewalld.
> Can we try again with what Gianluca suggested?
>
> Can you please install the latest ovirt rpm, which will add all the
> required dependencies, and make sure that the following packages are
> installed before running gdeploy?
>
> yum install vdsm-gluster ovirt-hosted-engine-setup gdeploy
> cockpit-ovirt-dashboard
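A quick sanity check before re-running gdeploy could be something along these lines (adding glusterfs-server here is an assumption on my part, but it is the package that provides the glusterd unit the earlier task could not find):

    rpm -q vdsm-gluster ovirt-hosted-engine-setup gdeploy cockpit-ovirt-dashboard glusterfs-server
    systemctl status glusterd   # the glusterd unit only exists once glusterfs-server is installed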
>>
>> For the firewalld problem "ERROR: Exception caught:
>> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
>> not among existing services Services are defined by port/tcp
>> relationship and named as they are in /etc/services (on most
>> systems)" I haven't touched anything... it's an "out of the box"
>> installation of CentOS 7.3.
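If it helps to check that side, the 'glusterfs' firewalld service is just a packaged XML definition; I believe it ships with glusterfs-server on CentOS (treat that as an assumption), so once that package is installed it should show up:

    firewall-cmd --get-services | tr ' ' '\n' | grep -x glusterfs
    ls /usr/lib/firewalld/services/glusterfs.xml   # standard location for packaged firewalld service definitions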
>>
>> I don't know if the following problems - "Run a shell script" and
>> "usermod: group 'gluster' does not exist" - are related to these...
>> maybe the usermod problem.
> You can safely ignore this; it has nothing to do with the
> configuration.
>>
>> Thank you again.
>> Simone
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
Hi,
I reply here to both Gianluca and Kasturi.
Gianluca: I had ovirt-4.1-dependencies.repo enabled, and the gluster 3.8
packages, but glusterfs-server was missing from my "yum install"
command, so I added glusterfs-server to my installation.
Kasturi: the packages ovirt-hosted-engine-setup, gdeploy and
cockpit-ovirt-dashboard were already installed and up to date. vdsm-gluster
was missing, so I added it to my installation.
okay, cool.
I reran the deployment and IT WORKED! I can read the message "Successfully
deployed Gluster" with the blue button "Continue to Hosted Engine
Deployment". There's a minor glitch in the window: the green "V" in
the circle is missing, as if there's a missing image (or a wrong path,
as I had to remove "ansible" from the grafton-sanity-check.sh path...)
There is a bug for this and it will be fixed soon. Here is the bug id
for your reference.
https://bugzilla.redhat.com/show_bug.cgi?id=1462082
Although the deployment worked, and the firewalld and glusterfs errors
are gone, a couple of errors remain:
AFTER VG/LV CREATION, START/STOP/RELOAD/GLUSTER AND FIREWALLD HANDLING:
PLAY [gluster_servers]
*********************************************************
TASK [Run a shell script]
******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry
Maybe you missed changing the path of the script
"/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh". That is
why it failed.
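A quick check on one of the nodes should show which path actually exists; the second path below is only my guess, based on the same "remove ansible from the path" fix you mentioned for grafton-sanity-check.sh:

    ls -l /usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh \
          /usr/share/gdeploy/scripts/disable-gluster-hooks.sh

The gdeploy configuration should then point at whichever of the two is present.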
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers]
*********************************************************
TASK [Run a command in the shell]
**********************************************
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) =>
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta":
"0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": true,
"item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12
00:22:46.833688", "stderr": "usermod: group 'gluster' does not exist",
"stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout":
"", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) =>
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta":
"0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": true,
"item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12
00:22:46.892317", "stderr": "usermod: group 'gluster' does not exist",
"stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout":
"", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) =>
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta":
"0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": true,
"item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12
00:22:47.009592", "stderr": "usermod: group 'gluster' does not exist",
"stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout":
"", "stdout_lines": []}
to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
This error can be safely ignored.
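If you would rather have the task succeed than ignore it, the failing command only needs the group to exist first; this is just a sketch of what the playbook itself tries to run, not an official fix:

    groupadd gluster            # create the group the playbook expects
    usermod -a -G gluster qemu  # the exact command from the failing task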
Are these a problem for my installation, or can I ignore them?
You can just manually run the script to disable hooks on all the nodes.
The other error you can ignore.
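For example, something like this from the deployment host would do it (host names taken from this thread; the script path is the one discussed above and may differ on your installation):

    for h in ha1.domain.it ha2.domain.it ha3.domain.it; do
        ssh root@"$h" sh /usr/share/gdeploy/scripts/disable-gluster-hooks.sh
    done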
By the way, I'm writing and documenting this process and can prepare a
tutorial if someone is interested.
Thank you again for your support: now I'll proceed with the Hosted
Engine Deployment.
Good to know that you can now start with Hosted Engine Deployment.
Hi
Simone
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users