On 07/11/2017 01:32 PM, Simone Marchioni wrote:
On 11/07/2017 07:59, knarra wrote:
> On 07/10/2017 07:18 PM, Simone Marchioni wrote:
>> Hi Kasturi,
>>
>> you're right: the file
>> /usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I
>> updated the path in the gdeploy config file and ran Deploy again.
>> The situation is much better, but the deployment failed again... :-(
>>
>> Here are the errors:
>>
>>
>>
>> PLAY [gluster_servers]
>> *********************************************************
>>
>> TASK [Run a shell script]
>> ******************************************************
>> fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The
>> conditional check 'result.rc != 0' failed. The error was: error
>> while evaluating conditional (result.rc != 0): 'dict object' has no
>> attribute 'rc'"}
>> fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The
>> conditional check 'result.rc != 0' failed. The error was: error
>> while evaluating conditional (result.rc != 0): 'dict object' has no
>> attribute 'rc'"}
>> fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The
>> conditional check 'result.rc != 0' failed. The error was: error
>> while evaluating conditional (result.rc != 0): 'dict object' has no
>> attribute 'rc'"}
>> to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry
>>
>> PLAY RECAP
>> *********************************************************************
>> ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
>> ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
>> ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
>>
>>
>> PLAY [gluster_servers]
>> *********************************************************
>>
>> TASK [Clean up filesystem signature]
>> *******************************************
>> skipping: [ha2.domain.it] => (item=/dev/md128)
>> skipping: [ha1.domain.it] => (item=/dev/md128)
>> skipping: [ha3.domain.it] => (item=/dev/md128)
>>
>> TASK [Create Physical Volume]
>> **************************************************
>> failed: [ha2.domain.it] (item=/dev/md128) => {"failed": true,
>> "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING:
>> xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]:
>> [n]\n Aborted wiping of xfs.\n 1 existing signature left on the
>> device.\n", "rc": 5}
>> failed: [ha1.domain.it] (item=/dev/md128) => {"failed": true,
>> "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING:
>> xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]:
>> [n]\n Aborted wiping of xfs.\n 1 existing signature left on the
>> device.\n", "rc": 5}
>> failed: [ha3.domain.it] (item=/dev/md128) => {"failed": true,
>> "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING:
>> xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]:
>> [n]\n Aborted wiping of xfs.\n 1 existing signature left on the
>> device.\n", "rc": 5}
>> to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry
>>
>> PLAY RECAP
>> *********************************************************************
>> ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
>> ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
>> ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
>>
>> Ignoring errors...
>>
>>
>>
>> Any clue?
> Hi,
>
> I see that there are some signatures left on your device, which is
> why the script fails and creating the physical volume also fails.
> Can you try filling the first 512MB or 1GB of the disk with zeros
> and try again?
>
> dd if=/dev/zero of=<device>
>
> Before running the script again, try a pvcreate and see if that
> works. If it works, just do a pvremove and run the script.
> Everything should work fine.
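>
> For instance, a minimal sketch of that sequence (the bs/count
> values are an assumption, not part of the advice above; <device>
> stands for the RAID device, e.g. /dev/md128):
>
> # zero the first 512MB, where the stale signatures live
> dd if=/dev/zero of=<device> bs=1M count=512
> # check that LVM can initialize the device, then undo the test
> pvcreate <device> && pvremove <device>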
>
> Thanks
> kasturi
>>
>> Thanks for your time.
>> Simone
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
Hi,
I removed the partition signatures with wipefs and ran the deploy
again: this time the creation of the VG and LV worked correctly. The
deployment proceeded until it hit some new errors... :-/
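
For reference, the wipefs step was roughly this (a sketch, assuming
/dev/md128, the device from the logs above):

wipefs /dev/md128      # list the signatures still on the device
wipefs -a /dev/md128   # erase all of them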
PLAY [gluster_servers]
*********************************************************
TASK [start/stop/restart/reload services]
**************************************
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item":
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item":
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item":
"glusterd", "msg": "Could not find the requested service glusterd: host"}
to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers]
*********************************************************
TASK [Start firewalld if not already started]
**********************************
ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]
TASK [Add/Delete services to firewalld rules]
**********************************
failed: [ha1.domain.it] (item=glusterfs) => {"failed": true, "item":
"glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Services are defined by port/tcp
relationship and named as they are in /etc/services (on most systems)"}
failed: [ha2.domain.it] (item=glusterfs) => {"failed": true, "item":
"glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Services are defined by port/tcp
relationship and named as they are in /etc/services (on most systems)"}
failed: [ha3.domain.it] (item=glusterfs) => {"failed": true, "item":
"glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Services are defined by port/tcp
relationship and named as they are in /etc/services (on most systems)"}
to retry, use: --limit @/tmp/tmp5Dtb2G/firewalld-service-op.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=1 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=1 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=1 changed=0 unreachable=0 failed=1
PLAY [gluster_servers]
*********************************************************
TASK [Start firewalld if not already started]
**********************************
ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]
TASK [Open/Close firewalld ports]
**********************************************
changed: [ha1.domain.it] => (item=111/tcp)
changed: [ha2.domain.it] => (item=111/tcp)
changed: [ha3.domain.it] => (item=111/tcp)
changed: [ha1.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=2049/tcp)
changed: [ha1.domain.it] => (item=54321/tcp)
changed: [ha3.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=54321/tcp)
changed: [ha1.domain.it] => (item=5900/tcp)
changed: [ha3.domain.it] => (item=54321/tcp)
changed: [ha2.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=5900-6923/tcp)
changed: [ha3.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5666/tcp)
changed: [ha2.domain.it] => (item=5666/tcp)
changed: [ha1.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5666/tcp)
changed: [ha3.domain.it] => (item=16514/tcp)
TASK [Reloads the firewall]
****************************************************
changed: [ha1.domain.it]
changed: [ha2.domain.it]
changed: [ha3.domain.it]
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=3 changed=2 unreachable=0 failed=0
ha2.domain.it : ok=3 changed=2 unreachable=0 failed=0
ha3.domain.it : ok=3 changed=2 unreachable=0 failed=0
PLAY [gluster_servers]
*********************************************************
TASK [Run a shell script]
******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
to retry, use: --limit @/tmp/tmp5Dtb2G/run-script.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers]
*********************************************************
TASK [Run a command in the shell]
**********************************************
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) =>
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta":
"0:00:00.003182", "end": "2017-07-10 18:30:51.204235", "failed": true,
"item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10
18:30:51.201053", "stderr": "usermod: group 'gluster' does not exist",
"stderr_lines": ["usermod: group 'gluster' does not exist"],
"stdout": "", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) =>
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta":
"0:00:00.007698", "end": "2017-07-10 18:30:51.391046", "failed": true,
"item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10
18:30:51.383348", "stderr": "usermod: group 'gluster' does not exist",
"stderr_lines": ["usermod: group 'gluster' does not exist"],
"stdout": "", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) =>
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta":
"0:00:00.004120", "end": "2017-07-10 18:30:51.405640", "failed": true,
"item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10
18:30:51.401520", "stderr": "usermod: group 'gluster' does not exist",
"stderr_lines": ["usermod: group 'gluster' does not exist"],
"stdout": "", "stdout_lines": []}
to retry, use: --limit @/tmp/tmp5Dtb2G/shell_cmd.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers]
*********************************************************
TASK [start/stop/restart/reload services]
**************************************
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item":
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item":
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item":
"glusterd", "msg": "Could not find the requested service glusterd: host"}
to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
Ignoring errors...
Ignoring errors...
Ignoring errors...
Ignoring errors...
Ignoring errors...
In start/stop/restart/reload services it complains about "Could not
find the requested service glusterd: host". Must GlusterFS be
preinstalled or not? I simply installed the rpm packages manually
BEFORE the deployment:

yum install glusterfs glusterfs-cli glusterfs-libs
glusterfs-client-xlators glusterfs-api glusterfs-fuse

but never configured anything.
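
(For context: none of those packages ships the glusterd daemon
itself, so the service unit does not exist on the hosts. A minimal
check, assuming CentOS 7 -- glusterfs-server is the package that
provides it:)

rpm -q glusterfs-server || yum install glusterfs-server
systemctl status glusterd   # the unit exists once the rpm is installed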
Looks like it failed to add the 'glusterfs' service using firewalld.
Can we try again with what Gianluca suggested? Can you please install
the latest ovirt rpm, which will add all the required dependencies,
and make sure that the following packages are installed before
running gdeploy?

yum install vdsm-gluster ovirt-hosted-engine-setup gdeploy
cockpit-ovirt-dashboard
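
(A sketch of the full sequence -- the release-rpm URL is an
assumption here; use whatever the current oVirt release is:)

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
yum install vdsm-gluster ovirt-hosted-engine-setup gdeploy \
    cockpit-ovirt-dashboard
# vdsm-gluster should pull in glusterfs-server, which provides the
# glusterd unit and the 'glusterfs' firewalld service definition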
As for the firewalld problem - "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Services are defined by port/tcp
relationship and named as they are in /etc/services (on most
systems)" - I haven't touched anything... it's an "out of the box"
installation of CentOS 7.3.
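
(To see which service names firewalld actually knows, a quick check
-- assuming firewall-cmd is on the hosts; the glusterfs definition is
presumably installed as /usr/lib/firewalld/services/glusterfs.xml by
glusterfs-server:)

firewall-cmd --get-services | tr ' ' '\n' | grep -i gluster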
I don't know whether the remaining problems - "Run a shell script"
and "usermod: group 'gluster' does not exist" - are related to
these... maybe the usermod one is.
You can safely ignore this; it has nothing to do with the
configuration.
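
(If you do want the usermod step to succeed, the 'gluster' group is
presumably created when glusterfs-server is installed; a minimal
check:)

getent group gluster   # prints the group entry once it exists
usermod -a -G gluster qemu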
Thank you again.
Simone
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users