On 11/07/2017 07:59, knarra wrote:
On 07/10/2017 07:18 PM, Simone Marchioni wrote:
> Hi Kasturi,
>
> you're right: the file
> /usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I
> updated the path in the gdeploy config file and ran Deploy again.
> The situation is much better, but the deployment failed again... :-(
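>
> For reference, the script section of the config now reads roughly like
> this (the -d/-h arguments are how I remember the wizard generating
> them, so treat them as approximate):
>
> [script1]
> action=execute
> file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h ha1.domain.it,ha2.domain.it,ha3.domain.it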
>
> Here are the errors:
>
>
>
> PLAY [gluster_servers]
> *********************************************************
>
> TASK [Run a shell script]
> ******************************************************
> fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
> fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
> fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
> to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry
>
> PLAY RECAP
> *********************************************************************
> ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
> ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
> ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
>
>
> PLAY [gluster_servers]
> *********************************************************
>
> TASK [Clean up filesystem signature]
> *******************************************
> skipping: [ha2.domain.it] => (item=/dev/md128)
> skipping: [ha1.domain.it] => (item=/dev/md128)
> skipping: [ha3.domain.it] => (item=/dev/md128)
>
> TASK [Create Physical Volume]
> **************************************************
> failed: [ha2.domain.it] (item=/dev/md128) => {"failed": true, "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n Aborted wiping of xfs.\n 1 existing signature left on the device.\n", "rc": 5}
> failed: [ha1.domain.it] (item=/dev/md128) => {"failed": true, "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n Aborted wiping of xfs.\n 1 existing signature left on the device.\n", "rc": 5}
> failed: [ha3.domain.it] (item=/dev/md128) => {"failed": true, "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n Aborted wiping of xfs.\n 1 existing signature left on the device.\n", "rc": 5}
> to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry
>
> PLAY RECAP
> *********************************************************************
> ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
> ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
> ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
>
> Ignoring errors...
>
>
>
> Any clue?
Hi,
I see that there are some signatures left on your device, which is why
the script fails and creating the physical volume also fails. Can you
try zeroing out the first 512 MB or 1 GB of the disk and then try again?

dd if=/dev/zero of=<device> bs=1M count=512

Before running the script again, try a pvcreate and see if that works.
If it works, just do a pvremove and run the script. Everything should
work fine.
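For example, something like this (using /dev/md128 from your log; adjust
to your actual device):

pvcreate /dev/md128    # should succeed once the signatures are gone
pvremove /dev/md128    # remove it again so the deploy starts clean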
Thanks
kasturi
>
> Thanks for your time.
> Simone
Hi,
removed the partition signatures with wipefs and ran the deploy again:
this time the creation of the VG and LV worked correctly. The deployment
proceeded until it hit some new errors... :-/
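For reference, the cleanup was along these lines (with /dev/md128 as the
data disk on each host):

wipefs -a /dev/md128    # wipe all filesystem/RAID signatures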
PLAY [gluster_servers]
*********************************************************
TASK [start/stop/restart/reload services]
**************************************
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers]
*********************************************************
TASK [Start firewalld if not already started]
**********************************
ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]
TASK [Add/Delete services to firewalld rules]
**********************************
failed: [ha1.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}
failed: [ha2.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}
failed: [ha3.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}
to retry, use: --limit @/tmp/tmp5Dtb2G/firewalld-service-op.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=1 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=1 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=1 changed=0 unreachable=0 failed=1
PLAY [gluster_servers]
*********************************************************
TASK [Start firewalld if not already started]
**********************************
ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]
TASK [Open/Close firewalld ports]
**********************************************
changed: [ha1.domain.it] => (item=111/tcp)
changed: [ha2.domain.it] => (item=111/tcp)
changed: [ha3.domain.it] => (item=111/tcp)
changed: [ha1.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=2049/tcp)
changed: [ha1.domain.it] => (item=54321/tcp)
changed: [ha3.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=54321/tcp)
changed: [ha1.domain.it] => (item=5900/tcp)
changed: [ha3.domain.it] => (item=54321/tcp)
changed: [ha2.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=5900-6923/tcp)
changed: [ha3.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5666/tcp)
changed: [ha2.domain.it] => (item=5666/tcp)
changed: [ha1.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5666/tcp)
changed: [ha3.domain.it] => (item=16514/tcp)
TASK [Reloads the firewall]
****************************************************
changed: [ha1.domain.it]
changed: [ha2.domain.it]
changed: [ha3.domain.it]
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=3 changed=2 unreachable=0 failed=0
ha2.domain.it : ok=3 changed=2 unreachable=0 failed=0
ha3.domain.it : ok=3 changed=2 unreachable=0 failed=0
PLAY [gluster_servers]
*********************************************************
TASK [Run a shell script]
******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmp5Dtb2G/run-script.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers]
*********************************************************
TASK [Run a command in the shell]
**********************************************
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003182", "end": "2017-07-10 18:30:51.204235", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.201053", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.007698", "end": "2017-07-10 18:30:51.391046", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.383348", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.004120", "end": "2017-07-10 18:30:51.405640", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.401520", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
to retry, use: --limit @/tmp/tmp5Dtb2G/shell_cmd.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers]
*********************************************************
TASK [start/stop/restart/reload services]
**************************************
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
Ignoring errors...
Ignoring errors...
Ignoring errors...
Ignoring errors...
Ignoring errors...
In "start/stop/restart/reload services" it complains about "Could not
find the requested service glusterd: host". Must GlusterFS be
preinstalled or not? I simply installed the RPM packages manually BEFORE
the deployment:

yum install glusterfs glusterfs-cli glusterfs-libs \
  glusterfs-client-xlators glusterfs-api glusterfs-fuse

but never configured anything.
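A quick way to check whether the service unit exists at all (assuming
systemd, as on CentOS 7.3):

systemctl list-unit-files | grep -i glusterd

If I'm reading the packages right, glusterd.service is shipped by
glusterfs-server, which is not in my install list above.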
For the firewalld problem ("ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not
among existing services Services are defined by port/tcp relationship
and named as they are in /etc/services (on most systems)") I haven't
touched anything: it's an "out of the box" installation of CentOS 7.3.
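To see which service definitions firewalld actually knows about (stock
firewall-cmd on CentOS 7):

firewall-cmd --get-services | tr ' ' '\n' | grep -i gluster

If that prints nothing, the glusterfs service definition (an XML file
under /usr/lib/firewalld/services/) simply isn't installed, which would
match the INVALID_SERVICE error above.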
I don't know if the following problems, "Run a shell script" and
"usermod: group 'gluster' does not exist", are related to these...
maybe the usermod one is.
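A quick check for the missing group (plain glibc tooling):

getent group gluster || echo "no gluster group"

I'd guess that group, too, is normally created when the gluster server
package is installed.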
Thank you again.
Simone