On 10/07/2017 13:06, Gianluca Cecchi wrote:
On Mon, Jul 10, 2017 at 12:57 PM, Simone Marchioni
<s.marchioni@lynx2000.it> wrote:
Hi Gianluca,
I recently discovered that the file:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
is missing from the system, and probably is the root cause of my
problem.
Searched with
yum provides
but I can't find any package with the script inside... any clue?
Thank you
Simone
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Hi,
but are your nodes ovirt-ng nodes, or plain CentOS 7.3 where you
manually installed the packages?
Because the original web link covered the case of ovirt-ng nodes, not
a CentOS 7.3 OS.
Perhaps you are missing a package that is installed by default on an
ovirt-ng node?
Hi Gianluca,
I used plain CentOS 7.3 where I manually installed the necessary packages.
I know the original tutorial used oVirt Node, but I thought the two
were almost the same, with the latter being an "out of the box"
solution offering the same features.
That said, I discovered the problem: there is no missing package. The
path of the script in the tutorial is wrong. The tutorial says:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
while the installed script is in:
/usr/share/gdeploy/scripts/grafton-sanity-check.sh
and is (correctly) part of the gdeploy package.
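For anyone who hits the same dead end querying the wrong full path: a quick sketch of how to confirm which package owns the script. The `command -v` guards and the `|| true` are mine, added so the sketch degrades gracefully on non-RPM systems; the wildcard form of `yum provides` also matches packages that are not installed yet.

```shell
# Confirm package ownership of the sanity-check script.
script=/usr/share/gdeploy/scripts/grafton-sanity-check.sh

# rpm -qf answers for a file that is already on disk:
if command -v rpm >/dev/null 2>&1 && [ -e "$script" ]; then
    rpm -qf "$script"          # should name the gdeploy package
fi

# The wildcard form searches repo metadata, so the exact directory
# does not need to be known (or correct) in advance:
if command -v yum >/dev/null 2>&1; then
    yum -q provides '*/grafton-sanity-check.sh' || true
fi
```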
I updated the gdeploy configuration and ran Deploy again. The
situation is much better now, but it still ends with "Deployment
Failed". Here's the output:
PLAY [gluster_servers]
*********************************************************
TASK [Run a shell script]
******************************************************
changed: [ha3.domain.it] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h
ha1.domain.it,ha2.domain.it,ha3.domain.it)
changed: [ha2.domain.it] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h
ha1.domain.it,ha2.domain.it,ha3.domain.it)
changed: [ha1.domain.it] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h
ha1.domain.it,ha2.domain.it,ha3.domain.it)
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=1 changed=1 unreachable=0 failed=0
ha2.domain.it : ok=1 changed=1 unreachable=0 failed=0
ha3.domain.it : ok=1 changed=1 unreachable=0 failed=0
PLAY [gluster_servers]
*********************************************************
TASK [Enable or disable services]
**********************************************
ok: [ha1.domain.it] => (item=chronyd)
ok: [ha3.domain.it] => (item=chronyd)
ok: [ha2.domain.it] => (item=chronyd)
PLAY RECAP
*********************************************************************
ha1.lynx2000.it : ok=1 changed=0 unreachable=0 failed=0
ha2.lynx2000.it : ok=1 changed=0 unreachable=0 failed=0
ha3.lynx2000.it : ok=1 changed=0 unreachable=0 failed=0
PLAY [gluster_servers]
*********************************************************
TASK [start/stop/restart/reload services]
**************************************
changed: [ha1.domain.it] => (item=chronyd)
changed: [ha2.domain.it] => (item=chronyd)
changed: [ha3.domain.it] => (item=chronyd)
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=1 changed=1 unreachable=0 failed=0
ha2.domain.it : ok=1 changed=1 unreachable=0 failed=0
ha3.domain.it : ok=1 changed=1 unreachable=0 failed=0
PLAY [gluster_servers]
*********************************************************
TASK [Run a command in the shell]
**********************************************
changed: [ha1.domain.it] => (item=vdsm-tool configure --force)
changed: [ha3.domain.it] => (item=vdsm-tool configure --force)
changed: [ha2.domain.it] => (item=vdsm-tool configure --force)
PLAY RECAP
*********************************************************************
ha1.lynx2000.it : ok=1 changed=1 unreachable=0 failed=0
ha2.lynx2000.it : ok=1 changed=1 unreachable=0 failed=0
ha3.lynx2000.it : ok=1 changed=1 unreachable=0 failed=0
PLAY [gluster_servers]
*********************************************************
TASK [Run a shell script]
******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg":
"The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg":
"The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg":
"The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}
to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry
PLAY RECAP
*********************************************************************
ha1.lynx2000.it : ok=0 changed=0 unreachable=0 failed=1
ha2.lynx2000.it : ok=0 changed=0 unreachable=0 failed=1
ha3.lynx2000.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers]
*********************************************************
TASK [Clean up filesystem signature]
*******************************************
skipping: [ha2.domain.it] => (item=/dev/md128)
skipping: [ha1.domain.it] => (item=/dev/md128)
skipping: [ha3.domain.it] => (item=/dev/md128)
TASK [Create Physical Volume]
**************************************************
failed: [ha2.domain.it] (item=/dev/md128) => {"failed": true,
"failed_when_result": true, "item": "/dev/md128",
"msg": "WARNING: xfs
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n
Aborted wiping of xfs.\n 1 existing signature left on the device.\n",
"rc": 5}
failed: [ha1.domain.it] (item=/dev/md128) => {"failed": true,
"failed_when_result": true, "item": "/dev/md128",
"msg": "WARNING: xfs
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n
Aborted wiping of xfs.\n 1 existing signature left on the device.\n",
"rc": 5}
failed: [ha3.domain.it] (item=/dev/md128) => {"failed": true,
"failed_when_result": true, "item": "/dev/md128",
"msg": "WARNING: xfs
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n
Aborted wiping of xfs.\n 1 existing signature left on the device.\n",
"rc": 5}
to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
Ignoring errors...
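On the "'dict object' has no attribute 'rc'" failures in the output above: that message typically means a conditional tested `result.rc` on a registered variable that never got an `rc` field, e.g. because the module failed before the command ran, or because a loop's return data sits under `result.results[*].rc` rather than `result.rc`. A hedged sketch of the failing pattern and a defensive rewrite, not gdeploy's actual playbook (task shape and variable names are assumptions based on the task title in the log):

```yaml
# Failing shape: result has no top-level rc key, so the
# failed_when expression itself blows up.
- name: Run a shell script
  script: "{{ item }}"
  with_items: "{{ scripts }}"
  register: result
  failed_when: result.rc != 0   # -> 'dict object' has no attribute 'rc'

# Defensive variant: only test rc where it is actually defined.
- name: Run a shell script
  script: "{{ item }}"
  with_items: "{{ scripts }}"
  register: result
  failed_when: result.rc is defined and result.rc != 0
```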
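As for the pvcreate failures above: LVM is refusing to overwrite the stale xfs signature it found on /dev/md128, so wiping the signature first should let that step proceed. A sketch: the real command is the commented one and it is destructive; the scratch-image round-trip below is my own harmless stand-in (using an ext2 magic, since forging a full xfs superblock is more involved).

```shell
# Real fix -- DESTROYS whatever filesystem lives on /dev/md128, so
# triple-check the device name before running it on a real host:
#
#   wipefs --all /dev/md128
#
# Harmless demonstration of the same round-trip on a scratch file:
img=$(mktemp)
truncate -s 1M "$img"
# write the ext2 superblock magic 0xEF53 at file offset 1080
printf '\123\357' | dd of="$img" bs=1 seek=1080 conv=notrunc status=none
if command -v wipefs >/dev/null 2>&1; then
    wipefs "$img"            # lists the ext2 signature
    wipefs --all "$img"      # erases it; wipefs now reports nothing
fi
rm -f "$img"
```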
I hope I'm close to the solution... ;-)

Simone