Installation of oVirt 4.1, Gluster Storage and Hosted Engine

Hi to all,

I have an old installation of oVirt 3.3 with the Engine on a separate server. I wanted to test the latest oVirt 4.1 with Gluster Storage and Hosted Engine.

I followed this tutorial:

http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/

I have 3 hosts as shown in the tutorial. I installed CentOS 7.3, the oVirt 4.1 repo and all required packages, and configured passwordless ssh as stated. Then I logged into the Cockpit web interface, selected "Hosted Engine with Gluster" and hit the Start button, configuring the parameters as shown in the tutorial.

In the last step (5), this is the generated gdeploy configuration (note: I replaced the real domain with "domain.it"):

#gdeploy configuration generated by cockpit-gluster plugin
[hosts] ha1.domain.it ha2.domain.it ha3.domain.it
[script1] action=execute ignore_script_errors=no file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it
[disktype] raid6
[diskcount] 12
[stripesize] 256
[service1] action=enable service=chronyd
[service2] action=restart service=chronyd
[shell2] action=execute command=vdsm-tool configure --force
[script3] action=execute file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh
[pv1] action=create devices=sdb ignore_pv_errors=no
[vg1] action=create vgname=gluster_vg_sdb pvname=sdb ignore_vg_errors=no
[lv1:{ha1.domain.it,ha2.domain.it}] action=create poolname=gluster_thinpool_sdb ignore_lv_errors=no vgname=gluster_vg_sdb lvtype=thinpool size=110GB poolmetadatasize=1GB
[lv2:ha3.domain.it] action=create poolname=gluster_thinpool_sdb ignore_lv_errors=no vgname=gluster_vg_sdb lvtype=thinpool size=80GB poolmetadatasize=1GB
[lv3:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_engine ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/engine lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=50GB
[lv4:ha3.domain.it] action=create lvname=gluster_lv_engine ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/engine lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv5:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_data ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/data lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv6:ha3.domain.it] action=create lvname=gluster_lv_data ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/data lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv7:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_export ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/export lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv8:ha3.domain.it] action=create lvname=gluster_lv_export ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/export lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv9:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_iso ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/iso lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv10:ha3.domain.it] action=create lvname=gluster_lv_iso ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/iso lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[selinux] yes
[service3] action=restart service=glusterd slice_setup=yes
[firewalld] action=add ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp services=glusterfs
[script2] action=execute file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh
[shell3] action=execute command=usermod -a -G gluster qemu
[volume1] action=create volname=engine transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine ignore_volume_errors=no arbiter_count=1
[volume2] action=create volname=data transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data ignore_volume_errors=no arbiter_count=1
[volume3] action=create volname=export transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/export/export,ha2.domain.it:/gluster_bricks/export/export,ha3.domain.it:/gluster_bricks/export/export ignore_volume_errors=no arbiter_count=1
[volume4] action=create volname=iso transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/iso/iso,ha2.domain.it:/gluster_bricks/iso/iso,ha3.domain.it:/gluster_bricks/iso/iso ignore_volume_errors=no arbiter_count=1

When I hit the "Deploy" button, the deployment fails with the following error:

PLAY [gluster_servers] *********************************************************
TASK [Run a shell script] ******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpcV3lam/run-script.retry
PLAY RECAP *********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1

What am I doing wrong? Maybe I need to initialize GlusterFS in some way... Which logs record the status of this deployment, so I can check the errors?

Thanks in advance!
Simone
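One way to answer the logging question above is to bypass the Cockpit wizard and drive the same run from a shell, where the Ansible output stays visible; a rough sketch, assuming the wizard-generated configuration has been saved to a local file (the filename and log path below are only examples):

# Keep a persistent copy of the Ansible output of the run in a log file.
export ANSIBLE_LOG_PATH=/var/log/gdeploy-ansible.log
# Run gdeploy directly against the saved configuration file.
gdeploy -c /root/gluster-he.conf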

On 07/Jul/2017 18:38, "Simone Marchioni" <s.marchioni@lynx2000.it> wrote:

Hi to all,
I have an old installation of oVirt 3.3 with the Engine on a separate server. I wanted to test the latest oVirt 4.1 with Gluster Storage and Hosted Engine.
I followed this tutorial:
http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
... snip ...
[script1] action=execute ignore_script_errors=no file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it
... snip ...
When I hit the "Deploy" button, the deployment fails with the following error:
PLAY [gluster_servers] *********************************************************
TASK [Run a shell script] ******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpcV3lam/run-script.retry
PLAY RECAP *********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
What am I doing wrong? Maybe I need to initialize GlusterFS in some way... Which logs record the status of this deployment, so I can check the errors?
Thanks in advance!
Simone

Gdeploy uses Ansible, which seems to fail at its first step when executing its shell module:

http://docs.ansible.com/ansible/shell_module.html

In practice, in my opinion, the shell script defined by [script1] (grafton-sanity-check.sh) above doesn't exit with a return code (rc) for some reason... Perhaps you have already done a partial step previously, or your disks already contain a label? Is sdb the correct target for your Gluster disk configuration? I would try to reinitialize the disks, such as

dd if=/dev/zero of=/dev/sdb bs=1024k count=1

ONLY if it is correct that sdb is the disk to format for the brick filesystem.

Hih,
Gianluca
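Before zeroing a device as suggested above, it can be worth checking non-destructively whether it already carries a filesystem or RAID signature; a small sketch using standard util-linux tools, with the device name taken from the example above:

# Without -a or -o, wipefs only lists the signatures it finds; it erases nothing.
wipefs /dev/sdb
# Show any filesystem labels/UUIDs lsblk knows about for the same device.
lsblk -f /dev/sdb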

On 07/07/2017 23:21, Gianluca Cecchi wrote:
On 07/Jul/2017 18:38, "Simone Marchioni" <s.marchioni@lynx2000.it> wrote:
Hi to all,
I have an old installation of oVirt 3.3 with the Engine on a separate server. I wanted to test the latest oVirt 4.1 with Gluster Storage and Hosted Engine.
I followed this tutorial:
http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
... snip ...
[script1] action=execute ignore_script_errors=no file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it
... snip ...
When I hit the "Deploy" button, the deployment fails with the following error:
PLAY [gluster_servers] *********************************************************
TASK [Run a shell script] ******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpcV3lam/run-script.retry
PLAY RECAP *********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
What am I doing wrong? Maybe I need to initialize GlusterFS in some way... Which logs record the status of this deployment, so I can check the errors?
Thanks in advance!
Simone
Gdeploy uses Ansible, which seems to fail at its first step when executing its shell module:
http://docs.ansible.com/ansible/shell_module.html
In practice, in my opinion, the shell script defined by [script1] (grafton-sanity-check.sh) above doesn't exit with a return code (rc) for some reason... Perhaps you have already done a partial step previously, or your disks already contain a label? Is sdb the correct target for your Gluster disk configuration? I would try to reinitialize the disks, such as
dd if=/dev/zero of=/dev/sdb bs=1024k count=1
ONLY if it is correct that sdb is the disk to format for the brick filesystem.
Hih,
Gianluca
Hi Gianluca,

thanks for your reply.
I didn't do any previous step: the 3 servers are freshly installed.
The disk was wrong: I had to use /dev/md128. I replaced sdb with the correct one and redeployed, but the error was exactly the same.
The disks are already initialized because I created the XFS filesystem on /dev/md128 before the deploy.

In /var/log/messages there is no error.

Hi,
Simone

On Mon, Jul 10, 2017 at 12:26 PM, Simone Marchioni <s.marchioni@lynx2000.it> wrote:
Hi Gianluca,
thanks for your reply. I didn't do any previous step: the 3 servers are freshly installed. The disk was wrong: I had to use /dev/md128. Replaced sdb with the correct one and redeployed, but the error was exactly the same. The disks are already initialized because I created the XFS filesystem on /dev/md128 before the deploy.
In /var/log/messages there is no error.
Hi, Simone
You could try to run the script from the command line of node 1, e.g. something like this:

/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h ha1.domain.it,ha2.domain.it,ha3.domain.it

and see what kind of output it gives... Just a guess.
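If the script prints nothing useful, running it under bash tracing and checking its exit status explicitly can show where it stops; a minimal sketch reusing the device and host names from the command above:

# Trace every command the sanity-check script executes, then print its exit status.
bash -x /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh \
    -d md128 -h ha1.domain.it,ha2.domain.it,ha3.domain.it
echo "exit status: $?"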

On 10/07/2017 12:48, Gianluca Cecchi wrote:
On Mon, Jul 10, 2017 at 12:26 PM, Simone Marchioni <s.marchioni@lynx2000.it> wrote:
Hi Gianluca,
thanks for your reply. I didn't do any previous step: the 3 servers are freshly installed. The disk was wrong: I had to use /dev/md128. Replaced sdb with the correct one and redeployed, but the error was exactly the same. The disks are already initialized because I created the XFS filesystem on /dev/md128 before the deploy.
In /var/log/messages there is no error.
Hi, Simone
You could try to run the script from the command line of node 1, e.g. something like this:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h ha1.domain.it,ha2.domain.it,ha3.domain.it
and see what kind of output it gives...
Just a guess
Hi Gianluca,

I recently discovered that the file:

/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

is missing from the system, and that is probably the root cause of my problem. I searched with "yum provides" but I can't find any package with the script inside... any clue?

Thank you
Simone
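A side note on the search above: yum provides matches its argument literally unless a wildcard is used, so a pattern query is usually needed when only the file name is known; a standard invocation, nothing specific to this setup:

# Ask all enabled repositories which package ships a file with this name, in any directory.
yum provides '*/grafton-sanity-check.sh'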

On Mon, Jul 10, 2017 at 12:57 PM, Simone Marchioni <s.marchioni@lynx2000.it> wrote:
Hi Gianluca,
I recently discovered that the file:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
is missing from the system, and is probably the root cause of my problem. I searched with
yum provides
but I can't find any package with the script inside... any clue?
Thank you Simone
Hi,
but are your nodes ovirt-ng nodes or plain CentOS 7.3 where you manually installed packages? Because the original web link covered the case of ovirt-ng nodes, not CentOS 7.3 OS. Possibly you are missing some package that is instead installed inside an ovirt-ng node by default?
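Independently of which flavour the nodes are, listing what the installed gdeploy package actually ships is a quick sanity check; a sketch using standard rpm queries:

# Report the installed gdeploy and ansible versions.
rpm -q gdeploy ansible
# List any sanity-check script the gdeploy package installs, wherever it lives.
rpm -ql gdeploy | grep -i sanity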

On 10/07/2017 13:06, Gianluca Cecchi wrote:
On Mon, Jul 10, 2017 at 12:57 PM, Simone Marchioni <s.marchioni@lynx2000.it> wrote:
Hi Gianluca,
I recently discovered that the file:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
is missing from the system, and is probably the root cause of my problem. I searched with
yum provides
but I can't find any package with the script inside... any clue?
Thank you Simone
Hi,
but are your nodes ovirt-ng nodes or plain CentOS 7.3 where you manually installed packages? Because the original web link covered the case of ovirt-ng nodes, not CentOS 7.3 OS. Possibly you are missing some package that is instead installed inside an ovirt-ng node by default?
Hi Gianluca,

I used plain CentOS 7.3 where I manually installed the necessary packages.
I know the original tutorial used oVirt Node, but I thought the two were almost the same, with the latter an "out of the box" solution but with the same features.

That said, I discovered the problem: there is no missing package. The path of the script is wrong. The tutorial says:

/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

while the installed script is in:

/usr/share/gdeploy/scripts/grafton-sanity-check.sh

and is (correctly) part of the gdeploy package.

I updated the gdeploy config and executed Deploy again. The situation is much better now, but it still says "Deployment Failed". Here's the output:

PLAY [gluster_servers] *********************************************************

TASK [Run a shell script] ******************************************************
changed: [ha3.domain.it] => (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h ha1.domain.it,ha2.domain.it,ha3.domain.it)
changed: [ha2.domain.it] => (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h ha1.domain.it,ha2.domain.it,ha3.domain.it)
changed: [ha1.domain.it] => (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h ha1.domain.it,ha2.domain.it,ha3.domain.it)

PLAY RECAP *********************************************************************
ha1.domain.it : ok=1 changed=1 unreachable=0 failed=0
ha2.domain.it : ok=1 changed=1 unreachable=0 failed=0
ha3.domain.it : ok=1 changed=1 unreachable=0 failed=0

PLAY [gluster_servers] *********************************************************

TASK [Enable or disable services] **********************************************
ok: [ha1.domain.it] => (item=chronyd)
ok: [ha3.domain.it] => (item=chronyd)
ok: [ha2.domain.it] => (item=chronyd)

PLAY RECAP *********************************************************************
ha1.lynx2000.it : ok=1 changed=0 unreachable=0 failed=0
ha2.lynx2000.it : ok=1 changed=0 unreachable=0 failed=0
ha3.lynx2000.it : ok=1 changed=0 unreachable=0 failed=0

PLAY [gluster_servers] *********************************************************

TASK [start/stop/restart/reload services] **************************************
changed: [ha1.domain.it] => (item=chronyd)
changed: [ha2.domain.it] => (item=chronyd)
changed: [ha3.domain.it] => (item=chronyd)

PLAY RECAP *********************************************************************
ha1.domain.it : ok=1 changed=1 unreachable=0 failed=0
ha2.domain.it : ok=1 changed=1 unreachable=0 failed=0
ha3.domain.it : ok=1 changed=1 unreachable=0 failed=0

PLAY [gluster_servers] *********************************************************

TASK [Run a command in the shell] **********************************************
changed: [ha1.domain.it] => (item=vdsm-tool configure --force)
changed: [ha3.domain.it] => (item=vdsm-tool configure --force)
changed: [ha2.domain.it] => (item=vdsm-tool configure --force)

PLAY RECAP *********************************************************************
ha1.lynx2000.it : ok=1 changed=1 unreachable=0 failed=0
ha2.lynx2000.it : ok=1 changed=1 unreachable=0 failed=0
ha3.lynx2000.it : ok=1 changed=1 unreachable=0 failed=0

PLAY [gluster_servers] *********************************************************

TASK [Run a shell script] ******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP *********************************************************************
ha1.lynx2000.it : ok=0 changed=0 unreachable=0 failed=1
ha2.lynx2000.it : ok=0 changed=0 unreachable=0 failed=1
ha3.lynx2000.it : ok=0 changed=0 unreachable=0 failed=1

PLAY [gluster_servers] *********************************************************

TASK [Clean up filesystem signature] *******************************************
skipping: [ha2.domain.it] => (item=/dev/md128)
skipping: [ha1.domain.it] => (item=/dev/md128)
skipping: [ha3.domain.it] => (item=/dev/md128)

TASK [Create Physical Volume] **************************************************
failed: [ha2.domain.it] (item=/dev/md128) => {"failed": true, "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n Aborted wiping of xfs.\n 1 existing signature left on the device.\n", "rc": 5}
failed: [ha1.domain.it] (item=/dev/md128) => {"failed": true, "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n Aborted wiping of xfs.\n 1 existing signature left on the device.\n", "rc": 5}
failed: [ha3.domain.it] (item=/dev/md128) => {"failed": true, "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n Aborted wiping of xfs.\n 1 existing signature left on the device.\n", "rc": 5}
to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP *********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1

Ignoring errors...

Hope to be near the solution... ;-)

Hi,
Simone
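The remaining failure is LVM refusing to create a physical volume over the XFS signature already present on /dev/md128 (the "Clean up filesystem signature" task was skipped). Assuming that filesystem contains nothing worth keeping, one way to clear it on each host before re-running the deployment is a sketch like this; wipefs is part of util-linux, and the device name must be double-checked first:

# DESTRUCTIVE: erases the existing signatures on /dev/md128 so pvcreate can proceed.
umount /dev/md128 2>/dev/null || true    # in case the old XFS filesystem is still mounted
wipefs -a /dev/md128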

On 07/07/2017 10:01 PM, Simone Marchioni wrote:
Hi to all,
I have an old installation of oVirt 3.3 with the Engine on a separate server. I wanted to test the latest oVirt 4.1 with Gluster Storage and Hosted Engine.
I followed this tutorial:
http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
I have 3 hosts as shown in the tutorial. I installed CentOS 7.3, the oVirt 4.1 repo and all required packages, and configured passwordless ssh as stated. Then I logged into the Cockpit web interface, selected "Hosted Engine with Gluster" and hit the Start button, configuring the parameters as shown in the tutorial.
In the last step (5), this is the generated gdeploy configuration (note: I replaced the real domain with "domain.it"):
#gdeploy configuration generated by cockpit-gluster plugin [hosts] ha1.domain.it ha2.domain.it ha3.domain.it
[script1] action=execute ignore_script_errors=no file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it
[disktype] raid6
[diskcount] 12
[stripesize] 256
[service1] action=enable service=chronyd
[service2] action=restart service=chronyd
[shell2] action=execute command=vdsm-tool configure --force
[script3] action=execute file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh
[pv1] action=create devices=sdb ignore_pv_errors=no
[vg1] action=create vgname=gluster_vg_sdb pvname=sdb ignore_vg_errors=no
[lv1:{ha1.domain.it,ha2.domain.it}] action=create poolname=gluster_thinpool_sdb ignore_lv_errors=no vgname=gluster_vg_sdb lvtype=thinpool size=110GB poolmetadatasize=1GB
[lv2:ha3.domain.it] action=create poolname=gluster_thinpool_sdb ignore_lv_errors=no vgname=gluster_vg_sdb lvtype=thinpool size=80GB poolmetadatasize=1GB
[lv3:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_engine ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/engine lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=50GB
[lv4:ha3.domain.it] action=create lvname=gluster_lv_engine ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/engine lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv5:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_data ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/data lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv6:ha3.domain.it] action=create lvname=gluster_lv_data ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/data lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv7:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_export ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/export lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv8:ha3.domain.it] action=create lvname=gluster_lv_export ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/export lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv9:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_iso ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/iso lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv10:ha3.domain.it] action=create lvname=gluster_lv_iso ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/iso lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[selinux] yes
[service3] action=restart service=glusterd slice_setup=yes
[firewalld] action=add ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs
[script2] action=execute file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh
[shell3] action=execute command=usermod -a -G gluster qemu
[volume1] action=create volname=engine transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine
ignore_volume_errors=no arbiter_count=1
[volume2] action=create volname=data transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data
ignore_volume_errors=no arbiter_count=1
[volume3] action=create volname=export transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/export/export,ha2.domain.it:/gluster_bricks/export/export,ha3.domain.it:/gluster_bricks/export/export
ignore_volume_errors=no arbiter_count=1
[volume4] action=create volname=iso transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/iso/iso,ha2.domain.it:/gluster_bricks/iso/iso,ha3.domain.it:/gluster_bricks/iso/iso
ignore_volume_errors=no arbiter_count=1
When I hit the "Deploy" button, the deployment fails with the following error:
PLAY [gluster_servers] *********************************************************
TASK [Run a shell script] ******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpcV3lam/run-script.retry
PLAY RECAP *********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
What am I doing wrong? Maybe I need to initialize GlusterFS in some way... Which logs record the status of this deployment, so I can check the errors?
Thanks in advance!
Simone
Hi Simone,

Can you please let me know what version of gdeploy and ansible is on your system? Can you check whether the path /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh exists? If not, can you edit the generated config file, change the path to "/usr/share/gdeploy/scripts/grafton-sanity-check.sh" and see if that works?

You can check the logs in /var/log/messages, or by setting log_path in the /etc/ansible/ansible.cfg file.

Thanks
kasturi.
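For reference, both checks suggested above can be done with stock commands; a minimal sketch:

# Report the installed gdeploy and ansible versions.
rpm -q gdeploy ansible
# See which of the two candidate script locations actually exists.
ls -l /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh \
      /usr/share/gdeploy/scripts/grafton-sanity-check.sh

Setting log_path = /var/log/ansible.log under the [defaults] section of /etc/ansible/ansible.cfg then makes every subsequent Ansible run leave a persistent trace there (the log file location is just an example).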

On 10/07/2017 09:08, knarra wrote:
On 07/07/2017 10:01 PM, Simone Marchioni wrote:
Hi to all,
I have an old installation of oVirt 3.3 with the Engine on a separate server. I wanted to test the latest oVirt 4.1 with Gluster Storage and Hosted Engine.
I followed this tutorial:
http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
I have 3 hosts as shown in the tutorial. I installed CentOS 7.3, the oVirt 4.1 repo and all required packages, and configured passwordless ssh as stated. Then I logged into the Cockpit web interface, selected "Hosted Engine with Gluster" and hit the Start button, configuring the parameters as shown in the tutorial.
In the last step (5), this is the generated gdeploy configuration (note: I replaced the real domain with "domain.it"):
#gdeploy configuration generated by cockpit-gluster plugin [hosts] ha1.domain.it ha2.domain.it ha3.domain.it
[script1] action=execute ignore_script_errors=no file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it
[disktype] raid6
[diskcount] 12
[stripesize] 256
[service1] action=enable service=chronyd
[service2] action=restart service=chronyd
[shell2] action=execute command=vdsm-tool configure --force
[script3] action=execute file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh
[pv1] action=create devices=sdb ignore_pv_errors=no
[vg1] action=create vgname=gluster_vg_sdb pvname=sdb ignore_vg_errors=no
[lv1:{ha1.domain.it,ha2.domain.it}] action=create poolname=gluster_thinpool_sdb ignore_lv_errors=no vgname=gluster_vg_sdb lvtype=thinpool size=110GB poolmetadatasize=1GB
[lv2:ha3.domain.it] action=create poolname=gluster_thinpool_sdb ignore_lv_errors=no vgname=gluster_vg_sdb lvtype=thinpool size=80GB poolmetadatasize=1GB
[lv3:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_engine ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/engine lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=50GB
[lv4:ha3.domain.it] action=create lvname=gluster_lv_engine ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/engine lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv5:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_data ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/data lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv6:ha3.domain.it] action=create lvname=gluster_lv_data ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/data lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv7:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_export ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/export lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv8:ha3.domain.it] action=create lvname=gluster_lv_export ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/export lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv9:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_iso ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/iso lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv10:ha3.domain.it] action=create lvname=gluster_lv_iso ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/iso lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[selinux] yes
[service3] action=restart service=glusterd slice_setup=yes
[firewalld] action=add ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs
[script2] action=execute file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh
[shell3] action=execute command=usermod -a -G gluster qemu
[volume1] action=create volname=engine transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine
ignore_volume_errors=no arbiter_count=1
[volume2] action=create volname=data transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data
ignore_volume_errors=no arbiter_count=1
[volume3] action=create volname=export transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/export/export,ha2.domain.it:/gluster_bricks/export/export,ha3.domain.it:/gluster_bricks/export/export
ignore_volume_errors=no arbiter_count=1
[volume4] action=create volname=iso transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/iso/iso,ha2.domain.it:/gluster_bricks/iso/iso,ha3.domain.it:/gluster_bricks/iso/iso
ignore_volume_errors=no arbiter_count=1
When I hit the "Deploy" button, the deployment fails with the following error:
PLAY [gluster_servers] *********************************************************
TASK [Run a shell script] ******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpcV3lam/run-script.retry
PLAY RECAP *********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
What am I doing wrong? Maybe I need to initialize GlusterFS in some way... Which logs record the status of this deployment, so I can check the errors?
Thanks in advance!
Simone
Hi Simone,
Can you please let me know what version of gdeploy and ansible is on your system? Can you check whether the path /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh exists? If not, can you edit the generated config file, change the path to "/usr/share/gdeploy/scripts/grafton-sanity-check.sh" and see if that works?
You can check the logs in /var/log/messages, or by setting log_path in the /etc/ansible/ansible.cfg file.
Thanks
kasturi.
Hi Kasturi,

thank you for your reply. Here are my versions:

gdeploy-2.0.2-7.noarch
ansible-2.3.0.0-3.el7.noarch

The file /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh is missing. For the sake of completeness, the entire ansible directory is missing under /usr/share.
In /var/log/messages there is no error message, and I have no /etc/ansible/ansible.cfg config file... I'm starting to think there are some missing pieces in my installation. I installed the following packages:

yum install ovirt-engine
yum install ovirt-hosted-engine-setup
yum install ovirt-engine-setup-plugin-live ovirt-live-artwork-gnome libgovirt ovirt-live-artwork ovirt-log-collector gdeploy cockpit-ovirt-dashboard

and their dependencies. Any idea?

Hi,
Simone
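Given the versions above, a quick way to see where the installed gdeploy package really put its helper scripts, and to confirm which package owns a found copy, is a sketch along these lines (standard find/rpm usage, nothing oVirt-specific):

# Locate any copy of the sanity-check script under /usr/share...
find /usr/share -name 'grafton-sanity-check.sh' 2>/dev/null
# ...and, if one turns up, check which installed package owns it.
rpm -qf /usr/share/gdeploy/scripts/grafton-sanity-check.sh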

On 10/07/2017 09:08, knarra wrote:
On 07/07/2017 10:01 PM, Simone Marchioni wrote:
Hi to all,
I have an old installation of oVirt 3.3 with the Engine on a separate server. I wanted to test the latest oVirt 4.1 with Gluster Storage and Hosted Engine.
I followed this tutorial:
http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
I have 3 hosts as shown in the tutorial. I installed CentOS 7.3, the oVirt 4.1 repo and all required packages, and configured passwordless ssh as stated. Then I logged into the Cockpit web interface, selected "Hosted Engine with Gluster" and hit the Start button, configuring the parameters as shown in the tutorial.
In the last step (5), this is the generated gdeploy configuration (note: I replaced the real domain with "domain.it"):
#gdeploy configuration generated by cockpit-gluster plugin [hosts] ha1.domain.it ha2.domain.it ha3.domain.it
[script1] action=execute ignore_script_errors=no file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it
[disktype] raid6
[diskcount] 12
[stripesize] 256
[service1] action=enable service=chronyd
[service2] action=restart service=chronyd
[shell2] action=execute command=vdsm-tool configure --force
[script3] action=execute file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh
[pv1] action=create devices=sdb ignore_pv_errors=no
[vg1] action=create vgname=gluster_vg_sdb pvname=sdb ignore_vg_errors=no
[lv1:{ha1.domain.it,ha2.domain.it}] action=create poolname=gluster_thinpool_sdb ignore_lv_errors=no vgname=gluster_vg_sdb lvtype=thinpool size=110GB poolmetadatasize=1GB
[lv2:ha3.domain.it] action=create poolname=gluster_thinpool_sdb ignore_lv_errors=no vgname=gluster_vg_sdb lvtype=thinpool size=80GB poolmetadatasize=1GB
[lv3:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_engine ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/engine lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=50GB
[lv4:ha3.domain.it] action=create lvname=gluster_lv_engine ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/engine lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv5:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_data ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/data lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv6:ha3.domain.it] action=create lvname=gluster_lv_data ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/data lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv7:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_export ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/export lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv8:ha3.domain.it] action=create lvname=gluster_lv_export ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/export lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv9:{ha1.domain.it,ha2.domain.it}] action=create lvname=gluster_lv_iso ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/iso lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[lv10:ha3.domain.it] action=create lvname=gluster_lv_iso ignore_lv_errors=no vgname=gluster_vg_sdb mount=/gluster_bricks/iso lvtype=thinlv poolname=gluster_thinpool_sdb virtualsize=20GB
[selinux] yes
[service3] action=restart service=glusterd slice_setup=yes
[firewalld] action=add ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs
[script2] action=execute file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh
[shell3] action=execute command=usermod -a -G gluster qemu
[volume1] action=create volname=engine transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine
ignore_volume_errors=no arbiter_count=1
[volume2] action=create volname=data transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data
ignore_volume_errors=no arbiter_count=1
[volume3] action=create volname=export transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/export/export,ha2.domain.it:/gluster_bricks/export/export,ha3.domain.it:/gluster_bricks/export/export
ignore_volume_errors=no arbiter_count=1
[volume4] action=create volname=iso transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable brick_dirs=ha1.domain.it:/gluster_bricks/iso/iso,ha2.domain.it:/gluster_bricks/iso/iso,ha3.domain.it:/gluster_bricks/iso/iso
ignore_volume_errors=no arbiter_count=1
When I hit "Deploy" button the Deployment fails with the following error:
PLAY [gluster_servers] *********************************************************
TASK [Run a shell script] ****************************************************** fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} to retry, use: --limit @/tmp/tmpcV3lam/run-script.retry
PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
What I'm doing wrong? Maybe I need to initializa glusterfs in some way... What are the logs used to log the status of this deployment so I can check the errors?
Thanks in advance! Simone _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Hi Simone,
Can you please let me know what is the version of gdeploy and ansible on your system? Can you check if the path /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh exist ? If not, can you edit the generated config file and change the path to "/usr/share/gdeploy/scripts/grafton-sanity-check.sh and see if that works ?
You can check the logs in /var/log/messages , or setting log_path in /etc/ansbile/ansible.cfg file.
Thanks
kasturi.
Hi Kasturi,
thank you for your reply. Here are my versions:
gdeploy-2.0.2-7.noarch
ansible-2.3.0.0-3.el7.noarch
The file /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh is missing. For the sake of completeness, the entire ansible directory is missing under /usr/share.
In /var/log/messages there is no error message, and I have no /etc/ansible/ansible.cfg config file...
I'm starting to think there are some missing pieces in my installation. I installed the following packages:
yum install ovirt-engine
yum install ovirt-hosted-engine-setup
yum install ovirt-engine-setup-plugin-live ovirt-live-artwork-gnome libgovirt ovirt-live-artwork ovirt-log-collector gdeploy cockpit-ovirt-dashboard
and their dependencies.
Any idea?

Hi Simone,
can you check if "/usr/share/gdeploy/scripts/grafton-sanity-check.sh" is present? If yes, can you change the path in your generated gdeploy config file and run again?
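(Just as an illustration, assuming the generated config was saved to a file such as /root/gdeploy-hc.conf - the name is hypothetical - the path can be switched in one pass:)

sed -i 's|/usr/share/ansible/gdeploy/scripts|/usr/share/gdeploy/scripts|g' /root/gdeploy-hc.conf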

Hi Kasturi,
you're right: the file /usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I updated the path in the gdeploy config file and ran Deploy again. The situation is much better, but the deployment failed again... :-(
Here are the errors:

PLAY [gluster_servers] *********************************************************

TASK [Run a shell script] ******************************************************
fatal: [ha1.lynx2000.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.lynx2000.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.lynx2000.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP *********************************************************************
ha1.lynx2000.it : ok=0 changed=0 unreachable=0 failed=1
ha2.lynx2000.it : ok=0 changed=0 unreachable=0 failed=1
ha3.lynx2000.it : ok=0 changed=0 unreachable=0 failed=1

PLAY [gluster_servers] *********************************************************

TASK [Clean up filesystem signature] *******************************************
skipping: [ha2.lynx2000.it] => (item=/dev/md128)
skipping: [ha1.lynx2000.it] => (item=/dev/md128)
skipping: [ha3.lynx2000.it] => (item=/dev/md128)

TASK [Create Physical Volume] **************************************************
failed: [ha2.lynx2000.it] (item=/dev/md128) => {"failed": true, "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n Aborted wiping of xfs.\n 1 existing signature left on the device.\n", "rc": 5}
failed: [ha1.lynx2000.it] (item=/dev/md128) => {"failed": true, "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n Aborted wiping of xfs.\n 1 existing signature left on the device.\n", "rc": 5}
failed: [ha3.lynx2000.it] (item=/dev/md128) => {"failed": true, "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n Aborted wiping of xfs.\n 1 existing signature left on the device.\n", "rc": 5}
to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP *********************************************************************
ha1.lynx2000.it : ok=0 changed=0 unreachable=0 failed=1
ha2.lynx2000.it : ok=0 changed=0 unreachable=0 failed=1
ha3.lynx2000.it : ok=0 changed=0 unreachable=0 failed=1

Ignoring errors...

Any clue?
Thanks for your time.
Simone

On 07/10/2017 07:18 PM, Simone Marchioni wrote:
[snip]
Hi,
I see that there are some signatures left on your device, due to which the script fails and creating the physical volume also fails. Can you try filling the first 512MB or 1GB of the disk with zeros and then try again?

dd if=/dev/zero of=<device>

Before running the script again, try pvcreate and see if that works. If it works, just do pvremove and run the script. Everything should work fine.

Thanks
kasturi
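(A minimal sketch of that test, assuming /dev/md128 is the affected device from the logs above and that zeroing the first 512MB is enough; this destroys whatever is on the device:)

dd if=/dev/zero of=/dev/md128 bs=1M count=512   # overwrite the first 512MB, wiping old signatures
pvcreate /dev/md128                             # should now succeed without the xfs signature prompt
pvremove /dev/md128                             # drop the test PV again so gdeploy can recreate it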

Il 11/07/2017 07:59, knarra ha scritto:
[snip]
Hi,
removed partition signatures with wipefs and ran deploy again: this time the creation of the VG and LV worked correctly. The deployment proceeded until some new errors... :-/

PLAY [gluster_servers] *********************************************************

TASK [start/stop/restart/reload services] **************************************
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry

PLAY RECAP *********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1

PLAY [gluster_servers] *********************************************************

TASK [Start firewalld if not already started] **********************************
ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]

TASK [Add/Delete services to firewalld rules] **********************************
failed: [ha1.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}
failed: [ha2.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}
failed: [ha3.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}
to retry, use: --limit @/tmp/tmp5Dtb2G/firewalld-service-op.retry

PLAY RECAP *********************************************************************
ha1.domain.it : ok=1 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=1 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=1 changed=0 unreachable=0 failed=1

PLAY [gluster_servers] *********************************************************

TASK [Start firewalld if not already started] **********************************
ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]

TASK [Open/Close firewalld ports] **********************************************
changed: [ha1.domain.it] => (item=111/tcp)
changed: [ha2.domain.it] => (item=111/tcp)
changed: [ha3.domain.it] => (item=111/tcp)
changed: [ha1.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=2049/tcp)
changed: [ha1.domain.it] => (item=54321/tcp)
changed: [ha3.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=54321/tcp)
changed: [ha1.domain.it] => (item=5900/tcp)
changed: [ha3.domain.it] => (item=54321/tcp)
changed: [ha2.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=5900-6923/tcp)
changed: [ha3.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5666/tcp)
changed: [ha2.domain.it] => (item=5666/tcp)
changed: [ha1.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5666/tcp)
changed: [ha3.domain.it] => (item=16514/tcp)

TASK [Reloads the firewall] ****************************************************
changed: [ha1.domain.it]
changed: [ha2.domain.it]
changed: [ha3.domain.it]

PLAY RECAP *********************************************************************
ha1.domain.it : ok=3 changed=2 unreachable=0 failed=0
ha2.domain.it : ok=3 changed=2 unreachable=0 failed=0
ha3.domain.it : ok=3 changed=2 unreachable=0 failed=0

PLAY [gluster_servers] *********************************************************

TASK [Run a shell script] ******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmp5Dtb2G/run-script.retry

PLAY RECAP *********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1

PLAY [gluster_servers] *********************************************************

TASK [Run a command in the shell] **********************************************
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003182", "end": "2017-07-10 18:30:51.204235", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.201053", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.007698", "end": "2017-07-10 18:30:51.391046", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.383348", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.004120", "end": "2017-07-10 18:30:51.405640", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.401520", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
to retry, use: --limit @/tmp/tmp5Dtb2G/shell_cmd.retry

PLAY RECAP *********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1

PLAY [gluster_servers] *********************************************************

TASK [start/stop/restart/reload services] **************************************
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry

PLAY RECAP *********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1

Ignoring errors...
Ignoring errors...
Ignoring errors...
Ignoring errors...
Ignoring errors...

In start/stop/restart/reload services it complains about "Could not find the requested service glusterd: host". Must GlusterFS be preinstalled or not? I simply installed the rpm packages manually BEFORE the deployment:

yum install glusterfs glusterfs-cli glusterfs-libs glusterfs-client-xlators glusterfs-api glusterfs-fuse

but never configured anything.
As for the firewalld problem "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)", I haven't touched anything... it's an "out of the box" installation of CentOS 7.3.
I don't know whether the other problems - "Run a shell script" and "usermod: group 'gluster' does not exist" - are related to these... maybe the usermod one is.
Thank you again.
Simone
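(For reference, a sketch of the wipefs step mentioned above, again assuming /dev/md128 as the brick device on each host:)

wipefs /dev/md128      # list the filesystem/RAID signatures still present on the device
wipefs -a /dev/md128   # erase all of them so pvcreate no longer prompts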

On Tue, Jul 11, 2017 at 10:02 AM, Simone Marchioni <s.marchioni@lynx2000.it> wrote:
Hi,
removed partition signatures with wipefs and ran deploy again: this time the creation of the VG and LV worked correctly. The deployment proceeded until some new errors... :-/
PLAY [gluster_servers] *********************************************************

TASK [start/stop/restart/reload services] **************************************
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
[snip]
In start/stop/restart/reload services it complains about "Could not find the requested service glusterd: host". Must GlusterFS be preinstalled or not? I simply installed the rpm packages manually BEFORE the deployment:
yum install glusterfs glusterfs-cli glusterfs-libs glusterfs-client-xlators glusterfs-api glusterfs-fuse
but never configured anything.
As for the firewalld problem "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)", I haven't touched anything... it's an "out of the box" installation of CentOS 7.3.
I don't know whether the other problems - "Run a shell script" and "usermod: group 'gluster' does not exist" - are related to these... maybe the usermod one is.
Thank you again.
Simone
For sure you have to install also the glusterfs-server package, which provides glusterd. You were probably misled by the fact that the base CentOS repositories apparently don't provide the package. But if you have ovirt-4.1-dependencies.repo enabled, you should have the Gluster 3.8 packages available, and you do have to install glusterfs-server:

[ovirt-4.1-centos-gluster38]
name=CentOS-7 - Gluster 3.8
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-3.8/
gpgcheck=1
enabled=1
gpgkey=https://raw.githubusercontent.com/CentOS-Storage-SIG/centos-release-storage-...

e.g.

[g.cecchi@ov300 ~]$ sudo yum install glusterfs-server
Loaded plugins: fastestmirror, langpacks
base                                | 3.6 kB  00:00:00
centos-opstools-release             | 2.9 kB  00:00:00
epel-util/x86_64/metalink           |  25 kB  00:00:00
extras                              | 3.4 kB  00:00:00
ovirt-4.1                           | 3.0 kB  00:00:00
ovirt-4.1-centos-gluster38          | 2.9 kB  00:00:00
ovirt-4.1-epel/x86_64/metalink      |  25 kB  00:00:00
ovirt-4.1-patternfly1-noarch-epel   | 3.0 kB  00:00:00
ovirt-centos-ovirt41                | 2.9 kB  00:00:00
rnachimu-gdeploy                    | 3.0 kB  00:00:00
updates                             | 3.4 kB  00:00:00
virtio-win-stable                   | 3.0 kB  00:00:00
Loading mirror speeds from cached hostfile
 * base: ba.mirror.garr.it
 * epel-util: epel.besthosting.ua
 * extras: ba.mirror.garr.it
 * ovirt-4.1: ftp.nluug.nl
 * ovirt-4.1-epel: epel.besthosting.ua
 * updates: ba.mirror.garr.it
Resolving Dependencies
--> Running transaction check
---> Package glusterfs-server.x86_64 0:3.8.13-1.el7 will be installed
--> Processing Dependency: glusterfs-libs(x86-64) = 3.8.13-1.el7 for package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs-fuse(x86-64) = 3.8.13-1.el7 for package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs-client-xlators(x86-64) = 3.8.13-1.el7 for package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs-cli(x86-64) = 3.8.13-1.el7 for package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs-api(x86-64) = 3.8.13-1.el7 for package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs(x86-64) = 3.8.13-1.el7 for package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: liburcu-cds.so.1()(64bit) for package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: liburcu-bp.so.1()(64bit) for package: glusterfs-server-3.8.13-1.el7.x86_64
--> Running transaction check
---> Package glusterfs.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs.x86_64 0:3.8.13-1.el7 will be an update
---> Package glusterfs-api.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-api.x86_64 0:3.8.13-1.el7 will be an update
---> Package glusterfs-cli.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-cli.x86_64 0:3.8.13-1.el7 will be an update
---> Package glusterfs-client-xlators.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-client-xlators.x86_64 0:3.8.13-1.el7 will be an update
---> Package glusterfs-fuse.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-fuse.x86_64 0:3.8.13-1.el7 will be an update
---> Package glusterfs-libs.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-libs.x86_64 0:3.8.13-1.el7 will be an update
---> Package userspace-rcu.x86_64 0:0.7.16-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
 Package                     Arch      Version         Repository                   Size
====================================================================================================
Installing:
 glusterfs-server            x86_64    3.8.13-1.el7    ovirt-4.1-centos-gluster38   1.4 M
Installing for dependencies:
 userspace-rcu               x86_64    0.7.16-1.el7    ovirt-4.1-centos-gluster38    72 k
Updating for dependencies:
 glusterfs                   x86_64    3.8.13-1.el7    ovirt-4.1-centos-gluster38   512 k
 glusterfs-api               x86_64    3.8.13-1.el7    ovirt-4.1-centos-gluster38    90 k
 glusterfs-cli               x86_64    3.8.13-1.el7    ovirt-4.1-centos-gluster38   184 k
 glusterfs-client-xlators    x86_64    3.8.13-1.el7    ovirt-4.1-centos-gluster38   783 k
 glusterfs-fuse              x86_64    3.8.13-1.el7    ovirt-4.1-centos-gluster38   134 k
 glusterfs-libs              x86_64    3.8.13-1.el7    ovirt-4.1-centos-gluster38   380 k

Transaction Summary
====================================================================================================
Install  1 Package  (+1 Dependent package)
Upgrade             ( 6 Dependent packages)

Total download size: 3.5 M
Is this ok [y/d/N]: Exiting on user command
Your transaction was saved, rerun it with: yum load-transaction /tmp/yum_save_tx.2017-07-11.10-31.hzKtbi.yumtx
[g.cecchi@ov300 ~]$

HIH,
Gianluca
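(A short sketch of the same fix on one host, assuming the Gluster 3.8 repository above is enabled; the gdeploy run's [service3] section then restarts glusterd during deployment:)

yum install -y glusterfs-server   # provides the glusterd daemon and its systemd unit
systemctl enable glusterd         # make glusterd start at boot
systemctl status glusterd         # confirm the unit is now known to systemd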

On 07/11/2017 01:32 PM, Simone Marchioni wrote:
[snip]
In start/stop/restart/reload services it complains about "Could not find the requested service glusterd: host". Must GlusterFS be preinstalled or not? I simply installed the rpm packages manually BEFORE the deployment:
yum install glusterfs glusterfs-cli glusterfs-libs glusterfs-client-xlators glusterfs-api glusterfs-fuse
but never configured anything.

Looks like it failed to add the 'glusterfs' service using firewalld; can we try again with what Gianluca suggested?
Can you please install the latest ovirt rpm, which will add all the required dependencies, and make sure that the following packages are installed before running gdeploy?

yum install vdsm-gluster ovirt-hosted-engine-setup gdeploy cockpit-ovirt-dashboard
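(An illustrative check, under the assumption that the glusterfs-server package ships the firewalld 'glusterfs' service definition; worth verifying on your build:)

firewall-cmd --get-services | tr ' ' '\n' | grep gluster   # 'glusterfs' should be listed once the definition is installed
ls /usr/lib/firewalld/services/glusterfs.xml               # the service definition file firewalld loads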
As for the firewalld problem "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)", I haven't touched anything... it's an "out of the box" installation of CentOS 7.3.
I don't know whether the other problems - "Run a shell script" and "usermod: group 'gluster' does not exist" - are related to these... maybe the usermod one is.

You can safely ignore this; it has nothing to do with the configuration.

On 11/07/2017 11:23, knarra wrote:
On 07/11/2017 01:32 PM, Simone Marchioni wrote:
On 11/07/2017 07:59, knarra wrote:
Hi,
I removed the partition signatures with wipefs and ran the deploy again: this time the creation of the VG and LV worked correctly. The deployment proceeded until it hit some new errors... :-/
PLAY [gluster_servers] *********************************************************
TASK [start/stop/restart/reload services] ************************************** failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"} failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"} failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"} to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers] *********************************************************
TASK [Start firewalld if not already started] ********************************** ok: [ha1.domain.it] ok: [ha2.domain.it] ok: [ha3.domain.it]
TASK [Add/Delete services to firewalld rules] ********************************** failed: [ha1.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"} failed: [ha2.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"} failed: [ha3.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"} to retry, use: --limit @/tmp/tmp5Dtb2G/firewalld-service-op.retry
PLAY RECAP ********************************************************************* ha1.domain.it : ok=1 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=1 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=1 changed=0 unreachable=0 failed=1
PLAY [gluster_servers] *********************************************************
TASK [Start firewalld if not already started] ********************************** ok: [ha1.domain.it] ok: [ha2.domain.it] ok: [ha3.domain.it]
TASK [Open/Close firewalld ports] ********************************************** changed: [ha1.domain.it] => (item=111/tcp) changed: [ha2.domain.it] => (item=111/tcp) changed: [ha3.domain.it] => (item=111/tcp) changed: [ha1.domain.it] => (item=2049/tcp) changed: [ha2.domain.it] => (item=2049/tcp) changed: [ha1.domain.it] => (item=54321/tcp) changed: [ha3.domain.it] => (item=2049/tcp) changed: [ha2.domain.it] => (item=54321/tcp) changed: [ha1.domain.it] => (item=5900/tcp) changed: [ha3.domain.it] => (item=54321/tcp) changed: [ha2.domain.it] => (item=5900/tcp) changed: [ha1.domain.it] => (item=5900-6923/tcp) changed: [ha2.domain.it] => (item=5900-6923/tcp) changed: [ha3.domain.it] => (item=5900/tcp) changed: [ha1.domain.it] => (item=5666/tcp) changed: [ha2.domain.it] => (item=5666/tcp) changed: [ha1.domain.it] => (item=16514/tcp) changed: [ha3.domain.it] => (item=5900-6923/tcp) changed: [ha2.domain.it] => (item=16514/tcp) changed: [ha3.domain.it] => (item=5666/tcp) changed: [ha3.domain.it] => (item=16514/tcp)
TASK [Reloads the firewall] **************************************************** changed: [ha1.domain.it] changed: [ha2.domain.it] changed: [ha3.domain.it]
PLAY RECAP ********************************************************************* ha1.domain.it : ok=3 changed=2 unreachable=0 failed=0 ha2.domain.it : ok=3 changed=2 unreachable=0 failed=0 ha3.domain.it : ok=3 changed=2 unreachable=0 failed=0
PLAY [gluster_servers] *********************************************************
TASK [Run a shell script] ****************************************************** fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} to retry, use: --limit @/tmp/tmp5Dtb2G/run-script.retry
PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers] *********************************************************
TASK [Run a command in the shell] ********************************************** failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003182", "end": "2017-07-10 18:30:51.204235", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.201053", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []} failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.007698", "end": "2017-07-10 18:30:51.391046", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.383348", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []} failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.004120", "end": "2017-07-10 18:30:51.405640", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.401520", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []} to retry, use: --limit @/tmp/tmp5Dtb2G/shell_cmd.retry
PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers] *********************************************************
TASK [start/stop/restart/reload services] ************************************** failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"} failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"} failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"} to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
Ignoring errors... Ignoring errors... Ignoring errors... Ignoring errors... Ignoring errors...
In start/stop/restart/reload services it complains about "Could not find the requested service glusterd: host". Must GlusterFS be preinstalled, or not? I simply installed the rpm packages manually BEFORE the deployment:
yum install glusterfs glusterfs-cli glusterfs-libs glusterfs-client-xlators glusterfs-api glusterfs-fuse
but never configured anything.
Looks like it failed to add the 'glusterfs' service using firewalld; can we try again with what Gianluca suggested?
Can you please install the latest ovirt rpm, which will add all the required dependencies, and make sure that the following packages are installed before running gdeploy?
yum install vdsm-gluster ovirt-hosted-engine-setup gdeploy cockpit-ovirt-dashboard
As for the firewalld problem "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)": I haven't touched anything... it's an "out of the box" installation of CentOS 7.3.
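(A hedged aside, not something stated explicitly in this thread: as far as I know the 'glusterfs' firewalld service is defined by an XML file shipped with the Gluster server package rather than by base CentOS, so a host with only the client packages installed has no such service. A quick manual check, using nothing beyond standard firewall-cmd calls, would be something like:

firewall-cmd --get-services | grep glusterfs
firewall-cmd --permanent --add-service=glusterfs && firewall-cmd --reload

The first command shows whether firewalld knows the service at all; the second only makes sense once it does.)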
I don't know if the following problems - "Run a shell script" and "usermod: group 'gluster' does not exist" - are related to these... maybe the usermod one is.
You can safely ignore this; it has nothing to do with the configuration.
Thank you again. Simone
Hi,

I reply here to both Gianluca and Kasturi.

Gianluca: I had ovirt-4.1-dependencies.repo enabled, and the gluster 3.8 packages, but glusterfs-server was missing from my "yum install" command, so I added glusterfs-server to my installation.

Kasturi: the packages ovirt-hosted-engine-setup, gdeploy and cockpit-ovirt-dashboard were already installed and updated. vdsm-gluster was missing, so I added it to my installation.

I reran the deployment and IT WORKED! I can read the message "Succesfully deployed Gluster" with the blue button "Continue to Hosted Engine Deployment". There's a minor glitch in the window: the green "V" in the circle is missing, as if there's a missing image (or a wrong path, as I had to remove "ansible" from the grafton-sanity-check.sh path...).

Although the deployment worked, and the firewalld and glusterfs errors are gone, a couple of errors remain:

AFTER VG/LV CREATION, START/STOP/RELOAD/GLUSTER AND FIREWALLD HANDLING:

PLAY [gluster_servers] *********************************************************

TASK [Run a shell script] ****************************************************** fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry

PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1

PLAY [gluster_servers] *********************************************************

TASK [Run a command in the shell] ********************************************** failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:46.833688", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []} failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:46.892317", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []} failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:47.009592", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []} to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry

PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1

Are these a problem for my installation, or can I ignore them?

By the way, I'm writing and documenting this process and can prepare a tutorial if someone is interested.

Thank you again for your support: now I'll proceed with the Hosted Engine Deployment.

Hi
Simone

On 07/12/2017 01:43 PM, Simone Marchioni wrote:
On 11/07/2017 11:23, knarra wrote:
On 07/11/2017 01:32 PM, Simone Marchioni wrote:
On 11/07/2017 07:59, knarra wrote:
Hi,
I removed the partition signatures with wipefs and ran the deploy again: this time the creation of the VG and LV worked correctly. The deployment proceeded until it hit some new errors... :-/
PLAY [gluster_servers] *********************************************************
TASK [start/stop/restart/reload services] ************************************** failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"} failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"} failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"} to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers] *********************************************************
TASK [Start firewalld if not already started] ********************************** ok: [ha1.domain.it] ok: [ha2.domain.it] ok: [ha3.domain.it]
TASK [Add/Delete services to firewalld rules] ********************************** failed: [ha1.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"} failed: [ha2.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"} failed: [ha3.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"} to retry, use: --limit @/tmp/tmp5Dtb2G/firewalld-service-op.retry
PLAY RECAP ********************************************************************* ha1.domain.it : ok=1 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=1 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=1 changed=0 unreachable=0 failed=1
PLAY [gluster_servers] *********************************************************
TASK [Start firewalld if not already started] ********************************** ok: [ha1.domain.it] ok: [ha2.domain.it] ok: [ha3.domain.it]
TASK [Open/Close firewalld ports] ********************************************** changed: [ha1.domain.it] => (item=111/tcp) changed: [ha2.domain.it] => (item=111/tcp) changed: [ha3.domain.it] => (item=111/tcp) changed: [ha1.domain.it] => (item=2049/tcp) changed: [ha2.domain.it] => (item=2049/tcp) changed: [ha1.domain.it] => (item=54321/tcp) changed: [ha3.domain.it] => (item=2049/tcp) changed: [ha2.domain.it] => (item=54321/tcp) changed: [ha1.domain.it] => (item=5900/tcp) changed: [ha3.domain.it] => (item=54321/tcp) changed: [ha2.domain.it] => (item=5900/tcp) changed: [ha1.domain.it] => (item=5900-6923/tcp) changed: [ha2.domain.it] => (item=5900-6923/tcp) changed: [ha3.domain.it] => (item=5900/tcp) changed: [ha1.domain.it] => (item=5666/tcp) changed: [ha2.domain.it] => (item=5666/tcp) changed: [ha1.domain.it] => (item=16514/tcp) changed: [ha3.domain.it] => (item=5900-6923/tcp) changed: [ha2.domain.it] => (item=16514/tcp) changed: [ha3.domain.it] => (item=5666/tcp) changed: [ha3.domain.it] => (item=16514/tcp)
TASK [Reloads the firewall] **************************************************** changed: [ha1.domain.it] changed: [ha2.domain.it] changed: [ha3.domain.it]
PLAY RECAP ********************************************************************* ha1.domain.it : ok=3 changed=2 unreachable=0 failed=0 ha2.domain.it : ok=3 changed=2 unreachable=0 failed=0 ha3.domain.it : ok=3 changed=2 unreachable=0 failed=0
PLAY [gluster_servers] *********************************************************
TASK [Run a shell script] ****************************************************** fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} to retry, use: --limit @/tmp/tmp5Dtb2G/run-script.retry
PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers] *********************************************************
TASK [Run a command in the shell] ********************************************** failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003182", "end": "2017-07-10 18:30:51.204235", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.201053", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []} failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.007698", "end": "2017-07-10 18:30:51.391046", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.383348", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []} failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.004120", "end": "2017-07-10 18:30:51.405640", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.401520", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []} to retry, use: --limit @/tmp/tmp5Dtb2G/shell_cmd.retry
PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers] *********************************************************
TASK [start/stop/restart/reload services] ************************************** failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"} failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"} failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"} to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
Ignoring errors... Ignoring errors... Ignoring errors... Ignoring errors... Ignoring errors...
In start/stop/restart/reload services it complains about "Could not find the requested service glusterd: host". Must GlusterFS be preinstalled, or not? I simply installed the rpm packages manually BEFORE the deployment:
yum install glusterfs glusterfs-cli glusterfs-libs glusterfs-client-xlators glusterfs-api glusterfs-fuse
but never configured anything.
Looks like it failed to add the 'glusterfs' service using firewalld; can we try again with what Gianluca suggested?
Can you please install the latest ovirt rpm, which will add all the required dependencies, and make sure that the following packages are installed before running gdeploy?
yum install vdsm-gluster ovirt-hosted-engine-setup gdeploy cockpit-ovirt-dashboard
As for the firewalld problem "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)": I haven't touched anything... it's an "out of the box" installation of CentOS 7.3.
I don't know if the following problems - "Run a shell script" and "usermod: group 'gluster' does not exist" - are related to these... maybe the usermod one is.
You can safely ignore this; it has nothing to do with the configuration.
Thank you again. Simone
Hi,
I reply here to both Gianluca and Kasturi.
Gianluca: I had ovirt-4.1-dependencies.repo enabled, and the gluster 3.8 packages, but glusterfs-server was missing from my "yum install" command, so I added glusterfs-server to my installation.
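(For anyone hitting the same "Could not find the requested service glusterd" failure, the fix described here boils down to installing the server package and checking that the unit now exists - a sketch only, using the package and service names mentioned in this thread:

yum install glusterfs-server
systemctl status glusterd

There should be no need to start glusterd by hand: restarting it is exactly what the start/stop/restart/reload services task above does once the service exists.)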
Kasturi: the packages ovirt-hosted-engine-setup, gdeploy and cockpit-ovirt-dashboard were already installed and updated. vdsm-gluster was missing, so I added it to my installation. Okay, cool.
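(A quick way to double-check the whole set on each host before rerunning gdeploy - just a sketch using the package names from this thread:

rpm -q glusterfs-server vdsm-gluster ovirt-hosted-engine-setup gdeploy cockpit-ovirt-dashboard

Anything reported as "not installed" needs a yum install before retrying the deployment.)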
I reran the deployment and IT WORKED! I can read the message "Succesfully deployed Gluster" with the blue button "Continue to Hosted Engine Deployment". There's a minor glitch in the window: the green "V" in the circle is missing, as if there's a missing image (or a wrong path, as I had to remove "ansible" from the grafton-sanity-check.sh path...). There is a bug for this and it will be fixed soon. Here is the bug id for your reference: https://bugzilla.redhat.com/show_bug.cgi?id=1462082
Although the deployment worked, and the firewalld and glusterfs errors are gone, a couple of errors remain:
AFTER VG/LV CREATION, START/STOP/RELOAD/GLUSTER AND FIREWALLD HANDLING:
PLAY [gluster_servers] *********************************************************
TASK [Run a shell script] ****************************************************** fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"} to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry Maybe you missed changing the path of the script "/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh"; that is why this step fails.
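(Presumably the same "remove 'ansible' from the path" fix applies here too, i.e. the generated configuration should point at

file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh

That exact location is only an assumption based on the grafton-sanity-check.sh fix mentioned earlier; where the script really lives can be checked with something like rpm -ql gdeploy | grep disable-gluster-hooks, assuming it is shipped by the gdeploy package.)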
PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers] *********************************************************
TASK [Run a command in the shell] ********************************************** failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:46.833688", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []} failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:46.892317", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []} failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:47.009592", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []} to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry
PLAY RECAP ********************************************************************* ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1 This error can be safely ignored.
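(If you preferred to make that usermod failure go away instead of ignoring it, creating the group it expects first should be enough - purely a sketch, and per the advice above not actually required:

groupadd gluster
usermod -a -G gluster qemu

Either way it has no effect on the rest of the configuration.)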
Are these a problem for my installation, or can I ignore them? You can just manually run the script to disable hooks on all the nodes. The other error you can ignore.
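(Concretely, running it manually would be something along these lines on each of the three hosts, with the path adjusted to wherever the script actually lives - the line below assumes the corrected location discussed above:

sh /usr/share/gdeploy/scripts/disable-gluster-hooks.sh
)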
By the way, I'm writing and documenting this process and can prepare a tutorial if someone is interested.
Thank you again for your support: now I'll proceed with the Hosted Engine Deployment. Good to know that you can now start with Hosted Engine Deployment.
Hi Simone
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
<a class="moz-txt-link-freetext" href="https://bugzilla.redhat.com/show_bug.cgi?id=1462082">https://bugzilla.redhat.com/show_bug.cgi?id=1462082</a><br> <blockquote cite="mid:dd09789b-5986-c70b-c573-94ac394f6829@lynx2000.it" type="cite"> <br> Although the deployment worked, and the firewalld and gluterfs errors are gone, a couple of errors remains:<br> <br> <br> AFTER VG/LV CREATION, START/STOP/RELOAD/GLUSTER AND FIREWALLD HANDLING:<br> <br> PLAY [gluster_servers] *********************************************************<br> <br> TASK [Run a shell script] ******************************************************<br> fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}<br> fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}<br> fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}<br> to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry<br> </blockquote> May be you missed to change the path of the script "/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh" . That is why this failure.<br> <blockquote cite="mid:dd09789b-5986-c70b-c573-94ac394f6829@lynx2000.it" type="cite"> <br> PLAY RECAP *********************************************************************<br> ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 <br> ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1 <br> ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1 <br> <br> <br> PLAY [gluster_servers] *********************************************************<br> <br> TASK [Run a command in the shell] **********************************************<br> failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:46.833688", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}<br> failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:46.892317", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}<br> failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:47.009592", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}<br> to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry<br> <br> PLAY RECAP *********************************************************************<br> ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1 <br> ha2.domain.it : 
ok=0 changed=0 unreachable=0 failed=1 <br> ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1 <br> </blockquote> This error can be safely ignored.<br> <blockquote cite="mid:dd09789b-5986-c70b-c573-94ac394f6829@lynx2000.it" type="cite"> <br> <br> These are a problem for my installation or can I ignore them?<br> </blockquote> You can just manually run the script to disable hooks on all the nodes. Other error you can ignore.<br> <blockquote cite="mid:dd09789b-5986-c70b-c573-94ac394f6829@lynx2000.it" type="cite"> <br> By the way, I'm writing and documenting this process and can prepare a tutorial if someone is interested.<br> <br> Thank you again for your support: now I'll proceed with the Hosted Engine Deployment.<br> </blockquote> Good to know that you can now start with Hosted Engine Deployment.<br> <blockquote cite="mid:dd09789b-5986-c70b-c573-94ac394f6829@lynx2000.it" type="cite"> <br> Hi<br> Simone<br> <br> <fieldset class="mimeAttachmentHeader"></fieldset> <br> <pre wrap="">_______________________________________________ Users mailing list <a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a> <a class="moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a> </pre> </blockquote> <p><br> </p> </body> </html> --------------6DA47FB3BE6F27A8C74D0A79--

On 12/07/2017 10:59, knarra wrote:
On 07/12/2017 01:43 PM, Simone Marchioni wrote:
On 11/07/2017 11:23, knarra wrote:
Hi,
I reply here to both Gianluca and Kasturi.
Gianluca: I had ovirt-4.1-dependencies.repo enabled, and the gluster 3.8 packages, but glusterfs-server was missing from my "yum install" command, so I added glusterfs-server to my installation.
Kasturi: the packages ovirt-hosted-engine-setup, gdeploy and cockpit-ovirt-dashboard were already installed and up to date. vdsm-gluster was missing, so I added it to my installation.
Okay, cool.
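Putting the two notes together, a single command along these lines should cover the packages mentioned here (assuming the oVirt 4.1 repositories are already configured on all three hosts):

  yum install glusterfs-server vdsm-gluster ovirt-hosted-engine-setup gdeploy cockpit-ovirt-dashboard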
:-)
I reran the deployment and IT WORKED! I can see the message "Succesfully deployed Gluster" with the blue button "Continue to Hosted Engine Deployment". There's a minor glitch in the window: the green check mark in the circle is missing, as if an image is missing (or its path is wrong, since I also had to remove "ansible" from the grafton-sanity-check.sh path...)
There is a bug for this and it will be fixed soon. Here is the bug id for your reference. https://bugzilla.redhat.com/show_bug.cgi?id=1462082
Ok, thank you!
Although the deployment worked, and the firewalld and glusterfs errors are gone, a couple of errors remain:
AFTER VG/LV CREATION, GLUSTERD START/STOP/RESTART/RELOAD AND FIREWALLD HANDLING:
PLAY [gluster_servers] *********************************************************
TASK [Run a shell script] ******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry
Maybe you missed changing the path of the script "/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh". That is why it failed.
You're right: I changed the path and now it's ok.
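For reference, the corrected entry presumably points at /usr/share/gdeploy/scripts/disable-gluster-hooks.sh, i.e. the same "remove 'ansible' from the path" adjustment applied earlier to grafton-sanity-check.sh. The exact location can vary with the gdeploy package version, so it is worth verifying on one of the hosts, for example with:

  rpm -ql gdeploy | grep disable-gluster-hooks.sh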
PLAY RECAP *********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
PLAY [gluster_servers] *********************************************************
TASK [Run a command in the shell] **********************************************
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:46.833688", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:46.892317", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:47.009592", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry
PLAY RECAP *********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
This error can be safely ignored.
Ok
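Not required (as noted above, this error is ignorable), but if one wanted that step to pass on a re-run, creating the group by hand before the usermod should be enough. A minimal sketch; the usermod command is the one gdeploy itself runs, while creating the 'gluster' group manually is my own assumption, not something the thread confirms:

  groupadd gluster
  usermod -a -G gluster qemu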
Are these a problem for my installation, or can I ignore them?
You can just manually run the script to disable hooks on all the nodes. The other error you can ignore.
Done.
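A minimal sketch of that manual run, assuming passwordless root ssh between the hosts and the corrected (non-"ansible") script path mentioned above:

  for h in ha1.domain.it ha2.domain.it ha3.domain.it; do
      ssh root@$h "sh /usr/share/gdeploy/scripts/disable-gluster-hooks.sh"
  done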
By the way, I'm writing and documenting this process and can prepare a tutorial if someone is interested.
Thank you again for your support: now I'll proceed with the Hosted Engine Deployment.
Good to know that you can now start with Hosted Engine Deployment.
I started the Hosted Engine Deployment, but I have a different problem now.

As the installer asked, I specified some parameters, in particular a pingable gateway address: I specified the host1 gateway. Proceeding with the installer, it asks for the Engine VM IP address (DHCP or static). I selected static and specified an IP address, but the IP *IS NOT* in the same subnet as host1. The VMs' IP addresses are all on a different subnet. The installer shows a red message:

The Engine VM (aa.bb.cc.dd/SM) and the default gateway (ww.xx.yy.zz) will not be in the same IP subnet. Static routing configuration are not supported on automatic VM configuration.

I'm starting to think that BOTH the host IPs and the VM IPs MUST BE ON THE SAME SUBNET.

Is this a requirement, or is there a way to deal with this configuration? Is it related only to "automatic VM configuration" or to oVirt in general? Once the oVirt Engine is installed, can I have VMs on a different subnet?

On 07/13/2017 04:30 PM, Simone Marchioni wrote:
Hi Simone,

here are some requirements:
1) The Engine VM and the managed hosts should be on the management subnet.
2) The managed hosts should be able to resolve the Engine VM hostname and vice versa. The engine appliance can be configured with either DHCP or static addressing.
3) If you go the DHCP way, we recommend having an FQDN assigned to a MAC address before proceeding with the hosted-engine deployment.
4) If you use static addressing, choose the IP address/netmask/gateway based on the host ones, and add entries to /etc/hosts if you do not have local DNS (see the sketch below).

Hope this helps!!!

Thanks,
kasturi
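As a footnote to requirement 4, a minimal /etc/hosts sketch, to be replicated on each host and on the engine VM; the addresses and the engine hostname below are placeholders, not values from this thread:

  192.168.1.11   ha1.domain.it
  192.168.1.12   ha2.domain.it
  192.168.1.13   ha3.domain.it
  192.168.1.20   engine.domain.it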
Participants (3):
- Gianluca Cecchi
- knarra
- Simone Marchioni