On 07/07/2017 10:01 PM, Simone Marchioni wrote:
> Hi to all,
>
> I have an old installation of oVirt 3.3 with the Engine on a separate
> server. I wanted to test the latest oVirt 4.1 with Gluster Storage and
> Hosted Engine.
>
> I followed this tutorial:
>
>
> http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-glust...
>
> I have 3 hosts, as shown in the tutorial. I installed CentOS 7.3, the
> oVirt 4.1 repo, and all required packages, and configured passwordless
> SSH as stated.
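> For reference, the key setup was roughly the following (a sketch,
> assuming root access and the host names used below):
>
>     ssh-keygen -t rsa
>     for h in ha1.domain.it ha2.domain.it ha3.domain.it; do
>         ssh-copy-id root@$h
>     done
>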
> Then I logged into the Cockpit web interface, selected "Hosted Engine
> with Gluster", and hit the Start button. I configured the parameters
> as shown in the tutorial.
>
> In the last step (5), this is the generated gdeploy configuration
> (note: I replaced the real domain with "domain.it"):
>
> #gdeploy configuration generated by cockpit-gluster plugin
> [hosts]
> ha1.domain.it
> ha2.domain.it
> ha3.domain.it
>
> [script1]
> action=execute
> ignore_script_errors=no
> file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it
>
> [disktype]
> raid6
>
> [diskcount]
> 12
>
> [stripesize]
> 256
>
> [service1]
> action=enable
> service=chronyd
>
> [service2]
> action=restart
> service=chronyd
>
> [shell2]
> action=execute
> command=vdsm-tool configure --force
>
> [script3]
> action=execute
> file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh
>
> [pv1]
> action=create
> devices=sdb
> ignore_pv_errors=no
>
> [vg1]
> action=create
> vgname=gluster_vg_sdb
> pvname=sdb
> ignore_vg_errors=no
>
> [lv1:{ha1.domain.it,ha2.domain.it}]
> action=create
> poolname=gluster_thinpool_sdb
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> lvtype=thinpool
> size=110GB
> poolmetadatasize=1GB
>
> [lv2:ha3.domain.it]
> action=create
> poolname=gluster_thinpool_sdb
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> lvtype=thinpool
> size=80GB
> poolmetadatasize=1GB
>
> [lv3:{ha1.domain.it,ha2.domain.it}]
> action=create
> lvname=gluster_lv_engine
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/engine
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=50GB
>
> [lv4:ha3.domain.it]
> action=create
> lvname=gluster_lv_engine
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/engine
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=20GB
>
> [lv5:{ha1.domain.it,ha2.domain.it}]
> action=create
> lvname=gluster_lv_data
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/data
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=20GB
>
> [lv6:ha3.domain.it]
> action=create
> lvname=gluster_lv_data
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/data
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=20GB
>
> [lv7:{ha1.domain.it,ha2.domain.it}]
> action=create
> lvname=gluster_lv_export
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/export
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=20GB
>
> [lv8:ha3.domain.it]
> action=create
> lvname=gluster_lv_export
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/export
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=20GB
>
> [lv9:{ha1.domain.it,ha2.domain.it}]
> action=create
> lvname=gluster_lv_iso
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/iso
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=20GB
>
> [lv10:ha3.domain.it]
> action=create
> lvname=gluster_lv_iso
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/iso
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=20GB
>
> [selinux]
> yes
>
> [service3]
> action=restart
> service=glusterd
> slice_setup=yes
>
> [firewalld]
> action=add
> ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
> services=glusterfs
>
> [script2]
> action=execute
> file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh
>
> [shell3]
> action=execute
> command=usermod -a -G gluster qemu
>
> [volume1]
> action=create
> volname=engine
> transport=tcp
> replica=yes
> replica_count=3
> key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
> value=virt,36,36,30,on,off,enable
> brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine
> ignore_volume_errors=no
> arbiter_count=1
>
> [volume2]
> action=create
> volname=data
> transport=tcp
> replica=yes
> replica_count=3
> key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
> value=virt,36,36,30,on,off,enable
> brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data
> ignore_volume_errors=no
> arbiter_count=1
>
> [volume3]
> action=create
> volname=export
> transport=tcp
> replica=yes
> replica_count=3
> key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
> value=virt,36,36,30,on,off,enable
> brick_dirs=ha1.domain.it:/gluster_bricks/export/export,ha2.domain.it:/gluster_bricks/export/export,ha3.domain.it:/gluster_bricks/export/export
> ignore_volume_errors=no
> arbiter_count=1
>
> [volume4]
> action=create
> volname=iso
> transport=tcp
> replica=yes
> replica_count=3
> key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
> value=virt,36,36,30,on,off,enable
> brick_dirs=ha1.domain.it:/gluster_bricks/iso/iso,ha2.domain.it:/gluster_bricks/iso/iso,ha3.domain.it:/gluster_bricks/iso/iso
> ignore_volume_errors=no
> arbiter_count=1
>
> When I hit the "Deploy" button, the deployment fails with the
> following error:
>
> PLAY [gluster_servers]
> *********************************************************
>
> TASK [Run a shell script]
> ******************************************************
> fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The
> conditional check 'result.rc != 0' failed. The error was: error while
> evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
> fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The
> conditional check 'result.rc != 0' failed. The error was: error while
> evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
> fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The
> conditional check 'result.rc != 0' failed. The error was: error while
> evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
> to retry, use: --limit @/tmp/tmpcV3lam/run-script.retry
>
> PLAY RECAP
> *********************************************************************
> ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
> ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
> ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
>
> What am I doing wrong? Maybe I need to initialize glusterfs in some
> way...
> Which logs record the status of this deployment, so I can check for
> errors?
>
> Thanks in advance!
> Simone
Hi Simone,
Can you please let me know the versions of gdeploy and ansible on
your system? Can you check whether the path
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh exists? If
not, can you edit the generated config file, change the path to
"/usr/share/gdeploy/scripts/grafton-sanity-check.sh", and see if that
works? The "'dict object' has no attribute 'rc'" error usually means
the script task failed before it could produce a return code, which
would be consistent with a missing script file.
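For example, a quick check on each host (a sketch):

    rpm -q gdeploy ansible
    ls -l /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
    ls -l /usr/share/gdeploy/scripts/grafton-sanity-check.sh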
You can check the logs in /var/log/messages, or set log_path in the
/etc/ansible/ansible.cfg file.
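For example, to write Ansible output to a dedicated log (a sketch,
assuming the default config location):

    # append a log_path setting (adjust if a [defaults] section
    # already exists in the file)
    mkdir -p /etc/ansible
    printf '[defaults]\nlog_path = /var/log/ansible.log\n' >> /etc/ansible/ansible.cfg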
Thanks
Kasturi.
Hi Kasturi,
thank you for your reply. Here are my versions:
gdeploy-2.0.2-7.noarch
ansible-2.3.0.0-3.el7.noarch
The file /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh is
missing. For the sake of completeness, the entire ansible directory is
missing under /usr/share.
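For what it's worth, one way to check which installed package should
ship that script (a sketch):

    # search the file lists of all installed packages
    rpm -qal | grep -i grafton
    # or ask yum which package provides it
    yum provides '*/grafton-sanity-check.sh'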
In /var/log/messages there is no error message, and I have no
/etc/ansible/ansible.cfg config file...
I'm starting to think there are some missing pieces in my installation.
I installed the following packages:
yum install ovirt-engine
yum install ovirt-hosted-engine-setup
yum install ovirt-engine-setup-plugin-live ovirt-live-artwork-gnome \
    libgovirt ovirt-live-artwork ovirt-log-collector gdeploy \
    cockpit-ovirt-dashboard
and their dependencies.
Any ideas?
Simone