[ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

knarra knarra at redhat.com
Tue Jul 11 05:59:15 UTC 2017


On 07/10/2017 07:18 PM, Simone Marchioni wrote:
> On 10/07/2017 13:49, knarra wrote:
>> On 07/10/2017 04:18 PM, Simone Marchioni wrote:
>>> On 10/07/2017 09:08, knarra wrote:
>>>> Hi Simone,
>>>>
>>>>     Can you please let me know the versions of gdeploy and 
>>>> ansible on your system? Can you check whether the path 
>>>> /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh exists? 
>>>> If not, can you edit the generated config file, change the path to 
>>>> "/usr/share/gdeploy/scripts/grafton-sanity-check.sh", and see if 
>>>> that works?
>>>>
>>>>     You can check the logs in /var/log/messages, or set 
>>>> log_path in the /etc/ansible/ansible.cfg file.
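>>>>
>>>>     For example (just a minimal sketch; the log file location here 
>>>> is an illustrative choice), you could add this to 
>>>> /etc/ansible/ansible.cfg:
>>>>
>>>>     [defaults]
>>>>     log_path = /var/log/ansible.log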
>>>>
>>>> Thanks
>>>>
>>>> kasturi.
>>>>
>>>
>>> Hi Kasturi,
>>>
>>> thank you for your reply. Here are my versions:
>>>
>>> gdeploy-2.0.2-7.noarch
>>> ansible-2.3.0.0-3.el7.noarch
>>>
>>> The file /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh 
>>> is missing. For the sake of completeness, the entire 
>>> /usr/share/ansible directory is missing as well.
>>>
>>> In /var/log/messages there is no error message, and I have no 
>>> /etc/ansible/ansible.cfg config file...
>>>
>>> I'm starting to think there are some missing pieces in my 
>>> installation. I installed the following packages:
>>>
>>> yum install ovirt-engine
>>> yum install ovirt-hosted-engine-setup
>>> yum install ovirt-engine-setup-plugin-live ovirt-live-artwork-gnome 
>>> libgovirt ovirt-live-artwork ovirt-log-collector gdeploy 
>>> cockpit-ovirt-dashboard
>>>
>>> along with their dependencies.
>>>
>>> Any idea?
>> Can you check if "/usr/share/gdeploy/scripts/grafton-sanity-check.sh" 
>> is present? If yes, can you change the path in your generated 
>> gdeploy config file and run the deployment again?
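>>
>> A quick way to check is something like "rpm -ql gdeploy | grep 
>> grafton-sanity-check". As a rough sketch (the section name and the 
>> script arguments below are placeholders -- keep whatever your 
>> generated file already contains and change only the path), the 
>> relevant part of the gdeploy config would look like:
>>
>>     [script1]
>>     action=execute
>>     file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh <existing arguments>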
>
> Hi Kasturi,
>
> you're right: the file 
> /usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I 
> updated the path in the gdeploy config file and ran Deploy again.
> The situation is much better, but the deployment failed again... :-(
>
> Here are the errors:
>
>
>
> PLAY [gluster_servers] 
> *********************************************************
>
> TASK [Run a shell script] 
> ******************************************************
> fatal: [ha1.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
> conditional check 'result.rc != 0' failed. The error was: error while 
> evaluating conditional (result.rc != 0): 'dict object' has no 
> attribute 'rc'"}
> fatal: [ha3.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
> conditional check 'result.rc != 0' failed. The error was: error while 
> evaluating conditional (result.rc != 0): 'dict object' has no 
> attribute 'rc'"}
> fatal: [ha2.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
> conditional check 'result.rc != 0' failed. The error was: error while 
> evaluating conditional (result.rc != 0): 'dict object' has no 
> attribute 'rc'"}
>     to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry
>
> PLAY RECAP 
> *********************************************************************
> ha1.lynx2000.it            : ok=0    changed=0    unreachable=0 failed=1
> ha2.lynx2000.it            : ok=0    changed=0    unreachable=0 failed=1
> ha3.lynx2000.it            : ok=0    changed=0    unreachable=0 failed=1
>
>
> PLAY [gluster_servers] 
> *********************************************************
>
> TASK [Clean up filesystem signature] 
> *******************************************
> skipping: [ha2.lynx2000.it] => (item=/dev/md128)
> skipping: [ha1.lynx2000.it] => (item=/dev/md128)
> skipping: [ha3.lynx2000.it] => (item=/dev/md128)
>
> TASK [Create Physical Volume] 
> **************************************************
> failed: [ha2.lynx2000.it] (item=/dev/md128) => {"failed": true, 
> "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
> signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
> Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
> "rc": 5}
> failed: [ha1.lynx2000.it] (item=/dev/md128) => {"failed": true, 
> "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
> signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
> Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
> "rc": 5}
> failed: [ha3.lynx2000.it] (item=/dev/md128) => {"failed": true, 
> "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
> signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
> Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
> "rc": 5}
>     to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry
>
> PLAY RECAP 
> *********************************************************************
> ha1.lynx2000.it            : ok=0    changed=0    unreachable=0 failed=1
> ha2.lynx2000.it            : ok=0    changed=0    unreachable=0 failed=1
> ha3.lynx2000.it            : ok=0    changed=0    unreachable=0 failed=1
>
> Ignoring errors...
>
>
>
> Any clue?
Hi,

     I see that there are some filesystem signatures left on your device, 
which is why the script fails and the physical volume creation fails as 
well. Can you try zero-filling the first 512 MB or 1 GB of the disk and 
then try again?

     dd if=/dev/zero of=<device> bs=1M count=512

     Before running the script again, try running pvcreate manually and 
see if that works. If it works, just run pvremove and then run the 
script. Everything should work fine.
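
     For example, a rough sketch of the whole sequence (using /dev/md128 
from the log above; substitute your actual device):

         dd if=/dev/zero of=/dev/md128 bs=1M count=512
         pvcreate /dev/md128
         pvremove /dev/md128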

Thanks
kasturi
>
> Thanks for your time.
> Simone
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



