[ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster, existing disk sdb not found or filtered, deployment fails

Oliver Dietzel O.Dietzel at rto.de
Wed May 3 13:34:14 UTC 2017


Worked as described in the blog entry:

"The installer will ask if you want to configure your host and cluster for Gluster. Again, click "Next" to proceed. In some of my tests, the installer failed at this point, with an error message of Failed to execute stage 'Environment customization'. When I encountered this, I clicked "Restart Setup", repeated the above steps, and was able to proceed normally."

https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/

After a setup restart this error went away.

The last error I had was in the last stage of the setup process.

The installer was unable to connect to the engine VM after creation and timed out after about the 10th retry.
Postinstall failed, but the VM itself was up and running and I was able to SSH into it and connect to the web UI.
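
For anyone hitting the same timeout, the engine VM can also be checked directly from the host with something like this (assuming the usual hosted-engine CLI; <engine-fqdn> stands in for whatever name you gave the engine VM):

    hosted-engine --vm-status      # the HA agent's view of the engine VM
    ssh root@<engine-fqdn>         # the VM itself was reachable this way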

Log attached.

Thx Oli

-----Original Message-----
From: knarra [mailto:knarra at redhat.com]
Sent: Wednesday, May 3, 2017 13:49
To: Oliver Dietzel <O.Dietzel at rto.de>; 'users at ovirt.org' <users at ovirt.org>
Subject: Re: AW: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster, existing disk sdb not found or filtered, deployment fails

On 05/03/2017 03:53 PM, Oliver Dietzel wrote:
> Actual size as displayed by lsblk is 558,9G. A combined size of 530 GB worked (engine 100, data 180, vmstore 250), but only without thin provisioning. Deployment failed with thin provisioning enabled, but worked with fixed sizes.
>
> Now I am stuck in hosted engine deployment (having answered yes when asked whether to set up Gluster) with the error:
>
> "Failed to execute stage 'Environment customization': Invalid value provided to 'ENABLE_HC_GLUSTER_SERVICE'"
Hi,

     Can you provide the exact question and the answer you gave to it that caused your setup to fail?

Thanks
kasturi
>
>
>
> -----Original Message-----
> From: knarra [mailto:knarra at redhat.com]
> Sent: Wednesday, May 3, 2017 12:16
> To: Oliver Dietzel <O.Dietzel at rto.de>; 'users at ovirt.org' <users at ovirt.org>
> Subject: Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster, existing disk sdb not found or filtered, deployment fails
>
> On 05/03/2017 03:20 PM, Oliver Dietzel wrote:
>> Thanks a lot, I already got rid of the multipaths.
>>
>> Now, 5 tries later, I am trying to understand how the disk space calculation works.
>>
>> I already understand that the combined GByte limit for my drive sdb is around 530.
>>    
>>> sdb                                                        8:16   0 558,9G  0 disk
>> Now the thin pool creation is tripping me up! :)
>>
>> (I do a vgremove gluster_vg_sdb on all hosts and reboot all three hosts between retries)
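>>
>> A fuller cleanup between retries could look roughly like this (a sketch; wipefs -a clears every signature on the disk, so only point it at the intended Gluster brick disk):
>>
>>     vgremove -y gluster_vg_sdb     # remove the volume group and its LVs
>>     pvremove /dev/sdb              # drop the LVM physical volume label
>>     wipefs -a /dev/sdb             # wipe any leftover filesystem/LVM signatures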
>>
>> TASK [Create LVs with specified size for the VGs] ******************************
>> failed: [hv1.iw.rto.de] (item={u'lv': u'gluster_thinpool_sdb', u'size': u'530GB', u'extent': u'100%FREE', u'vg': u'gluster_vg_sdb'}) => {"failed": true, "item": {"extent": "100%FREE", "lv": "gluster_thinpool_sdb", "size": "530GB", "vg": "gluster_vg_sdb"}, "msg": "  Insufficient suitable allocatable extents for logical volume gluster_thinpool_sdb: 135680 more required\n", "rc": 5}
> I think you should input the size as 500GB if your actual disk size is 530GB.
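>
> For example (a rough sketch, assuming the VG is still called gluster_vg_sdb as in your log), you could check how much the VG can actually hold before picking LV sizes, and leave some headroom for the thin pool metadata:
>
>     vgs gluster_vg_sdb -o vg_name,vg_size,vg_free,vg_free_count
>     lvcreate -l 90%FREE --thinpool gluster_thinpool_sdb gluster_vg_sdb
>
> The thin pool's metadata also takes extents from the same VG, so the data portion of the pool has to stay somewhat below the free space reported by vgs.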
>> -----Original Message-----
>> From: knarra [mailto:knarra at redhat.com]
>> Sent: Wednesday, May 3, 2017 11:17
>> To: Oliver Dietzel <O.Dietzel at rto.de>; 'users at ovirt.org' <users at ovirt.org>
>> Subject: Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster, existing disk sdb not found or filtered, deployment fails
>>
>> On 05/03/2017 02:06 PM, Oliver Dietzel wrote:
>>> Hi,
>>>
>>> I am trying to set up a 3-node Gluster-based oVirt cluster, following this guide:
>>> https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/
>>>
>>> The oVirt nodes were installed with all disks available in the system; the installer was limited to use only /dev/sda (both sda and sdb are HPE logical volumes on a P410 RAID controller).
>>>
>>>
>>> GlusterFS deployment fails in the last step before engine setup:
>>>
>>> PLAY RECAP *********************************************************************
>>> hv1.iw              : ok=1    changed=1    unreachable=0    failed=0
>>> hv2.iw              : ok=1    changed=1    unreachable=0    failed=0
>>> hv3.iw              : ok=1    changed=1    unreachable=0    failed=0
>>>
>>>
>>> PLAY [gluster_servers] *********************************************************
>>>
>>> TASK [Clean up filesystem signature] *******************************************
>>> skipping: [hv1.iw] => (item=/dev/sdb)
>>> skipping: [hv2.iw] => (item=/dev/sdb)
>>> skipping: [hv3.iw] => (item=/dev/sdb)
>>>
>>> TASK [Create Physical Volume] **************************************************
>>> failed: [hv3.iw] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb not found (or ignored by filtering).\n", "rc": 5}
>>> failed: [hv1.iw] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb not found (or ignored by filtering).\n", "rc": 5}
>>> failed: [hv2.iw] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb not found (or ignored by filtering).\n", "rc": 5}
>>>
>>>
>>> But /dev/sdb exists on all hosts:
>>>
>>> [root at hv1 ~]# lsblk
>>> NAME                                                     MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
>>> sda                                                        8:0    0 136,7G  0 disk
>>> ...
>>> sdb                                                        8:16   0 558,9G  0 disk
>>> └─3600508b1001c350a2c1748b0a0ff3860                      253:5    0 558,9G  0 mpath
>>>
>>>
>>>
>>> What can I do to make this work?
>>>
>>> ___________________________________________________________
>>> Oliver Dietzel
>>>
>>>
>> Hi Oliver,
>>
>>        I see that multipath is enabled on your system; it creates an mpath device on top of sdb, and once that exists the system identifies sdb as "3600508b1001c350a2c1748b0a0ff3860". To make this work, perform the steps below.
>>
>> 1) multipath -l (to list all multipath devices)
>>
>> 2) Blacklist devices in /etc/multipath.conf by adding the lines below. If you do not see this file, run the command 'vdsm-tool configure --force', which will create it for you.
>>
>> blacklist {
>>            devnode "*"
>> }
>>
>> 3) multipath -F, which flushes all the mpath devices.
>>
>> 4) Restart multipathd by running the command 'systemctl restart multipathd'
>>
>> This should solve the issue.
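>>
>> Put together, the sequence on each host looks roughly like this (a sketch; blacklisting devnode "*" disables multipath for every device, so only do this if none of your storage actually needs multipath):
>>
>>     multipath -l                      # list the current multipath devices
>>     vdsm-tool configure --force       # only if /etc/multipath.conf is missing
>>     vi /etc/multipath.conf            # add the blacklist { devnode "*" } section
>>     multipath -F                      # flush the existing mpath maps
>>     systemctl restart multipathd      # restart the daemon so the blacklist takes effect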
>>
>> Thanks
>> kasturi.
>>
>

-------------- next part --------------
An embedded message was scrubbed...
From: Oliver Dietzel <O.Dietzel at rto.de>
Subject: ovirt engine setup log
Date: Wed, 3 May 2017 10:54:25 +0000
Size: 16055
URL: <http://lists.ovirt.org/pipermail/users/attachments/20170503/a0ff67ca/attachment-0001.mht>

