<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jul 25, 2016 at 4:02 PM, Simone Tiraboschi <span dir="ltr"><<a href="mailto:stirabos@redhat.com" target="_blank">stirabos@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="">On Mon, Jul 25, 2016 at 2:38 PM, <<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>> wrote:<br>
> telnet <a href="http://kom-ad01-vm31.holding.com" rel="noreferrer" target="_blank">kom-ad01-vm31.holding.com</a> 54321 = successful connection<br>
><br>
> telnet ::1 54321<br>
> Trying ::1...<br>
> telnet: connect to address ::1: Network is unreachable<br>
><br>
> (IPv6 is disabled on my server)<br>
<br>
</span>Ok, so the issue seems to be here: now by default vdsm binds on :: and its<br>
heuristic can end up using IPv6.<br>
See this one: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1350883" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1350883</a><br>
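A minimal sketch (not the actual vdsm code) of why that heuristic bites on an IPv6-disabled host: a client such as vdsClient walks the getaddrinfo() results in order, the way socket.create_connection() does, so when the name resolves to an IPv6 address like ::1 on a host where IPv6 is disabled, the connect fails with [Errno 101] Network is unreachable, matching the traceback below.<br>

```python
import socket

def connect_first_reachable(host, port, timeout=5.0):
    """Mimic socket.create_connection(): try each getaddrinfo() result
    in order and raise the last error if all of them fail.  On a host
    with IPv6 disabled, an AF_INET6 candidate such as ::1 fails with
    [Errno 101] Network is unreachable before any IPv4 fallback."""
    last_err = None
    for family, socktype, proto, _canonname, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(sockaddr)
            return sock
        except OSError as err:
            sock.close()
            last_err = err
    raise last_err if last_err else OSError("getaddrinfo returned no results")
```

If getaddrinfo() returns only ::1 (or ::1 last) and IPv6 is disabled, the raised error is exactly the ENETUNREACH seen in the vdsClient traceback.<br>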
<br>
Can you please try enabling IPv6 on your host, or setting<br>
management_ip = 0.0.0.0<br>
under the [address] section in /etc/vdsm/vdsm.conf<br>
and then restarting vdsm?<br>
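For reference, the workaround amounts to this fragment (the [address] section name is taken from the message above; verify it against the comments in your own vdsm.conf), followed by restarting the vdsmd service:<br>

```ini
# /etc/vdsm/vdsm.conf
# Force vdsm to bind on the IPv4 wildcard address instead of ::
[address]
management_ip = 0.0.0.0
```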
<div class=""><div class="h5"><br></div></div></blockquote><div><br></div><div>Could you please also add the 'ip addr' output? Just interested to see how IPv6 was<br></div><div>disabled on the host.<br></div><div>It will be even better if you could apply the patch (<a href="https://gerrit.ovirt.org/#/c/60020">https://gerrit.ovirt.org/#/c/60020</a>) and check.<br><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class=""><div class="h5">
<br>
<br>
<br>
> 25.07.2016, 15:35, "Simone Tiraboschi" <<a href="mailto:stirabos@redhat.com">stirabos@redhat.com</a>>:<br>
>> On Mon, Jul 25, 2016 at 2:15 PM, <<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>> wrote:<br>
>>> # ss -plutn<br>
>>><br>
>>> Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port<br>
>>><br>
>>> udp UNCONN 0 0 *:111 *:* users:(("rpcbind",pid=827,fd=6))<br>
>>> udp UNCONN 0 0 *:161 *:* users:(("snmpd",pid=1609,fd=6))<br>
>>> udp UNCONN 0 0 <a href="http://127.0.0.1:323" rel="noreferrer" target="_blank">127.0.0.1:323</a> *:* users:(("chronyd",pid=795,fd=1))<br>
>>> udp UNCONN 0 0 *:959 *:* users:(("rpcbind",pid=827,fd=7))<br>
>>> udp UNCONN 0 0 <a href="http://127.0.0.1:25375" rel="noreferrer" target="_blank">127.0.0.1:25375</a> *:* users:(("snmpd",pid=1609,fd=8))<br>
>>> udp UNCONN 0 0 <a href="http://127.0.0.1:25376" rel="noreferrer" target="_blank">127.0.0.1:25376</a> *:* users:(("cmapeerd",pid=2056,fd=5))<br>
>>> udp UNCONN 0 0 <a href="http://127.0.0.1:25393" rel="noreferrer" target="_blank">127.0.0.1:25393</a> *:* users:(("cmanicd",pid=2278,fd=3))<br>
>>> udp UNCONN 0 0 :::111 :::* users:(("rpcbind",pid=827,fd=9))<br>
>>> udp UNCONN 0 0 :::959 :::* users:(("rpcbind",pid=827,fd=10))<br>
>>> tcp LISTEN 0 128 *:2381 *:* users:(("hpsmhd",pid=3903,fd=4),("hpsmhd",pid=3901,fd=4),("hpsmhd",pid=3900,fd=4),("hpsmhd",pid=3899,fd=4),("hpsmhd",pid=3898,fd=4),("hpsmhd",pid=3893,fd=4))<br>
>>> tcp LISTEN 0 128 *:111 *:* users:(("rpcbind",pid=827,fd=8))<br>
>>> tcp LISTEN 0 5 *:54322 *:* users:(("ovirt-imageio-d",pid=753,fd=3))<br>
>>> tcp LISTEN 0 128 *:22 *:* users:(("sshd",pid=1606,fd=3))<br>
>>> tcp LISTEN 0 100 <a href="http://127.0.0.1:25" rel="noreferrer" target="_blank">127.0.0.1:25</a> *:* users:(("master",pid=1948,fd=13))<br>
>>> tcp LISTEN 0 128 *:2301 *:* users:(("hpsmhd",pid=3903,fd=3),("hpsmhd",pid=3901,fd=3),("hpsmhd",pid=3900,fd=3),("hpsmhd",pid=3899,fd=3),("hpsmhd",pid=3898,fd=3),("hpsmhd",pid=3893,fd=3))<br>
>>> tcp LISTEN 0 30 *:16514 *:* users:(("libvirtd",pid=10688,fd=13))<br>
>>> tcp LISTEN 0 128 <a href="http://127.0.0.1:199" rel="noreferrer" target="_blank">127.0.0.1:199</a> *:* users:(("snmpd",pid=1609,fd=9))<br>
>>> tcp LISTEN 0 128 :::111 :::* users:(("rpcbind",pid=827,fd=11))<br>
>>> tcp LISTEN 0 5 :::54321 :::* users:(("vdsm",pid=11077,fd=23))<br>
>><br>
>> vdsm is properly bound on IPv6.<br>
>><br>
>> Can you please check if you can connect to vdsm with:<br>
>> telnet <a href="http://kom-ad01-vm31.holding.com" rel="noreferrer" target="_blank">kom-ad01-vm31.holding.com</a> 54321<br>
>> and with<br>
>> telnet ::1 54321<br>
>> ?<br>
>><br>
>>> tcp LISTEN 0 30 :::16514 :::* users:(("libvirtd",pid=10688,fd=14))<br>
>>><br>
>>> 25.07.2016, 15:11, "Simone Tiraboschi" <<a href="mailto:stirabos@redhat.com">stirabos@redhat.com</a>>:<br>
>>>> On Mon, Jul 25, 2016 at 2:03 PM, <<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>> wrote:<br>
>>>>> Yes.<br>
>>>>><br>
>>>>> # ping $(python -c 'import socket; print(socket.gethostname())')<br>
>>>>><br>
>>>>> PING <a href="http://KOM-AD01-VM31.holding.com" rel="noreferrer" target="_blank">KOM-AD01-VM31.holding.com</a> (10.1.0.231) 56(84) bytes of data.<br>
>>>>> 64 bytes from <a href="http://kom-ad01-vm31.holding.com" rel="noreferrer" target="_blank">kom-ad01-vm31.holding.com</a> (10.1.0.231): icmp_seq=1 ttl=64 time=0.030 ms<br>
>>>>> 64 bytes from <a href="http://kom-ad01-vm31.holding.com" rel="noreferrer" target="_blank">kom-ad01-vm31.holding.com</a> (10.1.0.231): icmp_seq=2 ttl=64 time=0.015 ms<br>
>>>>> 64 bytes from <a href="http://kom-ad01-vm31.holding.com" rel="noreferrer" target="_blank">kom-ad01-vm31.holding.com</a> (10.1.0.231): icmp_seq=3 ttl=64 time=0.011 ms<br>
>>>>> ^C<br>
>>>>> --- <a href="http://KOM-AD01-VM31.holding.com" rel="noreferrer" target="_blank">KOM-AD01-VM31.holding.com</a> ping statistics ---<br>
>>>>> 3 packets transmitted, 3 received, 0% packet loss, time 2001ms<br>
>>>>> rtt min/avg/max/mdev = 0.011/0.018/0.030/0.009 ms<br>
>>>>><br>
>>>>> but...<br>
>>>><br>
>>>> and the output of<br>
>>>> ss -plutn<br>
>>>><br>
>>>>> # vdsClient -s 0 getVdsCaps<br>
>>>>><br>
>>>>> Traceback (most recent call last):<br>
>>>>> File "/usr/share/vdsm/vdsClient.py", line 2980, in <module><br>
>>>>> code, message = commands[command][0](commandArgs)<br>
>>>>> File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap<br>
>>>>> return self.ExecAndExit(self.s.getVdsCapabilities())<br>
>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__<br>
>>>>> return self.__send(self.__name, args)<br>
>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request<br>
>>>>> verbose=self.__verbose<br>
>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request<br>
>>>>> return self.single_request(host, handler, request_body, verbose)<br>
>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request<br>
>>>>> self.send_content(h, request_body)<br>
>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content<br>
>>>>> connection.endheaders(request_body)<br>
>>>>> File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders<br>
>>>>> self._send_output(message_body)<br>
>>>>> File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output<br>
>>>>> self.send(msg)<br>
>>>>> File "/usr/lib64/python2.7/httplib.py", line 797, in send<br>
>>>>> self.connect()<br>
>>>>> File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect<br>
>>>>> sock = socket.create_connection((self.host, self.port), self.timeout)<br>
>>>>> File "/usr/lib64/python2.7/socket.py", line 571, in create_connection<br>
>>>>> raise err<br>
>>>>> error: [Errno 101] Network is unreachable<br>
>>>>><br>
>>>>> 25.07.2016, 14:58, "<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>" <<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>>:<br>
>>>>>> Ok.<br>
>>>>>><br>
>>>>>> 1) I stopped and disabled the service NetworkManager<br>
>>>>>> # systemctl stop NetworkManager<br>
>>>>>> # systemctl disable NetworkManager<br>
>>>>>><br>
>>>>>> 2) I filled in /etc/resolv.conf, removed DNS1 and DNS2, and added PEERDNS=no in the ifcfg-* file.<br>
>>>>>><br>
>>>>>> 3) Reboot server<br>
>>>>>><br>
>>>>>> 4) Tried to deploy oVirt HE 4 and got the same error<br>
>>>>>><br>
>>>>>> [ INFO ] Creating Volume Group<br>
>>>>>> [ INFO ] Creating Storage Domain<br>
>>>>>> [ INFO ] Creating Storage Pool<br>
>>>>>> [ INFO ] Connecting Storage Pool<br>
>>>>>> [ INFO ] Verifying sanlock lockspace initialization<br>
>>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable<br>
>>>>>> [ INFO ] Stage: Clean up<br>
>>>>>> [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160725143420.conf'<br>
>>>>>> [ INFO ] Stage: Pre-termination<br>
>>>>>> [ INFO ] Stage: Termination<br>
>>>>>> [ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy<br>
>>>>>> Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160725142534-t81kwf.log<br>
>>>>>><br>
>>>>>> Any further ideas?<br>
>>>>>><br>
>>>>>> 25.07.2016, 13:06, "Simone Tiraboschi" <<a href="mailto:stirabos@redhat.com">stirabos@redhat.com</a>>:<br>
>>>>>>> On Mon, Jul 25, 2016 at 11:54 AM, <<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>> wrote:<br>
>>>>>>>> What am I supposed to do for successfully deploy ovirt 4 ?<br>
>>>>>>>> Any ideas ?<br>
>>>>>>><br>
>>>>>>> Can you please try to explicitly configure your DNS with a nameserver<br>
>>>>>>> entry under /etc/resolv.conf, remove DNS1 and DNS2, and set PEERDNS=no for<br>
>>>>>>> the interface you are going to use?<br>
>>>>>>><br>
>>>>>>>> 25.07.2016, 12:47, "<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>" <<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>>:<br>
>>>>>>>>> "Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on?"<br>
>>>>>>>>><br>
>>>>>>>>> Yes, of course.<br>
>>>>>>>>><br>
>>>>>>>>> 25.07.2016, 12:27, "Simone Tiraboschi" <<a href="mailto:stirabos@redhat.com">stirabos@redhat.com</a>>:<br>
>>>>>>>>>> On Mon, Jul 25, 2016 at 10:56 AM, Piotr Kliczewski<br>
>>>>>>>>>> <<a href="mailto:piotr.kliczewski@gmail.com">piotr.kliczewski@gmail.com</a>> wrote:<br>
>>>>>>>>>>> This could be the issue here as well as for BZ #1358530<br>
>>>>>>>>>>><br>
>>>>>>>>>>> On Mon, Jul 25, 2016 at 10:53 AM, <<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>> wrote:<br>
>>>>>>>>>>>> Could this be due to the fact that the oVirt installer has changed the network configuration files (ifcfg-*, resolv.conf)?<br>
>>>>>>>>>>>> After the error in the oVirt installation process, I see that my DNS server entries have disappeared from resolv.conf, and now the server is unable to resolve names.<br>
>>>>>>>>>><br>
>>>>>>>>>> So it could be related to <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1160423" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1160423</a><br>
>>>>>>>>>><br>
>>>>>>>>>> Aleksey, was your DNS configured with DNS1 and DNS2 just on the<br>
>>>>>>>>>> interface you used to create the management bridge on?<br>
>>>>>>>>>> Can you please try the workaround described here<br>
>>>>>>>>>> <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1160423#c25" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1160423#c25</a> ?<br>
>>>>>>>>>><br>
>>>>>>>>>>>> 25.07.2016, 11:26, "Simone Tiraboschi" <<a href="mailto:stirabos@redhat.com">stirabos@redhat.com</a>>:<br>
>>>>>>>>>>>>> On Mon, Jul 25, 2016 at 10:22 AM, <<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>> wrote:<br>
>>>>>>>>>>>>>> # vdsClient -s 0 getVdsCaps<br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>> Traceback (most recent call last):<br>
>>>>>>>>>>>>>> File "/usr/share/vdsm/vdsClient.py", line 2980, in <module><br>
>>>>>>>>>>>>>> code, message = commands[command][0](commandArgs)<br>
>>>>>>>>>>>>>> File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap<br>
>>>>>>>>>>>>>> return self.ExecAndExit(self.s.getVdsCapabilities())<br>
>>>>>>>>>>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__<br>
>>>>>>>>>>>>>> return self.__send(self.__name, args)<br>
>>>>>>>>>>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request<br>
>>>>>>>>>>>>>> verbose=self.__verbose<br>
>>>>>>>>>>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request<br>
>>>>>>>>>>>>>> return self.single_request(host, handler, request_body, verbose)<br>
>>>>>>>>>>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request<br>
>>>>>>>>>>>>>> self.send_content(h, request_body)<br>
>>>>>>>>>>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content<br>
>>>>>>>>>>>>>> connection.endheaders(request_body)<br>
>>>>>>>>>>>>>> File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders<br>
>>>>>>>>>>>>>> self._send_output(message_body)<br>
>>>>>>>>>>>>>> File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output<br>
>>>>>>>>>>>>>> self.send(msg)<br>
>>>>>>>>>>>>>> File "/usr/lib64/python2.7/httplib.py", line 797, in send<br>
>>>>>>>>>>>>>> self.connect()<br>
>>>>>>>>>>>>>> File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect<br>
>>>>>>>>>>>>>> sock = socket.create_connection((self.host, self.port), self.timeout)<br>
>>>>>>>>>>>>>> File "/usr/lib64/python2.7/socket.py", line 571, in create_connection<br>
>>>>>>>>>>>>>> raise err<br>
>>>>>>>>>>>>>> error: [Errno 101] Network is unreachable<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> Yaniv, can you please also take a look at this one?<br>
>>>>>>>>>>>>> It's exactly the opposite of <a href="https://bugzilla.redhat.com/1358530" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/1358530</a>:<br>
>>>>>>>>>>>>> here the jsonrpc client works but not the xmlrpc one.<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>>> 25.07.2016, 11:17, "Simone Tiraboschi" <<a href="mailto:stirabos@redhat.com">stirabos@redhat.com</a>>:<br>
>>>>>>>>>>>>>>> On Mon, Jul 25, 2016 at 7:51 AM, <<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>> wrote:<br>
>>>>>>>>>>>>>>>> Simone, there is something interesting in the vdsm.log?<br>
>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>> From what I saw, the issue is not related to the storage but to the network.<br>
>>>>>>>>>>>>>>> ovirt-hosted-engine-setup uses the jsonrpc client, but the code<br>
>>>>>>>>>>>>>>> from ovirt-hosted-engine-ha still uses the xmlrpc client somewhere, and<br>
>>>>>>>>>>>>>>> this also happens when the setup asks to create the lockspace volume.<br>
>>>>>>>>>>>>>>> It seems that in your case the xmlrpc client could not connect to vdsm on<br>
>>>>>>>>>>>>>>> localhost.<br>
>>>>>>>>>>>>>>> It could be somehow related to:<br>
>>>>>>>>>>>>>>> <a href="https://bugzilla.redhat.com/1358530" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/1358530</a><br>
>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>> Can you please try executing<br>
>>>>>>>>>>>>>>> sudo vdsClient -s 0 getVdsCaps<br>
>>>>>>>>>>>>>>> on that host?<br>
>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>> 22.07.2016, 19:36, "<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>" <<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>>:<br>
>>>>>>>>>>>>>>>>> Simone, thanks for link.<br>
>>>>>>>>>>>>>>>>> vdsm.log attached<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> 22.07.2016, 19:28, "Simone Tiraboschi" <<a href="mailto:stirabos@redhat.com">stirabos@redhat.com</a>>:<br>
>>>>>>>>>>>>>>>>>> On Fri, Jul 22, 2016 at 5:59 PM, <<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>> wrote:<br>
>>>>>>>>>>>>>>>>>>> Thank you for your response, Simone.<br>
>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>> Log attached.<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> It seems it comes from VDSM; can you please also attach vdsm.log?<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>> I don't use ovirt-engine-appliance because I have not found a how-to for ovirt-engine-appliance deployment in a hosted-engine configuration.<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> yum install ovirt-engine-appliance<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> Then follow the instruction here:<br>
>>>>>>>>>>>>>>>>>> <a href="http://www.ovirt.org/develop/release-management/features/heapplianceflow/" rel="noreferrer" target="_blank">http://www.ovirt.org/develop/release-management/features/heapplianceflow/</a><br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>> 22.07.2016, 17:09, "Simone Tiraboschi" <<a href="mailto:stirabos@redhat.com">stirabos@redhat.com</a>>:<br>
>>>>>>>>>>>>>>>>>>>> Hi Aleksey,<br>
>>>>>>>>>>>>>>>>>>>> Can you please attach hosted-engine-setup logs?<br>
>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>> On Fri, Jul 22, 2016 at 3:46 PM, <<a href="mailto:aleksey.maksimov@it-kb.ru">aleksey.maksimov@it-kb.ru</a>> wrote:<br>
>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> Hello oVirt guru`s !<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> I have problem with initial deploy of ovirt 4.0 hosted engine.<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> My environment :<br>
>>>>>>>>>>>>>>>>>>>>> ============================<br>
>>>>>>>>>>>>>>>>>>>>> * Two servers HP ProLiant DL 360 G5 with Qlogic FC HBA connected (with<br>
>>>>>>>>>>>>>>>>>>>>> multipathd) to storage HP 3PAR 7200<br>
>>>>>>>>>>>>>>>>>>>>> * On each server installed CentOS 7.2 Linux (3.10.0-327.22.2.el7.x86_64)<br>
>>>>>>>>>>>>>>>>>>>>> * On 3PAR storage I created 2 LUNs for oVirt.<br>
>>>>>>>>>>>>>>>>>>>>> - First LUN for oVirt Hosted Engine VM (60GB)<br>
>>>>>>>>>>>>>>>>>>>>> - Second LUN for all other VMs (2TB)<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> # multipath -ll<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> 3par-vv1 (360002ac0000000000000001b0000cec9) dm-0 3PARdata,VV<br>
>>>>>>>>>>>>>>>>>>>>> size=60G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw<br>
>>>>>>>>>>>>>>>>>>>>> `-+- policy='round-robin 0' prio=50 status=active<br>
>>>>>>>>>>>>>>>>>>>>> |- 2:0:1:1 sdd 8:48 active ready running<br>
>>>>>>>>>>>>>>>>>>>>> |- 3:0:0:1 sdf 8:80 active ready running<br>
>>>>>>>>>>>>>>>>>>>>> |- 2:0:0:1 sdb 8:16 active ready running<br>
>>>>>>>>>>>>>>>>>>>>> `- 3:0:1:1 sdh 8:112 active ready running<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> 3par-vv2 (360002ac000000000000000160000cec9) dm-1 3PARdata,VV<br>
>>>>>>>>>>>>>>>>>>>>> size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw<br>
>>>>>>>>>>>>>>>>>>>>> `-+- policy='round-robin 0' prio=50 status=active<br>
>>>>>>>>>>>>>>>>>>>>> |- 2:0:0:0 sda 8:0 active ready running<br>
>>>>>>>>>>>>>>>>>>>>> |- 3:0:0:0 sde 8:64 active ready running<br>
>>>>>>>>>>>>>>>>>>>>> |- 2:0:1:0 sdc 8:32 active ready running<br>
>>>>>>>>>>>>>>>>>>>>> `- 3:0:1:0 sdg 8:96 active ready running<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> My steps on first server (initial deploy of ovirt 4.0 hosted engine):<br>
>>>>>>>>>>>>>>>>>>>>> ============================<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> # systemctl stop NetworkManager<br>
>>>>>>>>>>>>>>>>>>>>> # systemctl disable NetworkManager<br>
>>>>>>>>>>>>>>>>>>>>> # yum -y install <a href="http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm" rel="noreferrer" target="_blank">http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm</a><br>
>>>>>>>>>>>>>>>>>>>>> # yum -y install epel-release<br>
>>>>>>>>>>>>>>>>>>>>> # wget<br>
>>>>>>>>>>>>>>>>>>>>> <a href="http://mirror.yandex.ru/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-1511.iso" rel="noreferrer" target="_blank">http://mirror.yandex.ru/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-1511.iso</a><br>
>>>>>>>>>>>>>>>>>>>>> -P /tmp/<br>
>>>>>>>>>>>>>>>>>>>>> # yum install ovirt-hosted-engine-setup<br>
>>>>>>>>>>>>>>>>>>>>> # yum install screen<br>
>>>>>>>>>>>>>>>>>>>>> # screen -RD<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> ...in screen session :<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> # hosted-engine --deploy<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> ...<br>
>>>>>>>>>>>>>>>>>>>>> in configuration process I chose "fc" as storage type for oVirt hosted<br>
>>>>>>>>>>>>>>>>>>>>> engine vm and select 60GB LUN...<br>
>>>>>>>>>>>>>>>>>>>>> ...<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> --== CONFIGURATION PREVIEW ==--<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> ...<br>
>>>>>>>>>>>>>>>>>>>>> Firewall manager : iptables<br>
>>>>>>>>>>>>>>>>>>>>> Gateway address : 10.1.0.1<br>
>>>>>>>>>>>>>>>>>>>>> Host name for web application : KOM-AD01-OVIRT1<br>
>>>>>>>>>>>>>>>>>>>>> Storage Domain type : fc<br>
>>>>>>>>>>>>>>>>>>>>> Host ID : 1<br>
>>>>>>>>>>>>>>>>>>>>> LUN ID :<br>
>>>>>>>>>>>>>>>>>>>>> 360002ac0000000000000001b0000cec9<br>
>>>>>>>>>>>>>>>>>>>>> Image size GB : 40<br>
>>>>>>>>>>>>>>>>>>>>> Console type : vnc<br>
>>>>>>>>>>>>>>>>>>>>> Memory size MB : 4096<br>
>>>>>>>>>>>>>>>>>>>>> MAC address : 00:16:3e:77:1d:07<br>
>>>>>>>>>>>>>>>>>>>>> Boot type : cdrom<br>
>>>>>>>>>>>>>>>>>>>>> Number of CPUs : 2<br>
>>>>>>>>>>>>>>>>>>>>> ISO image (cdrom boot/cloud-init) :<br>
>>>>>>>>>>>>>>>>>>>>> /tmp/CentOS-7-x86_64-NetInstall-1511.iso<br>
>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>> Can I ask why you prefer/need to manually create a VM installing from<br>
>>>>>>>>>>>>>>>>>>>> a CD instead of using the ready-to-use ovirt-engine-appliance?<br>
>>>>>>>>>>>>>>>>>>>> Using the appliance makes the setup process a lot shorter and more comfortable.<br>
>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> CPU Type : model_Penryn<br>
>>>>>>>>>>>>>>>>>>>>> ...<br>
>>>>>>>>>>>>>>>>>>>>> and get error after step "Verifying sanlock lockspace initialization"<br>
>>>>>>>>>>>>>>>>>>>>> ...<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Verifying sanlock lockspace initialization<br>
>>>>>>>>>>>>>>>>>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network<br>
>>>>>>>>>>>>>>>>>>>>> is unreachable<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Stage: Clean up<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Generating answer file<br>
>>>>>>>>>>>>>>>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160722124133.conf'<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Stage: Pre-termination<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Stage: Termination<br>
>>>>>>>>>>>>>>>>>>>>> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,<br>
>>>>>>>>>>>>>>>>>>>>> please check the issue, fix and redeploy<br>
>>>>>>>>>>>>>>>>>>>>> Log file is located at<br>
>>>>>>>>>>>>>>>>>>>>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160722123404-t26vw0.log<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> Interestingly<br>
>>>>>>>>>>>>>>>>>>>>> ============================<br>
>>>>>>>>>>>>>>>>>>>>> If I try to deploy hosted-engine v3.6, everything goes well in the same<br>
>>>>>>>>>>>>>>>>>>>>> configuration !! :<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> ....<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Stage: Transaction setup<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Stage: Misc configuration<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Stage: Package installation<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Stage: Misc configuration<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Configuring libvirt<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Configuring VDSM<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Starting vdsmd<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Waiting for VDSM hardware info<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Configuring the management bridge<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Creating Volume Group<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Creating Storage Domain<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Creating Storage Pool<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Connecting Storage Pool<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Verifying sanlock lockspace initialization<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Creating Image for 'hosted-engine.lockspace' ...<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Image for 'hosted-engine.lockspace' created successfully<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Creating Image for 'hosted-engine.metadata' ...<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Image for 'hosted-engine.metadata' created successfully<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Creating VM Image<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Destroying Storage Pool<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Start monitoring domain<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Configuring VM<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Updating hosted-engine configuration<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Stage: Transaction commit<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Stage: Closing up<br>
>>>>>>>>>>>>>>>>>>>>> [ INFO ] Creating VM<br>
>>>>>>>>>>>>>>>>>>>>> You can now connect to the VM with the following command:<br>
>>>>>>>>>>>>>>>>>>>>> /bin/remote-viewer vnc://localhost:5900<br>
>>>>>>>>>>>>>>>>>>>>> ...<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> What could be the problem?<br>
>>>>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>>>>> _______________________________________________<br>
>>>>>>>>>>>>>>>>>>>>> Users mailing list<br>
>>>>>>>>>>>>>>>>>>>>> <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
>>>>>>>>>>>>>>>>>>>>> <a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
>>>>>>>>><br>
>>>>>><br>
</div></div></blockquote></div><br></div></div>