oVirt 4 Hosted Engine deploy on fc storage - [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable

Hi Aleksey, can you please attach the hosted-engine-setup logs?

On Fri, Jul 22, 2016 at 3:46 PM, <aleksey.maksimov@it-kb.ru> wrote:
Hello oVirt gurus!

I have a problem with the initial deployment of the oVirt 4.0 hosted engine.

My environment:
============================
* Two HP ProLiant DL360 G5 servers with QLogic FC HBAs connected (via multipathd) to an HP 3PAR 7200 storage array
* CentOS 7.2 Linux (3.10.0-327.22.2.el7.x86_64) installed on each server
* Two LUNs created on the 3PAR storage for oVirt:
  - First LUN for the oVirt Hosted Engine VM (60GB)
  - Second LUN for all other VMs (2TB)
# multipath -ll
3par-vv1 (360002ac0000000000000001b0000cec9) dm-0 3PARdata,VV
size=60G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 2:0:1:1 sdd 8:48  active ready running
  |- 3:0:0:1 sdf 8:80  active ready running
  |- 2:0:0:1 sdb 8:16  active ready running
  `- 3:0:1:1 sdh 8:112 active ready running

3par-vv2 (360002ac000000000000000160000cec9) dm-1 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 2:0:0:0 sda 8:0  active ready running
  |- 3:0:0:0 sde 8:64 active ready running
  |- 2:0:1:0 sdc 8:32 active ready running
  `- 3:0:1:0 sdg 8:96 active ready running
My steps on the first server (initial deployment of the oVirt 4.0 hosted engine):
============================
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
# yum -y install epel-release
# wget http://mirror.yandex.ru/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-1511... -P /tmp/
# yum install ovirt-hosted-engine-setup
# yum install screen
# screen -RD
...in the screen session:
# hosted-engine --deploy
... during the configuration process I chose "fc" as the storage type for the oVirt hosted engine VM and selected the 60GB LUN ...
--== CONFIGURATION PREVIEW ==--
...
Firewall manager                  : iptables
Gateway address                   : 10.1.0.1
Host name for web application     : KOM-AD01-OVIRT1
Storage Domain type               : fc
Host ID                           : 1
LUN ID                            : 360002ac0000000000000001b0000cec9
Image size GB                     : 40
Console type                      : vnc
Memory size MB                    : 4096
MAC address                       : 00:16:3e:77:1d:07
Boot type                         : cdrom
Number of CPUs                    : 2
ISO image (cdrom boot/cloud-init) : /tmp/CentOS-7-x86_64-NetInstall-1511.iso
Can I ask why you prefer/need to manually create a VM installing from a CD instead of using the ready-to-use ovirt-engine-appliance? Using the appliance makes the setup process a lot shorter and more comfortable.
CPU Type                          : model_Penryn

... and I get an error after the step "Verifying sanlock lockspace initialization" ...
[ INFO ] Verifying sanlock lockspace initialization
[ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160722124133.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy
          Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160722123404-t26vw0.log
Interestingly
============================
If I try to deploy hosted-engine v3.6 in the same configuration, everything goes well!:
....
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Configuring libvirt
[ INFO ] Configuring VDSM
[ INFO ] Starting vdsmd
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Configuring the management bridge
[ INFO ] Creating Volume Group
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO ] Image for 'hosted-engine.lockspace' created successfully
[ INFO ] Creating Image for 'hosted-engine.metadata' ...
[ INFO ] Image for 'hosted-engine.metadata' created successfully
[ INFO ] Creating VM Image
[ INFO ] Destroying Storage Pool
[ INFO ] Start monitoring domain
[ INFO ] Configuring VM
[ INFO ] Updating hosted-engine configuration
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Creating VM
You can now connect to the VM with the following command:
/bin/remote-viewer vnc://localhost:5900
...
What could be the problem?
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Thank you for your response, Simone. Log attached.

I don't use the ovirt-engine-appliance because I have not found a "how-to" for ovirt-engine-appliance deployment in a hosted engine configuration.

22.07.2016, 17:09, "Simone Tiraboschi" <stirabos@redhat.com>:

On Fri, Jul 22, 2016 at 5:59 PM, <aleksey.maksimov@it-kb.ru> wrote:
Thank you for your response, Simone.
Log attached.
It seems it comes from VDSM; can you please attach vdsm.log as well?
I don't use the ovirt-engine-appliance because I have not found a "how-to" for ovirt-engine-appliance deployment in a hosted engine configuration.
yum install ovirt-engine-appliance

Then follow the instructions here: http://www.ovirt.org/develop/release-management/features/heapplianceflow/

Simone, thanks for the link. vdsm.log attached.

22.07.2016, 19:28, "Simone Tiraboschi" <stirabos@redhat.com>:

Simone, is there something interesting in the vdsm.log?

22.07.2016, 19:36, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:

On Mon, Jul 25, 2016 at 7:51 AM, <aleksey.maksimov@it-kb.ru> wrote:
Simone, there is something interesting in the vdsm.log?
From what I saw, the issue is not related to the storage but to the network. ovirt-hosted-engine-setup uses the jsonrpc client, but the code from ovirt-hosted-engine-ha still uses the xmlrpc client somewhere, and this also happens when the setup asks to create the lockspace volume. It seems that in your case the xmlrpc client could not connect to VDSM on the localhost.

It could be somehow related to: https://bugzilla.redhat.com/1358530

Can you please try executing

sudo vdsClient -s 0 getVdsCaps

on that host?
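The failing connection can also be probed outside of vdsClient. A minimal sketch, assuming VDSM's default port 54321 (the `probe` helper is hypothetical, not part of VDSM): it tries every address that `localhost` resolves to, the way the xmlrpc client's `socket.create_connection()` call does, and reports the result per address. If `localhost` resolves to `::1` first and IPv6 is unavailable on the host, that attempt fails with the same `[Errno 101] Network is unreachable`.

```python
import socket

def probe(host, port=54321):  # 54321 assumed as VDSM's default port
    """Attempt a TCP connection to each address `host` resolves to and
    return a list of (address, status) pairs."""
    results = []
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, 0, socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(5)
        try:
            s.connect(sockaddr)
            results.append((sockaddr[0], "ok"))
        except OSError as e:
            results.append((sockaddr[0], e.strerror))
        finally:
            s.close()
    return results

if __name__ == "__main__":
    for addr, status in probe("localhost"):
        print(addr, status)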

# vdsClient -s 0 getVdsCaps
Traceback (most recent call last):
  File "/usr/share/vdsm/vdsClient.py", line 2980, in <module>
    code, message = commands[command][0](commandArgs)
  File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap
    return self.ExecAndExit(self.s.getVdsCapabilities())
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
    self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
    self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 101] Network is unreachable

25.07.2016, 11:17, "Simone Tiraboschi" <stirabos@redhat.com>:
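The final frames of this traceback are informative: `socket.create_connection()` walks every address the hostname resolves to and re-raises the *last* per-address error (`raise err` in socket.py) only if all attempts failed. A simplified model of that loop (the function and `fake_connect` are illustrative, not the stdlib code itself):

```python
def create_connection_like(addresses, connect):
    """Simplified model of the loop behind `raise err` in socket.py:
    try each resolved address in order; return on the first success,
    re-raise the last error only if every attempt failed."""
    err = None
    for addr in addresses:
        try:
            return connect(addr)
        except OSError as e:
            err = e
    raise err  # surfaces as the [Errno 101] seen in the traceback

# Example: an IPv6 attempt fails with errno 101, the IPv4 one succeeds.
def fake_connect(addr):
    if addr == "::1":
        raise OSError(101, "Network is unreachable")
    return "connected to %s" % addr

print(create_connection_like(["::1", "127.0.0.1"], fake_connect))
# -> connected to 127.0.0.1
```

So seeing `[Errno 101]` here means no resolved address for localhost was reachable at all, which points at host networking (e.g. address-family configuration) rather than at the FC storage.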
On Mon, Jul 25, 2016 at 7:51 AM, <aleksey.maksimov@it-kb.ru> wrote:
Simone, is there something interesting in the vdsm.log?
From what I saw, the issue is not related to the storage but to the network. ovirt-hosted-engine-setup uses the jsonrpc client, while the code from ovirt-hosted-engine-ha still uses the xmlrpc client in a few places; this also happens when the setup asks to create the lockspace volume. It seems that in your case the xmlrpc client could not connect to vdsm on localhost. It could be somehow related to: https://bugzilla.redhat.com/1358530
Can you please try executing sudo vdsClient -s 0 getVdsCaps on that host?
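Since [Errno 101] (ENETUNREACH) from socket.create_connection means the kernel found no route to the address the client resolved, a few generic host-side checks can narrow it down. This is a hypothetical diagnostic sketch, not an official procedure; command availability may vary:

```shell
# The xmlrpc client connects to vdsm on this same host, so the
# loopback interface must be up and routable:
ip addr show lo || true          # expect state UP with 127.0.0.1/8
ip route get 127.0.0.1 || true   # expect "local 127.0.0.1 dev lo"

# Check that something is actually listening on the vdsm port (54321):
ss -tln | grep 54321 || echo "nothing listening on 54321"

# The client may connect to the host's FQDN rather than 127.0.0.1,
# so verify the name still resolves after the setup run:
getent hosts "$(hostname -f)" || echo "hostname does not resolve"
```

If the loopback route is present and the port is listening, the failure is more likely in name resolution than in the interface configuration itself.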
22.07.2016, 19:36, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
Simone, thanks for the link. vdsm.log attached.
22.07.2016, 19:28, "Simone Tiraboschi" <stirabos@redhat.com>:
On Fri, Jul 22, 2016 at 5:59 PM, <aleksey.maksimov@it-kb.ru> wrote:
Thank you for your response, Simone.
Log attached.
It seems it comes from VDSM; can you please also attach vdsm.log?
I don't use ovirt-engine-appliance because I have not found a how-to for deploying the ovirt-engine-appliance in a hosted-engine configuration.
yum install ovirt-engine-appliance
Then follow the instructions here: http://www.ovirt.org/develop/release-management/features/heapplianceflow/
22.07.2016, 17:09, "Simone Tiraboschi" <stirabos@redhat.com>:
Hi Aleksey, Can you please attach hosted-engine-setup logs?
On Fri, Jul 22, 2016 at 3:46 PM, <aleksey.maksimov@it-kb.ru> wrote:
> [...]
>
> --== CONFIGURATION PREVIEW ==--
> ...
> Firewall manager                  : iptables
> Gateway address                   : 10.1.0.1
> Host name for web application     : KOM-AD01-OVIRT1
> Storage Domain type               : fc
> Host ID                           : 1
> LUN ID                            : 360002ac0000000000000001b0000cec9
> Image size GB                     : 40
> Console type                      : vnc
> Memory size MB                    : 4096
> MAC address                       : 00:16:3e:77:1d:07
> Boot type                         : cdrom
> Number of CPUs                    : 2
> ISO image (cdrom boot/cloud-init) : /tmp/CentOS-7-x86_64-NetInstall-1511.iso
Can I ask why you prefer/need to manually create a VM installing from a CD instead of using the ready-to-use ovirt-engine-appliance? Using the appliance makes the setup process a lot shorter and more comfortable.
> CPU Type                          : model_Penryn
> ...
> and I get this error after the step "Verifying sanlock lockspace initialization":
>
> [ INFO ] Verifying sanlock lockspace initialization
> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable
> [ INFO ] Stage: Clean up
> [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160722124133.conf'
> [ INFO ] Stage: Pre-termination
> [ INFO ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy
> Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160722123404-t26vw0.log
>
> Interestingly, if I try to deploy hosted-engine 3.6 in the same configuration, everything goes well:
>
> [successful 3.6 deployment log shown in full above]

On Mon, Jul 25, 2016 at 10:22 AM, <aleksey.maksimov@it-kb.ru> wrote:
# vdsClient -s 0 getVdsCaps
[traceback identical to the one above]
Yaniv, can you please also take a look at this one? It is exactly the opposite of https://bugzilla.redhat.com/1358530: here the jsonrpc client works but the xmlrpc one does not.
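One pattern that can produce exactly this split between clients (an assumption worth checking, not a confirmed diagnosis) is the host name resolving first or only to an address family with no usable route, e.g. ::1 while IPv6 is disabled: connecting to an IPv6 address with no IPv6 route fails with ENETUNREACH, while a client that ends up on 127.0.0.1 works fine. A quick probe:

```shell
# See which addresses "localhost" resolves to, and in what order;
# a client that ends up on an unroutable family gets
# [Errno 101] Network is unreachable.
getent ahosts localhost

# Is IPv6 actually usable on this host?
cat /proc/sys/net/ipv6/conf/all/disable_ipv6 2>/dev/null \
    || echo "no IPv6 support compiled/loaded"
ip -6 route get ::1 2>/dev/null || echo "no route to ::1 (would give ENETUNREACH)"
```

Comparing this output between a working 3.6 host and the failing 4.0 host could show whether resolution order is the difference.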
[...]

Could this be caused by the oVirt installer changing the network configuration files (ifcfg-*, resolv.conf)? After the installation error I see that my DNS server entries have disappeared from resolv.conf, and now the server is unable to resolve names.
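One way to verify this suspicion is to compare the current DNS-related configuration with what was there before the deploy. A hypothetical check sequence (file names depend on the host's NIC naming):

```shell
# Current nameservers -- are the DNS servers still listed?
cat /etc/resolv.conf

# The setup moves the NIC configuration under the ovirtmgmt bridge;
# check whether any DNS1=/DNS2= lines survived the rewrite:
grep -H '^DNS' /etc/sysconfig/network-scripts/ifcfg-* 2>/dev/null \
    || echo "no DNS entries left in any ifcfg file"

# vdsm keeps pre-deploy backups of the network files it touches;
# comparing them shows what was changed (path is an assumption, verify):
ls /var/lib/vdsm/netconfback 2>/dev/null || echo "no vdsm netconf backup found"
```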
[...]

This could be the issue here as well as for BZ #1358530.

On Mon, Jul 25, 2016 at 10:53 AM, <aleksey.maksimov@it-kb.ru> wrote:
Could this be caused by the oVirt installer changing the network configuration files (ifcfg-*, resolv.conf)? After the installation error I see that my DNS server entries have disappeared from resolv.conf, and now the server is unable to resolve names.
[...]

On Mon, Jul 25, 2016 at 10:56 AM, Piotr Kliczewski <piotr.kliczewski@gmail.com> wrote:
This could be the issue here as well as for BZ #1358530
On Mon, Jul 25, 2016 at 10:53 AM, <aleksey.maksimov@it-kb.ru> wrote:
Could this be caused by the oVirt installer changing the network configuration files (ifcfg-*, resolv.conf)? After the installation error I see that my DNS server entries have disappeared from resolv.conf, and now the server is unable to resolve names.
So it could be related to https://bugzilla.redhat.com/show_bug.cgi?id=1160423

Aleksey, was your DNS configured with DNS1 and DNS2 entries only on the interface you used to create the management bridge? Can you please try the workaround described at https://bugzilla.redhat.com/show_bug.cgi?id=1160423#c25 ?
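For reference, the general shape of such a workaround is to keep the DNS1/DNS2 entries on the ifcfg file of the ovirtmgmt bridge itself, not only on the enslaved NIC, so they are not dropped when the deploy rewrites the slave's configuration. A sketch of such a bridge file (the addresses are placeholders, not values from this thread, except GATEWAY which matches the 10.1.0.1 value from the configuration preview; the authoritative steps are in the bugzilla comment):

```shell
# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
# (IPADDR/DNS1/DNS2 below are placeholder values -- substitute your own)
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.1.0.10
PREFIX=24
GATEWAY=10.1.0.1
DNS1=10.1.0.5
DNS2=10.1.0.6
```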
25.07.2016, 11:26, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 10:22 AM, <aleksey.maksimov@it-kb.ru> wrote:
# vdsClient -s 0 getVdsCaps
Traceback (most recent call last):
  File "/usr/share/vdsm/vdsClient.py", line 2980, in <module>
    code, message = commands[command][0](commandArgs)
  File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap
    return self.ExecAndExit(self.s.getVdsCapabilities())
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
    self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
    self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 101] Network is unreachable
Yaniv, can you please also take a look at this one? It's exactly the opposite of https://bugzilla.redhat.com/1358530: here the jsonrpc client works but the xmlrpc one does not.
25.07.2016, 11:17, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 7:51 AM, <aleksey.maksimov@it-kb.ru> wrote:
Simone, is there something interesting in the vdsm.log?
From what I saw, the issue is not related to the storage but to the network. ovirt-hosted-engine-setup uses the jsonrpc client, but the code in ovirt-hosted-engine-ha still uses the xmlrpc client in places, and that path is also exercised when the setup asks to create the lockspace volume. It seems that in your case the xmlrpc client could not connect to vdsm on localhost. It could be somehow related to: https://bugzilla.redhat.com/1358530
Can you please try executing sudo vdsClient -s 0 getVdsCaps on that host?
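The failing xmlrpc call boils down to a plain TCP connect, so a quick probe separates the failure modes; a sketch (54321 is vdsm's usual port, and the symbolic errno names are as defined on Linux):

```python
# Probe a TCP endpoint the way the xmlrpc client does. The errno tells
# the story: ENETUNREACH (101, the error in the traceback) means the
# kernel has no route to the address at all -- worth checking the 'lo'
# interface -- while ECONNREFUSED would only mean nothing is listening.
import errno
import socket

def probe(host="127.0.0.1", port=54321, timeout=2.0):
    """Return None if the connect succeeds, else the symbolic errno name."""
    try:
        sock = socket.create_connection((host, port), timeout)
    except socket.error as exc:
        return errno.errorcode.get(exc.errno, str(exc.errno))
    sock.close()
    return None

print(errno.ENETUNREACH)  # 101 on Linux, matching the traceback
print(probe())            # None, 'ECONNREFUSED', 'ENETUNREACH', ...
```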
22.07.2016, 19:36, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
> Simone, thanks for the link.
> vdsm.log attached
>
> 22.07.2016, 19:28, "Simone Tiraboschi" <stirabos@redhat.com>:
>> On Fri, Jul 22, 2016 at 5:59 PM, <aleksey.maksimov@it-kb.ru> wrote:
>>> Thank you for your response, Simone.
>>>
>>> Log attached.
>>
>> It seems it comes from VDSM; can you please also attach vdsm.log?
>>
>>> I don't use ovirt-engine-appliance because I have not found a "how-to" for ovirt-engine-appliance deployment in a hosted engine configuration.
>>
>> yum install ovirt-engine-appliance
>>
>> Then follow the instructions here:
>> http://www.ovirt.org/develop/release-management/features/heapplianceflow/

"Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on?" Yes. Of course 25.07.2016, 12:27, "Simone Tiraboschi" <stirabos@redhat.com>:

What am I supposed to do to successfully deploy oVirt 4? Any ideas?

25.07.2016, 12:47, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
"Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on?"
Yes. Of course
25.07.2016, 12:27, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 10:56 AM, Piotr Kliczewski <piotr.kliczewski@gmail.com> wrote:
This could be the issue here as well as for BZ #1358530
On Mon, Jul 25, 2016 at 10:53 AM, <aleksey.maksimov@it-kb.ru> wrote:
Could this be due to the fact that the ovirt installer has changed network configuration files (ifcfg-*, resolv.conf) ? After the error in ovirt installation process I see from resolv.conf disappeared on my DNS servers entry and now the server is unable to resolve names.
So it could be related to https://bugzilla.redhat.com/show_bug.cgi?id=1160423
Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on? Can you please try the workaround described here https://bugzilla.redhat.com/show_bug.cgi?id=1160423#c25 ?
25.07.2016, 11:26, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 10:22 AM, <aleksey.maksimov@it-kb.ru> wrote:
# vdsClient -s 0 getVdsCaps
Traceback (most recent call last): File "/usr/share/vdsm/vdsClient.py", line 2980, in <module> code, message = commands[command][0](commandArgs) File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap return self.ExecAndExit(self.s.getVdsCapabilities()) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request verbose=self.__verbose File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request return self.single_request(host, handler, request_body, verbose) File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request self.send_content(h, request_body) File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content connection.endheaders(request_body) File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders self._send_output(message_body) File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output self.send(msg) File "/usr/lib64/python2.7/httplib.py", line 797, in send self.connect() File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect sock = socket.create_connection((self.host, self.port), self.timeout) File "/usr/lib64/python2.7/socket.py", line 571, in create_connection raise err error: [Errno 101] Network is unreachable
Yaniv, can you please take also a look to this one? it's exactly the opposite of https://bugzilla.redhat.com/1358530 Here the jsonrpcclient works but not the xmlrpc one.
25.07.2016, 11:17, "Simone Tiraboschi" <stirabos@redhat.com>: > On Mon, Jul 25, 2016 at 7:51 AM, <aleksey.maksimov@it-kb.ru> wrote: >> Simone, there is something interesting in the vdsm.log? > > For what I saw the issue is not related to the storage but to the network. > ovirt-hosted-engine-setup uses the jsonrpc client, instead the code > from ovirt-hosted-engine-ha still uses the xmlrpc client somewhere and > this happens also when the setup asks to create the lockspace volume. > It seams that in your case the xmlrpc client could not connect vdsm on > the localhost. > It could be somehow related to: > https://bugzilla.redhat.com/1358530 > > Can you please try executing > sudo vdsClient -s 0 getVdsCaps > on that host? > >> 22.07.2016, 19:36, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>: >>> Simone, thanks for link. >>> vdsm.log attached >>> >>> 22.07.2016, 19:28, "Simone Tiraboschi" <stirabos@redhat.com>: >>>> On Fri, Jul 22, 2016 at 5:59 PM, <aleksey.maksimov@it-kb.ru> wrote: >>>>> Thank you for your response, Simone. >>>>> >>>>> Log attached. >>>> >>>> It seams it comes from VDSM, can you please attach also vdsm.log? >>>> >>>>> I don't use ovirt-engine-appliance because I have not found "how-to" for ovirt-engine-appliance deployment in hosted engine configuration. >>>> >>>> yum install ovirt-engine-appliance >>>> >>>> Then follow the instruction here: >>>> http://www.ovirt.org/develop/release-management/features/heapplianceflow/ >>>> >>>>> 22.07.2016, 17:09, "Simone Tiraboschi" <stirabos@redhat.com>: >>>>>> Hi Aleksey, >>>>>> Can you please attach hosted-engine-setup logs? >>>>>> >>>>>> On Fri, Jul 22, 2016 at 3:46 PM, <aleksey.maksimov@it-kb.ru> wrote: >>>>>> >>>>>>> Hello oVirt guru`s ! >>>>>>> >>>>>>> I have problem with initial deploy of ovirt 4.0 hosted engine. 
>>>>>>> >>>>>>> My environment : >>>>>>> ============================ >>>>>>> * Two servers HP ProLiant DL 360 G5 with Qlogic FC HBA connected (with >>>>>>> multipathd) to storage HP 3PAR 7200 >>>>>>> * On each server installed CentOS 7.2 Linux (3.10.0-327.22.2.el7.x86_64) >>>>>>> * On 3PAR storage I created 2 LUNs for oVirt. >>>>>>> - First LUN for oVirt Hosted Engine VM (60GB) >>>>>>> - Second LUN for all other VMs (2TB) >>>>>>> >>>>>>> # multipath -ll >>>>>>> >>>>>>> 3par-vv1 (360002ac0000000000000001b0000cec9) dm-0 3PARdata,VV >>>>>>> size=60G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw >>>>>>> `-+- policy='round-robin 0' prio=50 status=active >>>>>>> |- 2:0:1:1 sdd 8:48 active ready running >>>>>>> |- 3:0:0:1 sdf 8:80 active ready running >>>>>>> |- 2:0:0:1 sdb 8:16 active ready running >>>>>>> `- 3:0:1:1 sdh 8:112 active ready running >>>>>>> >>>>>>> 3par-vv2 (360002ac000000000000000160000cec9) dm-1 3PARdata,VV >>>>>>> size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw >>>>>>> `-+- policy='round-robin 0' prio=50 status=active >>>>>>> |- 2:0:0:0 sda 8:0 active ready running >>>>>>> |- 3:0:0:0 sde 8:64 active ready running >>>>>>> |- 2:0:1:0 sdc 8:32 active ready running >>>>>>> `- 3:0:1:0 sdg 8:96 active ready running >>>>>>> >>>>>>> My steps on first server (initial deploy of ovirt 4.0 hosted engine): >>>>>>> ============================ >>>>>>> >>>>>>> # systemctl stop NetworkManager >>>>>>> # systemctl disable NetworkManager >>>>>>> # yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm >>>>>>> # yum -y install epel-release >>>>>>> # wget >>>>>>> http://mirror.yandex.ru/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-1511... >>>>>>> -P /tmp/ >>>>>>> # yum install ovirt-hosted-engine-setup >>>>>>> # yum install screen >>>>>>> # screen -RD >>>>>>> >>>>>>> ...in screen session : >>>>>>> >>>>>>> # hosted-engine --deploy >>>>>>> >>>>>>> ... 
>>>>>>> in configuration process I chose "fc" as storage type for oVirt hosted >>>>>>> engine vm and select 60GB LUN... >>>>>>> ... >>>>>>> >>>>>>> --== CONFIGURATION PREVIEW ==-- >>>>>>> >>>>>>> ... >>>>>>> Firewall manager : iptables >>>>>>> Gateway address : 10.1.0.1 >>>>>>> Host name for web application : KOM-AD01-OVIRT1 >>>>>>> Storage Domain type : fc >>>>>>> Host ID : 1 >>>>>>> LUN ID : >>>>>>> 360002ac0000000000000001b0000cec9 >>>>>>> Image size GB : 40 >>>>>>> Console type : vnc >>>>>>> Memory size MB : 4096 >>>>>>> MAC address : 00:16:3e:77:1d:07 >>>>>>> Boot type : cdrom >>>>>>> Number of CPUs : 2 >>>>>>> ISO image (cdrom boot/cloud-init) : >>>>>>> /tmp/CentOS-7-x86_64-NetInstall-1511.iso >>>>>> >>>>>> Can I ask why you prefer/need to manually create a VM installing from >>>>>> a CD instead of using the ready-to-use ovirt-engine-appliance? >>>>>> Using the appliance makes the setup process a lot shorted and more comfortable. >>>>>> >>>>>>> CPU Type : model_Penryn >>>>>>> ... >>>>>>> and get error after step "Verifying sanlock lockspace initialization" >>>>>>> ... >>>>>>> >>>>>>> [ INFO ] Verifying sanlock lockspace initialization >>>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network >>>>>>> is unreachable >>>>>>> [ INFO ] Stage: Clean up >>>>>>> [ INFO ] Generating answer file >>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160722124133.conf' >>>>>>> [ INFO ] Stage: Pre-termination >>>>>>> [ INFO ] Stage: Termination >>>>>>> [ ERROR ] Hosted Engine deployment failed: this system is not reliable, >>>>>>> please check the issue, fix and redeploy >>>>>>> Log file is located at >>>>>>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160722123404-t26vw0.log >>>>>>> >>>>>>> Interestingly >>>>>>> ============================ >>>>>>> If I try to deploy hosted-engine v3.6, everything goes well in the same >>>>>>> configuration !! : >>>>>>> >>>>>>> .... 
>>>>>>> [ INFO ] Stage: Transaction setup >>>>>>> [ INFO ] Stage: Misc configuration >>>>>>> [ INFO ] Stage: Package installation >>>>>>> [ INFO ] Stage: Misc configuration >>>>>>> [ INFO ] Configuring libvirt >>>>>>> [ INFO ] Configuring VDSM >>>>>>> [ INFO ] Starting vdsmd >>>>>>> [ INFO ] Waiting for VDSM hardware info >>>>>>> [ INFO ] Configuring the management bridge >>>>>>> [ INFO ] Creating Volume Group >>>>>>> [ INFO ] Creating Storage Domain >>>>>>> [ INFO ] Creating Storage Pool >>>>>>> [ INFO ] Connecting Storage Pool >>>>>>> [ INFO ] Verifying sanlock lockspace initialization >>>>>>> [ INFO ] Creating Image for 'hosted-engine.lockspace' ... >>>>>>> [ INFO ] Image for 'hosted-engine.lockspace' created successfully >>>>>>> [ INFO ] Creating Image for 'hosted-engine.metadata' ... >>>>>>> [ INFO ] Image for 'hosted-engine.metadata' created successfully >>>>>>> [ INFO ] Creating VM Image >>>>>>> [ INFO ] Destroying Storage Pool >>>>>>> [ INFO ] Start monitoring domain >>>>>>> [ INFO ] Configuring VM >>>>>>> [ INFO ] Updating hosted-engine configuration >>>>>>> [ INFO ] Stage: Transaction commit >>>>>>> [ INFO ] Stage: Closing up >>>>>>> [ INFO ] Creating VM >>>>>>> You can now connect to the VM with the following command: >>>>>>> /bin/remote-viewer vnc://localhost:5900 >>>>>>> ... >>>>>>> >>>>>>> What could be the problem? >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Users mailing list >>>>>>> Users@ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On Mon, Jul 25, 2016 at 11:54 AM, <aleksey.maksimov@it-kb.ru> wrote:
What am I supposed to do to successfully deploy oVirt 4? Any ideas?
Can you please try to explicitly configure your DNS with nameserver entries in /etc/resolv.conf, remove DNS1 and DNS2, and set PEERDNS=no for the interface you are going to use?
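The check behind this suggestion can be sketched as follows; a hedged example, not an official tool (the ifcfg bodies and the device name enp3s0 are illustrative; on a real host read the files under /etc/sysconfig/network-scripts/):

```python
# An ifcfg file is "safe" for this workaround once DNS1/DNS2 are gone and
# PEERDNS=no is set, with the nameserver lines living in /etc/resolv.conf
# instead. Report what still needs fixing in a given ifcfg file body.
import re

def ifcfg_problems(text):
    """Return a list of things still to fix in one ifcfg file body."""
    problems = []
    if re.search(r"^DNS[12]=", text, re.MULTILINE):
        problems.append("remove DNS1/DNS2 (move the servers to /etc/resolv.conf)")
    if not re.search(r"^PEERDNS=no\b", text, re.MULTILINE):
        problems.append("set PEERDNS=no so resolv.conf is not rewritten")
    return problems

bad = "DEVICE=enp3s0\nBOOTPROTO=static\nDNS1=10.1.0.2\nDNS2=10.1.0.3\n"
good = "DEVICE=enp3s0\nBOOTPROTO=static\nPEERDNS=no\n"
print(ifcfg_problems(bad))   # two findings
print(ifcfg_problems(good))  # []
```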
25.07.2016, 12:47, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
"Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on?"
Yes. Of course
25.07.2016, 12:27, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 10:56 AM, Piotr Kliczewski <piotr.kliczewski@gmail.com> wrote:
This could be the issue here as well as for BZ #1358530
On Mon, Jul 25, 2016 at 10:53 AM, <aleksey.maksimov@it-kb.ru> wrote:
Could this be due to the fact that the ovirt installer has changed network configuration files (ifcfg-*, resolv.conf) ? After the error in ovirt installation process I see from resolv.conf disappeared on my DNS servers entry and now the server is unable to resolve names.
So it could be related to https://bugzilla.redhat.com/show_bug.cgi?id=1160423
Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on? Can you please try the workaround described here https://bugzilla.redhat.com/show_bug.cgi?id=1160423#c25 ?
25.07.2016, 11:26, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 10:22 AM, <aleksey.maksimov@it-kb.ru> wrote: > # vdsClient -s 0 getVdsCaps > > Traceback (most recent call last): > File "/usr/share/vdsm/vdsClient.py", line 2980, in <module> > code, message = commands[command][0](commandArgs) > File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap > return self.ExecAndExit(self.s.getVdsCapabilities()) > File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ > return self.__send(self.__name, args) > File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request > verbose=self.__verbose > File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request > return self.single_request(host, handler, request_body, verbose) > File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request > self.send_content(h, request_body) > File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content > connection.endheaders(request_body) > File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders > self._send_output(message_body) > File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output > self.send(msg) > File "/usr/lib64/python2.7/httplib.py", line 797, in send > self.connect() > File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect > sock = socket.create_connection((self.host, self.port), self.timeout) > File "/usr/lib64/python2.7/socket.py", line 571, in create_connection > raise err > error: [Errno 101] Network is unreachable
Yaniv, can you please take also a look to this one? it's exactly the opposite of https://bugzilla.redhat.com/1358530 Here the jsonrpcclient works but not the xmlrpc one.
> 25.07.2016, 11:17, "Simone Tiraboschi" <stirabos@redhat.com>:
>> On Mon, Jul 25, 2016 at 7:51 AM, <aleksey.maksimov@it-kb.ru> wrote:
>>> Simone, there is something interesting in the vdsm.log?
>>
>> For what I saw the issue is not related to the storage but to the network.
>> ovirt-hosted-engine-setup uses the jsonrpc client, instead the code
>> from ovirt-hosted-engine-ha still uses the xmlrpc client somewhere and
>> this happens also when the setup asks to create the lockspace volume.
>> It seams that in your case the xmlrpc client could not connect vdsm on
>> the localhost.
>> It could be somehow related to:
>> https://bugzilla.redhat.com/1358530
>>
>> Can you please try executing
>> sudo vdsClient -s 0 getVdsCaps
>> on that host?

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
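The failing call path described above boils down to xmlrpclib's transport calling socket.create_connection on whatever host and port it resolved. A minimal sketch of that probe, assuming vdsm's usual listener port 54321 (the helper name is hypothetical, not part of vdsm):

```python
import errno
import os
import socket

def probe_tcp(host, port, timeout=2.0):
    """Attempt the same TCP connect xmlrpclib's transport would make and
    report the outcome instead of raising a bare traceback."""
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return "connected"
    except OSError as exc:
        if exc.errno == errno.ENETUNREACH:
            # This is the quoted failure: [Errno 101] Network is unreachable
            return "no route to (%s, %s)" % (host, port)
        return os.strerror(exc.errno) if exc.errno else str(exc)
```

Running `probe_tcp('localhost', 54321)` on the affected host should hit the "no route" branch if the xmlrpc client really cannot reach vdsm there.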

Ok.

1) I stopped and disabled the NetworkManager service:
# systemctl stop NetworkManager
# systemctl disable NetworkManager

2) I filled /etc/resolv.conf, removed DNS1 and DNS2, and added PEERDNS=no in the ifcfg-* file.

3) Rebooted the server.

4) Tried to deploy oVirt HE 4 and got the same error:

[ INFO ] Creating Volume Group
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160725143420.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160725142534-t81kwf.log

Any further ideas?

25.07.2016, 13:06, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 11:54 AM, <aleksey.maksimov@it-kb.ru> wrote:
What am I supposed to do to successfully deploy oVirt 4? Any ideas?
Can you please try to explicitly configure your DNS with nameserver under /etc/resolv.conf and remove DNS1 and DNS2 and set PEERDNS=no for the interface you are going to use?
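As a sketch of that workaround — the interface name eth0 and the nameserver addresses are placeholders, and a scratch copy stands in for the real file under /etc/sysconfig/network-scripts/ so the commands are safe to run:

```shell
# Scratch copy standing in for /etc/sysconfig/network-scripts/ifcfg-eth0
CFG=./ifcfg-eth0.example
printf 'BOOTPROTO=none\nDNS1=10.1.0.9\nDNS2=10.1.0.10\n' > "$CFG"

# 1) put the nameservers straight into /etc/resolv.conf (contents shown here)
printf 'nameserver 10.1.0.9\nnameserver 10.1.0.10\n'

# 2) drop DNS1/DNS2 from the ifcfg file and keep the initscripts from
#    rewriting resolv.conf on ifup
sed -i '/^DNS[12]=/d' "$CFG"
echo 'PEERDNS=no' >> "$CFG"

cat "$CFG"
rm -f "$CFG"
```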
25.07.2016, 12:47, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
"Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on?"
Yes. Of course
25.07.2016, 12:27, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 10:56 AM, Piotr Kliczewski <piotr.kliczewski@gmail.com> wrote:
This could be the issue here as well as for BZ #1358530
On Mon, Jul 25, 2016 at 10:53 AM, <aleksey.maksimov@it-kb.ru> wrote:
Could this be due to the fact that the oVirt installer has changed the network configuration files (ifcfg-*, resolv.conf)? After the error in the oVirt installation process, my DNS server entries have disappeared from resolv.conf and now the server is unable to resolve names.
So it could be related to https://bugzilla.redhat.com/show_bug.cgi?id=1160423
Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on? Can you please try the workaround described here https://bugzilla.redhat.com/show_bug.cgi?id=1160423#c25 ?
25.07.2016, 11:26, "Simone Tiraboschi" <stirabos@redhat.com>:
> On Mon, Jul 25, 2016 at 10:22 AM, <aleksey.maksimov@it-kb.ru> wrote:
>> # vdsClient -s 0 getVdsCaps
>>
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/vdsClient.py", line 2980, in <module>
>>     code, message = commands[command][0](commandArgs)
>>   File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap
>>     return self.ExecAndExit(self.s.getVdsCapabilities())
>>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
>>     return self.__send(self.__name, args)
>>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
>>     verbose=self.__verbose
>>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
>>     return self.single_request(host, handler, request_body, verbose)
>>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
>>     self.send_content(h, request_body)
>>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
>>     connection.endheaders(request_body)
>>   File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
>>     self._send_output(message_body)
>>   File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
>>     self.send(msg)
>>   File "/usr/lib64/python2.7/httplib.py", line 797, in send
>>     self.connect()
>>   File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
>>     sock = socket.create_connection((self.host, self.port), self.timeout)
>>   File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
>>     raise err
>> error: [Errno 101] Network is unreachable

On Mon, Jul 25, 2016 at 1:46 PM, <aleksey.maksimov@it-kb.ru> wrote:
What ideas further?
Is your host's hostname resolvable now? Can you please check it with: ping $(python -c 'import socket; print(socket.gethostname())')
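The same check can be done without ping, purely through the resolver — a sketch (the helper name is hypothetical):

```python
import socket

def resolve_own_hostname():
    """Resolve this host's own name, as the ping test above does implicitly.
    Returns (hostname, ipv4_address), or (hostname, None) when resolution
    fails - the situation that would break the deploy."""
    name = socket.gethostname()
    try:
        return name, socket.gethostbyname(name)
    except socket.gaierror:
        return name, None
```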

Yes.

# ping $(python -c 'import socket; print(socket.gethostname())')
PING KOM-AD01-VM31.holding.com (10.1.0.231) 56(84) bytes of data.
64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=2 ttl=64 time=0.015 ms
64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=3 ttl=64 time=0.011 ms
^C
--- KOM-AD01-VM31.holding.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.011/0.018/0.030/0.009 ms

but...

# vdsClient -s 0 getVdsCaps

Traceback (most recent call last):
  File "/usr/share/vdsm/vdsClient.py", line 2980, in <module>
    code, message = commands[command][0](commandArgs)
  File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap
    return self.ExecAndExit(self.s.getVdsCapabilities())
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
    self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
    self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 101] Network is unreachable

25.07.2016, 14:58, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
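For reference, errno 101 in the traceback above is ENETUNREACH, raised by connect() when the kernel has no route to the destination address. Since name resolution and ping to the host's own IPv4 address both work, one hedged guess — an assumption, not a finding from the logs — is that the xmlrpc transport is connecting to a different address or address family than ping used:

```python
import errno
import os

# [Errno 101] is ENETUNREACH: connect() found no route to the destination.
# DNS was fine here (the name resolved), so the missing piece is the route
# to whatever address the xmlrpc transport actually picked.
assert errno.ENETUNREACH == 101
print(os.strerror(errno.ENETUNREACH))  # Network is unreachable (on Linux)
```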
Ok.
1) I stopped and disabled the NetworkManager service:
# systemctl stop NetworkManager
# systemctl disable NetworkManager
2) I filled /etc/resolv.conf, removed DNS1 and DNS2, and added PEERDNS=no in the ifcfg-* file.
3) Rebooted the server.
4) Tried to deploy oVirt HE 4 and got the same error:
[ INFO ] Creating Volume Group
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160725143420.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160725142534-t81kwf.log
Any further ideas?
25.07.2016, 13:06, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 11:54 AM, <aleksey.maksimov@it-kb.ru> wrote:
What am I supposed to do to successfully deploy oVirt 4? Any ideas?
Can you please try to explicitly configure your DNS with nameserver under /etc/resolv.conf and remove DNS1 and DNS2 and set PEERDNS=no for the interface you are going to use?
25.07.2016, 12:47, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
"Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on?"
Yes. Of course
25.07.2016, 12:27, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 10:56 AM, Piotr Kliczewski <piotr.kliczewski@gmail.com> wrote:
This could be the issue here as well as for BZ #1358530
On Mon, Jul 25, 2016 at 10:53 AM, <aleksey.maksimov@it-kb.ru> wrote:
> Could this be due to the fact that the oVirt installer has changed the network configuration files (ifcfg-*, resolv.conf)?
> After the error in the oVirt installation process I see that my DNS server entries have disappeared from resolv.conf, and now the server is unable to resolve names.
So it could be related to https://bugzilla.redhat.com/show_bug.cgi?id=1160423
Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on? Can you please try the workaround described here https://bugzilla.redhat.com/show_bug.cgi?id=1160423#c25 ?
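The suggested workaround boils down to three conditions: explicit nameserver lines in /etc/resolv.conf, no DNS1/DNS2 entries left in the ifcfg file, and PEERDNS=no set for the interface. A rough checker for those conditions might look like this (the function and the idea of validating the files programmatically are illustrative assumptions, not part of the BZ workaround):

```python
import re

def check_dns_workaround(resolv_text, ifcfg_text):
    """Return a list of problems with the DNS workaround, given the
    contents of /etc/resolv.conf and of the relevant ifcfg-* file."""
    problems = []
    if not re.search(r"^\s*nameserver\s+\S+", resolv_text, re.M):
        problems.append("no nameserver entry in /etc/resolv.conf")
    if re.search(r"^\s*DNS[12]=", ifcfg_text, re.M):
        problems.append("DNS1/DNS2 still present in ifcfg file")
    if not re.search(r"^\s*PEERDNS=no\s*$", ifcfg_text, re.M):
        problems.append("PEERDNS=no not set in ifcfg file")
    return problems

if __name__ == "__main__":
    # sample contents after applying the workaround (values are made up)
    good_resolv = "search holding.com\nnameserver 10.1.0.9\n"
    good_ifcfg = "DEVICE=enp3s0\nBOOTPROTO=none\nPEERDNS=no\n"
    print(check_dns_workaround(good_resolv, good_ifcfg))  # -> []
```

An empty result means the three conditions Simone describes are all in place before retrying the deploy.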
> 25.07.2016, 11:26, "Simone Tiraboschi" <stirabos@redhat.com>:
>> On Mon, Jul 25, 2016 at 10:22 AM, <aleksey.maksimov@it-kb.ru> wrote:
>>> # vdsClient -s 0 getVdsCaps
>>>
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/vdsClient.py", line 2980, in <module>
>>>     code, message = commands[command][0](commandArgs)
>>>   File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap
>>>     return self.ExecAndExit(self.s.getVdsCapabilities())
>>>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
>>>     return self.__send(self.__name, args)
>>>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
>>>     verbose=self.__verbose
>>>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
>>>     return self.single_request(host, handler, request_body, verbose)
>>>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
>>>     self.send_content(h, request_body)
>>>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
>>>     connection.endheaders(request_body)
>>>   File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
>>>     self._send_output(message_body)
>>>   File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
>>>     self.send(msg)
>>>   File "/usr/lib64/python2.7/httplib.py", line 797, in send
>>>     self.connect()
>>>   File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
>>>     sock = socket.create_connection((self.host, self.port), self.timeout)
>>>   File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
>>>     raise err
>>> error: [Errno 101] Network is unreachable
>>
>> Yaniv, can you please also take a look at this one?
>> It's exactly the opposite of https://bugzilla.redhat.com/1358530
>> Here the jsonrpc client works but not the xmlrpc one.
>>
>>> 25.07.2016, 11:17, "Simone Tiraboschi" <stirabos@redhat.com>:
>>>> On Mon, Jul 25, 2016 at 7:51 AM, <aleksey.maksimov@it-kb.ru> wrote:
>>>>> Simone, is there something interesting in the vdsm.log?
>>>>
>>>> From what I saw, the issue is not related to the storage but to the network.
>>>> ovirt-hosted-engine-setup uses the jsonrpc client, but the code
>>>> from ovirt-hosted-engine-ha still uses the xmlrpc client somewhere, and
>>>> this also happens when the setup asks to create the lockspace volume.
>>>> It seems that in your case the xmlrpc client could not connect to vdsm on
>>>> the localhost.
>>>> It could be somehow related to:
>>>> https://bugzilla.redhat.com/1358530
>>>>
>>>> Can you please try executing
>>>> sudo vdsClient -s 0 getVdsCaps
>>>> on that host?
>>>>
>>>>> 22.07.2016, 19:36, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
>>>>>> Simone, thanks for the link.
>>>>>> vdsm.log attached
>>>>>>
>>>>>> 22.07.2016, 19:28, "Simone Tiraboschi" <stirabos@redhat.com>:
>>>>>>> On Fri, Jul 22, 2016 at 5:59 PM, <aleksey.maksimov@it-kb.ru> wrote:
>>>>>>>> Thank you for your response, Simone.
>>>>>>>>
>>>>>>>> Log attached.
>>>>>>>
>>>>>>> It seems it comes from VDSM; can you please also attach vdsm.log?
>>>>>>>
>>>>>>>> I don't use ovirt-engine-appliance because I have not found a how-to for ovirt-engine-appliance deployment in a hosted engine configuration.
>>>>>>>
>>>>>>> yum install ovirt-engine-appliance
>>>>>>>
>>>>>>> Then follow the instructions here:
>>>>>>> http://www.ovirt.org/develop/release-management/features/heapplianceflow/
>>>>>>>
>>>>>>>> 22.07.2016, 17:09, "Simone Tiraboschi" <stirabos@redhat.com>:
>>>>>>>>> Hi Aleksey,
>>>>>>>>> Can you please attach hosted-engine-setup logs?
>>>>>>>>>
>>>>>>>>> On Fri, Jul 22, 2016 at 3:46 PM, <aleksey.maksimov@it-kb.ru> wrote:
>>>>>>>>>
>>>>>>>>>> Hello oVirt gurus!
>>>>>>>>>>
>>>>>>>>>> I have a problem with the initial deploy of the oVirt 4.0 hosted engine.
>>>>>>>>>>
>>>>>>>>>> My environment:
>>>>>>>>>> ============================
>>>>>>>>>> * Two HP ProLiant DL360 G5 servers with QLogic FC HBAs connected (with
>>>>>>>>>> multipathd) to an HP 3PAR 7200 storage array
>>>>>>>>>> * On each server CentOS 7.2 Linux is installed (3.10.0-327.22.2.el7.x86_64)
>>>>>>>>>> * On the 3PAR storage I created 2 LUNs for oVirt:
>>>>>>>>>> - First LUN for the oVirt Hosted Engine VM (60GB)
>>>>>>>>>> - Second LUN for all other VMs (2TB)
>>>>>>>>>>
>>>>>>>>>> # multipath -ll
>>>>>>>>>>
>>>>>>>>>> 3par-vv1 (360002ac0000000000000001b0000cec9) dm-0 3PARdata,VV
>>>>>>>>>> size=60G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
>>>>>>>>>> `-+- policy='round-robin 0' prio=50 status=active
>>>>>>>>>>   |- 2:0:1:1 sdd 8:48  active ready running
>>>>>>>>>>   |- 3:0:0:1 sdf 8:80  active ready running
>>>>>>>>>>   |- 2:0:0:1 sdb 8:16  active ready running
>>>>>>>>>>   `- 3:0:1:1 sdh 8:112 active ready running
>>>>>>>>>>
>>>>>>>>>> 3par-vv2 (360002ac000000000000000160000cec9) dm-1 3PARdata,VV
>>>>>>>>>> size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
>>>>>>>>>> `-+- policy='round-robin 0' prio=50 status=active
>>>>>>>>>>   |- 2:0:0:0 sda 8:0  active ready running
>>>>>>>>>>   |- 3:0:0:0 sde 8:64 active ready running
>>>>>>>>>>   |- 2:0:1:0 sdc 8:32 active ready running
>>>>>>>>>>   `- 3:0:1:0 sdg 8:96 active ready running
>>>>>>>>>>
>>>>>>>>>> My steps on the first server (initial deploy of the oVirt 4.0 hosted engine):
>>>>>>>>>> ============================
>>>>>>>>>>
>>>>>>>>>> # systemctl stop NetworkManager
>>>>>>>>>> # systemctl disable NetworkManager
>>>>>>>>>> # yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
>>>>>>>>>> # yum -y install epel-release
>>>>>>>>>> # wget
>>>>>>>>>> http://mirror.yandex.ru/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-1511...
>>>>>>>>>> -P /tmp/
>>>>>>>>>> # yum install ovirt-hosted-engine-setup
>>>>>>>>>> # yum install screen
>>>>>>>>>> # screen -RD
>>>>>>>>>>
>>>>>>>>>> ...in the screen session:
>>>>>>>>>>
>>>>>>>>>> # hosted-engine --deploy
>>>>>>>>>>
>>>>>>>>>> ...
>>>>>>>>>> in the configuration process I chose "fc" as the storage type for the oVirt hosted
>>>>>>>>>> engine VM and selected the 60GB LUN...
>>>>>>>>>> ...
>>>>>>>>>>
>>>>>>>>>> --== CONFIGURATION PREVIEW ==--
>>>>>>>>>>
>>>>>>>>>> ...
>>>>>>>>>> Firewall manager                  : iptables
>>>>>>>>>> Gateway address                   : 10.1.0.1
>>>>>>>>>> Host name for web application     : KOM-AD01-OVIRT1
>>>>>>>>>> Storage Domain type               : fc
>>>>>>>>>> Host ID                           : 1
>>>>>>>>>> LUN ID                            : 360002ac0000000000000001b0000cec9
>>>>>>>>>> Image size GB                     : 40
>>>>>>>>>> Console type                      : vnc
>>>>>>>>>> Memory size MB                    : 4096
>>>>>>>>>> MAC address                       : 00:16:3e:77:1d:07
>>>>>>>>>> Boot type                         : cdrom
>>>>>>>>>> Number of CPUs                    : 2
>>>>>>>>>> ISO image (cdrom boot/cloud-init) : /tmp/CentOS-7-x86_64-NetInstall-1511.iso
>>>>>>>>>
>>>>>>>>> Can I ask why you prefer/need to manually create a VM installing from
>>>>>>>>> a CD instead of using the ready-to-use ovirt-engine-appliance?
>>>>>>>>> Using the appliance makes the setup process a lot shorter and more comfortable.
>>>>>>>>>
>>>>>>>>>> CPU Type                          : model_Penryn
>>>>>>>>>> ...
>>>>>>>>>> and I get an error after the step "Verifying sanlock lockspace initialization"
>>>>>>>>>> ...
>>>>>>>>>>
>>>>>>>>>> [ INFO ] Verifying sanlock lockspace initialization
>>>>>>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network
>>>>>>>>>> is unreachable
>>>>>>>>>> [ INFO ] Stage: Clean up
>>>>>>>>>> [ INFO ] Generating answer file
>>>>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160722124133.conf'
>>>>>>>>>> [ INFO ] Stage: Pre-termination
>>>>>>>>>> [ INFO ] Stage: Termination
>>>>>>>>>> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
>>>>>>>>>> please check the issue, fix and redeploy
>>>>>>>>>> Log file is located at
>>>>>>>>>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160722123404-t26vw0.log
>>>>>>>>>>
>>>>>>>>>> Interestingly
>>>>>>>>>> ============================
>>>>>>>>>> If I try to deploy hosted-engine v3.6, everything goes well in the same
>>>>>>>>>> configuration!!:
>>>>>>>>>>
>>>>>>>>>> ....
>>>>>>>>>> [ INFO ] Stage: Transaction setup
>>>>>>>>>> [ INFO ] Stage: Misc configuration
>>>>>>>>>> [ INFO ] Stage: Package installation
>>>>>>>>>> [ INFO ] Stage: Misc configuration
>>>>>>>>>> [ INFO ] Configuring libvirt
>>>>>>>>>> [ INFO ] Configuring VDSM
>>>>>>>>>> [ INFO ] Starting vdsmd
>>>>>>>>>> [ INFO ] Waiting for VDSM hardware info
>>>>>>>>>> [ INFO ] Configuring the management bridge
>>>>>>>>>> [ INFO ] Creating Volume Group
>>>>>>>>>> [ INFO ] Creating Storage Domain
>>>>>>>>>> [ INFO ] Creating Storage Pool
>>>>>>>>>> [ INFO ] Connecting Storage Pool
>>>>>>>>>> [ INFO ] Verifying sanlock lockspace initialization
>>>>>>>>>> [ INFO ] Creating Image for 'hosted-engine.lockspace' ...
>>>>>>>>>> [ INFO ] Image for 'hosted-engine.lockspace' created successfully
>>>>>>>>>> [ INFO ] Creating Image for 'hosted-engine.metadata' ...
>>>>>>>>>> [ INFO ] Image for 'hosted-engine.metadata' created successfully
>>>>>>>>>> [ INFO ] Creating VM Image
>>>>>>>>>> [ INFO ] Destroying Storage Pool
>>>>>>>>>> [ INFO ] Start monitoring domain
>>>>>>>>>> [ INFO ] Configuring VM
>>>>>>>>>> [ INFO ] Updating hosted-engine configuration
>>>>>>>>>> [ INFO ] Stage: Transaction commit
>>>>>>>>>> [ INFO ] Stage: Closing up
>>>>>>>>>> [ INFO ] Creating VM
>>>>>>>>>> You can now connect to the VM with the following command:
>>>>>>>>>> /bin/remote-viewer vnc://localhost:5900
>>>>>>>>>> ...
>>>>>>>>>>
>>>>>>>>>> What could be the problem?

On Mon, Jul 25, 2016 at 2:03 PM, <aleksey.maksimov@it-kb.ru> wrote:
> Yes.
>
> # ping $(python -c 'import socket; print(socket.gethostname())')
> [...]
> but...

and the output of ss -plutn

> # vdsClient -s 0 getVdsCaps
> [...]
> error: [Errno 101] Network is unreachable

# ss -plutn
Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port
udp   UNCONN 0      0      *:111           *:*   users:(("rpcbind",pid=827,fd=6))
udp   UNCONN 0      0      *:161           *:*   users:(("snmpd",pid=1609,fd=6))
udp   UNCONN 0      0      127.0.0.1:323   *:*   users:(("chronyd",pid=795,fd=1))
udp   UNCONN 0      0      *:959           *:*   users:(("rpcbind",pid=827,fd=7))
udp   UNCONN 0      0      127.0.0.1:25375 *:*   users:(("snmpd",pid=1609,fd=8))
udp   UNCONN 0      0      127.0.0.1:25376 *:*   users:(("cmapeerd",pid=2056,fd=5))
udp   UNCONN 0      0      127.0.0.1:25393 *:*   users:(("cmanicd",pid=2278,fd=3))
udp   UNCONN 0      0      :::111          :::*  users:(("rpcbind",pid=827,fd=9))
udp   UNCONN 0      0      :::959          :::*  users:(("rpcbind",pid=827,fd=10))
tcp   LISTEN 0      128    *:2381          *:*   users:(("hpsmhd",pid=3903,fd=4),("hpsmhd",pid=3901,fd=4),("hpsmhd",pid=3900,fd=4),("hpsmhd",pid=3899,fd=4),("hpsmhd",pid=3898,fd=4),("hpsmhd",pid=3893,fd=4))
tcp   LISTEN 0      128    *:111           *:*   users:(("rpcbind",pid=827,fd=8))
tcp   LISTEN 0      5      *:54322         *:*   users:(("ovirt-imageio-d",pid=753,fd=3))
tcp   LISTEN 0      128    *:22            *:*   users:(("sshd",pid=1606,fd=3))
tcp   LISTEN 0      100    127.0.0.1:25    *:*   users:(("master",pid=1948,fd=13))
tcp   LISTEN 0      128    *:2301          *:*   users:(("hpsmhd",pid=3903,fd=3),("hpsmhd",pid=3901,fd=3),("hpsmhd",pid=3900,fd=3),("hpsmhd",pid=3899,fd=3),("hpsmhd",pid=3898,fd=3),("hpsmhd",pid=3893,fd=3))
tcp   LISTEN 0      30     *:16514         *:*   users:(("libvirtd",pid=10688,fd=13))
tcp   LISTEN 0      128    127.0.0.1:199   *:*   users:(("snmpd",pid=1609,fd=9))
tcp   LISTEN 0      128    :::111          :::*  users:(("rpcbind",pid=827,fd=11))
tcp   LISTEN 0      5      :::54321        :::*  users:(("vdsm",pid=11077,fd=23))
tcp   LISTEN 0      30     :::16514        :::*  users:(("libvirtd",pid=10688,fd=14))

25.07.2016, 15:11, "Simone Tiraboschi" <stirabos@redhat.com>:
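Read against the [Errno 101] error above, one detail of this listing stands out: vdsm (pid 11077) appears only on :::54321, the IPv6 wildcard, while for example ovirt-imageio-d's 54322 sits on the IPv4 wildcard. A small parser over saved ss output — assuming the plain whitespace-separated column layout shown here — can pull out which local addresses a given TCP port is bound to:

```python
def listeners(ss_output, port):
    """Return the local addresses from saved `ss -plutn` output that
    LISTEN on the given TCP port (assumes plain whitespace columns:
    Netid State Recv-Q Send-Q Local:Port Peer:Port ...)."""
    found = []
    for line in ss_output.splitlines():
        fields = line.split()
        if len(fields) < 5 or fields[0] != "tcp" or fields[1] != "LISTEN":
            continue
        local = fields[4]  # e.g. ":::54321" or "127.0.0.1:199"
        addr, _, lport = local.rpartition(":")  # split at the last colon
        if lport == str(port):
            found.append(addr)
    return found

if __name__ == "__main__":
    sample = 'tcp LISTEN 0 5 :::54321 :::* users:(("vdsm",pid=11077,fd=23))'
    print(listeners(sample, 54321))  # -> ['::']
```

Applied to the full listing above it reports only '::' for port 54321 — consistent with the xmlrpc client failing when its connection attempt to localhost does not take the IPv6 path.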
975, in endheaders >>>>> self._send_output(message_body) >>>>> File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output >>>>> self.send(msg) >>>>> File "/usr/lib64/python2.7/httplib.py", line 797, in send >>>>> self.connect() >>>>> File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect >>>>> sock = socket.create_connection((self.host, self.port), self.timeout) >>>>> File "/usr/lib64/python2.7/socket.py", line 571, in create_connection >>>>> raise err >>>>> error: [Errno 101] Network is unreachable >>>> >>>> Yaniv, can you please take also a look to this one? >>>> it's exactly the opposite of https://bugzilla.redhat.com/1358530 >>>> Here the jsonrpcclient works but not the xmlrpc one. >>>> >>>>> 25.07.2016, 11:17, "Simone Tiraboschi" <stirabos@redhat.com>: >>>>>> On Mon, Jul 25, 2016 at 7:51 AM, <aleksey.maksimov@it-kb.ru> wrote: >>>>>>> Simone, there is something interesting in the vdsm.log? >>>>>> >>>>>> For what I saw the issue is not related to the storage but to the network. >>>>>> ovirt-hosted-engine-setup uses the jsonrpc client, instead the code >>>>>> from ovirt-hosted-engine-ha still uses the xmlrpc client somewhere and >>>>>> this happens also when the setup asks to create the lockspace volume. >>>>>> It seams that in your case the xmlrpc client could not connect vdsm on >>>>>> the localhost. >>>>>> It could be somehow related to: >>>>>> https://bugzilla.redhat.com/1358530 >>>>>> >>>>>> Can you please try executing >>>>>> sudo vdsClient -s 0 getVdsCaps >>>>>> on that host? >>>>>> >>>>>>> 22.07.2016, 19:36, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>: >>>>>>>> Simone, thanks for link. >>>>>>>> vdsm.log attached >>>>>>>> >>>>>>>> 22.07.2016, 19:28, "Simone Tiraboschi" <stirabos@redhat.com>: >>>>>>>>> On Fri, Jul 22, 2016 at 5:59 PM, <aleksey.maksimov@it-kb.ru> wrote: >>>>>>>>>> Thank you for your response, Simone. >>>>>>>>>> >>>>>>>>>> Log attached. 
>>>>>>>>> >>>>>>>>> It seams it comes from VDSM, can you please attach also vdsm.log? >>>>>>>>> >>>>>>>>>> I don't use ovirt-engine-appliance because I have not found "how-to" for ovirt-engine-appliance deployment in hosted engine configuration. >>>>>>>>> >>>>>>>>> yum install ovirt-engine-appliance >>>>>>>>> >>>>>>>>> Then follow the instruction here: >>>>>>>>> http://www.ovirt.org/develop/release-management/features/heapplianceflow/ >>>>>>>>> >>>>>>>>>> 22.07.2016, 17:09, "Simone Tiraboschi" <stirabos@redhat.com>: >>>>>>>>>>> Hi Aleksey, >>>>>>>>>>> Can you please attach hosted-engine-setup logs? >>>>>>>>>>> >>>>>>>>>>> On Fri, Jul 22, 2016 at 3:46 PM, <aleksey.maksimov@it-kb.ru> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hello oVirt guru`s ! >>>>>>>>>>>> >>>>>>>>>>>> I have problem with initial deploy of ovirt 4.0 hosted engine. >>>>>>>>>>>> >>>>>>>>>>>> My environment : >>>>>>>>>>>> ============================ >>>>>>>>>>>> * Two servers HP ProLiant DL 360 G5 with Qlogic FC HBA connected (with >>>>>>>>>>>> multipathd) to storage HP 3PAR 7200 >>>>>>>>>>>> * On each server installed CentOS 7.2 Linux (3.10.0-327.22.2.el7.x86_64) >>>>>>>>>>>> * On 3PAR storage I created 2 LUNs for oVirt. 
>>>>>>>>>>>> - First LUN for oVirt Hosted Engine VM (60GB) >>>>>>>>>>>> - Second LUN for all other VMs (2TB) >>>>>>>>>>>> >>>>>>>>>>>> # multipath -ll >>>>>>>>>>>> >>>>>>>>>>>> 3par-vv1 (360002ac0000000000000001b0000cec9) dm-0 3PARdata,VV >>>>>>>>>>>> size=60G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw >>>>>>>>>>>> `-+- policy='round-robin 0' prio=50 status=active >>>>>>>>>>>> |- 2:0:1:1 sdd 8:48 active ready running >>>>>>>>>>>> |- 3:0:0:1 sdf 8:80 active ready running >>>>>>>>>>>> |- 2:0:0:1 sdb 8:16 active ready running >>>>>>>>>>>> `- 3:0:1:1 sdh 8:112 active ready running >>>>>>>>>>>> >>>>>>>>>>>> 3par-vv2 (360002ac000000000000000160000cec9) dm-1 3PARdata,VV >>>>>>>>>>>> size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw >>>>>>>>>>>> `-+- policy='round-robin 0' prio=50 status=active >>>>>>>>>>>> |- 2:0:0:0 sda 8:0 active ready running >>>>>>>>>>>> |- 3:0:0:0 sde 8:64 active ready running >>>>>>>>>>>> |- 2:0:1:0 sdc 8:32 active ready running >>>>>>>>>>>> `- 3:0:1:0 sdg 8:96 active ready running >>>>>>>>>>>> >>>>>>>>>>>> My steps on first server (initial deploy of ovirt 4.0 hosted engine): >>>>>>>>>>>> ============================ >>>>>>>>>>>> >>>>>>>>>>>> # systemctl stop NetworkManager >>>>>>>>>>>> # systemctl disable NetworkManager >>>>>>>>>>>> # yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm >>>>>>>>>>>> # yum -y install epel-release >>>>>>>>>>>> # wget >>>>>>>>>>>> http://mirror.yandex.ru/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-1511... >>>>>>>>>>>> -P /tmp/ >>>>>>>>>>>> # yum install ovirt-hosted-engine-setup >>>>>>>>>>>> # yum install screen >>>>>>>>>>>> # screen -RD >>>>>>>>>>>> >>>>>>>>>>>> ...in screen session : >>>>>>>>>>>> >>>>>>>>>>>> # hosted-engine --deploy >>>>>>>>>>>> >>>>>>>>>>>> ... >>>>>>>>>>>> in configuration process I chose "fc" as storage type for oVirt hosted >>>>>>>>>>>> engine vm and select 60GB LUN... >>>>>>>>>>>> ... 
>>>>>>>>>>>> >>>>>>>>>>>> --== CONFIGURATION PREVIEW ==-- >>>>>>>>>>>> >>>>>>>>>>>> ... >>>>>>>>>>>> Firewall manager : iptables >>>>>>>>>>>> Gateway address : 10.1.0.1 >>>>>>>>>>>> Host name for web application : KOM-AD01-OVIRT1 >>>>>>>>>>>> Storage Domain type : fc >>>>>>>>>>>> Host ID : 1 >>>>>>>>>>>> LUN ID : >>>>>>>>>>>> 360002ac0000000000000001b0000cec9 >>>>>>>>>>>> Image size GB : 40 >>>>>>>>>>>> Console type : vnc >>>>>>>>>>>> Memory size MB : 4096 >>>>>>>>>>>> MAC address : 00:16:3e:77:1d:07 >>>>>>>>>>>> Boot type : cdrom >>>>>>>>>>>> Number of CPUs : 2 >>>>>>>>>>>> ISO image (cdrom boot/cloud-init) : >>>>>>>>>>>> /tmp/CentOS-7-x86_64-NetInstall-1511.iso >>>>>>>>>>> >>>>>>>>>>> Can I ask why you prefer/need to manually create a VM installing from >>>>>>>>>>> a CD instead of using the ready-to-use ovirt-engine-appliance? >>>>>>>>>>> Using the appliance makes the setup process a lot shorted and more comfortable. >>>>>>>>>>> >>>>>>>>>>>> CPU Type : model_Penryn >>>>>>>>>>>> ... >>>>>>>>>>>> and get error after step "Verifying sanlock lockspace initialization" >>>>>>>>>>>> ... 
>>>>>>>>>>>> >>>>>>>>>>>> [ INFO ] Verifying sanlock lockspace initialization >>>>>>>>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network >>>>>>>>>>>> is unreachable >>>>>>>>>>>> [ INFO ] Stage: Clean up >>>>>>>>>>>> [ INFO ] Generating answer file >>>>>>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160722124133.conf' >>>>>>>>>>>> [ INFO ] Stage: Pre-termination >>>>>>>>>>>> [ INFO ] Stage: Termination >>>>>>>>>>>> [ ERROR ] Hosted Engine deployment failed: this system is not reliable, >>>>>>>>>>>> please check the issue, fix and redeploy >>>>>>>>>>>> Log file is located at >>>>>>>>>>>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160722123404-t26vw0.log >>>>>>>>>>>> >>>>>>>>>>>> Interestingly >>>>>>>>>>>> ============================ >>>>>>>>>>>> If I try to deploy hosted-engine v3.6, everything goes well in the same >>>>>>>>>>>> configuration !! : >>>>>>>>>>>> >>>>>>>>>>>> .... >>>>>>>>>>>> [ INFO ] Stage: Transaction setup >>>>>>>>>>>> [ INFO ] Stage: Misc configuration >>>>>>>>>>>> [ INFO ] Stage: Package installation >>>>>>>>>>>> [ INFO ] Stage: Misc configuration >>>>>>>>>>>> [ INFO ] Configuring libvirt >>>>>>>>>>>> [ INFO ] Configuring VDSM >>>>>>>>>>>> [ INFO ] Starting vdsmd >>>>>>>>>>>> [ INFO ] Waiting for VDSM hardware info >>>>>>>>>>>> [ INFO ] Configuring the management bridge >>>>>>>>>>>> [ INFO ] Creating Volume Group >>>>>>>>>>>> [ INFO ] Creating Storage Domain >>>>>>>>>>>> [ INFO ] Creating Storage Pool >>>>>>>>>>>> [ INFO ] Connecting Storage Pool >>>>>>>>>>>> [ INFO ] Verifying sanlock lockspace initialization >>>>>>>>>>>> [ INFO ] Creating Image for 'hosted-engine.lockspace' ... >>>>>>>>>>>> [ INFO ] Image for 'hosted-engine.lockspace' created successfully >>>>>>>>>>>> [ INFO ] Creating Image for 'hosted-engine.metadata' ... 
>>>>>>>>>>>> [ INFO ] Image for 'hosted-engine.metadata' created successfully >>>>>>>>>>>> [ INFO ] Creating VM Image >>>>>>>>>>>> [ INFO ] Destroying Storage Pool >>>>>>>>>>>> [ INFO ] Start monitoring domain >>>>>>>>>>>> [ INFO ] Configuring VM >>>>>>>>>>>> [ INFO ] Updating hosted-engine configuration >>>>>>>>>>>> [ INFO ] Stage: Transaction commit >>>>>>>>>>>> [ INFO ] Stage: Closing up >>>>>>>>>>>> [ INFO ] Creating VM >>>>>>>>>>>> You can now connect to the VM with the following command: >>>>>>>>>>>> /bin/remote-viewer vnc://localhost:5900 >>>>>>>>>>>> ... >>>>>>>>>>>> >>>>>>>>>>>> What could be the problem? >>>>>>>>>>>> >>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>> Users mailing list >>>>>>>>>>>> Users@ovirt.org >>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>> _______________________________________________ >>> Users mailing list >>> Users@ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On Mon, Jul 25, 2016 at 2:15 PM, <aleksey.maksimov@it-kb.ru> wrote:
# ss -plutn
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 *:111 *:* users:(("rpcbind",pid=827,fd=6))
udp UNCONN 0 0 *:161 *:* users:(("snmpd",pid=1609,fd=6))
udp UNCONN 0 0 127.0.0.1:323 *:* users:(("chronyd",pid=795,fd=1))
udp UNCONN 0 0 *:959 *:* users:(("rpcbind",pid=827,fd=7))
udp UNCONN 0 0 127.0.0.1:25375 *:* users:(("snmpd",pid=1609,fd=8))
udp UNCONN 0 0 127.0.0.1:25376 *:* users:(("cmapeerd",pid=2056,fd=5))
udp UNCONN 0 0 127.0.0.1:25393 *:* users:(("cmanicd",pid=2278,fd=3))
udp UNCONN 0 0 :::111 :::* users:(("rpcbind",pid=827,fd=9))
udp UNCONN 0 0 :::959 :::* users:(("rpcbind",pid=827,fd=10))
tcp LISTEN 0 128 *:2381 *:* users:(("hpsmhd",pid=3903,fd=4),("hpsmhd",pid=3901,fd=4),("hpsmhd",pid=3900,fd=4),("hpsmhd",pid=3899,fd=4),("hpsmhd",pid=3898,fd=4),("hpsmhd",pid=3893,fd=4))
tcp LISTEN 0 128 *:111 *:* users:(("rpcbind",pid=827,fd=8))
tcp LISTEN 0 5 *:54322 *:* users:(("ovirt-imageio-d",pid=753,fd=3))
tcp LISTEN 0 128 *:22 *:* users:(("sshd",pid=1606,fd=3))
tcp LISTEN 0 100 127.0.0.1:25 *:* users:(("master",pid=1948,fd=13))
tcp LISTEN 0 128 *:2301 *:* users:(("hpsmhd",pid=3903,fd=3),("hpsmhd",pid=3901,fd=3),("hpsmhd",pid=3900,fd=3),("hpsmhd",pid=3899,fd=3),("hpsmhd",pid=3898,fd=3),("hpsmhd",pid=3893,fd=3))
tcp LISTEN 0 30 *:16514 *:* users:(("libvirtd",pid=10688,fd=13))
tcp LISTEN 0 128 127.0.0.1:199 *:* users:(("snmpd",pid=1609,fd=9))
tcp LISTEN 0 128 :::111 :::* users:(("rpcbind",pid=827,fd=11))
tcp LISTEN 0 5 :::54321 :::* users:(("vdsm",pid=11077,fd=23))
tcp LISTEN 0 30 :::16514 :::* users:(("libvirtd",pid=10688,fd=14))

VDSM is properly bound over IPv6 (:::54321). Can you please check whether you can connect to VDSM with:
telnet kom-ad01-vm31.holding.com 54321
and with:
telnet ::1 54321
?
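The detail that matters in this listing is the local-address column: *:PORT entries are IPv4 listeners, :::PORT entries are IPv6 listeners, and vdsm's port 54321 appears only in the IPv6 form. A sketch of extracting that programmatically (illustrative only; this is not how hosted-engine-setup inspects the host):

```python
def listening_families(ss_lines, port):
    """Return the address families ('ipv4'/'ipv6') of TCP LISTEN sockets
    on `port`, given lines of `ss -plutn` output."""
    suffix = ":%d" % port
    families = set()
    for line in ss_lines:
        parts = line.split()
        if len(parts) < 5 or parts[0] != "tcp" or parts[1] != "LISTEN":
            continue
        local = parts[4]  # e.g. '*:54322' (IPv4) or ':::54321' (IPv6)
        if local.endswith(suffix):
            families.add("ipv6" if local.startswith(":::") else "ipv4")
    return families


ss_output = [
    'tcp LISTEN 0 5 *:54322 *:* users:(("ovirt-imageio-d",pid=753,fd=3))',
    'tcp LISTEN 0 5 :::54321 :::* users:(("vdsm",pid=11077,fd=23))',
]
print(listening_families(ss_output, 54321))  # {'ipv6'}
```

Run against the full listing above, port 54321 yields only 'ipv6', which is why an IPv4-only client path can still fail even though the daemon is up.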
25.07.2016, 15:11, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 2:03 PM, <aleksey.maksimov@it-kb.ru> wrote:
Yes.
# ping $(python -c 'import socket; print(socket.gethostname())')
PING KOM-AD01-VM31.holding.com (10.1.0.231) 56(84) bytes of data.
64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=2 ttl=64 time=0.015 ms
64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=3 ttl=64 time=0.011 ms
^C
--- KOM-AD01-VM31.holding.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.011/0.018/0.030/0.009 ms
but...
and the output of ss -plutn
# vdsClient -s 0 getVdsCaps
error: [Errno 101] Network is unreachable

telnet kom-ad01-vm31.holding.com 54321 = connection succeeded

telnet ::1 54321
Trying ::1...
telnet: connect to address ::1: Network is unreachable
(IPv6 on my server is disabled)

25.07.2016, 15:35, "Simone Tiraboschi" <stirabos@redhat.com>:
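The telnet results confirm the mismatch: vdsm listens on the IPv6 wildcard, but IPv6 is disabled on the host, so any connection attempt over IPv6 (such as the xmlrpc client's) dies with ENETUNREACH. One way to check the kernel toggle directly — the sysctl path is the standard one on RHEL/CentOS 7, but the helper functions themselves are a hypothetical sketch:

```python
def ipv6_disabled(sysctl_value):
    """Interpret the content of /proc/sys/net/ipv6/conf/all/disable_ipv6."""
    return sysctl_value.strip() == "1"


def host_ipv6_disabled(path="/proc/sys/net/ipv6/conf/all/disable_ipv6"):
    """True if IPv6 is disabled; also True when the file is absent
    (no IPv6 support loaded at all)."""
    try:
        with open(path) as fh:
            return ipv6_disabled(fh.read())
    except OSError:
        return True
```

The shell equivalent is `sysctl net.ipv6.conf.all.disable_ipv6`; a value of 1 (or a missing ipv6 module) explains exactly the `telnet ::1` failure above.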
On Mon, Jul 25, 2016 at 2:15 PM, <aleksey.maksimov@it-kb.ru> wrote:
# ss -plutn
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 *:111 *:* users:(("rpcbind",pid=827,fd=6)) udp UNCONN 0 0 *:161 *:* users:(("snmpd",pid=1609,fd=6)) udp UNCONN 0 0 127.0.0.1:323 *:* users:(("chronyd",pid=795,fd=1)) udp UNCONN 0 0 *:959 *:* users:(("rpcbind",pid=827,fd=7)) udp UNCONN 0 0 127.0.0.1:25375 *:* users:(("snmpd",pid=1609,fd=8)) udp UNCONN 0 0 127.0.0.1:25376 *:* users:(("cmapeerd",pid=2056,fd=5)) udp UNCONN 0 0 127.0.0.1:25393 *:* users:(("cmanicd",pid=2278,fd=3)) udp UNCONN 0 0 :::111 :::* users:(("rpcbind",pid=827,fd=9)) udp UNCONN 0 0 :::959 :::* users:(("rpcbind",pid=827,fd=10)) tcp LISTEN 0 128 *:2381 *:* users:(("hpsmhd",pid=3903,fd=4),("hpsmhd",pid=3901,fd=4),("hpsmhd",pid=3900,fd=4),("hpsmhd",pid=3899,fd=4),("hpsmhd",pid=3898,fd=4),("hpsmhd",pid=3893,fd=4)) tcp LISTEN 0 128 *:111 *:* users:(("rpcbind",pid=827,fd=8)) tcp LISTEN 0 5 *:54322 *:* users:(("ovirt-imageio-d",pid=753,fd=3)) tcp LISTEN 0 128 *:22 *:* users:(("sshd",pid=1606,fd=3)) tcp LISTEN 0 100 127.0.0.1:25 *:* users:(("master",pid=1948,fd=13)) tcp LISTEN 0 128 *:2301 *:* users:(("hpsmhd",pid=3903,fd=3),("hpsmhd",pid=3901,fd=3),("hpsmhd",pid=3900,fd=3),("hpsmhd",pid=3899,fd=3),("hpsmhd",pid=3898,fd=3),("hpsmhd",pid=3893,fd=3)) tcp LISTEN 0 30 *:16514 *:* users:(("libvirtd",pid=10688,fd=13)) tcp LISTEN 0 128 127.0.0.1:199 *:* users:(("snmpd",pid=1609,fd=9)) tcp LISTEN 0 128 :::111 :::* users:(("rpcbind",pid=827,fd=11)) tcp LISTEN 0 5 :::54321 :::* users:(("vdsm",pid=11077,fd=23))
vdsm is properly bind over ipv6.
Can you please check if you can connect to vdsm with: telnet kom-ad01-vm31.holding.com 54321 and with telnet ::1 54321 ?
tcp LISTEN 0 30 :::16514 :::* users:(("libvirtd",pid=10688,fd=14))
25.07.2016, 15:11, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 2:03 PM, <aleksey.maksimov@it-kb.ru> wrote:
Yes.
# ping $(python -c 'import socket; print(socket.gethostname())')
PING KOM-AD01-VM31.holding.com (10.1.0.231) 56(84) bytes of data. 64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=1 ttl=64 time=0.030 ms 64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=2 ttl=64 time=0.015 ms 64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=3 ttl=64 time=0.011 ms ^C --- KOM-AD01-VM31.holding.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2001ms rtt min/avg/max/mdev = 0.011/0.018/0.030/0.009 ms
but...
and the output of ss -plutn
# vdsClient -s 0 getVdsCaps
Traceback (most recent call last): File "/usr/share/vdsm/vdsClient.py", line 2980, in <module> code, message = commands[command][0](commandArgs) File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap return self.ExecAndExit(self.s.getVdsCapabilities()) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request verbose=self.__verbose File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request return self.single_request(host, handler, request_body, verbose) File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request self.send_content(h, request_body) File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content connection.endheaders(request_body) File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders self._send_output(message_body) File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output self.send(msg) File "/usr/lib64/python2.7/httplib.py", line 797, in send self.connect() File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect sock = socket.create_connection((self.host, self.port), self.timeout) File "/usr/lib64/python2.7/socket.py", line 571, in create_connection raise err error: [Errno 101] Network is unreachable
25.07.2016, 14:58, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
Ok.
1) I stopped and disabled the NetworkManager service:
# systemctl stop NetworkManager
# systemctl disable NetworkManager
2) I filled in /etc/resolv.conf, removed DNS1 and DNS2, and added PEERDNS=no in the ifcfg-* file.
3) Reboot server
4) Tried to deploy oVirt HE 4 and got the same error:
[ INFO ] Creating Volume Group
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160725143420.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160725142534-t81kwf.log
Any further ideas?
25.07.2016, 13:06, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 11:54 AM, <aleksey.maksimov@it-kb.ru> wrote:
> What am I supposed to do to successfully deploy ovirt 4?
> Any ideas?
Can you please try to explicitly configure your DNS with a nameserver entry in /etc/resolv.conf, remove DNS1 and DNS2, and set PEERDNS=no for the interface you are going to use?
> 25.07.2016, 12:47, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>: >> "Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on?" >> >> Yes. Of course >> >> 25.07.2016, 12:27, "Simone Tiraboschi" <stirabos@redhat.com>: >>> On Mon, Jul 25, 2016 at 10:56 AM, Piotr Kliczewski >>> <piotr.kliczewski@gmail.com> wrote: >>>> This could be the issue here as well as for BZ #1358530 >>>> >>>> On Mon, Jul 25, 2016 at 10:53 AM, <aleksey.maksimov@it-kb.ru> wrote: >>>>> Could this be due to the fact that the ovirt installer has changed network configuration files (ifcfg-*, resolv.conf) ? >>>>> After the error in ovirt installation process I see from resolv.conf disappeared on my DNS servers entry and now the server is unable to resolve names. >>> >>> So it could be related to https://bugzilla.redhat.com/show_bug.cgi?id=1160423 >>> >>> Aleksey, was your DNS configured with DNS1 and DNS2 just on the >>> interface you used to create the management bridge on? >>> Can you please try the workaround described here >>> https://bugzilla.redhat.com/show_bug.cgi?id=1160423#c25 ? 
>>> >>>>> 25.07.2016, 11:26, "Simone Tiraboschi" <stirabos@redhat.com>: >>>>>> On Mon, Jul 25, 2016 at 10:22 AM, <aleksey.maksimov@it-kb.ru> wrote: >>>>>>> # vdsClient -s 0 getVdsCaps >>>>>>> >>>>>>> Traceback (most recent call last): >>>>>>> File "/usr/share/vdsm/vdsClient.py", line 2980, in <module> >>>>>>> code, message = commands[command][0](commandArgs) >>>>>>> File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap >>>>>>> return self.ExecAndExit(self.s.getVdsCapabilities()) >>>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ >>>>>>> return self.__send(self.__name, args) >>>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request >>>>>>> verbose=self.__verbose >>>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request >>>>>>> return self.single_request(host, handler, request_body, verbose) >>>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request >>>>>>> self.send_content(h, request_body) >>>>>>> File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content >>>>>>> connection.endheaders(request_body) >>>>>>> File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders >>>>>>> self._send_output(message_body) >>>>>>> File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output >>>>>>> self.send(msg) >>>>>>> File "/usr/lib64/python2.7/httplib.py", line 797, in send >>>>>>> self.connect() >>>>>>> File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect >>>>>>> sock = socket.create_connection((self.host, self.port), self.timeout) >>>>>>> File "/usr/lib64/python2.7/socket.py", line 571, in create_connection >>>>>>> raise err >>>>>>> error: [Errno 101] Network is unreachable >>>>>> >>>>>> Yaniv, can you please take also a look to this one? >>>>>> it's exactly the opposite of https://bugzilla.redhat.com/1358530 >>>>>> Here the jsonrpcclient works but not the xmlrpc one. 
>>>>>> >>>>>>> 25.07.2016, 11:17, "Simone Tiraboschi" <stirabos@redhat.com>: >>>>>>>> On Mon, Jul 25, 2016 at 7:51 AM, <aleksey.maksimov@it-kb.ru> wrote: >>>>>>>>> Simone, there is something interesting in the vdsm.log? >>>>>>>> >>>>>>>> For what I saw the issue is not related to the storage but to the network. >>>>>>>> ovirt-hosted-engine-setup uses the jsonrpc client, instead the code >>>>>>>> from ovirt-hosted-engine-ha still uses the xmlrpc client somewhere and >>>>>>>> this happens also when the setup asks to create the lockspace volume. >>>>>>>> It seams that in your case the xmlrpc client could not connect vdsm on >>>>>>>> the localhost. >>>>>>>> It could be somehow related to: >>>>>>>> https://bugzilla.redhat.com/1358530 >>>>>>>> >>>>>>>> Can you please try executing >>>>>>>> sudo vdsClient -s 0 getVdsCaps >>>>>>>> on that host? >>>>>>>> >>>>>>>>> 22.07.2016, 19:36, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>: >>>>>>>>>> Simone, thanks for link. >>>>>>>>>> vdsm.log attached >>>>>>>>>> >>>>>>>>>> 22.07.2016, 19:28, "Simone Tiraboschi" <stirabos@redhat.com>: >>>>>>>>>>> On Fri, Jul 22, 2016 at 5:59 PM, <aleksey.maksimov@it-kb.ru> wrote: >>>>>>>>>>>> Thank you for your response, Simone. >>>>>>>>>>>> >>>>>>>>>>>> Log attached. >>>>>>>>>>> >>>>>>>>>>> It seams it comes from VDSM, can you please attach also vdsm.log? >>>>>>>>>>> >>>>>>>>>>>> I don't use ovirt-engine-appliance because I have not found "how-to" for ovirt-engine-appliance deployment in hosted engine configuration. >>>>>>>>>>> >>>>>>>>>>> yum install ovirt-engine-appliance >>>>>>>>>>> >>>>>>>>>>> Then follow the instruction here: >>>>>>>>>>> http://www.ovirt.org/develop/release-management/features/heapplianceflow/ >>>>>>>>>>> >>>>>>>>>>>> 22.07.2016, 17:09, "Simone Tiraboschi" <stirabos@redhat.com>: >>>>>>>>>>>>> Hi Aleksey, >>>>>>>>>>>>> Can you please attach hosted-engine-setup logs? 
>>>>>>>>>>>>> >>>>>>>>>>>>> On Fri, Jul 22, 2016 at 3:46 PM, <aleksey.maksimov@it-kb.ru> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hello oVirt guru`s ! >>>>>>>>>>>>>> >>>>>>>>>>>>>> I have problem with initial deploy of ovirt 4.0 hosted engine. >>>>>>>>>>>>>> >>>>>>>>>>>>>> My environment : >>>>>>>>>>>>>> ============================ >>>>>>>>>>>>>> * Two servers HP ProLiant DL 360 G5 with Qlogic FC HBA connected (with >>>>>>>>>>>>>> multipathd) to storage HP 3PAR 7200 >>>>>>>>>>>>>> * On each server installed CentOS 7.2 Linux (3.10.0-327.22.2.el7.x86_64) >>>>>>>>>>>>>> * On 3PAR storage I created 2 LUNs for oVirt. >>>>>>>>>>>>>> - First LUN for oVirt Hosted Engine VM (60GB) >>>>>>>>>>>>>> - Second LUN for all other VMs (2TB) >>>>>>>>>>>>>> >>>>>>>>>>>>>> # multipath -ll >>>>>>>>>>>>>> >>>>>>>>>>>>>> 3par-vv1 (360002ac0000000000000001b0000cec9) dm-0 3PARdata,VV >>>>>>>>>>>>>> size=60G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw >>>>>>>>>>>>>> `-+- policy='round-robin 0' prio=50 status=active >>>>>>>>>>>>>> |- 2:0:1:1 sdd 8:48 active ready running >>>>>>>>>>>>>> |- 3:0:0:1 sdf 8:80 active ready running >>>>>>>>>>>>>> |- 2:0:0:1 sdb 8:16 active ready running >>>>>>>>>>>>>> `- 3:0:1:1 sdh 8:112 active ready running >>>>>>>>>>>>>> >>>>>>>>>>>>>> 3par-vv2 (360002ac000000000000000160000cec9) dm-1 3PARdata,VV >>>>>>>>>>>>>> size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw >>>>>>>>>>>>>> `-+- policy='round-robin 0' prio=50 status=active >>>>>>>>>>>>>> |- 2:0:0:0 sda 8:0 active ready running >>>>>>>>>>>>>> |- 3:0:0:0 sde 8:64 active ready running >>>>>>>>>>>>>> |- 2:0:1:0 sdc 8:32 active ready running >>>>>>>>>>>>>> `- 3:0:1:0 sdg 8:96 active ready running >>>>>>>>>>>>>> >>>>>>>>>>>>>> My steps on first server (initial deploy of ovirt 4.0 hosted engine): >>>>>>>>>>>>>> ============================ >>>>>>>>>>>>>> >>>>>>>>>>>>>> # systemctl stop NetworkManager >>>>>>>>>>>>>> # systemctl disable NetworkManager >>>>>>>>>>>>>> # yum -y install 
http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm >>>>>>>>>>>>>> # yum -y install epel-release >>>>>>>>>>>>>> # wget >>>>>>>>>>>>>> http://mirror.yandex.ru/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-1511... >>>>>>>>>>>>>> -P /tmp/ >>>>>>>>>>>>>> # yum install ovirt-hosted-engine-setup >>>>>>>>>>>>>> # yum install screen >>>>>>>>>>>>>> # screen -RD >>>>>>>>>>>>>> >>>>>>>>>>>>>> ...in screen session : >>>>>>>>>>>>>> >>>>>>>>>>>>>> # hosted-engine --deploy >>>>>>>>>>>>>> >>>>>>>>>>>>>> ... >>>>>>>>>>>>>> in configuration process I chose "fc" as storage type for oVirt hosted >>>>>>>>>>>>>> engine vm and select 60GB LUN... >>>>>>>>>>>>>> ... >>>>>>>>>>>>>> >>>>>>>>>>>>>> --== CONFIGURATION PREVIEW ==-- >>>>>>>>>>>>>> >>>>>>>>>>>>>> ... >>>>>>>>>>>>>> Firewall manager : iptables >>>>>>>>>>>>>> Gateway address : 10.1.0.1 >>>>>>>>>>>>>> Host name for web application : KOM-AD01-OVIRT1 >>>>>>>>>>>>>> Storage Domain type : fc >>>>>>>>>>>>>> Host ID : 1 >>>>>>>>>>>>>> LUN ID : >>>>>>>>>>>>>> 360002ac0000000000000001b0000cec9 >>>>>>>>>>>>>> Image size GB : 40 >>>>>>>>>>>>>> Console type : vnc >>>>>>>>>>>>>> Memory size MB : 4096 >>>>>>>>>>>>>> MAC address : 00:16:3e:77:1d:07 >>>>>>>>>>>>>> Boot type : cdrom >>>>>>>>>>>>>> Number of CPUs : 2 >>>>>>>>>>>>>> ISO image (cdrom boot/cloud-init) : >>>>>>>>>>>>>> /tmp/CentOS-7-x86_64-NetInstall-1511.iso >>>>>>>>>>>>> >>>>>>>>>>>>> Can I ask why you prefer/need to manually create a VM installing from >>>>>>>>>>>>> a CD instead of using the ready-to-use ovirt-engine-appliance? >>>>>>>>>>>>> Using the appliance makes the setup process a lot shorted and more comfortable. >>>>>>>>>>>>> >>>>>>>>>>>>>> CPU Type : model_Penryn >>>>>>>>>>>>>> ... >>>>>>>>>>>>>> and get error after step "Verifying sanlock lockspace initialization" >>>>>>>>>>>>>> ... 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> [ INFO ] Verifying sanlock lockspace initialization >>>>>>>>>>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network >>>>>>>>>>>>>> is unreachable >>>>>>>>>>>>>> [ INFO ] Stage: Clean up >>>>>>>>>>>>>> [ INFO ] Generating answer file >>>>>>>>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160722124133.conf' >>>>>>>>>>>>>> [ INFO ] Stage: Pre-termination >>>>>>>>>>>>>> [ INFO ] Stage: Termination >>>>>>>>>>>>>> [ ERROR ] Hosted Engine deployment failed: this system is not reliable, >>>>>>>>>>>>>> please check the issue, fix and redeploy >>>>>>>>>>>>>> Log file is located at >>>>>>>>>>>>>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160722123404-t26vw0.log >>>>>>>>>>>>>> >>>>>>>>>>>>>> Interestingly >>>>>>>>>>>>>> ============================ >>>>>>>>>>>>>> If I try to deploy hosted-engine v3.6, everything goes well in the same >>>>>>>>>>>>>> configuration !! : >>>>>>>>>>>>>> >>>>>>>>>>>>>> .... >>>>>>>>>>>>>> [ INFO ] Stage: Transaction setup >>>>>>>>>>>>>> [ INFO ] Stage: Misc configuration >>>>>>>>>>>>>> [ INFO ] Stage: Package installation >>>>>>>>>>>>>> [ INFO ] Stage: Misc configuration >>>>>>>>>>>>>> [ INFO ] Configuring libvirt >>>>>>>>>>>>>> [ INFO ] Configuring VDSM >>>>>>>>>>>>>> [ INFO ] Starting vdsmd >>>>>>>>>>>>>> [ INFO ] Waiting for VDSM hardware info >>>>>>>>>>>>>> [ INFO ] Configuring the management bridge >>>>>>>>>>>>>> [ INFO ] Creating Volume Group >>>>>>>>>>>>>> [ INFO ] Creating Storage Domain >>>>>>>>>>>>>> [ INFO ] Creating Storage Pool >>>>>>>>>>>>>> [ INFO ] Connecting Storage Pool >>>>>>>>>>>>>> [ INFO ] Verifying sanlock lockspace initialization >>>>>>>>>>>>>> [ INFO ] Creating Image for 'hosted-engine.lockspace' ... >>>>>>>>>>>>>> [ INFO ] Image for 'hosted-engine.lockspace' created successfully >>>>>>>>>>>>>> [ INFO ] Creating Image for 'hosted-engine.metadata' ... 
>>>>>>>>>>>>>> [ INFO ] Image for 'hosted-engine.metadata' created successfully >>>>>>>>>>>>>> [ INFO ] Creating VM Image >>>>>>>>>>>>>> [ INFO ] Destroying Storage Pool >>>>>>>>>>>>>> [ INFO ] Start monitoring domain >>>>>>>>>>>>>> [ INFO ] Configuring VM >>>>>>>>>>>>>> [ INFO ] Updating hosted-engine configuration >>>>>>>>>>>>>> [ INFO ] Stage: Transaction commit >>>>>>>>>>>>>> [ INFO ] Stage: Closing up >>>>>>>>>>>>>> [ INFO ] Creating VM >>>>>>>>>>>>>> You can now connect to the VM with the following command: >>>>>>>>>>>>>> /bin/remote-viewer vnc://localhost:5900 >>>>>>>>>>>>>> ... >>>>>>>>>>>>>> >>>>>>>>>>>>>> What could be the problem? >>>>>>>>>>>>>> >>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>> Users mailing list >>>>>>>>>>>>>> Users@ovirt.org >>>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users@ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >> >> _______________________________________________ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On Mon, Jul 25, 2016 at 2:38 PM, <aleksey.maksimov@it-kb.ru> wrote:
telnet kom-ad01-vm31.holding.com 54321 = successful connection
telnet ::1 54321
Trying ::1...
telnet: connect to address ::1: Network is unreachable
(ipv6 on my server disabled)
Ok, so the issue seems to be here: by default vdsm now binds on :: and its heuristic can end up using IPv6. See this one: https://bugzilla.redhat.com/show_bug.cgi?id=1350883
Can you please try enabling IPv6 on your host, or setting management_ip = 0.0.0.0 under the [addresses] section in /etc/vdsm/vdsm.conf, and then restarting vdsm?
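The suggested workaround amounts to adding one key to /etc/vdsm/vdsm.conf. A hedged sketch, in Python 3 for brevity (the hosts in this thread run Python 2.7, where the module is named ConfigParser); the section and key names are taken from the message above and BZ 1350883, so verify them against your vdsm version:

```python
import io
from configparser import ConfigParser

conf = ConfigParser()
# On a real host you would first load the existing file:
# conf.read("/etc/vdsm/vdsm.conf")
if not conf.has_section("addresses"):
    conf.add_section("addresses")
conf.set("addresses", "management_ip", "0.0.0.0")  # force vdsm onto an IPv4 bind

buf = io.StringIO()
conf.write(buf)
print(buf.getvalue())
```

After writing the real file, restarting vdsm (`systemctl restart vdsmd`) and re-running the deploy should pick up the new bind address.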
25.07.2016, 15:35, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 2:15 PM, <aleksey.maksimov@it-kb.ru> wrote:
# ss -plutn
Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port
udp   UNCONN 0      0      *:111            *:*   users:(("rpcbind",pid=827,fd=6))
udp   UNCONN 0      0      *:161            *:*   users:(("snmpd",pid=1609,fd=6))
udp   UNCONN 0      0      127.0.0.1:323    *:*   users:(("chronyd",pid=795,fd=1))
udp   UNCONN 0      0      *:959            *:*   users:(("rpcbind",pid=827,fd=7))
udp   UNCONN 0      0      127.0.0.1:25375  *:*   users:(("snmpd",pid=1609,fd=8))
udp   UNCONN 0      0      127.0.0.1:25376  *:*   users:(("cmapeerd",pid=2056,fd=5))
udp   UNCONN 0      0      127.0.0.1:25393  *:*   users:(("cmanicd",pid=2278,fd=3))
udp   UNCONN 0      0      :::111           :::*  users:(("rpcbind",pid=827,fd=9))
udp   UNCONN 0      0      :::959           :::*  users:(("rpcbind",pid=827,fd=10))
tcp   LISTEN 0      128    *:2381           *:*   users:(("hpsmhd",pid=3903,fd=4),("hpsmhd",pid=3901,fd=4),("hpsmhd",pid=3900,fd=4),("hpsmhd",pid=3899,fd=4),("hpsmhd",pid=3898,fd=4),("hpsmhd",pid=3893,fd=4))
tcp   LISTEN 0      128    *:111            *:*   users:(("rpcbind",pid=827,fd=8))
tcp   LISTEN 0      5      *:54322          *:*   users:(("ovirt-imageio-d",pid=753,fd=3))
tcp   LISTEN 0      128    *:22             *:*   users:(("sshd",pid=1606,fd=3))
tcp   LISTEN 0      100    127.0.0.1:25     *:*   users:(("master",pid=1948,fd=13))
tcp   LISTEN 0      128    *:2301           *:*   users:(("hpsmhd",pid=3903,fd=3),("hpsmhd",pid=3901,fd=3),("hpsmhd",pid=3900,fd=3),("hpsmhd",pid=3899,fd=3),("hpsmhd",pid=3898,fd=3),("hpsmhd",pid=3893,fd=3))
tcp   LISTEN 0      30     *:16514          *:*   users:(("libvirtd",pid=10688,fd=13))
tcp   LISTEN 0      128    127.0.0.1:199    *:*   users:(("snmpd",pid=1609,fd=9))
tcp   LISTEN 0      128    :::111           :::*  users:(("rpcbind",pid=827,fd=11))
tcp   LISTEN 0      5      :::54321         :::*  users:(("vdsm",pid=11077,fd=23))
tcp   LISTEN 0      30     :::16514         :::*  users:(("libvirtd",pid=10688,fd=14))

vdsm is properly bound over IPv6.

Can you please check whether you can connect to vdsm with:
telnet kom-ad01-vm31.holding.com 54321
and with:
telnet ::1 54321
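In the ss listing above, a local address of `:::54321` means the socket is bound to the IPv6 wildcard `::` on port 54321, while `*:54322` is the IPv4 wildcard. A tiny helper to classify such local-address strings (the helper name is ours, for illustration only):

```python
def bound_family(local_addr):
    # Everything before the last ':' is the host part; a colon in the host
    # part (e.g. '::' or '::1') marks an IPv6 binding.
    host, _, port = local_addr.rpartition(":")
    return ("ipv6" if ":" in host else "ipv4", port)

print(bound_family(":::54321"))  # ('ipv6', '54321') -- vdsm
print(bound_family("*:54322"))   # ('ipv4', '54322') -- ovirt-imageio-daemon
```

This is why the telnet test below matters: vdsm is reachable only over a family that the host may have disabled.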

On Mon, Jul 25, 2016 at 4:02 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
Ok, so the issue seams here: now by default vdsm binds on :: and its heuristc can end up using ipv6. See this one: https://bugzilla.redhat.com/show_bug.cgi?id=1350883
Can you please try enabling IPv6 on your host, or setting management_ip = 0.0.0.0 under the [address] section in /etc/vdsm/vdsm.conf and then restarting vdsm?
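As a sketch, the second workaround amounts to adding the lines below to /etc/vdsm/vdsm.conf and restarting the daemon. The section name is quoted as "[address]" above; depending on the vdsm version it may be spelled "[addresses]", so verify against the commented defaults shipped in vdsm.conf on your host:

```
[address]
management_ip = 0.0.0.0
```

Then restart with: systemctl restart vdsmd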
Could you please also add the 'ip addr' output? I'm just interested to see how IPv6 was disabled on the host. It would be even better if you could apply the patch (https://gerrit.ovirt.org/#/c/60020) and check.
On Mon, Jul 25, 2016 at 2:15 PM, <aleksey.maksimov@it-kb.ru> wrote:
# ss -plutn
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp   UNCONN 0 0     *:111            *:*    users:(("rpcbind",pid=827,fd=6))
udp   UNCONN 0 0     *:161            *:*    users:(("snmpd",pid=1609,fd=6))
udp   UNCONN 0 0     127.0.0.1:323    *:*    users:(("chronyd",pid=795,fd=1))
udp   UNCONN 0 0     *:959            *:*    users:(("rpcbind",pid=827,fd=7))
udp   UNCONN 0 0     127.0.0.1:25375  *:*    users:(("snmpd",pid=1609,fd=8))
udp   UNCONN 0 0     127.0.0.1:25376  *:*    users:(("cmapeerd",pid=2056,fd=5))
udp   UNCONN 0 0     127.0.0.1:25393  *:*    users:(("cmanicd",pid=2278,fd=3))
udp   UNCONN 0 0     :::111           :::*   users:(("rpcbind",pid=827,fd=9))
udp   UNCONN 0 0     :::959           :::*   users:(("rpcbind",pid=827,fd=10))
tcp   LISTEN 0 128   *:2381           *:*    users:(("hpsmhd",pid=3903,fd=4),("hpsmhd",pid=3901,fd=4),("hpsmhd",pid=3900,fd=4),("hpsmhd",pid=3899,fd=4),("hpsmhd",pid=3898,fd=4),("hpsmhd",pid=3893,fd=4))
tcp   LISTEN 0 128   *:111            *:*    users:(("rpcbind",pid=827,fd=8))
tcp   LISTEN 0 5     *:54322          *:*    users:(("ovirt-imageio-d",pid=753,fd=3))
tcp   LISTEN 0 128   *:22             *:*    users:(("sshd",pid=1606,fd=3))
tcp   LISTEN 0 100   127.0.0.1:25     *:*    users:(("master",pid=1948,fd=13))
tcp   LISTEN 0 128   *:2301           *:*    users:(("hpsmhd",pid=3903,fd=3),("hpsmhd",pid=3901,fd=3),("hpsmhd",pid=3900,fd=3),("hpsmhd",pid=3899,fd=3),("hpsmhd",pid=3898,fd=3),("hpsmhd",pid=3893,fd=3))
tcp   LISTEN 0 30    *:16514          *:*    users:(("libvirtd",pid=10688,fd=13))
tcp   LISTEN 0 128   127.0.0.1:199    *:*    users:(("snmpd",pid=1609,fd=9))
tcp   LISTEN 0 128   :::111           :::*   users:(("rpcbind",pid=827,fd=11))
tcp   LISTEN 0 5     :::54321         :::*   users:(("vdsm",pid=11077,fd=23))
tcp   LISTEN 0 30    :::16514         :::*   users:(("libvirtd",pid=10688,fd=14))

vdsm is properly bound over IPv6.

Can you please check if you can connect to vdsm with:
telnet kom-ad01-vm31.holding.com 54321
and with
telnet ::1 54321
?
25.07.2016, 15:11, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 2:03 PM, <aleksey.maksimov@it-kb.ru> wrote:
Yes.
# ping $(python -c 'import socket; print(socket.gethostname())')
PING KOM-AD01-VM31.holding.com (10.1.0.231) 56(84) bytes of data.
64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=2 ttl=64 time=0.015 ms
64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=3 ttl=64 time=0.011 ms
^C
--- KOM-AD01-VM31.holding.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.011/0.018/0.030/0.009 ms
but...
and the output of ss -plutn
# vdsClient -s 0 getVdsCaps
Traceback (most recent call last):
  File "/usr/share/vdsm/vdsClient.py", line 2980, in <module>
    code, message = commands[command][0](commandArgs)
  File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap
    return self.ExecAndExit(self.s.getVdsCapabilities())
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
    self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
    self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 101] Network is unreachable
25.07.2016, 14:58, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
> Ok.
>
> 1) I stopped and disabled the service NetworkManager
> # systemctl stop NetworkManager
> # systemctl disable NetworkManager
>
> 2) I filled /etc/resolv.conf, removed DNS1, DNS2 and added PEERDNS=no in the ifcfg-* file.
>
> 3) Rebooted the server
>
> 4) Tried to deploy oVirt HE 4 and got the same error
>
> [ INFO ] Creating Volume Group
> [ INFO ] Creating Storage Domain
> [ INFO ] Creating Storage Pool
> [ INFO ] Connecting Storage Pool
> [ INFO ] Verifying sanlock lockspace initialization
> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable
> [ INFO ] Stage: Clean up
> [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160725143420.conf'
> [ INFO ] Stage: Pre-termination
> [ INFO ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy
> Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160725142534-t81kwf.log
>
> What ideas further?
>
> 25.07.2016, 13:06, "Simone Tiraboschi" <stirabos@redhat.com>:
>> On Mon, Jul 25, 2016 at 11:54 AM, <aleksey.maksimov@it-kb.ru> wrote:
>>> What am I supposed to do to successfully deploy ovirt 4?
>>> Any ideas?
>>
>> Can you please try to explicitly configure your DNS with nameserver
>> under /etc/resolv.conf and remove DNS1 and DNS2 and set PEERDNS=no for
>> the interface you are going to use?
>>
>>> 25.07.2016, 12:47, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
>>>> "Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on?"

25.07.2016, 15:35, "Simone Tiraboschi" <stirabos@redhat.com>:
>>>>
>>>> Yes. Of course
>>>>
>>>> 25.07.2016, 12:27, "Simone Tiraboschi" <stirabos@redhat.com>:
>>>>> On Mon, Jul 25, 2016 at 10:56 AM, Piotr Kliczewski
>>>>> <piotr.kliczewski@gmail.com> wrote:
>>>>>> This could be the issue here as well as for BZ #1358530
>>>>>>
>>>>>> On Mon, Jul 25, 2016 at 10:53 AM, <aleksey.maksimov@it-kb.ru> wrote:
>>>>>>> Could this be due to the fact that the ovirt installer has changed network configuration files (ifcfg-*, resolv.conf)?
>>>>>>> After the error in the ovirt installation process I see that my DNS servers entry disappeared from resolv.conf and now the server is unable to resolve names.
>>>>>
>>>>> So it could be related to https://bugzilla.redhat.com/show_bug.cgi?id=1160423
>>>>>
>>>>> Aleksey, was your DNS configured with DNS1 and DNS2 just on the
>>>>> interface you used to create the management bridge on?
>>>>> Can you please try the workaround described here
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1160423#c25 ?
>>>>>
>>>>>>> 25.07.2016, 11:26, "Simone Tiraboschi" <stirabos@redhat.com>:
>>>>>>>> On Mon, Jul 25, 2016 at 10:22 AM, <aleksey.maksimov@it-kb.ru> wrote:
>>>>>>>>> # vdsClient -s 0 getVdsCaps
>>>>>>>>> [...]
>>>>>>>>> error: [Errno 101] Network is unreachable
>>>>>>>>
>>>>>>>> Yaniv, can you please also take a look at this one?
>>>>>>>> It's exactly the opposite of https://bugzilla.redhat.com/1358530
>>>>>>>> Here the jsonrpc client works but not the xmlrpc one.
>>>>>>>>
>>>>>>>>> 25.07.2016, 11:17, "Simone Tiraboschi" <stirabos@redhat.com>:
>>>>>>>>>> On Mon, Jul 25, 2016 at 7:51 AM, <aleksey.maksimov@it-kb.ru> wrote:
>>>>>>>>>>> Simone, is there something interesting in the vdsm.log?
>>>>>>>>>>
>>>>>>>>>> From what I saw, the issue is not related to the storage but to the network.
>>>>>>>>>> ovirt-hosted-engine-setup uses the jsonrpc client; instead, the code
>>>>>>>>>> from ovirt-hosted-engine-ha still uses the xmlrpc client somewhere, and
>>>>>>>>>> this happens also when the setup asks to create the lockspace volume.
>>>>>>>>>> It seems that in your case the xmlrpc client could not connect to vdsm on
>>>>>>>>>> the localhost.
>>>>>>>>>> It could be somehow related to:
>>>>>>>>>> https://bugzilla.redhat.com/1358530
>>>>>>>>>>
>>>>>>>>>> Can you please try executing
>>>>>>>>>> sudo vdsClient -s 0 getVdsCaps
>>>>>>>>>> on that host?
>>>>>>>>>>
>>>>>>>>>>> 22.07.2016, 19:36, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
>>>>>>>>>>>> Simone, thanks for the link.
>>>>>>>>>>>> vdsm.log attached
>>>>>>>>>>>>
>>>>>>>>>>>> 22.07.2016, 19:28, "Simone Tiraboschi" <stirabos@redhat.com>:
>>>>>>>>>>>>> On Fri, Jul 22, 2016 at 5:59 PM, <aleksey.maksimov@it-kb.ru> wrote:
>>>>>>>>>>>>>> Thank you for your response, Simone.
>>>>>>>>>>>>>> Log attached.
>>>>>>>>>>>>>
>>>>>>>>>>>>> It seems it comes from VDSM; can you please also attach vdsm.log?
>>>>>>>>>>>>>
>>>>>>>>>>>>>> I don't use ovirt-engine-appliance because I have not found a "how-to" for ovirt-engine-appliance deployment in a hosted engine configuration.
>>>>>>>>>>>>>
>>>>>>>>>>>>> yum install ovirt-engine-appliance
>>>>>>>>>>>>>
>>>>>>>>>>>>> Then follow the instructions here:
>>>>>>>>>>>>> http://www.ovirt.org/develop/release-management/features/heapplianceflow/
>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [...]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> --== CONFIGURATION PREVIEW ==--
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ...
>>>>>>>>>>>>>>>> Firewall manager                  : iptables
>>>>>>>>>>>>>>>> Gateway address                   : 10.1.0.1
>>>>>>>>>>>>>>>> Host name for web application     : KOM-AD01-OVIRT1
>>>>>>>>>>>>>>>> Storage Domain type               : fc
>>>>>>>>>>>>>>>> Host ID                           : 1
>>>>>>>>>>>>>>>> LUN ID                            : 360002ac0000000000000001b0000cec9
>>>>>>>>>>>>>>>> Image size GB                     : 40
>>>>>>>>>>>>>>>> Console type                      : vnc
>>>>>>>>>>>>>>>> Memory size MB                    : 4096
>>>>>>>>>>>>>>>> MAC address                       : 00:16:3e:77:1d:07
>>>>>>>>>>>>>>>> Boot type                         : cdrom
>>>>>>>>>>>>>>>> Number of CPUs                    : 2
>>>>>>>>>>>>>>>> ISO image (cdrom boot/cloud-init) : /tmp/CentOS-7-x86_64-NetInstall-1511.iso
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Can I ask why you prefer/need to manually create a VM installing from
>>>>>>>>>>>>>>> a CD instead of using the ready-to-use ovirt-engine-appliance?
>>>>>>>>>>>>>>> Using the appliance makes the setup process a lot shorter and more comfortable.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> CPU Type                          : model_Penryn
>>>>>>>>>>>>>>>> ...
>>>>>>>>>>>>>>>> and I get the error after the step "Verifying sanlock lockspace initialization"
>>>>>>>>>>>>>>>> ...

Edi, danken, this is again the malfunctioning heuristics in the client for handling IPv6. Is there a bug on this issue?

On Jul 27, 2016 8:57 AM, <aleksey.maksimov@it-kb.ru> wrote:
I enabled IPv6 for the "lo" and "ovirtmgmt" interfaces and the oVirt deployment process completed successfully.
# cat /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.ovirtmgmt.disable_ipv6 = 0
Thank you for your help!
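The fix above can be sanity-checked from Python before re-running the deploy. This is a minimal sketch, assuming a Linux host; ipv6_loopback_usable is a hypothetical helper that tries to bind an IPv6 socket on the loopback, which fails in the same way vdsm's clients did while net.ipv6.conf.lo.disable_ipv6 = 1:

```python
# Minimal check (assumption: Linux host with the sysctl settings above).
# When IPv6 is disabled on "lo", binding on ::1 raises OSError, which is
# the condition that produced [Errno 101] during the deploy.
import socket


def ipv6_loopback_usable():
    """Return True if a TCP socket can be bound on ::1."""
    if not socket.has_ipv6:
        return False
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        try:
            s.bind(("::1", 0))  # port 0: let the kernel pick any free port
        finally:
            s.close()
        return True
    except OSError:
        return False


if __name__ == "__main__":
    print("IPv6 loopback usable:", ipv6_loopback_usable())
```
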
26.07.2016, 09:13, "Edward Haas" <ehaas@redhat.com>:
> [...]
>>>>>>>>>>>>>>>> [ INFO ] Image for 'hosted-engine.metadata' created successfully >>>>>>>>>>>>>>>> [ INFO ] Creating VM Image >>>>>>>>>>>>>>>> [ INFO ] Destroying Storage Pool >>>>>>>>>>>>>>>> [ INFO ] Start monitoring domain >>>>>>>>>>>>>>>> [ INFO ] Configuring VM >>>>>>>>>>>>>>>> [ INFO ] Updating hosted-engine configuration >>>>>>>>>>>>>>>> [ INFO ] Stage: Transaction commit >>>>>>>>>>>>>>>> [ INFO ] Stage: Closing up >>>>>>>>>>>>>>>> [ INFO ] Creating VM >>>>>>>>>>>>>>>> You can now connect to the VM with the following command: >>>>>>>>>>>>>>>> /bin/remote-viewer vnc://localhost:5900 >>>>>>>>>>>>>>>> ... >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> What could be the problem? >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Users mailing list >>>>>>>>>>>>>>>> Users@ovirt.org >>>>>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>> _______________________________________________ >>>>>>> Users mailing list >>>>>>> Users@ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users@ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users > > _______________________________________________ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users

On Wed, Jul 27, 2016 at 8:12 AM, Roy Golan <rgolan@redhat.com> wrote:
Edi, danken, this is again the malfunctioning heuristics in the client for handling IPv6. Is there a bug on this issue?
https://bugzilla.redhat.com/show_bug.cgi?id=1350883
On Jul 27, 2016 8:57 AM, <aleksey.maksimov@it-kb.ru> wrote:
I enabled IPv6 for the "lo" and "ovirtmgmt" interfaces, and the oVirt deployment process completed successfully.
# cat /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.ovirtmgmt.disable_ipv6 = 0
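The four sysctl lines above are what made the deploy succeed: IPv6 stays disabled globally but is re-enabled on `lo` and `ovirtmgmt`, so the client's `::1` connection works again. As a minimal sketch (the helper and its name are mine, not part of any oVirt tool), this is how such a sysctl.conf fragment can be checked mechanically:

```python
def ipv6_disabled_map(conf_text):
    """Map interface name -> True if IPv6 is disabled for it.

    Parses lines of the form 'net.ipv6.conf.<iface>.disable_ipv6 = <0|1>',
    as found in /etc/sysctl.conf. Other lines are ignored.
    """
    result = {}
    for line in conf_text.splitlines():
        line = line.strip()
        if not line.startswith("net.ipv6.conf.") or "disable_ipv6" not in line:
            continue
        key, _, value = line.partition("=")
        iface = key.strip().split(".")[3]   # net.ipv6.conf.<iface>.disable_ipv6
        result[iface] = value.strip() == "1"
    return result
```

Against the file above it reports IPv6 enabled for `lo` and `ovirtmgmt` but still disabled for `all`/`default`; after editing, `sysctl -p` applies the file without a reboot.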
Thank you for your Help!
26.07.2016, 09:13, "Edward Haas" <ehaas@redhat.com>:
On Mon, Jul 25, 2016 at 4:02 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Mon, Jul 25, 2016 at 2:38 PM, <aleksey.maksimov@it-kb.ru> wrote:
telnet kom-ad01-vm31.holding.com 54321 = successful connection

telnet ::1 54321
Trying ::1...
telnet: connect to address ::1: Network is unreachable

(IPv6 is disabled on my server)
Ok, so the issue seems to be here: now by default vdsm binds on :: and its heuristic can end up using IPv6. See this one: https://bugzilla.redhat.com/show_bug.cgi?id=1350883
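The failure mode described here is easy to reproduce outside vdsm: a client that hardwires `::1` because the server listens on `::` dies with exactly `[Errno 101] Network is unreachable` on a host where IPv6 is administratively disabled. A purely illustrative sketch of the safer heuristic (probe before choosing; this is my demo code, not vdsm's actual client):

```python
import socket

def pick_loopback():
    """Return a loopback address this host can actually use.

    Hardwiring '::1' just because the server socket is bound to '::'
    is the buggy heuristic from the thread; binding a throwaway IPv6
    socket first fails fast when IPv6 is disabled, so we can fall
    back to IPv4 instead of raising [Errno 101].
    """
    try:
        probe = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        try:
            probe.bind(("::1", 0))   # raises OSError when IPv6 is disabled
        finally:
            probe.close()
        return "::1"
    except OSError:
        return "127.0.0.1"
```

On Aleksey's host this would have returned "127.0.0.1" and the xmlrpc connection would have gone through the same IPv4 path that `telnet kom-ad01-vm31.holding.com 54321` used successfully.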
Can you please try enabling IPv6 on your host, or setting management_ip = 0.0.0.0 under the [address] section in /etc/vdsm/vdsm.conf, and then restarting vdsm?
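The second workaround amounts to a one-key edit of vdsm.conf so the management socket binds on IPv4 only. A hedged sketch of that edit (section and key names are taken verbatim from the message above; verify them against the comments in your version's /etc/vdsm/vdsm.conf before applying, and restart vdsmd afterwards):

```python
# Sketch: force vdsm's management socket onto IPv4 by writing
# management_ip = 0.0.0.0 into the [address] section of vdsm.conf.
try:
    from configparser import ConfigParser   # Python 3
except ImportError:
    from ConfigParser import ConfigParser   # Python 2, as on CentOS 7

def force_ipv4_binding(conf_path="/etc/vdsm/vdsm.conf"):
    cfg = ConfigParser()
    cfg.read(conf_path)                      # tolerates a missing file
    if not cfg.has_section("address"):
        cfg.add_section("address")
    cfg.set("address", "management_ip", "0.0.0.0")
    with open(conf_path, "w") as f:
        cfg.write(f)
    # then: systemctl restart vdsmd
    # verify: ss -plnt | grep 54321 should show 0.0.0.0:54321, not :::54321
```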
Could you please also add the 'ip addr' output? I am just interested to see how IPv6 was disabled on the host. It would be even better if you could apply the patch (https://gerrit.ovirt.org/#/c/60020) and check.
25.07.2016, 15:35, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 2:15 PM, <aleksey.maksimov@it-kb.ru> wrote:
# ss -plutn

Netid State  Recv-Q Send-Q Local Address:Port  Peer Address:Port
udp UNCONN 0 0 *:111 *:* users:(("rpcbind",pid=827,fd=6))
udp UNCONN 0 0 *:161 *:* users:(("snmpd",pid=1609,fd=6))
udp UNCONN 0 0 127.0.0.1:323 *:* users:(("chronyd",pid=795,fd=1))
udp UNCONN 0 0 *:959 *:* users:(("rpcbind",pid=827,fd=7))
udp UNCONN 0 0 127.0.0.1:25375 *:* users:(("snmpd",pid=1609,fd=8))
udp UNCONN 0 0 127.0.0.1:25376 *:* users:(("cmapeerd",pid=2056,fd=5))
udp UNCONN 0 0 127.0.0.1:25393 *:* users:(("cmanicd",pid=2278,fd=3))
udp UNCONN 0 0 :::111 :::* users:(("rpcbind",pid=827,fd=9))
udp UNCONN 0 0 :::959 :::* users:(("rpcbind",pid=827,fd=10))
tcp LISTEN 0 128 *:2381 *:* users:(("hpsmhd",pid=3903,fd=4),("hpsmhd",pid=3901,fd=4),("hpsmhd",pid=3900,fd=4),("hpsmhd",pid=3899,fd=4),("hpsmhd",pid=3898,fd=4),("hpsmhd",pid=3893,fd=4))
tcp LISTEN 0 128 *:111 *:* users:(("rpcbind",pid=827,fd=8))
tcp LISTEN 0 5 *:54322 *:* users:(("ovirt-imageio-d",pid=753,fd=3))
tcp LISTEN 0 128 *:22 *:* users:(("sshd",pid=1606,fd=3))
tcp LISTEN 0 100 127.0.0.1:25 *:* users:(("master",pid=1948,fd=13))
tcp LISTEN 0 128 *:2301 *:* users:(("hpsmhd",pid=3903,fd=3),("hpsmhd",pid=3901,fd=3),("hpsmhd",pid=3900,fd=3),("hpsmhd",pid=3899,fd=3),("hpsmhd",pid=3898,fd=3),("hpsmhd",pid=3893,fd=3))
tcp LISTEN 0 30 *:16514 *:* users:(("libvirtd",pid=10688,fd=13))
tcp LISTEN 0 128 127.0.0.1:199 *:* users:(("snmpd",pid=1609,fd=9))
tcp LISTEN 0 128 :::111 :::* users:(("rpcbind",pid=827,fd=11))
tcp LISTEN 0 5 :::54321 :::* users:(("vdsm",pid=11077,fd=23))
tcp LISTEN 0 30 :::16514 :::* users:(("libvirtd",pid=10688,fd=14))

vdsm is properly bound over IPv6.

Can you please check whether you can connect to vdsm with:
telnet kom-ad01-vm31.holding.com 54321
and with:
telnet ::1 54321
?
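The `:::54321` entry above also explains why the server side is not the problem: an IPv6 wildcard socket is normally dual-stack on Linux, so IPv4 clients reach it too (appearing as `::ffff:`-mapped peers). Only the client's insistence on `::1` breaks. A self-contained demonstration (my own demo code, not vdsm's):

```python
import socket

def dual_stack_demo():
    """Bind an IPv6 wildcard listener and reach it from an IPv4 client.

    Returns the peer address the server sees (an IPv4-mapped IPv6 address
    such as '::ffff:127.0.0.1'), or None when the host has no usable IPv6
    at all -- in which case vdsm could not have bound :: either.
    """
    if not socket.has_ipv6:
        return None
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    try:
        # Explicitly allow IPv4 clients on the v6 wildcard socket.
        srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
        srv.bind(("::", 0))
        srv.listen(1)
        port = srv.getsockname()[1]
        cli = socket.create_connection(("127.0.0.1", port), 2)
        peer, addr = srv.accept()
        cli.close()
        peer.close()
        return addr[0]   # IPv4 client seen as a mapped address
    except OSError:
        return None      # IPv6 disabled via sysctl, as on this host
    finally:
        srv.close()
```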
25.07.2016, 15:11, "Simone Tiraboschi" <stirabos@redhat.com>:
On Mon, Jul 25, 2016 at 2:03 PM, <aleksey.maksimov@it-kb.ru> wrote:
> Yes.
>
> # ping $(python -c 'import socket; print(socket.gethostname())')
>
> PING KOM-AD01-VM31.holding.com (10.1.0.231) 56(84) bytes of data.
> 64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=1 ttl=64 time=0.030 ms
> 64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=2 ttl=64 time=0.015 ms
> 64 bytes from kom-ad01-vm31.holding.com (10.1.0.231): icmp_seq=3 ttl=64 time=0.011 ms
> ^C
> --- KOM-AD01-VM31.holding.com ping statistics ---
> 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
> rtt min/avg/max/mdev = 0.011/0.018/0.030/0.009 ms
>
> but...
And the output of ss -plutn?
> # vdsClient -s 0 getVdsCaps
>
> Traceback (most recent call last):
>   File "/usr/share/vdsm/vdsClient.py", line 2980, in <module>
>     code, message = commands[command][0](commandArgs)
>   File "/usr/share/vdsm/vdsClient.py", line 543, in do_getCap
>     return self.ExecAndExit(self.s.getVdsCapabilities())
>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
>     return self.__send(self.__name, args)
>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
>     verbose=self.__verbose
>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
>     return self.single_request(host, handler, request_body, verbose)
>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
>     self.send_content(h, request_body)
>   File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
>     connection.endheaders(request_body)
>   File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
>     self._send_output(message_body)
>   File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
>     self.send(msg)
>   File "/usr/lib64/python2.7/httplib.py", line 797, in send
>     self.connect()
>   File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
>     sock = socket.create_connection((self.host, self.port), self.timeout)
>   File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
>     raise err
> error: [Errno 101] Network is unreachable
>
> 25.07.2016, 14:58, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
>> Ok.
>>
>> 1) I stopped and disabled the NetworkManager service
>> # systemctl stop NetworkManager
>> # systemctl disable NetworkManager
>>
>> 2) I filled in /etc/resolv.conf, removed DNS1 and DNS2, and added PEERDNS=no in the ifcfg-* file.
>> 3) Rebooted the server
>>
>> 4) Tried to deploy oVirt HE 4 and I get the same error:
>>
>> [ INFO  ] Creating Volume Group
>> [ INFO  ] Creating Storage Domain
>> [ INFO  ] Creating Storage Pool
>> [ INFO  ] Connecting Storage Pool
>> [ INFO  ] Verifying sanlock lockspace initialization
>> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable
>> [ INFO  ] Stage: Clean up
>> [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160725143420.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy
>>           Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160725142534-t81kwf.log
>>
>> What ideas further?
>>
>> 25.07.2016, 13:06, "Simone Tiraboschi" <stirabos@redhat.com>:
>>> On Mon, Jul 25, 2016 at 11:54 AM, <aleksey.maksimov@it-kb.ru> wrote:
>>>> What am I supposed to do to successfully deploy oVirt 4?
>>>> Any ideas?
>>>
>>> Can you please try to explicitly configure your DNS with nameserver under /etc/resolv.conf, remove DNS1 and DNS2, and set PEERDNS=no for the interface you are going to use?
>>>
>>>> 25.07.2016, 12:47, "aleksey.maksimov@it-kb.ru" <aleksey.maksimov@it-kb.ru>:
>>>>> "Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on?"
>>>>>
>>>>> Yes. Of course.
>>>>>
>>>>> 25.07.2016, 12:27, "Simone Tiraboschi" <stirabos@redhat.com>:
>>>>>> On Mon, Jul 25, 2016 at 10:56 AM, Piotr Kliczewski <piotr.kliczewski@gmail.com> wrote:
>>>>>>> This could be the issue here as well as for BZ #1358530
>>>>>>>
>>>>>>> On Mon, Jul 25, 2016 at 10:53 AM, <aleksey.maksimov@it-kb.ru> wrote:
>>>>>>>> Could this be due to the fact that the oVirt installer has changed the network configuration files (ifcfg-*, resolv.conf)?
>>>>>>>> After the error in the oVirt installation process I see that my DNS server entries disappeared from resolv.conf, and now the server is unable to resolve names.
>>>>>>
>>>>>> So it could be related to https://bugzilla.redhat.com/show_bug.cgi?id=1160423
>>>>>>
>>>>>> Aleksey, was your DNS configured with DNS1 and DNS2 just on the interface you used to create the management bridge on?
>>>>>> Can you please try the workaround described here: https://bugzilla.redhat.com/show_bug.cgi?id=1160423#c25 ?
>>>>>>
>>>>>>>> 25.07.2016, 11:26, "Simone Tiraboschi" <stirabos@redhat.com>:
>>>>>>>>> On Mon, Jul 25, 2016 at 10:22 AM, <aleksey.maksimov@it-kb.ru> wrote:
>>>>>>>>>> # vdsClient -s 0 getVdsCaps
>>>>>>>>>>
>>>>>>>>>> Traceback (most recent call last):
>>>>>>>>>> [...]
>>>>>>>>>> error: [Errno 101] Network is unreachable
>>>>>>>>>
>>>>>>>>> Yaniv, can you please take also a look at this one?
>>>>>>>>> It's exactly the opposite of https://bugzilla.redhat.com/1358530
>>>>>>>>> Here the jsonrpc client works but not the xmlrpc one.
>>>>>>>>>
>>>>>>>>>> 25.07.2016, 11:17, "Simone Tiraboschi" <stirabos@redhat.com>:
>>>>>>>>>>> On Mon, Jul 25, 2016 at 7:51 AM, <aleksey.maksimov@it-kb.ru> wrote:
>>>>>>>>>>>> Simone, is there something interesting in the vdsm.log?
>>>>>>>>>>>
>>>>>>>>>>> From what I saw, the issue is not related to the storage but to the network.
>>>>>>>>>>> ovirt-hosted-engine-setup uses the jsonrpc client, but the code from ovirt-hosted-engine-ha still uses the xmlrpc client somewhere, and this also happens when the setup asks to create the lockspace volume.
>>>>>>>>>>> It seems that in your case the xmlrpc client could not connect to vdsm on the localhost.
>>>>>>>>>>> It could be somehow related to: https://bugzilla.redhat.com/1358530
>>>>>>>>>>>
>>>>>>>>>>> Can you please try executing
>>>>>>>>>>> sudo vdsClient -s 0 getVdsCaps
>>>>>>>>>>> on that host?

On Wed, Jul 27, 2016 at 09:25:25AM +0200, Simone Tiraboschi wrote:
On Wed, Jul 27, 2016 at 8:12 AM, Roy Golan <rgolan@redhat.com> wrote:
Edi, danken, this is again the malfunctioning heuristics in the client for handling IPv6. Is there a bug on this issue?
Please try applying this patch: https://gerrit.ovirt.org/#/c/61363/4/lib/vdsm/vdscli.py . It will be included in ovirt-4.0.2, but we'd appreciate as wide testing as possible.
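The patch linked above reworks the client's address-selection heuristic. The general shape of such a fix can be sketched as follows: instead of committing to the first resolved address (which may be IPv6 on a host where IPv6 is disabled, giving [Errno 101]), try every `getaddrinfo` result in turn and fall through to IPv4. This is only an illustrative sketch of the idea, not the actual vdscli code:

```python
import socket

def connect_any(host, port, timeout=2.0):
    """Connect to (host, port), trying every resolved address in order.

    If the first result is an unreachable IPv6 address, the loop falls
    through to the next (e.g. IPv4) result instead of failing outright.
    Returns a connected socket, or re-raises the last error.
    """
    last_err = None
    for family, socktype, proto, _name, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)
            return s
        except OSError as e:
            last_err = e   # e.g. [Errno 101] Network is unreachable
            s.close()
    raise last_err if last_err else OSError("no addresses for %r" % host)
```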
participants (6)
- aleksey.maksimov@it-kb.ru
- Dan Kenigsberg
- Edward Haas
- Piotr Kliczewski
- Roy Golan
- Simone Tiraboschi