Ovirt Node 4.1.1.1 installation, engine on gluster, existing disk sdb not found or filtered, deployment fails

Hi,

I try to set up a 3 node gluster based oVirt cluster, following this guide:
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/

oVirt nodes were installed with all disks available in the system, the installer limited to use only /dev/sda (both sda and sdb are HPE logical volumes on a P410 raid controller).

Glusterfs deployment fails in the last step before engine setup:

PLAY RECAP *********************************************************************
hv1.iw                     : ok=1    changed=1    unreachable=0    failed=0
hv2.iw                     : ok=1    changed=1    unreachable=0    failed=0
hv3.iw                     : ok=1    changed=1    unreachable=0    failed=0

PLAY [gluster_servers] *********************************************************

TASK [Clean up filesystem signature] *******************************************
skipping: [hv1.iw] => (item=/dev/sdb)
skipping: [hv2.iw] => (item=/dev/sdb)
skipping: [hv3.iw] => (item=/dev/sdb)

TASK [Create Physical Volume] **************************************************
failed: [hv3.iw] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb not found (or ignored by filtering).\n", "rc": 5}
failed: [hv1.iw] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb not found (or ignored by filtering).\n", "rc": 5}
failed: [hv2.iw] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "  Device /dev/sdb not found (or ignored by filtering).\n", "rc": 5}

But: /dev/sdb exists on all hosts

[root@hv1 ~]# lsblk
NAME                                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                     8:0    0 136,7G  0 disk
...
sdb                                     8:16   0 558,9G  0 disk
└─3600508b1001c350a2c1748b0a0ff3860   253:5    0 558,9G  0 mpath

What can I do to make this work?

___________________________________________________________
Oliver Dietzel
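For readers hitting the same message: the failure can be reproduced by hand, and the lsblk output above already points at the cause, since sdb is held by a device-mapper multipath map. A quick check on one node might look like this (a sketch; output abridged, device names as in this thread):

[root@hv1 ~]# pvcreate /dev/sdb
  Device /dev/sdb not found (or ignored by filtering).
[root@hv1 ~]# multipath -l          # lists 3600508b1001c350a2c1748b0a0ff3860 holding sdb
[root@hv1 ~]# lsblk /dev/sdb        # the mpath child under sdb is what makes LVM skip the raw device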

On 05/03/2017 02:06 PM, Oliver Dietzel wrote:
Hi Oliver,

    I see that multipath is enabled on your system. For the device sdb it creates an mpath map, and once this is created the system will identify sdb as "3600508b1001c350a2c1748b0a0ff3860". To make this work, perform the steps below.

1) multipath -l (to list all multipath devices)

2) Blacklist devices in /etc/multipath.conf by adding the lines below. If you do not see this file, run the command 'vdsm-tool configure --force', which will create the file for you.

blacklist {
    devnode "*"
}

3) multipath -F, which flushes all the mpath devices.

4) Restart multipathd by running the command 'systemctl restart multipathd'.

This should solve the issue.

Thanks
kasturi.
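A minimal end-to-end version of those steps on one node could look like the sketch below (run it on every host; the blanket devnode blacklist assumes none of the hosts needs multipath for other storage, and as far as I know vdsm may rewrite /etc/multipath.conf later unless the file is marked as privately managed):

[root@hv1 ~]# multipath -l                          # list current multipath maps
[root@hv1 ~]# vdsm-tool configure --force           # (re)creates /etc/multipath.conf if it is missing
[root@hv1 ~]# cat >> /etc/multipath.conf <<'EOF'
blacklist {
    devnode "*"
}
EOF
[root@hv1 ~]# multipath -F                          # flush all unused multipath maps
[root@hv1 ~]# systemctl restart multipathd
[root@hv1 ~]# lsblk /dev/sdb                        # sdb should no longer have an mpath child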

Thx a lot, I already got rid of the multipaths.

Now, 5 tries later, I try to understand how the disk space calculation works. I already understand that the combined GByte limit for my drive sdb is around 530.
sdb 8:16 0 558,9G 0 disk
Now the thin pool creation kicks me! :)

(I do a vgremove gluster_vg_sdb on all hosts and reboot all three hosts between retries)

TASK [Create LVs with specified size for the VGs] ******************************
failed: [hv1.iw.rto.de] (item={u'lv': u'gluster_thinpool_sdb', u'size': u'530GB', u'extent': u'100%FREE', u'vg': u'gluster_vg_sdb'}) => {"failed": true, "item": {"extent": "100%FREE", "lv": "gluster_thinpool_sdb", "size": "530GB", "vg": "gluster_vg_sdb"}, "msg": "  Insufficient suitable allocatable extents for logical volume gluster_thinpool_sdb: 135680 more required\n", "rc": 5}
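For anyone repeating those retries: the cleanup between runs can also be done without rebooting; a minimal sketch, assuming nothing else lives on sdb, would be:

[root@hv1 ~]# lvremove -y gluster_vg_sdb        # drop any LVs / thin pools left from the failed run
[root@hv1 ~]# vgremove -y gluster_vg_sdb
[root@hv1 ~]# pvremove -y /dev/sdb
[root@hv1 ~]# wipefs -a /dev/sdb                # clear stale PV / filesystem signatures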

On 05/03/2017 03:20 PM, Oliver Dietzel wrote:
TASK [Create LVs with specified size for the VGs] ******************************
failed: [hv1.iw.rto.de] (item={u'lv': u'gluster_thinpool_sdb', u'size': u'530GB', u'extent': u'100%FREE', u'vg': u'gluster_vg_sdb'}) => {"failed": true, "item": {"extent": "100%FREE", "lv": "gluster_thinpool_sdb", "size": "530GB", "vg": "gluster_vg_sdb"}, "msg": "  Insufficient suitable allocatable extents for logical volume gluster_thinpool_sdb: 135680 more required\n", "rc": 5}

I think you should input the size as 500GB if your actual disk size is 530?
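That matches the numbers: with LVM's default 4 MiB extent size, the 135680 missing extents are roughly 530 GiB, which suggests the volume group was already fully allocated by the other LVs when the 530GB pool was requested, so the pool cannot fit as specified. A quick look before re-running the deployment (a sketch with the names from this thread) shows what is actually left, and sizing the thin pool a bit below that also leaves room for the pool's metadata LV:

[root@hv1 ~]# vgs gluster_vg_sdb -o vg_size,vg_free,vg_free_count   # total size, free space, free extents
[root@hv1 ~]# lvs gluster_vg_sdb                                    # which LVs already consumed the VG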

Actual size as displayed by lsblk is 558,9G. A combined size of 530 GB worked (engine 100, data 180, vmstore 250), but only without thin provisioning. Deployment failed with thin provisioning enabled, but worked with fixed sizes.

Now I hang in hosted engine deployment (having set installation with gluster to yes when asked) with the error:

"Failed to execute stage 'Environment customization': Invalid value provided to 'ENABLE_HC_GLUSTER_SERVICE'"
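For comparison, a hand-rolled thin-provisioned layout on the same VG would look roughly like the sketch below (LV names and sizes are assumptions based on this thread, not necessarily what the deployment tool creates; the pool is kept below the VG size so its metadata LV still fits):

[root@hv1 ~]# lvcreate -L 500G -T gluster_vg_sdb/gluster_thinpool_sdb
[root@hv1 ~]# lvcreate -V 100G -T gluster_vg_sdb/gluster_thinpool_sdb -n gluster_lv_engine
[root@hv1 ~]# lvcreate -V 180G -T gluster_vg_sdb/gluster_thinpool_sdb -n gluster_lv_data
[root@hv1 ~]# lvcreate -V 250G -T gluster_vg_sdb/gluster_thinpool_sdb -n gluster_lv_vmstore
[root@hv1 ~]# mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_engine   # 512-byte inodes, as usually recommended for Gluster bricks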

On 05/03/2017 03:53 PM, Oliver Dietzel wrote:
"Failed to execute stage 'Environment customization': Invalid value provided to 'ENABLE_HC_GLUSTER_SERVICE'"

Hi,

    Can you provide me the exact question and your response to it, because of which your setup failed?

Thanks
kasturi

Worked as described in the blog entry:

"The installer will ask if you want to configure your host and cluster for Gluster. Again, click "Next" to proceed. In some of my tests, the installer failed at this point, with an error message of Failed to execute stage 'Environment customization'. When I encountered this, I clicked "Restart Setup", repeated the above steps, and was able to proceed normally."

https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/

After a setup restart this error went away.

The last error I had was in the last stage of the setup process.

The installer was unable to connect to the engine vm after creation and timed out after about the 10th retry.
Postinstall failed, but the vm itself was up and running and I was able to ssh to it and to connect to the web ui.

Log attached.

Thx Oli
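When the setup times out waiting for the engine like this, a few checks from the host usually tell whether it is only a reachability/DNS problem rather than a broken engine; a sketch, using the FQDN from the attached log (note also the log's warning that the resolved host name does not match any local address):

[root@hv1 ~]# hosted-engine --vm-status                    # is the engine VM up and reported healthy?
[root@hv1 ~]# ping -c3 ovirt-engine.iw.rto.de              # does the name resolve to the engine VM's address?
[root@hv1 ~]# curl -k https://ovirt-engine.iw.rto.de/ovirt-engine/services/health   # engine health servlet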
[Attachment: "ovirt engine setup log", Oliver Dietzel, Wed, 3 May 2017 10:54:25 +0000]

You can now connect to the VM with the following command:
hosted-engine --console
You can also graphically connect to the VM from your system with the following command:
remote-viewer vnc://hv1.iw.rto.de:5900
Use temporary password "7393GGzV" to connect to vnc console.
Please ensure that your Guest OS is properly configured to support serial console according to your distro documentation.
Follow http://www.ovirt.org/Serial_Console_Setup#I_need_to_access_the_console_the_old_way for more info.
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password

|- [ INFO  ] Stage: Initializing
|- [ INFO  ] Stage: Environment setup
|- Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/root/ovirt-engine-answers', '/root/heanswers.conf']
|- Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20170503124420-tm6vbb.log
|- Version: otopi-1.6.1 (otopi-1.6.1-1.el7.centos)
|- [ INFO  ] Stage: Environment packages setup
|- [ INFO  ] Stage: Programs detection
|- [ INFO  ] Stage: Environment setup
|- [ INFO  ] Stage: Environment customization
|- --== PRODUCT OPTIONS ==--
|- Configure Image I/O Proxy on this host? (Yes, No) [Yes]:
|- --== PACKAGES ==--
|- --== NETWORK CONFIGURATION ==--
|- [ INFO  ] firewalld will be configured as firewall manager.
|- --== DATABASE CONFIGURATION ==--
|- --== OVIRT ENGINE CONFIGURATION ==--
|- --== STORAGE CONFIGURATION ==--
|- --== PKI CONFIGURATION ==--
|- --== APACHE CONFIGURATION ==--
|- --== SYSTEM CONFIGURATION ==--
|- --== MISC CONFIGURATION ==--
|- Please choose Data Warehouse sampling scale:
|- (1) Basic
|- (2) Full
|- (1, 2)[1]:
|- --== END OF CONFIGURATION ==--
|- [ INFO  ] Stage: Setup validation
|- [WARNING] Cannot validate host name settings, reason: resolved host does not match any of the local addresses
|- --== CONFIGURATION PREVIEW ==--
|- Application mode                       : both
|- Default SAN wipe after delete          : False
|- Firewall manager                       : firewalld
|- Update Firewall                        : True
|- Host FQDN                              : ovirt-engine.iw.rto.de
|- Configure local Engine database        : True
|- Set application as default page        : True
|- Configure Apache SSL                   : True
|- Engine database secured connection     : False
|- Engine database user name              : engine
|- Engine database name                   : engine
|- Engine database host                   : localhost
|- Engine database port                   : 5432
|- Engine database host name validation   : False
|- Engine installation                    : True
|- PKI organization                       : iw.rto.de
|- DWH installation                       : True
|- DWH database secured connection        : False
|- DWH database host                      : localhost
|- DWH database user name                 : ovirt_engine_history
|- DWH database name                      : ovirt_engine_history
|- DWH database port                      : 5432
|- DWH database host name validation      : False
|- Configure local DWH database           : True
|- Configure Image I/O Proxy              : True
|- Configure VMConsole Proxy              : True
|- Configure WebSocket Proxy              : True
|- [ INFO  ] Stage: Transaction setup
|- [ INFO  ] Stopping engine service
|- [ INFO  ] Stopping ovirt-fence-kdump-listener service
|- [ INFO  ] Stopping dwh service
|- [ INFO  ] Stopping Image I/O Proxy service
|- [ INFO  ] Stopping vmconsole-proxy service
|- [ INFO  ] Stopping websocket-proxy service
|- [ INFO  ] Stage: Misc configuration
|- [ INFO  ] Stage: Package installation
|- [ INFO  ] Stage: Misc configuration
|- [ INFO  ] Upgrading CA
|- [ INFO  ] Initializing PostgreSQL
|- [ INFO  ] Creating PostgreSQL 'engine' database
|- [ INFO  ] Configuring PostgreSQL
|- [ INFO  ] Creating PostgreSQL 'ovirt_engine_history' database
|- [ INFO  ] Configuring PostgreSQL
|- [ INFO  ] Creating CA
|- [ INFO  ] Creating/refreshing Engine database schema
|- [ INFO  ] Creating/refreshing DWH database schema
|- [ INFO  ] Configuring Image I/O Proxy
|- [ INFO  ] Setting up ovirt-vmconsole proxy helper PKI artifacts
|- [ INFO  ] Setting up ovirt-vmconsole SSH PKI artifacts
|- [ INFO  ] Configuring WebSocket Proxy
|- [ INFO  ] Creating/refreshing Engine 'internal' domain database schema
|- [ INFO  ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
|- [ INFO  ] Stage: Transaction commit
|- [ INFO  ] Stage: Closing up
|- [ INFO  ] Starting engine service
|- [ INFO  ] Starting dwh service
|- [ INFO  ] Restarting ovirt-vmconsole proxy service
|- --== SUMMARY ==--
|- [ INFO  ] Restarting httpd
|- Please use the user 'admin@internal' and password specified in order to login
|- Web access is enabled at:
|-     http://ovirt-engine.iw.rto.de:80/ovirt-engine
|-     https://ovirt-engine.iw.rto.de:443/ovirt-engine
|- Internal CA 39:C6:B1:EE:BE:EC:80:93:09:20:6C:22:0A:82:FD:43:65:5D:5DD
|- SSH fingerprint: da:6d:f1:f7:9f:40:b2:a9:32:6c:0a:39:c0:dd:4f:15
|- --== END OF SUMMARY ==--
|- [ INFO  ] Stage: Clean up
|- Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20170503124420-tm6vbb.log
|- [ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20170503124836-setup.conf'
|- [ INFO  ] Stage: Pre-termination
|- [ INFO  ] Stage: Termination
|- [ INFO  ] Execution of setup completed successfully
|- HE_APPLIANCE_ENGINE_SETUP_SUCCESS

___________________________________________________________
Oliver Dietzel
participants (2)
- knarra
- Oliver Dietzel