Re: [ovirt-users] Failed deploy of ovirt-engine using ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso

On Wed, Dec 6, 2017 at 11:38 AM, Roberto Nunin <robnunin@gmail.com> wrote:
Ciao Simone, thanks for the really quick answer.
2017-12-06 11:05 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
Ciao Roberto,
On Wed, Dec 6, 2017 at 10:02 AM, Roberto Nunin <robnunin@gmail.com> wrote:
I'm having trouble deploying a three-host hyperconverged lab using the ISO image named above.
Please note that ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso is still pre-release software. Your contribution in testing it is really appreciated!
It's a pleasure!
My test environment is based on HPE BL680c G7 blade servers. These servers have six physical 10Gb network interfaces (FlexNIC), each one with four profiles (Ethernet, FCoE, iSCSI, etc.).
I chose one of these six physical interfaces (enp5s0f0) and assigned it a static IPv4 address on each node.
After the node reboot, the interface's ONBOOT parameter was still set to no. I changed it to yes via the iLO interface and restarted the network. Fine.
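(For reference, a minimal way to make that change persistent from the shell, assuming the standard EL7 ifcfg layout used by oVirt Node and the interface name mentioned above:)

    # enable the interface at boot and reload networking
    sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-enp5s0f0
    systemctl restart network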
After the Gluster setup with the gdeploy script under the Cockpit interface, and after working around errors coming from /usr/share/gdeploy/scripts/blacklist_all_disks.sh, I started the hosted-engine deploy.
With the new version, I'm getting an error I have never seen before:
The Engine VM (10.114.60.117) and this host (10.114.60.134/24) will not be in the same IP subnet. Static routing configuration are not supported on automatic VM configuration. Failed to execute stage 'Environment customization': The Engine VM (10.114.60.117) and this host (10.114.60.134/24) will not be in the same IP subnet. Static routing configuration are not supported on automatic VM configuration. Hosted Engine deployment failed.
There's no input field for the HE subnet mask. Anyway, in our class C oVirt management network these ARE in the same subnet. How do I recover from this? I cannot add the /24 CIDR in the HE static IP address field; it isn't allowed.
10.114.60.117 and 10.114.60.134/24 are in the same IPv4 /24 subnet, so it shouldn't fail. The issue here seems different:
From the hosted-engine-setup log I see that you passed the VM IP address via the answer file: 2017-12-06 09:14:30,195+0100 DEBUG otopi.context context.dumpEnvironment:831 ENV OVEHOSTED_VM/cloudinitVMStaticCIDR=str:'10.114.60.117'
while the right syntax should be: OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.114.60.117/24
Did you write the answer file yourself, or did you enter the IP address in the Cockpit wizard? If so, we probably have a regression there.
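(For anyone hitting the same error, a minimal answer-file fragment with the corrected key would look roughly like this; the section header is the usual otopi one and the address/prefix are the ones from this thread:)

    [environment:default]
    OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.114.60.117/24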
I entered it while providing the data for setup, using the Cockpit interface. I tried to add the CIDR (/24), but it isn't allowed by the Cockpit web interface. No manual update of the answer file.
Moreover, the VM FQDN is asked for twice during the deploy process. Is that correct?
No, I don't think so, but I don't see it in your logs. Could you please explain it?
Yes: the first time it is requested during the initial setup of the HE VM deploy.
The second one, instead, is asked (at least in my case) in this step, after the initial setup:
So both on the Cockpit side?
[image: embedded screenshot 1]
Some additional, general questions. NetworkManager: must it be disabled when deploying the HCI solution? In my attempt, it wasn't disabled.
Simone, could you confirm whether NM must stay in place while deploying? This question has been around since 3.6... what is the "best practice"? All of my RHV environments (3.6, 4.0.1, 4.1.6) have it disabled, but I wasn't able to find any mandatory rule.
In early 3.6 you had to disable it but now you can safely keep it on.
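(A quick way to check the current state on a node, as a generic sketch only:)

    systemctl is-active NetworkManager    # running or not
    systemctl is-enabled NetworkManager   # started at boot or not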
Is there a document to follow to perform a correct deploy?
Is this one still "valid"? https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
Attached hosted-engine-setup log. TIA
-- Roberto 110-006-970
-- Roberto Nunin

Yes, both times on Cockpit.

This is the first request; note that it does not allow adding a CIDR: [image: embedded screenshot 1]

Incidentally, we have also just received another similar report:
https://bugzilla.redhat.com/show_bug.cgi?id=1522712 Perhaps a new regression? Although I can't see where it happened.
-- Didi

I've read the Bugzilla report, but after modifying the answer file, how do I provide it to the Cockpit process? I know that in a CLI installation it can be provided as a parameter of the CLI command, but in Cockpit?
-- Roberto Nunin

You are right; unfortunately we have to fix it: I don't see any possible workaround at the Cockpit level. Directly running hosted-engine-setup from the CLI instead should work.
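(A minimal sketch of that CLI route, assuming the corrected answer file has been saved on the host, for example as /root/he-answers.conf — the file name is just an example:)

    hosted-engine --deploy --config-append=/root/he-answers.conf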

Now pushed a simple fix: https://gerrit.ovirt.org/85142 You can try using the jenkins-generated RPMs: http://jenkins.ovirt.org/job/ovirt-hosted-engine-setup_master_check-patch-el7-x86_64/1067/artifact/exported-artifacts/
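(One possible way to pick up that build, as a sketch only — the exact RPM file name depends on the build and is a placeholder here:)

    # after downloading the ovirt-hosted-engine-setup RPM from the exported-artifacts page:
    yum localinstall ./ovirt-hosted-engine-setup-*.noarch.rpm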
-- Didi

It has worked. Now an additional (hopefully final) question. In the blog https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/ it is written that, selecting the cluster, the additional hosts should be visible, and that they must be imported or detached. I cannot see the additional hosts. Must I add them as new hosts? Or does this mean that the deploy hasn't finished correctly? Thanks in advance.
-- Roberto Nunin

Thanks!
Yes, please follow this section: https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/#configuring-hosts-two-and-three-for-hosted-engine

Ciao Simone,
I've followed the procedure (extracted from the same blog I mentioned in my email), but if I add hosts 2 and 3 instead of importing them, there's no chance to "see" anything when I go to add the data domain:
[image: embedded screenshot 1]
Nothing appears in the drop-down list, while glusterd is running on all three nodes:

aps-te68-mng.example.com: Status of volume: data
aps-te68-mng.example.com: Gluster process                                           TCP Port  RDMA Port  Online  Pid
aps-te68-mng.example.com: ------------------------------------------------------------------------------
aps-te68-mng.example.com: Brick aps-te61-mng.example.com:/gluster_bricks/data/data  49153     0          Y       64531
aps-te68-mng.example.com: Brick aps-te64-mng.example.com:/gluster_bricks/data/data  49153     0          Y       65012
aps-te68-mng.example.com: Brick aps-te68-mng.example.com:/gluster_bricks/data/data  49155     0          Y       75603
aps-te68-mng.example.com: Self-heal Daemon on localhost                             N/A       N/A        Y       75690
aps-te68-mng.example.com: Self-heal Daemon on aps-te64-mng.example.com              N/A       N/A        Y       75675
aps-te68-mng.example.com: Self-heal Daemon on aps-te61-mng.example.com              N/A       N/A        Y       12170
aps-te68-mng.example.com:
aps-te68-mng.example.com: Task Status of Volume data
aps-te68-mng.example.com: ------------------------------------------------------------------------------
aps-te68-mng.example.com: There are no active volume tasks
aps-te68-mng.example.com:

aps-te61-mng.example.com: Status of volume: data
aps-te61-mng.example.com: Gluster process                                           TCP Port  RDMA Port  Online  Pid
aps-te61-mng.example.com: ------------------------------------------------------------------------------
aps-te61-mng.example.com: Brick aps-te61-mng.example.com:/gluster_bricks/data/data  49153     0          Y       64531
aps-te61-mng.example.com: Brick aps-te64-mng.example.com:/gluster_bricks/data/data  49153     0          Y       65012
aps-te61-mng.example.com: Brick aps-te68-mng.example.com:/gluster_bricks/data/data  49155     0          Y       75603
aps-te61-mng.example.com: Self-heal Daemon on localhost                             N/A       N/A        Y       12170
aps-te61-mng.example.com: Self-heal Daemon on aps-te64-mng.example.com              N/A       N/A        Y       75675
aps-te61-mng.example.com: Self-heal Daemon on aps-te68-mng.example.com              N/A       N/A        Y       75690
aps-te61-mng.example.com:
aps-te61-mng.example.com: Task Status of Volume data
aps-te61-mng.example.com: ------------------------------------------------------------------------------
aps-te61-mng.example.com: There are no active volume tasks
aps-te61-mng.example.com:

aps-te64-mng.example.com: Status of volume: data
aps-te64-mng.example.com: Gluster process                                           TCP Port  RDMA Port  Online  Pid
aps-te64-mng.example.com: ------------------------------------------------------------------------------
aps-te64-mng.example.com: Brick aps-te61-mng.example.com:/gluster_bricks/data/data  49153     0          Y       64531
aps-te64-mng.example.com: Brick aps-te64-mng.example.com:/gluster_bricks/data/data  49153     0          Y       65012
aps-te64-mng.example.com: Brick aps-te68-mng.example.com:/gluster_bricks/data/data  49155     0          Y       75603
aps-te64-mng.example.com: Self-heal Daemon on localhost                             N/A       N/A        Y       75675
aps-te64-mng.example.com: Self-heal Daemon on aps-te68-mng.example.com              N/A       N/A        Y       75690
aps-te64-mng.example.com: Self-heal Daemon on aps-te61-mng.example.com              N/A       N/A        Y       12170
aps-te64-mng.example.com:
aps-te64-mng.example.com: Task Status of Volume data
aps-te64-mng.example.com: ------------------------------------------------------------------------------
aps-te64-mng.example.com: There are no active volume tasks
aps-te64-mng.example.com:

Unfortunately, there is no way to import hosts 2 and 3. I searched around the whole GUI without success. Do you have any hints to suggest? Thanks
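(For context, output in that per-host form can be gathered with a small loop like the following — just a sketch, assuming passwordless SSH between the nodes and the hostnames shown above:)

    for h in aps-te61-mng.example.com aps-te64-mng.example.com aps-te68-mng.example.com; do
        ssh "$h" gluster volume status data | sed "s/^/$h: /"
    done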
-- Roberto Nunin

On Wed, Dec 6, 2017 at 5:00 PM, Roberto Nunin <robnunin@gmail.com> wrote:
2017-12-06 15:29 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Wed, Dec 6, 2017 at 2:33 PM, Roberto Nunin <robnunin@gmail.com> wrote:
It has worked.
Thanks!
Now an additional (hopefully final) question.
In the blog https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/ it is written that, when selecting the cluster, the additional hosts should be visible and can be imported or detached.
I cannot see the additional hosts. Must I add them as new hosts?
Yes, please follow this section: https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/#configuring-hosts-two-and-three-for-hosted-engine
Ciao Simone
I've followed the procedure (from the same blog I mentioned in my email), but if I add hosts 2 and 3 instead of importing them, nothing shows up when I go to add the data domain:
If I'm not wrong, you have to create the gluster data storage domain, checking the box "Use managed gluster", before adding your additional two hosts. Once that SD is there, the data center will go up, triggering the auto-import of the hosted-engine storage domain. Only at that point will you be able to correctly add the remaining hosts.
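For reference, that first step (creating the gluster-backed data domain so the data center comes up and the hosted-engine storage domain gets auto-imported) can also be driven through the engine REST API instead of the admin portal. The sketch below is not from this thread: the engine FQDN, credentials and element names are assumptions recalled from the v4 API, so verify them against /ovirt-engine/api before use.

# Rough sketch: create a GlusterFS data domain backed by the managed "data" volume.
curl -k -u admin@internal:PASSWORD \
     -X POST -H "Content-Type: application/xml" \
     -d '<storage_domain>
           <name>data</name>
           <type>data</type>
           <storage>
             <type>glusterfs</type>
             <address>aps-te61-mng.example.com</address>
             <path>/data</path>
           </storage>
           <host><name>aps-te61-mng.example.com</name></host>
         </storage_domain>' \
     https://ENGINE_FQDN/ovirt-engine/api/storagedomains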
[image: inline image 1]
nothing appears in the drop-down list, even though glusterd is running on all three nodes:
aps-te68-mng.example.com: Status of volume: data
aps-te68-mng.example.com: Gluster process                                            TCP Port  RDMA Port  Online  Pid
aps-te68-mng.example.com: ------------------------------------------------------------------------------
aps-te68-mng.example.com: Brick aps-te61-mng.example.com:/gluster_bricks/data/data   49153     0          Y       64531
aps-te68-mng.example.com: Brick aps-te64-mng.example.com:/gluster_bricks/data/data   49153     0          Y       65012
aps-te68-mng.example.com: Brick aps-te68-mng.example.com:/gluster_bricks/data/data   49155     0          Y       75603
aps-te68-mng.example.com: Self-heal Daemon on localhost                               N/A       N/A        Y       75690
aps-te68-mng.example.com: Self-heal Daemon on aps-te64-mng.example.com                N/A       N/A        Y       75675
aps-te68-mng.example.com: Self-heal Daemon on aps-te61-mng.example.com                N/A       N/A        Y       12170
aps-te68-mng.example.com:
aps-te68-mng.example.com: Task Status of Volume data
aps-te68-mng.example.com: ------------------------------------------------------------------------------
aps-te68-mng.example.com: There are no active volume tasks
aps-te68-mng.example.com:
aps-te61-mng.example.com: Status of volume: data
aps-te61-mng.example.com: Gluster process                                            TCP Port  RDMA Port  Online  Pid
aps-te61-mng.example.com: ------------------------------------------------------------------------------
aps-te61-mng.example.com: Brick aps-te61-mng.example.com:/gluster_bricks/data/data   49153     0          Y       64531
aps-te61-mng.example.com: Brick aps-te64-mng.example.com:/gluster_bricks/data/data   49153     0          Y       65012
aps-te61-mng.example.com: Brick aps-te68-mng.example.com:/gluster_bricks/data/data   49155     0          Y       75603
aps-te61-mng.example.com: Self-heal Daemon on localhost                               N/A       N/A        Y       12170
aps-te61-mng.example.com: Self-heal Daemon on aps-te64-mng.example.com                N/A       N/A        Y       75675
aps-te61-mng.example.com: Self-heal Daemon on aps-te68-mng.example.com                N/A       N/A        Y       75690
aps-te61-mng.example.com:
aps-te61-mng.example.com: Task Status of Volume data
aps-te61-mng.example.com: ------------------------------------------------------------------------------
aps-te61-mng.example.com: There are no active volume tasks
aps-te61-mng.example.com:
aps-te64-mng.example.com: Status of volume: data
aps-te64-mng.example.com: Gluster process                                            TCP Port  RDMA Port  Online  Pid
aps-te64-mng.example.com: ------------------------------------------------------------------------------
aps-te64-mng.example.com: Brick aps-te61-mng.example.com:/gluster_bricks/data/data   49153     0          Y       64531
aps-te64-mng.example.com: Brick aps-te64-mng.example.com:/gluster_bricks/data/data   49153     0          Y       65012
aps-te64-mng.example.com: Brick aps-te68-mng.example.com:/gluster_bricks/data/data   49155     0          Y       75603
aps-te64-mng.example.com: Self-heal Daemon on localhost                               N/A       N/A        Y       75675
aps-te64-mng.example.com: Self-heal Daemon on aps-te68-mng.example.com                N/A       N/A        Y       75690
aps-te64-mng.example.com: Self-heal Daemon on aps-te61-mng.example.com                N/A       N/A        Y       12170
aps-te64-mng.example.com:
aps-te64-mng.example.com: Task Status of Volume data
aps-te64-mng.example.com: ------------------------------------------------------------------------------
aps-te64-mng.example.com: There are no active volume tasks
aps-te64-mng.example.com:
Unfortunately, there is no way to import hosts 2 and 3. I searched all around the GUI without success. Do you have any hints?
Thanks
Or does this mean that the deploy hasn't finished correctly?
Thanks in advance.
2017-12-06 13:47 GMT+01:00 Yedidyah Bar David <didi@redhat.com>:
On Wed, Dec 6, 2017 at 2:29 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Dec 6, 2017 at 1:23 PM, Roberto Nunin <robnunin@gmail.com> wrote:
I've read the Bugzilla report, but after modifying the answer file, how do I provide it to the Cockpit process? I know that in a CLI installation it can be provided as a parameter of the CLI command, but how in Cockpit?
You are right; unfortunately we have to fix it: I don't see any possible workaround at the Cockpit level.
Now pushed a simple fix:
https://gerrit.ovirt.org/85142
You can try using the jenkins-generated RPMs:
http://jenkins.ovirt.org/job/ovirt-hosted-engine-setup_master_check-patch-el7-x86_64/1067/artifact/exported-artifacts/
Directly running hosted-engine-setup from CLI instead should work.
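In practice that amounts to something like the sketch below on the node. The RPM file name is a placeholder, so pick the ovirt-hosted-engine-setup noarch package actually listed under that Jenkins artifacts directory.

# Install the patched setup package straight from the Jenkins artifacts (placeholder file name):
yum install -y http://jenkins.ovirt.org/job/ovirt-hosted-engine-setup_master_check-patch-el7-x86_64/1067/artifact/exported-artifacts/ovirt-hosted-engine-setup-VERSION.noarch.rpm

# Then run the deployment interactively from the node shell; an already
# prepared answer file can be reused with --config-append=<file>.
hosted-engine --deploy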


On Thu, Dec 7, 2017 at 12:30 PM, Roberto Nunin <robnunin@gmail.com> wrote:
Final update on yesterday's thread:
- Restarted the node installation from scratch. The only issue noticed in my lab is that the chosen network interfaces remain down after the node install. This requires access to the node console to correct the config, and in a completely automated deploy process this isn't the best condition (a possible workaround is sketched after this list). I'm obviously available to perform further tests. We are deploying the node install via PXE.
- The ovirt-hosted-engine-setup provided yesterday (ovirt-hosted-engine-setup-2.2.1-0.0.master.20171206123553.git94f4c9e.el7.centos.noarch) has solved the problem with the static IP address given to the HE VM.
- The HE VM FQDN is still requested two times, as already documented, first in phase 1), then in phase 5).
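On the first point, a possible workaround (only a sketch, assuming the enp5s0f0 interface name used earlier in this thread) is to fix the interface from the kickstart %post used for the PXE install, or once from the console:

# In the PXE kickstart %post, or by hand on the node console:
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-enp5s0f0
ifup enp5s0f0
# With NetworkManager left enabled, the equivalent would be (connection name may differ):
#   nmcli connection modify enp5s0f0 connection.autoconnect yes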
I opened a bug tracking it (the FQDN being requested twice): https://bugzilla.redhat.com/show_bug.cgi?id=1524372
- The issue with the gluster endpoint not being visible when adding the first data domain was due to the "Enable Gluster Service" checkbox not having been activated in the cluster properties. Since this is a gluster HC installation, it could be activated by default (a sketch of setting it via the API follows below).
Adding Denis here
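On the checkbox point above: besides the cluster edit dialog, the flag can in principle also be set through the REST API. Sketch only; the engine FQDN, credentials and cluster id are placeholders, and the gluster_service element name is an assumption to be checked against the API documentation.

# Look up the cluster id:
curl -k -u admin@internal:PASSWORD https://ENGINE_FQDN/ovirt-engine/api/clusters
# Enable the gluster service on that cluster:
curl -k -u admin@internal:PASSWORD \
     -X PUT -H "Content-Type: application/xml" \
     -d '<cluster><gluster_service>true</gluster_service></cluster>' \
     https://ENGINE_FQDN/ovirt-engine/api/clusters/CLUSTER_ID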
- Finally, hosts 2 and 3 aren't visible for import, so they have to be added as new hosts. Done (the equivalent API call is sketched below).
Tracked here: https://bugzilla.redhat.com/show_bug.cgi?id=1466132
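For completeness, adding the remaining hosts by hand comes down to the New Host dialog in the admin portal, or roughly the API call sketched below; host name, cluster and password are placeholders, and hosted-engine deployment for the new host is then handled as described in the blog post linked above.

# Add one of the remaining nodes as a new host (placeholders throughout):
curl -k -u admin@internal:PASSWORD \
     -X POST -H "Content-Type: application/xml" \
     -d '<host>
           <name>aps-te64-mng.example.com</name>
           <address>aps-te64-mng.example.com</address>
           <root_password>NODE_ROOT_PASSWORD</root_password>
           <cluster><name>Default</name></cluster>
         </host>' \
     https://ENGINE_FQDN/ovirt-engine/api/hosts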
- After installing hosts 2 and 3 and creating the data domain on the managed glusterfs volume, the engine domain is imported successfully.
- The hosted-engine deploy on the remaining nodes ends successfully.
Now I'm facing another problem, but I will send a separate mail.
Thanks for support! Ciao
participants (3)
- Roberto Nunin
- Simone Tiraboschi
- Yedidyah Bar David