Changing IP of host and its ovirtmgmt VLAN

Hello, a host not maintained by me was modified so that its mgmt network became ovirtmgmntZ2Z3. Originally the host had been added to the engine with its hostname and not its IP, and this simplifies things. So a DNS entry change was done while the host was in maintenance (as far as I have understood...). The guy changed the /etc/sysconfig/network-scripts/ files and apparently it was activated OK, but when the host rebooted the config was reverted due to VDSM's persistence.

As he urgently needed this host to become operational again, in the meantime I worked like this, and now have a working host:
- modified the /etc/sysconfig/network-scripts/ files with the new required configuration
- modified the files under /var/lib/vdsm/persistence/netconf/nets/, e.g. the file ovirtmgmntZ2Z3, with the correct IP and VLAN information
- sync
- then power off / power on the host

The host comes up fine, and as it had previously been put into maintenance, it could be activated and power on some VMs.

Can I consider this workflow OK, or is there any IP/network information of the host stored in the engine DB or in other parts of the engine or hosts?

I then have a question about the ovirtmgmt logical network itself, but I will open a new thread for it...

Thanks in advance,
Gianluca
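The manual workflow described above can be sketched as a script. This writes into a scratch directory instead of the live system paths, so it is safe to run anywhere; the ifcfg keys are standard initscripts ones, but the JSON layout of the vdsm persistence file is an illustrative assumption, not a verified schema.

```shell
# Sketch of the manual workflow: stage an initscripts config plus a matching
# vdsm persistence entry. Writes under a temp dir, NOT the real
# /etc/sysconfig/network-scripts and /var/lib/vdsm/persistence paths.
set -eu

ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/sysconfig/network-scripts" \
         "$ROOT/var/lib/vdsm/persistence/netconf/nets"

# 1) initscripts-style config for the new management bridge
cat > "$ROOT/etc/sysconfig/network-scripts/ifcfg-ovirtmgmntZ2Z3" <<'EOF'
DEVICE=ovirtmgmntZ2Z3
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.4.192.33
NETMASK=255.255.255.0
GATEWAY=10.4.192.254
EOF

# 2) matching vdsm persistence entry so the change survives reboots
#    (key names here are assumed for illustration)
cat > "$ROOT/var/lib/vdsm/persistence/netconf/nets/ovirtmgmntZ2Z3" <<'EOF'
{"bonding": "bond0", "vlan": 68, "bridged": true, "bootproto": "none",
 "ipaddr": "10.4.192.33", "netmask": "255.255.255.0",
 "gateway": "10.4.192.254", "defaultRoute": true}
EOF

# 3) flush to disk before the power off / power on
sync
echo "staged config under $ROOT"
```

On the real host the same files would be edited in place, followed by `sync` and a power cycle, which is exactly the sequence in the message above.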

Hello Gianluca,

Not sure I fully understood, but if the host's IP and VLAN have changed, then the correct flow is:
1) Remove the host.
2) Edit the management network with the VLAN tag you need/want.
3) Add/install the host again, making sure you use the correct/new IP (if using an IP) or the correct FQDN (if it has changed).

Note that doing things manually on the host, such as changing ovirtmgmt's configuration without the engine or VDSM, may cause problems, and the changes will not persist across reboots. If the host's IP or its FQDN has changed, then you must install the host again.

Cheers)

On Wed, Sep 27, 2017 at 12:22 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
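The remove/re-add flow above can also be driven through the engine's REST API instead of the UI. A dry-run sketch: it only prints the requests it would make, and the engine FQDN, credentials and host UUID are placeholders, not values from this thread.

```shell
# Dry-run sketch of the remove / re-add host flow via the oVirt REST API.
# Nothing is executed against a real engine; requests are only printed.
set -eu

ENGINE="engine.example.com"                      # placeholder engine FQDN
AUTH="admin@internal:PASSWORD"                   # placeholder credentials
HOST_ID="00000000-0000-0000-0000-000000000000"   # placeholder host UUID

run() { echo "would run: $*"; }   # swap the echo for "$@" to really execute

# 1) remove the host (it must be in maintenance first)
run curl -k -u "$AUTH" -X DELETE \
    "https://$ENGINE/ovirt-engine/api/hosts/$HOST_ID"

# 2) edit the management network's VLAN tag (done in the Admin Portal)

# 3) re-add the host under its (re-resolved) FQDN
run curl -k -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
    -d '<host><name>host1</name><address>host1.example.com</address><root_password>PASSWORD</root_password></host>' \
    "https://$ENGINE/ovirt-engine/api/hosts"
```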
--
Michael Burman
Quality Engineer - RHV Network, Red Hat Israel
Red Hat <https://www.redhat.com>
mburman@redhat.com  M: 0545355725  IM: mburman <https://red.ht/sig>

On Wed, Sep 27, 2017 at 4:25 PM, Michael Burman <mburman@redhat.com> wrote:
Original situation was:

1 DC, 1 cluster CLA with 2 hosts (host1 and host2); ovirtmgmt defined on VLAN 10. The engine is an external server on VLAN 5 that can reach VLAN 10 of the hosts. So far so good.

I need to add another host that is in another physical server room. Here VLAN 10 is not present, so I cannot set up ovirtmgmt. If I understand correctly, the VLAN assigned to ovirtmgmt is a DC property: I cannot have different VLANs assigned to ovirtmgmt in different clusters of the same DC, correct?

So the path: create a second cluster CLB, define on it the logical network ovirtmgmt2 on VLAN 20, set it as the mgmt network for that cluster, and add the new host host3 to CLB. So far so good: the engine on VLAN 5 is able to manage the hosts of CLA and CLB with their mgmt networks on VLAN 10 and VLAN 20.

Now it has been decided to create a new VLAN 30 that is transportable across the 2 physical locations, and to have host1, host2 and host3 become part of a new cluster CLC where the mgmt network is on VLAN 30. Can I simplify operations, given that many VMs are already in place in CLA and CLB? So the questions arise:

The 3 hosts were originally added using their DNS hostnames and not their IP addresses. Can I change my DNS settings so that the engine resolves the hostnames to the new IPs, and change the VLAN of ovirtmgmt?

And if I decide to start from scratch with this new cluster CLC on VLAN 30, can I retain my old 3 hostnames (resolving to their new IPs)? How?

Hope I was able to clarify the scenario a bit.
Gianluca

This is a good question and, to be honest, I'm really not sure what will happen, as we never tested such a scenario in which the IPs of the hosts change but the original hostnames don't. Alona, Dan, can you please share your insights here? Thanks,

On Wed, Sep 27, 2017 at 5:48 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:

On Wed, Sep 27, 2017 at 5:48 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
the 3 hosts were added originally using their dns hostname and not their IP address. Can I change my dns settings so that the engine resolves the hostnames with the new IPs and change vlan of ovirtmgmt?
You can -
1. Move the hosts to maintenance mode.
2. Change the cluster of the hosts to the new one.
3. Run setupNetworks + save network configuration directly on the host, removing the old management network and configuring the new one (with the new VLAN and IP). Petr, can you please provide the syntax of the command?
* You may first try doing this step via the UI. Make sure you uncheck the 'Verify connectivity between Host and Engine' checkbox in the Setup Networks window. I'm not sure it will work; maybe the engine will block it, since you're trying to touch the NIC with the management IP.
4. Change the DNS setting to resolve to the new IPs.
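Alona defers the exact syntax to Petr, so here is only a sketch of the kind of host-side call step 3 involves, printed rather than executed. On a 4.1 host the command-line tool is vdsm-client (Host setupNetworks / setSafeNetworkConfig), but the payload keys below are illustrative assumptions, not a verified schema.

```shell
# Sketch only: build the networks payload for a setupNetworks call and
# print the commands that would run on the host. Values come from the
# scenario in this thread; the JSON key names are assumptions.
set -eu

PAYLOAD='{"ovirtmgmntZ2Z3": {"bonding": "bond0", "vlan": 68,
  "ipaddr": "10.4.192.33", "netmask": "255.255.255.0",
  "gateway": "10.4.192.254", "defaultRoute": true, "bridged": true}}'

echo "would run on the host:"
echo "  vdsm-client Host setupNetworks networks='$PAYLOAD' bondings='{}' options='{\"connectivityCheck\": false}'"
# the "save network configuration" half of Alona's step 3:
echo "  vdsm-client Host setSafeNetworkConfig"
```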
And if I decide to start from scratch with this new cluster CLC on VLAN30, can I retain my old 3 hostnames (resolving to their new IPs)? How?
You can -
1. Remove the host from the original cluster.
2. Remove all the networks from the host using Petr's vdsm tool (https://gerrit.ovirt.org/#/c/79495/).
3. Change the DNS setting to resolve to the new IPs.
4. Add the host to the new cluster.
Hope I was able to clarify a bit the scenario
Gianluca
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On Thu, Sep 28, 2017 at 5:25 PM, Alona Kaplan <alkaplan@redhat.com> wrote:
And if I decide to start from scratch with this new cluster CLC on VLAN30, can I retain my old 3 hostnames (resolving to their new IPs)? How?
You can -
I decided to go this way and it seems I was able to reach the desired target. Below are my steps; eventually I will have to do the same, more carefully, for another host.

Host host1 is in cluster CLA, and the original mgmt network in CLA is ovirtmgmt on VLAN 167. From the host's point of view, the original ovirtmgmt is a bridge on top of bond0.167.

I have to put host1 into cluster CLB, where the mgmt network is ovirtmgmntZ2Z3 on VLAN 68. Cluster CLB already has one host in it, and the ovirtmgmt network is not checked as "required" (if I select Setup Networks for host host2 in CLB, I see it on the right-hand side).
1. remove the host from the original clusters
Put host host1 into maintenance and then removed it.
2. remove all the network from the host using Petr's vdsm tool ( https://gerrit.ovirt.org/#/c/79495/)
Actually I compared my 4.1.6 version of the Python scripts involved in the tool and they seem quite different from their 4.2.0 version in the gerrit... so I didn't follow this way. Instead, as I have to retain all networks except the management one, I:

a) connect to the host via ssh

b) create an ifcfg-bond0.68 so that:

# diff ifcfg-bond0.68 ifcfg-bond0.167
2c2
< DEVICE=bond0.68
---
> DEVICE=bond0.167
4c4
< BRIDGE=ovirtmgmntZ2Z3
---
> BRIDGE=ovirtmgmt

c) create an ifcfg-ovirtmgmntZ2Z3 so that:

# diff ifcfg-ovirtmgmntZ2Z3 ifcfg-ovirtmgmt
2c2
< DEVICE=ovirtmgmntZ2Z3
---
> DEVICE=ovirtmgmt
7,9c7,9
< IPADDR=10.4.192.33
< NETMASK=255.255.255.0
< GATEWAY=10.4.192.254
---
> IPADDR=10.4.167.84
> NETMASK=255.255.255.0
> GATEWAY=10.4.167.254

d) ask the network guys to configure VLAN 68 on the trunk switch ports of the blade

e) activate the new interfaces on host1:

ifup bond0.68
ifup ovirtmgmntZ2Z3
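The two hand-edited files can also be generated mechanically from the originals with sed. A sketch in a scratch directory; the file contents are reduced to the lines that appear in the diffs above, so the exact line numbers of a real ifcfg file will differ.

```shell
# Generate the new-network ifcfg files from the old ones with sed,
# reproducing the two diffs shown above. Runs in a temp dir, not in
# /etc/sysconfig/network-scripts.
set -eu
D=$(mktemp -d); cd "$D"

# reduced stand-ins for the original files
printf 'DEVICE=bond0.167\nVLAN=yes\nONBOOT=yes\nBRIDGE=ovirtmgmt\n' > ifcfg-bond0.167
printf 'DEVICE=ovirtmgmt\nTYPE=Bridge\nONBOOT=yes\nBOOTPROTO=none\nIPADDR=10.4.167.84\nNETMASK=255.255.255.0\nGATEWAY=10.4.167.254\n' > ifcfg-ovirtmgmt

# b) new VLAN sub-interface, enslaved to the new bridge
sed -e 's/^DEVICE=bond0\.167$/DEVICE=bond0.68/' \
    -e 's/^BRIDGE=ovirtmgmt$/BRIDGE=ovirtmgmntZ2Z3/' \
    ifcfg-bond0.167 > ifcfg-bond0.68

# c) new bridge, re-addressed onto the 10.4.192.0/24 network
sed -e 's/^DEVICE=ovirtmgmt$/DEVICE=ovirtmgmntZ2Z3/' \
    -e 's/^IPADDR=.*/IPADDR=10.4.192.33/' \
    -e 's/^GATEWAY=.*/GATEWAY=10.4.192.254/' \
    ifcfg-ovirtmgmt > ifcfg-ovirtmgmntZ2Z3
```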
3. change the dns setting to resolve to the new IPs.
Done this, and for extra safety I also put a static entry with the new IP of host1 inside /etc/hosts on the engine and on the other host (host2) of the target cluster, and verified that host1 is reachable from the engine and from host2.

4. Add the host to the new cluster.
Done. I get all the "Installing..." steps and then:

Sep 29, 2017 2:36:12 PM Status of host host1 was set to NonOperational.
Sep 29, 2017 2:36:12 PM Host host1 does not comply with the cluster CLB networks, the following networks are missing on host: 'ovirtmgmntZ2Z3'
Sep 29, 2017 2:36:08 PM Host host1 installation failed. Failed to configure management network on the host.
Sep 29, 2017 2:36:08 PM Host host1 installation failed. Failed to configure management network on the host.
Sep 29, 2017 2:36:05 PM Installing Host host1. Stage: Termination.

5. I select host1 and go into the Setup Networks page.

I cannot remove the ovirtmgmt network, so I simply edit it and set its protocol to none. I also drag and drop the available ovirtmgmntZ2Z3 network from the right to the side of bond0 (under the ovirtmgmt square) and edit it. Please note that I don't find how to set the default route; it seems automatic. It is for this reason that I have to remove the static configuration of ovirtmgmt, otherwise when saving I get:

VDSM host1 command HostSetupNetworksVDS failed: Only a single default route network is allowed.

Possibly I am missing something here about the default route... I leave both options checked: "Verify connectivity between Host and Engine" and "Save network configuration". The host is then able to save and activate its new configuration. In the events pane I have this:

Sep 29, 2017 2:45:08 PM Status of host host1 was set to Up.
Sep 29, 2017 2:44:35 PM Managed to sync all host host1 networks.
Sep 29, 2017 2:44:35 PM (1/1): Successfully applied changes on host host1. (User: g.cecchi@internal-authz)
Sep 29, 2017 2:44:35 PM (1/1): Applying network's changes on host host1. (User: g.cecchi@internal-authz)
Sep 29, 2017 2:42:41 PM Network changes were saved on host host1

In /var/lib/vdsm/persistence I have:

[root@host1 network-scripts]# ll /var/lib/vdsm/persistence/netconf/nets/
total 32
-rw-r--r--. 1 root root 270 Sep 29 14:42 iscsi1
-rw-r--r--. 1 root root 253 Sep 29 14:42 iscsi2
-rw-r--r--. 1 root root 371 Sep 29 14:42 ovirtmgmntZ2Z3
-rw-r--r--. 1 root root 229 Sep 29 14:42 ovirtmgmt
-rw-r--r--. 1 root root 225 Sep 29 14:42 vlan162
-rw-r--r--. 1 root root 271 Sep 29 14:42 vlan187
-rw-r--r--. 1 root root 229 Sep 29 14:42 vlan65
-rw-r--r--. 1 root root 224 Sep 29 14:42 vlan98
[root@host1 network-scripts]#

which seems correct in terms of timestamps and contents. I put host1 into maintenance and rebooted it via SSH Management --> Restart. And it seems everything has been maintained:

[g.cecchi@host1 ~]$ netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
0.0.0.0         10.4.192.254    0.0.0.0         UG    0 0 0 ovirtmgmntZ2Z3
10.4.187.0      0.0.0.0         255.255.255.0   U     0 0 0 p1p2.187
10.4.192.0      0.0.0.0         255.255.255.0   U     0 0 0 ovirtmgmntZ2Z3
10.10.100.0     0.0.0.0         255.255.255.0   U     0 0 0 p1p2
10.10.100.0    0.0.0.0         255.255.255.0   U     0 0 0 p1p1.100
[g.cecchi@host1 ~]$

And I'm able to activate it. The storage connections (two iSCSI ones, on p1p2 and p1p1.100) have been preserved from when the host was in the other cluster. I also completed a migration of a VM from host2 to host1 in this new cluster:

Sep 29, 2017 3:41:17 PM Migration completed (VM: c7service, Source: host2, Destination: host1, Duration: 22 seconds, Total: 22 seconds, Actual downtime: 148ms)
Sep 29, 2017 3:40:56 PM Migration started (VM: c7service, Source: host2, Destination: host1, User: g.cecchi@internal-authz).

Apparently the only problem I have is that I still cannot remove the ovirtmgmt network from this particular host. In the Setup Networks page, if I try to move the ovirtmgmt network to the right side, out of the networks of the host, I get:

"Error while executing action: host1: - The Management Network ('ovirtmgmt') is mandatory and cannot be removed."

But actually the ovirtmgmt network is not checked as required in this particular cluster, and in fact the other host (host2) that is part of this cluster does not have it assigned.

Cheers, and thanks for the useful suggestions in this and the other related thread.
Gianluca
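The "only a single default route network" rule seen above can be checked mechanically. A small sketch: it parses a routing table in `netstat -rn` format (here the output captured in this thread, fed in as a here-document) and reports which interface owns the default route.

```shell
# Extract the interface that owns the 0.0.0.0/0 route from `netstat -rn`
# style output. Fed with the routing table captured in this thread; on a
# live host you would pipe `netstat -rn` into default_iface instead.
set -eu

default_iface() { awk '$1 == "0.0.0.0" && $3 == "0.0.0.0" {print $NF}'; }

IFACE=$(default_iface <<'EOF'
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
0.0.0.0         10.4.192.254    0.0.0.0         UG    0 0 0 ovirtmgmntZ2Z3
10.4.187.0      0.0.0.0         255.255.255.0   U     0 0 0 p1p2.187
10.4.192.0      0.0.0.0         255.255.255.0   U     0 0 0 ovirtmgmntZ2Z3
10.10.100.0     0.0.0.0         255.255.255.0   U     0 0 0 p1p2
10.10.100.0     0.0.0.0         255.255.255.0   U     0 0 0 p1p1.100
EOF
)
echo "default route is on: $IFACE"
```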
participants (3):
- Alona Kaplan
- Gianluca Cecchi
- Michael Burman