On Thu, Sep 28, 2017 at 5:25 PM, Alona Kaplan <alkaplan(a)redhat.com> wrote:
> And if I decide to start from scratch with this new cluster CLC on
> VLAN30, can I retain my old 3 hostnames (resolving to their new IPs)? How?
>
You can -
I decided to go this way and it seems I was able to reach the desired
target. Below are my steps. I will be able to be more precise when I
repeat the procedure on another host, for which I have to do the same.
Host host1 is in cluster CLA and the original mgmt network in CLA is
ovirtmgmt on vlan167.
From the host's point of view the original ovirtmgmt is a bridge on top of
bond0.167.
I have to put host1 into cluster CLB, where the mgmt network is
ovirtmgmntZ2Z3 on vlan68.
Cluster CLB already has one host in it and the ovirtmgmt network is not
checked as "required"
(if I open Setup Networks for host host2 in CLB I see ovirtmgmt on the
right-hand side, among the unassigned networks).
1. Remove the host from the original cluster
I put host host1 into maintenance and then removed it.
2. Remove all the networks from the host using Petr's vdsm tool (
https://gerrit.ovirt.org/#/c/79495/)
Actually, I compared my 4.1.6 version of the Python scripts involved in
the tool and they seem quite different from their 4.2.0 version in
gerrit... so I didn't follow this route.
Instead, as I have to retain all networks except the management one, I:
a) connect to the host via ssh
b) create an ifcfg-bond0.68 so that:
# diff ifcfg-bond0.68 ifcfg-bond0.167
2c2
< DEVICE=bond0.68
---
> DEVICE=bond0.167
4c4
< BRIDGE=ovirtmgmntZ2Z3
---
> BRIDGE=ovirtmgmt
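For reference, the resulting full file would be roughly like this (only a
sketch: apart from DEVICE and BRIDGE every line is simply copied from the
existing ifcfg-bond0.167, so the remaining lines may differ on your
system):

# /etc/sysconfig/network-scripts/ifcfg-bond0.68 (sketch)
VLAN=yes
DEVICE=bond0.68
ONBOOT=yes
BRIDGE=ovirtmgmntZ2Z3
MTU=1500
NM_CONTROLLED=no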
c) create an ifcfg-ovirtmgmntZ2Z3 so that:
# diff ifcfg-ovirtmgmntZ2Z3 ifcfg-ovirtmgmt
2c2
< DEVICE=ovirtmgmntZ2Z3
---
> DEVICE=ovirtmgmt
7,9c7,9
< IPADDR=10.4.192.33
< NETMASK=255.255.255.0
< GATEWAY=10.4.192.254
---
> IPADDR=10.4.167.84
> NETMASK=255.255.255.0
> GATEWAY=10.4.167.254
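And the corresponding bridge file, again just a sketch (everything except
DEVICE, IPADDR, NETMASK and GATEWAY is copied from ifcfg-ovirtmgmt;
TYPE/DELAY/BOOTPROTO are shown with typical values, adapt them to what is
in your original file):

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmntZ2Z3 (sketch)
DEVICE=ovirtmgmntZ2Z3
TYPE=Bridge
DELAY=0
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.4.192.33
NETMASK=255.255.255.0
GATEWAY=10.4.192.254
NM_CONTROLLED=no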
d) ask the network guys to configure vlan 68 on the trunk ports of the
switch where the blade is connected
e) activate the new interfaces on host1:
ifup bond0.68
ifup ovirtmgmntZ2Z3
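Before touching DNS it is worth a quick sanity check that the new vlan
and bridge are really up (just a check; the addresses come from the ifcfg
files above, brctl is from bridge-utils):

[root@host1 ~]# ip -4 addr show ovirtmgmntZ2Z3  # should show 10.4.192.33/24
[root@host1 ~]# brctl show ovirtmgmntZ2Z3       # bond0.68 should be a port
[root@host1 ~]# ping -c 3 10.4.192.254          # gateway on vlan 68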
3. Change the DNS entry so that the hostname resolves to the new IP.
Done. For extra safety I also put a static entry with the new IP of host1
inside /etc/hosts on the engine and on the other host (host2) of the
target cluster, then verified that host1 is reachable from the engine and
from host2.
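In practice this is just one /etc/hosts line plus a resolution check on
the engine and on host2 (the FQDN below is a placeholder, use your real
one; the IP is the new one from the ifcfg above):

[root@engine ~]# echo "10.4.192.33  host1.example.com host1" >> /etc/hosts
[root@engine ~]# getent hosts host1   # must now return 10.4.192.33
[root@engine ~]# ping -c 3 host1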
4. Add the host to the new cluster.
Done.
I get all the "Installing..." steps and then:
Sep 29, 2017 2:36:12 PM Status of host host1 was set to NonOperational.
Sep 29, 2017 2:36:12 PM Host host1 does not comply with the cluster CLB
networks, the following networks are missing on host: 'ovirtmgmntZ2Z3'
Sep 29, 2017 2:36:08 PM Host host1 installation failed. Failed to configure
management network on the host.
Sep 29, 2017 2:36:08 PM Host host1 installation failed. Failed to configure
management network on the host.
Sep 29, 2017 2:36:05 PM Installing Host host1. Stage: Termination.
5. I select host1 and go into the Setup Networks page.
I cannot remove the ovirtmgmt network, so I simply edit it and set its
boot protocol to none.
I also drag and drop the available ovirtmgmntZ2Z3 network from the right
side onto bond0 (under the ovirtmgmt square) and edit it.
Please note that I didn't find a way to set the default route; it seems
automatic. That is why I have to remove the static configuration of
ovirtmgmt, otherwise when saving I get:
VDSM host1 command HostSetupNetworksVDS failed: Only a single default
route network is allowed.
Possibly I'm missing something here about the default route...
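For what it's worth, you can check from the host which network ends up
owning the (single allowed) default route; given the routing table shown
further below it should be something like:

[root@host1 ~]# ip route show default
default via 10.4.192.254 dev ovirtmgmntZ2Z3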
I leave both options checked:
Verify connectivity between Host and Engine
and
Save network configuration
The host is then able to save and activate its new configuration. In the
events pane I have this:
Sep 29, 2017 2:45:08 PM Status of host host1 was set to Up.
Sep 29, 2017 2:44:35 PM Managed to sync all host host1 networks.
Sep 29, 2017 2:44:35 PM (1/1): Successfully applied changes on host host1.
(User: g.cecchi@internal-authz)
Sep 29, 2017 2:44:35 PM (1/1): Applying network's changes on host host1.
(User: g.cecchi@internal-authz)
Sep 29, 2017 2:42:41 PM Network changes were saved on host host1
In /var/lib/vdsm/persistence I have:
[root@host1 network-scripts]# ll /var/lib/vdsm/persistence/netconf/nets/
total 32
-rw-r--r--. 1 root root 270 Sep 29 14:42 iscsi1
-rw-r--r--. 1 root root 253 Sep 29 14:42 iscsi2
-rw-r--r--. 1 root root 371 Sep 29 14:42 ovirtmgmntZ2Z3
-rw-r--r--. 1 root root 229 Sep 29 14:42 ovirtmgmt
-rw-r--r--. 1 root root 225 Sep 29 14:42 vlan162
-rw-r--r--. 1 root root 271 Sep 29 14:42 vlan187
-rw-r--r--. 1 root root 229 Sep 29 14:42 vlan65
-rw-r--r--. 1 root root 224 Sep 29 14:42 vlan98
[root@host1 network-scripts]#
which seems correct in terms of timestamps and contents.
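If you want to see exactly what VDSM will restore at boot, those files
should be plain JSON, so they can be pretty-printed (a quick check, using
the stock python on the host):

[root@host1 ~]# python -m json.tool /var/lib/vdsm/persistence/netconf/nets/ovirtmgmntZ2Z3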
I put host1 into maintenance and reboot it via SSH Management --> Restart.
It seems everything has been retained:
[g.cecchi@host1 ~]$ netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.4.192.254    0.0.0.0         UG        0 0          0 ovirtmgmntZ2Z3
10.4.187.0      0.0.0.0         255.255.255.0   U         0 0          0 p1p2.187
10.4.192.0      0.0.0.0         255.255.255.0   U         0 0          0 ovirtmgmntZ2Z3
10.10.100.0     0.0.0.0         255.255.255.0   U         0 0          0 p1p2
10.10.100.0     0.0.0.0         255.255.255.0   U         0 0          0 p1p1.100
[g.cecchi@host1 ~]$
And I'm able to activate it.
The storage connections (two iSCSI ones, on p1p2 and p1p1.100) have been
preserved as they were when the host was in the other cluster.
I also completed a migration of a VM from host2 to host1, the new host in
this cluster:
Sep 29, 2017 3:41:17 PM Migration completed (VM: c7service, Source: host2,
Destination: host1, Duration: 22 seconds, Total: 22 seconds, Actual
downtime: 148ms)
Sep 29, 2017 3:40:56 PM Migration started (VM: c7service, Source: host2,
Destination: host1, User: g.cecchi@internal-authz).
Apparently the only problem I have is that I still cannot remove the
ovirtmgmt network from this particular host.
If I go into the "Setup Networks" page and try to move the ovirtmgmt
network to the right side, out of the networks assigned to the host, I
get:
"
Error while executing action:
host1:
- The Management Network ('ovirtmgmt') is mandatory and cannot be
removed.
"
But the ovirtmgmt network is actually not checked as required in this
particular cluster, and in fact the other host, host2, which is part of
this cluster, does not have it assigned.
Cheers and thanks for the useful suggestions in this and the other related
thread.
Gianluca