Changing FQDN
by Alex K
Hi all,
I am running ovirt 4.2.8
I changed the oVirt engine FQDN using the ovirt-engine-rename tool, following
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/...
The hosts' FQDNs were also changed and all seems fine apart from the OVN
connection and the ImageIO proxy.
About OVN: I just configured OVN, and when I test the connection I get:
Failed with error Certificate for <new fqdn> doesn't match any of the
subject alternative names: [old fqdn] and code 5050)
About the ImageIO proxy: when I test the connection I get nothing. No error
in engine.log or in the GUI.
Thus it seems that I have to generate/replace new certs.
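For reference, a quick way to see which subject alternative names the provider
certificate actually carries is something like the following (the FQDN and port
are placeholders here; 9696 is assumed to be the OVN provider port):
echo | openssl s_client -connect engine.new.fqdn:9696 2>/dev/null | \
  openssl x509 -noout -text | grep -A1 'Subject Alternative Name'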
Is there a way I can fix this until I switch to 4.3 and eventually to 4.4,
where it seems that this is handled by the rename tool?
Thanks for any assistance.
Alex.
Guest VM snapshots are not retained when importing data storage domain
by Alex K
Hi all,
I have a dual-node self-hosted cluster (v4.3) using Gluster as storage, set up
to test an actual scenario that will need to be followed in production.
The purpose is to rename the cluster FQDN to a new one, wiping out any
reference to the previous FQDN. I was not successful using the
engine-rename tool or other means, as there are leftovers from the previous
FQDN that cause issues.
The cluster has a data storage domain with one guest VM running on it, which
has one snapshot.
I am testing a destructive scenario as below, and I find that when
importing the storage domain into the newly configured cluster, while the
guest VM is imported fine, I do not see the guest VM's disk snapshots.
Steps that I follow for this scenario:
*Initial status:*
I have an ovirt cluster with two hosts named v0 and v1.
The gluster storage domain is configured at a separate network where the
hosts are named gluster0 and gluster1.
The cluster has an engine and data storage domain named "engine" and "vms"
respectively.
The "vms" storage domain hosts one guest VM with one guest VM disk
snapshot.
All are configured with the FQDN *localdomain.local*.
*# Steps to rename the cluster to the new FQDN lab.local and import the "vms"
storage domain*
1. Set the v1 oVirt host to maintenance, then remove it from the GUI.
2. At v1, install fresh CentOS 7 using the new FQDN lab.local.
3. At v0, set global maintenance and shut down the engine. Remove the engine
storage data (a complete wipe of any engine-related data; what matters
is only the guest VMs and their snapshots).
4. At v0, remove the bricks of v1 belonging to the "engine" and "vms" gluster
volumes and detach gluster peer v1:
gluster volume remove-brick engine replica 1 gluster1:/gluster/engine/brick
force
gluster volume remove-brick vms replica 1 gluster1:/gluster/vms/brick force
gluster peer detach gluster1
5. On v1, prepare the gluster service; then, from v0, reattach the peer and
add v1's bricks. At this phase all data from the vms gluster volume will be
synced to the new host. Verify with `gluster volume heal vms info`.
From the v0 server run:
gluster peer probe gluster1
gluster volume add-brick engine replica 2 gluster1:/gluster/engine/brick
gluster volume add-brick vms replica 2 gluster1:/gluster/vms/brick
At this point all gluster volumes are up and in sync. We confirm the "vms"
sync with:
gluster volume heal vms info
6. At the freshly installed v1, deploy the engine using the same clean gluster
engine volume:
hosted-engine --deploy --config-append=/root/storage.conf
--config-append=answers.conf (use new FQDN!)
7. Upon completion of the engine deployment, and after having ensured the vms
gluster volume is synced (step 5), remove the bricks of host v0 (v0 should now
not be visible in the oVirt GUI) and detach gluster peer v0.
At the v1 host run:
gluster volume remove-brick engine replica 1 gluster0:/gluster/engine/brick
force
gluster volume remove-brick vms replica 1 gluster0:/gluster/vms/brick force
gluster peer detach gluster0
8. Install fresh CentOS 7 on v0 and prepare it with the oVirt node packages,
networking and gluster.
9. Re-add v0's gluster bricks to the volumes and confirm the sync with
`gluster volume heal vms info`.
At the v1 host run:
gluster peer probe gluster0
gluster volume add-brick engine replica 2 gluster0:/gluster/engine/brick
gluster volume add-brick vms replica 2 gluster0:/gluster/vms/brick
10. At the engine, add an entry for the v0 host in /etc/hosts. In the oVirt GUI, add v0.
/etc/hosts:
10.10.10.220 node0 v0.lab.local
10.10.10.221 node1 v1.lab.local
10.10.10.222 engine.lab.local engine
10.100.100.1 gluster0
10.100.100.2 gluster1
11. In the oVirt GUI, import the vms gluster volume as the vms storage domain.
At this step I have to approve the operation:
[image: image.png]
12. In the oVirt GUI, import the VMs from the vms storage domain.
At this step the VM is found and imported from the imported storage domain
"vms", but the VM does not show the previously available disk snapshot.
The import of the storage domain should have retained the guest VM
snapshot.
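One way to check whether the snapshot data itself survived on the imported
domain (as opposed to just the engine metadata) is to look at the image chain
directly on the gluster mount; a rough sketch, with the storage-domain, image
and volume UUIDs as placeholders:
ls /rhev/data-center/mnt/glusterSD/gluster1:_vms/<sd-uuid>/images/<image-uuid>/
qemu-img info --backing-chain \
  /rhev/data-center/mnt/glusterSD/gluster1:_vms/<sd-uuid>/images/<image-uuid>/<volume-uuid>
A disk with a snapshot should show more than one volume file there, with the
backing chain linking them.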
How can this be troubleshot? Do I have to keep some kind of engine DB
backup so as to make the snapshots visible? If yes, is it possible to
restore this backup to a fresh engine that has a new FQDN?
Thanks very much for any advice and hints.
Alex
best way to have storage vlan available to host AND vms?
by Philip Brown
I'm trying to allow a particular iSCSI VLAN to be available to all hosts, but also to a few select VMs.
I'm finding this challenging, since prior to now I did the host iSCSI config at the host-local level:
I used the Cockpit GUI to create a "VLAN" entity, assigned it to an interface, and then configured an IP address.
But when I attempt to create a "network" (aka VLAN) entity from the main hosted-engine level, it seems to conflict with the prior host-local created ones,
and when I remove the host-local entries...
I no longer seem to have a way in the GUI to create IP addresses for the host from the host's Cockpit.
It recognises that the entity exists, but puts it in the "unmanaged" section.
So.. how can I handle this best?
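For context, what I think I'm after on the engine side is a VLAN-tagged logical
network marked as a VM network (so it can also attach to the few VMs), with the
host IPs then assigned via the engine's "Setup Host Networks" rather than
Cockpit. A rough sketch of creating such a network with the Python SDK (the
URL, credentials, names and VLAN id below are placeholders, not my actual
setup):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine API (placeholder URL/credentials).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

# Create a VLAN-tagged logical network usable as a VM network.
networks_service = connection.system_service().networks_service()
networks_service.add(
    types.Network(
        name='iscsi_vlan',
        data_center=types.DataCenter(name='Default'),
        vlan=types.Vlan(id=100),
        usages=[types.NetworkUsage.VM],
    ),
)
connection.close()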
--
Philip Brown| Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310| Fax 714.918.1325
pbrown(a)medata.com| www.medata.com
PKI Problem
by ramon@clematide.ch
Hi
I did a fresh installation of version 4.4.0.3. After the engine setup I replaced the Apache certificate with a custom certificate. I used this article to do it: https://myhomelab.gr/linux/2020/01/20/replacing_ovirt_ssl.html
To summarize, I replaced these files with my own authority and the signed custom certificate:
/etc/pki/ovirt-engine/keys/apache.key.nopass
/etc/pki/ovirt-engine/certs/apache.cer
/etc/pki/ovirt-engine/apache-ca.pem
That worked so far: Apache now uses my certificate and login is possible. To set up a new machine, I need to upload an ISO image, which failed. I found this error in /var/log/ovirt-imageio/daemon.log:
2020-07-08 20:43:23,750 INFO (Thread-10) [http] OPEN client=192.168.1.228
2020-07-08 20:43:23,767 INFO (Thread-10) [backends.http] Open backend netloc='the_secret_hostname:54322' path='/images/ef60404c-dc69-4a3d-bfaa-8571f675f3e1' cafile='/etc/pki/ovirt-engine/apache-ca.pem' secure=True
2020-07-08 20:43:23,770 ERROR (Thread-10) [http] Server error
Traceback (most recent call last):
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", line 699, in __call__
self.dispatch(req, resp)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", line 744, in dispatch
return method(req, resp, *match.groups())
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/cors.py", line 84, in wrapper
return func(self, req, resp, *args)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/images.py", line 66, in put
backends.get(req, ticket, self.config),
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/__init__.py", line 53, in get
cafile=config.tls.ca_file)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 48, in open
secure=options.get("secure", True))
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 63, in __init__
options = self._options()
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 364, in _options
self._con.request("OPTIONS", self.url.path)
File "/usr/lib64/python3.6/http/client.py", line 1254, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib64/python3.6/http/client.py", line 1300, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib64/python3.6/http/client.py", line 1249, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib64/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/lib64/python3.6/http/client.py", line 974, in send
self.connect()
File "/usr/lib64/python3.6/http/client.py", line 1422, in connect
server_hostname=server_hostname)
File "/usr/lib64/python3.6/ssl.py", line 365, in wrap_socket
_context=self, _session=session)
File "/usr/lib64/python3.6/ssl.py", line 776, in __init__
self.do_handshake()
File "/usr/lib64/python3.6/ssl.py", line 1036, in do_handshake
self._sslobj.do_handshake()
File "/usr/lib64/python3.6/ssl.py", line 648, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)
2020-07-08 20:43:23,770 INFO (Thread-10) [http] CLOSE client=192.168.1.228 [connection 1 ops, 0.019775 s] [dispatch 1 ops, 0.003114 s]
I'm a Python developer so I had no problem reading the traceback.
The SSL handshake fails when imageio tries to connect to what I think is called an ovn-provider. But it is using my new authority certificate, cafile='/etc/pki/ovirt-engine/apache-ca.pem', which does not validate the certificate generated by the oVirt engine setup, which the ovn-provider probably uses.
I didn't know exactly where the parameter for the validation CA file is. Probably it is the ca_file parameter in /etc/ovirt-imageio/conf.d/50-engine.conf, but that needs to be set to my own authority CA file.
I modified the Python file to set the ca_file parameter to the engine setup's CA file directly:
/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/__init__.py
So the function call around line 50 looks like this:
    backend = module.open(
        ticket.url,
        mode,
        sparse=ticket.sparse,
        dirty=ticket.dirty,
        cafile='/etc/pki/ovirt-engine/ca.pem'  # config.tls.ca_file
    )
Now the image upload works, but obviously this is not the way to fix things. Is there another way to make imageio accept the certificate from the engine setup, while using my custom certificate? I don't want to replace the certificates of all oVirt components with custom certificates. I only need the web login with my custom certificate.
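If a configuration-only route exists, I would guess it looks something like a
drop-in under /etc/ovirt-imageio/conf.d/ pointing the backend verification back
at the engine CA, instead of patching the module (the file name and the
assumption that later drop-ins override 50-engine.conf are mine):
# /etc/ovirt-imageio/conf.d/99-local.conf
[tls]
ca_file = /etc/pki/ovirt-engine/ca.pem
followed by a restart of the imageio service (systemctl restart ovirt-imageio,
or whatever the service is called on the engine).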
Regards
Problem with paused VMs in ovirt 4.3.10.
by Dmitry Sekretev
Hi!
We have a problem with paused VMs in our oVirt cluster. Please help us
solve this.
In the oVirt manager the message is "VM rtb-stagedsw02-ovh has been paused."
Resume fails with the error "Failed to resume VM rtb-stagedsw02-ovh (Host:
ovirt-node09-ovh.local, User: admin@internal-authz)."
The oVirt cluster has 38 VMs; the only VM that gets paused is an Ubuntu 20.04
(Focal) guest running Docker Swarm.
Archived logs are attached.
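If it helps, the pause reason recorded by libvirt can presumably be read
directly on the host with something like the following (the VM name is taken
from the message above; -r opens a read-only connection that needs no SASL
credentials):
virsh -r domstate rtb-stagedsw02-ovh --reason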
Packages on the oVirt nodes:
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
ovirt-provider-ovn-driver-1.2.29-1.el7.noarch
ovirt-vmconsole-host-1.0.7-2.el7.noarch
python2-ovirt-host-deploy-1.8.5-1.el7.noarch
ovirt-imageio-common-1.5.3-0.el7.x86_64
cockpit-machines-ovirt-195.6-1.el7.centos.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
ovirt-host-dependencies-4.3.5-1.el7.x86_64
ovirt-host-4.3.5-1.el7.x86_64
python-ovirt-engine-sdk4-4.3.4-2.el7.x86_64
ovirt-host-deploy-common-1.8.5-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.32-1.el7.noarch
ovirt-hosted-engine-setup-2.3.13-1.el7.noarch
ovirt-ansible-repositories-1.1.5-1.el7.noarch
ovirt-imageio-daemon-1.5.3-0.el7.noarch
cockpit-ovirt-dashboard-0.13.10-1.el7.noarch
ovirt-release43-4.3.10-1.el7.noarch
ovirt-hosted-engine-ha-2.3.6-1.el7.noarch
Packages on the HostedEngine:
ovirt-ansible-infra-1.1.13-1.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.3.10.4-1.el7.noarch
ovirt-engine-websocket-proxy-4.3.10.4-1.el7.noarch
ovirt-engine-restapi-4.3.10.4-1.el7.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch
ovirt-iso-uploader-4.3.2-1.el7.noarch
ovirt-provider-ovn-1.2.29-1.el7.noarch
ovirt-imageio-proxy-setup-1.5.3-0.el7.noarch
ovirt-engine-extension-aaa-ldap-setup-1.3.10-1.el7.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.3.10.4-1.el7.noarch
python-ovirt-engine-sdk4-4.3.4-2.el7.x86_64
python2-ovirt-host-deploy-1.8.5-1.el7.noarch
ovirt-ansible-vm-infra-1.1.22-1.el7.noarch
ovirt-engine-metrics-1.3.7-1.el7.noarch
ovirt-ansible-disaster-recovery-1.2.0-1.el7.noarch
ovirt-engine-wildfly-overlay-17.0.1-1.el7.noarch
ovirt-ansible-roles-1.1.7-1.el7.noarch
ovirt-engine-dwh-setup-4.3.8-1.el7.noarch
python2-ovirt-engine-lib-4.3.10.4-1.el7.noarch
ovirt-engine-extension-aaa-ldap-1.3.10-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.3.10.4-1.el7.noarch
ovirt-engine-vmconsole-proxy-helper-4.3.10.4-1.el7.noarch
ovirt-engine-tools-backup-4.3.10.4-1.el7.noarch
ovirt-engine-webadmin-portal-4.3.10.4-1.el7.noarch
ovirt-host-deploy-common-1.8.5-1.el7.noarch
ovirt-ansible-image-template-1.1.12-1.el7.noarch
ovirt-ansible-manageiq-1.1.14-1.el7.noarch
ovirt-engine-wildfly-17.0.1-1.el7.x86_64
ovirt-ansible-hosted-engine-setup-1.0.32-1.el7.noarch
ovirt-imageio-common-1.5.3-0.el7.x86_64
ovirt-imageio-proxy-1.5.3-0.el7.noarch
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
ovirt-vmconsole-proxy-1.0.7-2.el7.noarch
ovirt-engine-setup-base-4.3.10.4-1.el7.noarch
ovirt-engine-setup-plugin-cinderlib-4.3.10.4-1.el7.noarch
ovirt-engine-extensions-api-impl-4.3.10.4-1.el7.noarch
ovirt-release43-4.3.10-1.el7.noarch
ovirt-engine-backend-4.3.10.4-1.el7.noarch
ovirt-engine-tools-4.3.10.4-1.el7.noarch
ovirt-web-ui-1.6.0-1.el7.noarch
ovirt-ansible-cluster-upgrade-1.1.14-1.el7.noarch
ovirt-cockpit-sso-0.1.1-1.el7.noarch
ovirt-engine-ui-extensions-1.0.10-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.3.10.4-1.el7.noarch
ovirt-engine-4.3.10.4-1.el7.noarch
ovirt-ansible-repositories-1.1.5-1.el7.noarch
ovirt-engine-extension-aaa-jdbc-1.1.10-1.el7.noarch
ovirt-host-deploy-java-1.8.5-1.el7.noarch
ovirt-engine-dwh-4.3.8-1.el7.noarch
ovirt-engine-api-explorer-0.0.5-1.el7.noarch
ovirt-guest-agent-common-1.0.16-1.el7.noarch
ovirt-engine-setup-4.3.10.4-1.el7.noarch
ovirt-engine-dbscripts-4.3.10.4-1.el7.noarch
In /var/log/ovirt-engine/engine.log:
2020-07-24 09:38:44,472+03 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(EE-ManagedThreadFactory-engineScheduled-Thread-91) [] VM
'18f6bb79-ba9b-4a0e-bcb2-b4ef4904ef99'(rtb-stagedsw02-ovh) move
d from 'Up' --> 'Paused'
2020-07-24 09:38:44,493+03 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-91) [] EVENT_ID:
VM_PAUSED(1,025), VM rtb-stagedsw02-ovh has been paused.
In /var/log/vdsm/vdsm.log
2020-07-24 09:38:42,771+0300 INFO (libvirt/events) [virt.vm]
(vmId='18f6bb79-ba9b-4a0e-bcb2-b4ef4904ef99') CPU stopped: onSuspend
(vm:6100)
2020-07-24 09:38:44,328+0300 INFO (jsonrpc/1) [api.host] FINISH
getAllVmIoTunePolicies return={'status': {'message': 'Done', 'code':
0}, 'io_tune_policies_dict':
{'4d9519f6-1ab9-4032-8fdf-4c6118531544': {'poli
cy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L,
'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L,
'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path':
'/rhev/data-center/mnt/glust
erSD/10.0.11.107:_vmstore02/16c5070c-cc5f-4595-965f-66838c7c17a5/images/e1cfb9ec-39d8-416d-9f5f-0b54765301d4/8f95d60d-931b-4764-993c-ba9373efe361',
'name': 'sda'}]}, 'b031a269-6bcd-40b7-9737-e47112a54b3a': {'po
licy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L,
'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L,
'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path':
'/rhev/data-center/mnt/glu
sterSD/10.0.11.101:_vmstore01/5e05fed3-448b-4f86-b5ba-004982194c90/images/9c3cc7a0-254e-4756-91b6-fb54e21abf38/71dd8024-8aec-46da-a80f-34260655e929',
'name': 'sda'}, {'ioTune': {'write_bytes_sec': 0L, 'total_io
ps_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L,
'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path':
'/rhev/data-center/mnt/glusterSD/10.0.11.101:_vmstore01/5e05fed3-448b-4f86-b5ba-004982194c90/images/
3e3a5064-5fe1-40c0-81f5-44f1a3a4d503/13549972-82de-4746-aeea-3e1531f9c180',
'name': 'sdb'}]}, 'b5fad17c-fa9d-4a80-99e7-6f86e6e19c9b': {'policy':
[], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_
iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L,
'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path':
'/rhev/data-center/mnt/glusterSD/10.0.11.107:_vmstore02/16c5070c-cc5f-4595-965f-66838c7c17a5/image
s/15ce6cb0-6f06-4a31-92d8-b6e1bcabf3bc/613de344-d1ad-49aa-a2d0-d60ca9eb7cd3',
'name': 'sda'}]}, '18f6bb79-ba9b-4a0e-bcb2-b4ef4904ef99': {'policy':
[], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'tota
l_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L,
'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path':
u'/rhev/data-center/mnt/glusterSD/10.0.11.107:_vmstore02/16c5070c-cc5f-4595-965f-66838c7c17a5/im
ages/7978e2db-c560-4315-a775-223f1b13ae31/d927eea8-e588-449e-b07b-c845d15b082e',
'name': 'sda'}, {'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec':
0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_s
ec': 0L, 'total_bytes_sec': 0L}, 'path':
u'/rhev/data-center/mnt/glusterSD/10.0.11.107:_vmstore02/16c5070c-cc5f-4595-965f-66838c7c17a5/images/b925dc2e-17ba-470d-a9be-cb96d4ef1f0d/951d9712-7160-4f88-a838-970aec8
2b3ea', 'name': 'sdb'}]}}} from=::1,34598 (api:54)
2020-07-24 09:38:49,747+0300 WARN (qgapoller/1)
[virt.periodic.VmDispatcher] could not run <function <lambda> at
0x7fe5c84de6e0> on ['18f6bb79-ba9b-4a0e-bcb2-b4ef4904ef99']
(periodic:289)
In /var/log/libvirt/qemu/rtb-stagedsw03-ovh.log
KVM: entry failed, hardware error 0x80000021
If you're running a guest on an Intel machine without unrestricted mode
support, the failure can be most likely due to the guest entering an
invalid
state for Intel VT. For example, the guest maybe running in big real
mode
which is not supported on less recent Intel processors.
EAX=00001000 EBX=43117da8 ECX=0000000c EDX=00000121
ESI=00000003 EDI=17921000 EBP=43117cb0 ESP=43117c98
EIP=00008000 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=1 HLT=0
ES =0000 00000000 ffffffff 00809300
CS =9b00 7ff9b000 ffffffff 00809300
SS =0000 00000000 ffffffff 00809300
DS =0000 00000000 ffffffff 00809300
FS =0000 00000000 ffffffff 00809300
GS =0000 00000000 ffffffff 00809300
LDT=0000 00000000 000fffff 00000000
TR =0040 001ce000 0000206f 00008b00
GDT= 001cc000 0000007f
IDT= 00000000 00000000
CR0=00050032 CR2=17921000 CR3=2b92a003 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000
DR3=0000000000000000
DR6=00000000fffe0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
<ff> ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ff ff ff ff ff ff ff ff
Changing management network
by Alex K
Hi all,
I see this has been asked in the past, though I was not able to find a
specific answer.
I want to change the management network to a new subnet. In this case I am
using /etc/hosts and not DNS, though the steps can also be performed with
DNS.
What I do is:
1. Enable global maintenance.
2. SSH into the engine, change its IP address to the new network, and update
/etc/hosts with the new engine and host IP addresses. Restart the engine's
network (a rough sketch of this step is shown after this list).
3. Update /etc/hosts at each host to reflect the new engine and host IPs.
4. Log in to the engine. The hosts will by now be shown as down. For each host,
set the host to maintenance and change its management network. Activate the
host, migrate the engine, and repeat.
5. Disable global maintenance.
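For step 2, on an EL7 engine VM the changes amount to roughly the following
(the interface name and addresses are placeholders):
vi /etc/sysconfig/network-scripts/ifcfg-eth0   # set IPADDR/PREFIX/GATEWAY for the new subnet
vi /etc/hosts                                  # update the engine and host entries
systemctl restart network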
I did receive some errors at step 4, failing to update the host network and
such, though I did run at each host:
ip addr add 10.10.50.X/24 dev ovirtmgmt
so as to add the new host IP on the fly, in case the engine wanted to have
access a priori to this new IP (as already configured in the engine's
/etc/hosts), and I unchecked "verify engine connectivity" when saving the
network changes. When doing this, the host management network was updated.
Now, to make it slightly more difficult, I also want to add a VLAN tag to
the same management network. When I attempt to enable VLAN tagging on the
oVirt management network I get:
[image: image.png]
I proceed, and since the VLANs are already configured, after switching the
host ports to the VLAN network I do get access to the hosts and engine.
Do you think this is fine, or are there any other recommended steps?
Thanks for any feedback.
Alex
Re: Management Engine IP change
by Alex K
On Thu, Jul 30, 2020 at 2:45 PM Yedidyah Bar David <didi(a)redhat.com> wrote:
> On Thu, Jul 30, 2020 at 1:47 PM Alex K <rightkicktech(a)gmail.com> wrote:
>
>>
>>
>> On Thu, Jul 30, 2020 at 1:30 PM Yedidyah Bar David <didi(a)redhat.com>
>> wrote:
>>
>>> On Thu, Jul 30, 2020 at 1:20 PM Alex K <rightkicktech(a)gmail.com> wrote:
>>> >
>>> >
>>> >
>>> > On Thu, Jul 30, 2020 at 12:56 PM Yedidyah Bar David <didi(a)redhat.com>
>>> wrote:
>>> >>
>>> >> On Thu, Jul 30, 2020 at 12:42 PM Alex K <rightkicktech(a)gmail.com>
>>> wrote:
>>> >> >
>>> >> >
>>> >> >
>>> >> > On Thu, Jul 30, 2020 at 12:01 PM Yedidyah Bar David <
>>> didi(a)redhat.com> wrote:
>>> >> >>
>>> >> >> On Thu, Jul 30, 2020 at 11:30 AM Alex K <rightkicktech(a)gmail.com>
>>> wrote:
>>> >> >>>
>>> >> >>>
>>> >> >>>
>>> >> >>> On Tue, Jul 28, 2020 at 11:51 AM Anton Louw via Users <
>>> users(a)ovirt.org> wrote:
>>> >> >>>>
>>> >> >>>>
>>> >> >>>>
>>> >> >>>> Hi All,
>>> >> >>>>
>>> >> >>>>
>>> >> >>>>
>>> >> >>>> Does somebody perhaps know the process of changing the Hosted
>>> Engine IP address? I see that it is possible, I am just not sure if it is a
>>> straight forward process using ‘nmtui’ or editing the network config file.
>>> I have also ensured that everything was configured using the FQDN.
>>> >> >>>
>>> >> >>> Since the FQDN is not changing you should not have issues just
>>> updating your DNS then changing manually the engine IP from the ifcfg-ethx
>>> files then restart networking.
>>> >> >>> What i find difficult and perhaps impossible is to change engine
>>> FQDN, as one will need to regenerate all certs from scratch (otherwise you
>>> will have issues with several services: imageio proxy, OVN, etc) and there
>>> is no such procedure documented/or supported.
>>> >> >>
>>> >> >>
>>> >> >> I wonder - how/what did you search for, that led you to this
>>> conclusion? Or perhaps you even found it explicitly written somewhere?
>>> >> >
>>> >> > Searching around and testing in LAB. I am testing 4.3 though not
>>> 4.4. I used engine-rename tool and although was able to change fqdn for
>>> hosts and engine, I observed that some certificates were left out (for
>>> example OVN was still complaining about certificate issue with subject name
>>> not agreeing with the new FQDN - checking/downloading the relevant cert was
>>> still showing the previous FQDN). I do not deem successful the renaming of
>>> not all services are functional.
>>> >>
>>> >> Very well.
>>> >>
>>> >> I'd find your above statement less puzzling if you wrote instead "...
>>> >> and the procedure for doing this is buggy/broken/incomplete"...
>>> >
>>> > I'm sorry for the confusion.
>>>
>>> No problem :-)
>>>
>>> >>
>>> >>
>>> >> >>
>>> >> >>
>>> >> >> There actually is:
>>> >> >>
>>> >> >>
>>> >> >>
>>> https://www.ovirt.org/documentation/administration_guide/#sect-The_oVirt_...
>>> >> >
>>> >> >
>>> >> > At this same link it reads:
>>> >> > While the ovirt-engine-rename command creates a new certificate for
>>> the web server on which the Engine runs, it does not affect the certificate
>>> for the Engine or the certificate authority. Due to this, there is some
>>> risk involved in using the ovirt-engine-rename command, particularly in
>>> environments that have been upgraded from Red Hat Enterprise Virtualization
>>> 3.2 and earlier. Therefore, changing the fully qualified domain name of the
>>> Engine by running engine-cleanup and engine-setup is recommended where
>>> possible.
>>> >> > explaining my above findings from the tests.
>>> >>
>>> >> No. These are two different things:
>>> >>
>>> >> 1. Bugs. All software has bugs. Hopefully we fix them over time. If
>>> >> you find one, please file it.
>>> >>
>>> >> 2. Inherent design (or other) problems - the software works as
>>> >> intended, but that's not what you want...
>>> >
>>> > I do not intend to blame anyone. I really appreciate the work you all
>>> are doing with this great project and understand that the community stream
>>> may have bugs and rough edges or simply I might not be well informed.
>>> >>
>>> >>
>>> >> See also:
>>> >>
>>> >>
>>> https://www.ovirt.org/develop/networking/changing-engine-hostname.html
>>> >>
>>> >> >>
>>> >> >>
>>> >> >> That said, it indeed was somewhat broken for some time now - some
>>> fixed were only added quite recently, and are available only in current 4.4:
>>> >> >
>>> >> > This is interesting and needed for migration scenarios.
>>> >>
>>> >> Can you please elaborate?
>>> >
>>> > I am thinking about a scenario where one will need to migrate a DC
>>> from one FQDN to a completely new one (say I currently have
>>> host1.domain1.com, host2.domain1.com, engine.domain1.com and want to
>>> switch to host1.domain2.com, host2.domain2.com, engine.domain2.com) I
>>> am currently facing one such need. I need to migrate existing DC from
>>> domain1.com to domain2.com. Tried the engine-rename tool and changed
>>> IPs of engine and hosts but observed the OVN certificate issue with 4.3. In
>>> case this is sorted with 4.4 then I will see if this resolves my issue.
>>>
>>> These are _names_, for the same machines, right? I'd call it a rename,
>>> then, not a migration.
>>>
>> Indeed. It is a rename. Same dc/cluster with different names. (my setup
>> is one DC which has one cluster)
>>
>>>
>>> If it's migration (you have two sets of physical machines, and want to
>>> migrate the VMs from one set to the other), indeed using storage
>>> import is simpler (perhaps using the DR tool/doc).
>>>
>> I tested a storage domain import at a 4.3 virtual test environment at
>> same renamed DC/cluster and found out that the VM snapshots where not
>> retained.
>> As per
>> https://www.ovirt.org/develop/release-management/features/storage/imports...
>> docs it should have kept the snapshots metadata. I am wondering why this is
>> as losing the snapshots will be a major issue for me.
>>
>
> I have no idea - please start a new thread, or file a bug and attach
> relevant logs. Thanks.
>
>
>>
>> The steps I followed for the rename and data storage import are the
>> following:
>>
>> *Assumptions: *
>> We have an ovirt cluster (v4.3) with two hosts named v0 and v1.
>> Also the "vms" storage domain does have the guest VMs. The guest VMs do
>> have disk snapshots.
>>
>> *Steps: *
>> 1. Set v1 ovirt host at maintenance then remove it.
>>
> 2. At v1 install fresh CentOS7 using the new FQDN
>>
>
> You mean you have gluster on separate disks and do not wipe them during
> reinstall, I guess. Reasonable, but a bit risky, depending on exactly how
> you do this.
>
> If this is important, I'd consider doing a test (perhaps with a copy of
> your real data, if possible), and see that I can restore everything from v1
> after this reinstallation (e.g. for the case where v0 dies right after or
> during the reinstallation).
>
> Or perhaps you mean that you do wipe everything? This means you have no
> storage replication for the duration of the process (which probably takes
> several hours?).
>
I mean I do a complete wipe of the host, having first removed its gluster
bricks. The data are retained on the other host. Then later the same clean
host is added as a gluster peer and the relevant storage domain is synced
back, so as to repeat the wipe on the remaining host. I tested this and all
data are retained, except that the VM snapshots are lost or not visible when
importing back the same storage domain, which does not seem to be related
to gluster. In the test environment it takes only a few minutes to sync as I
have only one VM on this storage domain.
>
>
>> 3. at v0, set global maintenance and shutdown engine. wipe the engine
>> storage data from the relevant gluster mount. (the engine VM is completely
>> deleted!)
>> 4. at v0, remove bricks belonging to v1 and detach gluster peer v1.
>> 5. On v1, prepare gluster service, reattach peer and add bricks from v0.
>> At this phase all data from vms gluster volume will be synced to the new
>> host. Verify with `gluster heal info vms`.
>> 6. At freshly installed v1 install engine using the same clean gluster
>> engine volume:
>> `hosted-engine --deploy --config-append=/root/storage.conf
>> --config-append=answers.conf` (use new FQDN!)
>> 7. Upon completion of engine deployment and after having ensured the vms
>> gluster volume is synced (step 5) remove bricks of v0 host (v0 now should
>> not be visible at ovirt GUI) and detach gluster peer v0.
>> 8. Install fresh CentOS7 on v0 and prepare it with ovirt node packages,
>> networking and gluster.
>> 9. At v0, attach gluster bricks from v1. Confirm sync with gluster volume
>> heal info.
>> 10. at engine, add entry for v0 host at /etc/hosts or update your DNS. At
>> ovirt GUI, add v0 host
>> 11. At ovirt GUI import vms gluster volume as vms storage domain.
>> 12. At ovirt GUI, import VMs from vms storage domain.
>>
>> At step 11 I had to confirm the import as I received the following:
>>
>> [image: image.png]
>>
>
> Perhaps this is why (or related to) you did not get the snapshots? But I
> really don't know. Just note this on the other thread, when you post it.
>
This is the only blocking issue to complete my renaming of the cluster. I'm
happy with the long steps to wipe and set up from scratch, as long as the
snapshots are retained. I will try to open a new thread.
>
>
>> At step 12, I successfully imported the VM though observed that the VM
>> had no any snapshots.
>>
>>
>>
>>> >>
>>> >>
>>> >> If it's DR migration, perhaps you want storage export/import, as is
>>> >> done using the DR tool:
>>> >>
>>> >>
>>> https://www.ovirt.org/documentation/disaster-recovery-guide/disaster-reco...
>>> >>
>>> >> If you just want to use a new name, but do not need to completely
>>> >> forget the old one, you can add it using SSO_ALTERNATE_ENGINE_FQDNS.
>>> >
>>> > I need to wipe out completely any reference to the old domain/FQDN.
>>>
>>> If it's indeed really completely, as in "if someone finds the old name
>>> somewhere, it's going to be a problem/cost money/whatever", then the
>>> rename tool is not for you. It's designed to impose minimal downtime
>>> and use the new name wherever really important, but will keep the old
>>> name e.g. in the CA (meaning, in the ca cert, and all the certs it
>>> signed/signs). If that's a problem for you, the rename tool is not
>>> solving it. If in current 4.4 you find an "important" place with the
>>> old name, please file a bug. Thanks.
>>>
>> Yes, it must be wiped due to policy. I do not think there are other
>> implications :)
>>
>
> :-)
>
>
>> Speaking about the production, which is still 4.2, I will then have to
>> upgrade to 4.3 and then 4.4 if my LAB tests confirm the full rename is
>> sorted at 4.4.
>>
>
> Please note that 4.4 is EL8 only, both engine and hosts.
>
Indeed. I would prefer at the moment to stick with 4.3.
>
>
>> Thanx for pointing this out.
>> I have to sort out also how to change the management network for which I
>> will open a new thread.
>>
>> Thanx for your swift responses
>>
>
> YW, good luck!
>
>
>>
>>> >>
>>> >>
>>> >> > Also I am wondering if I can change in some way the management
>>> network and make from untagged to VLAN tagged.
>>> >>
>>> >> Sorry, no idea. Perhaps start a different thread about this.
>>> >
>>> > I will. thanx.
>>> >>
>>> >>
>>> >> Best regards,
>>> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> https://github.com/oVirt/ovirt-engine/commits/master/packaging/setup/plug...
>>> >> >>
>>> >> >> I do not think I am aware of currently still-open bugs. If you
>>> find one, please file it in bugzilla. Thanks!
>>> >> >>
>>> >> >>>
>>> >> >>> I might be able to soon test this engine IP change in a virtual
>>> environment and let you know.
>>> >> >>
>>> >> >>
>>> >> >> Thanks and good luck!
>>> >> >> --
>>> >> >> Didi
>>> >> >
>>> >> > _______________________________________________
>>> >> > Users mailing list -- users(a)ovirt.org
>>> >> > To unsubscribe send an email to users-leave(a)ovirt.org
>>> >> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> >> > oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> >> > List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/R5ZWCNEL3HP...
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Didi
>>> >>
>>> > _______________________________________________
>>> > Users mailing list -- users(a)ovirt.org
>>> > To unsubscribe send an email to users-leave(a)ovirt.org
>>> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> > oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> > List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4KX4WAZR3CH...
>>>
>>>
>>>
>>> --
>>> Didi
>>>
>>>
>
> --
> Didi
>