noVNC error.
by tommy
Hi:
I can connect to the engine VM with the noVNC console.
But when I use the noVNC console to connect to another VM in a different
datacenter, it fails.
The ovirt-websocket-proxy log is:
Jan 19 15:43:32 ooeng.tltd.com ovirt-websocket-proxy.py[1312]:
192.168.10.104 - - [19/Jan/2021 15:43:32] connecting to:
ohost2.tltd.com:5900 (using SSL)
Jan 19 15:43:32 ooeng.tltd.com ovirt-websocket-proxy.py[1312]:
ovirt-websocket-proxy[24096] INFO msg:824 handler exception: [SSL:
UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:618)
What could be the reason?
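For reference, one way to check whether the VNC port on the destination host actually speaks TLS (a sketch; it assumes openssl is available on the proxy host, and the host/port are taken from the log above):

```shell
HOST=ohost2.tltd.com   # destination host from the proxy log
PORT=5900              # VNC display port from the proxy log
# If this handshake fails with "unknown protocol" while a plain TCP
# connect succeeds, the qemu VNC server on that host is not configured
# for TLS, so the proxy's "using SSL" setting does not match it.
echo | openssl s_client -connect "${HOST}:${PORT}" 2>&1 | head -n 5
```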
4 years, 3 months
Guest doesn't start: Cannot access backing file
by Lionel Caignec
Hi,
I have a big problem: I just shut down (powered off completely) a guest to do a cold restart, and at startup the guest says: "Cannot access backing file '/rhev/data-center/mnt/blockSD/69348aea-7f55-41be-ae4e-febd86c33855/images/8224b2b0-39ba-44ef-ae41-18fe726f26ca/ca141675-c6f5-4b03-98b0-0312254f91e8'"
When I look from a shell on the hypervisor, the device file is blinking red (a dangling link)...
I tried changing the SPM, looking for the device on all hosts, copying the disk, etc., but there is no way to get my disk back online. It seems oVirt completely lost (deleted?) the block device.
Is there a way to manually dump (dd) the device on the command line in order to import it back into oVirt?
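Something like the following is what I have in mind (a rough sketch; the VG/LV names are the UUIDs from the error message above, and /backup is a placeholder destination):

```shell
SD=69348aea-7f55-41be-ae4e-febd86c33855    # storage domain, i.e. the LVM VG
VOL=ca141675-c6f5-4b03-98b0-0312254f91e8   # volume, i.e. the LV
DEV="/dev/${SD}/${VOL}"
if lvs "$DEV" >/dev/null 2>&1; then
    lvchange -ay "$DEV"                    # activate the LV first
    # raw dump; conv=sparse keeps holes sparse in the output file
    dd if="$DEV" of="/backup/${VOL}.raw" bs=1M conv=sparse status=progress
else
    echo "LV $DEV not found; check 'lvs' output on the SPM host"
fi
```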
My environment
Storage : SAN (managed by ovirt)
Ovirt-engine 4.4.3.12-1.el8
Host centos 8.2
Vdsm 4.40.26
Thanks for your help; I'm stuck and it's really urgent.
--
Lionel Caignec
4 years, 3 months
oVirt Engine no longer Starting
by penguin pages
3 node cluster. Gluster for shared storage.
CentOS8
Updated to CentOS 8 Stream :P -> https://bugzilla.redhat.com/show_bug.cgi?id=1911910
After several weeks, I am really in need of direction to get this fixed.
I saw several postings about oVirt package issues, but I haven't found a fix.
[root@thor ~]# dnf update
Last metadata expiration check: 2:54:29 ago on Fri 15 Jan 2021 06:49:16 AM EST.
Error:
Problem 1: package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package cockpit-bridge-234-1.el8.x86_64 conflicts with cockpit-dashboard < 233 provided by cockpit-dashboard-217-1.el8.noarch
- cannot install the best update candidate for package ovirt-host-4.4.1-4.el8.x86_64
- cannot install the best update candidate for package cockpit-bridge-217-1.el8.x86_64
Problem 2: problem with installed package ovirt-host-4.4.1-4.el8.x86_64
- package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package cockpit-system-234-1.el8.noarch obsoletes cockpit-dashboard provided by cockpit-dashboard-217-1.el8.noarch
- cannot install the best update candidate for package cockpit-dashboard-217-1.el8.noarch
Problem 3: package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch requires ovirt-host >= 4.4.0, but none of the providers can be installed
- package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package cockpit-system-234-1.el8.noarch obsoletes cockpit-dashboard provided by cockpit-dashboard-217-1.el8.noarch
- cannot install the best update candidate for package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch
- cannot install the best update candidate for package cockpit-system-217-1.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
[root@thor ~]# yum install cockpit-dashboard --nobest
Last metadata expiration check: 2:54:52 ago on Fri 15 Jan 2021 06:49:16 AM EST.
Package cockpit-dashboard-217-1.el8.noarch is already installed.
Dependencies resolved.
Problem: problem with installed package ovirt-host-4.4.1-4.el8.x86_64
- package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package cockpit-system-234-1.el8.noarch obsoletes cockpit-dashboard provided by cockpit-dashboard-217-1.el8.noarch
- cannot install the best candidate for the job
=========================================================================================================================================================================================================================================
Package Architecture Version Repository Size
=========================================================================================================================================================================================================================================
Skipping packages with broken dependencies:
ovirt-host x86_64 4.4.1-1.el8 ovirt-4.4 13 k
ovirt-host x86_64 4.4.1-2.el8 ovirt-4.4 13 k
ovirt-host x86_64 4.4.1-3.el8 ovirt-4.4 13 k
Transaction Summary
=========================================================================================================================================================================================================================================
Skip 3 Packages
Nothing to do.
Complete!
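The workaround I was considering from the postings I found (hedged: I have not confirmed it is safe here, and it assumes the only conflict is the cockpit-dashboard obsoletion):

```shell
# Hold back the cockpit packages that obsolete cockpit-dashboard and
# update everything else, until an ovirt-host build without the
# cockpit-dashboard requirement lands in the repos.
EXCLUDE='cockpit*'
if command -v dnf >/dev/null 2>&1; then
    dnf update --nobest --exclude="$EXCLUDE" || echo "update failed; see dnf output above"
else
    echo "dnf not available here; run this on the oVirt node"
fi
```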
[root@thor ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
└─WDC_WDS100T2B0B-00YS70_19106A802926 253:3 0 931.5G 0 mpath
└─vdo_2926 253:5 0 4T 0 vdo /gluster_bricks/gv0
sdb 8:16 0 931.5G 0 disk
└─WDC_WDS100T2B0B-00YS70_192490801828 253:4 0 931.5G 0 mpath
sdc 8:32 0 477G 0 disk
└─vdo_sdc 253:6 0 2.1T 0 vdo
├─gluster_vg_sdc-gluster_lv_engine 253:7 0 100G 0 lvm /gluster_bricks/engine
├─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc_tmeta 253:8 0 1G 0 lvm
│ └─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc-tpool 253:10 0 2T 0 lvm
│ ├─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc 253:11 0 2T 1 lvm
│ ├─gluster_vg_sdc-gluster_lv_data 253:12 0 1000G 0 lvm /gluster_bricks/data
│ └─gluster_vg_sdc-gluster_lv_vmstore 253:13 0 1000G 0 lvm /gluster_bricks/vmstore
└─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc_tdata 253:9 0 2T 0 lvm
└─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc-tpool 253:10 0 2T 0 lvm
├─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc 253:11 0 2T 1 lvm
├─gluster_vg_sdc-gluster_lv_data 253:12 0 1000G 0 lvm /gluster_bricks/data
└─gluster_vg_sdc-gluster_lv_vmstore 253:13 0 1000G 0 lvm /gluster_bricks/vmstore
sdd 8:48 1 58.8G 0 disk
├─sdd1 8:49 1 1G 0 part /boot
└─sdd2 8:50 1 57.8G 0 part
├─cl-root 253:0 0 36.1G 0 lvm /
├─cl-swap 253:1 0 4G 0 lvm [SWAP]
└─cl-home 253:2 0 17.6G 0 lvm /home
[root@thor ~]# mount |grep engine
/dev/mapper/gluster_vg_sdc-gluster_lv_engine on /gluster_bricks/engine type xfs (rw,noatime,nodiratime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota,_netdev,x-systemd.requires=vdo.service)
thorst.penguinpages.local:/engine on /media/engine type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072,_netdev)
thorst.penguinpages.local:/engine on /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072,_netdev,x-systemd.device-timeout=0)
[root@thor ~]#
[root@thor ~]# tail -50 /var/log/messages
Jan 15 09:46:43 thor platform-python[28088]: detected unhandled Python exception in '/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
Jan 15 09:46:43 thor abrt-server[28116]: Not saving repeating crash in '/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
Jan 15 09:46:43 thor systemd[1]: ovirt-ha-broker.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 09:46:43 thor systemd[1]: ovirt-ha-broker.service: Failed with result 'exit-code'.
Jan 15 09:46:43 thor systemd[1]: ovirt-ha-broker.service: Service RestartSec=100ms expired, scheduling restart.
Jan 15 09:46:43 thor systemd[1]: ovirt-ha-broker.service: Scheduled restart job, restart counter is at 241.
Jan 15 09:46:43 thor systemd[1]: Stopped oVirt Hosted Engine High Availability Communications Broker.
Jan 15 09:46:43 thor systemd[1]: Started oVirt Hosted Engine High Availability Communications Broker.
Jan 15 09:46:45 thor systemd[1]: Started Session c448 of user root.
Jan 15 09:46:45 thor systemd[1]: session-c448.scope: Succeeded.
Jan 15 09:46:45 thor upsmon[2232]: Poll UPS [nutmonitor@localhost] failed - [nutmonitor] does not exist on server localhost
Jan 15 09:46:48 thor systemd[1]: ovirt-ha-agent.service: Service RestartSec=10s expired, scheduling restart.
Jan 15 09:46:48 thor systemd[1]: ovirt-ha-agent.service: Scheduled restart job, restart counter is at 211.
Jan 15 09:46:48 thor systemd[1]: Stopped oVirt Hosted Engine High Availability Monitoring Agent.
Jan 15 09:46:48 thor systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
Jan 15 09:46:48 thor journal[28118]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.broker.Broker ERROR Failed initializing the broker: [Errno 107] Transport endpoint is not connected: '/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/ha_agent/hosted-engine.metadata'
Jan 15 09:46:48 thor journal[28118]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.broker.Broker ERROR Traceback (most recent call last):#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", line 64, in run#012 self._storage_broker_instance = self._get_storage_broker()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", line 143, in _get_storage_broker#012 return storage_broker.StorageBroker()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 97, in __init__#012 self._backend.connect()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 408, in connect#012 self._check_symlinks(self._storage_path, volume.path, service_link)#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 105, in _check_symlinks#012 os.unlink(service_link)#012OSError: [Errno 107] Transport endpoint is not connected: '/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/ha_agent/hosted-engine.metadata'
Jan 15 09:46:48 thor journal[28118]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.broker.Broker ERROR Trying to restart the broker
Jan 15 09:46:48 thor platform-python[28118]: detected unhandled Python exception in '/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
Jan 15 09:46:48 thor systemd[1]: ovirt-ha-broker.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 09:46:48 thor systemd[1]: ovirt-ha-broker.service: Failed with result 'exit-code'.
Jan 15 09:46:48 thor abrt-server[28144]: Deleting problem directory Python3-2021-01-15-09:46:48-28118 (dup of Python3-2020-09-18-14:25:13-1363)
Jan 15 09:46:48 thor systemd[1]: ovirt-ha-broker.service: Service RestartSec=100ms expired, scheduling restart.
Jan 15 09:46:48 thor systemd[1]: ovirt-ha-broker.service: Scheduled restart job, restart counter is at 242.
Jan 15 09:46:48 thor systemd[1]: Stopped oVirt Hosted Engine High Availability Communications Broker.
Jan 15 09:46:48 thor systemd[1]: Started oVirt Hosted Engine High Availability Communications Broker.
Jan 15 09:46:48 thor journal[28140]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start necessary monitors
Jan 15 09:46:48 thor journal[28140]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 85, in start_monitor#012 response = self._proxy.start_monitor(type, options)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__#012 return self.__send(self.__name, args)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request#012 verbose=self.__verbose#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request#012 return self.single_request(host, handler, request_body, verbose)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request#012 http_conn = self.send_request(host, handler, request_body, verbose)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request#012 self.send_content(connection, request_body)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content#012 connection.endheaders(request_body)#012 File "/usr/lib64/python3.6/http/client.py", line 1264, in endheaders#012 self._send_output(message_body, encode_chunked=encode_chunked)#012 File "/usr/lib64/python3.6/http/client.py", line 1040, in _send_output#012 self.send(msg)#012 File "/usr/lib64/python3.6/http/client.py", line 978, in send#012 self.connect()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 74, in connect#012 self.sock.connect(base64.b16decode(self.host))#012FileNotFoundError: [Errno 2] No such file or directory#012#012During handling of the above exception, another exception occurred:#012#012Traceback (most recent call last):#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent#012 return action(he)#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper#012 return he.start_monitoring()#012 File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 437, in start_monitoring#012 self._initialize_broker()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 561, in _initialize_broker#012 m.get('options', {}))#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 91, in start_monitor#012 ).format(t=type, o=options, e=e)#012ovirt_hosted_engine_ha.lib.exceptions.RequestError: brokerlink - failed to start monitor via ovirt-ha-broker: [Errno 2] No such file or directory, [monitor: 'network', options: {'addr': '172.16.100.1', 'network_test': 'dns', 'tcp_t_address': '', 'tcp_t_port': ''}]
Jan 15 09:46:48 thor journal[28140]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent
Jan 15 09:46:48 thor abrt-server[28144]: /bin/sh: reporter-systemd-journal: command not found
Jan 15 09:46:48 thor systemd[1]: ovirt-ha-agent.service: Main process exited, code=exited, status=157/n/a
Jan 15 09:46:48 thor systemd[1]: ovirt-ha-agent.service: Failed with result 'exit-code'.
Jan 15 09:46:49 thor vdsm[8421]: WARN Failed to retrieve Hosted Engine HA info, is Hosted Engine setup finished?
Jan 15 09:46:50 thor systemd[1]: Started Session c449 of user root.
Jan 15 09:46:50 thor systemd[1]: session-c449.scope: Succeeded.
Jan 15 09:46:50 thor upsmon[2232]: Poll UPS [nutmonitor@localhost] failed - [nutmonitor] does not exist on server localhost
Jan 15 09:46:53 thor journal[28165]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.broker.Broker ERROR Failed initializing the broker: [Errno 107] Transport endpoint is not connected: '/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/ha_agent/hosted-engine.metadata'
Jan 15 09:46:53 thor journal[28165]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.broker.Broker ERROR Traceback (most recent call last):#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", line 64, in run#012 self._storage_broker_instance = self._get_storage_broker()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", line 143, in _get_storage_broker#012 return storage_broker.StorageBroker()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 97, in __init__#012 self._backend.connect()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 408, in connect#012 self._check_symlinks(self._storage_path, volume.path, service_link)#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 105, in _check_symlinks#012 os.unlink(service_link)#012OSError: [Errno 107] Transport endpoint is not connected: '/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/ha_agent/hosted-engine.metadata'
Jan 15 09:46:53 thor journal[28165]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.broker.Broker ERROR Trying to restart the broker
Jan 15 09:46:53 thor platform-python[28165]: detected unhandled Python exception in '/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
Jan 15 09:46:53 thor abrt-server[28199]: Not saving repeating crash in '/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
Jan 15 09:46:53 thor systemd[1]: ovirt-ha-broker.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 09:46:53 thor systemd[1]: ovirt-ha-broker.service: Failed with result 'exit-code'.
Jan 15 09:46:53 thor systemd[1]: ovirt-ha-broker.service: Service RestartSec=100ms expired, scheduling restart.
Jan 15 09:46:53 thor systemd[1]: ovirt-ha-broker.service: Scheduled restart job, restart counter is at 243.
Jan 15 09:46:53 thor systemd[1]: Stopped oVirt Hosted Engine High Availability Communications Broker.
Jan 15 09:46:53 thor systemd[1]: Started oVirt Hosted Engine High Availability Communications Broker.
Jan 15 09:46:55 thor systemd[1]: Started Session c450 of user root.
Jan 15 09:46:55 thor systemd[1]: session-c450.scope: Succeeded.
Jan 15 09:46:55 thor upsmon[2232]: Poll UPS [nutmonitor@localhost] failed - [nutmonitor] does not exist on server localhost
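What I plan to try for the stale engine mount (a sketch; the path is from the log above, and the lazy-unmount approach is my assumption about recovering a dead GlusterFS fuse client):

```shell
MNT='/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine'
if grep -qs "glusterSD/thorst" /proc/mounts; then
    umount -l "$MNT"               # lazy-unmount the dead fuse client
    systemctl restart glusterd     # restart the local gluster daemon
    # the HA services remount the storage when they come back up
    systemctl restart ovirt-ha-broker ovirt-ha-agent
else
    echo "$MNT does not appear in /proc/mounts on this host"
fi
```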
Questions:
1) I have two important VMs with snapshots that I need to boot up. Is there a way, in an HCI configuration, to manually start the VMs without the oVirt engine being up?
2) Is there a way to debug why the engine fails to start, so it can be repaired? (I hate reloading as the only fix for systems.)
3) Is there a way to re-run the HCI setup wizard but reuse the existing "engine" volume, so the VMs and templates are retained?
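For (1), this is the kind of thing I was hoping exists (a sketch; I am assuming the hosted-engine tooling and libvirt are still functional on the node while the engine is down, and the credentials path is my assumption):

```shell
AUTH=/etc/ovirt-hosted-engine/virsh_auth.conf   # assumed vdsm SASL credentials file
# The hosted engine VM itself can be started without the web UI:
command -v hosted-engine >/dev/null 2>&1 \
    && hosted-engine --vm-start \
    || echo "hosted-engine CLI not available here"
# Ordinary VMs can at least be listed read-only through libvirt;
# starting one by hand would need the credentials kept in $AUTH.
command -v virsh >/dev/null 2>&1 \
    && virsh -r -c qemu:///system list --all \
    || echo "virsh not available here"
```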
4 years, 3 months
VG issue / Non Operational
by Christian Reiss
Hey folks,
quick (I hope) question: on my 3-node cluster I am swapping out all the
SSDs for fewer but higher-capacity ones. So I took one node down
(maintenance, stop), removed all its SSDs, set up a new RAID, set up
LVM and Gluster, and let it resync. Gluster health status shows no
unsynced entries.
Upon bringing the node from maintenance to online in the oVirt
management UI, it goes into Non Operational status, and the vdsm log on
the node shows:
2021-01-17 11:13:29,051+0100 INFO (jsonrpc/6) [api.host] START
getAllVmStats() from=::1,48580 (api:48)
2021-01-17 11:13:29,051+0100 INFO (jsonrpc/6) [api.host] FINISH
getAllVmStats return={'status': {'message': 'Done', 'code': 0},
'statsList': (suppressed)} from=::1,48580 (api:54)
2021-01-17 11:13:29,052+0100 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer]
RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
2021-01-17 11:13:30,420+0100 WARN (monitor/4a62cdb) [storage.LVM]
Reloading VGs failed (vgs=[u'4a62cdb4-b314-4c7f-804e-8e7275518a7f'] rc=5
out=[] err=[' Volume group "4a62cdb4-b314-4c7f-804e-8e7275518a7f" not
found', ' Cannot process volume group
4a62cdb4-b314-4c7f-804e-8e7275518a7f']) (lvm:470)
2021-01-17 11:13:30,424+0100 ERROR (monitor/4a62cdb) [storage.Monitor]
Setting up monitor for 4a62cdb4-b314-4c7f-804e-8e7275518a7f failed
(monitor:330)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
327, in _setupLoop
self._setupMonitor()
File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
349, in _setupMonitor
self._produceDomain()
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in
wrapper
value = meth(self, *a, **kw)
File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
367, in _produceDomain
self.domain = sdCache.produce(self.sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
110, in produce
domain.getRealDomain()
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51,
in getRealDomain
return self._cache._realProduce(self._sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
134, in _realProduce
domain = self._findDomain(sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
151, in _findDomain
return findMethod(sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
176, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
I assume it fails due to the changed LVM UUID, right? Can I somehow
fix/change the UUID and get the node back up again? It does not seem to
be a major issue, to be honest.
I can already see the gluster mount (what oVirt mounts when it brings a
node online), and gluster is happy too.
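For reference, the quick checks I ran (a sketch; the UUID is the storage-domain ID from the vdsm log above):

```shell
SD_UUID=4a62cdb4-b314-4c7f-804e-8e7275518a7f
# Is the domain visible as a mount (file/gluster domain)?
grep -s "$SD_UUID" /proc/mounts || echo "no mount references $SD_UUID"
# Or as an LVM VG (block domain), which is what vdsm fails to reload?
command -v vgs >/dev/null 2>&1 && vgs "$SD_UUID" \
    || echo "VG $SD_UUID not visible on this host"
```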
Any help is appreciated!
-Chris.
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
4 years, 3 months
encrypted GENEVE traffic
by Pavel Nakonechnyi
Dear oVirt Community,
From my understanding oVirt does not support Open vSwitch IPSEC tunneling for GENEVE traffic (which is described on pages http://docs.openvswitch.org/en/latest/howto/ipsec/ and http://docs.openvswitch.org/en/latest/tutorials/ipsec/).
Are there plans to introduce such support? (or explicitly not to..)
Is it possible to somehow manually configure such tunneling for existing virtual networks? (even in a limited way)
Alternatively, is it possible to deploy oVirt on top of already-tunneled (e.g. VXLAN or IPsec) interfaces? That would allow encrypting all management traffic as well.
Such a requirement arises when deploying oVirt on third-party premises over an untrusted network.
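For illustration, this is the kind of manual configuration I mean, based on the upstream OVS IPsec tutorial (a sketch; the bridge name, port name, peer IP, and pre-shared key are placeholders, and it requires the openvswitch-ipsec service plus libreswan on both hosts, none of which oVirt manages):

```shell
REMOTE_IP=192.0.2.2    # peer hypervisor (placeholder address)
# Create a GENEVE tunnel port whose traffic OVS will protect with IPsec
# using the given pre-shared key (per the OVS IPsec tutorial).
command -v ovs-vsctl >/dev/null 2>&1 \
    && ovs-vsctl add-port br-int tun0 -- set interface tun0 \
         type=geneve options:remote_ip="$REMOTE_IP" options:psk=swordfish \
    || echo "ovs-vsctl not available on this machine"
```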
Thanks in advance for any clarifications. :)
--
WBR, Pavel
+32478910884
4 years, 3 months
Q: Node SW Install from oVirt Engine UI Failed
by Andrei Verovski
Hi,
I installed a fresh CentOS Stream on an HP ProLiant and am now trying to install the node software from the oVirt Engine web UI.
The software installation failed, but no comprehensive description is displayed in the web UI.
Where can I find detailed logs of what is going wrong?
On the node, "sudo dnf repolist --all" lists the required repos, with the unnecessary ones disabled.
Thanks in advance.
Andrei.
repo id repo name status
appstream CentOS Stream 8 - AppStream enabled
baseos CentOS Stream 8 - BaseOS enabled
debuginfo CentOS Stream 8 - Debuginfo disabled
extras CentOS Stream 8 - Extras enabled
ha CentOS Stream 8 - HighAvailability disabled
media-appstream CentOS Stream 8 - Media - AppStream disabled
media-baseos CentOS Stream 8 - Media - BaseOS disabled
ovirt-4.4 Latest oVirt 4.4 Release enabled
ovirt-4.4-advanced-virtualization-testing Advanced Virtualization testing pack enabled
ovirt-4.4-centos-nfv-openvswitch CentOS-8 - NFV OpenvSwitch enabled
ovirt-4.4-centos-opstools CentOS-8 - OpsTools - collectd enabled
ovirt-4.4-centos-ovirt44-testing CentOS-8 - oVirt 4.4 - testing enabled
ovirt-4.4-copr:copr.fedorainfracloud.org:mdbarroso:ovsdbapp Copr repo for ovsdbapp owned by mdba enabled
ovirt-4.4-copr:copr.fedorainfracloud.org:sac:gluster-ansible Copr repo for gluster-ansible owned enabled
ovirt-4.4-copr:copr.fedorainfracloud.org:sbonazzo:EL8_collection Copr repo for EL8_collection owned b enabled
ovirt-4.4-epel Extra Packages for Enterprise Linux enabled
ovirt-4.4-glusterfs-7-testing GlusterFS 7 testing packages for x86 enabled
ovirt-4.4-ioprocess-preview Copr repo for iprocess-preview owned enabled
ovirt-4.4-ovirt-imageio-preview Copr repo for ovirt-imageio-preview enabled
ovirt-4.4-virtio-win-latest virtio-win builds roughly matching w enabled
powertools CentOS Stream 8 - PowerTools enabled
rt CentOS Stream 8 - RealTime disabled
spp Service Pack for ProLiant enabled
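In case it helps: the log locations I know of so far (a sketch; the paths assume a default engine installation):

```shell
# On the engine machine: per-host installation logs
HOST_DEPLOY_DIR=/var/log/ovirt-engine/host-deploy
ls -lt "$HOST_DEPLOY_DIR" 2>/dev/null | head -n 5
# Also /var/log/ovirt-engine/engine.log for the failure summary, and on
# the node itself /var/log/messages and "journalctl -xe" around the time
# of the failed install.
```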
4 years, 3 months
About Translation
by Reyhan Gülçetin
Hello,
I'm Reyhan! I am a third-year physics engineering student in Turkey, and
until last month I also worked as a part-time system administrator in the
university's IT Head Office. (Now I'm more into DevOps.) Back there, I
was responsible for figuring out how we could apply oVirt in our IT
Office, so I was scrolling through the documentation. I've decided to
contribute: oVirt may well be the project I use the most, and since I'm
not a bug-fixer, at least I can help with the translation. I'm a native
Turkish speaker, so I thought it would go well.
I wish you the best,
Best regards.
Reyhan Gülçetin.
4 years, 3 months
How to install hosted-engine without an internet connection?
by tommy
Hi, everyone!
My environment cannot access the internet, but during the hosted-engine
installation it needs to run yum to update the system.
I have set up a local yum mirror, but there is a step (after the engine
VM is created, the VM runs yum update using the default repo
configuration) where it fails to reach the yum source.
How can I point the VM at my local yum server before it runs the update?
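What I would like to end up with inside the engine VM, before it runs yum update, is a repo file pointing at my mirror. A sketch (the mirror URL is a placeholder; I believe the deployment can be paused long enough to do this, e.g. with the he_pause_host extra variable, though I have not verified that):

```shell
# Sketch only: build the repo file contents; inside the engine VM (as
# root) this would be written to /etc/yum.repos.d/$REPO_FILE.
REPO_FILE=local-mirror.repo
REPO_CONTENT='[local-baseos]
name=Local BaseOS mirror
baseurl=http://yum.mylab.local/centos/8-stream/BaseOS/x86_64/os/
enabled=1
gpgcheck=0'
printf '%s\n' "$REPO_CONTENT"
```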
Thanks.
4 years, 3 months
Re: Networking question on setting up self-hosting engine
by Yedidyah Bar David
Hi,
On Fri, Jan 15, 2021 at 7:10 AM Kim Yuan Lee <netgoogling(a)gmail.com> wrote:
>
> Hi Didi,
>
> I sent this to the list, but I cc you here as I think I didn't format the email correctly using the web interface :)
I didn't see your message to the list. Now adding the list back.
>
> Dear Ovirt list,
>
> I saw some failures in /var/log/ovirt-hosted-engine-setup such as:
> 1) Install oVirt Engine packages for restoring backup
> 2) Copy engine logs etc
>
> But I'm not sure what is the root cause for the setup to fail?
>
> Attached is the log file for your reference. Could you please advise? Thanks.
It seems to have failed in:
2021-01-14 19:30:59,509+0800 DEBUG ansible on_any args TASK:
ovirt.hosted_engine_setup : Install Oracle oVirt repo kwargs
is_conditional:False
2021-01-14 19:30:59,509+0800 DEBUG ansible on_any args localhostTASK:
ovirt.hosted_engine_setup : Install Oracle oVirt repo kwargs
2021-01-14 19:31:01,189+0800 INFO ansible ok {'status': 'OK',
'ansible_type': 'task', 'ansible_task': u'Install Oracle oVirt repo',
'task_duration': 1, 'ansible_host': u'localhost', 'ansible_playbook':
u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
2021-01-14 19:31:01,189+0800 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7faffb39f590>
kwargs
2021-01-14 19:31:01,443+0800 INFO ansible task start {'status': 'OK',
'ansible_task': u'ovirt.hosted_engine_setup : Install oVirt Engine
packages for restoring backup', 'ansible_playbook':
u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml',
'ansible_type': 'task'}
2021-01-14 19:31:01,443+0800 DEBUG ansible on_any args TASK:
ovirt.hosted_engine_setup : Install oVirt Engine packages for
restoring backup kwargs is_conditional:False
2021-01-14 19:31:01,444+0800 DEBUG ansible on_any args localhostTASK:
ovirt.hosted_engine_setup : Install oVirt Engine packages for
restoring backup kwargs
2021-01-14 19:31:18,336+0800 ERROR ansible failed {'status': 'FAILED',
'ansible_type': 'task', 'ansible_task': u'Install oVirt Engine
packages for restoring backup', 'ansible_result': u"type: <type
'dict'>\nstr: {'_ansible_no_log': False, '_ansible_delegated_vars':
{'ansible_host': u'engine.orakvm2.local'}, u'changed': False,
u'results': [u'Loaded plugins: ulninfo\\nResolving Dependencies\\n-->
Running transaction check\\n---> Package ovirt-engine.noarch
0:4.3.6.6-1.0.8.el7 will be installed\\n--> Processing De",
'task_duration': 17, 'ansible_host': u'localhost', 'ansible_playbook':
u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
This is Oracle-specific code, not part of oVirt. I suggest that you
contact Oracle.
Good luck and best regards,
>
>
> Sincerely, Lee.
>
>
> On Thu, Jan 14, 2021 at 2:46 PM Yedidyah Bar David <didi(a)redhat.com> wrote:
>>
>> Hi,
>>
>> On Wed, Jan 13, 2021 at 7:16 PM <netgoogling(a)gmail.com> wrote:
>> >
>> > Dear list,
>> >
>> > I have tried setting up self-hosting engine on a host with ONE Nic (Ovirt 4.4 CentOS 8 Stream). I followed the Quick Start Guide, and tried the command line self-host setup, but ended up with the following error:
>> >
>> > {u'msg': u'There was a failure deploying the engine on the local engine VM. The system may not be provisioned according to the playbook results....
>> >
>> > I tried on another host with TWO NICs (Ovirt 4.3 Oracle Linux 7 Update 9). This time I setup a bridge BR0 and disable EM1 (the first Ethernet interface on the host), and then created Bond0 on-top of BR0. Both Bond0 and EM2 (the second Ethernet interface on the host) were up. And then I tried again using Ovirt-Cockpit wizard, with the Engine VM set on BR0, and the deployment of Engine VM simply failed. The Engine and Host are using the same network numbers (192.168.2.0/24) and they resolved correctly. I read the logs in var/log/ovirt-engine/engine.log but there wasn't any error reported.
>> >
>> > I have already tried many times for the past few days and I'm at my wits-end. May I know:
>> > 1) Is it possible to install Self-hosted engine with just ONE NIC?
>>
>> Generally speaking, yes.
>>
>> > 2) Any suggestion how to troubleshoot these problems? And tested network configurations?
>>
>> Please check/share all relevant logs. If unsure, all of /var/log.
>> Specifically, /var/log/ovirt-hosted-engine-setup/* (also engine-logs
>> subdirs, if relevant) and /var/log/vdsm/* on the hosts.
>>
>> Good luck and best regards,
>> --
>> Didi
>>
--
Didi
4 years, 3 months
Disaster Recovery using Ansible roles
by Henry lol
Hi,
I have two questions about disaster recovery with Ansible.
1)
Regarding Ansible failover, AFAIK a mapping file must be generated before
running the failover.
Should I periodically regenerate the mapping file so that it reflects the
latest structure of the environment?
2)
I understand that Ansible failover uses the OVF_STOREs to restore the
environment, and that the OVF_STOREs in the storage domains are updated
every minute by default.
How can the changes made in between updates be recovered?
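For (1), what I had in mind is rerunning the generator on a schedule (a sketch; the role path is from the oVirt.disaster-recovery packaging and may differ in your installation):

```shell
DR_ROLE=/usr/share/ansible/roles/oVirt.disaster-recovery
if [ -f "$DR_ROLE/files/ovirt-dr" ]; then
    # regenerate the mapping/var file so it reflects the current environment
    ( cd "$DR_ROLE/files" && python3 ovirt-dr generate )
else
    echo "oVirt.disaster-recovery role not found at $DR_ROLE"
fi
```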
4 years, 3 months