Problem with connecting the Storage Domain (Host host2 cannot access the Storage Domain(s) <UNKNOWN>)
by Patrick Lomakin
Hello, everyone! I need specialist help, because I'm getting desperate. My company has four hosts that are connected to the storage. Each host has its own pair of IPs for storage access: host 1 has 10.42.0.10 and 10.42.1.10, and host 2 has 10.42.0.20 and 10.42.1.20 respectively. Host 1 cannot ping the address 10.42.0.20. I'll try to describe the hardware in more detail below.
Host 1 has ovirt node 4.3.9 installed and hosted-engine deployed.
When trying to add host 2 to the cluster, it installs but is not activated. There is an error in the oVirt Manager - "Host host2 cannot access the Storage Domain(s) <UNKNOWN>" - and host 2 goes to "Non Operational" status. On host 2, the log repeats "connect to 10.42.1.10:3260 failed (No route to host)" indefinitely. I manually connected host 2 to the storage with iscsiadm to IP 10.42.0.20, but the error does not go away. At the same time, while the host is trying to activate, I can run virtual machines on it until the host shows the error message. VMs that were started on host 2 keep running even when the host is in "Non Operational" status.
I assume that when adding host 2 to the cluster, oVirt tries to connect it to the same storage portals that host 1 is connected to, including 10.42.1.10. Is there a way to make oVirt connect through a different IP address instead of the one recorded for the first host? I'm attaching logs:
----> /var/log/ovirt-engine/engine.log
2020-03-31 09:13:03,866+03 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7fa128f4] EVENT_ID: VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host2.school34.local cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center DataCenter. Setting Host state to Non-Operational.
2020-03-31 10:40:04,883+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [7a48ebb7] START, ConnectStorageServerVDSCommand(HostName = host2.school34.local, StorageServerConnectionManagementVDSParameters:{hostId='d82c3a76-e417-4fe4-8b08-a29414e3a9c1', storagePoolId='6052cc0a-71b9-11ea-ba5a-00163e10c7e7', storageType='ISCSI', connectionList='[StorageServerConnections:{id='c8a05dc2-f8a2-4354-96ed-907762c29761', connection='10.42.0.10', iqn='iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='0ec6f34e-01c8-4ecc-9bd4-7e2a250d589d', connection='10.42.1.10', iqn='iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 2c1a22b5
2020-03-31 10:43:05,061+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [7a48ebb7] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM host2.school34.local command ConnectStorageServerVDS failed: Message timeout which can be caused by communication issues
----> vdsm.log
2020-03-31 09:34:07,264+0300 ERROR (jsonrpc/5) [storage.HSM] Could not connect to storageServer (hsm:2420)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2417, in connectStorageServer
conObj.connect()
File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 488, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsi.py", line 217, in addIscsiNode
iscsiadm.node_login(iface.name, target.address, target.iqn)
File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 337, in node_login
raise IscsiNodeError(rc, out, err)
IscsiNodeError: (8, ['Logging in to [iface: default, target: iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0, portal: 10.42.1.10,3260] (multiple)'], ['iscsiadm: Could not login to [iface: default, target: iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0, portal: 10.42.1.10,3260].', 'iscsiadm: initiator reported error (8 - connection timed out)', 'iscsiadm: Could not log into all portals'])
2020-03-31 09:36:01,583+0300 WARN (vdsm.Scheduler) [Executor] Worker blocked: <Worker name=jsonrpc/0 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'port': u'3260', u'connection': u'10.42.0.10', u'iqn': u'iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0', u'user': u'', u'tpgt': u'2', u'ipv6_enabled': u'false', u'password': '********', u'id': u'c8a05dc2-f8a2-4354-96ed-907762c29761'}, {u'port': u'3260', u'connection': u'10.42.1.10', u'iqn': u'iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'password': '********', u'id': u'0ec6f34e-01c8-4ecc-9bd4-7e2a250d589d'}], u'storagepoolID': u'6052cc0a-71b9-11ea-ba5a-00163e10c7e7', u'domainType': 3}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'64cc0385-3a11-474b-98f0-b0ecaa6c67c8'} at 0x7fe1ac1ff510> timeout=60, duration=60.00 at 0x7fe1ac1ffb10> task#=316 at 0x7fe1f0041ad0>, traceback:
File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
self.__bootstrap_inner()
File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
self.run()
File: "/usr/lib64/python2.7/threading.py", line 765, in run
self.__target(*self.__args, **self.__kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 260, in run
ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run
self._execute_task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task
task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__
self._callable()
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__
self._handler(self._ctx, self._req)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest
response = self._handle_request(req, ctx)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request
res = method(**params)
File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod
result = fn(*methodArgs)
File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1102, in connectStorageServer
connectionParams)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 74, in wrapper
result = ctask.prepare(func, *args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper
return m(self, *a, **kw)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare
result = self._run(func, *args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File: "<string>", line 2, in connectStorageServer
File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2417, in connectStorageServer
conObj.connect()
File: "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 488, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/iscsi.py", line 217, in addIscsiNode
iscsiadm.node_login(iface.name, target.address, target.iqn)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 327, in node_login
portal, "-l"])
File: "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 122, in _runCmd
return misc.execCmd(cmd, printable=printCmd, sudo=True, sync=sync)
File: "/usr/lib/python2.7/site-packages/vdsm/common/commands.py", line 213, in execCmd
(out, err) = p.communicate(data)
File: "/usr/lib64/python2.7/site-packages/subprocess32.py", line 924, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File: "/usr/lib64/python2.7/site-packages/subprocess32.py", line 1706, in _communicate
orig_timeout)
File: "/usr/lib64/python2.7/site-packages/subprocess32.py", line 1779, in _communicate_with_poll
ready = poller.poll(self._remaining_time(endtime)) (executor:363)
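As a first check (a sketch, not oVirt-specific tooling): from host 2, verify which of the two portals listed in the ConnectStorageServerVDSCommand entry above are actually reachable on the iSCSI port before letting oVirt retry the login. The helper below only tests the TCP handshake; the portal IPs are the ones from the engine.log.

```shell
#!/bin/bash
# Probe an iSCSI portal's TCP port without needing iscsiadm.
# Prints "ok" when the TCP handshake succeeds, "unreachable" otherwise.
check_portal() {
  local host=$1 port=$2
  if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} ok"
  else
    echo "${host}:${port} unreachable"
  fi
}

# Portals taken from the engine.log entry above:
for portal in 10.42.0.10 10.42.1.10; do
  check_portal "$portal" 3260
done
```

If 10.42.1.10 is unreachable from host 2 by design (each host is meant to use only one storage network), the host will keep failing as long as the engine hands it both portals; that points at the storage connection / iSCSI multipath configuration in the Data Center rather than at host 2 itself, but that is a guess about your topology.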
Failing to redeploy self hosted engine
by Maton, Brett
I keep running into this error when I try to (re)deploy self-hosted engine.
# ovirt-hosted-engine-cleanup
# hosted-engine --deploy
...
...
[ INFO ] TASK [ovirt.hosted_engine_setup : Fail with error description]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
host has been set in non_operational status, deployment errors: code
9000: Failed to verify Power Management configuration for Host
physhost01.example.com, fix accordingly and re-deploy."}
I shut down all of the VMs and detached the storage before cleaning up and
trying to re-deploy the hosted engine. This is the first time I've run into this
particular problem.
Any help appreciated
Brett
Artwork: 4.4 GA banners
by Sandro Bonazzola
Hi,
in preparation for oVirt 4.4 GA it would be nice to have some graphics we
can use for launching oVirt 4.4 GA on social media and the oVirt website.
If you don't have coding skills but you have marketing or design skills
this is a good opportunity to contribute back to the project.
Looking forward to your designs!
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
NFS permissions error on ISODomain file with correct permissions
by Shareef Jalloq
Hi,
I asked this question in another thread but it seems to have been lost in
the noise so I'm reposting with a more descriptive subject.
I'm trying to start a Windows VM and use the virtio-win VFD floppy to get
the drivers but the VM startup fails due to a permissions issue detailed
below. The permissions look fine to me so why can't the VFD be read?
Shareef.
I found a permissions issue in the engine.log:
2020-03-25 21:28:41,662Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ForkJoinPool-1-worker-14) [] EVENT_ID: VM_DOWN_ERROR(119), VM win-2019 is
down with error. Exit message: internal error: qemu unexpectedly closed the
monitor: 2020-03-25T21:28:40.324426Z qemu-kvm: -drive
file=/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/11111111-1111-1111-1111-111111111111/virtio-win_servers_amd64.vfd,format=raw,if=none,id=drive-ua-0b9c28b5-f75c-4575-ad85-b5b836f67d61,readonly=on:
Could not open '/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/11111111-1111-1111-1111-111111111111/virtio-win_servers_amd64.vfd':
Permission denied.
But when I look at that path on the node in question, every folder and the
final file have the correct vdsm:kvm permissions:
[root@ovirt-node-01 ~]# ll /rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/11111111-1111-1111-1111-111111111111/virtio-win_servers_amd64.vfd
-rwxrwxrwx. 1 vdsm kvm 2949120 Mar 25 21:24 /rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/11111111-1111-1111-1111-111111111111/virtio-win_servers_amd64.vfd
The files were uploaded to the ISO domain using:
engine-iso-uploader --iso-domain=iso_storage upload virtio-win.iso
virtio-win_servers_amd64.vfd
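Worth noting: mode bits looking fine does not prove the access path works. NFS squash mapping on the NAS (root_squash/all_squash with a different anon uid) or an SELinux denial can reject a read that `ll` says is allowed. A hedged check is to attempt the read as the owning user rather than trusting `ls` (the path is copied from the log above; the `vdsm` account is assumed to exist on the node, and qemu may run as a different user depending on setup):

```shell
#!/bin/bash
# Try to actually read a file as a given user; trust the verdict, not ls.
check_read() {
  local user=$1 file=$2
  if sudo -n -u "$user" head -c 512 "$file" >/dev/null 2>&1; then
    echo "readable as $user"
  else
    echo "read failed as $user"
  fi
}

VFD='/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/11111111-1111-1111-1111-111111111111/virtio-win_servers_amd64.vfd'
check_read vdsm "$VFD"

# If the read fails, also rule out SELinux: if this prints Enforcing,
# check /var/log/audit/audit.log for AVC denials against the file.
command -v getenforce >/dev/null 2>&1 && getenforce || true
```

If the read fails as vdsm despite the 777 mode, look at the export options on nas-01 (squash settings and the anon uid/gid mapping) rather than at the file itself.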
Re: Windows VirtIO drivers
by eevans@digitaldatatechs.com
Ok. Personally, I like having an ISO repository, but I like the fact that
it will be optional.
Thanks.
Eric Evans
Digital Data Services LLC.
304.660.9080
-----Original Message-----
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Wednesday, March 25, 2020 4:26 PM
To: eevans(a)digitaldatatechs.com; 'Shareef Jalloq' <shareef(a)jalloq.co.uk>;
users(a)ovirt.org
Subject: Re: [ovirt-users] Re: Windows VirtIO drivers
It does work that way.
I found that out as I was testing oVirt and had not created a separate ISO
Domain. I believe it was Strahil who pointed me in the right direction.
So if one does not have an ISO Domain, it is no longer required, especially
since ISO Domains are in the process of being deprecated.
________________________________________
From: eevans(a)digitaldatatechs.com <eevans(a)digitaldatatechs.com>
Sent: Wednesday, March 25, 2020 3:13 PM
To: Robert Webb; 'Shareef Jalloq'; users(a)ovirt.org
Subject: RE: [ovirt-users] Re: Windows VirtIO drivers
That may be true, but with the ISO domain, when you open virt-viewer you can
change the CD very easily... maybe it works that way as well.
Eric Evans
Digital Data Services LLC.
304.660.9080
-----Original Message-----
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Wednesday, March 25, 2020 2:35 PM
To: eevans(a)digitaldatatechs.com; 'Shareef Jalloq' <shareef(a)jalloq.co.uk>;
users(a)ovirt.org
Subject: [ovirt-users] Re: Windows VirtIO drivers
Don't think you have to use the ISO Domain any longer.
You can upload to a Data Domain and when you highlight the VM in the
management GUI, select the three dots in the top left for extra options and
there is a change cd option. That option will allow for attaching an ISO
from a Data Domain.
That is what I recall when I was using oVirt a month or so ago.
________________________________________
From: eevans(a)digitaldatatechs.com <eevans(a)digitaldatatechs.com>
Sent: Wednesday, March 25, 2020 2:28 PM
To: 'Shareef Jalloq'; users(a)ovirt.org
Subject: [ovirt-users] Re: Windows VirtIO drivers
You have to copy the iso and vfd files to the ISO domain to make them
available to the vm's that need drivers.
# engine-iso-uploader options list
# engine-iso-uploader options upload file file file
Documentation is found here: https://www.ovirt.org/documentation/admin-guide/chap-Utilities.html
Eric Evans
Digital Data Services LLC.
304.660.9080
From: Shareef Jalloq <shareef(a)jalloq.co.uk>
Sent: Wednesday, March 25, 2020 1:51 PM
To: users(a)ovirt.org
Subject: [ovirt-users] Windows VirtIO drivers
Hi,
it seems the online documentation regarding the Windows installation steps
is well out of date. Is there any current documentation on where to get the
VirtIO drivers for a Windows installation?
From a bit of Googling, it seems that I need to 'yum install virtio-win' on
the engine VM and then copy the relevant .iso/.vfd to the ISO domain. Is
that correct?
Where is the documentation maintained and how do I open a bug on it?
Thanks, Shareef.
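For what it's worth, the approach described above can be sketched as follows. This is a dry-run scaffold (it only echoes each command unless DRY_RUN=0); the /usr/share/virtio-win paths are where the virtio-win RPM normally installs its images, but verify them on your engine, and `iso_storage` is just an example domain name:

```shell
#!/bin/bash
# Dry-run by default: echo each command; set DRY_RUN=0 to really execute.
DRY_RUN=${DRY_RUN:-1}
step() {
  echo "+ $*"
  if [ "$DRY_RUN" = 0 ]; then "$@"; fi
}

# On the engine VM:
step yum install -y virtio-win
step engine-iso-uploader --iso-domain=iso_storage upload \
  /usr/share/virtio-win/virtio-win.iso \
  /usr/share/virtio-win/virtio-win_servers_amd64.vfd
```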
Engine status : unknown stale-data on single node
by Wood, Randall
I have a three-node oVirt cluster where one node has stale data for the hosted engine, but the other two nodes do not:
Output of `hosted-engine --vm-status` on a good node:
```
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host ovirt2.low.mdds.tcs-sec.com (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt2.low.mdds.tcs-sec.com
Host ID : 1
Engine status : {"health": "good", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : f91f57e4
local_conf_timestamp : 9915242
Host timestamp : 9915241
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=9915241 (Fri Mar 27 14:38:14 2020)
host-id=1
score=3400
vm_conf_refresh_time=9915242 (Fri Mar 27 14:38:14 2020)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host ovirt1.low.mdds.tcs-sec.com (id: 2) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt1.low.mdds.tcs-sec.com
Host ID : 2
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 48f9c0fc
local_conf_timestamp : 9218845
Host timestamp : 9218845
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=9218845 (Fri Mar 27 14:38:22 2020)
host-id=2
score=3400
vm_conf_refresh_time=9218845 (Fri Mar 27 14:38:22 2020)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host ovirt3.low.mdds.tcs-sec.com (id: 3) status ==--
conf_on_shared_storage : True
Status up-to-date : False
Hostname : ovirt3.low.mdds.tcs-sec.com
Host ID : 3
Engine status : unknown stale-data
Score : 3400
stopped : False
Local maintenance : False
crc32 : 620c8566
local_conf_timestamp : 1208310
Host timestamp : 1208310
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1208310 (Mon Dec 16 21:14:24 2019)
host-id=3
score=3400
vm_conf_refresh_time=1208310 (Mon Dec 16 21:14:24 2019)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
```
I tried the steps in https://access.redhat.com/discussions/3511881, but `hosted-engine --vm-status` on the node with stale data shows:
```
The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
```
On the stale node, ovirt-ha-agent and ovirt-ha-broker are continually restarting. Since the agent depends on the broker, the broker log includes this snippet, repeating roughly every 3 seconds:
```
MainThread::INFO::2020-03-27 15:01:06,584::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run) ovirt-hosted-engine-ha broker 2.3.6 started
MainThread::INFO::2020-03-27 15:01:06,584::monitor::40::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Searching for submonitors in /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors
MainThread::INFO::2020-03-27 15:01:06,585::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
MainThread::INFO::2020-03-27 15:01:06,585::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor storage-domain
MainThread::INFO::2020-03-27 15:01:06,585::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor network
MainThread::INFO::2020-03-27 15:01:06,587::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
MainThread::INFO::2020-03-27 15:01:06,587::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
MainThread::INFO::2020-03-27 15:01:06,587::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor network
MainThread::INFO::2020-03-27 15:01:06,588::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
MainThread::INFO::2020-03-27 15:01:06,588::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor storage-domain
MainThread::INFO::2020-03-27 15:01:06,589::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
MainThread::INFO::2020-03-27 15:01:06,589::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
MainThread::INFO::2020-03-27 15:01:06,589::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
MainThread::INFO::2020-03-27 15:01:06,589::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
MainThread::INFO::2020-03-27 15:01:06,590::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
MainThread::INFO::2020-03-27 15:01:06,590::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
MainThread::INFO::2020-03-27 15:01:06,590::monitor::50::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Finished loading submonitors
MainThread::INFO::2020-03-27 15:01:06,678::storage_backends::373::ovirt_hosted_engine_ha.lib.storage_backends::(connect) Connecting the storage
MainThread::INFO::2020-03-27 15:01:06,678::storage_server::349::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
MainThread::INFO::2020-03-27 15:01:06,717::storage_server::356::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
MainThread::INFO::2020-03-27 15:01:06,732::storage_server::413::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Refreshing the storage domain
MainThread::WARNING::2020-03-27 15:01:08,940::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) Can't connect vdsm storage: [Errno 5] Input/output error: '/rhev/data-center/mnt/glusterSD/ovirt2:_engine/182a4a94-743f-4941-89c1-dc2008ae1cf5/ha_agent/hosted-engine.lockspace'
```
I restarted the stale node yesterday, but it still shows stale data from December of last year.
What is the recommended way for me to try to recover from this?
(This came to my attention when warnings concerning space on the /var/log partition began popping up.)
Thank you,
Randall
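A hedged sketch of the usual recovery sequence for a host stuck at stale-data, written as a dry-run scaffold (it only echoes commands unless DRY_RUN=0). The hosted-engine options below exist in 4.3, but check `hosted-engine --help` on your version, and keep global maintenance set while you do this:

```shell
#!/bin/bash
DRY_RUN=${DRY_RUN:-1}
step() { echo "+ $*"; if [ "$DRY_RUN" = 0 ]; then "$@"; fi; }

# On the stale host: the broker's I/O error on hosted-engine.lockspace
# suggests the gluster-backed HE storage mount is unhealthy; check it first.
step grep 'ovirt2:_engine' /proc/mounts
step systemctl restart ovirt-ha-broker ovirt-ha-agent

# If the agents still loop, from a healthy host (global maintenance set):
step hosted-engine --reinitialize-lockspace --force

# If only host 3's metadata is stale (timestamps from last December),
# clearing that host's metadata is another option:
step hosted-engine --clean-metadata --host-id=3 --force-clean
```

This is a sketch of commonly suggested steps, not a guaranteed fix; the `/proc/mounts` pattern assumes the gluster volume path shown in your broker log.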
Re: Local network
by Tommaso - Shellrent
This is what I've got:
*ovs-vsctl show*
03a038d4-e81c-45e0-94d1-6f18d6504f1f
Bridge br-int
fail_mode: secure
Port "ovn-765f43-0"
Interface "ovn-765f43-0"
type: geneve
options: {csum="true", key=flow, remote_ip="xxx.169.yy.6"}
Port br-int
Interface br-int
type: internal
Port "vnet1"
Interface "vnet1"
Port "ovn-b33f6e-0"
Interface "ovn-b33f6e-0"
type: geneve
options: {csum="true", key=flow, remote_ip="xxx.169.yy.2"}
Port "vnet3"
Interface "vnet3"
Port "ovn-8678d9-0"
Interface "ovn-8678d9-0"
type: geneve
options: {csum="true", key=flow, remote_ip="xxx.169.yy.8"}
Port "ovn-fdd090-0"
Interface "ovn-fdd090-0"
type: geneve
options: {csum="true", key=flow, remote_ip="xxx.169.yy.4"}
ovs_version: "2.11.0"
I suppose that the vnic are:
Port "vnet1"
Interface "vnet1"
Port "vnet3"
Interface "vnet3"
on the engine:
*ovn-nbctl show*
switch a1f30e99-3ab7-46a4-925d-287871905cab
(ovirt-local_network_definitiva-d58aea97-bb20-4e8f-bcc3-5277754846bb)
port b82f3479-b459-4c26-aff0-053d15c74ddd
addresses: ["56:6f:96:b1:00:4c"]
port 52f09a28-1645-45ff-9b84-1e53a81bb399
addresses: ["56:6f:96:b1:00:4b"]
*ovn-sbctl show*
Chassis "ab5bdfdd-8df4-4e9b-9ce9-565cfd513a4d"
hostname: "pvt-41f18-002.serverlet.com"
Encap geneve
ip: "aaa.31.bbb.224"
options: {csum="true"}
Port_Binding "b82f3479-b459-4c26-aff0-053d15c74ddd"
Port_Binding "52f09a28-1645-45ff-9b84-1e53a81bb399"
Il 31/03/20 13:39, Staniforth, Paul ha scritto:
> The engine runs the controller so ovn-sbctl won't work, on the hosts,
> use ovs-vsctl show
>
> Paul S.
> ------------------------------------------------------------------------
> *From:* Tommaso - Shellrent <tommaso(a)shellrent.com>
> *Sent:* 31 March 2020 12:13
> *To:* Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk>;
> users(a)ovirt.org <users(a)ovirt.org>
> *Subject:* Re: [ovirt-users] Local network
>
> *Caution External Mail:* Do not click any links or open any
> attachments unless you trust the sender and know that the content is safe.
>
> Hi.
>
> on engine all seems fine.
>
> on the host the command "ovn-sbctl show" is stuck, and with strace I see
> the following error:
>
>
> connect(5, {sa_family=AF_LOCAL,
> sun_path="/var/run/openvswitch/ovnsb_db.sock"}, 37) = -1 ENOENT (No
> such file or directory)
>
>
>
>
>
>
> Il 31/03/20 11:18, Staniforth, Paul ha scritto:
>>
>> .Hello Tommaso,
>> on your oVirt engine host run
>> check the north bridge controller
>> ovn-nbctl show
>> this should show a software switch for each ovn logical network with
>> any ports that are active (in your case you should have 2)
>>
>> check the south bridge controller
>> ovn-sbctl show
>> this should show the software switch on each host with a geneve tunnel.
>>
>> on each host run
>> ovs-vsctl show
>> this should show the virtual switch with a geneve tunnel to each
>> other host and a port for any active vnics
>>
>> Regards,
>> Paul S.
>>
>> ------------------------------------------------------------------------
>> *From:* Tommaso - Shellrent <tommaso(a)shellrent.com>
>> <mailto:tommaso@shellrent.com>
>> *Sent:* 31 March 2020 09:27
>> *To:* users(a)ovirt.org <mailto:users@ovirt.org> <users(a)ovirt.org>
>> <mailto:users@ovirt.org>
>> *Subject:* [ovirt-users] Local network
>>
>> *Caution External Mail:* Do not click any links or open any
>> attachments unless you trust the sender and know that the content is
>> safe.
>>
>> Hi to all.
>>
>> I'm trying to connect two vm, on the same "local storage" host,
>> with an internal isolated network.
>>
>> My setup;
>>
>> VM A:
>>
>> * eth0 with an external ip
>> * eth1, with 192.168.1.1/24
>>
>> VM B
>>
>> * eth0 with an external ip
>> * eth1, with 192.168.1.2/24
>>
>> the eth1 interfaces are connected by a network created on the external
>> provider ovirt-network-ovn, without a subnet defined.
>>
>> Now, the external IPs work fine, but the two VMs cannot connect
>> through the local network
>>
>> ping: ko
>> arping: ko
>>
>>
>> any idea to what to check?
>>
>>
>> Regards
>>
>> --
>> --
>> Shellrent - Il primo hosting italiano Security First
>>
>> *Tommaso De Marchi*
>> /COO - Chief Operating Officer/
>> Shellrent Srl
>> Via dell'Edilizia, 19 - 36100 Vicenza
>> Tel. 0444321155 <tel:+390444321155> | Fax 04441492177
>>
>> To view the terms under which this email is distributed, please go to:-
>> http://leedsbeckett.ac.uk/disclaimer/email/
> --
> --
> Shellrent - Il primo hosting italiano Security First
>
> *Tommaso De Marchi*
> /COO - Chief Operating Officer/
> Shellrent Srl
> Via dell'Edilizia, 19 - 36100 Vicenza
> Tel. 0444321155 <tel:+390444321155> | Fax 04441492177
>
> To view the terms under which this email is distributed, please go to:-
> http://leedsbeckett.ac.uk/disclaimer/email/
--
--
Shellrent - Il primo hosting italiano Security First
*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 <tel:+390444321155> | Fax 04441492177
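Given the ENOENT on ovnsb_db.sock in the strace above, it looks like that host's OVN controller was never pointed at the central southbound DB on the engine (which also explains why `ovn-sbctl show` hangs there, as Paul noted it only works on the engine). A hedged sketch of what to check on the host, dry-run by default; ENGINE_IP and TUNNEL_IP are placeholders you would substitute:

```shell
#!/bin/bash
DRY_RUN=${DRY_RUN:-1}
step() { echo "+ $*"; if [ "$DRY_RUN" = 0 ]; then "$@"; fi; }

ENGINE_IP="x.x.x.x"   # placeholder: the oVirt engine / OVN central address
TUNNEL_IP="y.y.y.y"   # placeholder: this host's geneve tunnel endpoint

# ovn-controller must be running and told where the southbound DB lives:
step systemctl status ovn-controller
step ovs-vsctl get Open_vSwitch . external_ids:ovn-remote

# On oVirt hosts this is normally wired up with vdsm-tool:
step vdsm-tool ovn-config "$ENGINE_IP" "$TUNNEL_IP"
```

If `external_ids:ovn-remote` is missing or points at the wrong address, geneve tunnels and port bindings never come up, which would match the VMs being unable to reach each other over the OVN network.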
How to install Ovirt Node without ISO
by raphael.garcia@centralesupelec.fr
Hello
Is it possible to install an oVirt node on a CentOS 7 server without an ISO (CD or USB)?
Sorry for this newbie question.
Re: Local network
by Tommaso - Shellrent
Hi.
on engine all seems fine.
on the host the command "ovn-sbctl show" is stuck, and with strace I see
the following error:
connect(5, {sa_family=AF_LOCAL,
sun_path="/var/run/openvswitch/ovnsb_db.sock"}, 37) = -1 ENOENT (No such
file or directory)
Il 31/03/20 11:18, Staniforth, Paul ha scritto:
>
> .Hello Tommaso,
> on your oVirt engine host run
> check the north bridge controller
> ovn-nbctl show
> this should show a software switch for each ovn logical network with
> any ports that are active (in your case you should have 2)
>
> check the south bridge controller
> ovn-sbctl show
> this should show the software switch on each host with a geneve tunnel.
>
> on each host run
> ovs-vsctl show
> this should show the virtual switch with a geneve tunnel to each other
> host and a port for any active vnics
>
> Regards,
> Paul S.
>
> ------------------------------------------------------------------------
> *From:* Tommaso - Shellrent <tommaso(a)shellrent.com>
> *Sent:* 31 March 2020 09:27
> *To:* users(a)ovirt.org <users(a)ovirt.org>
> *Subject:* [ovirt-users] Local network
>
> *Caution External Mail:* Do not click any links or open any
> attachments unless you trust the sender and know that the content is safe.
>
> Hi to all.
>
> I'm trying to connect two vm, on the same "local storage" host,
> with an internal isolated network.
>
> My setup;
>
> VM A:
>
> * eth0 with an external ip
> * eth1, with 192.168.1.1/24
>
> VM B
>
> * eth0 with an external ip
> * eth1, with 192.168.1.2/24
>
> the eth1 interfaces are connected by a network created on the external
> provider ovirt-network-ovn, without a subnet defined.
>
> Now, the external IPs work fine, but the two VMs cannot connect through
> the local network
>
> ping: ko
> arping: ko
>
>
> any idea to what to check?
>
>
> Regards
>
> --
> --
> Shellrent - Il primo hosting italiano Security First
>
> *Tommaso De Marchi*
> /COO - Chief Operating Officer/
> Shellrent Srl
> Via dell'Edilizia, 19 - 36100 Vicenza
> Tel. 0444321155 <tel:+390444321155> | Fax 04441492177
>
> To view the terms under which this email is distributed, please go to:-
> http://leedsbeckett.ac.uk/disclaimer/email/
--
--
Shellrent - Il primo hosting italiano Security First
*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 <tel:+390444321155> | Fax 04441492177