SPM contending loop
by Jayme
My cluster appears to be experiencing an SPM problem. I recently placed
each host in maintenance to move the ovirt management network to another
interface. All was successful and all VMs are currently running. However,
I'm now facing an SPM contending loop, with the data center going in and out
of responsive status.
I have a 3-server HCI setup, and all volumes are active and healed; there
are no unsynced entries or split-brains.
Does anyone know how I could diagnose the SPM issue?
engine.log:
2020-01-13 22:24:54,777-04 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand]
(DefaultQuartzScheduler2) [213adf4f] START,
GlusterTasksListVDSCommand(HostName = Orchard0,
VdsIdVDSCommandParametersBase:{hostId='771c67eb-56e6-4736-8c67-668502d4ecf5'}),
log id: 349f80a9
2020-01-13 22:24:55,231-04 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand]
(DefaultQuartzScheduler2) [213adf4f] FINISH, GlusterTasksListVDSCommand,
return: [], log id: 349f80a9
2020-01-13 22:24:58,245-04 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler3) [4f66c75b] START,
GlusterServersListVDSCommand(HostName = Orchard0,
VdsIdVDSCommandParametersBase:{hostId='771c67eb-56e6-4736-8c67-668502d4ecf5'}),
log id: 7b04f110
2020-01-13 22:24:58,887-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand'
return value '
TaskStatusListReturn:{status='Status [code=654, message=Not SPM]'}
'
2020-01-13 22:24:58,888-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] HostName = Orchard1
2020-01-13 22:24:58,888-04 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] Command
'HSMGetAllTasksStatusesVDSCommand(HostName = Orchard1,
VdsIdVDSCommandParametersBase:{hostId='fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d'})'
execution failed: IRSGenericException: IRSErrorException:
IRSNonOperationalException: Not SPM
2020-01-13 22:24:59,034-04 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler3) [4f66c75b] FINISH, GlusterServersListVDSCommand,
return: [10.12.0.220/24:CONNECTED, orchard1.grove.silverorange.com:CONNECTED,
orchard2.grove.silverorange.com:DISCONNECTED], log id: 7b04f110
2020-01-13 22:24:59,049-04 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler3) [4f66c75b] START,
GlusterServersListVDSCommand(HostName = Orchard2,
VdsIdVDSCommandParametersBase:{hostId='fd0752d8-2d41-45b0-887a-0ffacbb8a237'}),
log id: 43f1dd82
2020-01-13 22:24:59,099-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] START,
ConnectStoragePoolVDSCommand(HostName = Orchard1,
ConnectStoragePoolVDSCommandParameters:{hostId='fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d',
vdsId='fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d',
storagePoolId='a45e442e-9989-11e8-b0e4-00163e4bf18a', masterVersion='1'}),
log id: 2b397b31
2020-01-13 22:24:59,099-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] Executing with
domain map: {edc68a7c-7604-47e6-89bc-3738d727e8fc=active,
23c22a0f-0482-425e-8ada-730cf8ec0751=active,
390c0320-e843-4ff3-a4bb-a9973058447f=active,
fb43d33a-82c8-44cb-8169-090cd0d8f56e=active,
d70b171e-7488-4d52-8cad-bbc581dbf16e=active,
1f2e9989-9ab3-43d5-971d-568b8feca918=active}
2020-01-13 22:24:59,850-04 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler3) [4f66c75b] FINISH, GlusterServersListVDSCommand,
return: [10.12.0.222/24:CONNECTED, 10.11.0.220:CONNECTED,
orchard1.grove.silverorange.com:CONNECTED], log id: 43f1dd82
2020-01-13 22:24:59,852-04 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler3) [4f66c75b] START,
GlusterVolumesListVDSCommand(HostName = Orchard0,
GlusterVolumesListVDSParameters:{hostId='771c67eb-56e6-4736-8c67-668502d4ecf5'}),
log id: 263be6f8
2020-01-13 22:25:00,019-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] FINISH,
ConnectStoragePoolVDSCommand, return: , log id: 2b397b31
2020-01-13 22:25:00,036-04 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) []
hostFromVds::selectedVds - 'Orchard1', spmStatus 'Free', storage pool
'Default', storage pool version '4.3'
2020-01-13 22:25:00,056-04 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] starting spm on vds
'Orchard1', storage pool 'Default', prevId '-1', LVER '-1'
2020-01-13 22:25:00,057-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] START,
SpmStartVDSCommand(HostName = Orchard1,
SpmStartVDSCommandParameters:{hostId='fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d',
storagePoolId='a45e442e-9989-11e8-b0e4-00163e4bf18a', prevId='-1',
prevLVER='-1', storagePoolFormatType='V4', recoveryMode='Manual',
SCSIFencing='false'}), log id: 3dea111a
2020-01-13 22:25:00,065-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] spmStart polling
started: taskId '671d5904-e062-4d45-9eb4-83a6f13657fe'
2020-01-13 22:25:00,500-04 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler3) [4f66c75b] FINISH, GlusterVolumesListVDSCommand,
return:
{3f8f6a0f-aed4-48e3-9129-18a2a3f64eef=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@743f4102,
71ff56d9-79b8-445d-b637-72ffc974f109=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@cad5f8f4,
752a9438-cd11-426c-b384-bc3c5f86ed07=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f9f55499,
c3e7447e-8514-4e4a-9ff5-a648fe6aa537=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a2c16da,
79e8e93c-57c8-4541-a360-726cec3790cf=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@33e1ab37,
095fd8fc-5322-4741-8805-fc0bb64b554f=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@b28195c0},
log id: 263be6f8
2020-01-13 22:25:03,089-04 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] Failed in
'HSMGetTaskStatusVDS' method
2020-01-13 22:25:03,090-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] spmStart polling
ended: taskId '671d5904-e062-4d45-9eb4-83a6f13657fe' task status 'finished'
2020-01-13 22:25:03,090-04 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] Start SPM Task
failed - result: 'cleanSuccess', message: VDSGenericException:
VDSErrorException: Failed to HSMGetTaskStatusVDS, error = TaskManager
error, unable to add task: ('Task id already in use:
7df9eeb4-f7a8-4de4-b3f7-5e5607d48dda',), code = 100
2020-01-13 22:25:03,104-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] spmStart polling
ended, spm status: Free
2020-01-13 22:25:03,105-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] START,
HSMClearTaskVDSCommand(HostName = Orchard1,
HSMTaskGuidBaseVDSCommandParameters:{hostId='fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d',
taskId='671d5904-e062-4d45-9eb4-83a6f13657fe'}), log id: 5335606f
2020-01-13 22:25:03,110-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] FINISH,
HSMClearTaskVDSCommand, return: , log id: 5335606f
2020-01-13 22:25:03,110-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [] FINISH,
SpmStartVDSCommand, return: SpmStatusResult:{SPM Id='-1', SPM LVER='-1',
SPM Status='Free'}, log id: 3dea111a
2020-01-13 22:25:03,117-04 INFO
[org.ovirt.engine.core.bll.storage.pool.SetStoragePoolStatusCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [1c8884ac] Running
command: SetStoragePoolStatusCommand internal: true. Entities affected :
ID: a45e442e-9989-11e8-b0e4-00163e4bf18a Type: StoragePool
2020-01-13 22:25:03,179-04 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [1c8884ac]
IrsBroker::Failed::GetStoragePoolInfoVDS: IRSGenericException:
IRSErrorException: SpmStart failed
2020-01-13 22:25:03,205-04 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [1c8884ac] Irs placed
on server 'fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d' failed. Proceed Failover
2020-01-13 22:25:03,223-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [1c8884ac] START,
ConnectStoragePoolVDSCommand(HostName = Orchard2,
ConnectStoragePoolVDSCommandParameters:{hostId='fd0752d8-2d41-45b0-887a-0ffacbb8a237',
vdsId='fd0752d8-2d41-45b0-887a-0ffacbb8a237',
storagePoolId='a45e442e-9989-11e8-b0e4-00163e4bf18a', masterVersion='1'}),
log id: 6a7302e9
2020-01-13 22:25:03,224-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-72) [1c8884ac] Executing
with domain map: {edc68a7c-7604-47e6-89bc-3738d727e8fc=active,
23c22a0f-0482-425e-8ada-730cf8ec0751=active,
390c0320-e843-4ff3-a4bb-a9973058447f=active,
fb43d33a-82c8-44cb-8169-090cd0d8f56e=active,
d70b171e-7488-4d52-8cad-bbc581dbf16e=active,
1f2e9989-9ab3-43d5-971d-568b8feca918=active}
-----------------------------
vdsm.log from one of my hosts:
2020-01-13 22:26:12,434-0400 ERROR (tasks/5) [storage.TaskManager.Task]
Unexpected error (task:652)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 631,
in _loadMetaFile
for line in getProcPool().readLines(filename):
File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
line 334, in readLines
return ioproc.readlines(path)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 555,
in readlines
return self.readfile(path, direct).splitlines()
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 543,
in readfile
"direct": direct}, self.timeout)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 448,
in _sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 2] No such file or directory
2020-01-13 22:26:12,435-0400 ERROR (tasks/5) [storage.TaskManager]
taskManager: Skipping directory: 88772af7-8cf5-433e-8be3-8d0adf0bbf04
(taskManager:222)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
214, in loadDumpedTasks
t = Task.loadTask(store, taskID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1162,
in loadTask
t._load(store, ext)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 746,
in _load
self._loadJobMetaFile(taskDir, jn)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 696,
in _loadJobMetaFile
self._loadMetaFile(taskFile, self.jobs[n], Job.fields)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 653,
in _loadMetaFile
raise se.TaskMetaDataLoadError(filename)
TaskMetaDataLoadError: Can't load Task Metadata:
('/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/88772af7-8cf5-433e-8be3-8d0adf0bbf04/88772af7-8cf5-433e-8be3-8d0adf0bbf04.job.0',)
2020-01-13 22:26:12,462-0400 ERROR (tasks/5) [storage.TaskManager.Task]
Unexpected error (task:652)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 631,
in _loadMetaFile
for line in getProcPool().readLines(filename):
File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
line 334, in readLines
return ioproc.readlines(path)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 555,
in readlines
return self.readfile(path, direct).splitlines()
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 543,
in readfile
"direct": direct}, self.timeout)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 448,
in _sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 2] No such file or directory
2020-01-13 22:26:12,462-0400 ERROR (tasks/5) [storage.TaskManager]
taskManager: Skipping directory: a9b11e33-9b93-46a0-a36e-85063fd53ebe
(taskManager:222)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
214, in loadDumpedTasks
t = Task.loadTask(store, taskID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1162,
in loadTask
t._load(store, ext)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 751,
in _load
self._loadRecoveryMetaFile(taskDir, rn)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 705,
in _loadRecoveryMetaFile
self._loadMetaFile(taskFile, self.recoveries[n], Recovery.fields)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 653,
in _loadMetaFile
raise se.TaskMetaDataLoadError(filename)
TaskMetaDataLoadError: Can't load Task Metadata:
('/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/a9b11e33-9b93-46a0-a36e-85063fd53ebe/a9b11e33-9b93-46a0-a36e-85063fd53ebe.recover.0',)
2020-01-13 22:26:12,476-0400 ERROR (tasks/5) [storage.TaskManager.Task]
Unexpected error (task:652)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 631,
in _loadMetaFile
for line in getProcPool().readLines(filename):
File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
line 334, in readLines
return ioproc.readlines(path)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 555,
in readlines
return self.readfile(path, direct).splitlines()
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 543,
in readfile
"direct": direct}, self.timeout)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 448,
in _sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 2] No such file or directory
2020-01-13 22:26:12,476-0400 ERROR (tasks/5) [storage.TaskManager]
taskManager: Skipping directory: 650f2df4-6489-47e2-af5d-db86a22f01c0
(taskManager:222)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
214, in loadDumpedTasks
t = Task.loadTask(store, taskID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1162,
in loadTask
t._load(store, ext)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 751,
in _load
self._loadRecoveryMetaFile(taskDir, rn)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 705,
in _loadRecoveryMetaFile
self._loadMetaFile(taskFile, self.recoveries[n], Recovery.fields)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 653,
in _loadMetaFile
raise se.TaskMetaDataLoadError(filename)
TaskMetaDataLoadError: Can't load Task Metadata:
('/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/650f2df4-6489-47e2-af5d-db86a22f01c0/650f2df4-6489-47e2-af5d-db86a22f01c0.recover.0',)
2020-01-13 22:26:12,487-0400 ERROR (tasks/5) [storage.TaskManager.Task]
Unexpected error (task:652)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 631,
in _loadMetaFile
for line in getProcPool().readLines(filename):
File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
line 334, in readLines
return ioproc.readlines(path)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 555,
in readlines
return self.readfile(path, direct).splitlines()
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 543,
in readfile
"direct": direct}, self.timeout)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 448,
in _sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 2] No such file or directory
2020-01-13 22:26:12,488-0400 ERROR (tasks/5) [storage.TaskManager]
taskManager: Skipping directory: a2e86fcc-8e7e-4e6d-bf5e-5ac61a98169e
(taskManager:222)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
214, in loadDumpedTasks
t = Task.loadTask(store, taskID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1162,
in loadTask
t._load(store, ext)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 746,
in _load
self._loadJobMetaFile(taskDir, jn)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 696,
in _loadJobMetaFile
self._loadMetaFile(taskFile, self.jobs[n], Job.fields)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 653,
in _loadMetaFile
raise se.TaskMetaDataLoadError(filename)
TaskMetaDataLoadError: Can't load Task Metadata:
('/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/a2e86fcc-8e7e-4e6d-bf5e-5ac61a98169e/a2e86fcc-8e7e-4e6d-bf5e-5ac61a98169e.job.0',)
2020-01-13 22:26:12,493-0400 ERROR (tasks/5) [storage.TaskManager.Task]
Unexpected error (task:652)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 631,
in _loadMetaFile
for line in getProcPool().readLines(filename):
File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
line 334, in readLines
return ioproc.readlines(path)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 555,
in readlines
return self.readfile(path, direct).splitlines()
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 543,
in readfile
"direct": direct}, self.timeout)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 448,
in _sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 2] No such file or directory
2020-01-13 22:26:12,493-0400 ERROR (tasks/5) [storage.TaskManager]
taskManager: Skipping directory: eb8b4f7a-9b5c-46f6-aaa7-7ef05dbf1743
(taskManager:222)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
214, in loadDumpedTasks
t = Task.loadTask(store, taskID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1162,
in loadTask
t._load(store, ext)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 737,
in _load
self._loadTaskMetaFile(taskDir)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 688,
in _loadTaskMetaFile
self._loadMetaFile(taskFile, self, Task.fields)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 653,
in _loadMetaFile
raise se.TaskMetaDataLoadError(filename)
TaskMetaDataLoadError: Can't load Task Metadata:
('/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/eb8b4f7a-9b5c-46f6-aaa7-7ef05dbf1743/eb8b4f7a-9b5c-46f6-aaa7-7ef05dbf1743.task',)
2020-01-13 22:26:12,505-0400 ERROR (tasks/5) [storage.TaskManager.Task]
Unexpected error (task:652)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 631,
in _loadMetaFile
for line in getProcPool().readLines(filename):
File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
line 334, in readLines
return ioproc.readlines(path)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 555,
in readlines
return self.readfile(path, direct).splitlines()
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 543,
in readfile
"direct": direct}, self.timeout)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 448,
in _sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 2] No such file or directory
2020-01-13 22:26:12,506-0400 ERROR (tasks/5) [storage.TaskManager]
taskManager: Skipping directory: 75318d21-45b2-4dbd-985c-a7851a10a463
(taskManager:222)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
214, in loadDumpedTasks
t = Task.loadTask(store, taskID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1162,
in loadTask
t._load(store, ext)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 746,
in _load
self._loadJobMetaFile(taskDir, jn)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 696,
in _loadJobMetaFile
self._loadMetaFile(taskFile, self.jobs[n], Job.fields)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 653,
in _loadMetaFile
raise se.TaskMetaDataLoadError(filename)
TaskMetaDataLoadError: Can't load Task Metadata:
('/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/75318d21-45b2-4dbd-985c-a7851a10a463/75318d21-45b2-4dbd-985c-a7851a10a463.job.0',)
2020-01-13 22:26:12,517-0400 ERROR (tasks/5) [storage.TaskManager.Task]
Unexpected error (task:652)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 631,
in _loadMetaFile
for line in getProcPool().readLines(filename):
File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
line 334, in readLines
return ioproc.readlines(path)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 555,
in readlines
return self.readfile(path, direct).splitlines()
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 543,
in readfile
"direct": direct}, self.timeout)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 448,
in _sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 2] No such file or directory
2020-01-13 22:26:12,517-0400 ERROR (tasks/5) [storage.TaskManager]
taskManager: Skipping directory: 1779b352-022c-49a3-9388-2f688d33cdab
(taskManager:222)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
214, in loadDumpedTasks
t = Task.loadTask(store, taskID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1162,
in loadTask
t._load(store, ext)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 746,
in _load
self._loadJobMetaFile(taskDir, jn)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 696,
in _loadJobMetaFile
self._loadMetaFile(taskFile, self.jobs[n], Job.fields)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 653,
in _loadMetaFile
raise se.TaskMetaDataLoadError(filename)
TaskMetaDataLoadError: Can't load Task Metadata:
('/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/1779b352-022c-49a3-9388-2f688d33cdab/1779b352-022c-49a3-9388-2f688d33cdab.job.0',)
2020-01-13 22:26:12,532-0400 ERROR (tasks/5) [storage.TaskManager.Task]
Unexpected error (task:652)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 631,
in _loadMetaFile
for line in getProcPool().readLines(filename):
File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
line 334, in readLines
return ioproc.readlines(path)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 555,
in readlines
return self.readfile(path, direct).splitlines()
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 543,
in readfile
"direct": direct}, self.timeout)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 448,
in _sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 2] No such file or directory
2020-01-13 22:26:12,532-0400 ERROR (tasks/5) [storage.TaskManager]
taskManager: Skipping directory: 0302036a-7d99-4685-befb-6fee1602feaf
(taskManager:222)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
214, in loadDumpedTasks
t = Task.loadTask(store, taskID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1162,
in loadTask
t._load(store, ext)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 746,
in _load
self._loadJobMetaFile(taskDir, jn)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 696,
in _loadJobMetaFile
self._loadMetaFile(taskFile, self.jobs[n], Job.fields)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 653,
in _loadMetaFile
raise se.TaskMetaDataLoadError(filename)
TaskMetaDataLoadError: Can't load Task Metadata:
('/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/0302036a-7d99-4685-befb-6fee1602feaf/0302036a-7d99-4685-befb-6fee1602feaf.job.0',)
2020-01-13 22:26:12,545-0400 ERROR (tasks/5) [storage.TaskManager.Task]
Unexpected error (task:652)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 631,
in _loadMetaFile
for line in getProcPool().readLines(filename):
File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
line 334, in readLines
return ioproc.readlines(path)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 555,
in readlines
return self.readfile(path, direct).splitlines()
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 543,
in readfile
"direct": direct}, self.timeout)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 448,
in _sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 2] No such file or directory
2020-01-13 22:26:12,546-0400 ERROR (tasks/5) [storage.TaskManager]
taskManager: Skipping directory: 24a825b9-d48d-4134-8aa1-b4db7a9c6ab1
(taskManager:222)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
214, in loadDumpedTasks
t = Task.loadTask(store, taskID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1162,
in loadTask
t._load(store, ext)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 737,
in _load
self._loadTaskMetaFile(taskDir)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 688,
in _loadTaskMetaFile
self._loadMetaFile(taskFile, self, Task.fields)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 653,
in _loadMetaFile
raise se.TaskMetaDataLoadError(filename)
TaskMetaDataLoadError: Can't load Task Metadata:
('/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/24a825b9-d48d-4134-8aa1-b4db7a9c6ab1.backup/24a825b9-d48d-4134-8aa1-b4db7a9c6ab1.task',)
2020-01-13 22:26:12,560-0400 ERROR (tasks/5) [storage.TaskManager.Task]
Unexpected error (task:652)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 631,
in _loadMetaFile
for line in getProcPool().readLines(filename):
File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
line 334, in readLines
return ioproc.readlines(path)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 555,
in readlines
return self.readfile(path, direct).splitlines()
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 543,
in readfile
"direct": direct}, self.timeout)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 448,
in _sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 2] No such file or directory
2020-01-13 22:26:12,561-0400 ERROR (tasks/5) [storage.TaskManager]
taskManager: Skipping directory: 90d88529-a051-4acd-bef2-d0aa034c15de
(taskManager:222)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
214, in loadDumpedTasks
t = Task.loadTask(store, taskID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1162,
in loadTask
t._load(store, ext)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 751,
in _load
self._loadRecoveryMetaFile(taskDir, rn)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 705,
in _loadRecoveryMetaFile
self._loadMetaFile(taskFile, self.recoveries[n], Recovery.fields)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 653,
in _loadMetaFile
raise se.TaskMetaDataLoadError(filename)
TaskMetaDataLoadError: Can't load Task Metadata:
('/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/90d88529-a051-4acd-bef2-d0aa034c15de/90d88529-a051-4acd-bef2-d0aa034c15de.recover.0',)
2020-01-13 22:26:12,607-0400 ERROR (tasks/5) [storage.StoragePool]
Unexpected error (sp:383)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 378, in
startSpm
self.taskMng.recoverDumpedTasks()
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
227, in recoverDumpedTasks
self.queueRecovery(task)
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
48, in queueRecovery
return self._queueTask(task, task.recover)
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
54, in _queueTask
'Task id already in use: {0}'.format(task.id))
AddTaskError: TaskManager error, unable to add task: ('Task id already in
use: 7df9eeb4-f7a8-4de4-b3f7-5e5607d48dda',)
2020-01-13 22:26:12,608-0400 ERROR (tasks/5) [storage.StoragePool] failed:
TaskManager error, unable to add task: ('Task id already in use:
7df9eeb4-f7a8-4de4-b3f7-5e5607d48dda',) (sp:384)
2020-01-13 22:26:12,635-0400 ERROR (tasks/5) [storage.TaskManager.Task]
(Task='3c7de2a0-597c-4ebe-b4de-689dba26045b') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336,
in run
return self.cmd(*self.argslist, **self.argsdict)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 378, in
startSpm
self.taskMng.recoverDumpedTasks()
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
227, in recoverDumpedTasks
self.queueRecovery(task)
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
48, in queueRecovery
return self._queueTask(task, task.recover)
File "/usr/lib/python2.7/site-packages/vdsm/storage/taskManager.py", line
54, in _queueTask
'Task id already in use: {0}'.format(task.id))
AddTaskError: TaskManager error, unable to add task: ('Task id already in
use: 7df9eeb4-f7a8-4de4-b3f7-5e5607d48dda',)
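From what I can tell, SPM start dies in recoverDumpedTasks on the leftover task metadata shown above, and the same task id (7df9eeb4-f7a8-4de4-b3f7-5e5607d48dda) blocks every contention attempt. In case it helps with diagnosis, the dumped task directories on the master storage domain can be inspected read-only with something like this (the pool ID below is the one from my logs):
# list the dumped task directories vdsm tries to recover at SPM start
ls -l /rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/
# look for empty or truncated metadata files, which would explain the
# TaskMetaDataLoadError entries above
find /rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/ -size 0 -ls
# locate the task id vdsm reports as already in use
grep -rl 7df9eeb4-f7a8-4de4-b3f7-5e5607d48dda \
  /rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/mastersd/master/tasks/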
[ANN] oVirt 4.3.8 Third Release Candidate is now available for testing
by Lev Veyde
The oVirt Project is pleased to announce the availability of the oVirt
4.3.8 Third Release Candidate for testing, as of January 8th, 2020.
This update is a release candidate of the eighth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
* oVirt Node 4.3 (available for x86_64 only) has been built using the
CentOS 7.7 release
See the release notes [1] for known issues, new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available [2]
Additional Resources:
* Read more about the oVirt 4.3.8 release highlights:
http://www.ovirt.org/release/4.3.8/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.8/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
lev(a)redhat.com | lveyde(a)redhat.com
Re: gluster shards not healing
by Strahil
If you observe similar problems in the future, you can just rsync the files from one brick to the other and run a heal (usually a full heal resolves it all).
Most probably the hypervisor accessed the file the shards belong to, and the FUSE client noticed that the first host needed healing and uploaded the files.
I have also used the following to trigger a heal (not good for large volumes):
find /fuse-mountpoint -iname '*' -exec stat {} \;
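If the heal still does not start, this is roughly what I mean by rsyncing between bricks (a sketch only: the volume, paths and hostnames are taken from your output, and it should run on the host that is missing the shard):
SHARD=a746f8d2-5044-4d20-b525-24456e6f6f16.177
BRICK=/gluster_bricks/prod_a/prod_a
# pull the missing shard from a healthy brick, preserving owner,
# permissions and hardlinks
rsync -avH host1:$BRICK/.shard/$SHARD $BRICK/.shard/
# then let gluster reconcile the metadata
gluster volume heal prod_a full
gluster volume heal prod_a info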
Best Regards,
Strahil Nikolov
On Jan 14, 2020 02:33, Jayme <jaymef(a)gmail.com> wrote:
>
> I'm not exactly sure how, but it looks like the problem worked itself out after a few hours
>
> On Mon, Jan 13, 2020 at 5:02 PM Jayme <jaymef(a)gmail.com> wrote:
>>
>> I have a 3-way replica HCI setup. I recently placed one host in maintenance to perform work on it. When I re-activated it, I noticed that many of my gluster volumes are not completing the heal process.
>>
>> heal info shows shard files in heal pending. I looked up the files and it appears that they exist on the other two hosts (the ones that remained active) but do not exist on the host that was in maintenance.
>>
>> I tried to run a manual heal on one of the volumes and then a full heal as well, but there are still unhealed shards. The shard files also still do not exist on the maintenance host. Here is an example from one of my volumes:
>>
>> # gluster volume heal prod_a info
>> Brick gluster0:/gluster_bricks/prod_a/prod_a
>> Status: Connected
>> Number of entries: 0
>>
>> Brick gluster1:/gluster_bricks/prod_a/prod_a
>> /.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>> /.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.178
>> Status: Connected
>> Number of entries: 2
>>
>> Brick gluster2:/gluster_bricks/prod_a/prod_a
>> /.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>> /.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.178
>> Status: Connected
>> Number of entries: 2
>>
>>
>> host0:
>>
>> # ls -al /gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>> ls: cannot access /gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177: No such file or directory
>>
>> host1:
>>
>> # ls -al /gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>> -rw-rw----. 2 root root 67108864 Jan 13 16:57 /gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>>
>> host2:
>>
>> # ls -al /gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>> -rw-rw----. 2 root root 67108864 Jan 13 16:57 /gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>>
>>
>> How can I heal these volumes?
>>
>> Thanks!
gluster shards not healing
by Jayme
I have a 3-way replica HCI setup. I recently placed one host in
maintenance to perform work on it. When I re-activated it, I noticed
that many of my gluster volumes are not completing the heal process.
heal info shows shard files in heal pending. I looked up the files and it
appears that they exist on the other two hosts (the ones that remained
active) but do not exist on the host that was in maintenance.
I tried to run a manual heal on one of the volumes and then a full heal as
well, but there are still unhealed shards. The shard files also still do
not exist on the maintenance host. Here is an example from one of my
volumes:
# gluster volume heal prod_a info
Brick gluster0:/gluster_bricks/prod_a/prod_a
Status: Connected
Number of entries: 0
Brick gluster1:/gluster_bricks/prod_a/prod_a
/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.178
Status: Connected
Number of entries: 2
Brick gluster2:/gluster_bricks/prod_a/prod_a
/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.178
Status: Connected
Number of entries: 2
host0:
# ls -al
/gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
ls: cannot access
/gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177:
No such file or directory
host1:
# ls -al
/gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
-rw-rw----. 2 root root 67108864 Jan 13 16:57
/gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
host2:
# ls -al
/gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
-rw-rw----. 2 root root 67108864 Jan 13 16:57
/gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
How can I heal these volumes?
Thanks!
How can I link the cloned disks from a snapshot to the original disks?
by FMGarcia
Hello,
Sorry, I posted this in Bugzilla first, and you advised me that my question
isn't a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1789534
I will attach the same files and the same text here.
Description of problem:
This problem is bizarre/unusual/convoluted enough that I think it won't affect anybody, but the system allows it.
In a backup from a snapshot with several disks, we can identify the original disk for each cloned disk through their names and sizes. But what if several disks have the same name and the same size? How could we match them without inspecting the content of each disk?
With the Java SDK, the order of the disks changes between the original machine and the cloned machine, so that approach is not possible either.
In the attached files, I show the disks (with different names, in the order returned by the Java SDK) of the original VM [vm1] and the cloned VM [vm1Clone], and another pair [vm2] and [vm2Clone] created by the same process but where all disks have the same name. The disks have two sizes: 1 GB and 2 GB.
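For reference, this is the kind of enumeration I am doing, shown here as the REST equivalent of the SDK calls (the engine URL, credentials and IDs are placeholders):
# list the disk attachments of a VM, then fetch each referenced disk;
# name and provisioned_size are exactly the fields that fail to
# disambiguate identically named, identically sized disks
curl -s -k -u 'admin@internal:password' -H 'Accept: application/json' \
  https://engine.example.com/ovirt-engine/api/vms/<vm-id>/diskattachments
curl -s -k -u 'admin@internal:password' -H 'Accept: application/json' \
  https://engine.example.com/ovirt-engine/api/disks/<disk-id>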
Steps to Reproduce:
1. Create a VM and attach several disks with the same name and the same size.
2. Create a backup from a snapshot.
3. Now, try to match each cloned disk to its original without looking at their contents.
Actual results:
I think it is impossible without looking at their contents.
Expected results:
Knowing which disk of the cloned machine links to which disk of the original machine.
Additional info:
If you need more info, please ask.
Vm.log -> original VM with different disk names
VmClone.log -> cloned VM with different disk names
Vm2.log -> original VM with the same disk names
Vm2Clone.log -> cloned VM with the same disk names
Thanks for your time,
Fran
Re: Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2
by William Smyth
I'm running into the exact same issue as described in this thread. The
HostedEngine VM (HE) loses connectivity during the deployment while on the
NAT'd network and the deployment fails. My HE VM is never migrated to
shared storage because that part of the process hasn't been reached, nor
have I been asked for the shared storage details, which are simple NFS
shares. I've tried the deployment from the CLI and through Cockpit, both
times using the Ansible deployment, and both times reaching the same
conclusion.
I did manage to connect to the CentOS 7 host that is running the HE guest
with virt-manager and moved the NIC to the 'ovirtmgmt' vswitch using
macvtap, which allowed me to access HE from the outside world. One of the
big issues though is that the HE isn't on shared storage and I'm not sure
how to move it there. I'm running 2 CentOS 7 hosts with different CPU
makes, AMD and Intel, so I have two separate single-host clusters, but I'm
trying to at least have the HA setup for the HE, which I think is still
possible. I know that I won't be able to do a live migration, but that's
not a big deal. I'm afraid my failed deployment of HE is preventing this
and I'm at an impasse. I don't want to give up on oVirt but it's becoming
difficult to get my lab environment operational and I'm wasting a lot of
time troubleshooting.
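For completeness, the NIC move I mentioned above was done in virt-manager; the virsh equivalent should be roughly the following (a sketch only: 'HostedEngineLocal' is the name the installer gave the bootstrap VM on my host, and I show a plain bridge attach instead of the macvtap device I actually used):
# confirm the bootstrap VM name
virsh -c qemu:///system list --all
# attach a NIC on the ovirtmgmt bridge to the running bootstrap VM
virsh -c qemu:///system attach-interface HostedEngineLocal bridge ovirtmgmt --model virtio --live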
Any help or suggestions would be greatly appreciated.
William Smyth
Can we please migrate to 2020 and get a user friendly issues/support tool?
by m.skrzetuski@gmail.com
Hello everyone,
I can speak only for myself, but I find mailing lists so 1990s. Could we migrate this list to something user-friendly and Jira-like?
I have to admit it's rather difficult to search and read the archive for old issues in the e-mail format.
Kind regards
Skrzetuski
Setting up cockpit?
by m.skrzetuski@gmail.com
Hello everyone,
I'd like to get cockpit to work, because currently when I click "Host Console" on a host I just get "connection refused". I checked, and after the engine installation the cockpit service was not running. When I start it, it runs and answers on port 9090; however, the SSL certificate is broken.
- How do I auto-enable cockpit on installation?
- How do I supply my own SSL certificate to cockpit?
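For context, this is what I would expect to work based on a standard cockpit setup on CentOS 7 (please correct me if oVirt Node handles it differently):
# enable the socket so cockpit starts on demand and survives reboots
systemctl enable --now cockpit.socket
# on this platform cockpit reads the certificate and key from a single .cert file
cat my-host.crt my-host.key > /etc/cockpit/ws-certs.d/99-my-host.cert
# verify which certificate cockpit will actually use
remotectl certificate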
Kind regards
Skrzetuski
Re: Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
by Strahil
Can you check for any differences between the newly created VM and the old one?
Set an alias as follows:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
Then:
virsh dumpxml oldVM
virsh dumpxml newVM
Maybe something will give a clue about what is going on.
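To spot the differences quickly, you can also diff the two dumps directly (same alias assumed):
diff <(virsh dumpxml oldVM) <(virsh dumpxml newVM)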
Best Regards,
Strahil Nikolov
On Jan 12, 2020 20:03, Latchezar Filtchev <Latcho(a)aubg.bg> wrote:
>
> Dear Strahil,
>
>
>
> Last strange discovery. Newly installed VM – same OS, same NICs – connected to the same VLANs. Both machines are running on the same virtualization host.
>
> Newly installed VM – no network issues at all. It works flawlessly.
>
>
>
> It is really very strange.
>
>
>
> Best,
>
> Latcho
>
>
4 years, 11 months