Lots of storage.MailBox.SpmMailMonitor
by Fabrice Bacchella
My vdsm log files are huge:
-rw-r--r-- 1 vdsm kvm 1.8G Nov 22 11:32 vdsm.log
And this is just half an hour of logs:
$ head -1 vdsm.log
2018-11-22 11:01:12,132+0100 ERROR (mailbox-spm) [storage.MailBox.SpmMailMonitor] mailbox 2 checksum failed, not clearing mailbox, clearing new mail (data='...lots of data', expected='\xa4\x06\x08\x00') (mailbox:612)
I just upgraded vdsm:
$ rpm -qi vdsm
Name : vdsm
Version : 4.20.43
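A minimal sketch of how one might confirm the flood comes from this single error and keep the file from growing while investigating; it assumes the stock vdsm log location and logrotate config (/var/log/vdsm/vdsm.log, /etc/logrotate.d/vdsm), so adjust paths to your install:
# count how many lines come from the SpmMailMonitor checksum error alone
grep -c 'SpmMailMonitor' /var/log/vdsm/vdsm.log
# count everything else, to see whether the rest of the log is normal
grep -vc 'SpmMailMonitor' /var/log/vdsm/vdsm.log
# as root, force an immediate rotation so the log stops growing unbounded
logrotate -f /etc/logrotate.d/vdsm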
2 years, 10 months
poweroff and reboot with ovirt_vm ansible module
by Nathanaël Blanchet
Hello, is there a way to power off or reboot a VM with the ovirt_vm ansible
module, without going through the stopped and running states?
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
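A minimal sketch of how one might check what the locally installed module offers before scripting around it; the exact set of state values (and the force option some releases use to distinguish a graceful shutdown from a hard power-off) varies between ovirt_vm versions, so this only inspects the local documentation rather than assuming a particular release:
# show the documented options of the ovirt_vm module shipped with this Ansible
ansible-doc ovirt_vm
# narrow the output down to the 'state' and 'force' parameters
ansible-doc ovirt_vm | grep -nE 'state|force'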
3 years, 6 months
OVN and change of mgmt network
by Gianluca Cecchi
Hello,
I previously had OVN running on the engine (as the OVN provider, with northd and
the northbound and southbound DBs) and on the hosts (with the OVN controller).
After changing the mgmt IP of the hosts (the engine kept the same IP),
I ran the following command on them again:
vdsm-tool ovn-config <ip_of_engine> <nel_local_ip_of_host>
Now I think I have to clean up some things, e.g.:
1) On engine
where I get these lines below
systemctl status ovn-northd.service -l
. . .
Sep 29 14:41:42 ovmgr1 ovsdb-server[940]: ovs|00005|reconnect|ERR|tcp:
10.4.167.40:37272: no response to inactivity probe after 5 seconds,
disconnecting
Oct 03 11:52:00 ovmgr1 ovsdb-server[940]: ovs|00006|reconnect|ERR|tcp:
10.4.167.41:52078: no response to inactivity probe after 5 seconds,
disconnecting
The two IPs are the old ones of two of the hosts.
It seems that restarting the services fixed it...
Can anyone confirm if I have to do anything else?
2) On hosts (there are 3 hosts with OVN on ip 10.4.192.32/33/34)
where I currently have this output
[root@ov301 ~]# ovs-vsctl show
3a38c5bb-0abf-493d-a2e6-345af8aedfe3
Bridge br-int
fail_mode: secure
Port "ovn-1dce5b-0"
Interface "ovn-1dce5b-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.192.32"}
Port "ovn-ddecf0-0"
Interface "ovn-ddecf0-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.192.33"}
Port "ovn-fd413b-0"
Interface "ovn-fd413b-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.168.74"}
Port br-int
Interface br-int
type: internal
ovs_version: "2.7.2"
[root@ov301 ~]#
The IPs of the form 10.4.192.x are OK.
But there is a leftover entry for an old host I initially used for tests,
corresponding to 10.4.168.74, which doesn't exist anymore.
How can I clean records for 1) and 2)?
Thanks,
Gianluca
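A minimal sketch of one possible cleanup, assuming the southbound DB runs on the engine and taking the stale chassis and port names from the outputs quoted above (verify them first; this is a sketch, not a verified procedure):
# 1) on the engine: list the registered chassis and remove the stale one
#    that still advertises the decommissioned 10.4.168.74 encapsulation IP
ovn-sbctl show
ovn-sbctl chassis-del <uuid_or_name_of_stale_chassis>
# 2) on the hosts: ovn-controller normally removes the matching geneve port
#    once the chassis is gone from the southbound DB; if it lingers, it can
#    be dropped manually (port name taken from the ovs-vsctl output above)
ovs-vsctl del-port br-int ovn-fd413b-0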
3 years, 9 months
deprecating export domain?
by Charles Kozler
Hello,
I recently read on this list, from a Red Hat member, that the export domain is
either being deprecated or being considered for deprecation.
To that end, can you share details? Can you share any notes/postings/BZs
that document this? I would imagine something like this would be discussed
with a larger audience.
This seems like a fairly significant change to make, and I am curious
when this is scheduled. Currently, a lot of my backups rely explicitly on
an export domain for online snapshots, so I'd like to plan accordingly.
Thanks!
4 years, 2 months
Vm suddenly paused with error "vm has paused due to unknown storage error"
by Jasper Siero
Hi all,
Since we upgraded our oVirt nodes to CentOS 7, a VM (not a specific one, but never more than one at a time) will sometimes pause suddenly with the error "VM ... has paused due to unknown storage error". It has now happened twice in a month.
The oVirt node uses SAN storage for the VMs running on it. When a specific VM pauses with this error, the other VMs keep running without problems.
The VM runs without problems after unpausing it.
Versions:
CentOS Linux release 7.1.1503
vdsm-4.14.17-0
libvirt-daemon-1.2.8-16
vdsm.log:
VM Channels Listener::DEBUG::2015-10-25 07:43:54,382::vmChannels::95::vds::(_handle_timeouts) Timeout on fileno 78.
libvirtEventLoop::INFO::2015-10-25 07:43:56,177::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2015-10-25 07:43:56,178::vm::5204::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::event Suspended detail 2 opaque None
libvirtEventLoop::INFO::2015-10-25 07:43:56,178::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
...........
libvirtEventLoop::INFO::2015-10-25 07:43:56,180::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
specific error part in libvirt vm log:
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
...........
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
engine.log:
2015-10-25 07:44:48,945 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-40) [a43dcc8] VM diataal-prod-cas1 77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb moved from
Up --> Paused
2015-10-25 07:44:49,003 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-40) [a43dcc8] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VM diataal-prod-cas1 has paused due to unknown storage error.
Has anyone experienced the same problem or knows a way to solve this?
Kind regards,
Jasper
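A minimal sketch of the kind of host-side checks one might run right after such a pause, assuming multipathed SAN LUNs and default log locations; this only narrows the problem down, it is not a fix:
# state of every SAN path (look for failed/faulty paths)
multipath -ll
# transient path failures or I/O errors in the kernel log around 07:43
grep -iE 'i/o error|path.*(fail|down)|remaining active paths' /var/log/messages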
4 years, 9 months
Ovirt-engine-ha cannot see live status of Hosted Engine
by asm@pioner.kz
Good day for all.
I have some issues with oVirt 4.2.6, but here is the main one:
I have two CentOS 7 nodes with the same config and the latest oVirt 4.2.6, with a Hosted Engine whose disk is on NFS storage.
Some virtual machines are also working fine.
But when the HostedEngine is running on one node (srv02.local), everything is fine.
After migrating it to the other node (srv00.local), I see that the agent cannot check the liveliness of the HostedEngine. After a few minutes the HostedEngine reboots, and after some time I see the same situation. After migration to the other node (srv00.local), all looks OK.
Output of the hosted-engine --vm-status command when the HostedEngine is on the srv00 node:
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : srv02.local
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
Score : 0
stopped : False
Local maintenance : False
crc32 : ecc7ad2d
local_conf_timestamp : 78328
Host timestamp : 78328
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=78328 (Tue Sep 18 12:44:18 2018)
host-id=1
score=0
vm_conf_refresh_time=78328 (Tue Sep 18 12:44:18 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Fri Jan 2 03:49:58 1970
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : srv00.local
Host ID : 2
Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 1d62b106
local_conf_timestamp : 326288
Host timestamp : 326288
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=326288 (Tue Sep 18 12:44:21 2018)
host-id=2
score=3400
vm_conf_refresh_time=326288 (Tue Sep 18 12:44:21 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
Log agent.log from srv00.local:
MainThread::INFO::2018-09-18 12:40:51,749::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedE
ngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:40:52,052::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.
HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-09-18 12:41:01,066::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedE
ngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:41:01,374::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.
HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-09-18 12:41:11,393::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.
HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2018-09-18 12:41:11,393::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.
HostedEngine::(refresh) Host srv02.local.pioner.kz (id 1): {'conf_on_shared_storage': True, 'extra': 'meta
data_parse_version=1\nmetadata_feature_version=1\ntimestamp=78128 (Tue Sep 18 12:40:58 2018)\nhost-id=1\ns
core=0\nvm_conf_refresh_time=78128 (Tue Sep 18 12:40:58 2018)\nconf_on_shared_storage=True\nmaintenance=Fa
lse\nstate=EngineUnexpectedlyDown\nstopped=False\ntimeout=Fri Jan 2 03:49:58 1970\n', 'hostname': 'srv02.
local.pioner.kz', 'alive': True, 'host-id': 1, 'engine-status': {'reason': 'vm not running on this host',
'health': 'bad', 'vm': 'down_unexpected', 'detail': 'unknown'}, 'score': 0, 'stopped': False, 'maintenance
': False, 'crc32': 'e18e3f22', 'local_conf_timestamp': 78128, 'host-ts': 78128}
MainThread::INFO::2018-09-18 12:41:11,393::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.
HostedEngine::(refresh) Local (id 2): {'engine-health': {'reason': 'failed liveliness check', 'health': 'b
ad', 'vm': 'up', 'detail': 'Up'}, 'bridge': True, 'mem-free': 12763.0, 'maintenance': False, 'cpu-load': 0
.0364, 'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-09-18 12:41:11,393::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedE
ngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:41:11,703::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.
HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-09-18 12:41:21,716::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedE
ngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:41:22,020::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.
HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-09-18 12:41:31,033::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedE
ngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:41:31,344::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.
HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
As we can see, the agent thinks that the HostedEngine is still powering up. I cannot do anything with it. I have already reinstalled the srv00 node many times without success.
One time I even had to uninstall the ovirt* and vdsm* software. Another interesting point: after installing only "yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm" on this node, I tried to install the node from the engine web interface with the "Deploy" action. The installation was unsuccessful because I had not installed ovirt-hosted-engine-ha on this node beforehand. I don't see in the documentation that this is needed before installing new hosts, but I mention it for information and checking. After installing ovirt-hosted-engine-ha, the node was installed with HostedEngine support, but the main issue did not change.
Thanks in advance for help.
BR,
Alexandr
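A minimal sketch of how one might check the same thing the agent's liveliness check looks at, from the host that reports the failure; <engine_fqdn> is a placeholder for the Hosted Engine FQDN, and the URL is the engine's standard health servlet (verify whether your setup answers on http or https):
# can the host resolve and reach the engine VM at all?
ping -c 3 <engine_fqdn>
# does the engine health page answer? the agent reports "failed liveliness
# check" when this does not respond even though the VM itself is up
curl -k https://<engine_fqdn>/ovirt-engine/services/health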
4 years, 9 months
disk locked after export as OVA
by adam_xu@adagene.com.cn
Hello, everyone. I tried to export a VM as an OVA and got an error in the web UI:
VDSM ovirt1.ntbaobei.com command HSMGetAllTasksStatusesVDS failed: Volume Group not big enough: (u'Not enough free extents for extending LV d9f2378f-92f0-4bc1-96a8-20a0f2c575cb/c515a3f9-0590-4ebe-81c5-9d0993f1fec9 (free=848, needed=872)',)
It seems there is not enough space in the storage domain.
I had deleted the VM which I wanted to export as an OVA, but I see a disk with id 8140d2e7-6908-438e-a646-23e58a33913e left in the storage domain, and its status is locked.
here's some log in the engine.log:
2018-09-17 14:27:03,467+08 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (EE-ManagedThreadFactory-engine-Thread-107849) [a407f6a9-445d-4ccc-b998-f9dfc6dc67ec] START, CopyImageVDSCommand( CopyImageVDSCommandParameters:{storagePoolId='79432500-ad45-11e8-98f3-00163e188641', ignoreFailoverLimit='false', storageDomainId='d9f2378f-92f0-4bc1-96a8-20a0f2c575cb', imageGroupId='ee55bbe7-8001-4c82-b1a6-0cac7710704b', imageId='701a7472-8c00-4b9f-aa87-4837eb128695', dstImageGroupId='8140d2e7-6908-438e-a646-23e58a33913e', vmId='b3aa623e-3072-430e-87d6-b81a3b07f466', dstImageId='c515a3f9-0590-4ebe-81c5-9d0993f1fec9', imageDescription='', dstStorageDomainId='d9f2378f-92f0-4bc1-96a8-20a0f2c575cb', copyVolumeType='LeafVol', volumeFormat='COW', preallocate='Sparse', postZero='false', discard='false', force='true'}), log id: 164e3046
2018-09-17 14:27:03,467+08 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (EE-ManagedThreadFactory-engine-Thread-107849) [a407f6a9-445d-4ccc-b998-f9dfc6dc67ec] ++ dstImageGUID=8140d2e7-6908-438e-a646-23e58a33913e
I think maybe some command was not executed after I exported the VM as an OVA. I found a similar case here:
https://access.redhat.com/solutions/1298893
I have used the unlock_entity.sh tool, but it cannot find any disk that is locked.
So what should I do to delete the disk that is locked?
yours Adam
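A minimal sketch of two checks that might help here, assuming a recent unlock_entity.sh (its flags differ slightly between versions; -h lists them) and that the block storage domain's VG carries the domain UUID as its name:
# on the engine: only query (do not unlock yet) which entities are still locked
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t all
# on the SPM host: check how much free space the domain's VG really has
vgs --units g d9f2378f-92f0-4bc1-96a8-20a0f2c575cb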
5 years, 1 month
Unable to find OVF_STORE after recovery / upgrade
by Sam Cappello
Hi,
so I was running a 3.4 hosted-engine two-node setup on CentOS 6, had
some disk issues, so I tried to upgrade to CentOS 7 and follow the path
3.4 > 3.5 > 3.6 > 4.0. I screwed up big time somewhere between 3.6 and
4.0, so I wiped the drives, installed a fresh 4.0.3, then created the
database and restored the 3.6 engine backup before running engine-setup
as per the docs. Things seemed to work, but I have the following
issues / symptoms:
- ovirt-ha-agent running 100% CPU on both nodes
- messages in the UI that the Hosted Engine storage Domain isn't active
and Failed to import the Hosted Engine Storage Domain
- hosted engine is not visible in the UI
and the following repeating in the agent.log:
MainThread::INFO::2016-10-03
12:38:27,718::hosted_engine::461::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 3400)
MainThread::INFO::2016-10-03
12:38:27,720::hosted_engine::466::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host vmhost1.oracool.net (id: 1, score: 3400)
MainThread::INFO::2016-10-03
12:38:37,979::states::421::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2016-10-03
12:38:37,985::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Initializing VDSM
MainThread::INFO::2016-10-03
12:38:45,645::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Connecting the storage
MainThread::INFO::2016-10-03
12:38:45,647::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
MainThread::INFO::2016-10-03
12:39:00,543::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
MainThread::INFO::2016-10-03
12:39:00,562::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Refreshing the storage domain
MainThread::INFO::2016-10-03
12:39:01,235::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Preparing images
MainThread::INFO::2016-10-03
12:39:01,236::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
Preparing images
MainThread::INFO::2016-10-03
12:39:09,295::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Reloading vm.conf from the shared storage domain
MainThread::INFO::2016-10-03
12:39:09,296::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::WARNING::2016-10-03
12:39:16,928::ovf_store::107::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Unable to find OVF_STORE
MainThread::ERROR::2016-10-03
12:39:16,934::config::235::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf
I have searched a bit and not really found a solution, and have come to
the conclusion that I have made a mess of things. I am wondering whether
the best solution is to export the VMs, reinstall everything, and then
import them back?
I am using remote NFS storage.
If I try to add the hosted-engine storage domain, it says it is already
registered.
I have also upgraded and am now running oVirt Engine version
4.0.4.4-1.el7.centos.
The hosts were installed using ovirt-node and are currently at
3.10.0-327.28.3.el7.x86_64.
If a fresh install is best, any advice / pointer to a doc that explains
the best way to do this?
I have not moved my most important server over to this cluster yet, so I
can take some downtime to reinstall.
thanks!
sam
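A minimal sketch of how one might check, via the REST API, whether the engine has actually created the OVF_STORE disks the agent is scanning for; admin@internal, PASSWORD and <engine_fqdn> are placeholders for your own credentials and engine FQDN:
# list disks whose alias is OVF_STORE; if the hosted-engine storage domain
# was never imported properly there will be none for it, which matches the
# "Unable to find OVF_STORE" warnings in agent.log
curl -k -u 'admin@internal:PASSWORD' \
  'https://<engine_fqdn>/ovirt-engine/api/disks?search=alias%3DOVF_STORE'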
5 years, 6 months
Any plans for oVirt and SBD fencing (a.k.a. poison pill)
by hunter86_bg@yahoo.com
Hello Community,
Do you have any idea whether an SBD fencing mechanism will be implemented in the near future?
It would be nice to be able to reset a host, just like the corosync/pacemaker cluster implementation does.
5 years, 6 months