[oVirt 4.3.1-RC2 Test Day] Hyperconverged HE Deployment
by Guillaume Pavese
Hi, I tried again today to deploy HE on Gluster with oVirt 4.3.1 RC2 on a
clean nested environment (no previous deployment attempts to clean up first).
Gluster was deployed without problems from Cockpit.
I then snapshotted my VMs before trying to deploy HE, first from Cockpit and
then from the command line.
Both attempts failed at the same spot:
[ INFO ] Creating Storage Domain
[ INFO ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of
steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Check local VM dir stat]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Enforce local VM dir existence]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Obtain SSO token using
username/password credentials]
[ ERROR ] ConnectionError: Error while sending HTTP request: (7, 'Failed
connect to vs-inf-int-ovt-fr-301-210.hostics.fr:443; No route to host')
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": false,
"msg": "Error while sending HTTP request: (7, 'Failed connect to
vs-inf-int-ovt-fr-301-210.hostics.fr:443; No route to host')"}
Please specify the storage you would like to use (glusterfs, iscsi, fc,
nfs)[nfs]:
[root@vs-inf-int-kvm-fr-301-210 ~]# traceroute
vs-inf-int-ovt-fr-301-210.hostics.fr
traceroute to vs-inf-int-ovt-fr-301-210.hostics.fr (192.168.122.147), 30
hops max, 60 byte packets
1 vs-inf-int-kvm-fr-301-210.hostics.fr (192.168.122.1) 3006.344 ms !H
3006.290 ms !H 3006.275 ms !H
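(For reference, a minimal sketch of the checks that could narrow this down on the host; it assumes the local bootstrap VM sits behind the default libvirt NAT network on virbr0 and that firewalld is in use, and the IP is simply the one resolved in the traceroute above:)
  virsh net-list --all        # is the 'default' libvirt NAT network active?
  ip addr show virbr0         # does the bridge hold 192.168.122.1?
  firewall-cmd --list-all     # any zone rule rejecting traffic to the local VM?
  ping -c 3 192.168.122.147   # can the host reach the local HE VM at all?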
Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
error: "cannot set lock, no free lockspace" (localized)
by Mike Lykov
Hi all. I have an HCI setup: GlusterFS 3.12, oVirt 4.2.7, 4 nodes.
Yesterday I saw 3 VMs detected by the engine as "not responding" (they are marked as HA VMs); all of them were located on the ovirtnode1 server.
Two of them were restarted by the engine on other nodes successfully, but one was not. All of them got a LOCALIZED message like 'cannot set lock: no free space on device'. What is this? Why does the engine get these errors, and why can some VMs be restarted automatically while others cannot (although that one was successfully restarted by the user via the web UI after some pause)?
Any thoughts? Full engine logs can be uploaded.
from engine.log: start event
---------------
2019-02-26 17:04:05,308+04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-59) [] VM 'd546add1-126a-4490-bc83-469bab659854'(openfire.miac) moved from 'Up' --> 'NotResponding'
2019-02-26 17:04:05,865+04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-59) [] EVENT_ID: VM_NOT_RESPONDING(126), VM openfire.miac is not responding.
2019-02-26 17:04:05,865+04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-59) [] VM '7a3af2e7-8296-4fe0-ac55-c52a4b1de93f'(e-l-k.miac) moved from 'Up' --> 'NotResponding'
2019-02-26 17:04:05,894+04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-59) [] EVENT_ID: VM_NOT_RESPONDING(126), VM e-l-k.miac is not responding.
2019-02-26 17:04:05,895+04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-59) [] VM 'de76aa6c-a211-41de-8d85-7d2821c3980d'(tsgr-mon) moved from 'Up' --> 'NotResponding'
2019-02-26 17:04:05,926+04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-59) [] EVENT_ID: VM_NOT_RESPONDING(126), VM tsgr-mon is not responding.
---------------------------
2019-02-26 17:04:22,237+04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-9) [] EVENT_ID: VM_DOWN_ERROR(119), VM openfire.miac is down with error. Exit message: VM has been terminated on the host.
....
2019-02-26 17:04:22,374+04 INFO [org.ovirt.engine.core.bll.VdsEventListener] (ForkJoinPool-1-worker-9) [] Highly Available VM went down. Attempting to restart. VM Name 'openfire.miac', VM Id 'd546add1-126a-4490-bc83-469bab659854'
...
2019-02-26 17:04:27,737+04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [] EVENT_ID: VM_DOWN_ERROR(119), VM openfire.miac is down with error. Exit message: resource busy: Failed to acquire lock: Lease is held by another host.
...
2019-02-26 17:04:28,350+04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-2886073) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM openfire.miac on Host ovirtnode6.miac
2019-02-26 17:04:31,841+04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-2) [] EVENT_ID: VM_DOWN_ERROR(119), VM openfire.miac is down with error. Exit message: resource busy: Failed to acquire lock: Lease is held by another host.
2019-02-26 17:04:31,877+04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-2886082) [] EVENT_ID: VDS_INITIATED_RUN_VM_FAILED(507), Failed to restart VM openfire.miac on Host ovirtnode6.miac
...
2019-02-26 17:04:31,994+04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-2886082) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM openfire.miac on Host ovirtnode1.miac
.....
2019-02-26 17:04:36,054+04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-9) [] EVENT_ID: VM_DOWN_ERROR(119), VM openfire.miac is down with error. Exit message: Не удалось установить блокировку: На устройстве не осталось свободного места.
2019-02-26 17:04:36,054+04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] add VM 'd546add1-126a-4490-bc83-469bab659854'(openfire.miac) to rerun treatment
2019-02-26 17:04:36,091+04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-2886083) [] EVENT_ID: VDS_INITIATED_RUN_VM_FAILED(507), Failed to restart VM openfire.miac on Host ovirtnode1.miac
-------------------
No more attempts were made for this VM (its state is now 'Down').
The engine tried to restart this VM on some other nodes, got 'Lease is held by another host' (which is normal, because the lock timeout had not expired yet?), and then got 'cannot set lock: no free space on device' (a LOCALIZED MESSAGE?? Why is this one localized while all the others are not?).
Which 'device' does it mean? How can I see how much 'free space' is left on it? The gluster volumes are almost empty:
ovirtstor1.miac:/engine 150G 7,5G 143G 5% /rhev/data-center/mnt/glusterSD/ovirtstor1.miac:_engine
ovirtnode1.miac:/data 5,9T 42G 5,9T 1% /rhev/data-center/mnt/glusterSD/ovirtnode1.miac:_data
ovirtnode1.miac:/vmstore 4,4T 194G 4,2T 5% /rhev/data-center/mnt/glusterSD/ovirtnode1.miac:_vmstore
ovirtnode1.miac:/iso_storage 600G 1,3G 599G 1% /rhev/data-center/mnt/glusterSD/ovirtnode1.miac:_iso__storage
ovirtnode1.miac:/export 5,1T 14G 5,1T 1% /rhev/data-center/mnt/glusterSD/ovirtnode1.miac:_export
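(A hedged guess: the 'device' in that message may refer to sanlock's lease area on the storage domain rather than to the gluster filesystem itself, in which case df output would not show it. A minimal sketch of what could be checked on the affected host, assuming sanlock manages the VM leases; the domain mount below is a placeholder for whichever domain holds the VM's lease:)
  sanlock client status                      # lockspaces and resources sanlock currently holds
  grep -i 'no space' /var/log/sanlock.log    # sanlock's own view of the failure
  ls -lh /rhev/data-center/mnt/glusterSD/<domain-mount>/*/dom_md/   # ids/leases/xleases files backing the locks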
Other VMs also get this error about free space:
-------------
2019-02-26 17:05:05,142+04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-11) [] EVENT_ID: VM_DOWN_ERROR(119), VM tsgr-mon is down with error. Exit message: Не удалось установить блокировку: На устройстве не осталось свободного места.
2019-02-26 17:05:43,526+04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM 'de76aa6c-a211-41de-8d85-7d2821c3980d'(tsgr-mon) moved from 'PoweringUp' --> 'Up'
2019-02-26 17:05:43,537+04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-11) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM tsgr-mon on Host ovirtnode6.miac
----------------
And that is the last message about this VM; its state is now 'Up'.
So all VMs were migrated off this node, ovirtnode1, and later in the log there are records about it being unresponsive:
2019-02-26 17:07:43,584+04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler9) [35e05907] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirtnode1.miac command GetGlusterVolumeHealInfoVDS failed: Message timeout which can be caused by communication issues
2019-02-26 17:07:43,592+04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-2886194) [35e05907] EVENT_ID: VDS_HOST_NOT_RESPONDING_CONNECTING(9,008), Host ovirtnode1.miac is not responding. It will stay in Connecting state for a grace period of 60 seconds and after that an attempt to fence the host will be issued.
2019-02-26 17:07:49,684+04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-99) [4511cab1] EVENT_ID: FENCE_OPERATION_USING_AGENT_AND_PROXY_STARTED(9,020), Executing power management status on Host ovirtnode1.miac using Proxy Host ovirtnode5.miac and Fence Agent ipmilan:172.16.16.1.
2019-02-26 17:07:49,685+04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-99) [4511cab1] START, FenceVdsVDSCommand(HostName = ovirtnode5.miac, ... action='STATUS', agent='FenceAgent:{id='338cbdd9-aab6-484c-a906-21e747a96a84', ... type='ipmilan', ip='172.16.16.1', port='null', user='admin', password='***', encryptOptions='false', options=''}', policy='null'}), log id: 281e6c
2019-02-26 17:07:49,942+04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-99) [4511cab1] FINISH, FenceVdsVDSCommand, return: FenceOperationResult:{status='SUCCESS', powerStatus='ON', message=''}, log id: 281e6c
2019-02-26 17:07:56,131+04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-99) [317316e4] EVENT_ID: SYNC_STORAGE_DEVICES_IN_HOST(4,126), Manually synced the storage devices from host ovirtnode1.miac
2019-02-26 17:07:56,550+04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-99) [3a88be4f] EVENT_ID: VDS_DETECTED(13), Status of host ovirtnode1.miac was set to Up.
2019-02-26 17:07:56,554+04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-99) [3a88be4f] EVENT_ID: VDS_FENCE_STATUS(496), Host ovirtnode1.miac power management was verified successfully.
-------------
Finally, I now tried to start the VM that had not been restarted automatically:
-------------
2019-02-27 11:23:30,139+04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-2921212) [8e405cb9-3a3a-42ab-ae05-5aaf40194b30] EVENT_ID: USER_STARTED_VM(153), VM openfire.miac was started by admin@internal-authz (Host: ovirtnode1.miac).
.....
2019-02-27 11:24:12,489+04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-1) [] VM 'd546add1-126a-4490-bc83-469bab659854'(openfire.miac) moved from 'PoweringUp' --> 'Up'
2019-02-27 11:24:12,502+04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-1) [] EVENT_ID: USER_RUN_VM(32), VM openfire.miac started on Host ovirtnode1.miac
---------------
It started without problems on the same host, ovirtnode1. Strange.
--
Mike
VM poor iops
by Leo David
Hi Everyone,
I am encountering the following issue on a single instance hyper-converged
4.2 setup.
The following fio test was done:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
The results are very poor when running the test inside a VM with a preallocated
disk on the SSD store: ~2k IOPS.
The same test done on the oVirt node directly on the mounted SSD LVM volume: ~30k IOPS.
The same test done on the gluster mount path: ~20k IOPS.
What could be the reason that the VMs see such slow disk performance (2k IOPS on
an SSD!)?
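(Not an answer, just a sketch of things that are often worth checking; the VM name is a placeholder and 'vmstore' is assumed to be the gluster volume backing the disk:)
  virsh -r dumpxml <vm-name> | grep -A2 '<driver'    # cache/io settings on the virtual disk
  gluster volume info vmstore                        # is the virt/oVirt option group applied to the volume?
  gluster volume get vmstore all | grep -E 'remote-dio|strict-o-direct'   # direct-IO options commonly discussed for VM workloads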
Thank you very much !
--
Best regards, Leo David
oVirt - Unable to delete VM snapshots
by dean@datacom.ca
Unable to delete VM snapshots. Error messages and log info below.
Running oVirt 4.2.8.2-1.el7.
I haven't been able to find good documentation on how to fix this. Can someone point me in the right direction?
Thank you,
... Dean
Feb 28, 2019, 3:32:34 PM
Failed to delete snapshot 'Centos7_20190120' for VM 'n2_Centos7'.
Feb 28, 2019, 3:32:28 PM
VDSM n1 command HSMGetAllTasksStatusesVDS failed: Volume does not exist: (u'b2391a0b-c9e5-4dfe-9551-1e1f25c5c622',)
Feb 28, 2019, 3:32:22 PM
VDSM n2 command MergeVDS failed: Merge failed
The VM has snapshot(s) with disk(s) in illegal status. Please don't shutdown the vm before successfully retrying the snapshot delete.
engine.log
2019-02-28 15:32:33,420-05 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [f702317d-87cc-41dd-bcb2-12c6a2a93f10] Merging of snapshot '3ddab9e6-a6ce-40d6-9ec6-815251471c5f' images 'fb974fb8-99d3-4a69-9f4e-096e30b85749'..'b2391a0b-c9e5-4dfe-9551-1e1f25c5c622' failed. Images have been marked illegal and can no longer be previewed or reverted to. Please retry Live Merge on the snapshot to complete the operation.
2019-02-28 15:32:33,429-05 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [f702317d-87cc-41dd-bcb2-12c6a2a93f10] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand' with failure.
2019-02-28 15:32:33,578-05 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [f702317d-87cc-41dd-bcb2-12c6a2a93f10] Command 'RemoveSnapshot' id: '2243e15a-2569-498c-b0f8-de345321ba2f' child commands '[2e8dccfe-1c17-40ec-9f98-84f61863e450]' executions were completed, status 'FAILED'
2019-02-28 15:32:34,773-05 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [f702317d-87cc-41dd-bcb2-12c6a2a93f10] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
2019-02-28 15:32:35,023-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [f702317d-87cc-41dd-bcb2-12c6a2a93f10] EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Failed to delete snapshot 'Centos7_20190120' for VM 'n2_Centos7'.
vdsm.log
2019-02-28 15:32:22,251-0500 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.02 seconds (__init__:573)
2019-02-28 15:32:22,326-0500 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_caps/50_openstacknet: rc=0 err= (hooks:110)
2019-02-28 15:32:22,449-0500 INFO (jsonrpc/6) [api.host] START getAllVmStats() from=::ffff:192.168.1.10,48074 (api:46)
2019-02-28 15:32:22,452-0500 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.1.10,48074 (api:52)
2019-02-28 15:32:22,453-0500 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573)
2019-02-28 15:32:22,669-0500 INFO (jsonrpc/4) [api.virt] START merge(drive={u'imageID': u'96fbec76-c518-4010-8988-46b3c3172f09', u'volumeID': u'f7f1a3ec-d936-4482-8ec9-af2b3990b111', u'domainID': u'34af4473-0779-42a0-8b97-e1ccc725fd60', u'poolID': u'5a3025a3-005f-03d4-0333-000000000382'}, baseVolUUID=u'fb974fb8-99d3-4a69-9f4e-096e30b85749', topVolUUID=u'b2391a0b-c9e5-4dfe-9551-1e1f25c5c622', bandwidth=u'0', jobUUID=u'0b551b37-a6ee-4ac7-a004-2d410eaa8304') from=::ffff:192.168.1.10,48074, flow_id=f702317d-87cc-41dd-bcb2-12c6a2a93f10, vmId=a9082c6d-038e-4eb5-8436-69f93839a2ec (api:46)
2019-02-28 15:32:22,693-0500 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_caps/openstacknet_utils.py: rc=0 err= (hooks:110)
2019-02-28 15:32:22,730-0500 ERROR (jsonrpc/4) [virt.vm] (vmId='a9082c6d-038e-4eb5-8436-69f93839a2ec') merge: Cannot find volume b2391a0b-c9e5-4dfe-9551-1e1f25c5c622 in drive vda's volume chain (vm:5944)
2019-02-28 15:32:22,731-0500 INFO (jsonrpc/4) [api.virt] FINISH merge return={'status': {'message': 'Merge failed', 'code': 52}} from=::ffff:192.168.1.10,48074, flow_id=f702317d-87cc-41dd-bcb2-12c6a2a93f10, vmId=a9082c6d-038e-4eb5-8436-69f93839a2ec (api:52)
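(A sketch of a first diagnostic step, using only the UUIDs that appear in the vdsm.log above; it only reads state and changes nothing. The assumption is that the chain the engine expects no longer matches the qcow2 chain on the host; the storage mount and active volume below are placeholders:)
  vdsm-tool dump-volume-chains 34af4473-0779-42a0-8b97-e1ccc725fd60   # volume chains as VDSM sees them on that storage domain
  qemu-img info --backing-chain \
    /rhev/data-center/mnt/<storage-mount>/34af4473-0779-42a0-8b97-e1ccc725fd60/images/96fbec76-c518-4010-8988-46b3c3172f09/<active-volume>   # chain as qemu sees it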
non-redhat version of ovirt-host-deploy
by ivan.kabaivanov@gmail.com
Hi,
I'm trying to get oVirt working on Linux From Scratch. So far so good: I can build and install it, and engine-setup also works fine. Now I'm at the last stage, configuring it, and this is where I hit serious trouble: ovirt-host-deploy requires dnf/yum/rpm, but since I'm not running a Red Hat-based distro, I have none of those.
Short of reading the source code, what are the steps I need to manually perform to get a working host?
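(Not authoritative, just a rough sketch of the core of what ovirt-host-deploy automates once the packages are in place; it assumes vdsm and its dependencies are already built and installed from source, and it leaves out PKI enrollment, which the engine drives during "Add Host":)
  vdsm-tool configure --force                        # apply the libvirt/sanlock/multipath configuration vdsm needs
  systemctl enable --now libvirtd vdsmd supervdsmd   # start the daemons the engine talks to
  # the engine must then be able to reach vdsmd on TCP port 54321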
Thanks,
IvanK.
reinstall of CentOS node (oVirt 4.2.7) fails due to missing dependency on librbd1
by Mike Lykov
Hi All!
I'm using ovirt-release42-4.2.7.1-1.el7.noarch
centos-release-7-5.1804.5.el7.centos.x86_64
And HCI glusterfs deployment (no ceph).
Yesterday I wanted to add hosted engine deployment to one node.
That requires putting the node into maintenance mode and reinstalling it with the "hosted engine = deploy" option (in the web UI).
I did that, but the reinstall failed. In the logs I see (not the full list):
...
Feb 27 16:35:39 Updated: libvirt-daemon-4.5.0-10.el7_6.4.x86_64
Feb 27 16:35:39 Updated: libvirt-daemon-driver-storage-core-4.5.0-10.el7_6.4.x86_64
Feb 27 16:36:04 Updated: libvirt-daemon-driver-storage-rbd-4.5.0-10.el7_6.4.x86_64
Feb 27 16:36:04 Updated: libvirt-daemon-driver-storage-disk-4.5.0-10.el7_6.4.x86_64
Feb 27 16:36:04 Updated: libvirt-daemon-driver-storage-4.5.0-10.el7_6.4.x86_64
After that:
# rpm -q librbd1
librbd1-0.94.5-2.el7.x86_64
фев 27 16:36:51 ovirtnode5.miac systemd[1]: Starting Virtualization daemon...
фев 27 16:36:51 ovirtnode5.miac libvirtd[537]: 2019-02-27 12:36:51.338+0000: 537: info : libvirt version: 4.5.0, package: 10.el7_6.4 (CentOS BuildSystem <http://bugs.centos.org>, 2019-01-29-17:31:22, x86-01.bsys.centos.org)
фев 27 16:36:51 ovirtnode5.miac libvirtd[537]: 2019-02-27 12:36:51.338+0000: 537: info : hostname: ovirtnode5.miac
фев 27 16:36:51 ovirtnode5.miac libvirtd[537]: 2019-02-27 12:36:51.338+0000: 537: error : virModuleLoadFile:53 : внутренняя ошибка: Failed to load module '/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so': /usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so: undefined symbol: rbd_diff_iterate2
фев 27 16:36:51 ovirtnode5.miac systemd[1]: libvirtd.service: main process exited, code=exited, status=3/NOTIMPLEMENTED
фев 27 16:36:51 ovirtnode5.miac systemd[1]: Failed to start Virtualization daemon.
There are old bugs like
https://bugzilla.redhat.com/show_bug.cgi?id=1316911
and newer posts like
http://dreamcloud.artark.ca/libvirtd-failure-after-latest-upgrade-in-rhel...
Obviously, the libvirt* RPMs were updated but librbd* was not. Why? Why don't the libvirt packages strictly depend on all the required package/library versions?
libvirt-daemon-driver-storage-rbd should require exactly the librbd version it can work with, I think.
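(A sketch of one way to confirm and work around this, assuming the enabled repos actually carry a newer librbd1 matching what the rebuilt rbd driver was linked against:)
  rpm -q --requires libvirt-daemon-driver-storage-rbd | grep -i rbd   # likely only the soname, with no minimum version
  yum list available librbd1 --showduplicates                         # is a newer librbd1 offered at all?
  yum update librbd1                                                  # pull it in manually, then restart libvirtd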
--
Mike
Feature: Hosted engine VM management
by Roy Golan
Hi all,
Upcoming in 3.6 is an enhancement for managing the hosted engine VM.
In short, we want to:
* Allow editing the Hosted Engine VM, its storage domain, disks, networks, etc.
* Have a shared configuration for the hosted engine VM
* Have a backup for the hosted engine VM configuration
please review and comment on the wiki below:
http://www.ovirt.org/Hosted_engine_VM_management
Thanks,
Roy
Facing issues with setting up a shared data center
by Jinesh Ks
Host Node3 cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center Default. Setting Host state to Non-Operational.
VDSM Node3 command ConnectStoragePoolVDS failed: Cannot find master domain: u'spUUID=a171b7d6-2c10-11e9-acee-10604b98fee8, msdUUID=bb73461e-02a2-45a6-85fa-5e39d860485c'
The error message for connection 10.0.10.11:/exports/data returned by VDSM was: Problem while trying to mount target
-----------
We have 3 nodes, using NFS for the storage domains, in a shared data center.
Can anybody help me?
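(A minimal sketch of checks that might isolate the mount failure, run directly on Node3; the IP and export path are the ones from the messages above, and the mount point is just a temporary directory:)
  showmount -e 10.0.10.11                                  # is the export visible from this host?
  mkdir -p /tmp/nfs-test && mount -t nfs 10.0.10.11:/exports/data /tmp/nfs-test
  ls -ln /tmp/nfs-test                                     # oVirt expects the export to be owned by 36:36 (vdsm:kvm)
  umount /tmp/nfs-test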