did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266
by kelley bryan
I am seeing the following error message in the ovirt-hosted-engine-setup-ansible-create_target_vm log:
{2020-05-06 14:15:30,024-0500 ERROR ansible failed {'status': 'FAILED', 'ansible_type': 'task', 'ansible_task': u"Fail if Engine IP is different from engine's he_fqdn resolved IP", 'ansible_result': u'type: <type \'dict\'>\nstr: {\'msg\': u"Engine VM IP address is while the engine\'s he_fqdn ovirt1-engine.kelleykars.org resolves to 192.168.122.2. If you are using DHCP, check your DHCP reservation configuration", \'changed\': False, \'_ansible_no_log\': False}', 'task_duration': 1, 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}}
Bug 1590266 says it should report the engine VM IP address xxx.xxx.xxx.xxx while the engine's he_fqdn is xxxxxxxxx.
I need to see what it thinks is wrong, as both dig on the engine FQDN and dig -x on the IP return the correct information.
This bug looks like it may be in play, but I don't see the failed readiness check in this log: https://access.redhat.com/solutions/4462431
Or is it because the VM fails or dies, or something else?
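Reading the quoted error closely, the message says "Engine VM IP address is while..." with nothing between "is" and "while" — so the installer appears to have seen an empty VM IP rather than a wrong one. A minimal sketch of that comparison (the function name and inputs are illustrative, not the installer's actual code):

```shell
# check_dns_match: given the IP the engine FQDN resolves to and the IP
# the engine VM actually reported, print whether they agree.
check_dns_match() {
  resolved="$1"
  vm_ip="$2"
  if [ -z "$vm_ip" ]; then
    echo "VM reported no IP (empty) - DHCP lease likely missing"
  elif [ "$resolved" = "$vm_ip" ]; then
    echo "match"
  else
    echo "mismatch: $resolved vs $vm_ip"
  fi
}
check_dns_match 192.168.122.2 ""               # reproduces the empty-IP case
check_dns_match 192.168.122.2 192.168.122.2    # prints "match"
```

If the VM's IP really is empty, correct DNS won't help; the VM's DHCP lease would be the first place to look.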
2 years, 11 months
Lots of storage.MailBox.SpmMailMonitor
by Fabrice Bacchella
My vdsm log files are huge:
-rw-r--r-- 1 vdsm kvm 1.8G Nov 22 11:32 vdsm.log
And this is just half an hour of logs:
$ head -1 vdsm.log
2018-11-22 11:01:12,132+0100 ERROR (mailbox-spm) [storage.MailBox.SpmMailMonitor] mailbox 2 checksum failed, not clearing mailbox, clearing new mail (data='...lots of data', expected='\xa4\x06\x08\x00') (mailbox:612)
I just upgraded vdsm:
$ rpm -qi vdsm
Name : vdsm
Version : 4.20.43
2 years, 11 months
How to renew vmconsole-proxy* certificates
by capelle@labri.fr
Hi,
For a few weeks now, we have not been able to connect to the vmconsole proxy:
$ ssh -t -p 2222 ovirt-vmconsole@ovirt
ovirt-vmconsole@ovirt: Permission denied (publickey).
Last successful login record: Mar 29 11:31:32
First login failure record: Mar 31 17:28:51
We tracked the issue to the following log in /var/log/ovirt-engine/engine.log:
ERROR [org.ovirt.engine.core.services.VMConsoleProxyServlet] (default task-11) [] Error validating ticket: : sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Indeed, certificate /etc/pki/ovirt-engine/certs/vmconsole-proxy-helper.cer and others did expire:
--
# grep 'Not After' /etc/pki/ovirt-engine/certs/vmconsole-proxy-*
/etc/pki/ovirt-engine/certs/vmconsole-proxy-helper.cer: Not After : Mar 31 13:18:44 2021 GMT
/etc/pki/ovirt-engine/certs/vmconsole-proxy-host.cer: Not After : Mar 31 13:18:44 2021 GMT
/etc/pki/ovirt-engine/certs/vmconsole-proxy-user.cer: Not After : Mar 31 13:18:44 2021 GMT
--
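For reference, the same expiry check can be done with openssl instead of grep (paths as in the listing above; cert_expiry is just an illustrative helper):

```shell
# Print the Not After date of each vmconsole-proxy certificate.
cert_expiry() { openssl x509 -enddate -noout -in "$1"; }
for c in /etc/pki/ovirt-engine/certs/vmconsole-proxy-*.cer; do
  [ -e "$c" ] && echo "$c: $(cert_expiry "$c")"
done
```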
But we did not manage to find out how to renew them. Any advice?
--
Benoît
3 years, 1 month
Snapshot and disk size allocation
by jorgevisentini@gmail.com
Hello everyone.
I would like to know how disk size and snapshot allocation work, because every time I create a new snapshot it adds 1 GB to the VM's disk size, and when I remove the snapshot that space is not returned to the storage domain.
I'm using oVirt 4.3.10.
How do I reprovision the VM disk?
Thank you all.
3 years, 1 month
fresh hyperconverged Gluster setup failed in ovirt 4.4.8
by dhanaraj.ramesh@yahoo.com
Hi Team
I'm trying to set up a 3-node Gluster + oVirt environment with the latest stable 4.4.8 release, but while deploying Gluster from Cockpit I get the error below. What could be the reason?
TASK [gluster.infra/roles/backend_setup : Set Gluster specific SeLinux context on the bricks] ***
failed: [beclovkvma03.bec.lab] (item={'path': '/gluster_bricks/engine', 'lvname': 'gluster_lv_engine', 'vgname': 'gluster_vg_sde'}) => {"ansible_loop_var": "item", "changed": false, "item": {"lvname": "gluster_lv_engine", "path": "/gluster_bricks/engine", "vgname": "gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [beclovkvma01.bec.lab] (item={'path': '/gluster_bricks/engine', 'lvname': 'gluster_lv_engine', 'vgname': 'gluster_vg_sde'}) => {"ansible_loop_var": "item", "changed": false, "item": {"lvname": "gluster_lv_engine", "path": "/gluster_bricks/engine", "vgname": "gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [beclovkvma02.bec.lab] (item={'path': '/gluster_bricks/engine', 'lvname': 'gluster_lv_engine', 'vgname': 'gluster_vg_sde'}) => {"ansible_loop_var": "item", "changed": false, "item": {"lvname": "gluster_lv_engine", "path": "/gluster_bricks/engine", "vgname": "gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [beclovkvma03.bec.lab] (item={'path': '/gluster_bricks/data', 'lvname': 'gluster_lv_data', 'vgname': 'gluster_vg_sde'}) => {"ansible_loop_var": "item", "changed": false, "item": {"lvname": "gluster_lv_data", "path": "/gluster_bricks/data", "vgname": "gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [beclovkvma01.bec.lab] (item={'path': '/gluster_bricks/data', 'lvname': 'gluster_lv_data', 'vgname': 'gluster_vg_sde'}) => {"ansible_loop_var": "item", "changed": false, "item": {"lvname": "gluster_lv_data", "path": "/gluster_bricks/data", "vgname": "gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [beclovkvma02.bec.lab] (item={'path': '/gluster_bricks/data', 'lvname': 'gluster_lv_data', 'vgname': 'gluster_vg_sde'}) => {"ansible_loop_var": "item", "changed": false, "item": {"lvname": "gluster_lv_data", "path": "/gluster_bricks/data", "vgname": "gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [beclovkvma03.bec.lab] (item={'path': '/gluster_bricks/vmstore', 'lvname': 'gluster_lv_vmstore', 'vgname': 'gluster_vg_sde'}) => {"ansible_loop_var": "item", "changed": false, "item": {"lvname": "gluster_lv_vmstore", "path": "/gluster_bricks/vmstore", "vgname": "gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [beclovkvma01.bec.lab] (item={'path': '/gluster_bricks/vmstore', 'lvname': 'gluster_lv_vmstore', 'vgname': 'gluster_vg_sde'}) => {"ansible_loop_var": "item", "changed": false, "item": {"lvname": "gluster_lv_vmstore", "path": "/gluster_bricks/vmstore", "vgname": "gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [beclovkvma02.bec.lab] (item={'path': '/gluster_bricks/vmstore', 'lvname': 'gluster_lv_vmstore', 'vgname': 'gluster_vg_sde'}) => {"ansible_loop_var": "item", "changed": false, "item": {"lvname": "gluster_lv_vmstore", "path": "/gluster_bricks/vmstore", "vgname": "gluster_vg_sde"}, "msg": "ValueError: Type glusterd_brick_t is invalid, must be a file or device type\n"}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
beclovkvma01.bec.lab : ok=53 changed=14 unreachable=0 failed=1 skipped=116 rescued=0 ignored=1
beclovkvma02.bec.lab : ok=52 changed=13 unreachable=0 failed=1 skipped=116 rescued=0 ignored=1
beclovkvma03.bec.lab : ok=52 changed=13 unreachable=0 failed=1 skipped=116 rescued=0 ignored=1
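The failure message means SELinux rejected glusterd_brick_t as an unknown type. A hedged diagnostic (seinfo comes from the setools-console package; a missing glusterfs SELinux policy package is one plausible cause, not a confirmed one):

```shell
# check_selinux_type: report whether a given SELinux type is defined
# in the loaded policy on this host.
check_selinux_type() {
  if command -v seinfo >/dev/null 2>&1 && seinfo -t "$1" >/dev/null 2>&1; then
    echo "$1: defined in local policy"
  else
    echo "$1: not found (or seinfo unavailable)"
  fi
}
check_selinux_type glusterd_brick_t
```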
3 years, 2 months
UEFI Guest can only be started on UEFI host (4.4)
by nroach44@nroach44.id.au
Hi All,
A problem I've just "dealt with" over the past few months is that the two UEFI VMs I have installed (one Windows 10, one RHEL 8) will only start on oVirt Node hosts (4.4.x; still an issue on 4.4.8) that were themselves installed in UEFI mode.
In the case of both guests, they will "start" but get stuck on a small 640x480-ish black screen, with no CPU or disk activity. It looks as if the VM has been started with "Start paused" enabled, but the VM is not paused. I've noticed that this matches the normal startup of the guest, although it only spends a second or two like that before TianoCore takes over.
Occasionally, I'm able to migrate the VM to a BIOS host. When it fails, the following is seen on the /sending/ host:
2021-09-21 20:09:42,915+0800 ERROR (migsrc/86df93bc) [virt.vm] (vmId='86df93bc-3304-4002-8939-cbefdea4cc60') internal error: qemu unexpectedly closed the monitor: 2021-09-21T12:08:57.355188Z qemu-kvm: warning: Spice: reds.c:2305:reds_handle_read_link_done: spice channels 1 should be encrypted
2021-09-21T12:08:57.393585Z qemu-kvm: warning: Spice: reds.c:2305:reds_handle_read_link_done: spice channels 3 should be encrypted
2021-09-21T12:08:57.393805Z qemu-kvm: warning: Spice: reds.c:2305:reds_handle_read_link_done: spice channels 4 should be encrypted
2021-09-21T12:08:57.393960Z qemu-kvm: warning: Spice: reds.c:2305:reds_handle_read_link_done: spice channels 2 should be encrypted
2021-09-21T12:09:40.799119Z qemu-kvm: warning: TSC frequency mismatch between VM (3099980 kHz) and host (3392282 kHz), and TSC scaling unavailable
2021-09-21T12:09:40.799228Z qemu-kvm: error: failed to set MSR 0x204 to 0x1000000000
qemu-kvm: ../target/i386/kvm/kvm.c:2778: kvm_buf_set_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed. (migration:331)
2021-09-21 20:09:42,938+0800 INFO (migsrc/86df93bc) [virt.vm] (vmId='86df93bc-3304-4002-8939-cbefdea4cc60') Switching from State.STARTED to State.FAILED (migration:234)
2021-09-21 20:09:42,938+0800 ERROR (migsrc/86df93bc) [virt.vm] (vmId='86df93bc-3304-4002-8939-cbefdea4cc60') Failed to migrate (migration:503)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 477, in _regular_run
time.time(), machineParams
File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 578, in _startUnderlyingMigration
self._perform_with_conv_schedule(duri, muri)
File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 667, in _perform_with_conv_schedule
self._perform_migration(duri, muri)
File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 596, in _perform_migration
self._migration_flags)
File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 159, in call
return getattr(self._vm._dom, name)(*a, **kw)
File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f
ret = attr(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2126, in migrateToURI3
raise libvirtError('virDomainMigrateToURI3() failed')
libvirt.libvirtError: internal error: qemu unexpectedly closed the monitor: 2021-09-21T12:08:57.355188Z qemu-kvm: warning: Spice: reds.c:2305:reds_handle_read_link_done: spice channels 1 should be encrypted
2021-09-21T12:08:57.393585Z qemu-kvm: warning: Spice: reds.c:2305:reds_handle_read_link_done: spice channels 3 should be encrypted
2021-09-21T12:08:57.393805Z qemu-kvm: warning: Spice: reds.c:2305:reds_handle_read_link_done: spice channels 4 should be encrypted
2021-09-21T12:08:57.393960Z qemu-kvm: warning: Spice: reds.c:2305:reds_handle_read_link_done: spice channels 2 should be encrypted
2021-09-21T12:09:40.799119Z qemu-kvm: warning: TSC frequency mismatch between VM (3099980 kHz) and host (3392282 kHz), and TSC scaling unavailable
2021-09-21T12:09:40.799228Z qemu-kvm: error: failed to set MSR 0x204 to 0x1000000000
qemu-kvm: ../target/i386/kvm/kvm.c:2778: kvm_buf_set_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.
The receiving host simply sees
2021-09-21 20:09:42,840+0800 INFO (libvirt/events) [virt.vm] (vmId='86df93bc-3304-4002-8939-cbefdea4cc60') underlying process disconnected (vm:1135)
2021-09-21 20:09:42,840+0800 INFO (libvirt/events) [virt.vm] (vmId='86df93bc-3304-4002-8939-cbefdea4cc60') Release VM resources (vm:5325)
2021-09-21 20:09:42,840+0800 INFO (libvirt/events) [virt.vm] (vmId='86df93bc-3304-4002-8939-cbefdea4cc60') Stopping connection (guestagent:438)
2021-09-21 20:09:42,840+0800 INFO (libvirt/events) [vdsm.api] START teardownImage(sdUUID='3f46f0f3-1cbb-4154-8af5-dcc3a09c6177', spUUID='924e5fbe-beba-11ea-b679-00163e03ad3e', imgUUID='d91282d3-2552-44d3-aa0f-84f7330be4ce', volUUID=None) from=internal, task_id=51eb32fc-1167-4c4c-bea8-4664c92d15e9 (api:48)
2021-09-21 20:09:42,841+0800 INFO (libvirt/events) [storage.StorageDomain] Removing image rundir link '/run/vdsm/storage/3f46f0f3-1cbb-4154-8af5-dcc3a09c6177/d91282d3-2552-44d3-aa0f-84f7330be4ce' (fileSD:601)
2021-09-21 20:09:42,841+0800 INFO (libvirt/events) [vdsm.api] FINISH teardownImage return=None from=internal, task_id=51eb32fc-1167-4c4c-bea8-4664c92d15e9 (api:54)
2021-09-21 20:09:42,841+0800 INFO (libvirt/events) [virt.vm] (vmId='86df93bc-3304-4002-8939-cbefdea4cc60') Stopping connection (guestagent:438)
2021-09-21 20:09:42,841+0800 INFO (libvirt/events) [vdsm.api] START inappropriateDevices(thiefId='86df93bc-3304-4002-8939-cbefdea4cc60') from=internal, task_id=1e3aafc2-62c7-4fe5-a807-69942709e936 (api:48)
2021-09-21 20:09:42,842+0800 INFO (libvirt/events) [vdsm.api] FINISH inappropriateDevices return=None from=internal, task_id=1e3aafc2-62c7-4fe5-a807-69942709e936 (api:54)
2021-09-21 20:09:42,847+0800 WARN (vm/86df93bc) [virt.vm] (vmId='86df93bc-3304-4002-8939-cbefdea4cc60') Couldn't destroy incoming VM: Domain not found: no domain with matching uuid '86df93bc-3304-4002-8939-cbefdea4cc60' (vm:4073)
2021-09-21 20:09:42,847+0800 INFO (vm/86df93bc) [virt.vm] (vmId='86df93bc-3304-4002-8939-cbefdea4cc60') Changed state to Down: VM destroyed during the startup (code=10) (vm:1921)
2021-09-21 20:09:42,849+0800 INFO (vm/86df93bc) [virt.vm] (vmId='86df93bc-3304-4002-8939-cbefdea4cc60') Stopping connection (guestagent:438)
2021-09-21 20:09:42,856+0800 INFO (jsonrpc/3) [api.virt] START destroy(gracefulAttempts=1) from=::ffff:10.1.2.30,59424, flow_id=47e0a91b, vmId=86df93bc-3304-4002-8939-cbefdea4cc60 (api:48)
2021-09-21 20:09:42,917+0800 INFO (jsonrpc/5) [api.virt] START destroy(gracefulAttempts=1) from=::ffff:10.1.2.7,50798, vmId=86df93bc-3304-4002-8939-cbefdea4cc60 (api:48)
The data center is configured with BIOS as the default.
As an aside, *all* hosts have the following kernel command line set (to allow nested virtualization):
intel_iommu=on kvm-intel.nested=1 kvm.ignore_msrs=1
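A quick way to confirm those flags actually took effect on each host (flag_status is an illustrative helper; it just greps a kernel command line, normally /proc/cmdline):

```shell
# flag_status: report whether a given flag appears in a kernel
# command line file.
flag_status() {
  # $1 = flag to look for, $2 = file holding the kernel command line
  if grep -qF "$1" "$2"; then echo "$1: set"; else echo "$1: NOT set"; fi
}
for f in intel_iommu=on kvm-intel.nested=1 kvm.ignore_msrs=1; do
  flag_status "$f" /proc/cmdline
done
```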
Any suggestions?
3 years, 2 months
HA VM and vm leases usage with site failure
by Gianluca Cecchi
Hello,
supposing latest 4.4.7 environment installed with an external engine and
two hosts, one in one site and one in another site.
For storage I have one FC storage domain.
I try to simulate a sort of "site failure scenario" to see what kind of HA
I should expect.
The 2 hosts have power mgmt configured through fence_ipmilan.
I have 2 VMs, one configured as HA with lease on storage (Resume Behavior:
kill) and one not marked as HA.
Initially host1 is SPM and it is the host that runs the two VMs.
Fencing of host1 from host2 initially works ok. I can test also from
command line:
# fence_ipmilan -a 10.10.193.152 -P -l my_fence_user -A password -L
operator -S /usr/local/bin/pwd.sh -o status
Status: ON
On host2 I then prevent reaching host1 iDRAC:
firewall-cmd --direct --add-rule ipv4 filter OUTPUT 0 -d 10.10.193.152 -p
udp --dport 623 -j DROP
firewall-cmd --direct --add-rule ipv4 filter OUTPUT 1 -j ACCEPT
so that:
# fence_ipmilan -a 10.10.193.152 -P -l my_fence_user -A password -L
operator -S /usr/local/bin/pwd.sh -o status
2021-08-05 15:06:07,254 ERROR: Failed: Unable to obtain correct plug status
or plug is not available
On host1 I generate panic:
# date ; echo 1 > /proc/sys/kernel/sysrq ; echo c > /proc/sysrq-trigger
Thu Aug 5 15:06:24 CEST 2021
host1 correctly completes its crash dump (kdump integration is enabled) and
reboots, but I stop it at the grub prompt so that, from host2's point of
view, host1 is unreachable and its power state cannot be determined.
At this point I thought that the VM lease functionality would come into
play and host2 would be able to restart the HA VM, as it can see that the
lease is not held by the other host and so it can acquire the lock
itself....
Instead, host2 loops through power-fencing attempts.
I wait about 25 minutes without any effect, only continuous attempts.
After 2 minutes host2 correctly becomes SPM and the VMs are marked as unknown.
At a certain point after the failures in power fencing host1, I see the
event:
Failed to power fence host host1. Please check the host status and its
power management settings, and then manually reboot it and click "Confirm
Host Has Been Rebooted"
If I select host and choose "Confirm Host Has Been Rebooted", then the two
VMs are marked as down and the HA one is correctly booted by host2.
But this requires my manual intervention.
Is the behavior above the expected one, or should the use of VM leases
have allowed host2 to bypass its inability to fence and start the HA VM
with the lease? Otherwise I don't understand the reason to have the lease
at all....
Thanks,
Gianluca
3 years, 2 months
Re: About the vm memory limit
by Tommy Sway
Transparent huge pages are not used on either the VM or the physical host.
But can I enable hugepages on the virtual machine without enabling them on the physical host?
A database runs on the VM, and it needs hugepages configured.
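For what it's worth, oVirt can enable hugepages per VM (via a VM custom property, with the page size given in KiB) independently of the host's THP setting — check the documentation for your exact version. The sizing arithmetic for a database SGA might look like this (pages_for_sga is an illustrative helper, not an oVirt API):

```shell
# pages_for_sga: number of 2 MiB hugepages needed to back a given
# SGA size in MiB (rounds up; purely illustrative arithmetic).
pages_for_sga() {
  sga_mib=$1
  page_mib=2
  echo $(( (sga_mib + page_mib - 1) / page_mib ))
}
pages_for_sga 4096    # a 4 GiB SGA needs 2048 x 2 MiB pages
```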
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Saturday, September 25, 2021 5:32 PM
To: Tommy Sway <sz_cuitao(a)163.com>
Subject: Re: [ovirt-users] About the vm memory limit
It depends on the numa configuration of the host.
If you have 256G per CPU, it's best to stay into that range.
Also, consider disabling transparent huge pages on the host & VM.
Since 4.4, regular huge pages (do not confuse them with THP) can be used on the hypervisors, while on 4.3 there were some issues, but I can't provide any details.
Best Regards,
Strahil Nikolov
On Fri, Sep 24, 2021 at 6:40, Tommy Sway
<sz_cuitao(a)163.com <mailto:sz_cuitao@163.com> > wrote:
I would like to ask whether there is any limit on the memory size of virtual machines, or a performance curve, or something like that.
As long as there is memory on the physical machine, the more virtual machines the better?
In our usage scenario, there are many virtual machines running databases, and their memory requirements vary greatly.
For some virtual machines 4 GB of memory is enough, while others need 64 GB.
I want to know what the best memory configuration for a virtual machine is: since a VM is just a QEMU process on the physical machine, I worry that it does not use memory as efficiently as a physical machine would. Understanding this would let us develop guidelines for optimal memory usage for virtual machines.
Thank you!
_______________________________________________
Users mailing list -- users(a)ovirt.org <mailto:users@ovirt.org>
To unsubscribe send an email to users-leave(a)ovirt.org <mailto:users-leave@ovirt.org>
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y6XDOIMKCP4...
3 years, 2 months
Re: oVirt 4.3 DWH with Grafana
by Tommy Sway
Thank you!
I tried it, and I can import the JSON after replacing 4.4 with 4.3, but I still get some errors like this:
pq: column "count_threads_as_cores" does not exist
Object
status:400
statusText:"Bad Request"
data:Object
results:Object
message:"pq: column "count_threads_as_cores" does not exist"
Many of the other reports worked fine, with only a few similar errors.
Is this caused by a version incompatibility?
After all, 4.4 in the original document was replaced with 4.3 before the import.
From: Matthew.Stier(a)fujitsu.com <Matthew.Stier(a)fujitsu.com>
Sent: Wednesday, September 15, 2021 3:28 AM
To: Tommy Sway <sz_cuitao(a)163.com>; 'Michal Gutowski' <michal.gutowski(a)oracle.com>; 'Vrgotic, Marko' <M.Vrgotic(a)activevideo.com>
Cc: users(a)ovirt.org
Subject: RE: [ovirt-users] Re: oVirt 4.3 DWH with Grafana
The instructions to modify the json files are missing. (Use find and sed to change all instances of v4_4_ to v4_3_ before importing them into Grafana.)
This is from an Oracle blog on doing this with OLVM 4.3 (which is basically repackaged oVirt 4.3):
Build Grafana Dashboard for Oracle Linux Virtualization Manager 4.3 <https://blogs.oracle.com/scoter/post/build-grafana-dashboard-for-oracle-l...>
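The find+sed pass described above might look like this, demonstrated on a throwaway copy (a real run would target the directory holding the downloaded dashboard JSON files; GNU sed's -i is assumed):

```shell
# Rewrite every v4_4_ view reference to v4_3_ in dashboard JSON files.
mkdir -p /tmp/dash-demo
printf '"v4_4_latest_configuration_datacenters"\n' > /tmp/dash-demo/sample.json
find /tmp/dash-demo -name '*.json' -exec sed -i 's/v4_4_/v4_3_/g' {} +
cat /tmp/dash-demo/sample.json   # now reads "v4_3_latest_configuration_datacenters"
```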
From: Tommy Sway <sz_cuitao(a)163.com <mailto:sz_cuitao@163.com> >
Sent: Tuesday, September 14, 2021 3:01 AM
To: 'Michal Gutowski' <michal.gutowski(a)oracle.com <mailto:michal.gutowski@oracle.com> >; 'Vrgotic, Marko' <M.Vrgotic(a)activevideo.com <mailto:M.Vrgotic@activevideo.com> >
Cc: users(a)ovirt.org <mailto:users@ovirt.org>
Subject: [ovirt-users] Re: oVirt 4.3 DWH with Grafana
The oVirt version I'm using is 4.3, and I get an error when I import the JSON:
pq: relation "v4_4_latest_configuration_datacenters" does not exist.
From: users-bounces(a)ovirt.org <mailto:users-bounces@ovirt.org> <users-bounces(a)ovirt.org <mailto:users-bounces@ovirt.org> > On Behalf Of Michal Gutowski
Sent: Wednesday, November 25, 2020 12:26 AM
To: Vrgotic, Marko <M.Vrgotic(a)activevideo.com <mailto:M.Vrgotic@activevideo.com> >
Cc: users(a)ovirt.org <mailto:users@ovirt.org>
Subject: [ovirt-users] Re: oVirt 4.3 DWH with Grafana
Hi Marko,
I've tested this myself, as I like playing with various Grafana use cases, and the following steps let you set up Grafana monitoring for your oVirt 4.3 environment and reuse all the Grafana dashboards from the latest oVirt 4.4 on a previous release.
1. Allowing Grafana to connect to oVirt DWH database (Data Warehouse)
Log in to the oVirt 4.3 engine machine and create a user "grafana" with password "grafana" that will get read-only access to the ovirt_engine_history database and will be able to use the public schema:
# su - postgres -c 'scl enable rh-postgresql10 bash'
# psql -U postgres -c "CREATE ROLE grafana WITH LOGIN ENCRYPTED PASSWORD 'grafana';" -d ovirt_engine_history
# psql -U postgres -c "GRANT CONNECT ON DATABASE ovirt_engine_history TO grafana;"
# psql -U postgres -c "GRANT USAGE ON SCHEMA public TO grafana;" ovirt_engine_history
Generate the rest of the permissions that will be granted to the newly created user and save them to a file:
# psql -U postgres -c "SELECT 'GRANT SELECT ON ' || relname || ' TO grafana;' FROM pg_class JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace WHERE nspname = 'public' AND relkind IN ('r', 'v');" --pset=tuples_only=on ovirt_engine_history > grant.sql
Use the file you created in the previous step to grant permissions to the newly created user:
# psql -U postgres -f grant.sql ovirt_engine_history
Remove the file you used to grant permissions:
# rm grant.sql
Exit the postgres user shell by pressing Ctrl+d
Add the following lines for the newly created user to /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf, preceding the line beginning "local all all":
host ovirt_engine_history grafana 0.0.0.0/0 md5
host ovirt_engine_history grafana ::0/0 md5
Reload postgres service
# systemctl reload rh-postgresql10-postgresql
2. Installing Grafana
You can install Grafana directly on the oVirt engine machine (this is how it's done in oVirt 4.4) or on a separate machine. The following steps show how to install Grafana on an Oracle Linux 7 server. Note: Oracle provides Grafana in the OLCNE yum repository; you only need to install the repository definition package to pick up Grafana and its dependencies.
# yum install oraclelinux-release-el7
# yum install oracle-olcne-release-el7
# yum-config-manager --enable ol7_optional_latest ol7_olcne11
# yum install grafana
# systemctl enable --now grafana-server
3. Adding oVirt DWH database as Data Source in Grafana
Log in to Grafana (default port 3000), navigate to Configuration -> Data Sources, and click the Add Data Source button.
Select PostgreSQL source and use the following settings (adjust the Host IP address to match your oVirt Engine IP but do not change the Name):
Name: oVirt DWH
Host: your-engine-ip-address:5432
user: grafana
pass: grafana
SSL mode: disable
4. Importing Dashboards from oVirt 4.4
Download Grafana Dashboards from oVirt 4.4 repository: https://github.com/oVirt/ovirt-dwh/tree/master/packaging/conf/grafana-das...
You can now import them in Grafana by navigating to Create -> Import and clicking on Upload .json file or by simply pasting JSON content.
I've tested this on my OLVM/oVirt 4.3 and works perfectly well.
Have fun!
Michal
Michał Gutowski
Principal Solutions Engineer, EMEA
+48 665 222 979
Oracle Open Cloud Infrastructure Software - Linux & Virtualization
On 24 Nov 2020, at 11:53, Vrgotic, Marko <M.Vrgotic(a)activevideo.com <mailto:M.Vrgotic@activevideo.com> > wrote:
Dear oVirt folks,
Thank you all for suggestions.
I will give it a go and see how far I get.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
m: +31 (65) 5734174
e: <mailto:m.vrgotic@activevideo.com> m.vrgotic(a)activevideo.com
w: <https://urldefense.com/v3/__http:/www.activevideo.com__;!!GqivPVa7Brio!MN...> www.activevideo.com
ActiveVideo Networks BV. Mediacentrum 3745 Joop van den Endeplein 1.1217 WJ Hilversum, The Netherlands. The information contained in this message may be legally privileged and confidential. It is intended to be read only by the individual or entity to whom it is addressed or by their designee. If the reader of this message is not the intended recipient, you are on notice that any distribution of this message, in any form, is strictly prohibited. If you have received this message in error, please immediately notify the sender and/or ActiveVideo Networks, LLC by telephone at +1 408.931.9200 and delete or destroy any copy of this message.
From: Yedidyah Bar David <didi(a)redhat.com <mailto:didi@redhat.com> >
Date: Sunday, 22 November 2020 at 08:39
To: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com <mailto:M.Vrgotic@activevideo.com> >
Cc: "users(a)ovirt.org <mailto:users@ovirt.org> " <users(a)ovirt.org <mailto:users@ovirt.org> >
Subject: Re: [ovirt-users] oVirt 4.3 DWH with Grafana
On Fri, Nov 20, 2020 at 8:45 AM Vrgotic, Marko <M.Vrgotic(a)activevideo.com <mailto:M.Vrgotic@activevideo.com> > wrote:
Dear oVirt,
We are currently running oVirt 4.3, and upgrade/migration to 4.4 won’t be possible for a few more months.
I am looking into guidelines, how to, for setting up Grafana using DataWarehouse as data source.
Did anyone already do this, and would you be willing to share the steps?
AFAIU this is definitely not tested/recommended/supported, but the current (4.4) dashboards use only 4.3 dwh compatibility views. So in theory, 4.4 grafana setup can work against your 4.3 engine/dwh without problems. So you can try something like:
1. Install el8 on some machine
2. Install ovirt-release (4.4!)
3. Install ovirt-engine-dwh-grafana-integration-setup. I *think* (I didn't try) that it would pull in all the dependencies it needs.
4. Run engine-setup. When prompted, only accept "Configure grafana?", and reply "No" to everything else
5. Follow the other prompts as applicable
This should be enough.
Not sure about the commands to add SSO on the engine machine. Perhaps better verify this first against a test engine (can be on another el7/ovirt4.3 VM with no hosts).
But, repeating: this isn't recommended.
Did you consider upgrading only your engine, and keeping your hosts on 4.3 until you can upgrade them later?
Best regards,
--
Didi
_______________________________________________
Users mailing list -- <mailto:users@ovirt.org> users(a)ovirt.org
To unsubscribe send an email to <mailto:users-leave@ovirt.org> users-leave(a)ovirt.org
Privacy Statement: <https://urldefense.com/v3/__https:/www.ovirt.org/privacy-policy.html__;!!...> https://urldefense.com/v3/__https://www.ovirt.org/privacy-policy.html__;!...
oVirt Code of Conduct: <https://urldefense.com/v3/__https:/www.ovirt.org/community/about/communit...> https://urldefense.com/v3/__https://www.ovirt.org/community/about/communi...
List Archives: <https://urldefense.com/v3/__https:/lists.ovirt.org/archives/list/users@ov...> https://urldefense.com/v3/__https://lists.ovirt.org/archives/list/users@o...
3 years, 2 months