Re: Issues when Creating a Gluster Brick with Cache
by Strahil
The cache will not take effect immediately. It is designed to spare your cache disk and to promote only repeatedly accessed data.
Try running your benchmark 20 times and you will notice a slight improvement when reading the same object.
Of course, you can use lvm cache in writeback mode, but then it is recommended to use 2 cache disks (RAID1 or LVM mirroring), since losing a writeback cache device means losing data.
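To see whether the cache is actually being hit between runs, the cache counters can be read directly. A sketch; recent LVM versions expose these reporting fields, and the VG/LV names are taken from the commands quoted further down (the raw device-mapper name may differ if the cached LV is a thin pool):

lvs -o name,cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses RHGS_vg_VMStorage/VMStorage_lv_pool
# or the raw device-mapper view of the same cache target:
dmsetup status RHGS_vg_VMStorage-VMStorage_lv_pool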
Best Regards,
Strahil Nikolov

On Jun 26, 2019 17:44, Robert Crawford <robert.crawford4.14.10(a)gmail.com> wrote:
>
> So I created the brick without the cache and added the cache using the commands below, but it appears it's not working, because I see no improvement in the benchmarks.
>
> pvcreate /dev/md/VMStorageHot
> vgextend RHGS_vg_VMStorage /dev/md/VMStorageHot
> lvcreate -L 1500G -n lv_cache RHGS_vg_VMStorage /dev/md/VMStorageHot
> lvcreate -L 100M -n lv_cache_meta RHGS_vg_VMStorage /dev/md/VMStorageHot
> lvconvert --type cache-pool --cachemode writeback --poolmetadata RHGS_vg_VMStorage/lv_cache_meta RHGS_vg_VMStorage/lv_cache
> lvconvert --type cache --cachepool RHGS_vg_VMStorage/lv_cache RHGS_vg_VMStorage/VMStorage_lv_pool
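Editor's note on the quoted commands: lvmcache(7) sizes cache metadata at roughly 1/1000 of the cache data size (minimum 8MiB), so a 100M metadata LV is likely undersized for a 1500G cache pool. A hypothetical resizing, keeping the quoted names:

# ~1/1000 metadata-to-data ratio, per lvmcache(7)
lvcreate -L 1500G -n lv_cache RHGS_vg_VMStorage /dev/md/VMStorageHot
lvcreate -L 2G -n lv_cache_meta RHGS_vg_VMStorage /dev/md/VMStorageHot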
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/BQ4ZS245WZD...
vm unresponsive cpu soft lock on heavy operation
by Jayme
Basic setup notes: 3-node HCI running oVirt 4.3.3 using oVirt Node NG for hosts.
Storage is SSD-backed, with a 10Gb network dedicated to gluster and jumbo frames enabled.
The oVirt management network (which also acts as the VM network) is a 1Gb network.
Hosts are Dell R720s with 256GB RAM and E5-2690 CPUs.
I have a VM configured with 16GB RAM and 6 virtual CPUs. When this VM does
a heavy operation (in this case, dumping a large DB from a remote server),
the load spikes quickly. When this happens the VM becomes unresponsive,
and in some cases I get CPU soft lockup messages.
I'm trying to determine where the bottleneck is and how I can prevent
the VM from becoming unresponsive when doing heavy tasks.
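A diagnostic sketch (the volume name "data" is a placeholder; adjust to the actual gluster volume): profiling the volume while reproducing the dump usually shows whether the stall is on the storage side or pure CPU/memory pressure.

gluster volume profile data start
gluster volume profile data info      # look for file operations with high latency
iostat -xm 2                          # run in the guest and on the hosts; high await/%util points at storage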
Here is info from syslog showing the soft lockup:
Jun 26 16:13:10 roble kernel: NMI watchdog: BUG: soft lockup - CPU#5 stuck for 23s! [pg_dump:4025]
Jun 26 16:13:10 roble kernel: Modules linked in: binfmt_misc rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache sunrpc ppdev iosf_mbi crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd sg joydev parport_pc parport virtio_rng i2c_piix4 pcspkr ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic sr_mod cdrom ata_generic virtio_console virtio_net virtio_scsi pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel qxl drm_kms_helper syscopyarea sysfillrect sysimgblt serio_raw fb_sys_fops ttm drm floppy ata_piix libata virtio_pci virtio_ring virtio drm_panel_orientation_quirks dm_mirror dm_region_hash dm_log dm_mod
Jun 26 16:13:10 roble kernel: CPU: 5 PID: 4025 Comm: pg_dump Kdump: loaded Not tainted 3.10.0-957.21.2.el7.x86_64 #1
Jun 26 16:13:10 roble kernel: Hardware name: oVirt oVirt Node, BIOS 1.11.0-2.el7 04/01/2014
Jun 26 16:13:10 roble kernel: task: ffffa01443499040 ti: ffffa014b24a4000 task.ti: ffffa014b24a4000
Jun 26 16:13:10 roble kernel: RIP: 0010:[<ffffffffaeb113ea>] [<ffffffffaeb113ea>] generic_exec_single+0xfa/0x1b0
Jun 26 16:13:10 roble kernel: RSP: 0018:ffffa014b24a7c30 EFLAGS: 00000202
Jun 26 16:13:10 roble kernel: RAX: 0000000000000010 RBX: ffffa014b24a7c00 RCX: 0000000000000030
Jun 26 16:13:10 roble kernel: RDX: 000000000000ffff RSI: 0000000000000010 RDI: 0000000000000286
Jun 26 16:13:10 roble kernel: RBP: ffffa014b24a7c78 R08: ffffffffaf213640 R09: 000000018040003f
Jun 26 16:13:10 roble kernel: R10: 0000000000000001 R11: fffff83b8e91b540 R12: ffffa014b24a7bc0
Jun 26 16:13:10 roble kernel: R13: 0000000000000c9b R14: ffffa01477bf94e8 R15: ffffa01545074270
Jun 26 16:13:10 roble kernel: FS: 00007f9fefe89840(0000) GS:ffffa0172f340000(0000) knlGS:0000000000000000
Jun 26 16:13:10 roble kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 26 16:13:10 roble kernel: CR2: 00007f9fef391e90 CR3: 000000014aa00000 CR4: 00000000000606e0
Jun 26 16:13:10 roble kernel: Call Trace:
Jun 26 16:13:10 roble kernel: [<ffffffffaea7a4e0>] ? leave_mm+0x110/0x110
Jun 26 16:13:10 roble kernel: [<ffffffffaea7a4e0>] ? leave_mm+0x110/0x110
Jun 26 16:13:10 roble kernel: [<ffffffffaea7a4e0>] ? leave_mm+0x110/0x110
Jun 26 16:13:10 roble kernel: [<ffffffffaeb114ff>] smp_call_function_single+0x5f/0xa0
Jun 26 16:13:10 roble kernel: [<ffffffffaed75cd5>] ? cpumask_next_and+0x35/0x50
Jun 26 16:13:10 roble kernel: [<ffffffffaeb11aab>] smp_call_function_many+0x22b/0x270
Jun 26 16:13:10 roble kernel: [<ffffffffaea7a6a8>] native_flush_tlb_others+0xb8/0xc0
Jun 26 16:13:10 roble kernel: [<ffffffffaea7a718>] flush_tlb_mm_range+0x68/0x140
Jun 26 16:13:10 roble kernel: [<ffffffffaebe4687>] tlb_flush_mmu.part.76+0x37/0xe0
Jun 26 16:13:10 roble kernel: [<ffffffffaebe5f85>] tlb_finish_mmu+0x55/0x60
Jun 26 16:13:10 roble kernel: [<ffffffffaebef624>] unmap_region+0xf4/0x140
Jun 26 16:13:10 roble kernel: [<ffffffffaecfc8d3>] ? selinux_file_free_security+0x23/0x30
Jun 26 16:13:10 roble kernel: [<ffffffffaebefbe1>] ? __vma_rb_erase+0x121/0x220
Jun 26 16:13:10 roble kernel: [<ffffffffaebf1c15>] do_munmap+0x2a5/0x480
Jun 26 16:13:10 roble kernel: [<ffffffffaebf1e55>] vm_munmap+0x65/0xb0
Jun 26 16:13:10 roble kernel: [<ffffffffaebf30e2>] SyS_munmap+0x22/0x30
Jun 26 16:13:10 roble kernel: [<ffffffffaf175ddb>] system_call_fastpath+0x22/0x27
Jun 26 16:13:10 roble kernel: [<ffffffffaf175d21>] ? system_call_after_swapgs+0xae/0x146
Jun 26 16:13:10 roble kernel: Code: 00 b7 01 00 48 89 de 48 03 14 c5 60 bc 74 af 48 89 df e8 4a b7 27 00 84 c0 75 46 45 85 ed 74 11 f6 43 20 01 74 0b 0f 1f 00 f3 90 <f6> 43 20 01 75 f8 31 c0 48 8b 7c 24 28 65 48 33 3c 25 28 00 00
Jun 26 16:13:14 roble kernel: NMI watchdog: BUG: soft lockup - CPU#4 stuck for 27s! [kworker/4:3:16530]
Jun 26 16:13:14 roble kernel: Modules linked in: binfmt_misc rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache sunrpc ppdev iosf_mbi crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd sg joydev parport_pc parport virtio_rng i2c_piix4 pcspkr ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic sr_mod cdrom ata_generic virtio_console virtio_net virtio_scsi pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel qxl drm_kms_helper syscopyarea sysfillrect sysimgblt serio_raw fb_sys_fops ttm drm floppy ata_piix libata virtio_pci virtio_ring virtio drm_panel_orientation_quirks dm_mirror dm_region_hash dm_log dm_mod
Jun 26 16:13:14 roble kernel: CPU: 4 PID: 16530 Comm: kworker/4:3 Kdump: loaded Tainted: G L ------------ 3.10.0-957.21.2.el7.x86_64 #1
Jun 26 16:13:14 roble kernel: Hardware name: oVirt oVirt Node, BIOS 1.11.0-2.el7 04/01/2014
Jun 26 16:13:14 roble kernel: Workqueue: events tsc_refine_calibration_work
Jun 26 16:13:14 roble kernel: task: ffffa016acf74100 ti: ffffa014f76a8000 task.ti: ffffa014f76a8000
Jun 26 16:13:14 roble kernel: RIP: 0010:[<ffffffffaefb97e0>] [<ffffffffaefb97e0>] acpi_pm_read_verified+0x10/0x60
Jun 26 16:13:14 roble kernel: RSP: 0018:ffffa014f76abdb8 EFLAGS: 00000202
Jun 26 16:13:14 roble kernel: RAX: 00000000000addf6 RBX: 0000000000000086 RCX: 0000000000f16644
Jun 26 16:13:14 roble kernel: RDX: 0000000000000608 RSI: 00000000002aac00 RDI: ffffffffaf971901
Jun 26 16:13:14 roble kernel: RBP: ffffa014f76abdb8 R08: 00000000aff71901 R09: 0001427407e2e9e0
Jun 26 16:13:14 roble kernel: R10: 0001427407e2e9e0 R11: 0000000000000000 R12: 0000000000000004
Jun 26 16:13:14 roble kernel: R13: ffffa014f76abd38 R14: ffffffffaf62ea00 R15: ffffa0172f313900
Jun 26 16:13:14 roble kernel: FS: 0000000000000000(0000) GS:ffffa0172f300000(0000) knlGS:0000000000000000
Jun 26 16:13:14 roble kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 26 16:13:14 roble kernel: CR2: 0000000000448d70 CR3: 000000017789a000 CR4: 00000000000606e0
Jun 26 16:13:14 roble kernel: Call Trace:
Jun 26 16:13:14 roble kernel: [<ffffffffaea3426d>] tsc_read_refs+0x8d/0xb0
Jun 26 16:13:14 roble kernel: [<ffffffffaea344b2>] tsc_refine_calibration_work+0x1c2/0x220
Jun 26 16:13:14 roble kernel: [<ffffffffaeab9ebf>] process_one_work+0x17f/0x440
Jun 26 16:13:14 roble kernel: [<ffffffffaeabaf56>] worker_thread+0x126/0x3c0
Jun 26 16:13:14 roble kernel: [<ffffffffaeabae30>] ? manage_workers.isra.25+0x2a0/0x2a0
Jun 26 16:13:14 roble kernel: [<ffffffffaeac1da1>] kthread+0xd1/0xe0
Jun 26 16:13:14 roble kernel: [<ffffffffaeac1cd0>] ? insert_kthread_work+0x40/0x40
Jun 26 16:13:14 roble kernel: [<ffffffffaf175c37>] ret_from_fork_nospec_begin+0x21/0x21
Jun 26 16:13:14 roble kernel: [<ffffffffaeac1cd0>] ? insert_kthread_work+0x40/0x40
Jun 26 16:13:14 roble kernel: Code: 43 ea 74 00 30 98 fb ae c7 05 81 ea 74 00 78 00 00 00 5d c3 0f 1f 80 00 00 00 00 66 66 66 66 90 8b 15 5d 74 7a 00 55 48 89 e5 ed <89> c6 81 e6 ff ff ff 00 ed 89 c1 81 e1 ff ff ff 00 ed 25 ff ff
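Side note (an editor's sketch, not a fix for the underlying stall): both traces fire the soft-lockup watchdog at its default threshold (watchdog_thresh is 10s, and the lockup reports at roughly twice that), so if the stalls are transient you can buy headroom inside the guest while chasing the real bottleneck:

# inside the guest; 10 is the kernel default, raise cautiously
sysctl -w kernel.watchdog_thresh=30
# persist it, assuming a standard sysctl.d layout:
echo 'kernel.watchdog_thresh = 30' > /etc/sysctl.d/90-watchdog.conf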
VDI Brokering
by Ciro Iriarte
Hi!,
What's the usual way to implement VDI with oVirt? Is it usually
paired with a broker like UDS Enterprise / OpenUDS? Does it have
anything embedded that can handle VM assignment for a group of
users?
Regards.
CI.-
Graphics performance and Windows VM
by Ciro Iriarte
Hello!,
I'm wondering what would be a good fit for decent graphics performance
on Windows 10/2016/2019 + oVirt/KVM.
Is there some kind of offloading with SPICE? Should I use RDP
instead? Will vGPU backed by Nvidia cards help somehow? The idea is
to build a VDI environment for Windows VMs, probably using
OpenThinClient or a custom Linux image at the endpoint.
Comments?
Regards,
--
CI.-
[ANN] oVirt 4.3.5 Third Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.5 Third Release Candidate for testing, as of June 26th, 2019.
While testing this release candidate, please consider deeper testing of the
Gluster upgrade path, since with this release we are switching from Gluster 5 to
Gluster 6.
This update is a release candidate of the fifth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.
Notes:
- oVirt Appliance has not been updated due to a regression in the tooling we use for building it. [3]
- oVirt Node is already available. [2]
Additional Resources:
* Read more about the oVirt 4.3.5 release highlights:
http://www.ovirt.org/release/4.3.5/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.5/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1724076
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Issues when Creating a Gluster Brick with Cache
by Robert Crawford
Hey Everyone,
When I create a brick from a storage device in the server manager, the brick creation fails whenever I attach a cache device to it.
I'm not really sure why; it just says unknown.
HE on single oVirt node
by Mike Davis
I installed oVirt Node for oVirt v4.3.4 on a single server (default
partitioning). I know a single node is not best practice, but it fits our
current need.
Now, I am trying to deploy the hosted engine, but getting the error
"Device /dev/sdb not found" (full output below). /dev/sdb does not
exist. Changing to /dev/sda fails (as I would expect) with "Creating
physical volume '/dev/sda' failed". There is only one logical disk on
this server (RAID), which is /dev/sda.
The process does not seem correct to me. Why would the wizard try to
build gluster volumes on bare partitions rather than in a file system
like root? What do we need to do to deploy the HE?
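Editor's note: the wizard (gluster.infra ansible roles, as seen in the output below) builds dedicated PVs/VGs/LVs, and optionally VDO, on a raw block device, which is why it will not target the root file system. One workaround seen for single-disk lab boxes, sketched here with a placeholder path and size and a real performance cost, is a file-backed loop device:

truncate -s 1200G /gluster-brick.img    # sparse backing file (placeholder size)
losetup /dev/loop0 /gluster-brick.img   # expose it as a block device
# then enter /dev/loop0 instead of /dev/sdb as the device name in the wizard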
I used Cockpit and "Hyperconverged : Configure Gluster storage and oVirt
hosted engine". Since it is a single node, I selected "Run Gluster
Wizard For Single Node".
Host1 set to the IP address of the eno1 Ethernet interface (10.0.0.11)
No Packages, Update Hosts
Volumes left at defaults (engine, data, and vmstore)
Bricks left at defaults
- (Raid 6 / 256 / 12)
- Host 10.0.0.11
- device name /dev/sdb (all LVs)
- engine 100GB, data 500GB, vmstore 500GB
Output after "Deployment failed":
PLAY [Setup backend] ***********************************************************

TASK [Gathering Facts] *********************************************************
ok: [10.0.0.11]

TASK [gluster.infra/roles/firewall_config : Start firewalld if not already started] ***
ok: [10.0.0.11]

TASK [gluster.infra/roles/firewall_config : check if required variables are set] ***
skipping: [10.0.0.11]

TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ********
ok: [10.0.0.11] => (item=2049/tcp)
ok: [10.0.0.11] => (item=54321/tcp)
ok: [10.0.0.11] => (item=5900/tcp)
ok: [10.0.0.11] => (item=5900-6923/tcp)
ok: [10.0.0.11] => (item=5666/tcp)
ok: [10.0.0.11] => (item=16514/tcp)

TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] ***
ok: [10.0.0.11] => (item=glusterfs)

TASK [gluster.infra/roles/backend_setup : Gather facts to determine the OS distribution] ***
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for debian systems.] ***
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Install python-yaml package for Debian systems] ***
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Initialize vdo_devs array] ***********
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Record VDO devices (if any)] *********
skipping: [10.0.0.11] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})

TASK [gluster.infra/roles/backend_setup : Enable and start vdo service] ********
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] ******
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Check if valid disktype is provided] ***
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] ******
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] ******
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID] ***
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
failed: [10.0.0.11] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}

NO MORE HOSTS LEFT *************************************************************

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
10.0.0.11 : ok=9 changed=0 unreachable=0 failed=1 skipped=8 rescued=0 ignored=0
---
Re: Unable to log into Administration Portal
by Strahil
Zachary,
The version lock is used by the oVirt devs to prevent the system from updating too far ahead of what the installed oVirt release supports.
Don't remove it next time.
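For reference, the supported engine-update flow restores those locks itself. A sketch of the standard procedure; if the raw yum update left the engine half-configured, re-running engine-setup is often the first recovery step:

yum update ovirt\*setup\*   # update only the setup packages first
engine-setup               # upgrades the rest and re-applies its version locks
yum versionlock list       # verify the locks are back in place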
You should access the engine only by https://FQDN; the IP should not be used.
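If the engine really must answer on another name, or you need a temporary alias while DNS is being fixed, oVirt's SSO supports alternate FQDNs. A minimal sketch, with the alias as a placeholder:

# /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf
SSO_ALTERNATE_ENGINE_FQDNS="alias.example.com"
# then restart the engine:
systemctl restart ovirt-engine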
If you have some kind of snapshot (for example, a Gluster snapshot), you can consider reverting, or even restoring from backup.
I'm pretty sure that almost every failure is recorded in a log somewhere (I am still too new to advise on the exact location).
Best Regards,
Strahil Nikolov
On Jun 23, 2019 04:41, zachary.winter(a)witsconsult.com wrote:
>
> I suppose I just keep striking out on recent oVirt updates. Today (22 June 2019, all previous updates have been installed successfully) I saw that an update was available for my Enterprise Linux Host (main oVirt engine, CentOS 7 x64), and I attempted to update it. The update failed, which had never happened before. I logged in via SSH and saw that the updates had been halted due to "yum versionlock." I cleared versionlock and proceeded with the update, which appeared to work successfully. I rebooted the system, which came up a-ok. However, I can no longer reach the Administration Portal page. The browser only hangs. I see the following:
>
> - type in the IP address to the server and get the https://<ip address>/ovirt-engine/sso/oauth/authorize page, which tells me:
>
> "The FQDN used to access the system is not a valid engine FQDN. You must access the system using the engine FQDN or one of the engine alternate FQDNs.
> Click here to continue."
>
> When I click the provided link, I get the same hanging behavior and it never loads the login page.
>
> - I was able to connect via Cockpit to https://<IP Address>:9090 and log in successfully as root after SSH'ing in and restarting the engine. There are no major issues displayed, and I was able to create a Diagnostic Report. Under Hostname > oVirt Machines, it will actually redirect me to the login page at https://fqdn/ovirt-engine/sso/login.html. The page will actually load after the redirect, but when I enter my admin@internal credentials it just hangs and spins.
>
When I return to "Virtual Machines" in Cockpit, I have Host/Cluster/Templates/VDSM options, and I see "oVirt login in progress" with a continually spinning circle, never actually able to authenticate.
>
> In /var/log/ovirt-engine/ui.log, I see:
>
> 2019-06-22 19:08:02,533-04 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-4) [] Permutation name: C92E6928986552EDD0E1C99CDC0CC8AB
> 2019-06-22 19:08:02,533-04 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-4) [] Uncaught exception: com.google.gwt.core.client.JavaScriptException: (TypeError) : Cannot read property 'kh' of null
> at org.ovirt.engine.ui.uicommonweb.dataprovider.AsyncDataProvider.$lambda$4(AsyncDataProvider.java:387)
> at org.ovirt.engine.ui.uicommonweb.dataprovider.AsyncDataProvider$lambda$4$Type.executed(AsyncDataProvider.java:387)
> at org.ovirt.engine.ui.frontend.Frontend$2.$onFailure(Frontend.java:329) [frontend.jar:]
> at org.ovirt.engine.ui.frontend.Frontend$2.onFailure(Frontend.java:329) [frontend.jar:]
> at org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.$onFailure(OperationProcessor.java:184) [frontend.jar:]
> at org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.onFailure(OperationProcessor.java:184) [frontend.jar:]
> at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider.$handleMultipleQueriesFailure(GWTRPCCommunicationProvider.java:305) [frontend.jar:]
> at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.onFailure(GWTRPCCommunicationProvider.java:263) [frontend.jar:]
> at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:]
> at com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:233) [gwt-servlet.jar:]
> at com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409) [gwt-servlet.jar:]
> at Unknown.eval(webadmin-0.js)
> at com.google.gwt.core.client.impl.Impl.apply(Impl.java:236) [gwt-servlet.jar:]
> at com.google.gwt.cor
Error when upgrading Node
by zachary.winter@witsconsult.com
I am receiving the following error when attempting to update a Node manually that will not update through the administration portal:
error: %pre(ovirt-node-ng-image-update-4.3.4-1.el7.noarch) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package ovirt-node-ng-image-update-4.3.4-1.el7.noarch
Software
OS Version: RHEL - 7 - 6.1810.2.el7.centos
OS Description: oVirt Node 4.3.3.1
Kernel Version: 3.10.0 - 957.10.1.el7.x86_64
KVM Version: 2.12.0 - 18.el7_6.3.1
LIBVIRT Version: libvirt-4.5.0-10.el7_6.6
VDSM Version: vdsm-4.30.13-1.el7
SPICE Version: 0.14.0 - 6.el7_6.1
GlusterFS Version: glusterfs-5.5-1.el7
CEPH Version: librbd1-10.2.5-4.el7
Open vSwitch Version: openvswitch-2.10.1-3.el7
Kernel Features: PTI: 1, IBRS: 0, RETP: 1, SSBD: 3
VNC Encryption: Enabled
Any advice?
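An editor's sketch of first diagnostic steps (the PREIN scriptlet is run by imgbased, which performs the layered image update; the paths and the VG name "onn" are oVirt Node defaults and may differ on your box):

nodectl check                 # overall health of the node layers
imgbase layout                # list image layers; stale layers consume VG space
vgs onn                       # the new image layer needs free space in this VG
less /var/log/imgbased.log    # usually names the reason the scriptlet bailed out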
Error virNetTLSContextLoadCertFromFile after upgrade from oVirt 4.2 to 4.3.4
by Stefano Danzi
I've just upgraded my test environment from oVirt 4.2 to 4.3.4.
The system has only one host (CentOS 7.6.1810) and runs a self-hosted engine.
After the upgrade I'm not able to start vdsmd (and so the hosted engine...).
Below is the error from the log:
journalctl -xe
-- The libvirtd.service unit has begun starting up.
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24 16:09:17.006+0000: 8176: info : libvirt version: 4.5.0, package: 10.el7_6.12 (CentOS BuildSystem <http://bugs.centos.org>, 2019-06-20-15:01:15, x86-01.bsys.
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24 16:09:17.006+0000: 8176: info : hostname: ovirt01.hawai.lan
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24 16:09:17.006+0000: 8176: error : virNetTLSContextLoadCertFromFile:513 : Unable to import server certificate /etc/pki/vdsm/certs/vdsmcert.pem
Jun 24 18:09:17 ovirt01.hawai.lan systemd[1]: libvirtd.service: main process exited, code=exited, status=6/NOTCONFIGURED
Jun 24 18:09:17 ovirt01.hawai.lan systemd[1]: Failed to start Virtualization daemon.
-- Subject: The libvirtd.service unit has failed
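An editor's sketch of a first check (not from the thread): see whether the certificate libvirt rejects is expired or signed with an algorithm the upgraded TLS stack refuses; if so, re-enrolling the host certificates from the engine (reinstalling the host from the Administration Portal) is the usual remedy.

openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -dates -subject
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -text | grep 'Signature Algorithm'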