VM Portal
by Carl Langlois
Hi,
I know that the VM Portal had an issue with displaying user VMs. It seems to
be fixed by this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=148087
But when I try, the portal does not show anything. My engine is up to date
with the 4.2 repo. Is this still an issue?
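In case it matters, the exact engine build can be confirmed with:
# print the exact installed engine build (run on the engine machine)
rpm -q ovirt-engine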
Thanks
Switching from Public Network to Private Network Space
by Sakhi Hadebe
Hi,
We have installed oVirt Node 4.2.4 on 3 host machines and it is running fine.
All the host machines and the engine are in the public IP address space:
15x.xxx.xx.xx Host1
15x.xxx.xx.xx Host2
15x.xxx.xx.xx Host3
15x.xxx.xx.xx Engine VM
We want to move the cluster nodes and the engine VM to the private IP
address space and access them through VPN.
Besides the network interface files, which other files do we need to change
for a smooth transition to the private IP address space?
Below is the current network setup on the host machines:
bond0.20 - VLAN for the public bridge
bond0.20 - VLAN for the ovirtmgmt bridge
bond0.22 - VLAN for the storage bridge
ovirtmgmt bridge interface:
[root@host2 network-scripts]# cat ifcfg-ovirtmgmt
# Generated by VDSM version 4.20.32-1.el7
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
IPADDR=15x.xxx.xx.xx
NETMASK=255.xxx.xxx.xxx
BOOTPROTO=none
MTU=9000
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
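To make the question concrete, I imagine the same file would end up looking
something like this on a private range (10.0.20.0/24 is only an example, not
our real range):
# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt -- hypothetical private-range version
# note: VDSM generated the original file, so manual edits may be overwritten
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
IPADDR=10.0.20.2
NETMASK=255.255.255.0
BOOTPROTO=none
MTU=9000
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=yes
IPV6_AUTOCONF=yes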
[root@host2 ~]# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
ether 08:9e:01:87:ca:e1 txqueuelen 1000 (Ethernet)
RX packets 4051316 bytes 1013145487 (966.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 14031524 bytes 17967035125 (16.7 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
bond0.20: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 08:9e:01:87:ca:e1 txqueuelen 1000 (Ethernet)
RX packets 3836861 bytes 912581782 (870.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3183802 bytes 17125114804 (15.9 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
bond0.22: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::a9e:1ff:fe87:cae1 prefixlen 64 scopeid 0x20<link>
ether 08:9e:01:87:ca:e1 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 648 (648.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
em1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 08:9e:01:87:ca:e1 txqueuelen 1000 (Ethernet)
RX packets 2591947 bytes 739574288 (705.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9112112 bytes 11340058277 (10.5 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
em2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 08:9e:01:87:ca:e1 txqueuelen 1000 (Ethernet)
RX packets 1459369 bytes 273571199 (260.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4919412 bytes 6626976848 (6.1 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
genev_sys_6081: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 65000
inet6 fe80::3807:70ff:feb3:15d1 prefixlen 64 scopeid 0x20<link>
ether 3a:07:70:b3:15:d1 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 8 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 11464303 bytes 90848049840 (84.6 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 11464303 bytes 90848049840 (84.6 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 15x.xxx.xx.xx netmask 255.255.255.240 broadcast 15x.xxx.xx.xx
inet6 fe80::a9e:1ff:fe87:cae1 prefixlen 64 scopeid 0x20<link>
ether 08:9e:01:87:ca:e1 txqueuelen 1000 (Ethernet)
RX packets 4050802 bytes 916285919 (873.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2955965 bytes 17099409954 (15.9 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
p2p1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 90:e2:ba:3d:90:f4 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
p2p2: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 90:e2:ba:3d:90:f5 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.124.1 netmask 255.255.255.0 broadcast 192.168.124.255
ether 52:54:00:bd:16:db txqueuelen 1000 (Ethernet)
RX packets 5260 bytes 3927233 (3.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5590 bytes 3342473 (3.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::fc16:3eff:fe05:44 prefixlen 64 scopeid 0x20<link>
ether fe:16:3e:05:00:44 txqueuelen 1000 (Ethernet)
RX packets 8952 bytes 17748713 (16.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 14842 bytes 4166526 (3.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
--
Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN) Competency
Area, Meraka, CSIR
Tel: +27 12 841 2308
Fax: +27 12 841 4223
Cell: +27 71 331 9622
Email: sakhi(a)sanren.ac.za
Hosted Engine Deploy kernel panics on boot
by Karli Sjöberg
Hey all!
I'm trying to deploy the Hosted Engine through the Cockpit UI, and it goes
well until it's time to start the local VM, which kernel panics:
[ 2.032053] Call Trace:
[ 2.032053] [<ffffffffb687e78c>] load_elf_binary+0x33c/0xe50
[ 2.032053] [<ffffffffb68f3919>] ? ima_bprm_check+0x49/0x50
[ 2.032053] [<ffffffffb687e450>] ? load_elf_library+0x220/0x220
[ 2.032053] [<ffffffffb682236f>] search_binary_handler+0xef/0x310
[ 2.032053] [<ffffffffb6823b4b>] do_execve_common.isra.24+0x5db/0x6e0
[ 2.032053] [<ffffffffb6823c68>] do_execve+0x18/0x20
[ 2.032053] [<ffffffffb66afc1f>] ____call_usermodehelper+0xff/0x140
[ 2.032053] [<ffffffffb66afb20>] ? call_usermodehelper+0x60/0x60
[ 2.032053] [<ffffffffb6d20677>] ret_from_fork_nospec_begin+0x21/0x21
[ 2.032053] [<ffffffffb66afb20>] ? call_usermodehelper+0x60/0x60
[ 2.032053] Code: cf e9 ff 4c 89 f7 e8 7b 32 e7 ff e9 4d fa ff ff 65 8b 05 03 a0 7e 49 a8 01 0f 84 85 fc ff ff 31 d2 b8 01 00 00 00 b9 49 00 00 00 <0f> 30 0f 1f 44 00 00 48 c7 c0 10 00 00 00 e8 07 00 00 00 f3 90
[ 2.032053] RIP [<ffffffffb6823025>] flush_old_exec+0x725/0x980
[ 2.032053] RSP <ffff8add75ecbd00>
[ 2.298131] ---[ end trace 354b4039b6fb0889 ]---
[ 2.303914] Kernel panic - not syncing: Fatal exception
[ 2.304835] Kernel Offset: 0x35600000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
I've never had this problem before, so I'd just like to know whether it's a
known issue right now or whether I've done anything special to deserve this :)
The "hosts" I'm deploying this on are VMs with nested virtualization
activated. I've done this before, but this time around it's bombing, as
explained above.
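For completeness, the usual checks for nested virt (Intel path shown; AMD
uses kvm_amd and the svm flag):
# on the outer hypervisor: is nested support enabled in the kvm module?
cat /sys/module/kvm_intel/parameters/nested    # expect Y or 1
# inside the nested "host" VM: are virt extensions visible to the guest?
grep -cE 'vmx|svm' /proc/cpuinfo               # expect > 0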
Thanks in advance!
/K
Re: Failed to Run VM on host (self hosted engine)
by Deekshith
Dear Team
I have reinstalled the OS and tried again, but the issue is the same: I am unable to launch the VM. I don't have any SAN or other storage; it is all local.
Please help me to resolve the issue
My server details: Lenovo x3650 M5.
Deekshith
Q: Power failure - proper data center shutdown procedure
by Andrei Verovski
Hi,
I have the following simple setup:
oVirt engine - 4.2.3 (running as KVM appliance on SuSE Linux)
node1.1 - 4.2.1 (installed on clean CentOS with the newer elrepo 4.15.4-1el1 kernel)
node1.2 - 4.2.3 (installed on clean CentOS with the newer elrepo 4.16.2-1el7 kernel)
VMs - Debian, CentOS, and Windows guests, with ovirt-agent installed (except one with Linux telephony software).
What is the proper shutdown procedure in case of power failure?
I have scripts which simply run "shutdown now" on each host after 10 minutes.
However, the oVirt nodes fail to shut down properly; the process takes about 25 minutes.
node1.2
libvirt-guests.sh[10696]: Waiting for guest GUEST_VM_NAME to shut down
node1.1 - these steps took a VERY long time:
Unmounting /rhev/data-center/mnt/node11.mydomain.com:_vm_raid_nfs_export…
Unmounting /rhev/data-center/mnt/node12.mydomain.com:_vm_raid_nfs_disks…
Unmounting /rhev/data-center/mnt/node11.mydomain.com:_vm_raid_nfs_disks…
Unmounting /rhev/data-center/mnt/node11.mydomain.com:_vm_raid_nfs_iso…
on both nodes:
[] Forcibly powering off: job timed out
As a result, the OS is down but the server is not; it is still drawing power from the UPS battery.
Powering on was problematic: I had to restart node1.1 a second time before it became "green" in the oVirt engine.
One VM failed to start with the error message "Can't acquire ticket" and was stuck with grey arrows (with no possibility to do anything) - unfortunately, I forgot to take a screenshot or record the exact message. Restarting node1.1 cured that problem.
Q: what is the proper oVirt data center automatic shutdown procedure?
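In case it clarifies what I'm after, here is a sketch of the kind of
guest-shutdown step I imagine should run first, via the REST API (ENGINE_FQDN
and PASSWORD are placeholders, and this is untested on my side):
# ask every guest to shut down through the engine API before powering off hosts
API="https://ENGINE_FQDN/ovirt-engine/api"
AUTH="admin@internal:PASSWORD"
# collect the VM ids from the /vms collection (crude XML scrape for brevity)
for vm in $(curl -sk -u "$AUTH" "$API/vms" | grep -oP 'vm href="/ovirt-engine/api/vms/\K[^"]+'); do
  curl -sk -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
       -d '<action/>' "$API/vms/$vm/shutdown"
done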
Thanks in advance
Andrei
Uncaught exception occurred - NS_ERROR_FILE_CORRUPTED
by Michael Seidel
Hi,
I am running oVirt 4.2 and just started getting this error message upon login:
Uncaught exception occurred. Please try reloading the page. Details:
(NS_ERROR_FILE_CORRUPTED) :
Please have your administrator check the UI logs
The relevant content of the ui.log is given below (or, nicely formatted, at
https://paste.ee/p/LUrbt).
I am able to access the js file directly in a browser, so the file is
definitely not missing and is readable. However, the side-bar of the oVirt
webadmin is not showing (or, more precisely: it loads and shows for a
split second and then disappears).
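For completeness, this is how I checked that the file is served (the host is
elided here just as in the log below):
# fetch the script and report HTTP status and download size
curl -sk -o /dev/null -w '%{http_code} %{size_download}\n' \
  'https://XXXXX/ovirt-engine/webadmin/theme/00-ovirt.brand/patternfly-functions-vertical-nav-custom-3.26.1.js'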
The web console shows a similar error message to the one below, referring to
the same js file, and in addition says:
Tue Jul 24 14:28:03 GMT+200 2018
com.gwtplatform.mvp.client.presenter.slots.LegacySlotConvertor
SEVERE: Warning: You're using an untyped slot!
Untyped slots are dangerous! Please upgrade your slots using
the Arcbee's easy upgrade tool at
https://arcbees.github.io/gwtp-slot-upgrader
The GitHub URL, however, is not working.
I also upgraded and rebooted the oVirt engine, but the error remains.
Do you have any suggestions as to what might cause this issue and how I can
fix it?
Best,
- Michael
2018-07-24 14:19:24,582+02 ERROR
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-1) [] Permutation name: 2029DD69C3B3D95611AAF9AC05D7EF9B
2018-07-24 14:19:24,582+02 ERROR
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-1) [] Uncaught exception:
com.google.gwt.core.client.JavaScriptException: (NS_ERROR_FILE_CORRUPTED) :
at
Unknown.loadFromLocalStorage(https://XXXXX/ovirt-engine/webadmin/theme/00-ovirt.brand/patternfly-functions-vertical-nav-custom-3.26.1.js)
at
Unknown.init(https:/XXXXX/ovirt-engine/webadmin/theme/00-ovirt.brand/patternfly-functions-vertical-nav-custom-3.26.1.js)
at
Unknown.fn.setupVerticalNavigation(https://XXXXX/ovirt-engine/webadmin/theme/00-ovirt.brand/patternfly-functions-vertical-nav-custom-3.26.1.js)
at
org.ovirt.engine.ui.webadmin.section.main.view.MainSectionView$lambda$0$Type.execute(MainSectionView.java:45)
at
com.google.gwt.core.client.impl.SchedulerImpl.runScheduledTasks(SchedulerImpl.java:167)
[gwt-servlet.jar:]
at
com.google.gwt.core.client.impl.SchedulerImpl.$flushPostEventPumpCommands(SchedulerImpl.java:338)
[gwt-servlet.jar:]
at
com.google.gwt.core.client.impl.SchedulerImpl$Flusher.execute(SchedulerImpl.java:76)
[gwt-servlet.jar:]
at
com.google.gwt.core.client.impl.SchedulerImpl.execute(SchedulerImpl.java:140)
[gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.apply(Impl.java:236)
[gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:275)
[gwt-servlet.jar:]
at Unknown.Ju/<(https://XXXXX/ovirt-engine/webadmin/?locale=en_US)
at Unknown.d(https://XXXXX/ovirt-engine/webadmin/?locale=en_US)
at Unknown.anonymous(Unknown)
Self hosted to bare metal engine
by Christophe TREFOIS
Hi,
What would be the process to go from a self-hosted engine to a bare metal engine?
(Actually, it would move to a VM in a VMware cluster.)
All the user guides I found are for going from bare metal to self-hosted.
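I assume the backup half would look roughly like the following on the current
engine VM, with a matching restore on the new machine, but I'd appreciate
confirmation (untested on my side):
# on the current (self-hosted) engine VM: take a full backup
engine-backup --mode=backup --scope=all --file=engine.bak --log=engine-backup.log
# on the new standalone machine, after installing the ovirt-engine packages:
#   engine-backup --mode=restore --file=engine.bak --log=engine-restore.log --provision-db
#   engine-setup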
Thanks for any tips or pointers
Christophe
Sent from my iPhone
Cannot switch to maintenance, image transfer is in progress
by Randy Bender
We've just updated our hosted engine to 4.2.4.
When trying to place one of the hosts in maintenance mode, we get the error
in the subject line. I've found the following that looks to be the issue
we are hitting:
https://bugzilla.redhat.com/show_bug.cgi?id=1586126
I've looked in the engine database with the following command and results:
engine=# select disk_id from image_transfers ;
disk_id
--------------------------------------
52966b71-2d01-4a16-9830-65a888717656
c72d64b7-93c9-485b-b031-29c785c5bf9a
(2 rows)
Full details on these records show that they are indeed old - there are no
transfers actually in progress.
Can anyone confirm the correct way to remedy this? The bug report notes
changing the phase on the record in question. Before I try that, I'd like
confirmation from anyone else who has seen this issue and resolved it.
And if this is the resolution, can you describe the exact psql commands to
achieve this?
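To be explicit about what I would run - the phase value below is my
assumption from reading the bug, so please correct it - something like:
engine=# -- back up the DB first, e.g. with engine-backup --scope=db --mode=backup
engine=# select disk_id, phase from image_transfers;
engine=# -- flip the stale records to a terminal phase (10 = FINISHED_FAILURE is
engine=# -- my assumption; please confirm against your engine version):
engine=# update image_transfers set phase = 10 where disk_id = '52966b71-2d01-4a16-9830-65a888717656';
engine=# update image_transfers set phase = 10 where disk_id = 'c72d64b7-93c9-485b-b031-29c785c5bf9a';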
Thanks
Freshly installed ovirt-node-ng 4.2.4 unable to deploy hosted-engine
by Ralf Schenk
Hello List,
after having freshly installed my first oVirt Node 4.2.4, configured basic
networking, and prepared NFS shares on my NFS server, I would like to deploy
the hosted engine on this first host and NFS storage.
In Cockpit I set the Performance Profile to "virtual host" and mounted an
export from my Storage as /mnt/engine (which shouldn't be needed, since I
expected the engine-setup wizard to ask me...).
I'm already stuck when selecting "Hosted Engine - Deploy oVirt hosted
engine on storage that has already been provisioned":
"hosted-engine-setup" on the command-line is not available.
Furthermore, I have warnings in the Node Health Status. I didn't configure
discard anywhere; it is used as it was set up by the installer on the logical
volumes residing on a SATA disk.
--
Ralf Schenk
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail rs(a)databay.de
Databay AG
Jens-Otto-Krag-Straße 11
D-52146 Würselen
www.databay.de
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen