Proxmox - oVirt Migration
by Leo David
Hello everyone,
I have a situation where I need to migrate about 20 VMs from Proxmox to
oVirt.
In this case, it's about qcow2 images running on Proxmox.
Is there a recommended way and procedure for doing this?
Thank you very much !
--
Best regards, Leo David
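One commonly used path (a sketch only, not an official procedure; the Proxmox disk path, engine URL, password file and storage domain name below are placeholders) is to take each qcow2 image from the Proxmox node and let virt-v2v convert it and upload it through the engine API, or simply upload the disks via the Admin Portal (Storage > Disks > Upload) and attach them to new VMs:

# Proxmox directory storage usually keeps images under /var/lib/vz/images/<vmid>/.
qemu-img convert -p -f qcow2 -O qcow2 /var/lib/vz/images/100/vm-100-disk-1.qcow2 vm100.qcow2

# virt-v2v with the rhv-upload output converts the guest for virtio devices and
# uploads the result to an oVirt storage domain through the engine API.
virt-v2v -i disk vm100.qcow2 \
  -o rhv-upload -oc https://engine.example.com/ovirt-engine/api \
  -op /tmp/ovirt-admin-password -os my_data_domain \
  -oo rhv-cafile=/tmp/ca.pem -oo rhv-cluster=Default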
liveliness check fails on one host
by g.vasilopoulos@uoc.gr
I have an oVirt installation (4.2.6), and I have come across this issue:
The hosted engine can migrate to a certain host, say host5. The Admin Portal works fine and, as far as I can tell, everything else works fine too, but hosted-engine --vm-status shows "failed liveliness check" when the engine is on that host, so after a while the engine migrates somewhere else. It's not a terrible problem, but it is annoying, and I would like to know why this is happening and whether there is a way to fix it.
On every other host in the cluster this problem does not exist. The firewall rules are exactly the same on all hosts (in fact they are copied). Any places to look at?
Thank you
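For reference, the liveliness check is just the HA broker polling the engine health page over the ovirtmgmt network, so reproducing it by hand from host5 (the engine FQDN below is a placeholder) can help show whether DNS, routing or the engine itself is at fault:

# Run on host5 while the engine VM is running there:
curl -k https://engine.example.com/ovirt-engine/services/health
# A healthy engine answers with "DB Up!Welcome to Health Status!"
hosted-engine --vm-status
grep -i -e liveliness -e health /var/log/ovirt-hosted-engine-ha/broker.log | tail -n 20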
Ovirt engine General command validation failure.
by Alex K
Hi All,
I have an oVirt 4.1.9 self-hosted setup with 3 servers. Gluster in replica 3
is used as storage.
On one cluster I get the following error, which seems to have arisen
following a routine update of the servers' software packages.
When trying to delete a snapshot of a VM, the GUI reports: General command
validation failure.
The engine logs are attached.
Any help is appreciated.
Thanx,
Alex
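A hedged first step, assuming the default log locations: the GUI error normally has a matching validation message in engine.log, and after a package update it is also worth confirming that gluster is fully healed and that all three hosts run matching versions:

# On the engine VM: find the validation message logged when the snapshot removal failed.
grep -i "validation" /var/log/ovirt-engine/engine.log | tail -n 50
# On each host: check the heal state of every volume and compare package versions.
for vol in $(gluster volume list); do gluster volume heal "$vol" info; done
rpm -q vdsm qemu-kvm-ev libvirt glusterfs-server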
Suggestion: Addition of a one-off PXE boot option in the "Edit Virtual Machine > Boot Options" section
by Zach Dzielinski
My current usage of oVirt requires me to switch the boot priority of virtual machines from hard disk to PXE fairly often.
It would be nice to see an option added that sets a one-off PXE boot for a virtual machine, reverting to the original boot priorities after restart.
This would save time when dealing with multiple machines, by eliminating the need to switch them to PXE and then back to their previous boot options in the middle of their boot process.
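For context, the Run Once dialog (and the equivalent start action in the REST API) already performs a one-shot boot-device override that is not saved into the VM definition, so the one-off PXE case can be scripted across many VMs today; a sketch with placeholder URL, credentials and VM id:

# One-off network boot of a single VM via the v4 REST API "start" action.
curl -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -d '<action><vm><os><boot><devices><device>network</device></devices></boot></os></vm></action>' \
  https://engine.example.com/ovirt-engine/api/vms/VM_ID/start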
Networking in oVirt
by sinan@turka.nl
Hi all,
Today I have installed oVirt 4.2 on a single machine (self hosted
engine).
My server has 6 physical adapters, but only 1 has an active uplink
(enp2s0f0). The server is directly connected to the internet.
The installation of oVirt created the following interfaces:
- br-int
- ovirtmgmt [IP configured: inet 85.17.x.x]
- virbr0 [IP configured: inet 192.168.122.1]
- virbr0-nic
- vnet0
The hosted-engine is attached to ovirtmgmt and was assigned IP
192.168.122.76. The hosted-engine does have access to the internet.
The logical network ovirtmgmt is assigned to the physical interface
enp2s0f0 when I check this via the oVirt webgui.
I have 2 questions.
Q1:
I would like to achieve the following:
- Create a new network, call it vm_network.
- Create VMs and attach them to the vm_network.
- These VMs should be able to access the internet.
Is this possible, if so, how do I achieve this?
Q2:
Once my first question is answered and I have VMs running, I would like
to give some of the VMs an external/public IP address. How do I
achieve this?
Thanks!
Sinan
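As a rough sketch of the network-creation step only (names, IDs and credentials below are placeholders): a logical network is created at the data-center level, via the Admin Portal or the REST API, and then attached to a host NIC under Hosts > Network Interfaces > Setup Host Networks; with a single uplink it has to share enp2s0f0 with ovirtmgmt, for example as a VLAN-tagged network.

# Create the vm_network logical network in the data center (DC_ID is a placeholder).
curl -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -d '<network><name>vm_network</name><data_center id="DC_ID"/></network>' \
  https://engine.example.com/ovirt-engine/api/networks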
Unable to Manage Snapshots Using UI
by turboaaa@gmail.com
I want to delete some old snapshots, but when I try to access the list of snapshots using the engine UI I get:
"Uncaught exception occurred...Details: Exception Caught: html is null..."
The logs show the following:
2018-09-29 18:42:50,953-04 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-41) [] Permutation name: 53B2DF1D843A2038913DD336F86985B4
2018-09-29 18:42:50,953-04 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-41) [] Uncaught exception: com.google.gwt.event.shared.UmbrellaException: Exception caught: html is null
at java.lang.Throwable.Throwable(Throwable.java:70) [rt.jar:1.8.0_161]
at java.lang.RuntimeException.RuntimeException(RuntimeException.java:32) [rt.jar:1.8.0_161]
at com.google.web.bindery.event.shared.UmbrellaException.UmbrellaException(UmbrellaException.java:64) [gwt-servlet.jar:]
at Unknown.new C0(webadmin-0.js)
at com.google.gwt.event.shared.HandlerManager.$fireEvent(HandlerManager.java:117) [gwt-servlet.jar:]
at com.google.gwt.view.client.SelectionChangeEvent.fire(SelectionChangeEvent.java:67) [gwt-servlet.jar:]
at com.google.gwt.view.client.SingleSelectionModel.$resolveChanges(SingleSelectionModel.java:118) [gwt-servlet.jar:]
at com.google.gwt.view.client.SingleSelectionModel.fireSelectionChangeEvent(SingleSelectionModel.java:107) [gwt-servlet.jar:]
at com.google.gwt.view.client.SelectionModel$AbstractSelectionModel$1.execute(SelectionModel.java:128) [gwt-servlet.jar:]
at com.google.gwt.core.client.impl.SchedulerImpl.runScheduledTasks(SchedulerImpl.java:167) [gwt-servlet.jar:]
at com.google.gwt.core.client.impl.SchedulerImpl.$flushFinallyCommands(SchedulerImpl.java:272) [gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.exit(Impl.java:313) [gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:275) [gwt-servlet.jar:]
at Unknown.eval(webadmin-0.js)
Caused by: java.lang.NullPointerException: html is null
at java.lang.Throwable.Throwable(Throwable.java:64) [rt.jar:1.8.0_161]
at java.lang.Exception.Exception(Exception.java:28) [rt.jar:1.8.0_161]
at java.lang.RuntimeException.RuntimeException(RuntimeException.java:28) [rt.jar:1.8.0_161]
at Unknown.new lIf(webadmin-0.js)
at Unknown.new uKe(webadmin-0.js)
at org.ovirt.engine.ui.common.widget.uicommon.vm.VmSnapshotListViewItem.$createNicsItemContainerPanel(VmSnapshotListViewItem.java:91)
at org.ovirt.engine.ui.common.widget.uicommon.vm.VmSnapshotListViewItem.$updateValues(VmSnapshotListViewItem.java:397)
at Unknown.new _Ip(webadmin-143.js)
at org.ovirt.engine.ui.webadmin.section.main.view.tab.virtualMachine.SubTabVirtualMachineSnapshotView.$createListViewItem(SubTabVirtualMachineSnapshotView.java:68)
at org.ovirt.engine.ui.webadmin.section.main.view.tab.virtualMachine.SubTabVirtualMachineSnapshotView.createListViewItem(SubTabVirtualMachineSnapshotView.java:68)
at org.ovirt.engine.ui.common.widget.listgroup.PatternflyListView.$updateInfoPanel(PatternflyListView.java:137)
at org.ovirt.engine.ui.common.widget.listgroup.PatternflyListView.$processSelectionChanged(PatternflyListView.java:126)
at org.ovirt.engine.ui.common.widget.listgroup.PatternflyListView$lambda$0$Type.onSelectionChange(PatternflyListView.java:59)
at com.google.gwt.view.client.SelectionChangeEvent.dispatch(SelectionChangeEvent.java:98) [gwt-servlet.jar:]
at com.google.gwt.event.shared.GwtEvent.dispatch(GwtEvent.java:76) [gwt-servlet.jar:]
at com.google.web.bindery.event.shared.SimpleEventBus.$doFire(SimpleEventBus.java:173) [gwt-servlet.jar:]
Any ideas? Can I delete the snapshots using virsh as a backup?
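The snapshots here are engine-managed qcow2 volume chains rather than libvirt snapshots, so virsh will not see them; the usual fallback when the UI misbehaves is the REST API (or the Python SDK). A sketch with placeholder URL, credentials and IDs:

# List the snapshots of a VM, then delete one by id; the engine performs the merge.
curl -k -u 'admin@internal:password' -H 'Accept: application/xml' \
  https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots
curl -k -u 'admin@internal:password' -X DELETE \
  https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots/SNAPSHOT_ID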
vGPU: oVirt 4.2.6.1 + GRID K2
by femi adegoke
Has anyone had luck using a GRID K2 for vGPU pass-through?
I can't seem to install the driver (367.128-370.28).
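Not an answer for the K2 specifically, but the usual first checks when the NVIDIA installer fails on an oVirt host (assuming an EL7 node) are whether nouveau is still loaded and whether the headers for the running kernel are present:

lspci -nn | grep -i nvidia           # confirm the card is visible on the host
lsmod | grep -i nouveau              # the installer will not build while nouveau is loaded
rpm -q kernel-devel-$(uname -r) gcc  # build prerequisites for the kernel module
cat /proc/cmdline                    # check for nouveau blacklist / pci-stub / vfio settings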
Re: Info about procedure to shutdown hosted engine VM
by Giuseppe Ragusa
Hi all,
sorry for being late to such an interesting thread.
I discussed almost this same issue (properly and programmatically
shutting down a complete oVirt environment in a way that also
guarantees a clean and easy power up later) privately with some friends
some time ago.
Please note that the issue has already been discussed on the mailing
list before (we started from those hints): http://lists.ovirt.org/pipermail/users/2017-August/083667.html
I will translate here from Italian our description of the scenario,
hoping to add something to the discussion (maybe simply as another
use case):
Setup:
* We are talking about a hyperconverged oVirt+GlusterFS (HE-HC) setup
(let's say 1 or 3 nodes, but more should work the same)
* We are talking about abusing the "hyperconverged" term above (so
CTDB/Samba/Gluster-NFS/Gluster-block are also running, directly on the nodes) ;-)
Business case:
* Let's say that we are in a small business setup and we do not have
the luxury of diesel-powered generators guaranteeing no black-outs
* Let's say that we have (intelligent) UPSs with limited battery, so
that we must make sure that a clean global power down gets initiated
as soon as the UPSs signal that a certain low threshold has been
passed (threshold to be carefully defined in order to give enough
time for a clean shutdown)
* Let's say that those UPSs may be:
* 1 UPS powering everything (smells like a single point of failure, but could be)
* 2 UPSs with all physical equipment having redundant (2) power cords
* 3 or more UPSs somehow variously connected
* Let's say that the UPSs may be network-monitored (SNMP on the
ovirtmgmt network) or directly attached to the nodes (USB/serial)
General strategy leading to shutdown decision:
* We want to centralize UPS management and use something like NUT[1]
running on the Engine vm
* Network controlled UPSs will be directly controlled by NUT running on
the Engine vm, while directly attached UPSs (USB/serial) will be
controlled by NUT running on the nodes they are attached to, but only
in a "proxy" mode (relaying actual control/logic to the NUT service
running on the Engine vm)
* A proper logic will be devised (knowing the capacity of each UPS, the
load it sustains, and what it actually means to power down the
connected equipment in view of quorum maintenance) in order to decide
whether a partial power down or a complete global power down is
needed, in case only a subset of UPSs should experience a low-battery
event (obviously a complete low-battery on all UPSs means global
power down)
Detailed strategy of shutdown implementation (a command-level sketch follows this list):
* A partial power down (only some nodes) means:
* Those nodes will be put in local maintenance (VMs get automatically
migrated to other nodes or cleanly shut down if migration is
impossible because of constraints or limited resources; shutdown of
vms should respect proper order, using tags, dependency rules, HA
status or other hints) but without stopping GlusterFS services
(since there are further services depending on those, see below)
* Services running on those nodes get cleanly stopped:
* Proper stopping of oVirt HA Agent and Broker services on
those nodes
* Proper stopping of CTDB (brings down Samba too) and Gluster-block
(NFS-Ganesha too, if used instead of Gluster-NFS) services on
those nodes
* Clean unmounting of all still-mounted GlusterFS volumes on
those nodes
* Clean OS poweroff of those nodes
* A global power down of everything means:
* All guest VMs (except the Engine) get cleanly shut down (by means
of the oVirt guest agent), possibly in a proper dependency order (using
tags, dependency rules, HA status or other hints)
* All storage domains (except the Engine one) are put in maintenance
* Global oVirt maintenance is activated (no more HA actions to
guarantee that the Engine is up)
* Clean OS poweroff of the Engine vm
* Proper stopping of oVirt HA Agent and Broker services on all nodes
* Proper stopping of CTDB (brings down Samba too) and Gluster-block
(NFS-Ganesha too, if used instead of Gluster-NFS) services
on all nodes
* Clean unmounting of all still-mounted GlusterFS volumes on all
nodes
* Clean stop of all GlusterFS volumes (issued from a single,
chosen node)
* Clean OS poweroff of all nodes
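As an illustration only, a command-level sketch of the global power-down above, assuming the stock oVirt service names and GlusterFS volumes mounted under /rhev/data-center/mnt/glusterSD (the earlier steps, shutting down guest VMs and putting storage domains into maintenance, would go through the engine API or portal first):

# From any HE host: stop HA actions so the engine VM will not be restarted.
hosted-engine --set-maintenance --mode=global
# ...cleanly power off the engine VM, then on every node:
systemctl stop ovirt-ha-agent ovirt-ha-broker
systemctl stop ctdb                       # also brings down Samba, if CTDB is in use
umount /rhev/data-center/mnt/glusterSD/*  # unmount any remaining GlusterFS mounts
# From one chosen node: stop all Gluster volumes, then power the nodes off.
for vol in $(gluster volume list); do gluster --mode=script volume stop "$vol"; done
poweroff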
Sorry for the lengthy email :-)
Many thanks.
Best regards,
Giuseppe
I just published our Ansible mockup [2] of the above detailed global
strategy, but it's based on statically collected info and must be run
from an external machine, to say nothing of my awful Ansible style and
the complete lack of the NUT logic and configuration part.
PS: I will read through the official Ansible role for shutdown ASAP (I surely still need a lot of learning to write proper Ansible playbooks... :-D).
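For the record, invoking that official role's sample playbook (the same one quoted later in this thread; the file name is whatever it was saved as) looks like this:

# Shutdown mode is the default and cleanly powers down the whole environment.
ansible-playbook test.yml
# Startup mode runs only when the 'startup' tag is applied and requires the engine to be reachable.
ansible-playbook test.yml --tags startup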
On Wed, Sep 12, 2018, at 16:15, Simone Tiraboschi wrote:
>
>
> On Wed, Sep 12, 2018 at 3:49 PM Gianluca Cecchi
> <gianluca.cecchi(a)gmail.com> wrote:
>> On Wed, Sep 12, 2018 at 10:03 AM Simone Tiraboschi
>> <stirabos(a)redhat.com> wrote:
>>>
>>>>
>>>> Does it mean that I have to run the ansible-playbook command from
>>>> an external server and use as host in inventory the engine server,
>>>> or does it mean that the ansible-playbook command is to be run from
>>>> within the server where the ovirt-engine service is running, and so
>>>> keep intact the lines inside the sample yaml file:
>>>> "
>>>> - name: oVirt shutdown environment
>>>>   hosts: localhost
>>>>   connection: local
>>>> "
>>>>
>>>
>>> Both options are valid.
>>
>> Good! It seems it worked ok in shutdown mode (the default one) in a
>> test hosted engine based 4.2.6 environment, where I have 2 hosts
>> (both are hosted engine hosts), the hosted engine VM + 3 VMs.
>> Initially ovnode2 is both SPM and hosts the HostedEngine VM.
>> If I run the playbook from inside ovmgr42:
>>
>> [root@ovmgr42 tests]# ansible-playbook test.yml
>> [WARNING]: provided hosts list is empty, only localhost is
>> available. Note that the implicit localhost does not match 'all'
>>
>> PLAY [oVirt shutdown environment] ******************************************************************
>>
>> TASK [oVirt.shutdown-env : Populate service facts] *************************************************
>> ok: [localhost]
>>
>> TASK [oVirt.shutdown-env : Enforce ovirt-engine machine] *******************************************
>> skipping: [localhost]
>>
>> TASK [oVirt.shutdown-env : Enforce ovirt-engine status] ********************************************
>> skipping: [localhost]
>>
>> TASK [oVirt.shutdown-env : Login to oVirt] *********************************************************
>> ok: [localhost]
>>
>> TASK [oVirt.shutdown-env : Get hosts] **************************************************************
>> ok: [localhost]
>>
>> TASK [oVirt.shutdown-env : set_fact] ***************************************************************
>> ok: [localhost]
>>
>> TASK [oVirt.shutdown-env : Enforce global maintenance mode] ****************************************
>> skipping: [localhost]
>>
>> TASK [oVirt.shutdown-env : Warn about HE global maintenace mode] ***********************************
>> ok: [localhost] => {
>>     "msg": "HE global maintenance mode has been set; you have to exit it to get the engine VM started when needed\n"
>> }
>>
>> TASK [oVirt.shutdown-env : Shutdown of HE hosts] ***************************************************
>> changed: [localhost] => (item= . . . u'name': u'ovnode1', . . . u'spm': {u'priority': 5, u'status': u'none'}})
>> changed: [localhost] => (item= . . . u'name': u'ovnode2', . . . u'spm': {u'priority': 5, u'status': u'spm'}})
>>
>> TASK [oVirt.shutdown-env : Shutdown engine host/VM] ************************************************
>> Connection to ovmgr42 closed by remote host.
>> Connection to ovmgr42 closed.
>> [g.cecchi@ope46 ~]$
>>
>> At the end the 2 hosts (HP blades) are in power off state, as
>> expected.
>>
>> ILO event log of ovnode1:
>> Last Update Initial Update Count Description
>> 09/12/2018 10:13 09/12/2018 10:13 1 Server power removed.
>>
>> ILO event log of ovnode2:
>> Last Update Initial Update Count Description
>> 09/12/2018 10:14 09/12/2018 10:14 1 Server power removed.
>>
>> Actually due to time settings, they are to be intended as 11:13 and
>> 11:14 my local time.
>>
>> In /var/log/libvirt/qemu/HostedEngine.log of node ovnode2:
>>
>> 2018-09-11 17:04:16.388+0000: starting up libvirt version: 3.9.0, . . . hostname: ovnode2
>> ...
>> 2018-09-12 09:11:29.641+0000: shutting down, reason=shutdown
>>
>> Actually we are at 11:11 local time
>>
>> For now I have then manually restarted all the env.
>> I began starting from ovnode2 (that was SPM and with HostedEngine
>> during shutdown), keeping ovnode1 powered off, and it took some time
>> because I got some messages like this (to be read bottom up):
>>
>> Host ovnode1 failed to recover. 9/12/18 2:30:21 PM
>> Host ovnode1 is non responsive. 9/12/18 2:30:21 PM
>> ...
>> Host ovnode1 is not responding. It will stay in Connecting state for
>> a grace period of 60 seconds and after that an attempt to fence the
>> host will be issued. 9/12/18 2:27:40 PM
>> Failed to Reconstruct Master Domain for Data Center MYDC42. 9/12/18 2:27:34 PM
>> VDSM ovnode2 command ConnectStoragePoolVDS failed: Cannot find master
>> domain: u'spUUID=5af30d59-004c-02f2-01c9-0000000000b8, sdUUID=cbc308db-5468-4e6d-aabb-f9d133d05de2' 9/12/18 2:27:33 PM
>> Invalid status on Data Center MYDC42. Setting status to Non Responsive. 9/12/18 2:27:27 PM
>> ...
>> ETL Service Started 9/12/18 2:26:27 PM
>>
>> With ovnode1 still powered off, if I try to start it from the
>> gui I get:
>>
>> Trying to power on ovnode1 I get in events:
>> Host ovnode1 became non responsive. Fence operation skipped as the
>> system is still initializing and this is not a host where hosted
>> engine was running on previously. 9/12/18 2:30:21 PM
>>
>> and as popup I get this "operation canceled" window:
>> https://drive.google.com/file/d/1IWXASJHRylZR6ePWtGUcKiLbYjg__eNS/view?us...
>>
>> What's the meaning?
>> In the phrase "the system is still initializing and this is not a
>> host where hosted engine was running", to which host does the term THIS refer?
>> After some minutes I automatically get (to be read bottom up):
>
> We are tracing and discussing it here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1609029
>
> As you noticed, after a few minutes everything comes back to Up status,
> but the startup phase is really confusing.
> We are working on a patch to provide a smoother startup experience,
> although I don't see any concrete drawback in the current code.
>
>>
>> Host ovnode1 power management was verified successfully. 9/12/18 2:40:47 PM
>> Status of host ovnode1 was set to Up. 9/12/18 2:40:47 PM
>> ..
>> No faulty multipath paths on host ovnode1 9/12/18 2:40:46 PM
>> Storage Pool Manager runs on Host ovnode2 (Address: ovnode2), Data Center MYDC42. 9/12/18 2:37:55 PM
>> Reconstruct Master Domain for Data Center MYDC42 completed. 9/12/18 2:37:49 PM
>> ..
>> Host ovnode1 was started by SYSTEM. 9/12/18 2:32:37 PM
>> Power management start of Host ovnode1 succeeded. 9/12/18 2:32:37 PM
>> Executing power management status on Host ovnode1 using Proxy Host ovnode2 and Fence Agent ipmilan:172.16.1.52. 9/12/18 2:32:26 PM
>> Power management start of Host ovnode1 initiated. 9/12/18 2:32:26 PM
>> Auto fence for host ovnode1 was started. 9/12/18 2:32:26 PM
>> Storage Domain ISCSI_2TB (Data Center MYDC42) was deactivated by system because it's not visible by any of the hosts. 9/12/18 2:32:22 PM
>> ..
>> Executing power management status on Host ovnode1 using Proxy Host ovnode2 and Fence Agent ipmilan:172.16.1.52. 9/12/18 2:32:19 PM
>> Power management stop of Host ovnode1 initiated. 9/12/18 2:32:17 PM
>> Executing power management status on Host ovnode1 using Proxy Host ovnode2 and Fence Agent ipmilan:172.16.1.52. 9/12/18 2:32:16 PM
>> ...
>> Host ovnode1 failed to recover. 9/12/18 2:30:21 PM
>> Host ovnode1 is non responsive. 9/12/18 2:30:21 PM
>>
>> My questions are:
>>
>> - What if for some reason ovnode1 was not available during restart?
>> Would the system have started the services anyway after some time
>> in that case, or could it have been a problem?
>
> ovnode1 will be in non operational state until available.
> In the meantime the engine could elect a different SPM host
> and so on.
>
>> - If I want to try to start the environment through the ansible playbook
>> I see that it seems I have to use the "startup" tag, but is it not
>> fully automated?
>
> As you can see from the playbook, that role requires access to the
> engine host or VM but not to each managed host.
> This is required to fetch the hosts list from the engine, use its power
> management capabilities, credentials and so on.
> No host details are required for playbook execution.
>
>> "
>> A startup mode is also available:
>> in the startup mode the role will bring up all the IPMI configured
>> hosts and it will unset the global maintenance mode if on an hosted-engine
>> environment.
>> The startup mode will be executed only if the 'startup' tag is
>> applied; shutdown mode is the default.
>> The startup mode requires the engine to be already up.
>> "
>> Does the last sentence refer to a non-hosted-engine environment?
>
> No, the engine host should be manually powered on if physical, or
> at least one HE host (2 for the hyperconverged case) should be
> powered on.
> Exiting global maintenance mode is up to the user as well.
>
>> Otherwise I don't understand "will unset the global maintenance mode
>> if on an hosted-engine environment."
>
> You can also manually power on the engine VM with hosted-engine --vm-start
> on a specific host while still in global maintenance mode.
>
>> Also, does IPMI here mean the power management feature in general (in my
>> case I have iLO and not ipmilan)?
>
> Yes, power management in general, sorry for the confusion.
>
>> Where does it get the facts about the hosts in a hosted engine
>> environment, as the engine is forcibly down if the hosted engine
>> hosts are powered down?
>
> That's why the engine should be up.
>
>>
>> Thanks in advance for your time
>> Gianluca
>>
Links:
1. https://networkupstools.org/
2. https://github.com/Heretic-oVirt/ansible/blob/master/hvp/roles/common/glo...
Libvirt import to oVirt 3.5?
by Mark Steele
Hello,
Is there a mechanism for importing VMs from a QEMU/libvirt host to oVirt
3.5?
I see there is a menu item for import on 4.1, but not on my
installation of 3.5.
Best regards,
***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue
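One hedged option for a 3.5-era setup (hostnames, guest name and the NFS export path below are placeholders): virt-v2v can pull a guest from a remote libvirt/KVM host and write it into an oVirt export domain, from which it can then be imported in the 3.5 web admin; older virt-v2v builds call this output mode rhev, newer ones rhv.

# Run on a conversion host that can reach both the KVM box and the export domain NFS share.
virt-v2v -i libvirt -ic qemu+ssh://root@kvmhost.example.com/system guestname \
  -o rhev -os nfs.example.com:/export/ovirt-export-domain -of qcow2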