Re: help me thanks
by liug74@sina.com
I restarted the Win10 VM after suspending it, and then got the following error message:
VM win10_admin2 was started by admin@internal-authz (Host: h4).
VM win10_admin2 is down with error. Exit message: Wake up from hibernation
failed:(<Element 'disk' at 0x7f8b8c0891d0>, 'source').
Failed to run VM win10_admin2 on Host h4.
Why?
Please attach any exceptions you see in engine.log and vdsm.log.
Best wishes,
Greg
The same problem occurs when starting Win7_liug.
Thanks
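The exit message pairs the repr of an XML element with the string 'source', which suggests the resume failed while looking up the <source> child of a <disk> element in the VM's saved domain XML. A minimal sketch of how such a lookup can produce exactly that pair - this is an illustration with Python's stdlib ElementTree, not vdsm's actual code:

```python
import xml.etree.ElementTree as ET

# A stripped-down disk definition like the one libvirt stores with a
# hibernated VM; note the <source> element is missing entirely.
domain_xml = """
<domain>
  <devices>
    <disk type='file' device='disk'>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def disk_source_paths(xml_text):
    """Return the source file of each disk, raising if <source> is absent."""
    root = ET.fromstring(xml_text)
    paths = []
    for disk in root.iter('disk'):
        source = disk.find('source')
        if source is None:
            # Reporting the element together with the missing key yields a
            # tuple that prints much like the exit message in the event log.
            raise LookupError((disk, 'source'))
        paths.append(source.get('file'))
    return paths

try:
    disk_source_paths(domain_xml)
except LookupError as e:
    print(e)  # prints e.g. (<Element 'disk' at 0x...>, 'source')
```

If that is what is happening here, the saved state references a disk whose source path could not be resolved - comparing the disk list in engine.log against the storage domain contents would be the next step.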
6 years, 3 months
help me thanks
by liug74@sina.com
I restarted the Win10 VM after suspending it, and then got the following error message:
VM win10_admin2 was started by admin@internal-authz (Host: h4).
VM win10_admin2 is down with error. Exit message: Wake up from hibernation failed:(<Element 'disk' at 0x7f8b8c0891d0>, 'source').
Failed to run VM win10_admin2 on Host h4.
Why?
6 years, 3 months
upgrading 4.0.1 to latest 4.2
by Charles Kozler
Hello -
I am kicking around different ideas of how I could achieve this.
I have been bitten, hard, by in-place upgrades before, so I am not really
wanting to do this unless it's a complete last resort... and even then I'm
iffy.
Really, the only configuration I have is 1 VDSM hook and about 20 networks.
The physical hardware is not going anywhere. Out of my three nodes, I can
push all VMs to the other 2 nodes and do a fresh build on 1 node.
I am hoping there may be a way for me to do a fresh install of 4.2, export
the engine config from 4.0.1, and then import it in some way - is this
possible? A quick Google search doesn't yield much, and searches for 'export'
usually return storage-related results.
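One route worth checking is engine-backup, which dumps the engine configuration and database (the logical networks are included, since they live in the engine DB; a VDSM hook is a host-side file and would need to be copied to the rebuilt node separately). A sketch, assuming a standalone engine; the file names are illustrative, and it is worth verifying in the release notes whether a 4.0 backup can be restored directly on 4.2 or whether you must step through 4.1 first:

```shell
# On the existing 4.0.1 engine: full backup of config + database
engine-backup --mode=backup --scope=all \
  --file=engine-401-backup.tar.gz --log=engine-backup.log

# On the freshly installed machine, after installing the newer
# engine packages: restore the backup, provisioning a fresh database
engine-backup --mode=restore --file=engine-401-backup.tar.gz \
  --log=engine-restore.log --provision-db --restore-permissions

# Then run engine-setup to upgrade the restored database in place
engine-setup
```

This still upgrades the database schema in place, but on a freshly built machine, which avoids most of the risk of upgrading the old OS and packages underneath a running engine.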
6 years, 3 months
Is there a way to specify which volume is used for VM imports?
by Jayme
I have multiple data volumes, and when I import a VM from an NFS-attached
export domain the disk always gets created on one specific volume. I
understand I can move the disk from one volume to another after the fact,
but is there a way I can specify which volume to use for the initial
import?
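The REST API (and the admin portal's import dialog) lets you name the destination storage domain at import time. A sketch using the ovirt-engine-sdk-python (ovirtsdk4) - the engine URL, credentials, and the names 'export1', 'data2', 'Default', and 'myvm' are all hypothetical placeholders; it cannot run without a live engine:

```python
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # hypothetical
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
system = connection.system_service()

# Locate the export domain and the VM sitting on it.
export_sd = system.storage_domains_service().list(search='name=export1')[0]
sd_service = system.storage_domains_service().storage_domain_service(export_sd.id)
vm = next(v for v in sd_service.vms_service().list() if v.name == 'myvm')

# Import, explicitly naming the data domain the disks should land on.
sd_service.vms_service().vm_service(vm.id).import_(
    cluster=types.Cluster(name='Default'),
    storage_domain=types.StorageDomain(name='data2'),  # target data volume
    vm=types.Vm(id=vm.id),
)
connection.close()
```

The storage_domain argument is what selects the destination volume; without it the engine picks one for you.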
6 years, 3 months
Network requirements for ovirtmgmt
by Marcel Hanke
Hi,
we currently have a big layer 2 network in our datacenters, so every node in a
cluster can reach the other nodes in the datacenter on network layer 2.
In the future we'd like to route directly on the switches and connect the
nodes in one cluster, as well as other clusters in one datacenter, via network
layer 3.
The question now is: do we need multicast and/or broadcast for any operation,
or is layer 3 connectivity fine?
Thanks
Marcel
6 years, 3 months
Connection issues when using gluster + infiniband + RDMA
by housemouse@squeakz.net
I am trying to set up Gluster using InfiniBand and mounting it in RDMA mode. I have created a network hook that configures the interfaces with MTU=65520 and CONNECTED_MODE=yes. I have a second server doing NFS with RDMA over InfiniBand with some VMs on it.

When I try to transfer files to the Gluster storage it takes a while, and I am seeing the message "VDSM {hostname} command GetGlusterVolumeHealInfoVDS failed: Message timeout which can be caused by communication issues". This is usually followed by "Host {hostname} is not responding. It will stay in Connecting state for a grace period of 60 seconds and after that an attempt to fence the host will be issued."

I just installed the InfiniBand hardware. The switch is a QLogic 12200-18, with QLE7340 single-port InfiniBand cards in each of the 3 servers. The error message varies as to which of the 3 nodes it comes from. Each of the 3 servers is running the opensm service.
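For reference, the interface settings described above would look something like this in an EL7 ifcfg file - a sketch only; the device name and addresses are illustrative:

```
# /etc/sysconfig/network-scripts/ifcfg-ib0   (device name illustrative)
DEVICE=ib0
TYPE=InfiniBand
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.11        # illustrative
NETMASK=255.255.255.0
CONNECTED_MODE=yes      # connected mode is required for an MTU above 2044
MTU=65520
```

One quick way to isolate the RDMA transport itself is to mount the volume over TCP instead (mount -t glusterfs -o transport=tcp server:/volname /mnt) and see whether the heal-info timeouts disappear.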
6 years, 3 months
Passing through Quadro cards to guest
by Wesley Stewart
Has anyone experimented with this?
I have had really good success with AMD cards, but consumer-grade NVIDIA
cards fail to load because NVIDIA blocks them in a guest.
I happened to have an NVIDIA Quadro P2000 lying around and I thought I
would give it a shot. I really like these cards, as they only require the
75 W provided by the PCIe slot.
I can pass the Quadro through just fine; however, after a couple of minutes
of playing a video, the audio gets really "buzzy" and the video drops to
about 3-5 FPS. This is while doing pretty minimal stuff - currently testing
with 720/1080p YouTube videos. The card is pulling about 6-10 W total, and
otherwise works fine on my Windows 10 guest.
I have also tried enabling/disabling MSI interrupts, which the card
supports, without it making a difference.
I am running a single host setup:
X10SDV-6c+ (Xeon D-1528)
running oVirt 4.2.3
6 years, 3 months
Connection issues when using gluster + infiniband + RDMA
by Matthew Hoberg
I am trying to set up Gluster using InfiniBand and mounting it in RDMA mode. I have created a network hook that configures the interfaces with MTU=65520 and CONNECTED_MODE=yes. I have a second server doing NFS with RDMA over InfiniBand with some VMs on it.

When I try to transfer files to the Gluster storage it takes a while, and I am seeing the message "VDSM {hostname} command GetGlusterVolumeHealInfoVDS failed: Message timeout which can be caused by communication issues". This is usually followed by "Host {hostname} is not responding. It will stay in Connecting state for a grace period of 60 seconds and after that an attempt to fence the host will be issued."

I just installed the InfiniBand hardware. The switch is a QLogic 12200-18, with QLE7340 single-port InfiniBand cards in each of the 3 servers. The error message varies as to which of the 3 nodes it comes from. Each of the 3 servers is running the opensm service.
Thanks,
Matt
6 years, 3 months