Paused VMs will not resume
by eevans@digitaldatatechs.com
I have two VMs, the most important in my world, that paused and will not resume. I have googled this to death but found no solution. The error stated a lack of space, but none of the drives on my hosts are using more than 30% of their space, and these two VMs have run on a KVM host for several years and always had at least 50% free space.
I like oVirt and want to use it, but I cannot tolerate the downtime. If I cannot get this resolved, I'm going back to plain KVM hosts. I am pulling my hair out here.
If anyone can help with this issue, please let me know.
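A note for anyone debugging the same symptom: oVirt pauses a VM on an I/O error when the *storage domain's* free space drops below the engine's critical threshold, so the relevant number is free space on the storage domain, not on the host's local drives. The option names and defaults below (FreeSpaceCriticalLowInGB = 5 GiB, FreeSpaceLow = 10%) are assumptions taken from engine-config defaults; verify them on your engine with "engine-config -g FreeSpaceCriticalLowInGB". A minimal sketch of the check:

```python
# Hedged sketch: decide whether a storage domain's free space would trip
# oVirt's low-space thresholds. FreeSpaceCriticalLowInGB (default 5 GiB)
# and FreeSpaceLow (default 10%) are assumed defaults - verify with
# engine-config on your own engine.

GIB = 1024 ** 3

def space_status(total_bytes, free_bytes,
                 critical_low_gib=5, warn_low_percent=10):
    """Return 'critical', 'warning', or 'ok' for a storage domain."""
    if free_bytes < critical_low_gib * GIB:
        return "critical"   # engine blocks writes; running VMs may pause
    if free_bytes * 100 / total_bytes < warn_low_percent:
        return "warning"
    return "ok"

# A 1 TiB domain that is 30% used still has ~700 GiB free -> "ok",
# which is why host-level df output can look healthy while a single
# domain (or a thin-provisioned image hitting its limit) is not.
print(space_status(1024 * GIB, 700 * GIB))  # -> ok
```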
4 years, 10 months
Hugepages and running out of memory
by klaasdemter@gmail.com
Hi,
this e-mail is meant to caution people who mix hugepages VMs with
non-hugepages VMs on the same hypervisors. I ran into major trouble:
the hypervisors ran out of memory because VM scheduling disregards
hugepages in its calculations. So if you have both hugepages and
non-hugepages VMs, better check the memory committed on each hypervisor
manually :)
https://bugzilla.redhat.com/show_bug.cgi?id=1804037
https://bugzilla.redhat.com/show_bug.cgi?id=1804046
As for workarounds: so far the only viable solution seems to be
splitting hugepages and non-hugepages VMs with affinity groups, but at
least for me that means wasting a lot of resources.
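To make the accounting problem concrete, here is a hedged sketch (all numbers invented). The assumption, based on my reading of the linked bugzillas, is that the scheduler compares the sum of regular VM memory against total host RAM, while the hugepages pool is already reserved up front and unusable for non-hugepages guests:

```python
# Hedged sketch of the accounting gap described above. Numbers in MiB.
# Assumption: the scheduler ignores the hugepages pool when computing
# free memory for regular VMs (see the linked bugzillas for the real
# engine logic).

def naive_free(total_mib, regular_vm_mib):
    # What a scheduler that ignores the hugepages pool would compute.
    return total_mib - sum(regular_vm_mib)

def actual_free(total_mib, hugepage_pool_mib, regular_vm_mib):
    # The hugepages pool is reserved up front and unusable here.
    return total_mib - hugepage_pool_mib - sum(regular_vm_mib)

# 256 GiB host with a 128 GiB hugepages pool and two 64 GiB regular VMs:
total, pool, vms = 262144, 131072, [65536, 65536]
print(naive_free(total, vms))        # 131072 -> looks like plenty of room
print(actual_free(total, pool, vms)) # 0 -> the host is in fact exhausted
```

With zero headroom left for qemu and host overhead, the OOM killer is the predictable next step.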
Greetings
Klaas
main channel: failed to connect HTTP proxy connection not allowed
by Jorick Astrego
Hi,
Having a SPICE console issue on our new 4.3.8 cluster. The console stays
blank, and running remote-viewer from the CLI with debug enabled I get the
following error: "main channel: failed to connect HTTP proxy connection not allowed"
We serve the user portal through HAProxy, so the SPICE proxy is configured in the oVirt engine:
engine-config -g SpiceProxyDefault
SpiceProxyDefault: http://userportal.*.*:****/ version: general
But the same setting works fine on our 4.2 cluster.
I can telnet to the ports on the 4.3.8 hosts without issue.
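When comparing the two clusters it can also help to diff the downloaded .vv files themselves, since the proxy the client will use is embedded there. A small sketch that extracts the connection fields; the section and key names ([virt-viewer], host, port, tls-port, proxy) match the .vv files I have seen, but treat them as assumptions and check your own downloads:

```python
# Hedged sketch: pull the connection fields out of a console.vv body.
# console.vv is INI-style; the [virt-viewer] section and the proxy/host/
# port/tls-port keys are assumptions based on the .vv files I have seen.
import configparser

def vv_fields(text, keys=("host", "port", "tls-port", "proxy")):
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    section = cfg["virt-viewer"]
    return {k: section.get(k, "<missing>") for k in keys}

# Example with a made-up (hypothetical) .vv body; on a real host you
# would pass open("console.vv").read() instead:
sample = """[virt-viewer]
type=spice
host=10.0.0.5
port=5900
proxy=http://userportal.example:3128
"""
print(vv_fields(sample))
```

If the proxy line differs between the 4.2 and 4.3.8 files, the problem is on the engine side; if it is identical, the proxy itself is rejecting the CONNECT.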
Non-working console on the newer cluster:
remote-viewer -v --debug console.vv
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.697: Opening
display to console.vv
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.700: Guest (null)
has a spice display
Guest (null) has a spice display
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.789: Spice
foreign menu updated
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.790: After open
connection callback fd=-1
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.790: Opening
connection to display at console.vv
Opening connection to display at console.vv
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.791: fullscreen
display 0: 0
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.791: app is not
in full screen
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.792: New spice
channel 0x55704bf70d40 SpiceMainChannel 0
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.793: notebook
show status 0x55704b820280
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.969: main
channel: failed to connect HTTP proxy connection not allowed
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.969: Destroy
SPICE channel SpiceMainChannel 0
(remote-viewer:29267): virt-viewer-DEBUG: 16:24:24.969: zap main channel
(remote-viewer:29267): virt-viewer-WARNING **: 16:24:24.969: Channel
error: HTTP proxy connection not allowed
Working console on older cluster:
remote-viewer -v --debug console.vv
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.111: Opening
display to console.vv
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.111: Guest (null)
has a spice display
Guest (null) has a spice display
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.209: Spice
foreign menu updated
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.209: After open
connection callback fd=-1
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.209: Opening
connection to display at console.vv
Opening connection to display at console.vv
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.211: fullscreen
display 0: 0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.211: app is not
in full screen
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.212: New spice
channel 0x556a03f7e820 SpiceMainChannel 0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.212: notebook
show status 0x556a03c7a280
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.397: main
channel: opened
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.397: notebook
show status 0x556a03c7a280
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.404:
virt_viewer_app_set_uuid_string: UUID changed to
a2d64ffe-7583-45b6-92f3-87661039388c
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.404: app is not
in full screen
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.502: app is not
in full screen
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.514: New spice
channel 0x556a043bb660 SpiceDisplayChannel 0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.514: New spice
channel 0x556a0422b570 SpiceCursorChannel 0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.515: New spice
channel 0x556a0422c000 SpiceInputsChannel 0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.515: new inputs
channel
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.661: creating
spice display (#:0)
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.661: Insert
display 0 0x556a03ca29e0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.699: creating
spice display (#:1)
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.699: Insert
display 1 0x556a03ca2830
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.699: creating
spice display (#:2)
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.700: Insert
display 2 0x556a03ca2680
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.700: creating
spice display (#:3)
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.700: Insert
display 3 0x556a03ca24d0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.701: Found a
window without a display, reusing for display #0
(remote-viewer:29334): virt-viewer-DEBUG: 16:25:59.701: notebook
show display 0x556a03c7a280
(remote-viewer:29334): virt-viewer-DEBUG: 16:26:00.074: Allocated
1024x768
(remote-viewer:29334): virt-viewer-DEBUG: 16:26:00.074: Child
allocate 1024x768
(remote-viewer:29334): virt-viewer-DEBUG: 16:26:04.846: Allocated
1024x768
(remote-viewer:29334): virt-viewer-DEBUG: 16:26:04.846: Child
allocate 1024x768
With kind regards,
Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270 info(a)netbulae.eu Staalsteden 4-3A KvK 08198180
Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
----------------
[ANN] oVirt 4.3.9 Second Release Candidate is now available for testing
by Lev Veyde
The oVirt Project is pleased to announce the availability of the oVirt
4.3.9 Second Release Candidate for testing, as of February 20th, 2020.
This update is a release candidate of the ninth in a series of
stabilization updates to the 4.3 series.
This is pre-release software and should not be used in production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
* oVirt Node 4.3 (available for x86_64 only) has been built based on the
CentOS 7.7 release
See the release notes [1] for known issues, new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available
Additional Resources:
* Read more about the oVirt 4.3.9 release highlights:
http://www.ovirt.org/release/4.3.9/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.9/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Understanding dashboard memory data
by nux@li.nux.ro
Hello,
I'm having to deal with an oVirt installation, once I log in the
dashboard says:
Memory: "1.2 Available of 4.5 TiB" and "Virtual resources - Committed:
113%, Allocated: 114%".
So clearly there is RAM available, but what do the committed and
allocated numbers mean?
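My understanding (an assumption worth verifying against the engine documentation) is that these percentages compare the memory *defined* for running VMs against physical memory, so they can exceed 100% even while plenty of physical RAM is free, because guests rarely touch all of their defined memory. A toy sketch of that arithmetic:

```python
# Hedged sketch: why "committed" can exceed 100% while RAM is free.
# Assumption: committed % = (sum of running VMs' defined memory) /
# physical memory, independent of how much the guests actually touch.

def commitment_percent(vm_defined_gib, physical_gib):
    return round(100 * sum(vm_defined_gib) / physical_gib)

physical = 4.5 * 1024          # 4.5 TiB expressed in GiB
vms = [64] * 80                # eighty VMs each defined with 64 GiB
print(commitment_percent(vms, physical))  # 111 -> overcommitted on paper
```

In that toy case the hosts are "111% committed" on paper while the dashboard could still report terabytes of memory available, because actual guest usage is lower than the definitions.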
Regards
oVirt storage domain ovirt-image-repository is unattached and Change CD can't be used to install Windows on a VM
by Heinrich Momberg
Good day,
Whenever I try to use Change CD to mount the oVirt tools ISO so I can install Windows Server (or any Windows), it says "could not find ISO file collection". The Windows setup also doesn't pick up the HDD I have created, so I have to do the whole oVirt tools mount, select the Windows version I want to install, etc. It had been working fine until a couple of days ago, when we had power outages. I also noticed that the storage domain ovirt-image-repository is unattached... How do I go about fixing this problem, and where can I check logs? I have already checked the permissions on the location where the ISOs/images are stored.
Spacewalk Integration
by eevans@digitaldatatechs.com
I finally figured out how to use Spacewalk as an external provider in oVirt.
For the URL I used http://sat.digitaldatatechs.com/XMLRPC along with the admin login credentials. It registered, and Spacewalk has the capability of provisioning a bare-bones machine.
I have not tested provisioning yet, but I will report back once I do.
Get CPU Family
by Luca
Hello guys,
how can I get the correct CPU family on an oVirt host?
When I execute "virsh -r capabilities" I only get "Broadwell-noTSX-IBRS", not the full "Broadwell-noTSX IBRS SSBD MDS"
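Not an authoritative answer, but one way to cross-check: the suffixes oVirt appends to the CPU type name correspond to CPU feature flags the kernel exposes in /proc/cpuinfo. The flag-to-suffix mapping below (spec_ctrl -> IBRS, ssbd -> SSBD, md_clear -> MDS) is my assumption; verify it against your hosts and the cluster CPU-type list. A sketch that derives the suffixes from a flags line:

```python
# Hedged sketch: map kernel CPU flags to the suffixes oVirt appends to
# the CPU type name. The flag->suffix mapping (spec_ctrl -> IBRS,
# ssbd -> SSBD, md_clear -> MDS) is an assumption - verify it yourself.

SUFFIX_FLAGS = [("spec_ctrl", "IBRS"), ("ssbd", "SSBD"), ("md_clear", "MDS")]

def cpu_suffixes(flags_line):
    flags = set(flags_line.split())
    return [name for flag, name in SUFFIX_FLAGS if flag in flags]

# On a real host, feed in the output of: grep -m1 '^flags' /proc/cpuinfo
sample = "fpu vme ssbd spec_ctrl md_clear syscall"
print("Broadwell-noTSX " + " ".join(cpu_suffixes(sample)))
# -> Broadwell-noTSX IBRS SSBD MDS
```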
Regards,
Luca
Unsynced entries do not self-heal during upgrade from oVirt 4.2 -> 4.3
by Goorkate, B.J.
Hi all,
I'm in the process of upgrading oVirt-nodes from 4.2 to 4.3.
After upgrading the first of our 3 oVirt/gluster nodes, there have been between 600 and 1200 unsynced entries for a week now on the upgraded node and on one not-yet-upgraded node. The third node (also not yet upgraded) says it's OK (no unsynced entries).
The cluster doesn't seem to be very busy, but somehow self-heal doesn't complete.
Is this because of the different gluster versions across the nodes, and will it resolve as soon as I've upgraded all nodes? Since it's our production cluster, I don't want to take any risks...
Does anybody recognise this problem? Of course I can provide more information if necessary.
Any hints on troubleshooting the unsynced entries are more than welcome!
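For watching whether heal is making any progress, the output of "gluster volume heal <volname> info" can be polled over time. A small sketch that sums the per-brick counts from that output; the "Brick ..." / "Number of entries: N" line format is an assumption based on the gluster versions I have at hand:

```python
# Hedged sketch: collect unsynced-entry counts per brick from the output
# of: gluster volume heal <volname> info
# The "Brick ..." / "Number of entries: N" format is an assumption.
import re

def heal_counts(text):
    counts, brick = {}, None
    for line in text.splitlines():
        if line.startswith("Brick "):
            brick = line[len("Brick "):].strip()
        m = re.match(r"Number of entries:\s*(\d+)", line)
        if m and brick:
            counts[brick] = int(m.group(1))
    return counts

# Hypothetical sample output, loosely modeled on the numbers above:
sample = """Brick node1:/gluster/brick1
Status: Connected
Number of entries: 612

Brick node2:/gluster/brick1
Status: Connected
Number of entries: 0
"""
print(heal_counts(sample))
```

Polling this every few minutes shows whether the counts are shrinking (heal is slow but working) or stuck (something is actually wrong).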
Thanks in advance!
Regards,
Bertjan
------------------------------------------------------------------------------
This message may contain confidential information and is intended exclusively
for the addressee. If you receive this message unintentionally, please do not
use the contents but notify the sender immediately by return e-mail. University
Medical Center Utrecht is a legal person by public law and is registered at
the Chamber of Commerce for Midden-Nederland under no. 30244197.
Please consider the environment before printing this e-mail.