[ovirt-users] Couple issues found with oVirt 3.6.0 Third Beta Release

SULLIVAN, Chris (WGK) Chris.Sullivan at woodgroupkenny.com
Thu Aug 27 21:19:47 EDT 2015


Hi,

I recently re-installed my test oVirt environment, upgrading from 3.5.2 to 3.6.0 beta 3 (Engine 3.6.0-0.0.master.20150819134454.gite6b79a7.el7.centos, VDSM 4.17.3-0.el7.centos, GlusterFS 3.7.3-1.el7) in the process. Due to a software issue outside of oVirt I had to start with a clean install for 3.6; however, I kept all the old storage domains. All hosts and the engine are running CentOS 7.1, and I'm using hosted-engine.

During the setup/recovery process I encountered the following issues:

:: Could not create GlusterFS Data domain due to 'General Exception'
I attempted to create a Data domain using an FQDN associated with a floating IP, so that hosts could still mount the domain when the specific GlusterFS host used to define the storage was down. This FQDN was resolvable and contactable from each host in the farm. The floating IP is shared between two of the four GlusterFS hosts. The logs reported an unhandled exception ('x not in list') raised by the statement below (line 340 in https://github.com/oVirt/vdsm/blob/master/vdsm/storage/storageServer.py):
    def _get_backup_servers_option(self):
        servers = [brick.split(":")[0] for brick in self.volinfo['bricks']]
        servers.remove(self._volfileserver)   #<--- Exception thrown here
        if not servers:
            return ""

        return "backup-volfile-servers=" + ":".join(servers)

My assumption (without looking too deeply into the code) was that since I used an FQDN that did not have any bricks associated with it, 'self._volfileserver' would be set to a name that would not appear in 'servers', resulting in the exception. I patched it as follows:
    def _get_backup_servers_option(self):
        servers = [brick.split(":")[0] for brick in self.volinfo['bricks']]
        if self._volfileserver in servers:
            self.log.warn("Removing current volfileserver %s..." % self._volfileserver)
            servers.remove(self._volfileserver)
        else:
            self.log.warn("Current volfileserver not in servers.")
        if not servers:
            return ""

        return "backup-volfile-servers=" + ":".join(servers)

With this patch in place the Data domain was created successfully and appears to be working normally, although I'm not sure whether the change has any negative knock-on effects elsewhere in the code or in specific situations. I'd suggest that someone with more knowledge of the code tweak _get_backup_servers_option to handle this configuration gracefully, either by allowing it or, if it is intended to be unsupported, by rejecting it with a suitable error message (e.g. along the lines sketched below).
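
For the 'reject it' option, something like the following would at least fail with a clear message instead of an unhandled exception (rough sketch only; the exception type is a placeholder, not the proper VDSM storage exception):

    def _get_backup_servers_option(self):
        servers = [brick.split(":")[0] for brick in self.volinfo['bricks']]
        if self._volfileserver not in servers:
            # Placeholder exception type; a real patch would presumably raise
            # the appropriate VDSM storage exception instead.
            raise RuntimeError("Mount server %s is not a brick host of this "
                               "volume" % self._volfileserver)
        servers.remove(self._volfileserver)
        if not servers:
            return ""

        return "backup-volfile-servers=" + ":".join(servers)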

:: Could not import VMs from old Data domain due to unsupported video type (VGA)
Once the new data center was up and running, I attached the old Data domain and attempted to import the VMs and templates. Template import worked fine; however, VM import failed with an error stating that the video device (which came up as VGA) was not supported. I attempted to fix this by explicitly setting the video type to 'qxl' in the VM's .ovf file in the OVF_STORE of the old storage domain, but the VM always came up with video type VGA in the import dialog, and the dialog does not permit the value to be changed.

The workaround was to add 'vnc/vga' to the supported protocols list in a .properties file in the engine OSinfo folder, e.g.:
os.other.devices.display.protocols.value = spice/qxl,vnc/cirrus,vnc/qxl,vnc/vga
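
(For anyone trying to reproduce this: rather than editing the shipped defaults, the override can probably be dropped into its own file under /etc/ovirt-engine/osinfo.conf.d/ and picked up after an ovirt-engine service restart; the filename below is just an example.)

    # /etc/ovirt-engine/osinfo.conf.d/99-local-display.properties (example name)
    os.other.devices.display.protocols.value = spice/qxl,vnc/cirrus,vnc/qxl,vnc/vga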

Once the engine was restarted, the VM import process worked fine, and there have been no issues starting the VM with a VGA device or accessing its console. To resolve the issue I'd suggest one of the following:
- add 'vnc/vga' to the default supported protocols list; or
- have the import dialog recognise and honour the video type defined in the .ovf file of the VM being imported; or
- if the import dialog defaults to a particular video device, have it default to one that the engine supports for the OS defined in the VM's .ovf file.

I can create Bugzilla entries for the above if required.

Cheers,

Chris





-----Original Message-----
Date: Thu, 20 Aug 2015 16:06:29 +0200
From: Sandro Bonazzola <sbonazzo at redhat.com>
To: announce at ovirt.org, devel at ovirt.org, users <users at ovirt.org>
Subject: [ovirt-users] [ANN] oVirt 3.6.0 Third Beta Release is now available for testing
Message-ID: <CAPQRNT=3VKsCpmL7PKS9zJ1LO9pm7iOBC1hJcFyiozHXrTKZzQ at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

The oVirt Project is pleased to announce the availability of the Third Beta
release of oVirt 3.6 for testing, as of August 20th, 2015.

oVirt is an open source alternative to VMware vSphere, and provides an
excellent KVM management interface for multi-node virtualization.
This release is available now for Fedora 22,
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).

This release supports Hypervisor Hosts running
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar),
Fedora 21 and Fedora 22.
Highly experimental support for Debian 8.1 Jessie has been added too.

This release of oVirt 3.6.0 includes numerous bug fixes.
See the release notes [1] for an initial list of the new features and bugs
fixed.

Please refer to release notes [1] for Installation / Upgrade instructions.
New Node ISO and oVirt Live ISO will be available soon as well [2].

Please note that mirrors [3] usually need about one day to synchronize.

Please refer to the release notes for known issues in this release.

[1] http://www.ovirt.org/OVirt_3.6_Release_Notes
[2] http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/iso/
    (oVirt Node ISO: http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/iso/ovirt-node/)
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors

--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


