Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109
Tel: +972 (9) 7692306 / 8272306
Email: ydary@redhat.com
IRC: ydary
On Fri, Aug 28, 2015 at 4:19 AM, SULLIVAN, Chris (WGK) <Chris.Sullivan@woodgroupkenny.com> wrote:
Hi,
I recently re-installed my test oVirt environment, upgrading from 3.5.2 to
3.6.0 beta 3 (Engine 3.6.0-0.0.master.20150819134454.gite6b79a7.el7.centos,
VDSM 4.17.3-0.el7.centos, GlusterFS 3.7.3-1.el7) in the process. Due to a
software issue outside of oVirt I had to start with a clean install for
3.6; however, I kept all the old storage domains. All hosts and the engine
are running CentOS 7.1, and I'm using hosted-engine.
During the setup/recovery process I encountered the following issues:
:: Could not create GlusterFS Data domain due to 'General Exception'
I attempted to create a Data domain using an FQDN associated with a
floating IP, so that hosts could still mount the domain when the specific
GlusterFS host used to define the storage was down. This FQDN was
resolvable and reachable from each host in the farm. The floating IP is
shared between two of the four GlusterFS hosts. The logs reported an
unhandled exception ('x not in list') raised by the statement below (line
340 in
https://github.com/oVirt/vdsm/blob/master/vdsm/storage/storageServer.py ):
    def _get_backup_servers_option(self):
        servers = [brick.split(":")[0] for brick in self.volinfo['bricks']]
        servers.remove(self._volfileserver)  # <--- Exception thrown here
        if not servers:
            return ""
        return "backup-volfile-servers=" + ":".join(servers)
My assumption (without looking too deeply into the code) was that since I
used an FQDN that did not have any bricks associated with it,
'self._volfileserver' would be set to a name that would not appear in
'servers', resulting in the exception. I patched it as follows:
    def _get_backup_servers_option(self):
        servers = [brick.split(":")[0] for brick in self.volinfo['bricks']]
        if self._volfileserver in servers:
            self.log.warn("Removing current volfileserver %s..." %
                          self._volfileserver)
            servers.remove(self._volfileserver)
        else:
            self.log.warn("Current volfileserver not in servers.")
        if not servers:
            return ""
        return "backup-volfile-servers=" + ":".join(servers)
Once patched, the Data domain was created successfully and appears to be
working normally, although I'm not sure whether the above change has any
negative knock-on effects elsewhere in the code or in specific situations.
I'd suggest that someone with more knowledge of the code tweak the
_get_backup_servers_option method to handle this configuration gracefully,
either by allowing it or by rejecting it with a suitable error message if
the configuration is intended to be unsupported.
:: Could not import VMs from old Data domain due to unsupported video type (VGA)
Once the new data center was up and running, I attached the old Data
domain and attempted to import the VMs/templates. Template import worked
fine; however, VM import failed with an error stating that the video device
(which came up as VGA) was not supported. I attempted to fix this by
explicitly defining the video type as 'qxl' in the VM's .ovf file in the
OVF_STORE of the old storage domain, but the VM would always come up with
video type VGA in the import dialog, and the dialog does not permit the
value to be changed.
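For what it's worth, the .ovf edit I attempted was roughly equivalent to
the sketch below. The element names ('Item', 'Type', 'Device') are
assumptions about oVirt's OVF dialect from memory, so verify them against a
real .ovf before relying on this:

    # Hypothetical sketch only: the element names are assumptions about
    # oVirt's OVF dialect and should be checked against a real .ovf file.
    import xml.etree.ElementTree as ET

    def set_video_device(ovf_path, device="qxl"):
        tree = ET.parse(ovf_path)
        for item in tree.iter("Item"):
            dev_type = item.find("Type")
            dev = item.find("Device")
            if dev_type is not None and dev_type.text == "video" and dev is not None:
                dev.text = device  # e.g. "vga" -> "qxl"
        tree.write(ovf_path)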
The workaround was to add 'vnc/vga' to the supported protocols list in a
.properties file in the engine OSinfo folder, e.g.:

    os.other.devices.display.protocols.value = spice/qxl,vnc/cirrus,vnc/qxl,vnc/vga
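In full, the override was equivalent to a drop-in file like the following
(the file name here is arbitrary; /etc/ovirt-engine/osinfo.conf.d/ is the
engine's osinfo drop-in directory on a standard install, but verify the
path on yours):

    # /etc/ovirt-engine/osinfo.conf.d/99-display-vga.properties (hypothetical name)
    # Higher-numbered files override the shipped 00-defaults.properties.
    os.other.devices.display.protocols.value = spice/qxl,vnc/cirrus,vnc/qxl,vnc/vga
    # Restart the ovirt-engine service afterwards for the change to take effect.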
Once the engine was restarted, the VM import process worked fine, and there
have been no issues starting the VM with a VGA device or accessing the VM's
console. To resolve the issue I'd suggest that either:
- 'vnc/vga' be added to the default supported protocols list; or
- the video type defined in the .ovf file for the VM to be imported is
recognized/honoured by the import dialog; or
- if the import dialog defaults to a particular video device, that it
default to one that is supported by the engine for the OS defined in the
VM's .ovf file.
I can create Bugzilla entries for the above if required.
Please do.
Thanks!
Cheers,
Chris
-----------------------------
-----Original Message-----
Date: Thu, 20 Aug 2015 16:06:29 +0200
From: Sandro Bonazzola <sbonazzo@redhat.com>
To: announce@ovirt.org, devel@ovirt.org, users <users@ovirt.org>
Subject: [ovirt-users] [ANN] oVirt 3.6.0 Third Beta Release is now available for testing
The oVirt Project is pleased to announce the availability of the Third Beta
release of oVirt 3.6 for testing, as of August 20th, 2015.
oVirt is an open source alternative to VMware vSphere, and provides an
excellent KVM management interface for multi-node virtualization.
This release is available now for Fedora 22,
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar),
Fedora 21 and Fedora 22.
Highly experimental support for Debian 8.1 Jessie has been added too.
This release of oVirt 3.6.0 includes numerous bug fixes.
See the release notes [1] for an initial list of the new features and bugs
fixed.
Please refer to the release notes [1] for installation / upgrade
instructions. New Node ISO and oVirt Live ISO images will be available soon
as well [2]. Please note that mirrors [3] usually need about one day to
synchronize.
Please refer to the release notes for known issues in this release.
[1] https://urldefense.proofpoint.com/v2/url?u=http-3A__www.ovirt.org_OVirt-5...
[2] https://urldefense.proofpoint.com/v2/url?u=http-3A__plain.resources.ovirt...
[3] https://urldefense.proofpoint.com/v2/url?u=http-3A__www.ovirt.org_Reposit...
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users