Export VM from oVirt/RHEV to VMware
by Colin Coe
Hi all
We run RHEV exclusively, and I need to export a guest to one of our vendors
for analysis. The vendor uses VMware. How can I export a VM in OVF format
out of RHEV v3.5.8?
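In case it helps frame the question, the rough path I have in mind is to
export the VM to an NFS export domain and then convert its disk image for
the vendor. This is only a sketch from memory; the file names and source
format below are placeholders:
----------------------------------------------------------
# After exporting the VM from the Admin Portal (VM -> Export), the OVF
# descriptor and disk images land on the export domain's NFS share.
# Convert the disk for the vendor with qemu-img (adjust -f to the actual
# source format, raw or qcow2):
$ qemu-img convert -p -f qcow2 -O vmdk exported-disk-image vm-for-vendor.vmdk
----------------------------------------------------------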
Thanks
CC
7 years, 11 months
Error creating a storage pool
by Sergei Hanus
Hi, All!
When I try to attach an iSCSI device from my storage system, oVirt complains
that it cannot create a storage pool.
In the logs I can see only this:
----------------------------------------------------------
RuntimeError(u'Error creating a storage pool:
(u"spUUID=3fcace71-cd2b-4ab4-9d64-8ef926ead05a,
poolName=hosted_datacenter, masterDom=71b0d510-c24a-49fa-86ec-6b5ebf530140,
domList=[u\'71b0d510-c24a-49fa-86ec-6b5ebf530140\',
u\'793985b7-9d11-44a4-bf68-4b509a967a52\'], masterVersion=1, clusterlock
params: ({\'LEASETIMESEC\': 60, \'LOCKPOLICY\': \'ON\', \'IOOPTIMEOUTSEC\':
10, \'LEASERETRIES\': 3, \'LOCKRENEWALINTERVALSEC\': 5})",)',), <traceback
object at 0x35935a8>)
-----------------------------------------------------------
I have verified that this block device can be successfully connected and
partitioned on the same server using standard tools (iscsiadm, parted,
mount), so the storage system on its own is working fine.
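For reference, the manual check I ran looked roughly like this (the portal
address and IQN below are placeholders, not the real ones):
----------------------------------------------------------
# Discover and log in to the target (placeholder portal/IQN)
$ iscsiadm -m discovery -t sendtargets -p 192.168.1.100
$ iscsiadm -m node -T iqn.2016-01.com.example:storage.lun1 -p 192.168.1.100 --login
# Partition, format and mount the new block device to confirm it is usable
$ parted /dev/sdX mklabel gpt
$ parted /dev/sdX mkpart primary 0% 100%
$ mkfs.xfs /dev/sdX1 && mount /dev/sdX1 /mnt/test
----------------------------------------------------------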
My question is: what are the requirements for a block device to be compliant
with oVirt? I know that the block size must be 512 bytes, but is there
anything else?
Thank you in advance, any comments appreciated.
Sergei.
7 years, 11 months
oVirt Reports
by Marcin Michta
Hi,
Can someone tell me what kind of information I can get from oVirt Reports?
The oVirt web page says very little about it.
Screenshots would be helpful.
Thank you,
Marcin
7 years, 11 months
Live migration Centos 7.3 -> Centos 7.2
by Markus Stockhausen
Hi there,

maybe I missed the discussion on the mailing list. Today we installed
a new CentOS host. Of course it has 7.3 and qemu 2.6 after a yum update.
It can be attached to our cluster without problems. We are running oVirt
4.0.6 but the cluster compatibility level is still 3.6.

We can migrate a VM from qemu 2.3 to 2.6.
We cannot migrate a VM from qemu 2.6 to 2.3.

What happens:

- qemu is started on the target host (CentOS 7.2)
- source qemu says: "initiating migration"
- dominfo on the target gives:
Id:             21
Name:           testvm
UUID:           d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b
OS Type:        hvm
State:          paused
CPU(s):         2
CPU time:       48.5s
Max memory:     8388608 KiB
Used memory:    8388608 KiB
Persistent:     no
Autostart:      disabled
Managed save:   no
Security model: selinux
Security DOI:   0
Security label: system_u:system_r:svirt_t:s0:c344,c836 (enforcing)

Has anyone experienced this behaviour? Maybe it is desired?
Current software versions:

CentOS 7.2 host:
- libvirt 1.2.17-13.el7_2.6
- qemu 2.3.0-31.el7.21.1

CentOS 7.3 host:
- libvirt 2.0.0-10.el7_3.2
- qemu 2.6.0-27.1.el7

oVirt engine:
- ovirt 4.0.6
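If it helps, this is roughly how the machine types on both sides could be
compared (the VM name is just our test VM, and the paths are the stock
CentOS ones):
----------------------------------------------------------
# On the 7.3 source host: which machine type the running guest was started with
$ virsh dumpxml testvm | grep -i 'machine='
# On the 7.2 target host: which machine types its qemu-kvm actually supports
$ /usr/libexec/qemu-kvm -machine help
----------------------------------------------------------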
Thanks in advance.

Markus
7 years, 11 months
glusterfs heal issues
by Gary Pedretty
This is a self hosted Glusterized setup, with 3 hosts. I have had some
glusterfs data storage domains have some disk issues where healing was
required. The self heal seemed to start up and the oVirt Management
portal showed healing taking place in the Volumes/Brick tab. Later it
showed everything ok. This is a replica 3 volume. I noticed however
that the brick tab was not showing even use of the 3 bricks, and looking
on the actual hosts a df command also shows uneven use of the bricks.
However gluster volume heal (vol) info shows zero entries for all
bricks. There are no errors reported in the Data Center or Cluster, yet
I see this uneven use of the bricks across the 3 hosts.

Doing a gluster volume status (vol) detail indeed shows different free
disk space across the different bricks. However the Inode Count and
Free Inodes are identical across all bricks.

I thought replica 3 was supposed to be mirrored across all nodes. Any
idea why I am seeing the uneven use, or is this just something about
glusterfs that is different when it comes to free space vs Inode Count?
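For reference, the kind of output I am comparing looks roughly like this
(the volume name "data" and the brick mount path are placeholders for the
real ones):
----------------------------------------------------------
# Per-brick capacity and inode figures, as reported by gluster itself
$ gluster volume status data detail | grep -E 'Brick|Disk Space Free|Inode Count|Free Inodes'
# Pending heal entries per brick (all show zero here)
$ gluster volume heal data info
# Raw usage on each host's brick mount point
$ df -h /path/to/brick
----------------------------------------------------------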
Gary
------------------------------------------------------------------------
Gary Pedretty                                   gary(a)ravnalaska.net
Systems Manager                                 www.flyravn.com
Ravn Alaska                      /\             907-450-7251
5245 Airport Industrial Road    /  \/\          907-450-7238 fax
Fairbanks, Alaska  99709      /\  /   \ \       Second greatest commandment
Serving All of Alaska        /  \/  /\  \ \/\   "Love your neighbor as
Really loving the record green up date! Summmer!!   yourself" Matt 22:39
------------------------------------------------------------------------
7 years, 11 months
Re: [ovirt-users] OVF disk errors
by Pavel Gashev
Michael,
oVirt 3.6 doesn't work well on CentOS 7.3. Upgrade vdsm to 4.17.35.
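Roughly, on each host (with the host in maintenance from the engine first;
package and service names are the standard CentOS ones, so treat this as a
sketch rather than the exact procedure):
----------------------------------------------------------
$ rpm -q vdsm                 # check the currently installed version
$ yum update vdsm\*           # bring it up to 4.17.35 (or a later 3.6.z build)
$ systemctl restart vdsmd
----------------------------------------------------------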
-----Original Message-----
From: <users-bounces(a)ovirt.org> on behalf of Michael Watters <Michael.Watters(a)dart.biz>
Date: Thursday 5 January 2017 at 23:12
To: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: [ovirt-users] OVF disk errors
Hello,
I have two hosts in a cluster running ovirt 3.6 and I keep seeing
errors in the event log as follows.
Jan 5, 2017 2:34:01 PM

Host ovirt-node-production2 power management was verified successfully.

Jan 5, 2017 2:34:01 PM

Status of host ovirt-node-production2 was set to Up.

Jan 5, 2017 2:33:58 PM
Executing power management status on Host ovirt-node-production2 using Proxy Host ovirt-node-production1 and Fence Agent ipmilan:1.2.3.4

Jan 5, 2017 2:33:48 PM

Failed to update OVF disks c7c567a3-ebd5-4e3a-bf1e-66080e8a09b4, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain 2-Production-Faster).

Jan 5, 2017 2:33:47 PM

Host ovirt-node-production2 is not responding. It will stay in Connecting state for a grace period of 104 seconds and after that an attempt to fence the host will be issued.

Jan 5, 2017 2:33:47 PM

Failed to update OVF disks 389bd0fe-804c-428d-9de1-640d83fe9a29, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain 1-Production-Slower).

Jan 5, 2017 2:33:42 PM

VDSM ovirt-node-production2 command failed: Logical Volume extend failed
What concerns me the most is the last two errors. I've verified that all disks are online and all volume groups are working as expected. Is there a way to manually update an OVF disk? Here is what the engine database shows for these disk IDs.
engine=# select * from storage_domains_ovf_info where ovf_disk_id = 'c7c567a3-ebd5-4e3a-bf1e-66080e8a09b4' ;
storage_domain_id | status | ovf_disk_id | stored_ovfs_ids | last_updated
--------------------------------------+--------+--------------------------------------+-----------------+----------------------------
32f7c737-c1ee-4d2e-82a7-1b5e6efe0cf8 | 1 | c7c567a3-ebd5-4e3a-bf1e-66080e8a09b4 | | 2016-11-21 13:48:55.756-05
(1 row)
engine=# select * from storage_domains_ovf_info where ovf_disk_id = '389bd0fe-804c-428d-9de1-640d83fe9a29' ;
storage_domain_id | status | ovf_disk_id | stored_ovfs_ids | last_updated
--------------------------------------+--------+--------------------------------------+-----------------+----------------------------
52e48bb6-e477-41fe-aa25-69fc04b47c98 | 1 | 389bd0fe-804c-428d-9de1-640d83fe9a29 | | 2016-12-31 00:13:58.924-05
(1 row)
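In case it matters, would the engine-side OVF update interval be relevant
here? I believe it can be read with engine-config, though the exact key name
below is from memory and may need checking:
----------------------------------------------------------
$ engine-config -g OvfUpdateIntervalInMinutes
----------------------------------------------------------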
7 years, 11 months
Upgrade cluster + adding nodes
by Davide Ferrari
Hello everyone,

currently I have a 4-node oVirt 4.0.4 cluster running on CentOS 7.2.
Given that we now have oVirt 4.0.6 and CentOS 7.3, what's the best update
path? I was reading the threads about qemu-kvm-ev 2.6 from when CentOS 7.3
was released, but I was wondering whether anything has changed with the
4.0.6 release.
Moreover, I'd like to add 4 more nodes in the near future. I guess all
nodes should be on the same version, and thus I must fully upgrade the
original 4 nodes (both OS and oVirt) before adding new, fully up-to-date
nodes, right?
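For clarity, the rolling sequence I have in mind, just a sketch of what I
understood from the docs and unverified for this exact combination:
----------------------------------------------------------
# 1. Engine first
$ yum update "ovirt-*-setup*" && engine-setup
# 2. Each host in turn: put it in maintenance from the UI, then
$ yum update        # pulls in CentOS 7.3 and qemu-kvm-ev 2.6
$ reboot
# 3. Activate the host again before moving on to the next one
----------------------------------------------------------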
Thanks!
--
Davide Ferrari
Senior Systems Engineer
7 years, 11 months
oVirt Engine 4.0.5 and CentOS 7.3 Instability
by Rogério Ceni Coelho
Hi oVirt Gurus,
Happy new year to everyone !!!
I updated oVirt Engine from 4.0.4 to 4.0.5 and CentOS from 7.2 to 7.3 last
week, and since then I have had instability four times. Every time the oVirt
engine seems to lose communication with one or more node servers, like in
the image below. Every time I rebooted the oVirt engine server and
everything came back to normal.
Anyone with this kind of problem ???
[image: pasted1]
After reboot :
[image: pasted2]
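In case it is useful, these are the first things I plan to grep the next
time it happens (the log paths are the stock ones, the search patterns are
just a guess):
----------------------------------------------------------
# On the engine
$ grep -iE 'heartbeat|timeout|not responding' /var/log/ovirt-engine/engine.log
# On an affected host
$ grep -iE 'error|warn' /var/log/vdsm/vdsm.log | tail -100
----------------------------------------------------------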
7 years, 11 months