Re: [ovirt-users] Fwd: Re: urgent issue
by Ravishankar N
Hi Chris,
Replies inline..
On 09/22/2015 09:31 AM, Sahina Bose wrote:
>
>
>
> -------- Forwarded Message --------
> Subject: Re: [ovirt-users] urgent issue
> Date: Wed, 9 Sep 2015 08:31:07 -0700
> From: Chris Liebman <chris.l(a)taboola.com>
> To: users <users(a)ovirt.org>
>
>
>
> Ok - I think I'm going to switch to local storage - I've had way too
> many unexplainable issues with glusterfs :-(. Is there any reason I
> can't add local storage to the existing shared-storage cluster? I see
> that the menu item is greyed out....
>
>
What version of gluster and ovirt are you using?
>
>
>
> On Tue, Sep 8, 2015 at 4:19 PM, Chris Liebman <chris.l(a)taboola.com
> <mailto:chris.l@taboola.com>> wrote:
>
> It's possible that this is specific to just one gluster volume...
> I've moved a few VM disks off of that volume and am able to start
> them fine. My recollection is that any VM started on the "bad"
> volume causes it to be disconnected and forces the ovirt node to
> be marked down until Maint->Activate.
>
> On Tue, Sep 8, 2015 at 3:52 PM, Chris Liebman
> <chris.l(a)taboola.com> wrote:
>
> In attempting to put an ovirt cluster in production I'm
> running into some odd errors with gluster, it looks like. It's
> 12 hosts, each with one brick in distributed-replicate
> (actually 2 bricks, but they are separate volumes).
>
These 12 nodes in dist-rep config, are they in replica 2 or replica 3?
The latter is what is recommended for VM use-cases. Could you give the
output of `gluster volume info` ?
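For reference, the replica count shows up in the "Number of Bricks" line of
that output ("N x 2 = M" means replica 2, "N x 3 = M" means replica 3); the
command can be run on any of the storage nodes, e.g.:

    # gluster volume info LADC-TBX-V02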
>
> [root@ovirt-node268 glusterfs]# rpm -qa | grep vdsm
>
> vdsm-jsonrpc-4.16.20-0.el6.noarch
>
> vdsm-gluster-4.16.20-0.el6.noarch
>
> vdsm-xmlrpc-4.16.20-0.el6.noarch
>
> vdsm-yajsonrpc-4.16.20-0.el6.noarch
>
> vdsm-4.16.20-0.el6.x86_64
>
> vdsm-python-zombiereaper-4.16.20-0.el6.noarch
>
> vdsm-python-4.16.20-0.el6.noarch
>
> vdsm-cli-4.16.20-0.el6.noarch
>
>
> Everything was fine last week; however, today various
> clients in the gluster cluster seem to get "client quorum not
> met" periodically - when they get this they take one of the
> bricks offline - this causes VMs to attempt to migrate -
> sometimes 20 at a time. That takes a long time :-(. I've
> tried disabling automatic migration and the VMs get paused
> when this happens - resuming gets nothing at that point as the
> volume's mount on the server hosting the VM is not connected:
>
>
> from
> rhev-data-center-mnt-glusterSD-ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02.log:
>
> [2015-09-08 21:18:42.920771] W [MSGID: 108001]
> [afr-common.c:4043:afr_notify] 2-LADC-TBX-V02-replicate-2:
> Client-quorum is not met
>
When client-quorum is not met (due to network disconnects, or gluster
brick processes going down etc), gluster makes the volume read-only.
This is expected behavior and prevents split-brains. It's probably a bit
late, but do you have the gluster fuse mount logs to confirm this
indeed was the issue?
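If the volume was tuned for VM use (for example via the gluster "virt" option
group) these are usually set already, but the quorum-related options can be
checked or adjusted with something like the following sketch (volume name
taken from the log below):

    # gluster volume info LADC-TBX-V02 | grep -i quorum
    # gluster volume set LADC-TBX-V02 cluster.quorum-type auto
    # gluster volume set LADC-TBX-V02 cluster.server-quorum-type server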
> [2015-09-08 21:18:42.931751] I
> [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
> /rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02
>
> [2015-09-08 21:18:42.931836] W
> [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7a51) [0x7f1bebc84a51]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x
>
> 65) [0x4059b5] ) 0-: received signum (15), shutting down
>
> [2015-09-08 21:18:42.931858] I [fuse-bridge.c:5595:fini]
> 0-fuse: Unmounting
> '/rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02'.
>
The VM pause you saw could be because of the unmount. I understand that a
fix (https://gerrit.ovirt.org/#/c/40240/) went in for ovirt 3.6
(vdsm-4.17) to prevent vdsm from unmounting the gluster volume when vdsm
exits/restarts.
Is it possible to run a test setup on 3.6 and see if this is still
happening?
>
> And the mount is broken at that point:
>
> [root@ovirt-node267 ~]# df
>
> *df:
> `/rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02':
> Transport endpoint is not connected*
>
Yes, because it received a SIGTERM above.
Thanks,
Ravi
>
> Filesystem                                             1K-blocks       Used   Available Use% Mounted on
> /dev/sda3                                               51475068    1968452    46885176   5% /
> tmpfs                                                  132210244          0   132210244   0% /dev/shm
> /dev/sda2                                                 487652      32409      429643   8% /boot
> /dev/sda1                                                 204580        260      204320   1% /boot/efi
> /dev/sda5                                             1849960960  156714056  1599267616   9% /data1
> /dev/sdb1                                             1902274676   18714468  1786923588   2% /data2
> ovirt-node268.la.taboolasyndication.com:/LADC-TBX-V01
>                                                       9249804800  727008640  8052899712   9% /rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V01
> ovirt-node251.la.taboolasyndication.com:/LADC-TBX-V03
>                                                       1849960960      73728  1755907968   1% /rhev/data-center/mnt/glusterSD/ovirt-node251.la.taboolasyndication.com:_LADC-TBX-V03
>
> The fix for that is to put the server in maintenance mode and then
> activate it again. But all VMs need to be migrated or stopped
> for that to work.
>
>
> I'm not seeing any obvious network or disk errors...
>
> Are there configuration options I'm missing?
>
>
>
>
>
adding gluster domains
by Brett Stevens
Hi. First time on the lists. I've searched for this but no luck so sorry if
this has been covered before.
I'm working with the latest 3.6 beta with the following infrastructure:
1 management host (to be used for a number of tasks, so I chose not to use
self-hosted; we are a school and need to keep an eye on hardware costs)
2 compute nodes
2 gluster nodes
So far I've built one gluster volume using the gluster CLI, giving me 2 nodes
and one arbiter node (the management host).
So far, every time I create a volume, it shows up straight away in the ovirt
GUI. However, no matter what I try, I cannot create or import it as a data
domain.
The current error in the ovirt GUI is "Error while executing action
AddGlusterFsStorageDomain: Error creating a storage domain's metadata".
The logs are continuously rolling the following errors around:
Scheduler_Worker-53) [] START, GlusterVolumesListVDSCommand(HostName =
sjcstorage02, GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 24198fbf
2015-09-22 03:57:29,903 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could not associate brick
'sjcstorage01:/export/vmstore/brick01' of volume
'878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no gluster
network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 03:57:29,905 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could not associate brick
'sjcstorage02:/export/vmstore/brick01' of volume
'878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no gluster
network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 03:57:29,905 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'878a316d-2394-4aae-bdf8-e10eea38225e' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 03:57:29,905 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-53) [] FINISH, GlusterVolumesListVDSCommand,
return:
{878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
log id: 24198fbf
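A quick cross-check for the "server uuid ... not found in cluster" warning,
assuming the standard gluster CLI and that sjcvhost02 is the arbiter host, is
to compare the peer UUIDs gluster reports with the hosts that have actually
been added to the oVirt cluster:

    # gluster peer status    # run on sjcstorage01; lists the UUIDs of the other peers
    # gluster volume info    # confirms which host:brick pairs back the volume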
I'm new to ovirt and gluster, so any help would be great
thanks
Brett Stevens
Re: [ovirt-users] Password getting failed while Conversion
by Richard W.M. Jones
On Mon, Sep 21, 2015 at 02:51:32PM -0400, Douglas Schilling Landgraf wrote:
> Hi Budur,
>
> On 09/21/2015 03:39 AM, Budur Nagaraju wrote:
> >Hi
> >
> >While converting vwware to ovirt getting below error ,can someone help me ?
Which version of virt-v2v?
The latest version can be found by reading the instructions here:
https://www.redhat.com/archives/libguestfs/2015-April/msg00038.html
https://www.redhat.com/archives/libguestfs/2015-April/msg00039.html
Please don't use the old (0.9) version.
> >I have given the password in the file "$HOME/.netrc",
> >
> >[root@cstnfs ~]# virt-v2v -ic esx://10.206.68.57?no_verify=1 -o rhev -os
> >10.204.206.10:/cst/secondary --network perfmgt vm
> >virt-v2v: Failed to connect to esx://10.206.68.57?no_verify=1: libvirt
> >error code: 45, message: authentication failed: Password request failed
>
> Have you used the below format in the .netrc?
> machine esx.example.com login root password s3cr3t
>
> Additionally, have you set 0600 as permission to .netrc?
> chmod 600 ~/.netrc
The new version of virt-v2v does not use '.netrc' at all. Instead
there is a '--password-file' option. Best to read the manual page:
http://libguestfs.org/virt-v2v.1.html
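For example, an untested sketch reusing the command line from the original
report (the password file path is arbitrary):

    # echo 'yourpassword' > /root/esx-passwd
    # chmod 600 /root/esx-passwd
    # virt-v2v -ic esx://10.206.68.57?no_verify=1 --password-file /root/esx-passwd \
        -o rhev -os 10.204.206.10:/cst/secondary --network perfmgt vm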
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
noVNC error
by Michael Kleinpaste
So when I used oVirt 3.4.x, noVNC worked wonderfully. We're running
3.5.1.1-1.el6 now, and when I try to connect to a VM's console via noVNC I
get this error:
[image: Screen Shot 2015-09-21 at 10.09.23 AM.png]
I've downloaded the ca.crt file and installed it but I still get an HTTPS
error when connecting to the oVirt management console. Looking at the SSL
information Chrome says the following:
"The identity of this website has been verified by
ovirtm01.sharperlending.aws.96747. No Certificate Transparency information
was supplied by the server.
The certificate chain for this website contains at least one certificate
that was signed using a deprecated signature algorithm based on SHA-1."
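If it helps to confirm, the signature algorithms in the engine's certificate
chain can be checked with something like this (hostname is a placeholder):

    $ echo | openssl s_client -connect ovirtm01.example.com:443 2>/dev/null \
        | openssl x509 -noout -text | grep 'Signature Algorithm'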
Is this a known issue?
Thanks,
--
*Michael Kleinpaste*
Senior Systems Administrator
SharperLending, LLC.
www.SharperLending.com
Michael.Kleinpaste(a)SharperLending.com
(509) 324-1230 Fax: (509) 324-1234
[ovirt 3.5.4] cannot clone vm after updating to 3.5.4
by wodel youchi
Hi all,
We have a hosted-engine deployment with two hypervisors and iSCSI for VMs
and NFS4 for the VM engine + ISO + Export.
Yesterday we did an update from ovirt 3.5.3 to 3.5.4 along with OS updates
for the hypervisors and the VM engine.
After that, we are unable to clone VMs; the task does not finish.
We have this in the vdsm log:
Thread-38::DEBUG::2015-09-21
11:50:42,374::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
element is not present
Thread-66::DEBUG::2015-09-21
11:50:43,721::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
element is not present
Thread-67::DEBUG::2015-09-21
11:50:44,189::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
element is not present
vdsm-python-4.16.26-0.el7.centos.noarch
vdsm-4.16.26-0.el7.centos.x86_64
vdsm-xmlrpc-4.16.26-0.el7.centos.noarch
vdsm-yajsonrpc-4.16.26-0.el7.centos.noarch
vdsm-jsonrpc-4.16.26-0.el7.centos.noarch
vdsm-cli-4.16.26-0.el7.centos.noarch
vdsm-python-zombiereaper-4.16.26-0.el7.centos.noarch
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-1.2.8-16.el7_1.4.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.4.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.4.x86_64
libvirt-client-1.2.8-16.el7_1.4.x86_64
Thanks in advance
Need info on importing the vms
by Budur Nagaraju
Hi
Can you please provide me the info on how to import a VMware OVA format to
oVirt format?
Thanks,
Nagaraju
Password getting failed while Conversion
by Budur Nagaraju
Hi
While converting VMware to oVirt I am getting the below error, can someone help me?
I have given the password in the file "$HOME/.netrc",
[root@cstnfs ~]# virt-v2v -ic esx://10.206.68.57?no_verify=1 -o rhev -os
10.204.206.10:/cst/secondary --network perfmgt vm
virt-v2v: Failed to connect to esx://10.206.68.57?no_verify=1: libvirt
error code: 45, message: authentication failed: Password request failed
Thanks,
Nagaraju
[ANN] [QE] Bugzilla updates for oVirt Product
by Sandro Bonazzola
The oVirt team is pleased to announce that today oVirt moved to its own
classification within our Bugzilla system as previously anticipated [1].
No longer limited to being a set of sub-projects, each building block
(sub-project) of oVirt will be a Bugzilla product.
This will allow tracking of package versions and target releases based on
their own versioning schema.
Each maintainer, for example, will have administrative rights on his or her
Bugzilla sub-project and will be able to change flags,
versions, targets, and components.
As part of the improvements of the Bugzilla tracking system, a flag system
has been added to the oVirt product in order to ease its management [2].
The changes will go into effect in stages; please review the wiki for more
details.
We invite you to review the new tracking system and get involved with oVirt
QA [3] to make oVirt better than ever!
[1] http://community.redhat.com/blog/2015/06/moving-focus-to-the-upstream/
[2] http://www.ovirt.org/Bugzilla_rework
[3] http://www.ovirt.org/OVirt_Quality_Assurance
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Python: Clone snapshot into VM
by gregor
Hi,
I am currently writing a little backup tool in Python which uses the following
workflow:
- create a snapshot -> works
- clone snapshot into VM -> help needed
- delete the snapshot -> works
- export VM to NFS share -> works
- delete cloned VM -> TODO
Is it possible to clone a snapshot into a VM like from the web-interface?
The above workflow is a little bit resource-expensive, but when it is
finished it will make online full backups of VMs.
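In case it helps, here is a rough, untested sketch of that clone step with the
Python SDK 3.x (ovirtsdk); the names, credentials, and exact parameters are
assumptions to verify against the SDK documentation:

    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    # connection details are placeholders
    api = API(url='https://engine.example.com/api',
              username='admin@internal',
              password='secret',
              insecure=True)

    vm = api.vms.get(name='myvm')
    snap = vm.snapshots.list()[0]  # the snapshot created in the previous step

    # "clone snapshot into VM": create a new VM that references the snapshot id
    api.vms.add(params.VM(
        name='myvm-clone',
        cluster=params.Cluster(id=vm.get_cluster().get_id()),
        snapshots=params.Snapshots(snapshot=[params.Snapshot(id=snap.get_id())]),
    ))

    api.disconnect()

The clone is created asynchronously, so the tool would likely still need to
wait for the new VM's disks to leave the locked state before exporting it.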
cheers
gregor
Live Storage Migration
by Markus Stockhausen
Hi,
somehow I got lost about the possibility to do a live storage migration.
We are using OVirt 3.5.4 + FC20 Nodes (virt-preview - qemu 2.1.3)
From the WebUI I have the following possibilities:
1) disk without snapshot: VMs tab -> Disks -> Move: Button is active
but it does not allow me to do a migration. No selectable storage domain
although we have 2 NFS systems. Gives warning hints about
"you are doing live migration, bla bla, ..."
2) disk with snapshot: VMs tab -> Disk -> Move: Button greyed out
3) BUT! Disks tab -> Move: Works! No hints about "live migration"
I do not dare to click go ...
While 1/2 might be consistent, they do not match 3. Maybe someone
can give a hint about what should work, what not, and where we might have
an error.
Thanks.
Markus