ovirt 4.1: migrate vm to node 2 results in lost connectivity
by yayo (j)
Hi all,
I have a simple 3-node oVirt 4.1 + Gluster setup. All works fine, but when I
create a VM on node 2 or migrate one to it, network connectivity to the VM is
lost; if I move the VM back to node 1, everything works again. Looking at the
oVirt engine, the network setup seems identical on both nodes (for now I have
only the ovirtmng network). If I ping something from node 2 itself, ping works
fine (the internet in general). Where else can I check?
thank you
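A minimal sketch of checks worth running on both nodes to compare their network state (interface names and the gateway address below are placeholders for your setup, not taken from the thread):

```shell
# Run on each host and diff the results against node 1; names like
# ovirtmgmt and 192.168.1.1 are placeholders for your environment.
ip -br link show        # look for MTU or link-state differences
ip route                # confirm both nodes use the same default gateway
bridge link             # check which ports are enslaved to the mgmt bridge
# Probe for an MTU mismatch along the path (1472 = 1500 - 28 header bytes):
ping -c 3 -M do -s 1472 192.168.1.1
```

If the don't-fragment ping fails only from node 2, an MTU mismatch on that node's uplink or bridge is a likely culprit.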
7 years, 7 months
Smtp configuration for notification
by wodel youchi
Hi,
When we installed the engine VM, the setup process asked for an SMTP server
and email addresses for notifications, but afterwards nothing happened.
Looking into the event-notification configuration, we found the file
ovirt-engine-notifier.conf and assumed the settings from setup had been
written there, but since the notifier service was disabled, notifications
were not working. In fact there was no configuration in that file at all;
once we edited it ourselves, notifications started working.
The questions are: is the SMTP configuration given at install time used at
all? If yes, when? Can we test it? And if we want to modify it, where is it
stored?
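For reference, a sketch of the SMTP settings the notifier reads (all values below are placeholders; the exact file locations and defaults can vary by oVirt version, so treat this as an assumption to verify against your install):

```shell
# Typical keys in ovirt-engine-notifier.conf (or an override dropped into
# /etc/ovirt-engine/notifier/notifier.conf.d/); values are placeholders.
MAIL_SERVER=smtp.example.com
MAIL_PORT=25
MAIL_FROM=ovirt-engine@example.com
MAIL_USER=
MAIL_PASSWORD=
```

Remember the ovirt-engine-notifier service itself must be enabled and started for any of this to take effect.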
Regards
Regenerating SSL keys
by Jamie Lawrence
The key generated by the engine install ended up with a bad CN; it has a five-digit number appended to the host name, and no SAN.
I've lived with this through setup, but now I'm getting close to prod use, and need to clean up so that it is usable for general consumption. And the SPICE HTML client is completely busted due to this; that's a problem because we're mostly MacOS on the client side, and the Mac Spice client is unusable for normal humans.
I'm wary of attempting to regenerate these manually, as I don't have a handle on how the keys are used by the various components.
What is the approved method of regenerating these keys?
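As a side note, the two defects described (bad CN, missing SAN) are easy to confirm with openssl before and after any regeneration. A sketch using a throwaway self-signed cert so the commands are runnable anywhere; on a real engine you would point them at the engine certificate instead (path is an assumption, commonly /etc/pki/ovirt-engine/certs/engine.cer). The `-addext`/`-ext` options need OpenSSL 1.1.1+.

```shell
# Create a throwaway cert purely to demonstrate the inspection commands.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/demo_engine.key -out /tmp/demo_engine.cer \
    -subj "/CN=engine.example.com" \
    -addext "subjectAltName=DNS:engine.example.com" 2>/dev/null
# Check the subject CN (this is where the stray five-digit suffix shows up):
openssl x509 -in /tmp/demo_engine.cer -noout -subject
# Check the SAN extension (empty output here means no SAN at all):
openssl x509 -in /tmp/demo_engine.cer -noout -ext subjectAltName
```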
-j
Ovirt nested in VMware
by Mark Duggan
Hi everyone,
From reading through the mailing list, it does appear that it's possible to
have the ovirt nodes/hosts be VMware virtual machines, once I enable the
appropriate settings on the VMware side. All seems to have gone well, I can
see the hosts in the ovirt interface, but when I attempt to create and
start a VM it never gets past printing the SeaBios version and the machine
UUID to the screen/console. It doesn't appear to try to boot from the hard
disk or an ISO that I've attached.
Has anyone else encountered similar behaviour?
Are there additional debug logs I can look at or enable to help further
diagnose what is happening?
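A sketch of the first things worth checking on the nested host (the VM name is a placeholder): a hang at the SeaBIOS banner often means the guest got software emulation instead of KVM because VMware isn't actually passing through hardware virtualization.

```shell
# On the VMware-hosted oVirt node: is hardware virt visible in the guest?
# (Intel shown; use 'svm' instead of 'vmx' on AMD.)
grep -c vmx /proc/cpuinfo        # should be greater than 0
# Is the kvm device present?
ls -l /dev/kvm
# Check what domain type the stuck VM actually got from libvirt:
virsh -r list
virsh -r dumpxml <vm-name> | grep '<domain'   # want type='kvm', not 'qemu'
```

If `type='qemu'` shows up, revisit the "Expose hardware assisted virtualization to the guest OS" setting on the VMware side.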
Thanks
Mark
Backup ovirt engine
by andrewzabomb@aol.com
I'm not sure how to back up the oVirt engine to an external hard drive. I plan to update to oVirt 3.6 (as I currently have 3.5), then to CentOS 7 (currently using CentOS 6), and after that to update from oVirt 3.6 to 4.0, but I want to have a backup file on an external drive before going through with the updates.
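The engine ships an engine-backup tool for exactly this; a sketch of a run, where the mount point and file names are placeholders (verify the available options against your 3.5 install before relying on it):

```shell
# Run on the engine machine; /mnt/usb is a placeholder for the mounted
# external drive.
engine-backup --mode=backup --scope=all \
    --file=/mnt/usb/engine-backup-$(date +%F).tar.bz2 \
    --log=/mnt/usb/engine-backup-$(date +%F).log
```

The matching restore is `engine-backup --mode=restore` with the same `--file`; take a fresh backup at each step of the upgrade chain, since a backup generally restores only onto the same engine version that produced it.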
Thanks
Ovirt + Citrix Xenserver support?
by santosh bhabal
Hello Experts,
I am new to the oVirt community.
Apologies if this question has been asked earlier.
I just wanted to know: does oVirt support Citrix XenServer or not?
Regards
Santosh.
Self-Hosted Engine Deploy - CephFS Not a Storage Option
by Michael Poat
Hello,
I currently have a 30-node, ~200TB raw Ceph cluster primarily used as
POSIX-compliant CephFS storage for my enclave. I am currently setting up a
two-node high-availability oVirt 4.1 cluster. To avoid any single point of
failure, I would like to use CephFS for my storage domains, and I want to use
the Self-Hosted Engine feature.
When setting up the Self-Hosted Engine and reaching the "Configuring Storage"
step, POSIX-compliant FS / CephFS is not an option...
Prompt:
  During customization use CTRL-D to abort. Please specify the storage you
  would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:
Is there a workaround for this, or any future POSIX FS/CephFS support planned
for the Self-Hosted Engine setup? I was able to create CephFS domains within
oVirt and they work great. I was also wondering whether it would be possible
to migrate the self-hosted engine VM to the CephFS storage, but that would
require shutting down the VM itself; sort of a catch-22.
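For context, a sketch of what a POSIX-compliant FS domain pointing at CephFS looks like underneath (monitor address, credentials, and mountpoint below are placeholders, not from the thread; the hosted-engine installer's storage menu is version-dependent, so this only illustrates the regular-domain case that already works):

```shell
# Manual sanity check that the CephFS export mounts the way oVirt's
# POSIX-compliant FS domain would consume it.
mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
# Roughly, the domain fields map to:
#   Path          -> mon1.example.com:6789:/
#   VFS Type      -> ceph
#   Mount Options -> name=admin,secretfile=/etc/ceph/admin.secret
```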
Thanks,
-Michael
Setting up GeoReplication
by Jim Kusznir
Hi All:
I've been trying to set up georeplication for a while now, but can't seem
to make it work. I've found documentation on the web (mostly
https://gluster.readthedocs.io/en/refactor/Administrator%20Guide/Geo%20Re...),
and I found
http://blog.gluster.org/2015/09/introducing-georepsetup-gluster-geo-repli...
Unfortunately, it seems that some critical steps are missing from both, and
I can't figure out for sure what they are.
My environment:
Production: replica 2 + arbiter running on my 3-node oVirt cluster, with 3
volumes (engine, data, iso).
New geo-replication slave: a Raspberry Pi 3 with a USB hard drive, shoved in
some other data closet off-site.
I've installed Raspbian Lite and, after much fighting, got glusterfs 3.8.11
installed. I've created my mountpoint (the USB hard drive, much larger than
my Gluster volumes) and then ran the georepsetup command. I get this far:
[ OK] georep.nwfiber.com is Reachable(Port 22)
[ OK] SSH Connection established root(a)georep.nwfiber.com
[ OK] Master Volume and Slave Volume are compatible (Version: 3.8.11)
[NOT OK] Unable to Mount Gluster Volume georep.nwfiber.com:engine-rep
Trying the steps in the Gluster docs instead hits the same problem. No log
files are generated on the slave. The log on the master includes:
[root@ovirt1 geo-replication]# more georepsetup.mount.log
[2017-05-13 17:26:27.318599] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-glusterfs: Started running glusterfs version 3.8.11 (args: glusterfs --xlator-option="*dht.lookup-unhashed=off" --volfile-server localhost --volfile-id engine -l /var/log/glusterfs/geo-replication/georepsetup.mount.log --client-pid=-1 /tmp/georepsetup_wZtfkN)
[2017-05-13 17:26:27.341170] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-05-13 17:26:27.341260] E [socket.c:2309:socket_connect_finish] 0-glusterfs: connection to ::1:24007 failed (Connection refused)
[2017-05-13 17:26:27.341846] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: localhost (Transport endpoint is not connected)
[2017-05-13 17:26:31.335849] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2017-05-13 17:26:31.337545] I [MSGID: 114020] [client.c:2356:notify] 0-engine-client-0: parent translators are ready, attempting connect on transport
[2017-05-13 17:26:31.344485] I [MSGID: 114020] [client.c:2356:notify] 0-engine-client-1: parent translators are ready, attempting connect on transport
[2017-05-13 17:26:31.345146] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 0-engine-client-0: changing port to 49157 (from 0)
[2017-05-13 17:26:31.350868] I [MSGID: 114020] [client.c:2356:notify] 0-engine-client-2: parent translators are ready, attempting connect on transport
[2017-05-13 17:26:31.355946] I [MSGID: 114057] [client-handshake.c:1440:select_server_supported_programs] 0-engine-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2017-05-13 17:26:31.356280] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 0-engine-client-1: changing port to 49157 (from 0)
Final graph:
+------------------------------------------------------------------------------+
1: volume engine-client-0
2: type protocol/client
3: option clnt-lk-version 1
4: option volfile-checksum 0
5: option volfile-key engine
6: option client-version 3.8.11
7: option process-uuid
ovirt1.nwfiber.com-25660-2017/05/13-17:26:27:311929-engine-client-0-0-0
8: option fops-version 1298437
9: option ping-timeout 30
10: option remote-host ovirt1.nwfiber.com
11: option remote-subvolume /gluster/brick1/engine
12: option transport-type socket
13: option username 028984cf-0399-42e6-b04b-bb9b1685c536
14: option password eae737cc-9659-405f-865e-9a7ef97a3307
15: option filter-O_DIRECT off
16: option send-gids true
17: end-volume
18:
19: volume engine-client-1
20: type protocol/client
21: option ping-timeout 30
22: option remote-host ovirt2.nwfiber.com
23: option remote-subvolume /gluster/brick1/engine
24: option transport-type socket
25: option username 028984cf-0399-42e6-b04b-bb9b1685c536
26: option password eae737cc-9659-405f-865e-9a7ef97a3307
27: option filter-O_DIRECT off
28: option send-gids true
29: end-volume
30:
31: volume engine-client-2
32: type protocol/client
33: option ping-timeout 30
34: option remote-host ovirt3.nwfiber.com
35: option remote-subvolume /gluster/brick1/engine
36: option transport-type socket
37: option username 028984cf-0399-42e6-b04b-bb9b1685c536
38: option password eae737cc-9659-405f-865e-9a7ef97a3307
39: option filter-O_DIRECT off
40: option send-gids true
41: end-volume
42:
43: volume engine-replicate-0
44: type cluster/replicate
45: option arbiter-count 1
46: option data-self-heal-algorithm full
47: option eager-lock enable
48: option quorum-type auto
49: option shd-max-threads 6
50: option shd-wait-qlength 10000
51: option locking-scheme granular
52: subvolumes engine-client-0 engine-client-1 engine-client-2
53: end-volume
54:
55: volume engine-dht
56: type cluster/distribute
57: option lock-migration off
58: subvolumes engine-replicate-0
59: end-volume
60:
61: volume engine-shard
62: type features/shard
63: option shard-block-size 512MB
64: subvolumes engine-dht
65: end-volume
66:
67: volume engine-write-behind
68: type performance/write-behind
69: option strict-O_DIRECT on
70: subvolumes engine-shard
71: end-volume
72:
73: volume engine-readdir-ahead
74: type performance/readdir-ahead
75: subvolumes engine-write-behind
76: end-volume
77:
78: volume engine-open-behind
79: type performance/open-behind
80: subvolumes engine-readdir-ahead
81: end-volume
82:
83: volume engine
84: type debug/io-stats
85: option log-level INFO
86: option latency-measurement off
87: option count-fop-hits off
88: subvolumes engine-open-behind
89: end-volume
90:
91: volume meta-autoload
92: type meta
93: subvolumes engine
94: end-volume
95:
+------------------------------------------------------------------------------+
[2017-05-13 17:26:31.360579] I [MSGID: 114046] [client-handshake.c:1216:client_setvolume_cbk] 0-engine-client-0: Connected to engine-client-0, attached to remote volume '/gluster/brick1/engine'.
[2017-05-13 17:26:31.360599] I [MSGID: 114047] [client-handshake.c:1227:client_setvolume_cbk] 0-engine-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2017-05-13 17:26:31.360707] I [MSGID: 108005] [afr-common.c:4387:afr_notify] 0-engine-replicate-0: Subvolume 'engine-client-0' came back up; going online.
[2017-05-13 17:26:31.360793] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-engine-client-0: Server lk version = 1
[2017-05-13 17:26:31.361284] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 0-engine-client-2: changing port to 49158 (from 0)
[2017-05-13 17:26:31.365070] I [MSGID: 114057] [client-handshake.c:1440:select_server_supported_programs] 0-engine-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2017-05-13 17:26:31.365788] I [MSGID: 114046] [client-handshake.c:1216:client_setvolume_cbk] 0-engine-client-1: Connected to engine-client-1, attached to remote volume '/gluster/brick1/engine'.
[2017-05-13 17:26:31.365821] I [MSGID: 114047] [client-handshake.c:1227:client_setvolume_cbk] 0-engine-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2017-05-13 17:26:31.366059] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-engine-client-1: Server lk version = 1
[2017-05-13 17:26:31.369948] I [MSGID: 114057] [client-handshake.c:1440:select_server_supported_programs] 0-engine-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2017-05-13 17:26:31.370657] I [MSGID: 114046] [client-handshake.c:1216:client_setvolume_cbk] 0-engine-client-2: Connected to engine-client-2, attached to remote volume '/gluster/brick1/engine'.
[2017-05-13 17:26:31.370683] I [MSGID: 114047] [client-handshake.c:1227:client_setvolume_cbk] 0-engine-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2017-05-13 17:26:31.383548] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-engine-client-2: Server lk version = 1
[2017-05-13 17:26:31.383649] I [fuse-bridge.c:4147:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22
[2017-05-13 17:26:31.383676] I [fuse-bridge.c:4832:fuse_graph_sync] 0-fuse: switched to graph 0
[2017-05-13 17:26:31.385453] I [MSGID: 108031] [afr-common.c:2157:afr_local_discovery_cbk] 0-engine-replicate-0: selecting local read_child engine-client-0
[2017-05-13 17:26:31.396741] I [fuse-bridge.c:5080:fuse_thread_proc] 0-fuse: unmounting /tmp/georepsetup_wZtfkN
[2017-05-13 17:26:31.397086] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f8838df6dc5] -->glusterfs(glusterfs_sigwaiter+0xe5) [0x7f883a488cd5] -->glusterfs(cleanup_and_exit+0x6b) [0x7f883a488b4b] ) 0-: received signum (15), shutting down
[2017-05-13 17:26:31.397112] I [fuse-bridge.c:5788:fini] 0-fuse: Unmounting '/tmp/georepsetup_wZtfkN'.
[2017-05-13 17:26:31.413901] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-glusterfs: Started running glusterfs version 3.8.11 (args: glusterfs --xlator-option="*dht.lookup-unhashed=off" --volfile-server georep.nwfiber.com --volfile-id engine -l /var/log/glusterfs/geo-replication/georepsetup.mount.log --client-pid=-1 /tmp/georepsetup_M5poIr)
[2017-05-13 17:26:31.458733] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-05-13 17:26:31.458833] E [socket.c:2309:socket_connect_finish] 0-glusterfs: connection to 192.168.8.126:24007 failed (Connection refused)
[2017-05-13 17:26:31.458886] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: georep.nwfiber.com (Transport endpoint is not connected)
[2017-05-13 17:26:31.458900] I [glusterfsd-mgmt.c:1926:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2017-05-13 17:26:31.459173] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(rpc_clnt_notify+0xdb) [0x7f18d6c89aab] -->glusterfs(+0x10309) [0x7f18d73b9309] -->glusterfs(cleanup_and_exit+0x6b) [0x7f18d73b2b4b] ) 0-: received signum (1), shutting down
[2017-05-13 17:26:31.459218] I [fuse-bridge.c:5788:fini] 0-fuse: Unmounting '/tmp/georepsetup_M5poIr'.
[2017-05-13 17:26:31.459887] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f18d5d20dc5] -->glusterfs(glusterfs_sigwaiter+0xe5) [0x7f18d73b2cd5] -->glusterfs(cleanup_and_exit+0x6b) [0x7f18d73b2b4b] ) 0-: received signum (15), shutting down
I don't know what to make of that.
On a whim, I wondered whether the georep setup simply does not create the
remote volume (I had assumed it would; I thought that was what the SSH access
was required for, and none of the instructions mention creating the
destination (replication) volume). So I tried to create it myself, but
Gluster won't let me create a volume with replica 1, and since this is
already a backup, I don't need a backup of the backup. This further supported
my suspicion that the volume is meant to be created by the georep setup
commands.
Where am I wrong / what do I need to do to fix this?
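For what it's worth, the slave volume is normally created by hand before the session is set up, and a single-brick volume is plain distribute, so no `replica` keyword is needed at all. A sketch of the usual sequence (hostnames and brick paths are placeholders matching the thread; double-check against your Gluster version):

```shell
# On the slave (georep.nwfiber.com): single brick = plain distribute.
gluster volume create engine-rep georep.nwfiber.com:/mnt/usb/engine-rep force
gluster volume start engine-rep

# On the master: create, then start, the geo-replication session.
gluster volume geo-replication engine georep.nwfiber.com::engine-rep \
    create push-pem
gluster volume geo-replication engine georep.nwfiber.com::engine-rep start
gluster volume geo-replication engine georep.nwfiber.com::engine-rep status
```

The "Connection refused ... georep.nwfiber.com:24007" lines in the log also suggest glusterd on the Pi wasn't reachable when the mount test ran, which is worth fixing before retrying.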
--Jim
Fwd: [Pkg-libvirt-discuss] Packaging oVirt for Debian
by Leni Kadali Mutungi
Forwarding this so that it is seen on oVirt's mailing list as well. In the
future, I'll try to make sure that I mail both the Debian Virtualization
team and the oVirt users list insofar as the topic is relevant to both
sides.
---------- Forwarded message ----------
From: Guido Günther <agx(a)sigxcpu.org>
Date: Mon, 15 May 2017 08:13:52 +0200
Subject: Re: [Pkg-libvirt-discuss] Packaging oVirt for Debian
To: Leni Kadali Mutungi <lenikmutungi(a)gmail.com>
On Mon, May 15, 2017 at 05:20:19AM +0300, Leni Kadali Mutungi wrote:
> Hello all. This is where I am so far with the compiling and packaging.
>
> http://lists.ovirt.org/pipermail/users/2017-April/081523.html
Nice you're making progress!
>
> I am now looking at how to enable the virt service and assign an IP
> address that ovirt-host-deploy can use.
>
> Also, it seems that the yumpackager.py and miniyum.py scripts are
> dependencies for ovirt-host-deploy. So I need to write or find a
> Python script equivalent to yumpackager.py and miniyum.py so that
> this dependency of ovirt-host-deploy is satisfied as well, because
> setting it to not use a package manager was only a short-term
> solution to get it to run successfully.
Have a look at python-apt. That will allow you to write a wrapper
similar to miniyum.py.
>
> So if I could be pointed in the direction of a suitable script that
> would work with Debian's aptitude, apt-get and apt in a similar
> manner, I'd be much obliged.
Please keep the mailing list in cc: so more people get a chance to help
you!
Cheers,
-- Guido
--
- Warm regards
Leni Kadali Mutungi
ovirt-engine4, 404 - Not Found
by David David
Hello
CentOS 7.
Fresh install of engine 4.1.
I'm getting "404 - Not Found" when opening the web UI main page over https.
How can I fix it?
engine.log is attached.