Questions about Spice Proxy
by Kevin COUSIN
Hi list,
I am trying to set up a SPICE proxy with HAProxy. It works fine for the WebUI. I defined the proxy with engine-config -s SpiceProxyDefault="http://mygreatproxy.tld:8080". Do the SPICE connections go to the nodes on the same port (8080 here, or 3128 as in the documentation), or to the engine? Do I need to install a Squid proxy instead of HAProxy?
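As far as I understand, the SPICE proxy needs to be an HTTP proxy that lets the CONNECT method through to the hosts' SPICE display ports. In case a Squid-based proxy turns out to be needed, a minimal squid.conf sketch might look like this (the 5900-6923 port range is an assumption about the default display port range on the hosts; check the vdsm configuration):
# listen on the port configured in SpiceProxyDefault
http_port 8080
# allow CONNECT tunnelling only to the hypervisors' SPICE display ports
acl spice_ports port 5900-6923
acl CONNECT method CONNECT
http_access allow CONNECT spice_ports
http_access deny all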
Regards
----
Kevin C.
9 years, 5 months
Comma-separated values not accepted by custom properties
by Punit Dambiwal
Hi,
I have installed the noipspoof VDSM hook. The hook's documentation says that
you can pass multiple IP addresses as a comma-separated list, but that does
not work: a single IP is accepted without any issue, but I cannot enter
multiple IP addresses in this field.
[image: Inline image 1]
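One thing worth checking (a guess, not a confirmed fix): the value typed into that field is validated on the engine side by the regular expression defined for the custom property, and if that regex only matches a single address, a comma-separated list will be rejected even though the hook itself supports it. A hedged sketch of redefining the property with a comma-friendly regex:
# the property name 'noipspoof' comes from the hook; the regex below is an
# assumption -- adjust it to whatever the hook's README actually documents.
# NOTE: engine-config overwrites the whole UserDefinedVMProperties value,
# so include any other custom properties you already rely on.
engine-config -s UserDefinedVMProperties='noipspoof=^[0-9.,]+$' --cver=3.5
# restart the engine so the new definition is picked up
service ovirt-engine restart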
Thanks,
Punit
9 years, 5 months
Move VirtIO disk to External LUN disk
by Juan Carlos YJ. Lin
I have a performance issue with the VirtIO disks and would like to move them to an external LUN on the iSCSI storage, to use iSCSI passthrough.
How can I do that?
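One possible approach, sketched under assumptions: a Direct LUN disk has already been attached to the VM from the Disks tab, the old VirtIO disk shows up inside the guest as /dev/vda and the new LUN as /dev/sdb (both device names are placeholders), and the copy is done from rescue media or with the source filesystems unmounted:
# block-copy the old VirtIO disk onto the newly attached direct LUN
dd if=/dev/vda of=/dev/sdb bs=4M
# afterwards the VM can be set to boot from the LUN and the old disk detached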
Juan Carlos Lin
Unisoft S.A.
+595-993-288330
9 years, 5 months
Re: [ovirt-users] HA storage based on two nodes with one point of failure
by Юрий Полторацкий
Hi,
I have set up a lab with the config listed below and got an unexpected
result. Can someone please tell me where I went wrong?
I am testing oVirt. The data center has two clusters: the first is a
compute cluster with three nodes (node1, node2, node3); the second is a
storage cluster (node5, node6) based on GlusterFS (replica 2).
I want the storage to be HA. I have read the following here
<https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Admi...>:
For a replicated volume with two nodes and one brick on each machine, if
the server-side quorum is enabled and one of the nodes goes offline, the
other node will also be taken offline because of the quorum
configuration. As a result, the high availability provided by the
replication is ineffective. To prevent this situation, a dummy node can
be added to the trusted storage pool which does not contain any bricks.
This ensures that even if one of the nodes which contains data goes
offline, the other node will remain online. Note that if the dummy node
and one of the data nodes go offline, the brick on the other node will
also be taken offline, resulting in data unavailability.
So, I have added my "Engine" (not self-hosted) as a dummy node without a
brick and have configured quorum as listed below:
cluster.quorum-type: fixed
cluster.quorum-count: 1
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 51%
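(For reference, options like these are normally applied with commands along the following lines; vol3 is the volume name from the output below, and cluster.server-quorum-ratio is a cluster-wide option that is set on "all".)
gluster volume set vol3 cluster.quorum-type fixed
gluster volume set vol3 cluster.quorum-count 1
gluster volume set vol3 cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%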
Then I ran a VM and dropped the network link on node6; after an hour I
brought the link back up, and after a while I got a split-brain. But why?
No one could have written to the brick on node6: the VM was running on
node3 and node1 was the SPM.
Gluster's log from node6:
Июн 07 15:35:06 node6.virt.local etc-glusterfs-glusterd.vol[28491]:
[2015-06-07 12:35:06.106270] C [MSGID: 106002]
[glusterd-server-quorum.c:356:glusterd_do_volume_quorum_action]
0-management: Server quorum lost for volume vol3. Stopping local bricks.
Июн 07 16:30:06 node6.virt.local etc-glusterfs-glusterd.vol[28491]:
[2015-06-07 13:30:06.261505] C [MSGID: 106003]
[glusterd-server-quorum.c:351:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume vol3. Starting local bricks.
gluster> volume heal vol3 info
Brick node5.virt.local:/storage/brick12/
/5d0bb2f3-f903-4349-b6a5-25b549affe5f/dom_md/ids - Is in split-brain
Number of entries: 1
Brick node6.virt.local:/storage/brick13/
/5d0bb2f3-f903-4349-b6a5-25b549affe5f/dom_md/ids - Is in split-brain
Number of entries: 1
gluster> volume info vol3
Volume Name: vol3
Type: Replicate
Volume ID: 69ba8c68-6593-41ca-b1d9-40b3be50ac80
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node5.virt.local:/storage/brick12
Brick2: node6.virt.local:/storage/brick13
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: fixed
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: disable
nfs.disable: on
performance.readdir-ahead: on
cluster.quorum-count: 1
cluster.server-quorum-ratio: 51%
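A hedged note on cleaning this up: recent GlusterFS releases (3.7 and later) can resolve a file-level split-brain from the CLI by choosing a source brick. Assuming node5's copy of the ids file is the one to keep (which must be verified first), the command would look roughly like this:
gluster volume heal vol3 split-brain source-brick \
    node5.virt.local:/storage/brick12 \
    /5d0bb2f3-f903-4349-b6a5-25b549affe5f/dom_md/ids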
On 06.06.2015 12:09, Юрий Полторацкий wrote:
> Hi,
>
> I want to build an HA storage based on two servers. I want that if one
> goes down, my storage will still be available in RW mode.
>
> If I use replica 2, then split-brain can occur. To avoid this I
> would use a quorum. As I understand it, I can use quorum on the
> client side, on the server side, or on both. I want to add a dummy node
> without a brick and use this config:
>
> cluster.quorum-type: fixed
> cluster.quorum-count: 1
> cluster.server-quorum-type: server
> cluster.server-quorum-ratio: 51%
>
> I expect that the client will keep RW access as long as at least one
> brick is alive. On the other hand, if server quorum is not met, the
> bricks will go RO.
>
> Say, HOST1 with a brick BRICK1, HOST2 with a brick BRICK2, and HOST3
> without a brick.
>
> Once HOST1 loses its network connection, server quorum is not met on
> that node and the brick BRICK1 becomes unavailable for writing. But
> on HOST2 there is no problem with server quorum (HOST2 + HOST3 > 51%),
> which is why BRICK2 remains accessible for writing. With client
> quorum there is no problem either - one brick is alive, so the client
> can write to it.
>
> I have made a lab using KVM on my desktop and it seems to work well
> and as expected.
>
> The main question is:
> Can I use such a storage for production?
>
> Thanks.
>
9 years, 5 months
Why not bond0
by 肖力
Hi, when creating a NIC bond, why can't bond0 be chosen?
9 years, 5 months
[Centos7.1] [Ovirt 3.5.2] hosted engine, is there a way to resume deploy
by wodel youchi
Hi,
I tried to deploy the oVirt 3.5.2 hosted engine on CentOS 7.1.
I messed things up with the data center naming: I used something other than
Default, and the result was that, after the DB welcome message between the
engine and the hypervisor, there was an error (I didn't catch it; I was
using the screen command over ssh without a log :-( ), and the last steps
were not done, so I ended up with the engine up but without the hypervisor
being registered.
Is there a way to force the registration again, or will I have to deploy
from the beginning and reinstall the engine VM?
Thanks.
9 years, 5 months
Remove default cluster
by Nicolas Ecarnot
Hello,
I finished upgrading a 3.5.1 DC that contained some CentOS 6.6 hosts.
For that, I added a second cluster into which I progressively moved the
hosts I had upgraded to CentOS 7.
Now my old cluster is empty, and the new one contains my CentOS 7 hosts.
I'd like to get rid of the old empty cluster, but when trying to delete
it, oVirt explains that some templates are still used in this cluster.
Actually, I have only the default Blank template and another custom one.
The custom one was easy to move, but I have no way to move the Blank
one because every action is greyed out.
I guess no one will tell me to play with psql... :)
--
Nicolas ECARNOT
9 years, 5 months
Multiple NICs on hosted engine?
by Chris Adams
I have installed the first node of a new oVirt 3.5 setup with a hosted
engine VM. I have multiple networks: one publicly accessible and one
private (with storage, iDRAC/IPMI, etc.). I set the engine VM up on the
public LAN, but now realize that it can't access the power control. I
tried to add a second NIC to the engine VM through the web interface,
but of course that doesn't work (because it isn't really managed there).
How can I add a second NIC to the hosted engine VM?
--
Chris Adams <cma(a)cmadams.net>
9 years, 5 months
Re: [ovirt-users] ovirt vm disk space not release
by smiling dream
Thanks for your reply.
So if I understand you correctly you want to reduce the space a thin
provision disk takes up on the NFS share because you deleted files within
the VM?
Yes
Is there any other way to reclaim disk space? I have lots of VMs under
thin provisioning, so a manual process is practically impossible. As far
as I know, VMware has this kind of facility to reclaim space from
thin-provisioned disks.
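For what it's worth, one approach sometimes used outside of oVirt itself is to sparsify the image offline with libguestfs' virt-sparsify while the VM is shut down. This is only a sketch under assumptions: the path below is a hypothetical placeholder for the image on the NFS domain, the disk is a raw sparse file (the usual thin-provisioned format on NFS), and swapping the sparsified copy back in happens behind the engine's back, so it should be tested on a non-production VM first:
# with the VM powered off, write a sparsified copy of the disk image;
# virt-sparsify does not modify the original file
virt-sparsify \
    /rhev/data-center/mnt/<nfs-server>:_export_data/<sd-uuid>/images/<img-uuid>/<vol-uuid> \
    /var/tmp/sparsified.img
# compare sizes, then replace the original with the copy (keeping the same
# name, owner and permissions) before starting the VM again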
On Fri, Jun 5, 2015 at 11:48 AM, Alex McWhirter <alexmcwhirter(a)triadic.us>
wrote:
> Sorry, I must have misunderstood.
>
> So if I understand you correctly you want to reduce the space a thin
> provision disk takes up on the NFS share because you deleted files within
> the VM?
>
> I'm pretty sure that thin provisioned disks can only grow. Once they have
> been expanded there's no way to reclaim that space on the NFS share.
>
> The only thing I think you can do is create a new thin provisioned disk
> and copy the old data over at the file level, not the block level.
> Afterwards you could delete the original disk.
>
> Sent from my iPhone
>
> On Jun 5, 2015, at 1:19 AM, smiling dream <smiling.dream(a)gmail.com> wrote:
>
> I mean the VM's internal disk space.
> All of the VMs are thin provisioned. If I delete files and free up space
> inside a VM, oVirt storage still shows the VM disk space as used. How can
> I reclaim disk space from a guest VM?
>
> On Fri, Jun 5, 2015 at 11:14 AM, Alex McWhirter <alexmcwhirter(a)triadic.us>
> wrote:
>
>> From what I understand, ovirt simply reports the available storage that
>> the NFS server says is free. Ovirt itself doesn't control the storage.
>>
>> Under the storage tab click on the storage domain you're having issues
>> with and check to see if the images themselves have been deleted or if they
>> are still there. When you delete a virtual machine you have the option to
>> delete the virtual image as well.
>>
>> If a virtual machine has more than one image then I'm not sure how this
>> is handled as I've only used single images for virtual machines. Any
>> additional storage I need I handle over NFS directly to the virtual machine.
>>
>> Perhaps ovirt doesn't automatically delete secondary images? Either way
>> you should be able to delete them manually and reclaim space.
>>
>> Sent from my iPhone
>>
>> > On Jun 5, 2015, at 12:39 AM, smiling dream <smiling.dream(a)gmail.com>
>> > wrote:
>> >
>> > I have ovirt 3.5.1 installed with VDSM 4.16.14 on an EL6 node and NFS
>> > as storage. In my infrastructure I have multiple CentOS / Windows VMs
>> > under ovirt, and once a guest VM's disk space has been used, ovirt does
>> > not release the guest VM's disk space after files are deleted.
>> > Looking for help .
>> >
>> > Regards
>> > Suvro
>> > _______________________________________________
>> > Users mailing list
>> > Users(a)ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
9 years, 5 months