On 06/27/2017 09:49 PM, Abi Askushi wrote:
Hi all,
Just in case anyone needs it: in order to remove the secondary network
interface from the engine, you can go to:
Virtual Machines -> HostedEngine -> Network Interfaces -> Edit ->
unplug it -> confirm -> remove it.
Cool. But in your previous mail you mentioned that it fails for you
since the engine is running. Did you unplug it here instead of
removing it?
It was simple...
On Tue, Jun 27, 2017 at 4:54 PM, Abi Askushi <rightkicktech@gmail.com> wrote:
Hi Knarra,
At that point I had already enabled NFS on the ISO gluster volume.
Maybe I had some networking issue then. I need to remove the
secondary interface in order to test that again.
On Tue, Jun 27, 2017 at 4:25 PM, knarra <knarra@redhat.com> wrote:
On 06/27/2017 06:34 PM, Abi Askushi wrote:
> Hi Knarra,
>
> The ISO domain is of type gluster, though I had NFS enabled on
> that volume.
You need to have NFS enabled on the volume. What I meant is
nfs.disable off, which means NFS is on.
For more info, please refer to bug
https://bugzilla.redhat.com/show_bug.cgi?id=1437799
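
For example, checking and flipping that flag from the shell (the
volume name 'isovol' is just an example here):

    gluster volume get isovol nfs.disable      # show the current value
    gluster volume set isovol nfs.disable off  # off here means NFS is on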
> I will disable NFS and try. Though in order to try, I first
> need to remove that second interface from the engine.
> Is there a way I can remove the secondary storage network
> interface from the engine?
I am not sure how to do that, but you may shut down the VM
using the command hosted-engine --vm-shutdown, which will power
off the VM, and then try to remove the networks using vdsClient.
(Not sure if this is right, but suggesting a way.)
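
The shutdown part of that would look roughly like this (enabling
global maintenance first is my assumption, so the HA agents do not
restart the VM; the exact vdsClient call for removing the NIC would
still need checking against your VDSM version):

    hosted-engine --set-maintenance --mode=global  # keep HA from restarting the VM
    hosted-engine --vm-shutdown                    # power off the engine VM
    hosted-engine --vm-status                      # confirm it is down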
>
> Thanx
>
> On Tue, Jun 27, 2017 at 3:32 PM, knarra <knarra@redhat.com> wrote:
>
> On 06/27/2017 05:41 PM, Abi Askushi wrote:
>> Hi all,
>>
>> When setting up hosted engine on top of gluster with
>> 3 nodes, I had gluster configured on a separate network
>> interface, as recommended. When I later tried to upload
>> an ISO from the engine to the ISO domain, the engine was
>> not able to upload it since the VM did not have access
>> to the separate storage network. I then added the storage
>> network interface to the hosted engine and the ISO upload
>> succeeded.
> May I know what volume type was created and added as
> the ISO domain?
>
> If you plan to use a glusterfs volume, below is the
> procedure (a command sketch follows the list):
>
> 1) Create a glusterfs volume.
> 2) While adding the storage domain, select Domain Function
> 'ISO' and Storage Type 'glusterfs'.
> 3) You can either tick the 'Use managed gluster volume' check
> box and select the gluster volume you created for storing
> ISOs, or type the full path of the volume.
> 4) Once this is added, please make sure to set the option
> nfs.disable off.
> 5) Now you can go to the HE VM and run the command
> engine-iso-uploader upload -i <gluster_iso_domain_name>
> <iso_file>
>
> The ISO gets uploaded successfully.
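>
> As a sketch, steps 1, 4 and 5 from the shell (the node names and
> the volume name 'isodomain' are examples; the upload placeholders
> are kept as above):
>
>     # 1) create and start a replica 3 volume across the three nodes
>     gluster volume create isodomain replica 3 \
>         node1:/gluster/iso/brick node2:/gluster/iso/brick node3:/gluster/iso/brick
>     gluster volume start isodomain
>     # 4) nfs.disable off means gluster's NFS server is on
>     gluster volume set isodomain nfs.disable off
>     # 5) then, from the hosted engine VM
>     engine-iso-uploader upload -i <gluster_iso_domain_name> <iso_file>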
>
>>
>> 1st question: do I need to add the network interface to
>> the engine in order to upload ISOs? Is there any
>> alternative way?
> AFAIK, this is not required when a glusterfs volume is used.
>
> Attached is a screenshot where I have only one network
> attached to my HE, which is ovirtmgmt.
>>
>> Then I proceeded to configure bonding for the storage
>> domain, bonding 2 NICs at each server. When trying to
>> set a custom bond of mode=6 (as recommended by
>> gluster), I received a warning that modes 0, 5 and 6
>> cannot be configured since the interface is used by
>> VMs. I also understood that having the storage network
>> assigned to VMs makes it a bridge, which decreases
>> networking performance. When trying to remove the network
>> interface from the engine, it cannot be done since the
>> engine is running.
>>
>> 2nd question: Is there a way I can remove the secondary
>> storage network interface from the engine?
>>
>> Many thanx
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>