[ovirt-users] Ovirt remove network from hosted engine
knarra
knarra at redhat.com
Wed Jun 28 05:29:34 UTC 2017
On 06/27/2017 09:49 PM, Abi Askushi wrote:
> Hi all,
>
> Just in case one needs it: in order to remove the secondary network
> interface from the engine, you can go to:
> Virtual Machines -> HostedEngine -> Network Interfaces -> edit ->
> unplug it -> confirm -> remove it.
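>
> If the UI path is not available, the same unplug/remove can presumably
> be done through the REST API as well (a rough, untested sketch; the
> engine FQDN, credentials, VM id and NIC id below are placeholders):
>
>     # deactivate (unplug) the NIC
>     curl -k -u admin@internal:password -H 'Content-Type: application/xml' \
>          -d '<action/>' \
>          https://engine.example.com/ovirt-engine/api/vms/<vm_id>/nics/<nic_id>/deactivate
>     # then remove it
>     curl -k -u admin@internal:password -X DELETE \
>          https://engine.example.com/ovirt-engine/api/vms/<vm_id>/nics/<nic_id>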
Cool. But in your previous mail you mentioned that it fails for you
since the engine is running. Instead of removing, did you try unplugging it first here?
>
> It was simple...
>
>
> On Tue, Jun 27, 2017 at 4:54 PM, Abi Askushi <rightkicktech at gmail.com> wrote:
>
> Hi Knarra,
>
> At that time I had already enabled NFS on the ISO gluster volume.
> Maybe I had some networking issue then. I need to remove the
> secondary interface in order to test that again.
>
>
>
> On Tue, Jun 27, 2017 at 4:25 PM, knarra <knarra at redhat.com> wrote:
>
> On 06/27/2017 06:34 PM, Abi Askushi wrote:
>> Hi Knarra,
>>
>> The ISO domain is of type gluster, though I had NFS enabled on
>> that volume.
> You need to have NFS enabled on the volume. What I meant is
> nfs.disable off, which means NFS is on.
>
> For more info please refer to the bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1437799
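>
> For example, assuming the ISO volume is called isovol (the name is
> just a placeholder), that would be something like:
>
>     gluster volume set isovol nfs.disable off
>     gluster volume get isovol nfs.disable    # verify it now reads "off"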
>> I will disable NFS and try. Though in order to try, I first need
>> to remove that second interface from the engine.
>> Is there a way I can remove the secondary storage network
>> interface from the engine?
> I am not sure how to do that, but you may shut down the VM
> using the command hosted-engine --vm-shutdown, which will power
> off the VM, and then try to remove the networks using vdsClient.
> (Not sure if this is right, but suggesting a way.)
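>
> Roughly something like this (untested; the exact vdsClient verb for
> removing the NIC would still need to be looked up):
>
>     hosted-engine --vm-shutdown   # power off the engine VM
>     hosted-engine --vm-status     # wait until it reports down
>     vdsClient -s 0 list           # find the HostedEngine VM id
>     # ... remove the extra NIC via vdsClient, then bring the VM back:
>     hosted-engine --vm-start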
>>
>> Thanx
>
>>
>> On Tue, Jun 27, 2017 at 3:32 PM, knarra <knarra at redhat.com> wrote:
>>
>> On 06/27/2017 05:41 PM, Abi Askushi wrote:
>>> Hi all,
>>>
>>> When setting up the hosted engine on top of gluster with
>>> 3 nodes, I had gluster configured on a separate network
>>> interface, as recommended. When I later tried to upload an
>>> ISO from the engine to the ISO domain, the engine was not
>>> able to upload it since the VM did not have access to the
>>> separate storage network. I then added the storage
>>> network interface to the hosted engine and the ISO upload
>>> succeeded.
>> May I know what volume type was created and added as the
>> ISO domain?
>>
>> If you plan to use a glusterfs volume, below is the
>> procedure:
>>
>> 1) Create a glusterfs volume.
>> 2) While adding the storage domain, select Domain Function
>> 'ISO' and Storage Type 'glusterfs'.
>> 3) You can either use the 'Use managed gluster volume' check
>> box and select the gluster volume which you have created
>> for storing ISOs, or you can type the full path of the
>> volume.
>> 4) Once this is added, please make sure to set the option
>> nfs.disable off.
>> 5) Now you can go to the HE engine and run the command
>> engine-iso-uploader upload -i <gluster_iso_domain_name>
>> <iso_file>
>>
>> The ISO gets uploaded successfully. (A rough command-line
>> sketch of the same flow follows below.)
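>>
>> Roughly, from the command line (volume, host and brick names
>> are just placeholders):
>>
>>     gluster volume create isovol replica 3 \
>>         host1:/gluster/isovol host2:/gluster/isovol host3:/gluster/isovol
>>     gluster volume start isovol
>>     gluster volume set isovol nfs.disable off
>>     # add the ISO domain in the UI as in steps 2-3, then on the engine VM:
>>     engine-iso-uploader upload -i <gluster_iso_domain_name> <iso_file>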
>>
>>>
>>> 1st question: do I need to add the network interface to
>>> the engine in order to upload ISOs? Is there any
>>> alternative way?
>> AFAIK, this is not required when a glusterfs volume is used.
>>
>> Attached is a screenshot where I have only one network
>> attached to my HE, which is ovirtmgmt.
>>>
>>> Then I proceeded to configure bonding for the storage
>>> domain, bonding 2 NICs at each server. When trying to
>>> set a custom bond of mode=6 (as recommended by
>>> gluster) I received a warning that modes 0, 5 and 6 cannot
>>> be configured since the interface is used by VMs. I
>>> also understood that having the storage network assigned
>>> to VMs makes it a bridge, which decreases networking
>>> performance. When trying to remove the network interface
>>> from the engine, it cannot be done since the engine is running.
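>>>
>>> For reference, the bond mode and bridge state on a host can be
>>> checked with something like the following (device names are just
>>> examples):
>>>
>>>     cat /proc/net/bonding/bond1   # mode 6 shows as "adaptive load balancing"
>>>     brctl show                    # a non-VM logical network should not show up as a bridge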
>>>
>>> 2nd question: Is there a way I can remove the secondary
>>> storage network interface from the engine?
>>>
>>> Many thanx
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>