<div dir="ltr"><div><div><div><div>Hi Knarra, <br><br></div>I had already enabled NFS on the ISO gluster volume then. <br></div>Maybe I had some networking issue at the time. I need to remove the secondary interface in order to test that again. <br><br></div></div><div><div><div><br></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jun 27, 2017 at 4:25 PM, knarra <span dir="ltr">&lt;<a href="mailto:knarra@redhat.com" target="_blank">knarra@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000"><span class="">
    <div class="m_-5677008902476285395moz-cite-prefix">On 06/27/2017 06:34 PM, Abi Askushi
      wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr">
        <div>
          <div>
            <div>Hi Knarra, <br>
              <br>
            </div>
            The ISO domain is of type gluster, though I had NFS enabled
            on that volume. <br>
          </div>
        </div>
      </div>
    </blockquote></span>
    You need to have NFS enabled on the volume. What I meant is
    nfs.disable set to off, which means NFS is on.<br>
    <br>
    For more info please refer to bug
    <a class="m_-5677008902476285395moz-txt-link-freetext" href="https://bugzilla.redhat.com/show_bug.cgi?id=1437799" target="_blank">https://bugzilla.redhat.com/<wbr>show_bug.cgi?id=1437799</a><span class=""><br>
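    <br>
    For what it's worth, that option can be set and checked from any of
    the gluster nodes; a minimal sketch, where isovol is a placeholder
    for your actual ISO volume name:<br>

```shell
# "isovol" is a placeholder volume name; substitute your ISO volume's name.
# nfs.disable off == the gluster NFS server is ON for this volume.
gluster volume set isovol nfs.disable off

# Check the current value of the option.
gluster volume get isovol nfs.disable
```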
    <blockquote type="cite">
      <div dir="ltr">
        <div>I will disable NFS and try, though in order to test I
          first need to remove that second interface from the engine. <br>
          <span class="m_-5677008902476285395gmail-im">Is there a way I can remove the
            secondary storage network interface from the engine?<br>
          </span></div>
      </div>
    </blockquote></span>
    I am not sure how to do that, but you may shut down the VM using
    the command hosted-engine --vm-shutdown, which will power off the VM,
    and then try to remove the networks using vdsClient. (Not sure if this
    is right, but suggesting a way.)<span class=""><br>
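    The shutdown part of that suggestion can be sketched as follows;
    the hosted-engine commands are standard, but the vdsClient step is
    left out since its exact syntax depends on the oVirt version:<br>

```shell
# Run on the host currently hosting the engine VM.
# Enable global maintenance first so HA does not restart the engine
# automatically while you work on it.
hosted-engine --set-maintenance --mode=global

# Power off the hosted engine VM.
hosted-engine --vm-shutdown

# Verify the VM state before changing its configuration.
hosted-engine --vm-status
```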
    <blockquote type="cite">
      <div dir="ltr">
        <div><span class="m_-5677008902476285395gmail-im"><br>
          </span></div>
        <span class="m_-5677008902476285395gmail-im">Thanx<br>
        </span></div>
    </blockquote>
    <br>
    <blockquote type="cite">
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Tue, Jun 27, 2017 at 3:32 PM, knarra
          <span dir="ltr">&lt;<a href="mailto:knarra@redhat.com" target="_blank">knarra@redhat.com</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000"><span>
                <div class="m_-5677008902476285395m_-6805624692919781363moz-cite-prefix">On
                  06/27/2017 05:41 PM, Abi Askushi wrote:<br>
                </div>
                <blockquote type="cite">
                  <div dir="ltr">
                    <div>
                      <div>
                        <div>
                          <div>Hi all, <br>
                            <br>
                          </div>
                          When setting up the hosted engine on top of
                          gluster with 3 nodes, I had gluster configured
                          on a separate network interface, as
                          recommended. When I later tried to upload an ISO
                          from the engine to the ISO domain, the engine
                          could not upload it since the VM did not have
                          access to the separate storage network. I then
                          added the storage network interface to the
                          hosted engine and the ISO upload succeeded. <br>
                        </div>
                      </div>
                    </div>
                  </div>
                </blockquote>
              </span> May I know what volume type was created and
              added as the ISO domain?<br>
              <br>
              If you plan to use a glusterfs volume, the procedure
              is:<br>
              <br>
              1) Create a glusterfs volume.<br>
              2) While adding the storage domain, select Domain Function as
              &#39;ISO&#39; and Storage Type as &#39;glusterfs&#39;.<br>
              3) You can either tick the &#39;use managed gluster volume&#39; check
              box and select the gluster volume you created for
              storing ISOs, or type the full path of the
              volume.<br>
              4) Once this is added, please make sure to set the option
              nfs.disable to off.<br>
              5) Now you can go to the HE VM and run the command
              engine-iso-uploader upload -i
              &lt;gluster_iso_domain_name&gt; &lt;iso_file&gt;<br>
              <br>
              The ISO gets uploaded successfully.<span><br>
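                The steps above can be sketched on the command line as
                follows; the volume name, host names and brick paths are
                placeholders, not values from this thread:<br>

```shell
# 1) Create and start a replica-3 gluster volume for ISOs
#    (host/brick names below are hypothetical).
gluster volume create isovol replica 3 \
    host1:/gluster/bricks/isovol \
    host2:/gluster/bricks/isovol \
    host3:/gluster/bricks/isovol
gluster volume start isovol

# 4) Enable NFS on the volume (nfs.disable off means NFS is on).
gluster volume set isovol nfs.disable off

# 5) From the hosted engine VM, upload an ISO to the domain
#    ("ISO_DOMAIN" is a placeholder for the domain name).
engine-iso-uploader upload -i ISO_DOMAIN /path/to/some.iso
```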
                <br>
                <blockquote type="cite">
                  <div dir="ltr">
                    <div>
                      <div>
                        <div><br>
                        </div>
                        1st question: Do I need to add the network
                        interface to the engine in order to upload ISOs?
                        Is there any alternative way?<br>
                      </div>
                    </div>
                  </div>
                </blockquote>
              </span> AFAIK, this is not required when a glusterfs volume
              is used.<br>
              <br>
              Attached is a screenshot where I have only one network
              attached to my HE, which is ovirtmgmt.<br>
              <blockquote type="cite"><span>
                  <div dir="ltr">
                    <div>
                      <div><br>
                      </div>
                      Then I proceeded to configure bonding for the
                      storage domain, bonding 2 NICs at each server.
                      When trying to set a custom bond of mode=6 (as
                      recommended for gluster) I received a warning
                      that modes 0, 5 and 6 cannot be configured since the
                      interface is used by VMs. I also understood that
                      having the storage network assigned to VMs turns
                      it into a bridge, which decreases networking
                      performance. When I tried to remove the network
                      interface from the engine, it could not be done
                      since the engine is running. <br>
                      <br>
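                      As a side note on the bonding point, the mode a
                      bond is actually running in can be checked on the
                      host; a sketch assuming a bond device named bond0
                      (a placeholder):<br>

```shell
# Mode 6 (balance-alb) is reported by the kernel bonding driver
# as "adaptive load balancing".
grep "Bonding Mode" /proc/net/bonding/bond0
```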
                      2nd question: Is there a way I can remove the
                      secondary storage network interface from the
                      engine?<br>
                      <br>
                    </div>
                    Many thanx<br>
                  </div>
                  <br>
                  <fieldset class="m_-5677008902476285395m_-6805624692919781363mimeAttachmentHeader"></fieldset>
                  <br>
                </span>
                <pre>______________________________<wbr>_________________
Users mailing list
<a class="m_-5677008902476285395m_-6805624692919781363moz-txt-link-abbreviated" href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>
<a class="m_-5677008902476285395m_-6805624692919781363moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a>
</pre>
    </blockquote>
    <p>

    </p>
  </div>

</blockquote></div>
</div>



</blockquote><p>
</p></span></div></blockquote></div><br></div>