<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">On 06/27/2017 06:34 PM, Abi Askushi
      wrote:<br>
    </div>
    <blockquote
cite="mid:CABMULtJwwZ1NYYv1NP0G6j1TNApitLzFXV-9eX5ptOH15WjEkQ@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div>
          <div>
            <div>Hi Knarra, <br>
              <br>
            </div>
            The ISO domain is of type gluster, though I had NFS enabled
            on that volume. <br>
          </div>
        </div>
      </div>
    </blockquote>
    You need to have NFS enabled on the volume. What I meant is
    nfs.disable off, which means NFS is on.<br>
    <br>
    For more info, please refer to the bug
    <a class="moz-txt-link-freetext" href="https://bugzilla.redhat.com/show_bug.cgi?id=1437799">https://bugzilla.redhat.com/show_bug.cgi?id=1437799</a><br>
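    If it helps, turning gluster's built-in NFS back on is a single
    volume option; a sketch (the volume name iso-vol is only a
    placeholder for your actual ISO volume):<br>

```shell
# Enable gluster NFS on the volume backing the ISO domain
# ("iso-vol" is a placeholder; use your actual volume name)
gluster volume set iso-vol nfs.disable off

# Verify the option is now off (i.e. NFS is on)
gluster volume info iso-vol | grep nfs.disable
```
<br>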
    <blockquote
cite="mid:CABMULtJwwZ1NYYv1NP0G6j1TNApitLzFXV-9eX5ptOH15WjEkQ@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div>I will disable NFS and try. Though in order to try, I
          first need to remove that second interface from the engine. <br>
          <span class="gmail-im">Is there a way I can remove the
            secondary storage network interface from the engine?<br>
          </span></div>
      </div>
    </blockquote>
    I am not sure how to do that, but you could shut down the VM using
    the command hosted-engine --vm-shutdown, which will power off the VM,
    and then try to remove the networks using vdsClient. (Not sure if
    this is right, but suggesting a way.)<br>
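    A rough sketch of that sequence, assuming the commands are run on
    the host carrying the engine VM (the vdsClient step is intentionally
    left open, since the exact verbs depend on the VDSM version):<br>

```shell
# Power off the hosted engine VM
hosted-engine --vm-shutdown

# Wait and confirm the VM is actually down
hosted-engine --vm-status

# ...remove the extra network here, e.g. via vdsClient
# (run `vdsClient -s 0 help` to see what your VDSM version offers)...

# Bring the engine VM back up afterwards
hosted-engine --vm-start
```
<br>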
    <blockquote
cite="mid:CABMULtJwwZ1NYYv1NP0G6j1TNApitLzFXV-9eX5ptOH15WjEkQ@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div><span class="gmail-im"><br>
          </span></div>
        <span class="gmail-im">Thanx<br>
        </span></div>
    </blockquote>
    <br>
    <blockquote
cite="mid:CABMULtJwwZ1NYYv1NP0G6j1TNApitLzFXV-9eX5ptOH15WjEkQ@mail.gmail.com"
      type="cite">
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Tue, Jun 27, 2017 at 3:32 PM, knarra
          <span dir="ltr">&lt;<a moz-do-not-send="true"
              href="mailto:knarra@redhat.com" target="_blank">knarra@redhat.com</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000"><span class="">
                <div class="m_-6805624692919781363moz-cite-prefix">On
                  06/27/2017 05:41 PM, Abi Askushi wrote:<br>
                </div>
                <blockquote type="cite">
                  <div dir="ltr">
                    <div>
                      <div>
                        <div>
                          <div>Hi all, <br>
                            <br>
                          </div>
                          When setting up hosted engine on top of
                          gluster with 3 nodes, I had gluster configured
                          on a separate network interface, as
                          recommended. When I later tried to upload an
                          ISO from the engine to the ISO domain, the
                          engine was not able to upload it since the VM
                          did not have access to the separate storage
                          network. I then added the storage network
                          interface to the hosted engine and the ISO
                          upload succeeded. <br>
                        </div>
                      </div>
                    </div>
                  </div>
                </blockquote>
              </span> May I know what volume type was created and added
              as the ISO domain?<br>
              <br>
              If you plan to use a glusterfs volume, the procedure is as
              follows:<br>
              <br>
              1) Create a glusterfs volume.<br>
              2) While adding the storage domain, select Domain Function
              'ISO' and Storage Type 'glusterfs'.<br>
              3) You can either use the 'use managed gluster volume'
              check box and select the gluster volume you created for
              storing ISOs, or type the full path of the volume.<br>
              4) Once this is added, please make sure to set the option
              nfs.disable to off.<br>
              5) Now you can go to the HE engine and run the command
              engine-iso-uploader upload -i
              &lt;gluster_iso_domain_name&gt; &lt;iso_file&gt;<br>
              <br>
              The ISO gets uploaded successfully.<span class=""><br>
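                Put together, the CLI side of steps 1, 4 and 5 looks
                roughly like this (host names, brick paths, volume name,
                domain name and ISO path below are all placeholders;
                steps 2 and 3 happen in the web UI):<br>

```shell
# Step 1: create and start a glusterfs volume for ISOs
gluster volume create iso-vol replica 3 \
    host1:/gluster/iso/brick host2:/gluster/iso/brick host3:/gluster/iso/brick
gluster volume start iso-vol

# Steps 2 and 3: add the storage domain in the oVirt web UI
# (Domain Function 'ISO', Storage Type 'glusterfs').

# Step 4: enable gluster NFS on the volume
gluster volume set iso-vol nfs.disable off

# Step 5: on the hosted engine VM, upload the image
engine-iso-uploader upload -i ISO_DOMAIN_NAME /path/to/image.iso
```
<br>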
                <br>
                <blockquote type="cite">
                  <div dir="ltr">
                    <div>
                      <div>
                        <div><br>
                        </div>
                        1st question: do I need to add the network
                        interface to the engine in order to upload ISOs?
                        Is there any alternate way?<br>
                      </div>
                    </div>
                  </div>
                </blockquote>
              </span> AFAIK, this is not required when a glusterfs
              volume is used.<br>
              <br>
              Attached is a screenshot where I have only one network
              attached to my HE, which is ovirtmgmt.<br>
              <blockquote type="cite"><span class="">
                  <div dir="ltr">
                    <div>
                      <div><br>
                      </div>
                      Then I proceeded to configure bonding for the
                      storage domain, bonding 2 NICs on each server.
                      When trying to set a custom bond of mode=6 (as
                      recommended for gluster), I received a warning
                      that modes 0, 5 and 6 cannot be configured since
                      the interface is used by VMs. I also understood
                      that having the storage network assigned to VMs
                      makes it a bridge, which decreases networking
                      performance. When I try to remove the network
                      interface from the engine it cannot be done, since
                      the engine is running. <br>
                      <br>
                      2nd question: Is there a way I can remove the
                      secondary storage network interface from the
                      engine?<br>
                      <br>
                    </div>
                    Many thanx<br>
                  </div>
                  <br>
                  <fieldset
                    class="m_-6805624692919781363mimeAttachmentHeader"></fieldset>
                  <br>
                </span>
                <pre>_______________________________________________
Users mailing list
<a moz-do-not-send="true" class="m_-6805624692919781363moz-txt-link-abbreviated" href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>
<a moz-do-not-send="true" class="m_-6805624692919781363moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a>
</pre>
    </blockquote>
    <p>

    </p>
  </div>

</blockquote></div>
</div>



</blockquote><p>
</p></body></html>