    <div class="moz-cite-prefix">Adding the virtio-scsi developers.<br>
      Anyhow, virtio-scsi is newer and less established than viostor
      (the block device), so you might want to try it out.<br>
      A disclaimer: There are time and patches gaps between RHEL and
      other versions.<br>
      <br>
      Ronen.<br>
      <br>
      On 01/28/2014 10:39 PM, Steve Dainard wrote:<br>
    </div>
      <div dir="ltr">I've had a bit of luck here.
        <div><br>
        </div>
        <div>Overall IO performance is very poor during Windows updates,
          but a contributing factor seems to be the "SCSI Controller"
          device in the guest. This last install I didn't install a
          driver for that device, and my performance is much better.
          Updates still chug along quite slowly, but I seem to have more
          than the &lt; 100KB/s write speeds I was seeing previously.</div>
        <div><br>
        </div>
        <div>Does anyone know what this device is for? I have the "Red
          Hat VirtIO SCSI Controller" listed under storage controllers.</div>
      </div>
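
To tell which paravirtual storage path a guest is actually using, the domain
XML is the quickest check. A minimal sketch, assuming libvirt's virsh on the
host and a hypothetical guest name "win2008r2": viostor (virtio-blk) disks
appear with bus='virtio', while the "Red Hat VirtIO SCSI Controller" shows up
as a scsi controller with model 'virtio-scsi' and disks on bus='scsi'.

    # Hypothetical guest name; adjust to the actual VM.
    VM=win2008r2
    # viostor (virtio-blk) disks are listed with bus='virtio';
    # virtio-scsi disks use bus='scsi' plus a scsi controller entry.
    virsh dumpxml "$VM" | grep -E "<disk |<target |<controller type='scsi'"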
      <div class="gmail_extra"><br clear="all">
        <div>
          <div dir="ltr">
            <span style="font-family:arial,sans-serif;font-size:16px"><strong>Steve&nbsp;Dainard&nbsp;</strong></span><span
              style="font-size:12px"></span><br>
            <span style="font-family:arial,sans-serif;font-size:12px">IT
              Infrastructure Manager<br>
              <a moz-do-not-send="true" href="http://miovision.com/"
                target="_blank">Miovision</a> | <em>Rethink Traffic</em><br>
              519-513-2407 ex.250<br>
              877-646-8476 (toll-free)<br>
              <br>
              <strong
                style="font-family:arial,sans-serif;font-size:13px;color:#999999"><a
                  moz-do-not-send="true"
                  href="http://miovision.com/blog" target="_blank">Blog</a>&nbsp;
                | &nbsp;</strong><font
                style="font-family:arial,sans-serif;font-size:13px"
                color="#999999"><strong><a moz-do-not-send="true"
                    href="https://www.linkedin.com/company/miovision-technologies"
                    target="_blank">LinkedIn</a>&nbsp;&nbsp;| &nbsp;<a
                    moz-do-not-send="true"
                    href="https://twitter.com/miovision" target="_blank">Twitter</a>&nbsp;&nbsp;|
                  &nbsp;<a moz-do-not-send="true"
                    href="https://www.facebook.com/miovision"
                    target="_blank">Facebook</a></strong></font> </span>
            <hr
style="font-family:arial,sans-serif;font-size:13px;color:#333333;clear:both">
            <div
style="color:#999999;font-family:arial,sans-serif;font-size:13px;padding-top:5px">
              <span style="font-family:arial,sans-serif;font-size:12px">Miovision
                Technologies Inc. | 148 Manitou Drive, Suite 101,
                Kitchener, ON, Canada |&nbsp;N2C 1L3</span><br>
              <span style="font-family:arial,sans-serif;font-size:12px">This
                e-mail may contain information that is privileged or
                confidential. If you are not the intended recipient,
                please delete the e-mail and any attachments and notify
                us immediately.</span></div>
          </div>
        </div>
>
> On Sun, Jan 26, 2014 at 2:33 AM, Itamar Heim <iheim@redhat.com> wrote:
>
>> On 01/26/2014 02:37 AM, Steve Dainard wrote:
>>>
>>> Thanks for the responses everyone, really appreciate it.
>>>
>>> I've condensed the other questions into this reply.
>>>
>>>     Steve,
>>>     What is the CPU load of the GlusterFS host when comparing the raw
>>>     brick test to the gluster mount point test? Give it 30 seconds and
>>>     see what top reports. You'll probably have to significantly increase
>>>     the count on the test so that it runs that long.
>>>
>>>     - Nick
>>>
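One way to capture what Nick asks for here without babysitting the screen is
to run top in batch mode alongside the dd test; a minimal sketch (the interval
and sample count are illustrative):

    # Sample the gluster processes every 10 s, 6 times (~1 minute),
    # while the dd test runs in another terminal.
    top -b -d 10 -n 6 | grep -E 'COMMAND|glusterfs' > gluster-cpu.log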
>>> Gluster mount point:
>>>
>>> *4K* on GLUSTER host
>>> [root@gluster1 rep2]# dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
>>> 500000+0 records in
>>> 500000+0 records out
>>> 2048000000 bytes (2.0 GB) copied, 100.076 s, 20.5 MB/s
>>>
>>> Top reported this right away:
>>>   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
>>>  1826 root      20   0  294m  33m 2540 S  27.2  0.4   0:04.31 glusterfs
>>>  2126 root      20   0 1391m  31m 2336 S  22.6  0.4  11:25.48 glusterfsd
>>>
>>> Then at about 20+ seconds top reports this:
>>>   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
>>>  1826 root      20   0  294m  35m 2660 R 141.7  0.5   1:14.94 glusterfs
>>>  2126 root      20   0 1392m  31m 2344 S  33.7  0.4  11:46.56 glusterfsd
>>>
>>> *4K* directly on the brick:
>>> dd if=/dev/zero of=test1 bs=4k count=500000
>>> 500000+0 records in
>>> 500000+0 records out
>>> 2048000000 bytes (2.0 GB) copied, 4.99367 s, 410 MB/s
>>>
>>>  7750 root      20   0  102m  648  544 R  50.3  0.0   0:01.52 dd
>>>  7719 root      20   0     0    0    0 D   1.0  0.0   0:01.50 flush-253:2
>>>
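The 410 MB/s on the brick likely reflects the page cache rather than the
4x 7200 rpm spindles: dd from /dev/zero without a sync flag times buffered
writes. Two variants worth running for more comparable numbers (paths taken
from the tests above; the flags are standard GNU dd options):

    # Include the final flush to disk in the timing...
    dd if=/dev/zero of=test1 bs=4k count=500000 conv=fdatasync
    # ...or bypass the page cache entirely with O_DIRECT:
    dd if=/dev/zero of=test1 bs=4k count=500000 oflag=direct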
>>> Same test, gluster mount point on OVIRT host:
>>> dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
>>> 500000+0 records in
>>> 500000+0 records out
>>> 2048000000 bytes (2.0 GB) copied, 42.4518 s, 48.2 MB/s
>>>
>>>   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
>>>  2126 root      20   0 1396m  31m 2360 S  40.5  0.4  13:28.89 glusterfsd
>>>
>>> Same test, on OVIRT host but against NFS mount point:
>>> dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=4k count=500000
>>> 500000+0 records in
>>> 500000+0 records out
>>> 2048000000 bytes (2.0 GB) copied, 18.8911 s, 108 MB/s
>>>
>>>   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
>>>  2141 root      20   0  550m 184m 2840 R  84.6  2.3  16:43.10 glusterfs
>>>  2126 root      20   0 1407m  30m 2368 S  49.8  0.4  13:49.07 glusterfsd
>>>
>>> Interesting - it looks like if I use an NFS mount point, I incur a CPU
>>> hit on two processes instead of just the daemon. I also get much better
>>> performance if I'm not running dd (fuse) on the GLUSTER host.
>>>
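For anyone reproducing the comparison, the two client paths differ only in how
the volume is mounted. A minimal sketch of the two mount points used above
(server and volume names taken from the test output; gluster's built-in NFS
server speaks NFSv3 only, hence vers=3):

    # FUSE client: the glusterfs client process runs, and burns CPU, on the host doing the I/O.
    mount -t glusterfs gluster1:/rep2 /mnt/rep2
    # Gluster NFS: the protocol translation is done by the glusterfs NFS process on the server.
    mount -t nfs -o vers=3,tcp gluster1:/rep2 /mnt/rep2-nfs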
>>>         The storage servers are a bit older, but are both dual socket
>>>         quad core opterons with 4x 7200rpm drives.
>>>
>>>     A block size of 4k is quite small so that the context switch
>>>     overhead involved with fuse would be more perceivable.
>>>
>>>     Would it be possible to increase the block size for dd and test?
>>>
>>>         I'm in the process of setting up a share from my desktop and
>>>         I'll see if I can bench between the two systems. Not sure if my
>>>         ssd will impact the tests, I've heard there isn't an advantage
>>>         using ssd storage for glusterfs.
>>>
>>>     Do you have any pointers to this source of information? Typically
>>>     glusterfs performance for virtualization workloads is bound by the
>>>     slowest element in the entire stack. Usually storage/disks happen to
>>>     be the bottleneck, and ssd storage does benefit glusterfs.
>>>
>>>     -Vijay
>>>
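A quick way to see whether the spindles really are the slowest element during
these runs is to watch per-device utilisation on the gluster host while dd is
running; a minimal sketch using iostat from the sysstat package:

    # Extended stats every 5 s; %util near 100 on the brick's disks
    # indicates the drives themselves are the bottleneck.
    iostat -x 5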
>>> I had a couple of technical calls with RH (re: RHSS), and when I asked
>>> if SSDs could add any benefit I was told no. The context may have been a
>>> product comparison to other storage vendors, where they use SSDs for
>>> read/write caching, versus having an all-SSD storage domain (which I'm
>>> not proposing, but which is effectively what my desktop would provide).
>>>
>>> Increasing bs against NFS mount point (gluster backend):
>>> dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=128k count=16000
>>> 16000+0 records in
>>> 16000+0 records out
>>> 2097152000 bytes (2.1 GB) copied, 19.1089 s, 110 MB/s
>>>
>>> GLUSTER host top reports:
>>>   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
>>>  2141 root      20   0  550m 183m 2844 R  88.9  2.3  17:30.82 glusterfs
>>>  2126 root      20   0 1414m  31m 2408 S  46.1  0.4  14:18.18 glusterfsd
>>>
>>> So roughly the same performance as the 4k writes done remotely. I'm
>>> guessing if I could randomize these writes we'd see a large difference.
>>>
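Random writes are easy to generate with fio, which Andrew suggests below; a
minimal sketch of a 4k random-write job against the NFS mount used above
(size, runtime and iodepth are illustrative, and direct=1 keeps the client
page cache out of the numbers):

    fio --name=randwrite-test --directory=/mnt/rep2-nfs \
        --rw=randwrite --bs=4k --size=2G --runtime=60 --time_based \
        --ioengine=libaio --iodepth=16 --direct=1 --group_reporting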
>>>     Check this thread out,
>>>     http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/
>>>     It's quite dated but I remember seeing similar figures.
>>>
>>>     In fact when I used FIO on a libgfapi-mounted VM I got slightly
>>>     faster read/write speeds than on the physical box itself (I assume
>>>     because of some level of caching). On NFS it was close to half.
>>>     You'll probably get more interesting results using FIO as opposed
>>>     to dd.
>>>
>>>     ( -Andrew)
>>>
>>> Sorry Andrew, I meant to reply to your other message - it looks like
>>> CentOS 6.5 can't use libgfapi right now; I stumbled across this info in
>>> a couple of threads. Something about how the CentOS build has different
>>> flags set at build time for RHEV snapshot support than RHEL, so native
>>> gluster storage domains are disabled because snapshot support is assumed
>>> and would break otherwise. I'm assuming this is still valid, as I cannot
>>> get a storage lock when I attempt a gluster storage domain.
>>>
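Whether a given qemu build can talk to gluster natively at all is easy to
probe from the shell; a minimal sketch (host and volume names reuse
gluster1/rep2 from the tests above, the image path is hypothetical, and a
build without the gluster block driver should fail with an unknown-protocol
error):

    # Succeeds only if qemu-img was built with GlusterFS (libgfapi) support:
    qemu-img create -f qcow2 gluster://gluster1/rep2/scratch/libgfapi-probe.qcow2 1G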
>>> ------------------------------------------------------------------------
>>>
>>> I've set up an NFS storage domain on my desktop's SSD and re-installed
>>> Win 2008 R2; initially it was running smoother.
>>>
>>> Disk performance peaks at 100 MB/s.
>>>
>>> If I copy a 250MB file from a share into the Windows VM, it writes out
>>> quickly, in less than 5 seconds.
>>>
>>> If I copy 20 files, ranging in size from 4k to 200MB and totaling 650MB,
>>> from the share, Windows becomes unresponsive; in top the desktop's nfs
>>> daemon is barely being touched at all, and then eventually is not hit.
>>> I can still interact with the VM's windows through the spice console.
>>> Eventually the file transfer will start and rocket through.
>>>
>>> I've opened a 271MB zip file with 4454 files and started the extract
>>> process, but the progress window sits on 'calculating...'. After a
>>> significant period of time the decompression starts and runs at
>>> < 200 KB/s; Windows is guesstimating 1 hour to completion. Eventually
>>> even this freezes up and my spice console mouse won't grab. I can still
>>> see the resource monitor in the Windows VM doing its thing, but I have
>>> to power off the VM as it's no longer usable.
>>>
>>> The Windows update process is the same. It seems like when the guest
>>> needs quick large writes it's fine, but lots of IO causes serious
>>> hanging, unresponsiveness, spice mouse cursor freezes, and eventually
>>> poweroff/reboot is the only way to get it back.
>>>
>>> Also, during the Windows 2008 R2 install the 'expanding windows files'
>>> task is quite slow, roughly 1% progress every 20 seconds (~30 mins to
>>> complete). The GLUSTER host shows these stats pretty consistently:
>>>   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
>>>  8139 root      20   0 1380m  28m 2476 R  83.1  0.4   8:35.78 glusterfsd
>>>  8295 root      20   0  550m 186m 2980 S   4.3  2.4   1:52.56 glusterfs
>>>
>>> bwm-ng v0.6 (probing every 2.000s), press 'h' for help
>>> input: /proc/net/dev type: rate
>>>        iface                  Rx                  Tx               Total
>>> ==========================================================================
>>>           lo:        3719.31 KB/s        3719.31 KB/s        7438.62 KB/s
>>>         eth0:        3405.12 KB/s        3903.28 KB/s        7308.40 KB/s
>>>
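Gluster itself can show which file operations are eating the time during a run
like this; a minimal sketch using the built-in profiler (assuming the volume
is named rep2, as the mount paths suggest):

    gluster volume profile rep2 start
    # ... let the slow Windows install/extract run for a while ...
    gluster volume profile rep2 info      # per-brick FOP latency and call counts
    gluster volume profile rep2 stop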
>>> I've copied the same zip file to an NFS mount point on the OVIRT host
>>> (gluster backend) and get about 25 - 600 KB/s during unzip. The same
>>> test on an NFS mount point (desktop SSD ext4 backend) averaged a network
>>> transfer speed of 5 MB/s and completed in about 40 seconds.
>>>
>>> I have a RHEL 6.5 guest running on the NFS/gluster backend storage
>>> domain, and just did the same test. Extracting the file took 22.3
>>> seconds (faster than the fuse mount point on the host!?!?).
>>>
>>> GLUSTER host top reported this while the RHEL guest was decompressing
>>> the zip file:
>>>   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
>>>  2141 root      20   0  555m 187m 2844 S   4.0  2.4  18:17.00 glusterfs
>>>  2122 root      20   0 1380m  31m 2396 S   2.3  0.4  83:19.40 glusterfsd
>>>
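One variable that could let a guest beat the host's own fuse mount is the disk
cache mode qemu is using for that guest, so it is worth ruling out before
comparing numbers. A minimal sketch (the guest name "rhel65" is hypothetical):

    # cache='none' means O_DIRECT to the backing storage; 'writeback' or
    # 'writethrough' let the host page cache absorb guest writes.
    virsh dumpxml rhel65 | grep -E "driver name='qemu'"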
              <div class="im">
                *Steve Dainard *<br>
                IT Infrastructure Manager<br>
                Miovision &lt;<a moz-do-not-send="true"
                  href="http://miovision.com/" target="_blank">http://miovision.com/</a>&gt;
                | /Rethink Traffic/<br>
                <a moz-do-not-send="true" href="tel:519-513-2407"
                  value="+15195132407" target="_blank">519-513-2407</a>&nbsp;&lt;tel:<a
                  moz-do-not-send="true" href="tel:519-513-2407"
                  value="+15195132407" target="_blank">519-513-2407</a>&gt;
                ex.250<br>
                <a moz-do-not-send="true" href="tel:877-646-8476"
                  value="+18776468476" target="_blank">877-646-8476</a>&nbsp;&lt;tel:<a
                  moz-do-not-send="true" href="tel:877-646-8476"
                  value="+18776468476" target="_blank">877-646-8476</a>&gt;
                (toll-free)<br>
                <br>
                *Blog &lt;<a moz-do-not-send="true"
                  href="http://miovision.com/blog" target="_blank">http://miovision.com/blog</a>&gt;
                &nbsp;| **LinkedIn<br>
                &lt;<a moz-do-not-send="true"
                  href="https://www.linkedin.com/company/miovision-technologies"
                  target="_blank">https://www.linkedin.com/company/miovision-technologies</a>&gt;
                &nbsp;| Twitter<br>
                &lt;<a moz-do-not-send="true"
                  href="https://twitter.com/miovision" target="_blank">https://twitter.com/miovision</a>&gt;
                &nbsp;| Facebook<br>
                &lt;<a moz-do-not-send="true"
                  href="https://www.facebook.com/miovision"
                  target="_blank">https://www.facebook.com/miovision</a>&gt;*<br>
                ------------------------------------------------------------------------<br>
              </div>
              <div class="im">
                Miovision Technologies Inc. | 148 Manitou Drive, Suite
                101, Kitchener,<br>
                ON, Canada | N2C 1L3<br>
                This e-mail may contain information that is privileged
                or confidential.<br>
                If you are not the intended recipient, please delete the
                e-mail and any<br>
                attachments and notify us immediately.<br>
                <br>
                <br>
                <br>
                <br>
              </div>
              <div class="im">
                _______________________________________________<br>
                Users mailing list<br>
                <a moz-do-not-send="true" href="mailto:Users@ovirt.org"
                  target="_blank">Users@ovirt.org</a><br>
                <a moz-do-not-send="true"
                  href="http://lists.ovirt.org/mailman/listinfo/users"
                  target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
                <br>
              </div>
>>
>> Please note that currently (>3.3.1) we don't use libgfapi on Fedora
>> either, as we found some gaps in functionality in the libvirt libgfapi
>> support for snapshots. Once these are resolved, we can re-enable libgfapi
>> on a glusterfs storage domain.
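
Until then, an easy way to confirm which path a running guest is actually on
is the qemu command line: a fuse-backed disk shows a plain file path under the
oVirt data-center mount, while a libgfapi-backed disk would show a gluster://
URL. A minimal sketch (paths as oVirt/vdsm typically lays them out):

    # Fuse-backed disks appear as file=/rhev/data-center/...;
    # libgfapi-backed disks would appear as file=gluster://<host>/<volume>/...
    ps -ef | grep -E "qemu.*(file=gluster://|/rhev/data-center/)" | grep -v grep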