<div dir="ltr">ping  here is the engine log - copying list this time too.<div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On 28 May 2014 12:17, Sahina Bose <span dir="ltr">&lt;<a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000"><div class="">
    <br>
    <div>On 05/28/2014 08:36 PM, Alastair Neil
      wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr">I just noticed this in the console and I don&#39;t know
        if it is relevant.  
        <div><br>
        </div>
        <div>When I look at the &quot;General&quot; tab on the hosts under
          &quot;GlusterFS Version&quot; it shows &quot;N/A&quot;.  <br>
        </div>
      </div>
    </blockquote>
    <br></div>
    That&#39;s not related. The GlusterFS version in the UI is populated from
    the getVdsCaps output from vdsm - it looks like the vdsm running on
    your gluster node is not returning that.<br>
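    A quick way to check what vdsm is actually reporting (a sketch: the real command would be <code>vdsClient -s 0 getVdsCaps</code> run on the gluster node; the saved dump <code>caps.txt</code> and the exact key name are illustrative assumptions):<br>

```shell
# Hypothetical: 'caps.txt' stands in for a saved copy of the output of
# 'vdsClient -s 0 getVdsCaps' run on the gluster node. The key name shown
# is illustrative - check your vdsm's actual output.
cat > caps.txt <<'EOF'
glusterfsVersion = {'version': '3.5.0', 'name': 'glusterfs'}
EOF
# If this grep finds nothing in the real caps output, the UI shows "N/A":
grep -i 'gluster' caps.txt
```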
    <br>
    Could you share the engine.log so that we can look at how the
    gluster status was interpreted and updated? The log from the last
    10 minutes should do.<br>
    <br>
    thanks!<div><div class="h5"><br>
    <br>
    <br>
    <blockquote type="cite">
      <div dir="ltr">
      </div>
      <div class="gmail_extra"><br>
        <br>
        <div class="gmail_quote">On 28 May 2014 11:03, Alastair Neil <span dir="ltr">&lt;<a href="mailto:ajneil.tech@gmail.com" target="_blank">ajneil.tech@gmail.com</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div dir="ltr">ovirt version is 3.4.  I did have a slightly
              older version of vdsm on gluster0 but I have updated it
              and the issue persists.  The compatibility version on the
              storage cluster is 3.3.
              <div>
                <br>
              </div>
              <div>I checked the logs for GlusterSyncJob notifications
                and there are none.</div>
              <div><br>
              </div>
              <div><br>
              </div>
              <div>
                <div> 
                  <div><br>
                  </div>
                  <div><br>
                  </div>
                </div>
              </div>
            </div>
            <div>
              <div>
                <div class="gmail_extra"><br>
                  <br>
                  <div class="gmail_quote">On 28 May 2014 10:19, Sahina
                    Bose <span dir="ltr">&lt;<a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>&gt;</span>
                    wrote:<br>
                    <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                      <div bgcolor="#FFFFFF" text="#000000"> Hi
                        Alastair,<br>
                        <br>
                        This could be a mismatch in the hostname
                        identified in ovirt and gluster.<br>
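                        One way to spot such a mismatch (a sketch with stand-in name lists; on real hosts the first list would come from the peers shown by <code>gluster peer status</code> and the second from the host addresses stored in the engine):<br>

```shell
# Stand-in lists: hostnames as gluster sees them vs. as oVirt stored them.
# An IP on one side and a short name on the other can break brick matching.
printf 'gluster0\ngluster1\n' | sort > names_gluster.txt
printf 'gluster0\n129.174.126.56\n' | sort > names_engine.txt
# Lines unique to either side indicate a naming mismatch:
comm -3 names_gluster.txt names_engine.txt
```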
                        <br>
                        You could check for any exceptions from
                        GlusterSyncJob in engine.log.<br>
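                        A minimal way to do that check (a sketch; /var/log/ovirt-engine/engine.log is the usual location, demonstrated here on a tiny stand-in log with made-up lines):<br>

```shell
# Stand-in for /var/log/ovirt-engine/engine.log so the commands are
# self-contained; the log lines below are invented for illustration.
cat > engine.log <<'EOF'
2014-05-28 12:00:01 INFO  [GlusterSyncJob] refreshing gluster volume data
2014-05-28 12:00:02 WARN  [GlusterSyncJob] Exception during sync: host not found
EOF
# Surface only GlusterSyncJob lines that mention an exception:
grep -E 'GlusterSyncJob.*Exception' engine.log
```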
                        <br>
                        Also, what version of oVirt are you using? And
                        the compatibility version of your cluster?
                        <div>
                          <div><br>
                            <br>
                            <div>On 05/28/2014 12:40 AM, Alastair Neil
                              wrote:<br>
                            </div>
                            <blockquote type="cite">
                              <div dir="ltr">
                                <div>Hi, thanks for the reply. Here is an
                                  extract from the vdsm log, grepped for
                                  the volume name vm-store.  It seems to
                                  indicate the bricks are ONLINE.</div>
                                <div><br>
                                </div>
                                <div>I am uncertain how to extract
                                  meaningful information from the
                                  engine.log; can you provide some
                                  guidance?</div>
                                <div><br>
                                </div>
                                <div>Thanks, </div>
                                <div><br>
                                </div>
                                <div>Alastair</div>
                                <div><br>
                                </div>
                                <div> </div>
                                <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Thread-100::DEBUG::2014-05-27

                                  15:01:06,335::BindingXMLRPC::1067::vds::(wrapper)
                                  client [129.174.94.239]::call
                                  volumeStatus with (&#39;vm-store&#39;, &#39;&#39;, &#39;&#39;)
                                  {}<br>
                                  Thread-100::DEBUG::2014-05-27
                                  15:01:06,356::BindingXMLRPC::1074::vds::(wrapper)
                                  return volumeStatus with
                                  {&#39;volumeStatus&#39;: {&#39;bricks&#39;:
                                  [{&#39;status&#39;: &#39;ONLINE&#39;, &#39;brick&#39;:
                                  &#39;gluster0:/export/brick0&#39;, &#39;pid&#39;:
                                  &#39;2675&#39;, &#39;port&#39;: &#39;49158&#39;, &#39;hostuuid&#39;:
                                  &#39;bcff5245-ea86-4384-a1bf-9219c8be8001&#39;},
                                  {&#39;status&#39;: &#39;ONLINE&#39;, &#39;brick&#39;:
                                  &#39;gluster1:/export/brick4/vm-store&#39;,
                                  &#39;pid&#39;: &#39;2309&#39;, &#39;port&#39;: &#39;49158&#39;,
                                  &#39;hostuuid&#39;:
                                  &#39;54d39ae4-91ae-410b-828c-67031f3d8a68&#39;}],
                                  &#39;nfs&#39;: [{&#39;status&#39;: &#39;ONLINE&#39;,
                                  &#39;hostname&#39;: &#39;129.174.126.56&#39;, &#39;pid&#39;:
                                  &#39;27012&#39;, &#39;port&#39;: &#39;2049&#39;, &#39;hostuuid&#39;:
                                  &#39;54d39ae4-91ae-410b-828c-67031f3d8a68&#39;},
                                  {&#39;status&#39;: &#39;ONLINE&#39;, &#39;hostname&#39;:
                                  &#39;gluster0&#39;, &#39;pid&#39;: &#39;12875&#39;, &#39;port&#39;:
                                  &#39;2049&#39;, &#39;hostuuid&#39;:
                                  &#39;bcff5245-ea86-4384-a1bf-9219c8be8001&#39;}],
                                  &#39;shd&#39;: [{&#39;status&#39;: &#39;ONLINE&#39;,
                                  &#39;hostname&#39;: &#39;129.174.126.56&#39;, &#39;pid&#39;:
                                  &#39;27019&#39;, &#39;hostuuid&#39;:
                                  &#39;54d39ae4-91ae-410b-828c-67031f3d8a68&#39;},
                                  {&#39;status&#39;: &#39;ONLINE&#39;, &#39;hostname&#39;:
                                  &#39;gluster0&#39;, &#39;pid&#39;: &#39;12882&#39;,
                                  &#39;hostuuid&#39;:
                                  &#39;bcff5245-ea86-4384-a1bf-9219c8be8001&#39;}],
                                  &#39;name&#39;: &#39;vm-store&#39;}, &#39;status&#39;:
                                  {&#39;message&#39;: &#39;Done&#39;, &#39;code&#39;: 0}}<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:01:15,339::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:01:25,381::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:01:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:01:45,465::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:01:55,507::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:02:05,549::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:02:15,590::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:02:25,657::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:02:35,698::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:02:45,740::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:02:55,784::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:03:05,827::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:03:15,869::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:03:25,910::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:03:35,953::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:03:45,996::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:03:56,037::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:04:06,078::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:04:16,107::fileSD::140::Storage.StorageDomain::(__init__)
                                  Reading domain in path
/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:04:16,126::persistentDict::234::Storage.PersistentDict::(refresh)
                                  read lines
                                  (FileMetadataRW)=[&#39;CLASS=Data&#39;,
                                  &#39;DESCRIPTION=Gluster-VM-Store&#39;,
                                  &#39;IOOPTIMEOUTSEC=10&#39;, &#39;LEASERETRIES=3&#39;,
                                  &#39;LEASETIMESEC=60&#39;, &#39;LOCKPOLICY=&#39;,
                                  &#39;LOCKRENEWALINTERVALSEC=5&#39;,
                                  &#39;MASTER_VERSION=1&#39;,
                                  &#39;POOL_DESCRIPTION=VS-VM&#39;,
                                  &#39;POOL_DOMAINS=6d637c7f-a4ab-4510-a0d9-63a04c55d6d8:Active,6d1e2f10-e6ec-42ce-93d5-ee93e8eeeb10:Active&#39;,

                                  &#39;POOL_SPM_ID=3&#39;, &#39;POOL_SPM_LVER=7&#39;,
                                  &#39;POOL_UUID=9a0b5f4a-4a0f-432c-b70c-53fd5643cbb7&#39;,
                                  &#39;REMOTE_PATH=gluster0:vm-store&#39;,
                                  &#39;ROLE=Master&#39;,
                                  &#39;SDUUID=6d637c7f-a4ab-4510-a0d9-63a04c55d6d8&#39;,
                                  &#39;TYPE=GLUSTERFS&#39;, &#39;VERSION=3&#39;,
                                  &#39;_SHA_CKSUM=8e747f0ebf360f1db6801210c574405dd71fe731&#39;]<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:04:16,153::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:04:26,196::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)<br>
                                  Thread-16::DEBUG::2014-05-27
                                  15:04:36,238::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
                                  &#39;/bin/dd iflag=direct
                                  if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata

                                  bs=4096 count=1&#39; (cwd None)</blockquote>
                                <div><br>
                                </div>
                              </div>
                              <div class="gmail_extra"><br>
                                <br>
                                <div class="gmail_quote">On 21 May 2014
                                  23:51, Kanagaraj <span dir="ltr">&lt;<a href="mailto:kmayilsa@redhat.com" target="_blank">kmayilsa@redhat.com</a>&gt;</span>
                                  wrote:<br>
                                  <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                                    <div bgcolor="#FFFFFF" text="#000000"> engine.log and
                                      vdsm.log?<br>
                                      <br>
                                      This can mostly happen due to the
                                      following reasons:<br>
                                      - &quot;gluster volume status vm-store&quot;
                                      is not consistently returning the
                                      right output<br>
                                      - ovirt-engine is not able to
                                      identify the bricks properly<br>
                                      <br>
                                      Anyway, engine.log will give
                                      better clarity.
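                                      The first failure mode can be probed by running the status command a few times and comparing the output (sketched here with a stub function standing in for the real <code>gluster volume status vm-store</code>):<br>

```shell
# Stub standing in for 'gluster volume status vm-store'; on a real node,
# replace the function body with the actual command.
status() {
  printf 'Brick gluster0:/export/brick0/vm-store\t49158\tY\t2675\n'
}
first=$(status)
consistent=yes
# Repeat the call and flag any run whose output differs from the first:
for i in 1 2 3; do
  [ "$(status)" = "$first" ] || consistent=no
done
echo "consistent=$consistent"
```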
                                      <div>
                                        <div><br>
                                          <br>
                                          <br>
                                          <div>On 05/22/2014 02:24 AM,
                                            Alastair Neil wrote:<br>
                                          </div>
                                        </div>
                                      </div>
                                      <blockquote type="cite">
                                        <div>
                                          <div>
                                            <div dir="ltr">I just did a
                                              rolling upgrade of my
                                              gluster storage cluster to
                                              the latest 3.5 bits.  This
                                              all seems to have gone
                                              smoothly and all the
                                              volumes are on line.  All
                                              volumes are replicated 1x2
                                              <div><br>
                                              </div>
                                              <div>The oVirt console now
                                                insists that two of my
                                                volumes, including the
                                                vm-store volume with my
                                                VMs happily running,
                                                have no bricks up.</div>
                                              <div><br>
                                              </div>
                                              <div>It reports &quot;Up but
                                                all bricks are down&quot;</div>
                                              <div><br>
                                              </div>
                                              <div>This would seem to be
                                                impossible.  Gluster on
                                                the nodes itself reports
                                                no issues:</div>
                                              <div><br>
                                              </div>
                                              <div>
                                                <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">[root@gluster1


                                                  ~]# gluster volume
                                                  status vm-store<br>
                                                  Status of volume:
                                                  vm-store<br>
                                                  Gluster process<span style="white-space:pre-wrap">
                                                  </span>Port<span style="white-space:pre-wrap">
                                                  </span>Online<span style="white-space:pre-wrap">
                                                  </span>Pid<br>
------------------------------------------------------------------------------<br>
                                                  Brick
                                                  gluster0:/export/brick0/vm-store<span style="white-space:pre-wrap"> </span>49158<span style="white-space:pre-wrap">
                                                  </span>Y<span style="white-space:pre-wrap">
                                                  </span>2675<br>
                                                  Brick
                                                  gluster1:/export/brick4/vm-store<span style="white-space:pre-wrap"> </span>49158<span style="white-space:pre-wrap">
                                                  </span>Y<span style="white-space:pre-wrap">
                                                  </span>2309<br>
                                                  NFS Server on
                                                  localhost<span style="white-space:pre-wrap">
                                                  </span>2049<span style="white-space:pre-wrap">
                                                  </span>Y<span style="white-space:pre-wrap">
                                                  </span>27012<br>
                                                  Self-heal Daemon on
                                                  localhost<span style="white-space:pre-wrap">
                                                  </span>N/A<span style="white-space:pre-wrap">
                                                  </span>Y<span style="white-space:pre-wrap">
                                                  </span>27019<br>
                                                  NFS Server on gluster0<span style="white-space:pre-wrap"> </span>2049<span style="white-space:pre-wrap">
                                                  </span>Y<span style="white-space:pre-wrap">
                                                  </span>12875<br>
                                                  Self-heal Daemon on
                                                  gluster0<span style="white-space:pre-wrap">
                                                  </span>N/A<span style="white-space:pre-wrap">
                                                  </span>Y<span style="white-space:pre-wrap">
                                                  </span>12882<br>
                                                   <br>
                                                  Task Status of Volume
                                                  vm-store<br>
------------------------------------------------------------------------------<br>
                                                  There are no active
                                                  volume tasks</blockquote>
                                              </div>
                                              <div><br>
                                              </div>
                                              <div><br>
                                              </div>
                                              <div>As I mentioned, the
                                                VMs are running happily.</div>
                                              <div>Initially the ISOs
                                                volume had the same
                                                issue.  I did a volume
                                                start and stop on the
                                                volume, as it was not
                                                being actively used, and
                                                that cleared up the
                                                issue in the console.
                                                 However, as I have VMs
                                                running, I can&#39;t do this
                                                for the vm-store volume.</div>
                                              <div><br>
                                              </div>
                                              <div><br>
                                              </div>
                                              <div>Any suggestions?<br>
                                                Alastair</div>
                                              <div><br>
                                              </div>
                                            </div>
                                            <br>
                                            <fieldset></fieldset>
                                            <br>
                                          </div>
                                        </div>
                                        <pre>_______________________________________________
Users mailing list
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>
<a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a>
</pre>
                                      </blockquote>
                                      <br>
                                    </div>
                                  </blockquote>
                                </div>
                                <br>
                              </div>
                              <br>
                              <fieldset></fieldset>
                              <br>
                            </blockquote>
                            <br>
                          </div>
                        </div>
                      </div>
                      <br>
                      <br>
                    </blockquote>
                  </div>
                  <br>
                </div>
              </div>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
    <br>
  </div></div></div>

</blockquote></div><br></div>