<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Nov 7, 2016 at 11:20 AM, Gary Pedretty <span dir="ltr"><<a href="mailto:gary@ravnalaska.net" target="_blank">gary@ravnalaska.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word">As a storage domain, this gluster volume will not work whether it is preallocated or thin provision. It will work as a straight gluster volume mounted directly to any VM on the ovirt Cluster, or any physical machine, just not as a data storage domain in the Data Center.<div><br></div><div>Are there restrictions to having more than one data storage domain that has it gluster volumes on the same hosts that are also part of the Data Center and Cluster?<span class="HOEnZb"><font color="#888888"><br></font></span></div></div></blockquote><div><br></div><div>There are no such restrictions.<br><br></div><div>However your volume configuration seems suspect -"stripe 2 replica 2". Can you provide gluster volume info of your second storage domain gluster volume? The mount logs of the volume (under /var/log/glusterfs/rhev-datacenter..<volname>.log) from the host where the volume is being mounted will also help.<br> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div><span class="HOEnZb"><font color="#888888"><div><br></div><div><br></div><div>Gary</div><div><br></div></font></span><div><span class=""><br><div>
<div style="color:rgb(0,0,0);letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word"><div style="color:rgb(0,0,0);letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word"><div style="text-align:start;text-indent:0px;word-wrap:break-word"><div style="text-align:start;text-indent:0px;word-wrap:break-word"><div style="text-align:start;text-indent:0px;word-wrap:break-word"><div style="text-align:start;text-indent:0px;word-wrap:break-word"><div style="text-align:start;text-indent:0px;word-wrap:break-word"><div style="text-align:start;text-indent:0px;word-wrap:break-word"><div style="text-align:start;text-indent:0px;word-wrap:break-word"><font style="color:rgb(0,0,0);font-size:12px;letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px" face="Menlo"><div>------------------------------<wbr>------------------------------<wbr>------------</div><div>Gary Pedretty <a href="mailto:gary@eraalaska.net" target="_blank">gary@ravnalaska.net</a></div><div>Systems Manager <a href="http://www.flyravn.com" target="_blank">www.flyravn.com</a></div><div>Ravn Alaska /\ 907-450-7251</div><div>5245 Airport Industrial Road / \/\ 907-450-7238 fax</div><div>Fairbanks, Alaska 99709 /\ / \ \ Second greatest commandment</div></font><font face="Monaco"><span style="font-size:12px">Serving All of Alaska / \/ /\ \ \/\ “Love your neighbor as</span></font><br style="font-family:Monaco"><font style="color:rgb(0,0,0);letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px" face="Menlo"><span style="font-size:12px">Really loving the record green up date! Summmer!! yourself” Matt 22:39</span></font><div style="color:rgb(0,0,0);letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px;font-family:Menlo"></div><font style="font-size:12px" face="Menlo"></font><span style="color:rgb(0,0,0);letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px;font-size:12px"><font face="Menlo"><div>------------------------------<wbr>------------------------------<wbr>------------</div></font></span><div style="color:rgb(0,0,0);letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px"><font style="font-size:12px" face="Menlo"><br></font></div></div><span style="color:rgb(0,0,0);letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px;font-size:12px"><br class="m_-9137991874919206709Apple-interchange-newline"></span></div><span style="color:rgb(0,0,0);letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px;font-size:12px"><br class="m_-9137991874919206709Apple-interchange-newline"></span></div><span style="color:rgb(0,0,0);letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px;font-size:12px"><br class="m_-9137991874919206709Apple-interchange-newline"></span></div><br class="m_-9137991874919206709Apple-interchange-newline"></div><br class="m_-9137991874919206709Apple-interchange-newline"></div><br class="m_-9137991874919206709Apple-interchange-newline"></div><br class="m_-9137991874919206709Apple-interchange-newline"></div><br class="m_-9137991874919206709Apple-interchange-newline"></div><br class="m_-9137991874919206709Apple-interchange-newline"><br class="m_-9137991874919206709Apple-interchange-newline">
> On Nov 6, 2016, at 6:28 AM, Maor Lipchuk <mlipchuk@redhat.com> wrote:
>
>> Hi Gary,
>>
>> Do you have other disks on this storage domain?
>> Have you tried other VMs with disks on this storage domain?
>> Is this disk preallocated? If not, can you try to create a preallocated
>> disk and retry?
>>
>> Regards,
>> Maor
>>
>> On Sat, Nov 5, 2016 at 2:28 AM, Gary Pedretty <gary@ravnalaska.net> wrote:
>>
>>> I am having an issue in a Hosted Engine GlusterFS setup. I have 4 hosts
>>> in a cluster, with the Engine being hosted on the cluster. This follows
>>> the pattern shown in the docs for a glusterized setup, except that I
>>> have 4 hosts. I have engine, data, iso and export storage domains, all
>>> as glusterfs on a replica 3 gluster volume on the first 3 hosts. These
>>> gluster volumes are running on an SSD hardware RAID 6, which is
>>> identical on all the hosts. All the hosts have a second RAID 6 array
>>> with physical hard drives, and I have created a second data storage
>>> domain as a glusterfs volume across all 4 hosts as a stripe 2 replica 2
>>> and have added it to the Data Center. However, if I use this second
>>> storage domain as the boot disk for a VM, or as a second disk for a VM
>>> that is already running, the VM becomes non-responsive as soon as it
>>> starts using this disk. This happens during the OS install if the VM is
>>> using this storage domain for its boot disk, or if I try copying
>>> anything large to it when it is a second disk for a VM that has its
>>> boot drive on the master data storage domain.
>>>
>>> If I mount the gluster volume that backs this second storage domain
>>> directly on one of the hosts, or on any other machine on my local
>>> network, the volume works fine. The problem only occurs when it is used
>>> as a storage domain (second data domain) by VMs in the cluster.
>>>
>>> Once a VM becomes non-responsive it cannot be stopped, removed or
>>> destroyed without restarting the host machine the VM is currently
>>> running on. The 4 hosts are connected via 10 Gb Ethernet, so it should
>>> not be a network issue.
>>>
>>> Any ideas?
>>>
>>> Gary
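For reference, a sketch of how the two volume layouts described in this
thread would typically be created; the host names and brick paths below are
placeholders, not taken from the thread:

    # Replica 3 volume on the first three hosts (the layout backing the
    # engine/data/iso/export domains).
    gluster volume create data_ssd replica 3 \
        host1:/bricks/ssd/brick host2:/bricks/ssd/brick host3:/bricks/ssd/brick

    # Striped-replicated volume across all four hosts (the layout of the
    # problematic second data domain).
    gluster volume create data_hdd stripe 2 replica 2 \
        host1:/bricks/hdd/brick host2:/bricks/hdd/brick \
        host3:/bricks/hdd/brick host4:/bricks/hdd/brick

In the striped layout each file is split into chunks across the replica
pairs rather than stored whole on one replica set, and striped volumes have
generally not been recommended for VM image storage, which is likely why the
configuration is flagged as suspect above.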