<div dir="ltr"><div>Can you provide "gluster volume info" and the mount logs of the data volume (I assume that this hosts the vdisks for the VM's with storage error).<br><br></div>Also vdsm.log at the corresponding time.<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Mar 16, 2018 at 3:45 AM, Endre Karlson <span dir="ltr"><<a href="mailto:endre.karlson@gmail.com" target="_blank">endre.karlson@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi, this is is here again and we are getting several vm's going into storage error in our 4 node cluster running on centos 7.4 with gluster and ovirt 4.2.1.<div><br></div><div>Gluster version: 3.12.6<br></div><div><br></div><div>volume status</div><div><div>[root@ovirt3 ~]# gluster volume status</div><div>Status of volume: data</div><div>Gluster process TCP Port RDMA Port Online Pid</div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>Brick ovirt0:/gluster/brick3/data 49152 0 Y 9102 </div><div>Brick ovirt2:/gluster/brick3/data 49152 0 Y 28063</div><div>Brick ovirt3:/gluster/brick3/data 49152 0 Y 28379</div><div>Brick ovirt0:/gluster/brick4/data 49153 0 Y 9111 </div><div>Brick ovirt2:/gluster/brick4/data 49153 0 Y 28069</div><div>Brick ovirt3:/gluster/brick4/data 49153 0 Y 28388</div><div>Brick ovirt0:/gluster/brick5/data 49154 0 Y 9120 </div><div>Brick ovirt2:/gluster/brick5/data 49154 0 Y 28075</div><div>Brick ovirt3:/gluster/brick5/data 49154 0 Y 28397</div><div>Brick ovirt0:/gluster/brick6/data 49155 0 Y 9129 </div><div>Brick ovirt2:/gluster/brick6_1/data 49155 0 Y 28081</div><div>Brick ovirt3:/gluster/brick6/data 49155 0 Y 28404</div><div>Brick ovirt0:/gluster/brick7/data 49156 0 Y 9138 </div><div>Brick ovirt2:/gluster/brick7/data 49156 0 Y 28089</div><div>Brick ovirt3:/gluster/brick7/data 49156 0 Y 28411</div><div>Brick ovirt0:/gluster/brick8/data 49157 0 Y 9145 </div><div>Brick ovirt2:/gluster/brick8/data 49157 0 Y 28095</div><div>Brick ovirt3:/gluster/brick8/data 49157 0 Y 28418</div><div>Brick ovirt1:/gluster/brick3/data 49152 0 Y 23139</div><div>Brick ovirt1:/gluster/brick4/data 49153 0 Y 23145</div><div>Brick ovirt1:/gluster/brick5/data 49154 0 Y 23152</div><div>Brick ovirt1:/gluster/brick6/data 49155 0 Y 23159</div><div>Brick ovirt1:/gluster/brick7/data 49156 0 Y 23166</div><div>Brick ovirt1:/gluster/brick8/data 49157 0 Y 23173</div><div>Self-heal Daemon on localhost N/A N/A Y 7757 </div><div>Bitrot Daemon on localhost N/A N/A Y 7766 </div><div>Scrubber Daemon on localhost N/A N/A Y 7785 </div><div>Self-heal Daemon on ovirt2 N/A N/A Y 8205 </div><div>Bitrot Daemon on ovirt2 N/A N/A Y 8216 </div><div>Scrubber Daemon on ovirt2 N/A N/A Y 8227 </div><div>Self-heal Daemon on ovirt0 N/A N/A Y 32665</div><div>Bitrot Daemon on ovirt0 N/A N/A Y 32674</div><div>Scrubber Daemon on ovirt0 N/A N/A Y 32712</div><div>Self-heal Daemon on ovirt1 N/A N/A Y 31759</div><div>Bitrot Daemon on ovirt1 N/A N/A Y 31768</div><div>Scrubber Daemon on ovirt1 N/A N/A Y 31790</div><div> </div><div>Task Status of Volume data</div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>Task : Rebalance </div><div>ID : 62942ba3-db9e-4604-aa03-<wbr>4970767f4d67</div><div>Status : completed </div><div> </div><div>Status of volume: engine</div><div>Gluster process TCP Port RDMA Port Online 
Pid</div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>Brick ovirt0:/gluster/brick1/engine 49158 0 Y 9155 </div><div>Brick ovirt2:/gluster/brick1/engine 49158 0 Y 28107</div><div>Brick ovirt3:/gluster/brick1/engine 49158 0 Y 28427</div><div>Self-heal Daemon on localhost N/A N/A Y 7757 </div><div>Self-heal Daemon on ovirt1 N/A N/A Y 31759</div><div>Self-heal Daemon on ovirt0 N/A N/A Y 32665</div><div>Self-heal Daemon on ovirt2 N/A N/A Y 8205 </div><div> </div><div>Task Status of Volume engine</div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>There are no active volume tasks</div><div> </div><div>Status of volume: iso</div><div>Gluster process TCP Port RDMA Port Online Pid</div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>Brick ovirt0:/gluster/brick2/iso 49159 0 Y 9164 </div><div>Brick ovirt2:/gluster/brick2/iso 49159 0 Y 28116</div><div>Brick ovirt3:/gluster/brick2/iso 49159 0 Y 28436</div><div>NFS Server on localhost 2049 0 Y 7746 </div><div>Self-heal Daemon on localhost N/A N/A Y 7757 </div><div>NFS Server on ovirt1 2049 0 Y 31748</div><div>Self-heal Daemon on ovirt1 N/A N/A Y 31759</div><div>NFS Server on ovirt0 2049 0 Y 32656</div><div>Self-heal Daemon on ovirt0 N/A N/A Y 32665</div><div>NFS Server on ovirt2 2049 0 Y 8194 </div><div>Self-heal Daemon on ovirt2 N/A N/A Y 8205 </div><div> </div><div>Task Status of Volume iso</div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>There are no active volume tasks</div></div><div><br></div></div>
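For reference, a quick way to gather what is being asked for. The log locations below are the usual oVirt/Gluster defaults and may differ on your hosts; <server> and the exact mount-log file name are placeholders, not taken from your setup:

  # Volume layout and options of the data volume
  gluster volume info data

  # FUSE mount log of the data storage domain on the host that ran the
  # affected VMs; glusterfs names the log after the mount point, so for an
  # oVirt gluster storage domain it usually looks something like:
  /var/log/glusterfs/rhev-data-center-mnt-glusterSD-<server>:_data.log

  # VDSM log covering the time the VMs went into storage error
  /var/log/vdsm/vdsm.log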