<div dir="ltr">Hi Michal,<div><br></div><div>The Storage domain is up and running and mounted on all the host nodes...as i updated before that it was working perfectly before but just after reboot can not make the VM poweron...</div><div><br></div><div><img src="cid:ii_14c3120936ce2326" alt="Inline image 1" width="558" height="26"><br></div><div><br></div><div><img src="cid:ii_14c31210129cff89" alt="Inline image 2" width="558" height="27"><br></div><div><br></div><div><div>[root@cpu01 log]# gluster volume info</div><div><br></div><div>Volume Name: ds01</div><div>Type: Distributed-Replicate</div><div>Volume ID: 369d3fdc-c8eb-46b7-a33e-0a49f2451ff6</div><div>Status: Started</div><div>Number of Bricks: 48 x 2 = 96</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: cpu01:/bricks/1/vol1</div><div>Brick2: cpu02:/bricks/1/vol1</div><div>Brick3: cpu03:/bricks/1/vol1</div><div>Brick4: cpu04:/bricks/1/vol1</div><div>Brick5: cpu01:/bricks/2/vol1</div><div>Brick6: cpu02:/bricks/2/vol1</div><div>Brick7: cpu03:/bricks/2/vol1</div><div>Brick8: cpu04:/bricks/2/vol1</div><div>Brick9: cpu01:/bricks/3/vol1</div><div>Brick10: cpu02:/bricks/3/vol1</div><div>Brick11: cpu03:/bricks/3/vol1</div><div>Brick12: cpu04:/bricks/3/vol1</div><div>Brick13: cpu01:/bricks/4/vol1</div><div>Brick14: cpu02:/bricks/4/vol1</div><div>Brick15: cpu03:/bricks/4/vol1</div><div>Brick16: cpu04:/bricks/4/vol1</div><div>Brick17: cpu01:/bricks/5/vol1</div><div>Brick18: cpu02:/bricks/5/vol1</div><div>Brick19: cpu03:/bricks/5/vol1</div><div>Brick20: cpu04:/bricks/5/vol1</div><div>Brick21: cpu01:/bricks/6/vol1</div><div>Brick22: cpu02:/bricks/6/vol1</div><div>Brick23: cpu03:/bricks/6/vol1</div><div>Brick24: cpu04:/bricks/6/vol1</div><div>Brick25: cpu01:/bricks/7/vol1</div><div>Brick26: cpu02:/bricks/7/vol1</div><div>Brick27: cpu03:/bricks/7/vol1</div><div>Brick28: cpu04:/bricks/7/vol1</div><div>Brick29: cpu01:/bricks/8/vol1</div><div>Brick30: cpu02:/bricks/8/vol1</div><div>Brick31: cpu03:/bricks/8/vol1</div><div>Brick32: cpu04:/bricks/8/vol1</div><div>Brick33: cpu01:/bricks/9/vol1</div><div>Brick34: cpu02:/bricks/9/vol1</div><div>Brick35: cpu03:/bricks/9/vol1</div><div>Brick36: cpu04:/bricks/9/vol1</div><div>Brick37: cpu01:/bricks/10/vol1</div><div>Brick38: cpu02:/bricks/10/vol1</div><div>Brick39: cpu03:/bricks/10/vol1</div><div>Brick40: cpu04:/bricks/10/vol1</div><div>Brick41: cpu01:/bricks/11/vol1</div><div>Brick42: cpu02:/bricks/11/vol1</div><div>Brick43: cpu03:/bricks/11/vol1</div><div>Brick44: cpu04:/bricks/11/vol1</div><div>Brick45: cpu01:/bricks/12/vol1</div><div>Brick46: cpu02:/bricks/12/vol1</div><div>Brick47: cpu03:/bricks/12/vol1</div><div>Brick48: cpu04:/bricks/12/vol1</div><div>Brick49: cpu01:/bricks/13/vol1</div><div>Brick50: cpu02:/bricks/13/vol1</div><div>Brick51: cpu03:/bricks/13/vol1</div><div>Brick52: cpu04:/bricks/13/vol1</div><div>Brick53: cpu01:/bricks/14/vol1</div><div>Brick54: cpu02:/bricks/14/vol1</div><div>Brick55: cpu03:/bricks/14/vol1</div><div>Brick56: cpu04:/bricks/14/vol1</div><div>Brick57: cpu01:/bricks/15/vol1</div><div>Brick58: cpu02:/bricks/15/vol1</div><div>Brick59: cpu03:/bricks/15/vol1</div><div>Brick60: cpu04:/bricks/15/vol1</div><div>Brick61: cpu01:/bricks/16/vol1</div><div>Brick62: cpu02:/bricks/16/vol1</div><div>Brick63: cpu03:/bricks/16/vol1</div><div>Brick64: cpu04:/bricks/16/vol1</div><div>Brick65: cpu01:/bricks/17/vol1</div><div>Brick66: cpu02:/bricks/17/vol1</div><div>Brick67: cpu03:/bricks/17/vol1</div><div>Brick68: cpu04:/bricks/17/vol1</div><div>Brick69: 
cpu01:/bricks/18/vol1</div><div>Brick70: cpu02:/bricks/18/vol1</div><div>Brick71: cpu03:/bricks/18/vol1</div><div>Brick72: cpu04:/bricks/18/vol1</div><div>Brick73: cpu01:/bricks/19/vol1</div><div>Brick74: cpu02:/bricks/19/vol1</div><div>Brick75: cpu03:/bricks/19/vol1</div><div>Brick76: cpu04:/bricks/19/vol1</div><div>Brick77: cpu01:/bricks/20/vol1</div><div>Brick78: cpu02:/bricks/20/vol1</div><div>Brick79: cpu03:/bricks/20/vol1</div><div>Brick80: cpu04:/bricks/20/vol1</div><div>Brick81: cpu01:/bricks/21/vol1</div><div>Brick82: cpu02:/bricks/21/vol1</div><div>Brick83: cpu03:/bricks/21/vol1</div><div>Brick84: cpu04:/bricks/21/vol1</div><div>Brick85: cpu01:/bricks/22/vol1</div><div>Brick86: cpu02:/bricks/22/vol1</div><div>Brick87: cpu03:/bricks/22/vol1</div><div>Brick88: cpu04:/bricks/22/vol1</div><div>Brick89: cpu01:/bricks/23/vol1</div><div>Brick90: cpu02:/bricks/23/vol1</div><div>Brick91: cpu03:/bricks/23/vol1</div><div>Brick92: cpu04:/bricks/23/vol1</div><div>Brick93: cpu01:/bricks/24/vol1</div><div>Brick94: cpu02:/bricks/24/vol1</div><div>Brick95: cpu03:/bricks/24/vol1</div><div>Brick96: cpu04:/bricks/24/vol1</div><div>Options Reconfigured:</div><div>diagnostics.count-fop-hits: on</div><div>diagnostics.latency-measurement: on</div><div>nfs.disable: on</div><div>user.cifs: enable</div><div>auth.allow: 10.10.0.*</div><div>performance.quick-read: off</div><div>performance.read-ahead: off</div><div>performance.io-cache: off</div><div>performance.stat-prefetch: off</div><div>cluster.eager-lock: enable</div><div>network.remote-dio: enable</div><div>cluster.quorum-type: auto</div><div>cluster.server-quorum-type: server</div><div>storage.owner-uid: 36</div><div>storage.owner-gid: 36</div><div>server.allow-insecure: on</div><div>network.ping-timeout: 100</div><div>[root@cpu01 log]#</div></div><div><br></div><div>-----------------------------------------</div><div><br></div><div><div>[root@cpu01 log]# gluster volume status</div><div>Status of volume: ds01</div><div>Gluster process Port Online Pid</div><div>------------------------------------------------------------------------------</div><div>Brick cpu01:/bricks/1/vol1 49152 Y 33474</div><div>Brick cpu02:/bricks/1/vol1 49152 Y 40717</div><div>Brick cpu03:/bricks/1/vol1 49152 Y 18080</div><div>Brick cpu04:/bricks/1/vol1 49152 Y 40447</div><div>Brick cpu01:/bricks/2/vol1 49153 Y 33481</div><div>Brick cpu02:/bricks/2/vol1 49153 Y 40724</div><div>Brick cpu03:/bricks/2/vol1 49153 Y 18086</div><div>Brick cpu04:/bricks/2/vol1 49153 Y 40453</div><div>Brick cpu01:/bricks/3/vol1 49154 Y 33489</div><div>Brick cpu02:/bricks/3/vol1 49154 Y 40731</div><div>Brick cpu03:/bricks/3/vol1 49154 Y 18097</div><div>Brick cpu04:/bricks/3/vol1 49154 Y 40460</div><div>Brick cpu01:/bricks/4/vol1 49155 Y 33495</div><div>Brick cpu02:/bricks/4/vol1 49155 Y 40738</div><div>Brick cpu03:/bricks/4/vol1 49155 Y 18103</div><div>Brick cpu04:/bricks/4/vol1 49155 Y 40468</div><div>Brick cpu01:/bricks/5/vol1 49156 Y 33502</div><div>Brick cpu02:/bricks/5/vol1 49156 Y 40745</div><div>Brick cpu03:/bricks/5/vol1 49156 Y 18110</div><div>Brick cpu04:/bricks/5/vol1 49156 Y 40474</div><div>Brick cpu01:/bricks/6/vol1 49157 Y 33509</div><div>Brick cpu02:/bricks/6/vol1 49157 Y 40752</div><div>Brick cpu03:/bricks/6/vol1 49157 Y 18116</div><div>Brick cpu04:/bricks/6/vol1 49157 Y 40481</div><div>Brick cpu01:/bricks/7/vol1 49158 Y 33516</div><div>Brick cpu02:/bricks/7/vol1 49158 Y 40759</div><div>Brick cpu03:/bricks/7/vol1 49158 Y 18122</div><div>Brick cpu04:/bricks/7/vol1 49158 Y 
40488</div><div>Brick cpu01:/bricks/8/vol1 49159 Y 33525</div><div>Brick cpu02:/bricks/8/vol1 49159 Y 40766</div><div>Brick cpu03:/bricks/8/vol1 49159 Y 18130</div><div>Brick cpu04:/bricks/8/vol1 49159 Y 40495</div><div>Brick cpu01:/bricks/9/vol1 49160 Y 33530</div><div>Brick cpu02:/bricks/9/vol1 49160 Y 40773</div><div>Brick cpu03:/bricks/9/vol1 49160 Y 18137</div><div>Brick cpu04:/bricks/9/vol1 49160 Y 40502</div><div>Brick cpu01:/bricks/10/vol1 49161 Y 33538</div><div>Brick cpu02:/bricks/10/vol1 49161 Y 40780</div><div>Brick cpu03:/bricks/10/vol1 49161 Y 18143</div><div>Brick cpu04:/bricks/10/vol1 49161 Y 40509</div><div>Brick cpu01:/bricks/11/vol1 49162 Y 33544</div><div>Brick cpu02:/bricks/11/vol1 49162 Y 40787</div><div>Brick cpu03:/bricks/11/vol1 49162 Y 18150</div><div>Brick cpu04:/bricks/11/vol1 49162 Y 40516</div><div>Brick cpu01:/bricks/12/vol1 49163 Y 33551</div><div>Brick cpu02:/bricks/12/vol1 49163 Y 40794</div><div>Brick cpu03:/bricks/12/vol1 49163 Y 18157</div><div>Brick cpu04:/bricks/12/vol1 49163 Y 40692</div><div>Brick cpu01:/bricks/13/vol1 49164 Y 33558</div><div>Brick cpu02:/bricks/13/vol1 49164 Y 40801</div><div>Brick cpu03:/bricks/13/vol1 49164 Y 18165</div><div>Brick cpu04:/bricks/13/vol1 49164 Y 40700</div><div>Brick cpu01:/bricks/14/vol1 49165 Y 33566</div><div>Brick cpu02:/bricks/14/vol1 49165 Y 40809</div><div>Brick cpu03:/bricks/14/vol1 49165 Y 18172</div><div>Brick cpu04:/bricks/14/vol1 49165 Y 40706</div><div>Brick cpu01:/bricks/15/vol1 49166 Y 33572</div><div>Brick cpu02:/bricks/15/vol1 49166 Y 40815</div><div>Brick cpu03:/bricks/15/vol1 49166 Y 18179</div><div>Brick cpu04:/bricks/15/vol1 49166 Y 40714</div><div>Brick cpu01:/bricks/16/vol1 49167 Y 33579</div><div>Brick cpu02:/bricks/16/vol1 49167 Y 40822</div><div>Brick cpu03:/bricks/16/vol1 49167 Y 18185</div><div>Brick cpu04:/bricks/16/vol1 49167 Y 40722</div><div>Brick cpu01:/bricks/17/vol1 49168 Y 33586</div><div>Brick cpu02:/bricks/17/vol1 49168 Y 40829</div><div>Brick cpu03:/bricks/17/vol1 49168 Y 18192</div><div>Brick cpu04:/bricks/17/vol1 49168 Y 40727</div><div>Brick cpu01:/bricks/18/vol1 49169 Y 33593</div><div>Brick cpu02:/bricks/18/vol1 49169 Y 40836</div><div>Brick cpu03:/bricks/18/vol1 49169 Y 18201</div><div>Brick cpu04:/bricks/18/vol1 49169 Y 40735</div><div>Brick cpu01:/bricks/19/vol1 49170 Y 33600</div><div>Brick cpu02:/bricks/19/vol1 49170 Y 40843</div><div>Brick cpu03:/bricks/19/vol1 49170 Y 18207</div><div>Brick cpu04:/bricks/19/vol1 49170 Y 40741</div><div>Brick cpu01:/bricks/20/vol1 49171 Y 33608</div><div>Brick cpu02:/bricks/20/vol1 49171 Y 40850</div><div>Brick cpu03:/bricks/20/vol1 49171 Y 18214</div><div>Brick cpu04:/bricks/20/vol1 49171 Y 40748</div><div>Brick cpu01:/bricks/21/vol1 49172 Y 33614</div><div>Brick cpu02:/bricks/21/vol1 49172 Y 40858</div><div>Brick cpu03:/bricks/21/vol1 49172 Y 18222</div><div>Brick cpu04:/bricks/21/vol1 49172 Y 40756</div><div>Brick cpu01:/bricks/22/vol1 49173 Y 33621</div><div>Brick cpu02:/bricks/22/vol1 49173 Y 40864</div><div>Brick cpu03:/bricks/22/vol1 49173 Y 18227</div><div>Brick cpu04:/bricks/22/vol1 49173 Y 40762</div><div>Brick cpu01:/bricks/23/vol1 49174 Y 33626</div><div>Brick cpu02:/bricks/23/vol1 49174 Y 40869</div><div>Brick cpu03:/bricks/23/vol1 49174 Y 18234</div><div>Brick cpu04:/bricks/23/vol1 49174 Y 40769</div><div>Brick cpu01:/bricks/24/vol1 49175 Y 33631</div><div>Brick cpu02:/bricks/24/vol1 49175 Y 40874</div><div>Brick cpu03:/bricks/24/vol1 49175 Y 18239</div><div>Brick cpu04:/bricks/24/vol1 49175 Y 40774</div><div>Self-heal 
Daemon on localhost N/A Y 33361</div><div>Self-heal Daemon on cpu05 N/A Y 2353</div><div>Self-heal Daemon on cpu04 N/A Y 40786</div><div>Self-heal Daemon on cpu02 N/A Y 32442</div><div>Self-heal Daemon on cpu03 N/A Y 18664</div><div><br></div><div>Task Status of Volume ds01</div><div>------------------------------------------------------------------------------</div><div>Task : Rebalance</div><div>ID : 5db24b30-4b9f-4b65-8910-a7a0a6d327a4</div><div>Status : completed</div><div><br></div><div>[root@cpu01 log]#</div></div><div><br></div><div><div>[root@cpu01 log]# gluster pool list</div><div>UUID Hostname State</div><div>626c9360-8c09-480f-9707-116e67cc38e6 cpu02 Connected</div><div>dc475d62-b035-4ee6-9006-6f03bf68bf24 cpu05 Connected</div><div>41b5b2ff-3671-47b4-b477-227a107e718d cpu03 Connected</div><div>c0afe114-dfa7-407d-bad7-5a3f97a6f3fc cpu04 Connected</div><div>9b61b0a5-be78-4ac2-b6c0-2db588da5c35 localhost Connected</div><div>[root@cpu01 log]#</div></div><div><br></div><div><img src="cid:ii_14c3122cf5087490" alt="Inline image 3" width="558" height="90"><br></div><div><br></div><div>Thanks,</div><div>Punit</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 19, 2015 at 2:53 PM, Michal Skrivanek <span dir="ltr"><<a href="mailto:michal.skrivanek@redhat.com" target="_blank">michal.skrivanek@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
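
P.S. "gluster volume status" above only shows that the brick processes are running; a heal backlog or split-brain left over from the reboot would not appear there. A quick check would look something like this (just a sketch; run on any of the server nodes, volume name as above):

# list files still pending self-heal on ds01
gluster volume heal ds01 info
# list any files in split-brain
gluster volume heal ds01 info split-brain

On Thu, Mar 19, 2015 at 2:53 PM, Michal Skrivanek <michal.skrivanek@redhat.com> wrote: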

On Mar 19, 2015, at 03:18, Punit Dambiwal <hypunit@gmail.com> wrote:

> Hi All,
>
> Does anyone have any idea about this problem... it seems to be a bug in either oVirt or GlusterFS, and maybe that's why no one has an answer for it... please correct me if I am wrong…

Hi,
as I said, storage access times out, so it seems to me to be a Gluster setup problem; the storage domain you have your VMs on is not working…

Thanks,
michal
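
P.S. A quick way to confirm where the timeout happens would be to read directly from the storage domain mount on one of the hosts; a sketch along these lines (the glusterSD path is an assumption based on the usual oVirt layout plus the domain UUID from the vdsClient output below, so adjust it to whatever the mount command actually shows):

# find where the ds01 domain is mounted on the hypervisor
mount | grep ds01
# try a small read of the domain metadata; if this hangs, the problem is
# between the host and gluster, not in the engine
# (path assumed; substitute your actual mount point and domain UUID)
dd if=/rhev/data-center/mnt/glusterSD/cpu01:_ds01/e732a82f-bae9-4368-8b98-dedc1c3814de/dom_md/metadata bs=4k count=1 of=/dev/null
# and check the vdsm log for storage timeouts
grep -i timeout /var/log/vdsm/vdsm.log | tail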
<div class="HOEnZb"><div class="h5"><br>
><br>
> Thanks,<br>
> Punit<br>
><br>
> On Wed, Mar 18, 2015 at 5:05 PM, Punit Dambiwal <<a href="mailto:hypunit@gmail.com">hypunit@gmail.com</a>> wrote:<br>
> Hi Michal,<br>
><br>
> Would you mind to let me know the possible messedup things...i will check and try to resolve it....still i am communicating gluster community to resolve this issue...<br>
><br>
> But in the ovirt....gluster setup is quite straight....so how come it will be messedup with reboot ?? if it can be messedup with reboot then it seems not good and stable technology for the production storage....<br>
>
> Thanks,
> Punit
>
> On Wed, Mar 18, 2015 at 3:51 PM, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
>
> On Mar 18, 2015, at 03:33, Punit Dambiwal <hypunit@gmail.com> wrote:
>
> > Hi,
> >
> > Is there anyone from the community who can help me solve this issue...??
> >
> > Thanks,
> > Punit
> >
> > On Tue, Mar 17, 2015 at 12:52 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
> > Hi,
> >
> > I am facing one strange issue with oVirt/GlusterFS... I still have not found out whether this issue is related to GlusterFS or oVirt...
> >
> > oVirt :- 3.5.1
> > GlusterFS :- 3.6.1
> > Hosts :- 4 hosts (compute + storage)... each server has 24 bricks
> > Guest VMs :- more than 100
> >
> > Issue :- When I first deployed this cluster it worked well for me (all the guest VMs were created and ran successfully)... but suddenly one day one of my host nodes rebooted, and now none of the VMs can boot up; they fail with the error "Bad Volume Specification".
> >
> > VMId :- d877313c18d9783ca09b62acf5588048
> >
> > VDSM Logs :- http://ur1.ca/jxabi
>
> you've got timeouts while accessing storage… so I guess something got messed up on reboot; it may also be just a Gluster misconfiguration…
>
> > Engine Logs :- http://ur1.ca/jxabv
> >
> > ------------------------
> > [root@cpu01 ~]# vdsClient -s 0 getVolumeInfo e732a82f-bae9-4368-8b98-dedc1c3814de 00000002-0002-0002-0002-000000000145 6d123509-6867-45cf-83a2-6d679b77d3c5 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
> > status = OK
> > domain = e732a82f-bae9-4368-8b98-dedc1c3814de
> > capacity = 21474836480
> > voltype = LEAF
> > description =
> > parent = 00000000-0000-0000-0000-000000000000
> > format = RAW
> > image = 6d123509-6867-45cf-83a2-6d679b77d3c5
> > uuid = 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
> > disktype = 2
> > legality = LEGAL
> > mtime = 0
> > apparentsize = 21474836480
> > truesize = 4562972672
> > type = SPARSE
> > children = []
> > pool =
> > ctime = 1422676305
> > ---------------------
> >
> > I opened the same thread earlier but did not get any definitive answer that solved this issue, so I am reopening it...
> >
> > https://www.mail-archive.com/users@ovirt.org/msg25011.html
> >
> > Thanks,
> > Punit