
On 05/07/2015 01:34 PM, Jorick Astrego wrote:
On 05/06/2015 08:15 PM, knarra wrote:
On 05/06/2015 11:22 PM, Jorick Astrego wrote:
On 05/06/2015 06:24 PM, knarra wrote:
On 05/06/2015 06:59 PM, Jorick Astrego wrote:
On 05/06/2015 02:49 PM, knarra wrote:
On 05/06/2015 05:33 PM, Jorick Astrego wrote:
> Hi,
>
> Looking forward to bz 1049994 "Allow choosing network interface for gluster domain traffic".
>
> Currently I have the bricks on a different storage network and can't get the volume details or profile it.
>
> Will this be handled properly in 3.6? I don't see any changes in gerrit regarding this, but I could be overlooking it.
>
> The errors I currently get:
>
>     Could not fetch brick profile stats
>
>     2015-05-06 10:34:22,430 ERROR [org.ovirt.engine.core.bll.gluster.GetGlusterVolumeProfileInfoQuery] (ajp--127.0.0.1-8702-27) Query GetGlusterVolumeProfileInfoQuery failed. Exception message is null : java.lang.NullPointerException: java.lang.NullPointerException
>
> and
>
>     Error in fetching the brick details, please try again.
>
>     2015-05-06 10:36:14,205 ERROR [org.ovirt.engine.core.bll.gluster.GetGlusterVolumeAdvancedDetailsQuery] (ajp--127.0.0.1-8702-55) Query GetGlusterVolumeAdvancedDetailsQuery failed. Exception message is VdcBLLException: Volume status failed
>     error: Staging failed on *.*.*.*. Error: No brick glustertest1.netbulae.test/gluster/brick1 in volume data
>     Staging failed on *.*.*.*. Error: No brick glustertest1.netbulae.test:/gluster/brick1 in volume data
>     return code: -1 (Failed with error GlusterVolumeStatusFailed and code 4157) : org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: Volume status failed
>     error: Staging failed on *.*.*.*. Error: No brick glustertest1.netbulae.test:/gluster/brick1 in volume data
>     Staging failed on *.*.*.*. Error: No brick glustertest1.netbulae.test:/gluster/brick1 in volume data
>     return code: -1 (Failed with error GlusterVolumeStatusFailed and code 4157): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: Volume status failed
>     error: Staging failed on *.*.*.*. Error: No brick glustertest1.netbulae.test:/gluster/brick1 in volume data
>     Staging failed on *.*.*.*. Error: No brick glustertest1.netbulae.test:/gluster/brick1 in volume data
>     return code: -1 (Failed with error GlusterVolumeStatusFailed and code 4157)
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> Netbulae Virtualization Experts
> ------------------------------------------------------------------------
> Tel: 053 20 30 270   info@netbulae.eu   Staalsteden 4-3A   KvK 08198180
> Fax: 053 20 30 271   www.netbulae.eu    7547 TA Enschede   BTW NL821234584B01
> ------------------------------------------------------------------------
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

Hi,
Can you please check what 'gluster peer status' returns on each of your nodes? I suspect they are in a disconnected state, and that is why you are not able to view these details.
Thanks,
kasturi
On the nodes it gives me the following:
gluster peer status
Connection failed. Please check if gluster daemon is operational.
This means that glusterd is not running on the node. You can check its status by running 'service glusterd status'.
Please start glusterd by running 'service glusterd start' on both of your nodes.
Ideally, when glusterd goes down, the node in oVirt should move to non-operational. Because of BZ 1207150, it does not currently change the state to non-operational.
There is no glusterd on the compute nodes in our setup; we have two clusters, one for virt hosts only and one for GlusterFS only.
Like I said, everything is up and running fine. It's just that I can't get the stats, because the hostname != the GlusterFS NIC IP.
Everything is up and oVirt found the manually configured volume perfectly. But the hostname it lists, glustertest1.netbulae.test, is not what my volume uses for communication, as I created the volume using the IPs of the storage network.
gluster peer status
Number of Peers: 2

Hostname: 10.1.1.3
Uuid: 1cc0875e-1699-42ae-aed2-9152667ed5af
State: Peer in Cluster (Connected)

Hostname: 10.1.1.2
Uuid: a0b3ac13-7388-441a-a238-1deb023cab6c
State: Peer in Cluster (Connected)
Did you import an already existing cluster?
No, I provisioned the nodes, added them to our GlusterFS cluster (with the virt service disabled), and created the volume manually.
oVirt auto-discovered the manually created volume after that.
Error: No brick *glustertest1.netbulae.test*:/gluster/brick1 in volume data
Hostname: *10.1.1.3*
Things should work better in 3.6 (bz1049994), but I don't see any code changes to "GetGlusterVolumeProfileInfoQuery" linked to this in Bugzilla.
Hi Jorick,
For more information on using a separate storage network, please refer to the following feature page:
http://www.ovirt.org/Features/Select_Network_For_Gluster
Thanks,
kasturi
I don't think you understood me completely; the page you refer to is for 3.6. I'm running 3.5.2, without this feature.
The question was: will this get fixed along with the changes in 3.6? I don't see any code changes referring to the statistics/profiling of the gluster volume.
It will take less time testing 3.6 to find out than trying to explain it any further ;-p
Yes, it should, as we have changed the way we identify the brick name based on whether a different network was used to add the brick. Earlier, the brick name was blindly assumed to be <hostname>:<mount path to directory>. Now, if a different interface was used, the IP address of that interface is used to identify the brick. We have yet to test all the flows, but theoretically it should work :)