Gluster command [<UNKNOWN>] failed on server

From: suporte@logicworks.pt
To: Users@ovirt.org
Sent: Monday, May 11, 2015
Subject: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

Hi,

I'm testing oVirt 3.5.1, with hosted engine, using CentOS 7.1. I have installed some VMs with no problem. I needed to shut down the machines (following this procedure: http://lists.ovirt.org/pipermail/users/2014-April/023861.html ); after rebooting I could not get it working again. When trying to activate the hosts, this message comes up:

Gluster command [<UNKNOWN>] failed on server

I have tried a lot of things, including updating to version 3.5.2-1.el7.centos, but no success.

Gluster version:
glusterfs-3.6.3-1.el7.x86_64
glusterfs-libs-3.6.3-1.el7.x86_64
glusterfs-fuse-3.6.3-1.el7.x86_64
glusterfs-cli-3.6.3-1.el7.x86_64
glusterfs-rdma-3.6.3-1.el7.x86_64
glusterfs-api-3.6.3-1.el7.x86_64

Any help?

--
Jose Ferradeira
http://www.logicworks.pt
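For anyone triaging the same symptom after a reboot, the first checks on each gluster node would look like the sketch below. The commands are stock glusterfs/systemd CLI; only the volume name gv0 is taken from later in this thread:

# systemctl status glusterd
# gluster peer status
# gluster volume status gv0

If glusterd is down, both gluster commands fail immediately; if a peer shows as Disconnected, oVirt's gluster queries can fail even though the local daemon is up.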

From: "knarra" <knarra@redhat.com>
To: suporte@logicworks.pt, Users@ovirt.org
Sent: Monday, May 11, 2015 12:45:19
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

On 05/11/2015 05:00 PM, suporte@logicworks.pt wrote:
> [...]
Hi Jose,
Can you check if glusterd is running on the nodes by executing "service glusterd status"?

Thanks
kasturi.
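A note on the reboot angle: if glusterd is not enabled to start at boot, every host restart reproduces this failure. The stock systemd check and fix would be:

# systemctl is-enabled glusterd
# systemctl enable glusterd
# systemctl start glusterd

The status output later in this thread already shows "enabled", so if the daemon is still down after boot, unit ordering or an early startup crash is the likelier culprit; "journalctl -u glusterd -b" shows what happened during boot.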

From: suporte@logicworks.pt
To: "knarra" <knarra@redhat.com>
Cc: Users@ovirt.org
Sent: Monday, May 11, 2015 13:15:14
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

Hi,

I have 2 nodes, but only one is working with glusterfs.

But you were right, glusterfs was not running; I just started the service - I didn't check it :( :

# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2015-05-11 13:06:24 WEST; 3s ago
  Process: 4482 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 4483 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─4483 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─4618 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv...

May 11 13:06:22 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 13:06:24 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.

But still the problem remains.

Should I first start the glusterfs before the hosted engine?

Thanks
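On the ordering question: rather than starting services by hand in the right order, systemd can be told that the hosted-engine agent needs glusterd first. A minimal sketch, assuming the agent unit is named ovirt-ha-agent.service (verify with "systemctl list-unit-files | grep ovirt"):

# mkdir -p /etc/systemd/system/ovirt-ha-agent.service.d
# cat > /etc/systemd/system/ovirt-ha-agent.service.d/10-glusterd.conf <<'EOF'
[Unit]
# Start glusterd before the agent, and pull it in when the agent starts.
After=glusterd.service
Wants=glusterd.service
EOF
# systemctl daemon-reload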

From: suporte@logicworks.pt
Sent: Monday, May 11, 2015 16:05
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

Hi,

I just restarted it again, and this time started the gluster service before starting the hosted engine, but I still get the same error message.

Any more ideas?

Thanks

Jose

# hosted-engine --vm-status

--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : ovserver1.domain.com
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 4998
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=4998 (Mon May 11 16:03:48 2015)
        host-id=1
        score=2400
        maintenance=False
        state=EngineUp

# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 1h 27min ago
  Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 3061 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─3202 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv...

May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.
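The UI's generic "[<UNKNOWN>] failed" hides which verb actually failed; the logs name it, as the engine log later in this thread shows. The paths below are the oVirt 3.5 defaults:

# tail -f /var/log/ovirt-engine/engine.log    (on the engine VM)
# tail -f /var/log/vdsm/vdsm.log              (on the host being activated)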

From: "Daniel Helgenberger" <daniel.helgenberger@m-box.de>
To: users@ovirt.org
Sent: Monday, May 11, 2015 18:17:47
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

On Mon, 2015-05-11 at 16:05 +0100, suporte@logicworks.pt wrote:
> Hi,
>
> I just restarted it again, and this time started the gluster service before starting the hosted engine, but I still get the same error message.
>
> Any more ideas?

I just had the same problem. My <unknown> error was indeed due to the fact that glusterd / glusterfsd were not running.
After starting them it turned out the host setup did not automatically add the iptables rules for gluster. I added to iptables:

# gluster
-A INPUT -p tcp --dport 24007:24011 -j ACCEPT
-A INPUT -p tcp --dport 38465:38485 -j ACCEPT

Afterwards 'gluster peer status' worked and my host was operational again.

Hint: Sometimes this is due to gluster itself. Restarting glusterd works most of the time to fix this.
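Those two rules are written in /etc/sysconfig/iptables syntax ("# gluster" is just a comment there). Applying them live and persisting them, assuming the iptables-services package rather than firewalld manages the firewall, would look like:

# iptables -A INPUT -p tcp --dport 24007:24011 -j ACCEPT   # glusterd/management ports
# iptables -A INPUT -p tcp --dport 38465:38485 -j ACCEPT   # gluster NFS ports
# service iptables save    # writes the running rules to /etc/sysconfig/iptables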
> [...]
--
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN

www.m-box.de www.monkeymen.tv

Managing directors: Martin Retschitzegger / Michaela Göllner
Commercial register: Amtsgericht Charlottenburg / HRB 112767

From: suporte@logicworks.pt
To: "Daniel Helgenberger" <daniel.helgenberger@m-box.de>
Cc: users@ovirt.org
Sent: Tuesday, May 12, 2015 10:14:11
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

Hi Daniel,

Well, I have glusterfs up and running:

# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 19h ago
  Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 3061 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─3202 /usr/sbin/glusterfsd -s ovserver2.domain.com --volfile-id gv...

May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.

# gluster volume info

Volume Name: gv0
Type: Distribute
Volume ID: 6ccd1831-6c4c-41c3-a695-8c7b57cf1261
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovserver2.domain.com:/home2/brick1

I stopped iptables, but cannot bring the nodes up.
Everything was working until I needed to do a restart.

Any more ideas?
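With glusterd up and iptables stopped but activation still failing, the remaining split is between peer connectivity and what VDSM itself exposes. Two quick generic checks (the port number comes from Daniel's rules above):

# gluster peer status
# ss -tlnp | grep 24007    # is glusterd listening where peers and oVirt expect it?

With a single-brick, single-server volume like gv0, "Number of Peers: 0" is normal here; what matters is whether the command errors out.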

------=_Part_512877_22269626.1431423152050 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable This is the engine log:=20 2015-05-12 10:27:44,012 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand]= (ajp--127.0.0.1-8702-2) [76c5a7e7] Lock Acquired to object EngineLock [exc= lusiveLocks=3D key: b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS=20 , sharedLocks=3D ]=20 2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand]= (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Running command: ActivateVd= sCommand internal: false. Entities affected : ID: b505a91a-38b2-48c9-a161-0= 6f1360a3d6f Type: VDSAction group MANIPULATE_HOST with role type ADMIN=20 2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand]= (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Before acquiring lock in or= der to prevent monitoring for host ovserver1 from data-center Default=20 2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand]= (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock acquired, from now a m= onitoring of host will be skipped for host ovserver1 from data-center Defau= lt=20 2015-05-12 10:27:44,189 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusV= DSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START, SetVdsStat= usVDSCommand(HostName =3D ovserver1, HostId =3D b505a91a-38b2-48c9-a161-06f= 1360a3d6f, status=3DUnassigned, nonOperationalReason=3DNONE, stopSpmFailure= Logged=3Dfalse), log id: dca9241=20 2015-05-12 10:27:44,236 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusV= DSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH, SetVdsSta= tusVDSCommand, log id: dca9241=20 2015-05-12 10:27:44,320 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.Set= HaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7]= START, SetHaMaintenanceModeVDSCommand(HostName =3D ovserver1, HostId =3D b= 505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 3106a21a=20 2015-05-12 10:27:44,324 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.Set= HaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7]= FINISH, SetHaMaintenanceModeVDSCommand, log id: 3106a21a=20 2015-05-12 10:27:44,324 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand]= (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Activate finished. Lock rel= eased. 
Monitoring can run now for host ovserver1 from data-center Default= =20 2015-05-12 10:27:44,369 INFO [org.ovirt.engine.core.dal.dbbroker.auditlogha= ndling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Cor= relation ID: 76c5a7e7, Job ID: 41492531-353a-41e7-96ab-ca4a09651fbc, Call S= tack: null, Custom Event ID: -1, Message: Host ovserver1 was activated by a= dmin@internal.=20 2015-05-12 10:27:44,411 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand]= (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock freed to object Engine= Lock [exclusiveLocks=3D key: b505a91a-38b2-48c9-a161-06f1360a3d6f value: VD= S=20 , sharedLocks=3D ]=20 2015-05-12 10:27:45,047 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.Get= HardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] START,= GetHardwareInfoVDSCommand(HostName =3D ovserver1, HostId =3D b505a91a-38b2= -48c9-a161-06f1360a3d6f, vds=3DHost[ovserver1,b505a91a-38b2-48c9-a161-06f13= 60a3d6f]), log id: 633e992b=20 2015-05-12 10:27:45,051 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.Get= HardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] FINISH= , GetHardwareInfoVDSCommand, log id: 633e992b=20 2015-05-12 10:27:45,052 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (= DefaultQuartzScheduler_Worker-51) [4d2b49f] Host ovserver1 is running with = disabled SELinux.=20 2015-05-12 10:27:45,137 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOr= ClusterChangedCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Runnin= g command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entitie= s affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS=20 2015-05-12 10:27:45,139 INFO [org.ovirt.engine.core.vdsbroker.gluster.Glust= erServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] STAR= T, GlusterServersListVDSCommand(HostName =3D ovserver1, HostId =3D b505a91a= -38b2-48c9-a161-06f1360a3d6f), log id: 770f2d6e=20 2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.Glust= erServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unex= pected return value: StatusForXmlRpc [mCode=3D-32601, mMessage=3DThe method= does not exist / is not available.]=20 2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.Glust= erServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unex= pected return value: StatusForXmlRpc [mCode=3D-32601, mMessage=3DThe method= does not exist / is not available.]=20 2015-05-12 10:27:45,142 ERROR [org.ovirt.engine.core.vdsbroker.gluster.Glus= terServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Fai= led in GlusterServersListVDS method=20 2015-05-12 10:27:45,143 ERROR [org.ovirt.engine.core.vdsbroker.gluster.Glus= terServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Com= mand GlusterServersListVDSCommand(HostName =3D ovserver1, HostId =3D b505a9= 1a-38b2-48c9-a161-06f1360a3d6f) execution failed. Exception: VDSErrorExcept= ion: VDSGenericException: VDSErrorException: Failed to GlusterServersListVD= S, error =3D The method does not exist / is not available., code =3D -32601= =20 2015-05-12 10:27:45,143 INFO [org.ovirt.engine.core.vdsbroker.gluster.Glust= erServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] FINI= SH, GlusterServersListVDSCommand, log id: 770f2d6e=20 2015-05-12 10:27:45,311 INFO [org.ovirt.engine.core.bll.SetNonOperationalVd= sCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Running command: Se= tNonOperationalVdsCommand internal: true. 
Entities affected : ID: b505a91a-= 38b2-48c9-a161-06f1360a3d6f Type: VDS=20 2015-05-12 10:27:45,312 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusV= DSCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] START, SetVdsStatu= sVDSCommand(HostName =3D ovserver1, HostId =3D b505a91a-38b2-48c9-a161-06f1= 360a3d6f, status=3DNonOperational, nonOperationalReason=3DGLUSTER_COMMAND_F= AILED, stopSpmFailureLogged=3Dfalse), log id: 9dbd40f=20 2015-05-12 10:27:45,353 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusV= DSCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] FINISH, SetVdsStat= usVDSCommand, log id: 9dbd40f=20 2015-05-12 10:27:45,355 ERROR [org.ovirt.engine.core.bll.SetNonOperationalV= dsCommand] (org.ovirt.thread.pool-8-thread-41) [7e3688d2] ResourceManager::= vdsMaintenance - There is not host capable of running the hosted engine VM= =20 2015-05-12 10:27:45,394 ERROR [org.ovirt.engine.core.dal.dbbroker.auditlogh= andling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Cor= relation ID: 7e3688d2, Job ID: 2e6c4d5a-c1c3-4713-b103-2e20c2892e6b, Call S= tack: null, Custom Event ID: -1, Message: Gluster command [<UNKNOWN>] faile= d on server ovserver1.=20 2015-05-12 10:27:45,561 INFO [org.ovirt.engine.core.dal.dbbroker.auditlogha= ndling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Corr= elation ID: null, Call Stack: null, Custom Event ID: -1, Message: Status of= host ovserver1 was set to NonOperational.=20 2015-05-12 10:27:45,696 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCom= mand] (DefaultQuartzScheduler_Worker-51) [b01e893] Running command: HandleV= dsVersionCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9= -a161-06f1360a3d6f Type: VDS=20 2015-05-12 10:27:45,697 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunT= imeInfo] (DefaultQuartzScheduler_Worker-51) [b01e893] Host b505a91a-38b2-48= c9-a161-06f1360a3d6f : ovserver1 is already in NonOperational status for re= ason GLUSTER_COMMAND_FAILED. 
SetNonOperationalVds command is skipped.

VDSM log:
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::1191::Storage.TaskManager.Task::(prepare) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::finished: {'75e6fd87-b38b-4280-b676-08c16748ff97': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000110247', 'lastCheck': '6.5', 'valid': True}}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::595::Storage.TaskManager.Task::(_updateState) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::moving from state preparing -> state finished
Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::993::Storage.TaskManager.Task::(_decref) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::ref 0 aborting False
JsonRpc (StompReactor)::DEBUG::2015-05-12 10:27:49,914::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-05-12 10:27:49,915::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-84705::DEBUG::2015-05-12 10:27:49,916::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Detector thread::DEBUG::2015-05-12 10:27:49,974::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::246::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 49510)
Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXMLRPC::1133::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('09546d15-6679-4a99-9fe6-3fa4730811d4',) {}
Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXMLRPC::1140::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': u'5900'}], 'memUsage': '0', 'acpiEnable': 'true', 'guestFQDN': '', 'pid': '5587', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': '0', 'balloonInfo': {}, 'pauseCode': 'NOERR', 'network': {u'vnet0': {'macAddr': '00:16:3e:42:95:b9', 'rxDropped': '29', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmType': 'kvm', 'cpuUser': '1.64', 'elapsedTime': '69926', 'vmJobs': {}, 'cpuSys': '0.27', 'appsList': [], 'displayType': 'vnc', 'vcpuCount': '2', 'clientIp': '', 'hash': '-3724559636060176164', 'vmId': '09546d15-6679-4a99-9fe6-3fa4730811d4', 'displayIp': '0', 'vcpuPeriod': 100000L, 'displayPort': u'5900', 'vcpuQuota': '-1', 'kvmEnable': 'true', 'disks': {u'vda': {'readLatency': '0', 'apparentsize': '32212254720', 'writeLatency': '0', 'imageID': '39f6830c-8fa1-4abd-9259-90654e91ff2d', 'flushLatency': '0', 'truesize': '15446843392'}, u'hdc': {'flushLatency': '0', 'readLatency': '0', 'truesize': '0', 'apparentsize': '0', 'writeLatency': '0'}}, 'monitorResponse': '0', 'statsAge': '1.83', 'username': 'Unknown', 'status': 'Up', 'guestCPUCount': -1, 'ioTune': [], 'guestIPs': ''}]}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state init -> state preparing
clientIFinit::INFO::2015-05-12 10:27:50,809::logUtils::44::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList(options=None)
clientIFinit::INFO::2015-05-12 10:27:50,809::logUtils::47::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList, Return response: {'poollist': []}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::1191::Storage.TaskManager.Task::(prepare) Task=`decf270c-4715-432c-a01d-942181f61e80`::finished: {'poollist': []}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state preparing -> state finished
clientIFinit::DEBUG::2015-05-12 10:27:50,809::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
clientIFinit::DEBUG::2015-05-12 10:27:50,810::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
clientIFinit::DEBUG::2015-05-12 10:27:50,810::task::993::Storage.TaskManager.Task::(_decref) Task=`decf270c-4715-432c-a01d-942181f61e80`::ref 0 aborting False

Is something wrong with GlusterFS? Or with CentOS 7.1?

Is vdsm-gluster installed on your node? From the logs, it seems that it is not.
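A quick way to confirm this on the host - a sketch, assuming the stock package name and a yum-based install:

# rpm -q vdsm-gluster
package vdsm-gluster is not installed
# yum install vdsm-gluster
# systemctl restart vdsmd

Without vdsm-gluster, VDSM never registers the gluster verbs, which matches the -32601 "method does not exist / is not available" replies to GlusterServersListVDS in the engine log below.

On 05/12/2015 03:02 PM, suporte@logicworks.pt wrote: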
This is the engine log:

2015-05-12 10:27:44,012 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-2) [76c5a7e7] Lock Acquired to object EngineLock [exclusiveLocks= key: b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS, sharedLocks= ]
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Running command: ActivateVdsCommand internal: false. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDSAction group MANIPULATE_HOST with role type ADMIN
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Before acquiring lock in order to prevent monitoring for host ovserver1 from data-center Default
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock acquired, from now a monitoring of host will be skipped for host ovserver1 from data-center Default
2015-05-12 10:27:44,189 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START, SetVdsStatusVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, status=Unassigned, nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: dca9241
2015-05-12 10:27:44,236 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH, SetVdsStatusVDSCommand, log id: dca9241
2015-05-12 10:27:44,320 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START, SetHaMaintenanceModeVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 3106a21a
2015-05-12 10:27:44,324 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH, SetHaMaintenanceModeVDSCommand, log id: 3106a21a
2015-05-12 10:27:44,324 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Activate finished. Lock released. Monitoring can run now for host ovserver1 from data-center Default
2015-05-12 10:27:44,369 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Correlation ID: 76c5a7e7, Job ID: 41492531-353a-41e7-96ab-ca4a09651fbc, Call Stack: null, Custom Event ID: -1, Message: Host ovserver1 was activated by admin@internal.
2015-05-12 10:27:44,411 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock freed to object EngineLock [exclusiveLocks= key: b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS, sharedLocks= ]
2015-05-12 10:27:45,047 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] START, GetHardwareInfoVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, vds=Host[ovserver1,b505a91a-38b2-48c9-a161-06f1360a3d6f]), log id: 633e992b
2015-05-12 10:27:45,051 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] FINISH, GetHardwareInfoVDSCommand, log id: 633e992b
2015-05-12 10:27:45,052 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-51) [4d2b49f] Host ovserver1 is running with disabled SELinux.
2015-05-12 10:27:45,137 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS
2015-05-12 10:27:45,139 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] START, GlusterServersListVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 770f2d6e
2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected return value: StatusForXmlRpc [mCode=-32601, mMessage=The method does not exist / is not available.]
2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected return value: StatusForXmlRpc [mCode=-32601, mMessage=The method does not exist / is not available.]
2015-05-12 10:27:45,142 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Failed in GlusterServersListVDS method
2015-05-12 10:27:45,143 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Command GlusterServersListVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterServersListVDS, error = The method does not exist / is not available., code = -32601
2015-05-12 10:27:45,143 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] FINISH, GlusterServersListVDSCommand, log id: 770f2d6e
2015-05-12 10:27:45,311 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS
2015-05-12 10:27:45,312 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] START, SetVdsStatusVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, status=NonOperational, nonOperationalReason=GLUSTER_COMMAND_FAILED, stopSpmFailureLogged=false), log id: 9dbd40f
2015-05-12 10:27:45,353 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] FINISH, SetVdsStatusVDSCommand, log id: 9dbd40f
2015-05-12 10:27:45,355 ERROR [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (org.ovirt.thread.pool-8-thread-41) [7e3688d2] ResourceManager::vdsMaintenance - There is not host capable of running the hosted engine VM
2015-05-12 10:27:45,394 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Correlation ID: 7e3688d2, Job ID: 2e6c4d5a-c1c3-4713-b103-2e20c2892e6b, Call Stack: null, Custom Event ID: -1, Message: Gluster command [<UNKNOWN>] failed on server ovserver1.
2015-05-12 10:27:45,561 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Status of host ovserver1 was set to NonOperational.
2015-05-12 10:27:45,696 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (DefaultQuartzScheduler_Worker-51) [b01e893] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS
2015-05-12 10:27:45,697 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-51) [b01e893] Host b505a91a-38b2-48c9-a161-06f1360a3d6f : ovserver1 is already in NonOperational status for reason GLUSTER_COMMAND_FAILED. SetNonOperationalVds command is skipped.
VDSM log:

Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::1191::Storage.TaskManager.Task::(prepare) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::finished: {'75e6fd87-b38b-4280-b676-08c16748ff97': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000110247', 'lastCheck': '6.5', 'valid': True}}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::595::Storage.TaskManager.Task::(_updateState) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::moving from state preparing -> state finished
Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::993::Storage.TaskManager.Task::(_decref) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::ref 0 aborting False
JsonRpc (StompReactor)::DEBUG::2015-05-12 10:27:49,914::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-05-12 10:27:49,915::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-84705::DEBUG::2015-05-12 10:27:49,916::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Detector thread::DEBUG::2015-05-12 10:27:49,974::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::246::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 49510)
Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXMLRPC::1133::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('09546d15-6679-4a99-9fe6-3fa4730811d4',) {}
Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXMLRPC::1140::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': u'5900'}], 'memUsage': '0', 'acpiEnable': 'true', 'guestFQDN': '', 'pid': '5587', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': '0', 'balloonInfo': {}, 'pauseCode': 'NOERR', 'network': {u'vnet0': {'macAddr': '00:16:3e:42:95:b9', 'rxDropped': '29', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmType': 'kvm', 'cpuUser': '1.64', 'elapsedTime': '69926', 'vmJobs': {}, 'cpuSys': '0.27', 'appsList': [], 'displayType': 'vnc', 'vcpuCount': '2', 'clientIp': '', 'hash': '-3724559636060176164', 'vmId': '09546d15-6679-4a99-9fe6-3fa4730811d4', 'displayIp': '0', 'vcpuPeriod': 100000L, 'displayPort': u'5900', 'vcpuQuota': '-1', 'kvmEnable': 'true', 'disks': {u'vda': {'readLatency': '0', 'apparentsize': '32212254720', 'writeLatency': '0', 'imageID': '39f6830c-8fa1-4abd-9259-90654e91ff2d', 'flushLatency': '0', 'truesize': '15446843392'}, u'hdc': {'flushLatency': '0', 'readLatency': '0', 'truesize': '0', 'apparentsize': '0', 'writeLatency': '0'}}, 'monitorResponse': '0', 'statsAge': '1.83', 'username': 'Unknown', 'status': 'Up', 'guestCPUCount': -1, 'ioTune': [], 'guestIPs': ''}]}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state init -> state preparing
clientIFinit::INFO::2015-05-12 10:27:50,809::logUtils::44::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList(options=None)
clientIFinit::INFO::2015-05-12 10:27:50,809::logUtils::47::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList, Return response: {'poollist': []}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::1191::Storage.TaskManager.Task::(prepare) Task=`decf270c-4715-432c-a01d-942181f61e80`::finished: {'poollist': []}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state preparing -> state finished
clientIFinit::DEBUG::2015-05-12 10:27:50,809::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
clientIFinit::DEBUG::2015-05-12 10:27:50,810::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
clientIFinit::DEBUG::2015-05-12 10:27:50,810::task::993::Storage.TaskManager.Task::(_decref) Task=`decf270c-4715-432c-a01d-942181f61e80`::ref 0 aborting False
Is something wrong with GlusterFS? Or with CentOS 7.1?
------------------------------------------------------------------------
*From: *suporte@logicworks.pt
*To: *"Daniel Helgenberger" <daniel.helgenberger@m-box.de>
*Cc: *users@ovirt.org
*Sent: *Tuesday, 12 May 2015 10:14:11
*Subject: *Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
Hi Daniel,
Well, I have glusterfs up and running:
# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 19h ago
  Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 3061 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─3202 /usr/sbin/glusterfsd -s ovserver2.domain.com --volfile-id gv...

May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.
# gluster volume info

Volume Name: gv0
Type: Distribute
Volume ID: 6ccd1831-6c4c-41c3-a695-8c7b57cf1261
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovserver2.domain.com:/home2/brick1
I stopped iptables, but I still cannot bring the nodes up. Everything was working until I needed to do a restart.
Any more ideas?
------------------------------------------------------------------------
*From: *"Daniel Helgenberger" <daniel.helgenberger@m-box.de>
*To: *users@ovirt.org
*Sent: *Monday, 11 May 2015 18:17:47
*Subject: *Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
On Mo, 2015-05-11 at 16:05 +0100, suporte@logicworks.pt wrote:
Hi,
I just restarted it again, and this time started the gluster service before starting the hosted engine, but I still get the same error message.
Any more ideas?

I just had the same problem.
My <unknown> error was indeed due to the fact that glusterd / glusterfsd were not running.
After starting them it turned out the host setup did not automatically add the iptables rules for gluster. I added to iptables:
# gluster
-A INPUT -p tcp --dport 24007:24011 -j ACCEPT
-A INPUT -p tcp --dport 38465:38485 -j ACCEPT
Afterwards 'gluster peer status' worked and my host was operational again.
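For reference, the same thing applied at runtime, instead of in /etc/sysconfig/iptables, would look roughly like this (same ports as the rules above; the save step assumes the iptables-services style setup rather than firewalld):

# iptables -I INPUT -p tcp --dport 24007:24011 -j ACCEPT
# iptables -I INPUT -p tcp --dport 38465:38485 -j ACCEPT
# service iptables save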
Hint: Sometimes this is due to gluster itself. Restarting glusterd works most of the time to fix this.
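That is, something like:

# systemctl restart glusterd
# gluster peer status
# gluster volume status

If the peers come back as connected, activating the host from the engine usually works again.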
Thanks
Jose
# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date                  : True
Hostname                           : ovserver1.domain.com
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 4998
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=4998 (Mon May 11 16:03:48 2015)
        host-id=1
        score=2400
        maintenance=False
        state=EngineUp
# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 1h 27min ago
  Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 3061 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─3202 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv...

May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.
----- Original Message -----
From: suporte@logicworks.pt
To: "knarra" <knarra@redhat.com>
Cc: Users@ovirt.org
Sent: Monday, 11 May 2015 13:15:14
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
Hi,
I have 2 nodes, but only one is working with glusterfs.
But you were right, glusterfs was not running. I have just started the service - I didn't check it :( :

# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2015-05-11 13:06:24 WEST; 3s ago
  Process: 4482 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 4483 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─4483 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─4618 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv...

May 11 13:06:22 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 13:06:24 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.
But the problem still remains.
Should I start glusterfs first, before the hosted engine?
Thanks
----- Original Message -----
De: "knarra" <knarra@redhat.com> Para: suporte@logicworks.pt, Users@ovirt.org Enviadas: Segunda-feira, 11 De Maio de 2015 12:45:19 Assunto: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
On 05/11/2015 05:00 PM, suporte@logicworks.pt wrote:
Hi,
I'm testing ovirt 3.5.1, with hosted engine, using CentOS 7.1. I have installed some VMs with no problem. I needed to shut down the machines (following this procedure: http://lists.ovirt.org/pipermail/users/2014-April/023861.html ); after rebooting I could not get it working again. When trying to activate the hosts, this message comes up: Gluster command [<UNKNOWN>] failed on server.
I have tried a lot of things, including updating it to Version 3.5.2-1.el7.centos, but no success.
Gluster version:
glusterfs-3.6.3-1.el7.x86_64
glusterfs-libs-3.6.3-1.el7.x86_64
glusterfs-fuse-3.6.3-1.el7.x86_64
glusterfs-cli-3.6.3-1.el7.x86_64
glusterfs-rdma-3.6.3-1.el7.x86_64
glusterfs-api-3.6.3-1.el7.x86_64
Any help?
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Daniel Helgenberger m box bewegtbild GmbH
P: +49/30/2408781-22 F: +49/30/2408781-10
ACKERSTR. 19 D-10115 BERLIN
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
'0'}}, 'monitorResponse': '0', 'statsAge': '1.83', 'username': 'Unknown', 'status': 'Up', 'guestCPUCount': -1, 'ioTune': [], 'guestIPs': ''}]}<br> clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state init -> state preparing<br> clientIFinit::<a class="moz-txt-link-freetext" href="INFO::2015-05-12">INFO::2015-05-12</a> 10:27:50,809::logUtils::44::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList(options=None)<br> clientIFinit::<a class="moz-txt-link-freetext" href="INFO::2015-05-12">INFO::2015-05-12</a> 10:27:50,809::logUtils::47::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList, Return response: {'poollist': []}<br> clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::1191::Storage.TaskManager.Task::(prepare) Task=`decf270c-4715-432c-a01d-942181f61e80`::finished: {'poollist': []}<br> clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state preparing -> state finished<br> clientIFinit::DEBUG::2015-05-12 10:27:50,809::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}<br> clientIFinit::DEBUG::2015-05-12 10:27:50,810::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}<br> clientIFinit::DEBUG::2015-05-12 10:27:50,810::task::993::Storage.TaskManager.Task::(_decref) Task=`decf270c-4715-432c-a01d-942181f61e80`::ref 0 aborting False</div> <div><br> </div> <div><br> </div> <div><br> </div> <div>It's something wrong with Glusterfs? Or Centos 7.1?<br> </div> <div><br> </div> <div><br> </div> <hr id="zwchr"> <div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;" data-mce-style="color: #000; font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>De: </b><a class="moz-txt-link-abbreviated" href="mailto:suporte@logicworks.pt">suporte@logicworks.pt</a><br> <b>Para: </b>"Daniel Helgenberger" <a class="moz-txt-link-rfc2396E" href="mailto:daniel.helgenberger@m-box.de"><daniel.helgenberger@m-box.de></a><br> <b>Cc: </b><a class="moz-txt-link-abbreviated" href="mailto:users@ovirt.org">users@ovirt.org</a><br> <b>Enviadas: </b>Terça-feira, 12 De Maio de 2015 10:14:11<br> <b>Assunto: </b>Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server<br> <div><br> </div> <div style="font-family: Times New Roman; font-size: 10pt; color: #000000" data-mce-style="font-family: Times New Roman; font-size: 10pt; color: #000000;"> <div>Hi Daniel,<br> </div> <div><br> </div> <div>Well, I have glusterfs up and running:<br> </div> <div><br> </div> <div># service glusterd status<br> Redirecting to /bin/systemctl status glusterd.service<br> glusterd.service - GlusterFS, a clustered file-system server<br> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)<br> Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 19h ago<br> Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)<br> Main PID: 3061 (glusterd)<br> CGroup: /system.slice/glusterd.service<br> ââ3061 /usr/sbin/glusterd -p /var/run/glusterd.pid<br> ââ3202 /usr/sbin/glusterfsd -s ovserver2.domain.com --volfile-id gv...<br> <div><br> </div> May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a 
cluste....<br> May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....<br> Hint: Some lines were ellipsized, use -l to show in full.<br> <div><br> </div> </div> <div># gluster volume info <br> Volume Name: gv0<br> Type: Distribute<br> Volume ID: 6ccd1831-6c4c-41c3-a695-8c7b57cf1261<br> Status: Started<br> Number of Bricks: 1<br> Transport-type: tcp<br> Bricks:<br> Brick1: ovserver2.domain.com:/home2/brick1<br> <div><br> </div> </div> <div>I stopped iptables, but cannot bring the nodes up.<br> </div> <div>Everything was working until I needed to do a restart.<br> </div> <div><br> </div> <div>Any more ideas?<br> </div> <div><br> </div> <div><br> </div> <div><br> </div> <hr id="zwchr"> <div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;" data-mce-style="color: #000; font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>De: </b>"Daniel Helgenberger" <a class="moz-txt-link-rfc2396E" href="mailto:daniel.helgenberger@m-box.de"><daniel.helgenberger@m-box.de></a><br> <b>Para: </b><a class="moz-txt-link-abbreviated" href="mailto:users@ovirt.org">users@ovirt.org</a><br> <b>Enviadas: </b>Segunda-feira, 11 De Maio de 2015 18:17:47<br> <b>Assunto: </b>Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server<br> <div><br> </div> <br> <div><br> </div> On Mo, 2015-05-11 at 16:05 +0100, <a class="moz-txt-link-abbreviated" href="mailto:suporte@logicworks.pt">suporte@logicworks.pt</a> wrote:<br> > Hi, <br> > <br> > I just restart it again, and now start the gluster service before starting the hosted engine, but still gets the same error message. <br> > <br> > Any more ideas? <br> I just had the same problem. <br> My <unknown> error indeed to the fact that glusterd / glusterfsd were<br> not running.<br> <div><br> </div> After starting them it turned out the host setup did not automatically<br> add the iptables rules for gluster. I added to iptables:<br> <div><br> </div> # gluster<br> -A INPUT -p tcp --dport 24007:24011 -j ACCEPT<br> -A INPUT -p tcp --dport 38465:38485 -j ACCEPT<br> <div><br> </div> Afterwards 'gluster peer status' worked and my host was operational<br> again.<br> <div><br> </div> Hint: Sometimes this is do to gluster itself. 
Restaring glusterd works<br> most of the time to fix this.<br> <div><br> </div> > <br> > Thanks <br> > <br> > Jose <br> > <br> > # hosted-engine --vm-status <br> > <br> > --== Host 1 status ==-- <br> > <br> > Status up-to-date : True <br> > Hostname : ovserver1.domain.com <br> > Host ID : 1 <br> > Engine status : {"health": "good", "vm": "up", "detail": "up"} <br> > Score : 2400 <br> > Local maintenance : False <br> > Host timestamp : 4998 <br> > Extra metadata (valid at timestamp): <br> > metadata_parse_version=1 <br> > metadata_feature_version=1 <br> > timestamp=4998 (Mon May 11 16:03:48 2015) <br> > host-id=1 <br> > score=2400 <br> > maintenance=False <br> > state=EngineUp <br> > <br> > <br> > # service glusterd status <br> > Redirecting to /bin/systemctl status glusterd.service <br> > glusterd.service - GlusterFS, a clustered file-system server <br> > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled) <br> > Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 1h 27min ago <br> > Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS) <br> > Main PID: 3061 (glusterd) <br> > CGroup: /system.slice/glusterd.service <br> > ââ3061 /usr/sbin/glusterd -p /var/run/glusterd.pid <br> > ââ3202 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv... <br> > <br> > May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste.... <br> > May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster.... <br> > Hint: Some lines were ellipsized, use -l to show in full. <br> > <br> > <br> > ----- Mensagem original -----<br> > <br> > De: <a class="moz-txt-link-abbreviated" href="mailto:suporte@logicworks.pt">suporte@logicworks.pt</a> <br> > Para: "knarra" <a class="moz-txt-link-rfc2396E" href="mailto:knarra@redhat.com"><knarra@redhat.com></a> <br> > Cc: <a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a> <br> > Enviadas: Segunda-feira, 11 De Maio de 2015 13:15:14 <br> > Assunto: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server <br> > <br> > Hi, <br> > <br> > I have 2 nodes, but only one is working with glusterfs. <br> > <br> > But you were right, glusterfs was not running, I just start the service - I didn't check it :( : <br> > # service glusterd status <br> > Redirecting to /bin/systemctl status glusterd.service <br> > glusterd.service - GlusterFS, a clustered file-system server <br> > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled) <br> > Active: active (running) since Mon 2015-05-11 13:06:24 WEST; 3s ago <br> > Process: 4482 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS) <br> > Main PID: 4483 (glusterd) <br> > CGroup: /system.slice/glusterd.service <br> > ââ4483 /usr/sbin/glusterd -p /var/run/glusterd.pid <br> > ââ4618 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv... <br> > <br> > May 11 13:06:22 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste.... <br> > May 11 13:06:24 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster.... <br> > Hint: Some lines were ellipsized, use -l to show in full. <br> > <br> > But still the problem remains <br> > <br> > Should I first start the glusterfs before the hosted engine? 
<br> > <br> > Thanks <br> > <br> > ----- Mensagem original -----<br> > <br> > De: "knarra" <a class="moz-txt-link-rfc2396E" href="mailto:knarra@redhat.com"><knarra@redhat.com></a> <br> > Para: <a class="moz-txt-link-abbreviated" href="mailto:suporte@logicworks.pt">suporte@logicworks.pt</a>, <a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a> <br> > Enviadas: Segunda-feira, 11 De Maio de 2015 12:45:19 <br> > Assunto: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server <br> > <br> > On 05/11/2015 05:00 PM, <a class="moz-txt-link-abbreviated" href="mailto:suporte@logicworks.pt">suporte@logicworks.pt</a> wrote: <br> > <br> > <br> > <br> > Hi, <br> > <br> > I'm testing ovirt 3.5.1, with hosted engine, using centos7.1. Have installed some VMs, no problem. I needed to shutdown the computer machines (follow this procedure: <a class="moz-txt-link-freetext" href="http://lists.ovirt.org/pipermail/users/2014-April/023861.html">http://lists.ovirt.org/pipermail/users/2014-April/023861.html</a> ), after rebooting could not get it <br> > working again, when trying to activate the hosts this message come up: Gluster command [<UNKNOWN>] failed on server <br> > I have tried a lot of things, including update it to Version 3.5.2-1.el7.centos, but no success. <br> > Gluster version: <br> > glusterfs-3.6.3-1.el7.x86_64 <br> > glusterfs-libs-3.6.3-1.el7.x86_64 <br> > glusterfs-fuse-3.6.3-1.el7.x86_64 <br> > glusterfs-cli-3.6.3-1.el7.x86_64 <br> > glusterfs-rdma-3.6.3-1.el7.x86_64 <br> > glusterfs-api-3.6.3-1.el7.x86_64 <br> > <br> > Any help? <br> > <br> > _______________________________________________<br> > Users mailing list<br> > <a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a><br> > <a class="moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a><br> <div><br> </div> -- <br> Daniel Helgenberger<br> m box bewegtbild GmbH<br> <div><br> </div> P: +49/30/2408781-22<br> F: +49/30/2408781-10<br> <div><br> </div> ACKERSTR. 
19<br> D-10115 BERLIN<br> <div><br> </div> <br> <a class="moz-txt-link-abbreviated" href="http://www.m-box.de">www.m-box.de</a> <a class="moz-txt-link-abbreviated" href="http://www.monkeymen.tv">www.monkeymen.tv</a><br> <div><br> </div> Geschäftsführer: Martin Retschitzegger / Michaela Göllner<br> Handeslregister: Amtsgericht Charlottenburg / HRB 112767<br> _______________________________________________<br> Users mailing list<br> <a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a><br> <a class="moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a><br> </div> <div><br> </div> </div> <br> _______________________________________________<br> Users mailing list<br> <a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a><br> <a class="moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a><br> </div> <div><br> </div> </div> <br> <fieldset class="mimeAttachmentHeader"></fieldset> <br> <pre wrap="">_______________________________________________ Users mailing list <a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a> <a class="moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a> </pre> </blockquote> <br> </body> </html> --------------000006090805040807010001--
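For anyone who lands on this thread with the same "Gluster command [<UNKNOWN>] failed on server" error: the exchange below covers two distinct failure modes, and three host-side commands tell them apart. A minimal triage sketch (commands as used elsewhere in the thread, plus an rpm query; package and service names assume an EL7-based oVirt node):

# systemctl status glusterd
# gluster peer status
# rpm -q vdsm-gluster

If glusterd is stopped, or 'gluster peer status' fails because the gluster ports are blocked, that is Daniel's case below. If both look fine but rpm reports vdsm-gluster is not installed, that is Jose's case: VDSM never registered its gluster verbs, and the engine's "The method does not exist / is not available., code = -32601" is simply the XML-RPC unknown-method fault returned when the engine calls GlusterServersList.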

Thanks Sahina, you were right: vdsm-gluster was not installed. Weird, I don't remember removing it, but now I have the 2 nodes green.

Thanks a lot

----- Original message -----

From: "Sahina Bose" <sabose@redhat.com>
To: suporte@logicworks.pt, "Daniel Helgenberger" <daniel.helgenberger@m-box.de>
Cc: users@ovirt.org
Sent: Tuesday, 12 May 2015 11:45:53
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

vdsm-gluster installed on your node?

From the logs it seems that it is not.

On 05/12/2015 03:02 PM, suporte@logicworks.pt wrote:

This is the engine log:

2015-05-12 10:27:44,012 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-2) [76c5a7e7] Lock Acquired to object EngineLock [exclusiveLocks= key: b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS, sharedLocks= ]
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Running command: ActivateVdsCommand internal: false. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDSAction group MANIPULATE_HOST with role type ADMIN
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Before acquiring lock in order to prevent monitoring for host ovserver1 from data-center Default
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock acquired, from now a monitoring of host will be skipped for host ovserver1 from data-center Default
2015-05-12 10:27:44,189 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START, SetVdsStatusVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, status=Unassigned, nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: dca9241
2015-05-12 10:27:44,236 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH, SetVdsStatusVDSCommand, log id: dca9241
2015-05-12 10:27:44,320 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START, SetHaMaintenanceModeVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 3106a21a
2015-05-12 10:27:44,324 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH, SetHaMaintenanceModeVDSCommand, log id: 3106a21a
2015-05-12 10:27:44,324 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Activate finished. Lock released. Monitoring can run now for host ovserver1 from data-center Default
2015-05-12 10:27:44,369 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Correlation ID: 76c5a7e7, Job ID: 41492531-353a-41e7-96ab-ca4a09651fbc, Call Stack: null, Custom Event ID: -1, Message: Host ovserver1 was activated by admin@internal.
2015-05-12 10:27:44,411 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock freed to object EngineLock [exclusiveLocks= key: b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS, sharedLocks= ]
2015-05-12 10:27:45,047 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] START, GetHardwareInfoVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, vds=Host[ovserver1,b505a91a-38b2-48c9-a161-06f1360a3d6f]), log id: 633e992b
2015-05-12 10:27:45,051 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] FINISH, GetHardwareInfoVDSCommand, log id: 633e992b
2015-05-12 10:27:45,052 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-51) [4d2b49f] Host ovserver1 is running with disabled SELinux.
2015-05-12 10:27:45,137 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS
2015-05-12 10:27:45,139 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] START, GlusterServersListVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 770f2d6e
2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected return value: StatusForXmlRpc [mCode=-32601, mMessage=The method does not exist / is not available.]
2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected return value: StatusForXmlRpc [mCode=-32601, mMessage=The method does not exist / is not available.]
2015-05-12 10:27:45,142 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Failed in GlusterServersListVDS method
2015-05-12 10:27:45,143 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Command GlusterServersListVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterServersListVDS, error = The method does not exist / is not available., code = -32601
2015-05-12 10:27:45,143 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] FINISH, GlusterServersListVDSCommand, log id: 770f2d6e
2015-05-12 10:27:45,311 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS
2015-05-12 10:27:45,312 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] START, SetVdsStatusVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, status=NonOperational, nonOperationalReason=GLUSTER_COMMAND_FAILED, stopSpmFailureLogged=false), log id: 9dbd40f
2015-05-12 10:27:45,353 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] FINISH, SetVdsStatusVDSCommand, log id: 9dbd40f
2015-05-12 10:27:45,355 ERROR [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (org.ovirt.thread.pool-8-thread-41) [7e3688d2] ResourceManager::vdsMaintenance - There is not host capable of running the hosted engine VM
2015-05-12 10:27:45,394 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Correlation ID: 7e3688d2, Job ID: 2e6c4d5a-c1c3-4713-b103-2e20c2892e6b, Call Stack: null, Custom Event ID: -1, Message: Gluster command [<UNKNOWN>] failed on server ovserver1.
2015-05-12 10:27:45,561 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Status of host ovserver1 was set to NonOperational.
2015-05-12 10:27:45,696 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (DefaultQuartzScheduler_Worker-51) [b01e893] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS
2015-05-12 10:27:45,697 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-51) [b01e893] Host b505a91a-38b2-48c9-a161-06f1360a3d6f : ovserver1 is already in NonOperational status for reason GLUSTER_COMMAND_FAILED. SetNonOperationalVds command is skipped.

VDSM log:

Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::1191::Storage.TaskManager.Task::(prepare) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::finished: {'75e6fd87-b38b-4280-b676-08c16748ff97': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000110247', 'lastCheck': '6.5', 'valid': True}}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::595::Storage.TaskManager.Task::(_updateState) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::moving from state preparing -> state finished
Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::993::Storage.TaskManager.Task::(_decref) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::ref 0 aborting False
JsonRpc (StompReactor)::DEBUG::2015-05-12 10:27:49,914::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-05-12 10:27:49,915::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-84705::DEBUG::2015-05-12 10:27:49,916::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Detector thread::DEBUG::2015-05-12 10:27:49,974::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::246::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 49510)
Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXMLRPC::1133::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('09546d15-6679-4a99-9fe6-3fa4730811d4',) {}
Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXMLRPC::1140::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': u'5900'}], 'memUsage': '0', 'acpiEnable': 'true', 'guestFQDN': '', 'pid': '5587', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': '0', 'balloonInfo': {}, 'pauseCode': 'NOERR', 'network': {u'vnet0': {'macAddr': '00:16:3e:42:95:b9', 'rxDropped': '29', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmType': 'kvm', 'cpuUser': '1.64', 'elapsedTime': '69926', 'vmJobs': {}, 'cpuSys': '0.27', 'appsList': [], 'displayType': 'vnc', 'vcpuCount': '2', 'clientIp': '', 'hash': '-3724559636060176164', 'vmId': '09546d15-6679-4a99-9fe6-3fa4730811d4', 'displayIp': '0', 'vcpuPeriod': 100000L, 'displayPort': u'5900', 'vcpuQuota': '-1', 'kvmEnable': 'true', 'disks': {u'vda': {'readLatency': '0', 'apparentsize': '32212254720', 'writeLatency': '0', 'imageID': '39f6830c-8fa1-4abd-9259-90654e91ff2d', 'flushLatency': '0', 'truesize': '15446843392'}, u'hdc': {'flushLatency': '0', 'readLatency': '0', 'truesize': '0', 'apparentsize': '0', 'writeLatency': '0'}}, 'monitorResponse': '0', 'statsAge': '1.83', 'username': 'Unknown', 'status': 'Up', 'guestCPUCount': -1, 'ioTune': [], 'guestIPs': ''}]}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state init -> state preparing
clientIFinit::INFO::2015-05-12 10:27:50,809::logUtils::44::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList(options=None)
clientIFinit::INFO::2015-05-12 10:27:50,809::logUtils::47::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList, Return response: {'poollist': []}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::1191::Storage.TaskManager.Task::(prepare) Task=`decf270c-4715-432c-a01d-942181f61e80`::finished: {'poollist': []}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state preparing -> state finished
clientIFinit::DEBUG::2015-05-12 10:27:50,809::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
clientIFinit::DEBUG::2015-05-12 10:27:50,810::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
clientIFinit::DEBUG::2015-05-12 10:27:50,810::task::993::Storage.TaskManager.Task::(_decref) Task=`decf270c-4715-432c-a01d-942181f61e80`::ref 0 aborting False

Is something wrong with Glusterfs? Or CentOS 7.1?

----- Original message -----

From: suporte@logicworks.pt
To: "Daniel Helgenberger" <daniel.helgenberger@m-box.de>
Cc: users@ovirt.org
Sent: Tuesday, 12 May 2015 10:14:11
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

Hi Daniel,

Well, I have glusterfs up and running:

# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 19h ago
  Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 3061 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─3202 /usr/sbin/glusterfsd -s ovserver2.domain.com --volfile-id gv...

May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.

# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: 6ccd1831-6c4c-41c3-a695-8c7b57cf1261
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovserver2.domain.com:/home2/brick1

I stopped iptables, but cannot bring the nodes up.
Everything was working until I needed to do a restart.

Any more ideas?

----- Original message -----

From: "Daniel Helgenberger" <daniel.helgenberger@m-box.de>
To: users@ovirt.org
Sent: Monday, 11 May 2015 18:17:47
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

On Mon, 2015-05-11 at 16:05 +0100, suporte@logicworks.pt wrote:
> Hi,
>
> I just restarted it again, and this time started the gluster service before starting the hosted engine, but I still get the same error message.
>
> Any more ideas?

I just had the same problem.
My <unknown> error was indeed due to glusterd / glusterfsd not running.

After starting them it turned out the host setup did not automatically add the iptables rules for gluster. I added to iptables:

# gluster
-A INPUT -p tcp --dport 24007:24011 -j ACCEPT
-A INPUT -p tcp --dport 38465:38485 -j ACCEPT

Afterwards 'gluster peer status' worked and my host was operational again.

Hint: Sometimes this is due to gluster itself. Restarting glusterd fixes it most of the time.

> Thanks
>
> Jose
>
> # hosted-engine --vm-status
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : ovserver1.domain.com
> Host ID : 1
> Engine status : {"health": "good", "vm": "up", "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 4998
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=4998 (Mon May 11 16:03:48 2015)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
> # service glusterd status
> Redirecting to /bin/systemctl status glusterd.service
> glusterd.service - GlusterFS, a clustered file-system server
>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
>    Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 1h 27min ago
>   Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
>  Main PID: 3061 (glusterd)
>    CGroup: /system.slice/glusterd.service
>            ├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid
>            └─3202 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv...
>
> May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
> May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
> Hint: Some lines were ellipsized, use -l to show in full.
>
> ----- Original message -----
>
> From: suporte@logicworks.pt
> To: "knarra" <knarra@redhat.com>
> Cc: Users@ovirt.org
> Sent: Monday, 11 May 2015 13:15:14
> Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
>
> Hi,
>
> I have 2 nodes, but only one is working with glusterfs.
>
> But you were right, glusterfs was not running; I just started the service - I hadn't checked it :( :
> # service glusterd status
> Redirecting to /bin/systemctl status glusterd.service
> glusterd.service - GlusterFS, a clustered file-system server
>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
>    Active: active (running) since Mon 2015-05-11 13:06:24 WEST; 3s ago
>   Process: 4482 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
>  Main PID: 4483 (glusterd)
>    CGroup: /system.slice/glusterd.service
>            ├─4483 /usr/sbin/glusterd -p /var/run/glusterd.pid
>            └─4618 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv...
>
> May 11 13:06:22 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
> May 11 13:06:24 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
> Hint: Some lines were ellipsized, use -l to show in full.
>
> But the problem still remains.
>
> Should I first start glusterfs before the hosted engine?
>
> Thanks

--
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN

www.m-box.de www.monkeymen.tv

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
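The fix that closed the thread, condensed into a sketch (assuming an EL7 oVirt 3.5 node as described above; vdsm-gluster is the package that provides VDSM's gluster verbs):

# yum install vdsm-gluster
# systemctl restart vdsmd

Once vdsmd restarts with the plugin present, the GlusterServersList call succeeds and the host can be activated from the engine again.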
"Sahina Bose" <sabose@redhat.com><br><b>Para: </b>suporte@logicworks= .pt, "Daniel Helgenberger" <daniel.helgenberger@m-box.de><br><b>Cc: <= /b>users@ovirt.org<br><b>Enviadas: </b>Ter=C3=A7a-feira, 12 De Maio de 2015= 11:45:53<br><b>Assunto: </b>Re: [ovirt-users] Gluster command [<UNKNOWN= >] failed on server<br><div><br></div>vdsm-gluster installed on your nod= e?<br> <br> From logs - it seems to indicate that it is not.<br> <br><div c= lass=3D"moz-cite-prefix">On 05/12/2015 03:02 PM, <a class=3D"moz-txt-link-a= bbreviated" href=3D"mailto:suporte@logicworks.pt" target=3D"_blank" data-mc= e-href=3D"mailto:suporte@logicworks.pt">suporte@logicworks.pt</a> wrote:<br= </div><blockquote cite=3D"mid:623449604.512878.1431423152052.JavaMail.zimb= ra@logicworks.pt"><div style=3D"font-family: Times New Roman; font-size: 10=
VDSM log:<br></div><div>Thread-84704::DEBUG::2015-05-12 10:27:49,884::task= ::1191::Storage.TaskManager.Task::(prepare) Task=3D`a7b984ad-2390-4b5f-8ea6= -c95b0d0e8c37`::finished: {'75e6fd87-b38b-4280-b676-08c16748ff97': {'code':= 0, 'version': 3, 'acquired': True, 'delay': '0.000110247', 'lastCheck': '6= .5', 'valid': True}}<br> Thread-84704::DEBUG::2015-05-12 10:27:49,884::task= ::595::Storage.TaskManager.Task::(_updateState) Task=3D`a7b984ad-2390-4b5f-= 8ea6-c95b0d0e8c37`::moving from state preparing -> state finished<br> Th= read-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::940::Storage.R= esourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {= }<br> Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::977::S= torage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}<br> T=
INFO::2015-05-12</a> 10:27:50,809::logUtils::44::dispatcher::(wrapper) Run= and protect: getConnectedStoragePoolsList(options=3DNone)<br> clientIFinit= ::<a class=3D"moz-txt-link-freetext" href=3D"INFO::2015-05-12" target=3D"_b= lank" data-mce-href=3D"INFO::2015-05-12">INFO::2015-05-12</a> 10:27:50,809:= :logUtils::47::dispatcher::(wrapper) Run and protect: getConnectedStoragePo=
> I have 2 nodes, but only one is working with glusterfs. <br> > <b= r> > But you were right, glusterfs was not running, I just start the ser= vice - I didn't check it :( : <br> > # service glusterd status <br> >= Redirecting to /bin/systemctl status glusterd.service <br> > glusterd.s= ervice - GlusterFS, a clustered file-system server <br> > Loaded: loaded= (/usr/lib/systemd/system/glusterd.service; enabled) <br> > Active: acti= ve (running) since Mon 2015-05-11 13:06:24 WEST; 3s ago <br> > Process: = 4482 ExecStart=3D/usr/sbin/glusterd -p /var/run/glusterd.pid (code=3Dexited= , status=3D0/SUCCESS) <br> > Main PID: 4483 (glusterd) <br> > CGroup:= /system.slice/glusterd.service <br> > =C3=A2=C3=A24483 /usr/sbin/gluste= rd -p /var/run/glusterd.pid <br> > =C3=A2=C3=A24618 /usr/sbin/glusterfsd= -s ovserver2.acloud.pt --volfile-id gv... <br> > <br> > May 11 13:06= :22 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste.... <br> = > May 11 13:06:24 ovserver2.domain.com systemd[1]: Started GlusterFS, a = cluster.... <br> > Hint: Some lines were ellipsized, use -l to show in f=
> De: "knarra" <a class=3D"moz-txt-link-rfc2396E" href=3D"mailto:knarr= a@redhat.com" target=3D"_blank" data-mce-href=3D"mailto:knarra@redhat.com">= <knarra@redhat.com></a> <br> > Para: <a class=3D"moz-txt-link-abbr= eviated" href=3D"mailto:suporte@logicworks.pt" target=3D"_blank" data-mce-h= ref=3D"mailto:suporte@logicworks.pt">suporte@logicworks.pt</a>, <a class=3D= "moz-txt-link-abbreviated" href=3D"mailto:Users@ovirt.org" target=3D"_blank= " data-mce-href=3D"mailto:Users@ovirt.org">Users@ovirt.org</a> <br> > En= viadas: Segunda-feira, 11 De Maio de 2015 12:45:19 <br> > Assunto: Re: [= ovirt-users] Gluster command [<UNKNOWN>] failed on server <br> > <= br> > On 05/11/2015 05:00 PM, <a class=3D"moz-txt-link-abbreviated" href= =3D"mailto:suporte@logicworks.pt" target=3D"_blank" data-mce-href=3D"mailto= :suporte@logicworks.pt">suporte@logicworks.pt</a> wrote: <br> > <br> >= ; <br> > <br> > Hi, <br> > <br> > I'm testing ovirt 3.5.1, with= hosted engine, using centos7.1. Have installed some VMs, no problem. I nee= ded to shutdown the computer machines (follow this procedure: <a class=3D"m= oz-txt-link-freetext" href=3D"http://lists.ovirt.org/pipermail/users/2014-A=
--=20 Daniel Helgenberger=20 m box bewegtbild GmbH=20 P: +49/30/2408781-22=20 F: +49/30/2408781-10=20 ACKERSTR. 19=20 D-10115 BERLIN=20 www.m-box.de www.monkeymen.tv=20 Gesch=C3=A4ftsf=C3=BChrer: Martin Retschitzegger / Michaela G=C3=B6llner=20 Handeslregister: Amtsgericht Charlottenburg / HRB 112767=20 _______________________________________________=20 Users mailing list=20 Users@ovirt.org=20 http://lists.ovirt.org/mailman/listinfo/users=20 _______________________________________________=20 Users mailing list=20 Users@ovirt.org=20 http://lists.ovirt.org/mailman/listinfo/users=20 _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/= users=20 ------=_Part_522196_274858394.1431439152076 Content-Type: text/html; charset=utf-8 Content-Transfer-Encoding: quoted-printable <html><body><div style=3D"font-family: Times New Roman; font-size: 10pt; co= lor: #000000"><div>Thanks Sahina, you were right. vdsm-gluster was not inst= alled. Weird, I don't remember to remove it, but now I have the 2 nodes gre= en.<br></div><div>Thanks a lot<br></div><div><br></div><hr id=3D"zwchr"><di= v style=3D"color:#000;font-weight:normal;font-style:normal;text-decoration:= none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;" data-mce-style= =3D"color: #000; font-weight: normal; font-style: normal; text-decoration: = none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>De: </b= pt; color: #000000" data-mce-style=3D"font-family: Times New Roman; font-size:= 10pt; color: #000000;"><div><br></div><div>This is the engine log:<br></di= v><div>2015-05-12 10:27:44,012 INFO [org.ovirt.engine.core.bll.Activa= teVdsCommand] (ajp--127.0.0.1-8702-2) [76c5a7e7] Lock Acquired to object En= gineLock [exclusiveLocks=3D key: b505a91a-38b2-48c9-a161-06f1360a3d6f value= : VDS<br> , sharedLocks=3D ]<br> 2015-05-12 10:27:44,186 INFO [org.ov= irt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40)= [76c5a7e7] Running command: ActivateVdsCommand internal: false. 
Entities a= ffected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDSAction gr= oup MANIPULATE_HOST with role type ADMIN<br> 2015-05-12 10:27:44,186 INFO&n= bsp; [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-= 8-thread-40) [76c5a7e7] Before acquiring lock in order to prevent monitorin= g for host ovserver1 from data-center Default<br> 2015-05-12 10:27:44,186 I= NFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.= pool-8-thread-40) [76c5a7e7] Lock acquired, from now a monitoring of host w= ill be skipped for host ovserver1 from data-center Default<br> 2015-05-12 1= 0:27:44,189 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSComm= and] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START, SetVdsStatusVDSC= ommand(HostName =3D ovserver1, HostId =3D b505a91a-38b2-48c9-a161-06f1360a3= d6f, status=3DUnassigned, nonOperationalReason=3DNONE, stopSpmFailureLogged= =3Dfalse), log id: dca9241<br> 2015-05-12 10:27:44,236 INFO [org.ovir= t.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-th= read-40) [76c5a7e7] FINISH, SetVdsStatusVDSCommand, log id: dca9241<br> 201= 5-05-12 10:27:44,320 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.= SetHaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7= e7] START, SetHaMaintenanceModeVDSCommand(HostName =3D ovserver1, HostId = =3D b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 3106a21a<br> 2015-05-12 = 10:27:44,324 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMai= ntenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINI= SH, SetHaMaintenanceModeVDSCommand, log id: 3106a21a<br> 2015-05-12 10:27:4= 4,324 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.= thread.pool-8-thread-40) [76c5a7e7] Activate finished. Lock released. Monit= oring can run now for host ovserver1 from data-center Default<br> 2015-05-1= 2 10:27:44,369 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandl= ing.AuditLogDirector] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Correl= ation ID: 76c5a7e7, Job ID: 41492531-353a-41e7-96ab-ca4a09651fbc, Call Stac= k: null, Custom Event ID: -1, Message: Host ovserver1 was activated by admi= n@internal.<br> 2015-05-12 10:27:44,411 INFO [org.ovirt.engine.core.b= ll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock = freed to object EngineLock [exclusiveLocks=3D key: b505a91a-38b2-48c9-a161-= 06f1360a3d6f value: VDS<br> , sharedLocks=3D ]<br> 2015-05-12 10:27:45,047 = INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCom= mand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] START, GetHardwareInfoVD= SCommand(HostName =3D ovserver1, HostId =3D b505a91a-38b2-48c9-a161-06f1360= a3d6f, vds=3DHost[ovserver1,b505a91a-38b2-48c9-a161-06f1360a3d6f]), log id:= 633e992b<br> 2015-05-12 10:27:45,051 INFO [org.ovirt.engine.core.vds= broker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-= 51) [4d2b49f] FINISH, GetHardwareInfoVDSCommand, log id: 633e992b<br> 2015-= 05-12 10:27:45,052 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] = (DefaultQuartzScheduler_Worker-51) [4d2b49f] Host ovserver1 is running with= disabled SELinux.<br> 2015-05-12 10:27:45,137 INFO [org.ovirt.engine= .core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler= _Worker-51) [211ecca6] Running command: HandleVdsCpuFlagsOrClusterChangedCo= mmand internal: true. 
Entities affected : ID: b505a91a-38b2-48c9-a161= -06f1360a3d6f Type: VDS<br> 2015-05-12 10:27:45,139 INFO [org.ovirt.e= ngine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzSc= heduler_Worker-51) [211ecca6] START, GlusterServersListVDSCommand(HostName = =3D ovserver1, HostId =3D b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 77= 0f2d6e<br> 2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbro= ker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51= ) [211ecca6] Unexpected return value: StatusForXmlRpc [mCode=3D-32601, mMes= sage=3DThe method does not exist / is not available.]<br> 2015-05-12 10:27:= 45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersLi= stVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected retu= rn value: StatusForXmlRpc [mCode=3D-32601, mMessage=3DThe method does not e= xist / is not available.]<br> 2015-05-12 10:27:45,142 ERROR [org.ovirt.engi= ne.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzSched= uler_Worker-51) [211ecca6] Failed in GlusterServersListVDS method<br> 2015-= 05-12 10:27:45,143 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterSe= rversListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Command = GlusterServersListVDSCommand(HostName =3D ovserver1, HostId =3D b505a91a-38= b2-48c9-a161-06f1360a3d6f) execution failed. Exception: VDSErrorException: = VDSGenericException: VDSErrorException: Failed to GlusterServersListVDS, er= ror =3D The method does not exist / is not available., code =3D -32601<br> = 2015-05-12 10:27:45,143 INFO [org.ovirt.engine.core.vdsbroker.gluster= .GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6= ] FINISH, GlusterServersListVDSCommand, log id: 770f2d6e<br> 2015-05-12 10:= 27:45,311 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand= ] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Running command: SetNonOper= ationalVdsCommand internal: true. 
Entities affected : ID: b505a91a-38= b2-48c9-a161-06f1360a3d6f Type: VDS<br> 2015-05-12 10:27:45,312 INFO = [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzSche= duler_Worker-51) [7e3688d2] START, SetVdsStatusVDSCommand(HostName =3D ovse= rver1, HostId =3D b505a91a-38b2-48c9-a161-06f1360a3d6f, status=3DNonOperati= onal, nonOperationalReason=3DGLUSTER_COMMAND_FAILED, stopSpmFailureLogged= =3Dfalse), log id: 9dbd40f<br> 2015-05-12 10:27:45,353 INFO [org.ovir= t.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Wor= ker-51) [7e3688d2] FINISH, SetVdsStatusVDSCommand, log id: 9dbd40f<br> 2015= -05-12 10:27:45,355 ERROR [org.ovirt.engine.core.bll.SetNonOperationalVdsCo= mmand] (org.ovirt.thread.pool-8-thread-41) [7e3688d2] ResourceManager::vdsM= aintenance - There is not host capable of running the hosted engine VM<br> = 2015-05-12 10:27:45,394 ERROR [org.ovirt.engine.core.dal.dbbroker.auditlogh= andling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Cor= relation ID: 7e3688d2, Job ID: 2e6c4d5a-c1c3-4713-b103-2e20c2892e6b, Call S= tack: null, Custom Event ID: -1, Message: Gluster command [<UNKNOWN>]= failed on server ovserver1.<br> 2015-05-12 10:27:45,561 INFO [org.ov= irt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQua= rtzScheduler_Worker-51) [7e3688d2] Correlation ID: null, Call Stack: null, = Custom Event ID: -1, Message: Status of host ovserver1 was set to NonOperat= ional.<br> 2015-05-12 10:27:45,696 INFO [org.ovirt.engine.core.bll.Ha= ndleVdsVersionCommand] (DefaultQuartzScheduler_Worker-51) [b01e893] Running= command: HandleVdsVersionCommand internal: true. Entities affected : = ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS<br> 2015-05-12 10:27:45= ,697 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (Def= aultQuartzScheduler_Worker-51) [b01e893] Host b505a91a-38b2-48c9-a161-06f13= 60a3d6f : ovserver1 is already in NonOperational status for reason GLUSTER_= COMMAND_FAILED. 
SetNonOperationalVds command is skipped.<br> <br></div><div= hread-84704::DEBUG::2015-05-12 10:27:49,884::task::993::Storage.TaskManager= .Task::(_decref) Task=3D`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::ref 0 abort= ing False<br> JsonRpc (StompReactor)::DEBUG::2015-05-12 10:27:49,914::stomp= Reactor::98::Broker.StompAdapter::(handle_frame) Handling message <Stomp= Frame command=3D'SEND'><br> JsonRpcServer::DEBUG::2015-05-12 10:27:49,91= 5::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for reque= st<br> Thread-84705::DEBUG::2015-05-12 10:27:49,916::stompReactor::163::yaj= sonrpc.StompServer::(send) Sending response<br> Detector thread::DEBUG::201= 5-05-12 10:27:49,974::protocoldetector::187::vds.MultiProtocolAcceptor::(_a= dd_connection) Adding connection from 127.0.0.1:49510<br> Detector thread::= DEBUG::2015-05-12 10:27:49,980::protocoldetector::201::vds.MultiProtocolAcc= eptor::(_remove_connection) Connection removed from 127.0.0.1:49510<br> Det= ector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::246::vds.Mu= ltiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 1= 27.0.0.1:49510<br> Detector thread::DEBUG::2015-05-12 10:27:49,980::Binding= XMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127= .0.0.1', 49510)<br> Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXM= LRPC::1133::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('09546= d15-6679-4a99-9fe6-3fa4730811d4',) {}<br> Thread-84706::DEBUG::2015-05-12 1= 0:27:49,982::BindingXMLRPC::1140::vds::(wrapper) return vmGetStats with {'s= tatus': {'message': 'Done', 'code': 0}, 'statsList': [{'displayInfo': [{'tl= sPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': u'5900'}], 'memUsage= ': '0', 'acpiEnable': 'true', 'guestFQDN': '', 'pid': '5587', 'session': 'U= nknown', 'displaySecurePort': '-1', 'timeOffset': '0', 'balloonInfo': {}, '= pauseCode': 'NOERR', 'network': {u'vnet0': {'macAddr': '00:16:3e:42:95:b9',= 'rxDropped': '29', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rx= Rate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name':= u'vnet0'}}, 'vmType': 'kvm', 'cpuUser': '1.64', 'elapsedTime': '69926', 'v= mJobs': {}, 'cpuSys': '0.27', 'appsList': [], 'displayType': 'vnc', 'vcpuCo= unt': '2', 'clientIp': '', 'hash': '-3724559636060176164', 'vmId': '09546d1= 5-6679-4a99-9fe6-3fa4730811d4', 'displayIp': '0', 'vcpuPeriod': 100000L, 'd= isplayPort': u'5900', 'vcpuQuota': '-1', 'kvmEnable': 'true', 'disks': {u'v= da': {'readLatency': '0', 'apparentsize': '32212254720', 'writeLatency': '0= ', 'imageID': '39f6830c-8fa1-4abd-9259-90654e91ff2d', 'flushLatency': '0', = 'truesize': '15446843392'}, u'hdc': {'flushLatency': '0', 'readLatency': '0= ', 'truesize': '0', 'apparentsize': '0', 'writeLatency': '0'}}, 'monitorRes= ponse': '0', 'statsAge': '1.83', 'username': 'Unknown', 'status': 'Up', 'gu= estCPUCount': -1, 'ioTune': [], 'guestIPs': ''}]}<br> clientIFinit::DEBUG::= 2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState= ) Task=3D`decf270c-4715-432c-a01d-942181f61e80`::moving from state init -&g= t; state preparing<br> clientIFinit::<a class=3D"moz-txt-link-freetext" hre= f=3D"INFO::2015-05-12" target=3D"_blank" data-mce-href=3D"INFO::2015-05-12"= olsList, Return response: {'poollist': []}<br> clientIFinit::DEBUG::2015-05= -12 10:27:50,809::task::1191::Storage.TaskManager.Task::(prepare) Task=3D`d= ecf270c-4715-432c-a01d-942181f61e80`::finished: {'poollist': []}<br> client= IFinit::DEBUG::2015-05-12 
10:27:50,809::task::595::Storage.TaskManager.Task= ::(_updateState) Task=3D`decf270c-4715-432c-a01d-942181f61e80`::moving from= state preparing -> state finished<br> clientIFinit::DEBUG::2015-05-12 1= 0:27:50,809::resourceManager::940::Storage.ResourceManager.Owner::(releaseA= ll) Owner.releaseAll requests {} resources {}<br> clientIFinit::DEBUG::2015= -05-12 10:27:50,810::resourceManager::977::Storage.ResourceManager.Owner::(= cancelAll) Owner.cancelAll requests {}<br> clientIFinit::DEBUG::2015-05-12 = 10:27:50,810::task::993::Storage.TaskManager.Task::(_decref) Task=3D`decf27= 0c-4715-432c-a01d-942181f61e80`::ref 0 aborting False</div><div><br></div><= div><br></div><div><br></div><div>It's something wrong with Glusterfs? Or C= entos 7.1?<br></div><div><br></div><div><br></div><hr id=3D"zwchr"><div sty= le=3D"color:#000;font-weight:normal;font-style:normal;text-decoration:none;= font-family:Helvetica,Arial,sans-serif;font-size:12pt;" data-mce-style=3D"c= olor: #000; font-weight: normal; font-style: normal; text-decoration: none;= font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>De: </b><a c= lass=3D"moz-txt-link-abbreviated" href=3D"mailto:suporte@logicworks.pt" tar= get=3D"_blank" data-mce-href=3D"mailto:suporte@logicworks.pt">suporte@logic= works.pt</a><br> <b>Para: </b>"Daniel Helgenberger" <a class=3D"moz-txt-lin= k-rfc2396E" href=3D"mailto:daniel.helgenberger@m-box.de" target=3D"_blank" = data-mce-href=3D"mailto:daniel.helgenberger@m-box.de"><daniel.helgenberg= er@m-box.de></a><br> <b>Cc: </b><a class=3D"moz-txt-link-abbreviated" hr= ef=3D"mailto:users@ovirt.org" target=3D"_blank" data-mce-href=3D"mailto:use= rs@ovirt.org">users@ovirt.org</a><br> <b>Enviadas: </b>Ter=C3=A7a-feira, 12= De Maio de 2015 10:14:11<br> <b>Assunto: </b>Re: [ovirt-users] Gluster com= mand [<UNKNOWN>] failed on server<br><div><br></div><div style=3D"fon= t-family: Times New Roman; font-size: 10pt; color: #000000" data-mce-style=3D"font-family: Times New Roman;= font-size: 10pt; color: #000000;"><div>Hi Daniel,<br></div><div><br></div>= <div>Well, I have glusterfs up and running:<br></div><div><br></div><div># = service glusterd status<br> Redirecting to /bin/systemctl status glus= terd.service<br> glusterd.service - GlusterFS, a clustered file-system serv= er<br> Loaded: loaded (/usr/lib/systemd/system/glusterd.servic= e; enabled)<br> Active: active (running) since Mon 2015-05-11 = 14:37:14 WEST; 19h ago<br> Process: 3060 ExecStart=3D/usr/sbin/glust= erd -p /var/run/glusterd.pid (code=3Dexited, status=3D0/SUCCESS)<br> = Main PID: 3061 (glusterd)<br> CGroup: /system.slice/glusterd.s= ervice<br> =C3= =A2=C3=A23061 /usr/sbin/glusterd -p /var/run/glusterd.pid<br> &= nbsp; =C3=A2=C3=A23202 /usr/sbin/= glusterfsd -s ovserver2.domain.com --volfile-id gv...<br><div><br></div>May= 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste.= ...<br> May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS,= a cluster....<br> Hint: Some lines were ellipsized, use -l to show in full= .<br><div><br></div></div><div># gluster volume info  = ; &n= bsp;  = ; <br> Vo= lume Name: gv0<br> Type: Distribute<br> Volume ID: 6ccd1831-6c4c-41c3-a695-= 8c7b57cf1261<br> Status: Started<br> Number of Bricks: 1<br> Transport-type= : tcp<br> Bricks:<br> Brick1: ovserver2.domain.com:/home2/brick1<br><div><b= r></div></div><div>I stopped iptables, but cannot bring the nodes up.<br></= div><div>Everything was working until I needed to do a restart.<br></div><d= iv><br></div><div>Any more 
----- Original Message -----
From: "Daniel Helgenberger" <daniel.helgenberger@m-box.de>
To: users@ovirt.org
Sent: Monday, May 11, 2015 18:17:47
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

On Mon, 2015-05-11 at 16:05 +0100, suporte@logicworks.pt wrote:
> Hi,
>
> I just restarted it again, and now start the gluster service before
> starting the hosted engine, but still get the same error message.
>
> Any more ideas?
I just had the same problem.
My <unknown> error was indeed due to the fact that glusterd / glusterfsd
were not running.

After starting them it turned out the host setup did not automatically
add the iptables rules for gluster. I added these to iptables:

# gluster
-A INPUT -p tcp --dport 24007:24011 -j ACCEPT
-A INPUT -p tcp --dport 38465:38485 -j ACCEPT

Afterwards 'gluster peer status' worked and my host was operational
again.

Hint: Sometimes this is due to gluster itself; restarting glusterd fixes
it most of the time.
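[If missing firewall rules are the culprit, they also have to survive the next reboot. A minimal sketch of persisting the same port ranges on CentOS 7, assuming the host runs firewalld (the default there); on a plain iptables-services setup the two ACCEPT rules would instead be appended to /etc/sysconfig/iptables:

# open the gluster management and NFS port ranges permanently
firewall-cmd --permanent --add-port=24007-24011/tcp
firewall-cmd --permanent --add-port=38465-38485/tcp
# apply the permanent configuration to the running firewall
firewall-cmd --reload
]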
>
> Thanks
>
> Jose
>
> # hosted-engine --vm-status
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : ovserver1.domain.com
> Host ID : 1
> Engine status : {"health": "good", "vm": "up", "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 4998
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=4998 (Mon May 11 16:03:48 2015)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
> # service glusterd status
> Redirecting to /bin/systemctl status glusterd.service
> glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
> Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 1h 27min ago
> Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
> Main PID: 3061 (glusterd)
> CGroup: /system.slice/glusterd.service
> ├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid
> └─3202 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv...
>
> May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
> May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
> Hint: Some lines were ellipsized, use -l to show in full.
>
> ----- Original Message -----
>
> From: suporte@logicworks.pt
> To: "knarra" <knarra@redhat.com>
> Cc: Users@ovirt.org
> Sent: Monday, May 11, 2015 13:15:14
> Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
>
> Hi,
>
> [...]
>
> But still the problem remains.
>
> Should I first start the glusterfs before the hosted engine?
>
> Thanks
>
> ----- Original Message -----
> [...]
> Any help?
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN

www.m-box.de  www.monkeymen.tv

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
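[On the question raised in the thread, whether glusterfs should be started before the hosted engine: the ordering can be handed to systemd instead of being done by hand after every reboot. A minimal sketch using a drop-in, assuming the stock unit names glusterd.service and ovirt-ha-agent.service (verify both on your hosts; this is an illustration, not a tested recipe):

# /etc/systemd/system/ovirt-ha-agent.service.d/after-gluster.conf
[Unit]
Wants=glusterd.service
After=glusterd.service

# then reload systemd so the drop-in takes effect
systemctl daemon-reload
]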
participants (4)
- Daniel Helgenberger
- knarra
- Sahina Bose
- suporte@logicworks.pt