This is the engine log:
2015-05-12 10:27:44,012 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-2) [76c5a7e7] Lock Acquired to object EngineLock [exclusiveLocks= key: b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS
, sharedLocks= ]
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Running command: ActivateVdsCommand internal: false. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDSAction group MANIPULATE_HOST with role type ADMIN
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Before acquiring lock in order to prevent monitoring for host ovserver1 from data-center Default
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock acquired, from now a monitoring of host will be skipped for host ovserver1 from data-center Default
2015-05-12 10:27:44,189 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START, SetVdsStatusVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, status=Unassigned, nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: dca9241
2015-05-12 10:27:44,236 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH, SetVdsStatusVDSCommand, log id: dca9241
2015-05-12 10:27:44,320 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START, SetHaMaintenanceModeVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 3106a21a
2015-05-12 10:27:44,324 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH, SetHaMaintenanceModeVDSCommand, log id: 3106a21a
2015-05-12 10:27:44,324 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Activate finished. Lock released. Monitoring can run now for host ovserver1 from data-center Default
2015-05-12 10:27:44,369 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Correlation ID: 76c5a7e7, Job ID: 41492531-353a-41e7-96ab-ca4a09651fbc, Call Stack: null, Custom Event ID: -1, Message: Host ovserver1 was activated by admin@internal.
2015-05-12 10:27:44,411 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock freed to object EngineLock [exclusiveLocks= key: b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS
, sharedLocks= ]
2015-05-12 10:27:45,047 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] START, GetHardwareInfoVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, vds=Host[ovserver1,b505a91a-38b2-48c9-a161-06f1360a3d6f]), log id: 633e992b
2015-05-12 10:27:45,051 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] FINISH, GetHardwareInfoVDSCommand, log id: 633e992b
2015-05-12 10:27:45,052 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-51) [4d2b49f] Host ovserver1 is running with disabled SELinux.
2015-05-12 10:27:45,137 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS
2015-05-12 10:27:45,139 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] START, GlusterServersListVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 770f2d6e
2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected return value: StatusForXmlRpc [mCode=-32601, mMessage=The method does not exist / is not available.]
2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected return value: StatusForXmlRpc [mCode=-32601, mMessage=The method does not exist / is not available.]
2015-05-12 10:27:45,142 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Failed in GlusterServersListVDS method
2015-05-12 10:27:45,143 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Command GlusterServersListVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterServersListVDS, error = The method does not exist / is not available., code = -32601
2015-05-12 10:27:45,143 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] FINISH, GlusterServersListVDSCommand, log id: 770f2d6e
2015-05-12 10:27:45,311 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS
2015-05-12 10:27:45,312 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] START, SetVdsStatusVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, status=NonOperational, nonOperationalReason=GLUSTER_COMMAND_FAILED, stopSpmFailureLogged=false), log id: 9dbd40f
2015-05-12 10:27:45,353 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] FINISH, SetVdsStatusVDSCommand, log id: 9dbd40f
2015-05-12 10:27:45,355 ERROR [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (org.ovirt.thread.pool-8-thread-41) [7e3688d2] ResourceManager::vdsMaintenance - There is not host capable of running the hosted engine VM
2015-05-12 10:27:45,394 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Correlation ID: 7e3688d2, Job ID: 2e6c4d5a-c1c3-4713-b103-2e20c2892e6b, Call Stack: null, Custom Event ID: -1, Message: Gluster command [<UNKNOWN>] failed on server ovserver1.
2015-05-12 10:27:45,561 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Status of host ovserver1 was set to NonOperational.
2015-05-12 10:27:45,696 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (DefaultQuartzScheduler_Worker-51) [b01e893] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS
2015-05-12 10:27:45,697 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-51) [b01e893] Host b505a91a-38b2-48c9-a161-06f1360a3d6f : ovserver1 is already in NonOperational status for reason GLUSTER_COMMAND_FAILED. SetNonOperationalVds command is skipped.

VDSM log:

Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::1191::Storage.TaskManager.Task::(prepare) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::finished: {'75e6fd87-b38b-4280-b676-08c16748ff97': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000110247', 'lastCheck': '6.5', 'valid': True}}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::595::Storage.TaskManager.Task::(_updateState) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::moving from state preparing -> state finished
Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::993::Storage.TaskManager.Task::(_decref) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::ref 0 aborting False
JsonRpc (StompReactor)::DEBUG::2015-05-12 10:27:49,914::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-05-12 10:27:49,915::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-84705::DEBUG::2015-05-12 10:27:49,916::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Detector thread::DEBUG::2015-05-12 10:27:49,974::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::246::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 49510)
Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXMLRPC::1133::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('09546d15-6679-4a99-9fe6-3fa4730811d4',) {}
Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXMLRPC::1140::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': u'5900'}], 'memUsage': '0', 'acpiEnable': 'true', 'guestFQDN': '', 'pid': '5587', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': '0', 'balloonInfo': {}, 'pauseCode': 'NOERR', 'network': {u'vnet0': {'macAddr': '00:16:3e:42:95:b9', 'rxDropped': '29', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmType': 'kvm', 'cpuUser': '1.64', 'elapsedTime': '69926', 'vmJobs': {}, 'cpuSys': '0.27', 'appsList': [], 'displayType': 'vnc', 'vcpuCount': '2', 'clientIp': '', 'hash': '-3724559636060176164', 'vmId': '09546d15-6679-4a99-9fe6-3fa4730811d4', 'displayIp': '0', 'vcpuPeriod': 100000L, 'displayPort': u'5900', 'vcpuQuota': '-1', 'kvmEnable': 'true', 'disks': {u'vda': {'readLatency': '0', 'apparentsize': '32212254720', 'writeLatency': '0', 'imageID': '39f6830c-8fa1-4abd-9259-90654e91ff2d', 'flushLatency': '0', 'truesize': '15446843392'}, u'hdc': {'flushLatency': '0', 'readLatency': '0', 'truesize': '0', 'apparentsize': '0', 'writeLatency': '0'}}, 'monitorResponse': '0', 'statsAge': '1.83', 'username': 'Unknown', 'status': 'Up', 'guestCPUCount': -1, 'ioTune': [], 'guestIPs': ''}]}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state init -> state preparing
clientIFinit::INFO::2015-05-12 10:27:50,809::logUtils::44::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList(options=None)
clientIFinit::INFO::2015-05-12 10:27:50,809::logUtils::47::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList, Return response: {'poollist': []}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::1191::Storage.TaskManager.Task::(prepare) Task=`decf270c-4715-432c-a01d-942181f61e80`::finished: {'poollist': []}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state preparing -> state finished
clientIFinit::DEBUG::2015-05-12 10:27:50,809::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
clientIFinit::DEBUG::2015-05-12 10:27:50,810::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
clientIFinit::DEBUG::2015-05-12 10:27:50,810::task::993::Storage.TaskManager.Task::(_decref) Task=`decf270c-4715-432c-a01d-942181f61e80`::ref 0 aborting False

Is something wrong with GlusterFS? Or with CentOS 7.1?
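
For what it's worth, the return code in the engine log (-32601, "The method does not exist / is not available") suggests the gluster verbs are simply not registered in VDSM on the host. A few checks I could run on ovserver1 - a sketch, assuming the gluster verbs come from the vdsm-gluster package, which I have not verified:

# Is VDSM's gluster support package installed? (assumption: it is packaged as vdsm-gluster)
rpm -q vdsm-gluster

# Are the gluster daemons running on this host?
systemctl status glusterd

# Does gluster itself answer locally?
gluster peer status
gluster volume info

# Restart VDSM so it re-registers its verbs after any package/daemon change
systemctl restart vdsmd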

----- Original message -----
From: suporte@logicworks.pt
To: "Daniel Helgenberger" <daniel.helgenberger@m-box.de>
Cc: users@ovirt.org
Sent: Tuesday, 12 May 2015 10:14:11
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
Hi Daniel,

Well, I have glusterfs up and running:

# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 19h ago
Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
Main PID: 3061 (glusterd)
CGroup: /system.slice/glusterd.service
├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid
└─3202 /usr/sbin/glusterfsd -s ovserver2.domain.com --volfile-id gv...

May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.

# gluster volume info

Volume Name: gv0
Type: Distribute
Volume ID: 6ccd1831-6c4c-41c3-a695-8c7b57cf1261
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovserver2.domain.com:/home2/brick1

I stopped iptables, but I still cannot bring the nodes up.
Everything was working until I needed to do a restart.

Any more ideas?
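
Just in case it is connectivity after all, a quick check I can still run between the hosts - a sketch, assuming glusterd is listening on its default management port 24007:

# From the other host, check that glusterd on ovserver2 answers
nc -zv ovserver2.domain.com 24007

# Confirm the peers see each other from the gluster side
gluster peer status

# And confirm nothing is filtering traffic anymore
iptables -L -n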

----- Original message -----
From: "Daniel Helgenberger" <daniel.helgenberger@m-box.de>
To: users@ovirt.org
Sent: Monday, 11 May 2015 18:17:47
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

On Mon, 2015-05-11 at 16:05 +0100, suporte@logicworks.pt wrote:

Hi,

I just restarted it again, and now start the gluster service before starting the hosted engine, but I still get the same error message.

Any more ideas?

I just had the same problem.
My <UNKNOWN> error was indeed due to the fact that glusterd / glusterfsd were not running.

After starting them it turned out the host setup had not automatically added the iptables rules for gluster. I added to iptables:

# gluster
-A INPUT -p tcp --dport 24007:24011 -j ACCEPT
-A INPUT -p tcp --dport 38465:38485 -j ACCEPT

Afterwards 'gluster peer status' worked and my host was operational again.

Hint: Sometimes this is due to gluster itself. Restarting glusterd fixes it most of the time.
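
For completeness, one way to apply and persist those rules - a sketch assuming the iptables-services package on CentOS 7 rather than firewalld, so adjust to your own firewall setup:

# Open the gluster ports at runtime (same rules as above)
iptables -I INPUT -p tcp --dport 24007:24011 -j ACCEPT
iptables -I INPUT -p tcp --dport 38465:38485 -j ACCEPT

# Persist them across reboots (iptables-services writes /etc/sysconfig/iptables)
service iptables save

# Verify gluster can reach its peers again
gluster peer status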

Thanks

Jose

# hosted-engine --vm-status

--== Host 1 status ==--

Status up-to-date : True
Hostname : ovserver1.domain.com
Host ID : 1
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 4998
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=4998 (Mon May 11 16:03:48 2015)
host-id=1
score=2400
maintenance=False
state=EngineUp

# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 1h 27min ago
Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
Main PID: 3061 (glusterd)
CGroup: /system.slice/glusterd.service
├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid
└─3202 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv...

May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.

----- Original message -----
From: suporte@logicworks.pt
To: "knarra" <knarra@redhat.com>
Cc: Users@ovirt.org
Sent: Monday, 11 May 2015 13:15:14
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

Hi,

I have 2 nodes, but only one is working with glusterfs.

But you were right, glusterfs was not running; I just started the service - I hadn't checked it :( :

# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
Active: active (running) since Mon 2015-05-11 13:06:24 WEST; 3s ago
Process: 4482 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
Main PID: 4483 (glusterd)
CGroup: /system.slice/glusterd.service
├─4483 /usr/sbin/glusterd -p /var/run/glusterd.pid
└─4618 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv...

May 11 13:06:22 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 13:06:24 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.

But the problem still remains.

Should I start glusterfs before the hosted engine?

Thanks

----- Original message -----
From: "knarra" <knarra@redhat.com>
To: suporte@logicworks.pt, Users@ovirt.org
Sent: Monday, 11 May 2015 12:45:19
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

On 05/11/2015 05:00 PM, suporte@logicworks.pt wrote:

Hi,

I'm testing oVirt 3.5.1 with hosted engine, using CentOS 7.1. I have installed some VMs with no problem. I needed to shut down the machines (following this procedure: http://lists.ovirt.org/pipermail/users/2014-April/023861.html ), and after rebooting I could not get it working again; when trying to activate the hosts this message comes up: Gluster command [<UNKNOWN>] failed on server
I have tried a lot of things, including updating to version 3.5.2-1.el7.centos, but no success.
Gluster version:
glusterfs-3.6.3-1.el7.x86_64
glusterfs-libs-3.6.3-1.el7.x86_64
glusterfs-fuse-3.6.3-1.el7.x86_64
glusterfs-cli-3.6.3-1.el7.x86_64
glusterfs-rdma-3.6.3-1.el7.x86_64
glusterfs-api-3.6.3-1.el7.x86_64

Any help?

--
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN

www.m-box.de www.monkeymen.tv

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767