
Is vdsm-gluster installed on your node?

From the logs, it seems that it is not.

On 05/12/2015 03:02 PM, suporte@logicworks.pt wrote:
This is the engine log:

2015-05-12 10:27:44,012 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-2) [76c5a7e7] Lock Acquired to object EngineLock [exclusiveLocks= key: b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS , sharedLocks= ]
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Running command: ActivateVdsCommand internal: false. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDSAction group MANIPULATE_HOST with role type ADMIN
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Before acquiring lock in order to prevent monitoring for host ovserver1 from data-center Default
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock acquired, from now a monitoring of host will be skipped for host ovserver1 from data-center Default
2015-05-12 10:27:44,189 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START, SetVdsStatusVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, status=Unassigned, nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: dca9241
2015-05-12 10:27:44,236 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH, SetVdsStatusVDSCommand, log id: dca9241
2015-05-12 10:27:44,320 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START, SetHaMaintenanceModeVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 3106a21a
2015-05-12 10:27:44,324 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH, SetHaMaintenanceModeVDSCommand, log id: 3106a21a
2015-05-12 10:27:44,324 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Activate finished. Lock released. Monitoring can run now for host ovserver1 from data-center Default
2015-05-12 10:27:44,369 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Correlation ID: 76c5a7e7, Job ID: 41492531-353a-41e7-96ab-ca4a09651fbc, Call Stack: null, Custom Event ID: -1, Message: Host ovserver1 was activated by admin@internal.
2015-05-12 10:27:44,411 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock freed to object EngineLock [exclusiveLocks= key: b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS , sharedLocks= ]
2015-05-12 10:27:45,047 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] START, GetHardwareInfoVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, vds=Host[ovserver1,b505a91a-38b2-48c9-a161-06f1360a3d6f]), log id: 633e992b
2015-05-12 10:27:45,051 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] FINISH, GetHardwareInfoVDSCommand, log id: 633e992b
2015-05-12 10:27:45,052 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-51) [4d2b49f] Host ovserver1 is running with disabled SELinux.
2015-05-12 10:27:45,137 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS
2015-05-12 10:27:45,139 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] START, GlusterServersListVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 770f2d6e
2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected return value: StatusForXmlRpc [mCode=-32601, mMessage=The method does not exist / is not available.]
2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected return value: StatusForXmlRpc [mCode=-32601, mMessage=The method does not exist / is not available.]
2015-05-12 10:27:45,142 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Failed in GlusterServersListVDS method
2015-05-12 10:27:45,143 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Command GlusterServersListVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterServersListVDS, error = The method does not exist / is not available., code = -32601
2015-05-12 10:27:45,143 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] FINISH, GlusterServersListVDSCommand, log id: 770f2d6e
2015-05-12 10:27:45,311 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS
2015-05-12 10:27:45,312 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] START, SetVdsStatusVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, status=NonOperational, nonOperationalReason=GLUSTER_COMMAND_FAILED, stopSpmFailureLogged=false), log id: 9dbd40f
2015-05-12 10:27:45,353 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] FINISH, SetVdsStatusVDSCommand, log id: 9dbd40f
2015-05-12 10:27:45,355 ERROR [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (org.ovirt.thread.pool-8-thread-41) [7e3688d2] ResourceManager::vdsMaintenance - There is not host capable of running the hosted engine VM
2015-05-12 10:27:45,394 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Correlation ID: 7e3688d2, Job ID: 2e6c4d5a-c1c3-4713-b103-2e20c2892e6b, Call Stack: null, Custom Event ID: -1, Message: Gluster command [<UNKNOWN>] failed on server ovserver1.
2015-05-12 10:27:45,561 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Status of host ovserver1 was set to NonOperational.
2015-05-12 10:27:45,696 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (DefaultQuartzScheduler_Worker-51) [b01e893] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS
2015-05-12 10:27:45,697 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-51) [b01e893] Host b505a91a-38b2-48c9-a161-06f1360a3d6f : ovserver1 is already in NonOperational status for reason GLUSTER_COMMAND_FAILED. SetNonOperationalVds command is skipped.
VDSM log:

Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::1191::Storage.TaskManager.Task::(prepare) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::finished: {'75e6fd87-b38b-4280-b676-08c16748ff97': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000110247', 'lastCheck': '6.5', 'valid': True}}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::595::Storage.TaskManager.Task::(_updateState) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::moving from state preparing -> state finished
Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::993::Storage.TaskManager.Task::(_decref) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::ref 0 aborting False
JsonRpc (StompReactor)::DEBUG::2015-05-12 10:27:49,914::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-05-12 10:27:49,915::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-84705::DEBUG::2015-05-12 10:27:49,916::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Detector thread::DEBUG::2015-05-12 10:27:49,974::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::246::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:49510
Detector thread::DEBUG::2015-05-12 10:27:49,980::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 49510)
Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXMLRPC::1133::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('09546d15-6679-4a99-9fe6-3fa4730811d4',) {}
Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXMLRPC::1140::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': u'5900'}], 'memUsage': '0', 'acpiEnable': 'true', 'guestFQDN': '', 'pid': '5587', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': '0', 'balloonInfo': {}, 'pauseCode': 'NOERR', 'network': {u'vnet0': {'macAddr': '00:16:3e:42:95:b9', 'rxDropped': '29', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmType': 'kvm', 'cpuUser': '1.64', 'elapsedTime': '69926', 'vmJobs': {}, 'cpuSys': '0.27', 'appsList': [], 'displayType': 'vnc', 'vcpuCount': '2', 'clientIp': '', 'hash': '-3724559636060176164', 'vmId': '09546d15-6679-4a99-9fe6-3fa4730811d4', 'displayIp': '0', 'vcpuPeriod': 100000L, 'displayPort': u'5900', 'vcpuQuota': '-1', 'kvmEnable': 'true', 'disks': {u'vda': {'readLatency': '0', 'apparentsize': '32212254720', 'writeLatency': '0', 'imageID': '39f6830c-8fa1-4abd-9259-90654e91ff2d', 'flushLatency': '0', 'truesize': '15446843392'}, u'hdc': {'flushLatency': '0', 'readLatency': '0', 'truesize': '0', 'apparentsize': '0', 'writeLatency': '0'}}, 'monitorResponse': '0', 'statsAge': '1.83', 'username': 'Unknown', 'status': 'Up', 'guestCPUCount': -1, 'ioTune': [], 'guestIPs': ''}]}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state init -> state preparing
clientIFinit::INFO::2015-05-12 10:27:50,809::logUtils::44::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList(options=None)
clientIFinit::INFO::2015-05-12 10:27:50,809::logUtils::47::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList, Return response: {'poollist': []}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::1191::Storage.TaskManager.Task::(prepare) Task=`decf270c-4715-432c-a01d-942181f61e80`::finished: {'poollist': []}
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state preparing -> state finished
clientIFinit::DEBUG::2015-05-12 10:27:50,809::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
clientIFinit::DEBUG::2015-05-12 10:27:50,810::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
clientIFinit::DEBUG::2015-05-12 10:27:50,810::task::993::Storage.TaskManager.Task::(_decref) Task=`decf270c-4715-432c-a01d-942181f61e80`::ref 0 aborting False
Is something wrong with GlusterFS? Or with CentOS 7.1?
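The -32601 "The method does not exist / is not available" replies to GlusterServersListVDS in the engine log above are exactly the symptom the vdsm-gluster question at the top of this message points at. A minimal check on the host, just a sketch assuming a standard RPM-based oVirt node, would be:

# rpm -q vdsm-gluster          # "not installed" here means VDSM has no gluster verbs
# yum install -y vdsm-gluster
# systemctl restart vdsmd      # restart VDSM so the gluster verbs are registered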
------------------------------------------------------------------------
*From:* suporte@logicworks.pt
*To:* "Daniel Helgenberger" <daniel.helgenberger@m-box.de>
*Cc:* users@ovirt.org
*Sent:* Tuesday, May 12, 2015 10:14:11
*Subject:* Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
Hi Daniel,
Well, I have glusterfs up and running:
# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 19h ago
Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
Main PID: 3061 (glusterd)
CGroup: /system.slice/glusterd.service
        ├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid
        └─3202 /usr/sbin/glusterfsd -s ovserver2.domain.com --volfile-id gv...

May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.
# gluster volume info

Volume Name: gv0
Type: Distribute
Volume ID: 6ccd1831-6c4c-41c3-a695-8c7b57cf1261
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovserver2.domain.com:/home2/brick1
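Two more checks that are usually worth running at this point (plain gluster CLI; the volume name gv0 is taken from the output above):

# gluster peer status          # shows whether the other node is connected
# gluster volume status gv0    # shows whether the brick process is online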
I stopped iptables, but cannot bring the nodes up. Everything was working until I needed to do a restart.
Any more ideas?
------------------------------------------------------------------------
*From:* "Daniel Helgenberger" <daniel.helgenberger@m-box.de>
*To:* users@ovirt.org
*Sent:* Monday, May 11, 2015 18:17:47
*Subject:* Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
On Mon, 2015-05-11 at 16:05 +0100, suporte@logicworks.pt wrote:
Hi,
I just restarted it again, and now I start the gluster service before starting the hosted engine, but I still get the same error message.
Any more ideas?

I just had the same problem. My <unknown> error was indeed due to the fact that glusterd / glusterfsd were not running.
After starting them, it turned out the host setup had not automatically added the iptables rules for gluster. I added these to iptables:
# gluster
-A INPUT -p tcp --dport 24007:24011 -j ACCEPT
-A INPUT -p tcp --dport 38465:38485 -j ACCEPT
Afterwards 'gluster peer status' worked and my host was operational again.
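If you would rather keep the firewall running instead of stopping iptables, the same ports can be opened permanently. On CentOS 7 the default front end is firewalld, so, as a sketch using the port ranges listed above:

# firewall-cmd --permanent --add-port=24007-24011/tcp   # glusterd / brick ports
# firewall-cmd --permanent --add-port=38465-38485/tcp   # gluster NFS ports
# firewall-cmd --reload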
Hint: Sometimes this is due to gluster itself. Restarting glusterd fixes it most of the time.
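In practice that just means restarting the daemon and re-checking the peers, for example:

# systemctl restart glusterd
# gluster peer status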
Thanks
Jose
# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date : True
Hostname : ovserver1.domain.com
Host ID : 1
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 4998
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=4998 (Mon May 11 16:03:48 2015)
    host-id=1
    score=2400
    maintenance=False
    state=EngineUp
# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 1h 27min ago
Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
Main PID: 3061 (glusterd)
CGroup: /system.slice/glusterd.service
        ├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid
        └─3202 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv...

May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.
----- Original Message -----
From: suporte@logicworks.pt
To: "knarra" <knarra@redhat.com>
Cc: Users@ovirt.org
Sent: Monday, May 11, 2015 13:15:14
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
Hi,
I have 2 nodes, but only one is working with glusterfs.
But you were right, glusterfs was not running. I just started the service - I hadn't checked it :( :

# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
Active: active (running) since Mon 2015-05-11 13:06:24 WEST; 3s ago
Process: 4482 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
Main PID: 4483 (glusterd)
CGroup: /system.slice/glusterd.service
        ├─4483 /usr/sbin/glusterd -p /var/run/glusterd.pid
        └─4618 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv...

May 11 13:06:22 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste....
May 11 13:06:24 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster....
Hint: Some lines were ellipsized, use -l to show in full.
But the problem still remains.
Should I start glusterfs first, before the hosted engine?
Thanks
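On the ordering question above: the gluster services do need to be running before the hosts can be activated, so rather than starting glusterd by hand after each reboot, the simplest approach is to enable it at boot. A sketch, assuming systemd as on CentOS 7:

# systemctl enable glusterd
# systemctl is-enabled glusterd    # should print "enabled"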
----- Original Message -----
De: "knarra" <knarra@redhat.com> Para: suporte@logicworks.pt, Users@ovirt.org Enviadas: Segunda-feira, 11 De Maio de 2015 12:45:19 Assunto: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
On 05/11/2015 05:00 PM, suporte@logicworks.pt wrote:
Hi,
I'm testing oVirt 3.5.1 with hosted engine on CentOS 7.1. I have installed some VMs with no problem. I needed to shut down the host machines (following this procedure: http://lists.ovirt.org/pipermail/users/2014-April/023861.html ). After rebooting I could not get it working again; when trying to activate the hosts this message comes up: Gluster command [<UNKNOWN>] failed on server
I have tried a lot of things, including updating to Version 3.5.2-1.el7.centos, but with no success.
Gluster version:
glusterfs-3.6.3-1.el7.x86_64
glusterfs-libs-3.6.3-1.el7.x86_64
glusterfs-fuse-3.6.3-1.el7.x86_64
glusterfs-cli-3.6.3-1.el7.x86_64
glusterfs-rdma-3.6.3-1.el7.x86_64
glusterfs-api-3.6.3-1.el7.x86_64
Any help?
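When gathering version information like the list above, it helps to include the VDSM packages as well, since whether vdsm-gluster is present is the key question in this thread. A one-liner, as a sketch:

# rpm -qa | grep -E '^(vdsm|glusterfs)' | sort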
--
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN

www.m-box.de
www.monkeymen.tv
Managing directors: Martin Retschitzegger / Michaela Göllner
Commercial register: Amtsgericht Charlottenburg / HRB 112767