<html>
  <head>
    <meta content="text/html; charset=UTF-8"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    Is vdsm-gluster installed on your node?<br>
    <br>
    The logs seem to indicate that it is not.<br>
    <br>
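    The repeated "StatusForXmlRpc [mCode=-32601, mMessage=The method does
    not exist / is not available.]" in your engine log is what the engine
    reports when VDSM has no gluster verbs to offer. A quick way to check
    from the shell (a sketch assuming a CentOS/RHEL host with yum; vdsmd
    needs a restart afterwards so the engine can reach the new verbs):<br>
    <br>
    # rpm -q vdsm-gluster || yum install vdsm-gluster<br>
    # systemctl restart vdsmd<br>
    <br>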
    <div class="moz-cite-prefix">On 05/12/2015 03:02 PM,
      <a class="moz-txt-link-abbreviated" href="mailto:suporte@logicworks.pt">suporte@logicworks.pt</a> wrote:<br>
    </div>
    <blockquote
      cite="mid:623449604.512878.1431423152052.JavaMail.zimbra@logicworks.pt"
      type="cite">
      <div style="font-family: Times New Roman; font-size: 10pt; color:
        #000000">
        <div><br>
        </div>
        <div>This is the engine log:<br>
        </div>
        <div>2015-05-12 10:27:44,012 INFO 
          [org.ovirt.engine.core.bll.ActivateVdsCommand]
          (ajp--127.0.0.1-8702-2) [76c5a7e7] Lock Acquired to object
          EngineLock [exclusiveLocks= key:
          b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS<br>
          , sharedLocks= ]<br>
          2015-05-12 10:27:44,186 INFO 
          [org.ovirt.engine.core.bll.ActivateVdsCommand]
          (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Running
          command: ActivateVdsCommand internal: false. Entities affected
          :  ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDSAction
          group MANIPULATE_HOST with role type ADMIN<br>
          2015-05-12 10:27:44,186 INFO 
          [org.ovirt.engine.core.bll.ActivateVdsCommand]
          (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Before
          acquiring lock in order to prevent monitoring for host
          ovserver1 from data-center Default<br>
          2015-05-12 10:27:44,186 INFO 
          [org.ovirt.engine.core.bll.ActivateVdsCommand]
          (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock acquired,
          from now a monitoring of host will be skipped for host
          ovserver1 from data-center Default<br>
          2015-05-12 10:27:44,189 INFO 
          [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
          (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START,
          SetVdsStatusVDSCommand(HostName = ovserver1, HostId =
          b505a91a-38b2-48c9-a161-06f1360a3d6f, status=Unassigned,
          nonOperationalReason=NONE, stopSpmFailureLogged=false), log
          id: dca9241<br>
          2015-05-12 10:27:44,236 INFO 
          [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
          (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH,
          SetVdsStatusVDSCommand, log id: dca9241<br>
          2015-05-12 10:27:44,320 INFO 
          [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMaintenanceModeVDSCommand]
          (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START,
          SetHaMaintenanceModeVDSCommand(HostName = ovserver1, HostId =
          b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 3106a21a<br>
          2015-05-12 10:27:44,324 INFO 
          [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMaintenanceModeVDSCommand]
          (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH,
          SetHaMaintenanceModeVDSCommand, log id: 3106a21a<br>
          2015-05-12 10:27:44,324 INFO 
          [org.ovirt.engine.core.bll.ActivateVdsCommand]
          (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Activate
          finished. Lock released. Monitoring can run now for host
          ovserver1 from data-center Default<br>
          2015-05-12 10:27:44,369 INFO 
          [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
          (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Correlation ID:
          76c5a7e7, Job ID: 41492531-353a-41e7-96ab-ca4a09651fbc, Call
          Stack: null, Custom Event ID: -1, Message: Host ovserver1 was
          activated by admin@internal.<br>
          2015-05-12 10:27:44,411 INFO 
          [org.ovirt.engine.core.bll.ActivateVdsCommand]
          (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock freed to
          object EngineLock [exclusiveLocks= key:
          b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS<br>
          , sharedLocks= ]<br>
          2015-05-12 10:27:45,047 INFO 
          [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
          (DefaultQuartzScheduler_Worker-51) [4d2b49f] START,
          GetHardwareInfoVDSCommand(HostName = ovserver1, HostId =
          b505a91a-38b2-48c9-a161-06f1360a3d6f,
          vds=Host[ovserver1,b505a91a-38b2-48c9-a161-06f1360a3d6f]), log
          id: 633e992b<br>
          2015-05-12 10:27:45,051 INFO 
          [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
          (DefaultQuartzScheduler_Worker-51) [4d2b49f] FINISH,
          GetHardwareInfoVDSCommand, log id: 633e992b<br>
          2015-05-12 10:27:45,052 WARN 
          [org.ovirt.engine.core.vdsbroker.VdsManager]
          (DefaultQuartzScheduler_Worker-51) [4d2b49f] Host ovserver1 is
          running with disabled SELinux.<br>
          2015-05-12 10:27:45,137 INFO 
          [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand]
          (DefaultQuartzScheduler_Worker-51) [211ecca6] Running command:
          HandleVdsCpuFlagsOrClusterChangedCommand internal: true.
          Entities affected :  ID: b505a91a-38b2-48c9-a161-06f1360a3d6f
          Type: VDS<br>
          2015-05-12 10:27:45,139 INFO 
          [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
          (DefaultQuartzScheduler_Worker-51) [211ecca6] START,
          GlusterServersListVDSCommand(HostName = ovserver1, HostId =
          b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 770f2d6e<br>
          2015-05-12 10:27:45,142 WARN 
          [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
          (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected
          return value: StatusForXmlRpc [mCode=-32601, mMessage=The
          method does not exist / is not available.]<br>
          2015-05-12 10:27:45,142 WARN 
          [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
          (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected
          return value: StatusForXmlRpc [mCode=-32601, mMessage=The
          method does not exist / is not available.]<br>
          2015-05-12 10:27:45,142 ERROR
          [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
          (DefaultQuartzScheduler_Worker-51) [211ecca6] Failed in
          GlusterServersListVDS method<br>
          2015-05-12 10:27:45,143 ERROR
          [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
          (DefaultQuartzScheduler_Worker-51) [211ecca6] Command
          GlusterServersListVDSCommand(HostName = ovserver1, HostId =
          b505a91a-38b2-48c9-a161-06f1360a3d6f) execution failed.
          Exception: VDSErrorException: VDSGenericException:
          VDSErrorException: Failed to GlusterServersListVDS, error =
          The method does not exist / is not available., code = -32601<br>
          2015-05-12 10:27:45,143 INFO 
          [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
          (DefaultQuartzScheduler_Worker-51) [211ecca6] FINISH,
          GlusterServersListVDSCommand, log id: 770f2d6e<br>
          2015-05-12 10:27:45,311 INFO 
          [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
          (DefaultQuartzScheduler_Worker-51) [7e3688d2] Running command:
          SetNonOperationalVdsCommand internal: true. Entities affected
          :  ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS<br>
          2015-05-12 10:27:45,312 INFO 
          [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
          (DefaultQuartzScheduler_Worker-51) [7e3688d2] START,
          SetVdsStatusVDSCommand(HostName = ovserver1, HostId =
          b505a91a-38b2-48c9-a161-06f1360a3d6f, status=NonOperational,
          nonOperationalReason=GLUSTER_COMMAND_FAILED,
          stopSpmFailureLogged=false), log id: 9dbd40f<br>
          2015-05-12 10:27:45,353 INFO 
          [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
          (DefaultQuartzScheduler_Worker-51) [7e3688d2] FINISH,
          SetVdsStatusVDSCommand, log id: 9dbd40f<br>
          2015-05-12 10:27:45,355 ERROR
          [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
          (org.ovirt.thread.pool-8-thread-41) [7e3688d2]
          ResourceManager::vdsMaintenance - There is not host capable of
          running the hosted engine VM<br>
          2015-05-12 10:27:45,394 ERROR
          [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
          (DefaultQuartzScheduler_Worker-51) [7e3688d2] Correlation ID:
          7e3688d2, Job ID: 2e6c4d5a-c1c3-4713-b103-2e20c2892e6b, Call
          Stack: null, Custom Event ID: -1, Message: Gluster command
          [&lt;UNKNOWN&gt;] failed on server ovserver1.<br>
          2015-05-12 10:27:45,561 INFO 
          [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
          (DefaultQuartzScheduler_Worker-51) [7e3688d2] Correlation ID:
          null, Call Stack: null, Custom Event ID: -1, Message: Status
          of host ovserver1 was set to NonOperational.<br>
          2015-05-12 10:27:45,696 INFO 
          [org.ovirt.engine.core.bll.HandleVdsVersionCommand]
          (DefaultQuartzScheduler_Worker-51) [b01e893] Running command:
          HandleVdsVersionCommand internal: true. Entities affected : 
          ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS<br>
          2015-05-12 10:27:45,697 INFO 
          [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
          (DefaultQuartzScheduler_Worker-51) [b01e893] Host
          b505a91a-38b2-48c9-a161-06f1360a3d6f : ovserver1 is already in
          NonOperational status for reason GLUSTER_COMMAND_FAILED.
          SetNonOperationalVds command is skipped.<br>
          <br>
        </div>
        <div>VDSM log:<br>
        </div>
        <div>Thread-84704::DEBUG::2015-05-12
          10:27:49,884::task::1191::Storage.TaskManager.Task::(prepare)
          Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::finished:
          {'75e6fd87-b38b-4280-b676-08c16748ff97': {'code': 0,
          'version': 3, 'acquired': True, 'delay': '0.000110247',
          'lastCheck': '6.5', 'valid': True}}<br>
          Thread-84704::DEBUG::2015-05-12
          10:27:49,884::task::595::Storage.TaskManager.Task::(_updateState)
          Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::moving from state
          preparing -&gt; state finished<br>
          Thread-84704::DEBUG::2015-05-12
          10:27:49,884::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
          Owner.releaseAll requests {} resources {}<br>
          Thread-84704::DEBUG::2015-05-12
          10:27:49,884::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
          Owner.cancelAll requests {}<br>
          Thread-84704::DEBUG::2015-05-12
          10:27:49,884::task::993::Storage.TaskManager.Task::(_decref)
          Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::ref 0 aborting
          False<br>
          JsonRpc (StompReactor)::DEBUG::2015-05-12
          10:27:49,914::stompReactor::98::Broker.StompAdapter::(handle_frame)
          Handling message &lt;StompFrame command='SEND'&gt;<br>
          JsonRpcServer::DEBUG::2015-05-12
          10:27:49,915::__init__::506::jsonrpc.JsonRpcServer::(serve_requests)
          Waiting for request<br>
          Thread-84705::DEBUG::2015-05-12
          10:27:49,916::stompReactor::163::yajsonrpc.StompServer::(send)
          Sending response<br>
          Detector thread::DEBUG::2015-05-12
          10:27:49,974::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
          Adding connection from 127.0.0.1:49510<br>
          Detector thread::DEBUG::2015-05-12
          10:27:49,980::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
          Connection removed from 127.0.0.1:49510<br>
          Detector thread::DEBUG::2015-05-12
          10:27:49,980::protocoldetector::246::vds.MultiProtocolAcceptor::(_handle_connection_read)
          Detected protocol xml from 127.0.0.1:49510<br>
          Detector thread::DEBUG::2015-05-12
          10:27:49,980::BindingXMLRPC::1173::XmlDetector::(handleSocket)
          xml over http detected from ('127.0.0.1', 49510)<br>
          Thread-84706::DEBUG::2015-05-12
          10:27:49,982::BindingXMLRPC::1133::vds::(wrapper) client
          [127.0.0.1]::call vmGetStats with
          ('09546d15-6679-4a99-9fe6-3fa4730811d4',) {}<br>
          Thread-84706::DEBUG::2015-05-12
          10:27:49,982::BindingXMLRPC::1140::vds::(wrapper) return
          vmGetStats with {'status': {'message': 'Done', 'code': 0},
          'statsList': [{'displayInfo': [{'tlsPort': '-1', 'ipAddress':
          '0', 'type': 'vnc', 'port': u'5900'}], 'memUsage': '0',
          'acpiEnable': 'true', 'guestFQDN': '', 'pid': '5587',
          'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset':
          '0', 'balloonInfo': {}, 'pauseCode': 'NOERR', 'network':
          {u'vnet0': {'macAddr': '00:16:3e:42:95:b9', 'rxDropped': '29',
          'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate':
          '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000',
          'name': u'vnet0'}}, 'vmType': 'kvm', 'cpuUser': '1.64',
          'elapsedTime': '69926', 'vmJobs': {}, 'cpuSys': '0.27',
          'appsList': [], 'displayType': 'vnc', 'vcpuCount': '2',
          'clientIp': '', 'hash': '-3724559636060176164', 'vmId':
          '09546d15-6679-4a99-9fe6-3fa4730811d4', 'displayIp': '0',
          'vcpuPeriod': 100000L, 'displayPort': u'5900', 'vcpuQuota':
          '-1', 'kvmEnable': 'true', 'disks': {u'vda': {'readLatency':
          '0', 'apparentsize': '32212254720', 'writeLatency': '0',
          'imageID': '39f6830c-8fa1-4abd-9259-90654e91ff2d',
          'flushLatency': '0', 'truesize': '15446843392'}, u'hdc':
          {'flushLatency': '0', 'readLatency': '0', 'truesize': '0',
          'apparentsize': '0', 'writeLatency': '0'}}, 'monitorResponse':
          '0', 'statsAge': '1.83', 'username': 'Unknown', 'status':
          'Up', 'guestCPUCount': -1, 'ioTune': [], 'guestIPs': ''}]}<br>
          clientIFinit::DEBUG::2015-05-12
          10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState)
          Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state
          init -&gt; state preparing<br>
          clientIFinit::INFO::2015-05-12
          10:27:50,809::logUtils::44::dispatcher::(wrapper) Run and
          protect: getConnectedStoragePoolsList(options=None)<br>
          clientIFinit::INFO::2015-05-12
          10:27:50,809::logUtils::47::dispatcher::(wrapper) Run and
          protect: getConnectedStoragePoolsList, Return response:
          {'poollist': []}<br>
          clientIFinit::DEBUG::2015-05-12
          10:27:50,809::task::1191::Storage.TaskManager.Task::(prepare)
          Task=`decf270c-4715-432c-a01d-942181f61e80`::finished:
          {'poollist': []}<br>
          clientIFinit::DEBUG::2015-05-12
          10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState)
          Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state
          preparing -&gt; state finished<br>
          clientIFinit::DEBUG::2015-05-12
          10:27:50,809::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
          Owner.releaseAll requests {} resources {}<br>
          clientIFinit::DEBUG::2015-05-12
          10:27:50,810::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
          Owner.cancelAll requests {}<br>
          clientIFinit::DEBUG::2015-05-12
          10:27:50,810::task::993::Storage.TaskManager.Task::(_decref)
          Task=`decf270c-4715-432c-a01d-942181f61e80`::ref 0 aborting
          False</div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div>Is something wrong with GlusterFS? Or CentOS 7.1?<br>
        </div>
        <div><br>
        </div>
        <div><br>
        </div>
        <hr id="zwchr">
        <div
style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"
          data-mce-style="color: #000; font-weight: normal; font-style:
          normal; text-decoration: none; font-family:
          Helvetica,Arial,sans-serif; font-size: 12pt;"><b>De: </b><a class="moz-txt-link-abbreviated" href="mailto:suporte@logicworks.pt">suporte@logicworks.pt</a><br>
          <b>Para: </b>"Daniel Helgenberger"
          <a class="moz-txt-link-rfc2396E" href="mailto:daniel.helgenberger@m-box.de">&lt;daniel.helgenberger@m-box.de&gt;</a><br>
          <b>Cc: </b><a class="moz-txt-link-abbreviated" href="mailto:users@ovirt.org">users@ovirt.org</a><br>
          <b>Enviadas: </b>Terça-feira, 12 De Maio de 2015 10:14:11<br>
          <b>Assunto: </b>Re: [ovirt-users] Gluster command
          [&lt;UNKNOWN&gt;] failed on server<br>
          <div><br>
          </div>
          <div style="font-family: Times New Roman; font-size: 10pt;
            color: #000000" data-mce-style="font-family: Times New
            Roman; font-size: 10pt; color: #000000;">
            <div>Hi Daniel,<br>
            </div>
            <div><br>
            </div>
            <div>Well, I have glusterfs up and running:<br>
            </div>
            <div><br>
            </div>
            <div># service glusterd status<br>
              Redirecting to /bin/systemctl status  glusterd.service<br>
              glusterd.service - GlusterFS, a clustered file-system
              server<br>
                 Loaded: loaded
              (/usr/lib/systemd/system/glusterd.service; enabled)<br>
                 Active: active (running) since Mon 2015-05-11 14:37:14
              WEST; 19h ago<br>
                Process: 3060 ExecStart=/usr/sbin/glusterd -p
              /var/run/glusterd.pid (code=exited, status=0/SUCCESS)<br>
               Main PID: 3061 (glusterd)<br>
                 CGroup: /system.slice/glusterd.service<br>
                       ├─3061 /usr/sbin/glusterd -p
              /var/run/glusterd.pid<br>
                       └─3202 /usr/sbin/glusterfsd -s
              ovserver2.domain.com --volfile-id gv...<br>
              <div><br>
              </div>
              May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting
              GlusterFS, a cluste....<br>
              May 11 14:37:14 ovserver2.domain.com systemd[1]: Started
              GlusterFS, a cluster....<br>
              Hint: Some lines were ellipsized, use -l to show in full.<br>
              <div><br>
              </div>
            </div>
            <div># gluster volume info<br>
              Volume Name: gv0<br>
              Type: Distribute<br>
              Volume ID: 6ccd1831-6c4c-41c3-a695-8c7b57cf1261<br>
              Status: Started<br>
              Number of Bricks: 1<br>
              Transport-type: tcp<br>
              Bricks:<br>
              Brick1: ovserver2.domain.com:/home2/brick1<br>
              <div><br>
              </div>
            </div>
            <div>I stopped iptables, but still cannot bring the nodes up.<br>
            </div>
            <div>Everything was working until I needed to do a restart.<br>
            </div>
            <div><br>
            </div>
            <div>Any more ideas?<br>
            </div>
            <div><br>
            </div>
            <div><br>
            </div>
            <div><br>
            </div>
            <hr id="zwchr">
            <div
style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"
              data-mce-style="color: #000; font-weight: normal;
              font-style: normal; text-decoration: none; font-family:
              Helvetica,Arial,sans-serif; font-size: 12pt;"><b>De: </b>"Daniel
              Helgenberger" <a class="moz-txt-link-rfc2396E" href="mailto:daniel.helgenberger@m-box.de">&lt;daniel.helgenberger@m-box.de&gt;</a><br>
              <b>Para: </b><a class="moz-txt-link-abbreviated" href="mailto:users@ovirt.org">users@ovirt.org</a><br>
              <b>Enviadas: </b>Segunda-feira, 11 De Maio de 2015
              18:17:47<br>
              <b>Assunto: </b>Re: [ovirt-users] Gluster command
              [&lt;UNKNOWN&gt;] failed on server<br>
              <div><br>
              </div>
              <br>
              <div><br>
              </div>
              On Mo, 2015-05-11 at 16:05 +0100, <a class="moz-txt-link-abbreviated" href="mailto:suporte@logicworks.pt">suporte@logicworks.pt</a>
              wrote:<br>
              &gt; Hi, <br>
              &gt; <br>
              &gt; I just restarted it again, and this time started the
              gluster service before starting the hosted engine, but I still
              get the same error message.<br>
              &gt; <br>
              &gt; Any more ideas? <br>
              I just had the same problem.<br>
              My &lt;unknown&gt; error was indeed due to the fact that
              glusterd / glusterfsd were not running.<br>
              <div><br>
              </div>
              After starting them, it turned out the host setup had not
              automatically added the iptables rules for gluster. I added
              these to iptables:<br>
              <div><br>
              </div>
              # gluster<br>
              -A INPUT -p tcp --dport 24007:24011 -j ACCEPT<br>
              -A INPUT -p tcp --dport 38465:38485 -j ACCEPT<br>
              <div><br>
              </div>
              Afterwards 'gluster peer status' worked and my host was
              operational<br>
              again.<br>
              <div><br>
              </div>
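              To apply the same rules live and keep them across reboots,
              something like this should work (a sketch assuming the
              iptables-services package rather than firewalld on CentOS 7):<br>
              <div><br>
              </div>
              # iptables -I INPUT -p tcp --dport 24007:24011 -j ACCEPT<br>
              # iptables -I INPUT -p tcp --dport 38465:38485 -j ACCEPT<br>
              # service iptables save<br>
              <div><br>
              </div>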
              Hint: Sometimes this is due to gluster itself. Restarting
              glusterd fixes this most of the time.<br>
              <div><br>
              </div>
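              In concrete terms (plain systemd and gluster commands, nothing
              oVirt-specific assumed; the second command just verifies that
              the peers are reachable again):<br>
              # systemctl restart glusterd<br>
              # gluster peer status<br>
              <div><br>
              </div>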
              &gt; <br>
              &gt; Thanks <br>
              &gt; <br>
              &gt; Jose <br>
              &gt; <br>
              &gt; # hosted-engine --vm-status <br>
              &gt; <br>
              &gt; --== Host 1 status ==-- <br>
              &gt; <br>
              &gt; Status up-to-date : True <br>
              &gt; Hostname : ovserver1.domain.com <br>
              &gt; Host ID : 1 <br>
              &gt; Engine status : {"health": "good", "vm": "up",
              "detail": "up"} <br>
              &gt; Score : 2400 <br>
              &gt; Local maintenance : False <br>
              &gt; Host timestamp : 4998 <br>
              &gt; Extra metadata (valid at timestamp): <br>
              &gt; metadata_parse_version=1 <br>
              &gt; metadata_feature_version=1 <br>
              &gt; timestamp=4998 (Mon May 11 16:03:48 2015) <br>
              &gt; host-id=1 <br>
              &gt; score=2400 <br>
              &gt; maintenance=False <br>
              &gt; state=EngineUp <br>
              &gt; <br>
              &gt; <br>
              &gt; # service glusterd status <br>
              &gt; Redirecting to /bin/systemctl status glusterd.service
              <br>
              &gt; glusterd.service - GlusterFS, a clustered file-system
              server <br>
              &gt; Loaded: loaded
              (/usr/lib/systemd/system/glusterd.service; enabled) <br>
              &gt; Active: active (running) since Mon 2015-05-11
              14:37:14 WEST; 1h 27min ago <br>
              &gt; Process: 3060 ExecStart=/usr/sbin/glusterd -p
              /var/run/glusterd.pid (code=exited, status=0/SUCCESS) <br>
              &gt; Main PID: 3061 (glusterd) <br>
              &gt; CGroup: /system.slice/glusterd.service <br>
              &gt; ├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid <br>
              &gt; └─3202 /usr/sbin/glusterfsd -s ovserver2.acloud.pt
              --volfile-id gv... <br>
              &gt; <br>
              &gt; May 11 14:37:11 ovserver2.domain.com systemd[1]:
              Starting GlusterFS, a cluste.... <br>
              &gt; May 11 14:37:14 ovserver2.domain.com systemd[1]:
              Started GlusterFS, a cluster.... <br>
              &gt; Hint: Some lines were ellipsized, use -l to show in
              full. <br>
              &gt; <br>
              &gt; <br>
              &gt; ----- Original Message -----<br>
              &gt; <br>
              &gt; From: <a class="moz-txt-link-abbreviated" href="mailto:suporte@logicworks.pt">suporte@logicworks.pt</a> <br>
              &gt; To: "knarra" <a class="moz-txt-link-rfc2396E" href="mailto:knarra@redhat.com">&lt;knarra@redhat.com&gt;</a> <br>
              &gt; Cc: <a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a> <br>
              &gt; Sent: Monday, May 11, 2015 13:15:14
              <br>
              &gt; Subject: Re: [ovirt-users] Gluster command
              [&lt;UNKNOWN&gt;] failed on server <br>
              &gt; <br>
              &gt; Hi, <br>
              &gt; <br>
              &gt; I have 2 nodes, but only one is working with
              glusterfs. <br>
              &gt; <br>
              &gt; But you were right, glusterfs was not running; I just
              started the service - I hadn't checked it :( : <br>
              &gt; # service glusterd status <br>
              &gt; Redirecting to /bin/systemctl status glusterd.service
              <br>
              &gt; glusterd.service - GlusterFS, a clustered file-system
              server <br>
              &gt; Loaded: loaded
              (/usr/lib/systemd/system/glusterd.service; enabled) <br>
              &gt; Active: active (running) since Mon 2015-05-11
              13:06:24 WEST; 3s ago <br>
              &gt; Process: 4482 ExecStart=/usr/sbin/glusterd -p
              /var/run/glusterd.pid (code=exited, status=0/SUCCESS) <br>
              &gt; Main PID: 4483 (glusterd) <br>
              &gt; CGroup: /system.slice/glusterd.service <br>
              &gt; ├─4483 /usr/sbin/glusterd -p /var/run/glusterd.pid <br>
              &gt; └─4618 /usr/sbin/glusterfsd -s ovserver2.acloud.pt
              --volfile-id gv... <br>
              &gt; <br>
              &gt; May 11 13:06:22 ovserver2.domain.com systemd[1]:
              Starting GlusterFS, a cluste.... <br>
              &gt; May 11 13:06:24 ovserver2.domain.com systemd[1]:
              Started GlusterFS, a cluster.... <br>
              &gt; Hint: Some lines were ellipsized, use -l to show in
              full. <br>
              &gt; <br>
              &gt; But still the problem remains <br>
              &gt; <br>
              &gt; Should I start glusterfs first, before the hosted
              engine? <br>
              &gt; <br>
              &gt; Thanks <br>
              &gt; <br>
              &gt; ----- Original Message -----<br>
              &gt; <br>
              &gt; De: "knarra" <a class="moz-txt-link-rfc2396E" href="mailto:knarra@redhat.com">&lt;knarra@redhat.com&gt;</a> <br>
              &gt; Para: <a class="moz-txt-link-abbreviated" href="mailto:suporte@logicworks.pt">suporte@logicworks.pt</a>, <a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a> <br>
              &gt; Enviadas: Segunda-feira, 11 De Maio de 2015 12:45:19
              <br>
              &gt; Subject: Re: [ovirt-users] Gluster command
              [&lt;UNKNOWN&gt;] failed on server <br>
              &gt; <br>
              &gt; On 05/11/2015 05:00 PM, <a class="moz-txt-link-abbreviated" href="mailto:suporte@logicworks.pt">suporte@logicworks.pt</a> wrote:
              <br>
              &gt; <br>
              &gt; <br>
              &gt; <br>
              &gt; Hi, <br>
              &gt; <br>
              &gt; I'm testing oVirt 3.5.1, with hosted engine, on CentOS
              7.1. I have installed some VMs with no problem. I needed to
              shut down the machines (following this procedure:
              <a class="moz-txt-link-freetext" href="http://lists.ovirt.org/pipermail/users/2014-April/023861.html">http://lists.ovirt.org/pipermail/users/2014-April/023861.html</a>
              ), and after rebooting I could not get things <br>
              &gt; working again; when trying to activate the hosts this
              message comes up: Gluster command [&lt;UNKNOWN&gt;] failed
              on server <br>
              &gt; I have tried a lot of things, including updating to
              version 3.5.2-1.el7.centos, but with no success. <br>
              &gt; Gluster version: <br>
              &gt; glusterfs-3.6.3-1.el7.x86_64 <br>
              &gt; glusterfs-libs-3.6.3-1.el7.x86_64 <br>
              &gt; glusterfs-fuse-3.6.3-1.el7.x86_64 <br>
              &gt; glusterfs-cli-3.6.3-1.el7.x86_64 <br>
              &gt; glusterfs-rdma-3.6.3-1.el7.x86_64 <br>
              &gt; glusterfs-api-3.6.3-1.el7.x86_64 <br>
              &gt; <br>
              &gt; Any help? <br>
              &gt; <br>
              <div><br>
              </div>
              -- <br>
              Daniel Helgenberger<br>
              m box bewegtbild GmbH<br>
              <div><br>
              </div>
              P: +49/30/2408781-22<br>
              F: +49/30/2408781-10<br>
              <div><br>
              </div>
              ACKERSTR. 19<br>
              D-10115 BERLIN<br>
              <div><br>
              </div>
              <br>
              <a class="moz-txt-link-abbreviated" href="http://www.m-box.de">www.m-box.de</a>  <a class="moz-txt-link-abbreviated" href="http://www.monkeymen.tv">www.monkeymen.tv</a><br>
              <div><br>
              </div>
              Geschäftsführer: Martin Retschitzegger / Michaela Göllner<br>
              Handelsregister: Amtsgericht Charlottenburg / HRB 112767<br>
            </div>
            <div><br>
            </div>
          </div>
          <br>
        </div>
        <div><br>
        </div>
      </div>
      <br>
      <br>
      <pre wrap="">_______________________________________________
Users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a>
<a class="moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>