<div dir="ltr">Hi Kanagaraj,<div><br></div><div>Yes, once I start the gluster service and then vdsmd, the host can connect to the cluster. But the question is why glusterd does not start automatically, even though it is enabled via chkconfig.</div><div><br></div><div>I have tested this in a two-host cluster environment (CentOS 6.6 and CentOS 7.0); on both hypervisor clusters the host fails to rejoin the cluster after a reboot.</div><div><br></div><div>In both environments glusterd is enabled for the next boot, but it fails with the same error. Does this point to a bug in either Gluster or oVirt?</div><div><br></div><div>If it cannot be resolved, please help me find a workaround, because without one the host cannot connect after a reboot; the engine will consider it down, and the gluster service and vdsmd have to be started manually every time.</div><div><br></div><div>Thanks,</div><div>Punit</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <span dir="ltr">&lt;<a href="mailto:kmayilsa@redhat.com" target="_blank">kmayilsa@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
    From vdsm.log: &quot;error: Connection failed. Please check if gluster
    daemon is operational.&quot;<br>
    <br>
    Starting the glusterd service should fix this issue: &#39;service glusterd
    start&#39;.<br>
    But I am wondering why glusterd was not started automatically
    after the reboot.<br>
    <br>
    Thanks,<br>
    Kanagaraj<div><div class="h5"><br>
    <br>
    <br>
    <div>On 11/24/2014 07:18 PM, Punit Dambiwal
      wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr">Hi Kanagaraj,
        <div><br>
        </div>
        <div>Please find the attached VDSM logs:</div>
        <div><br>
        </div>
        <div>----------------</div>
        <div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
            Owner.cancelAll requests {}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:17,182::task::993::Storage.TaskManager.Task::(_decref)
            Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting
            False</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState)
            Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from
            state init -&gt; state preparing</div>
          <div>Thread-13::<a>INFO::2014-11-24</a>
            21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and
            protect: repoStats(options=None)</div>
          <div>Thread-13::<a>INFO::2014-11-24</a>
            21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and
            protect: repoStats, Return response: {}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare)
            Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState)
            Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from
            state preparing -&gt; state finished</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
            Owner.releaseAll requests {} resources {}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
            Owner.cancelAll requests {}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:32,394::task::993::Storage.TaskManager.Task::(_decref)
            Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting
            False</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client
            [10.10.10.2]::call getCapabilities with () {}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,553::utils::738::root::(execCmd) /sbin/ip route
            show to <a href="http://0.0.0.0/0" target="_blank">0.0.0.0/0</a>
            table all (cwd None)</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,560::utils::758::root::(execCmd) SUCCESS:
            &lt;err&gt; = &#39;&#39;; &lt;rc&gt; = 0</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,588::caps::728::root::(_getKeyPackages) rpm package
            (&#39;gluster-swift&#39;,) not found</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,592::caps::728::root::(_getKeyPackages) rpm package
            (&#39;gluster-swift-object&#39;,) not found</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,593::caps::728::root::(_getKeyPackages) rpm package
            (&#39;gluster-swift-plugin&#39;,) not found</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,598::caps::728::root::(_getKeyPackages) rpm package
            (&#39;gluster-swift-account&#39;,) not found</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,598::caps::728::root::(_getKeyPackages) rpm package
            (&#39;gluster-swift-proxy&#39;,) not found</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,598::caps::728::root::(_getKeyPackages) rpm package
            (&#39;gluster-swift-doc&#39;,) not found</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,599::caps::728::root::(_getKeyPackages) rpm package
            (&#39;gluster-swift-container&#39;,) not found</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,599::caps::728::root::(_getKeyPackages) rpm package
            (&#39;glusterfs-geo-replication&#39;,) not found</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,600::caps::646::root::(get) VirtioRNG DISABLED:
            libvirt version 0.10.2-29.el6_5.9 required &gt;= 0.10.2-31</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return
            getCapabilities with {&#39;status&#39;: {&#39;message&#39;: &#39;Done&#39;, &#39;code&#39;:
            0}, &#39;info&#39;: {&#39;HBAInventory&#39;: {&#39;iSCSI&#39;: [{&#39;InitiatorName&#39;:
            &#39;iqn.1994-05.com.redhat:32151ce183c8&#39;}], &#39;FC&#39;: []},
            &#39;packages2&#39;: {&#39;kernel&#39;: {&#39;release&#39;: &#39;431.el6.x86_64&#39;,
            &#39;buildtime&#39;: 1385061309.0, &#39;version&#39;: &#39;2.6.32&#39;},
            &#39;glusterfs-rdma&#39;: {&#39;release&#39;: &#39;1.el6&#39;, &#39;buildtime&#39;:
            1403622628L, &#39;version&#39;: &#39;3.5.1&#39;}, &#39;glusterfs-fuse&#39;:
            {&#39;release&#39;: &#39;1.el6&#39;, &#39;buildtime&#39;: 1403622628L, &#39;version&#39;:
            &#39;3.5.1&#39;}, &#39;spice-server&#39;: {&#39;release&#39;: &#39;6.el6_5.2&#39;,
            &#39;buildtime&#39;: 1402324637L, &#39;version&#39;: &#39;0.12.4&#39;}, &#39;vdsm&#39;:
            {&#39;release&#39;: &#39;1.gitdb83943.el6&#39;, &#39;buildtime&#39;: 1412784567L,
            &#39;version&#39;: &#39;4.16.7&#39;}, &#39;qemu-kvm&#39;: {&#39;release&#39;:
            &#39;2.415.el6_5.10&#39;, &#39;buildtime&#39;: 1402435700L, &#39;version&#39;:
            &#39;0.12.1.2&#39;}, &#39;qemu-img&#39;: {&#39;release&#39;: &#39;2.415.el6_5.10&#39;,
            &#39;buildtime&#39;: 1402435700L, &#39;version&#39;: &#39;0.12.1.2&#39;}, &#39;libvirt&#39;:
            {&#39;release&#39;: &#39;29.el6_5.9&#39;, &#39;buildtime&#39;: 1402404612L,
            &#39;version&#39;: &#39;0.10.2&#39;}, &#39;glusterfs&#39;: {&#39;release&#39;: &#39;1.el6&#39;,
            &#39;buildtime&#39;: 1403622628L, &#39;version&#39;: &#39;3.5.1&#39;}, &#39;mom&#39;:
            {&#39;release&#39;: &#39;2.el6&#39;, &#39;buildtime&#39;: 1403794344L, &#39;version&#39;:
            &#39;0.4.1&#39;}, &#39;glusterfs-server&#39;: {&#39;release&#39;: &#39;1.el6&#39;,
            &#39;buildtime&#39;: 1403622628L, &#39;version&#39;: &#39;3.5.1&#39;}},
            &#39;numaNodeDistance&#39;: {&#39;1&#39;: [20, 10], &#39;0&#39;: [10, 20]},
            &#39;cpuModel&#39;: &#39;Intel(R) Xeon(R) CPU           X5650  @
            2.67GHz&#39;, &#39;liveMerge&#39;: &#39;false&#39;, &#39;hooks&#39;: {}, &#39;cpuSockets&#39;:
            &#39;2&#39;, &#39;vmTypes&#39;: [&#39;kvm&#39;], &#39;selinux&#39;: {&#39;mode&#39;: &#39;1&#39;},
            &#39;kdumpStatus&#39;: 0, &#39;supportedProtocols&#39;: [&#39;2.2&#39;, &#39;2.3&#39;],
            &#39;networks&#39;: {&#39;ovirtmgmt&#39;: {&#39;iface&#39;: u&#39;bond0.10&#39;, &#39;addr&#39;:
            &#39;43.252.176.16&#39;, &#39;bridged&#39;: False, &#39;ipv6addrs&#39;:
            [&#39;fe80::62eb:69ff:fe20:b46c/64&#39;], &#39;mtu&#39;: &#39;1500&#39;,
            &#39;bootproto4&#39;: &#39;none&#39;, &#39;netmask&#39;: &#39;255.255.255.0&#39;,
            &#39;ipv4addrs&#39;: [&#39;<a href="http://43.252.176.16/24%27" target="_blank">43.252.176.16/24&#39;</a>],
            &#39;interface&#39;: u&#39;bond0.10&#39;, &#39;ipv6gateway&#39;: &#39;::&#39;, &#39;gateway&#39;:
            &#39;43.25.17.1&#39;}, &#39;Internal&#39;: {&#39;iface&#39;: &#39;Internal&#39;, &#39;addr&#39;: &#39;&#39;,
            &#39;cfg&#39;: {&#39;DEFROUTE&#39;: &#39;no&#39;, &#39;HOTPLUG&#39;: &#39;no&#39;, &#39;MTU&#39;: &#39;9000&#39;,
            &#39;DELAY&#39;: &#39;0&#39;, &#39;NM_CONTROLLED&#39;: &#39;no&#39;, &#39;BOOTPROTO&#39;: &#39;none&#39;,
            &#39;STP&#39;: &#39;off&#39;, &#39;DEVICE&#39;: &#39;Internal&#39;, &#39;TYPE&#39;: &#39;Bridge&#39;,
            &#39;ONBOOT&#39;: &#39;no&#39;}, &#39;bridged&#39;: True, &#39;ipv6addrs&#39;:
            [&#39;fe80::210:18ff:fecd:daac/64&#39;], &#39;gateway&#39;: &#39;&#39;,
            &#39;bootproto4&#39;: &#39;none&#39;, &#39;netmask&#39;: &#39;&#39;, &#39;stp&#39;: &#39;off&#39;,
            &#39;ipv4addrs&#39;: [], &#39;mtu&#39;: &#39;9000&#39;, &#39;ipv6gateway&#39;: &#39;::&#39;,
            &#39;ports&#39;: [&#39;bond1.100&#39;]}, &#39;storage&#39;: {&#39;iface&#39;: u&#39;bond1&#39;,
            &#39;addr&#39;: &#39;10.10.10.6&#39;, &#39;bridged&#39;: False, &#39;ipv6addrs&#39;:
            [&#39;fe80::210:18ff:fecd:daac/64&#39;], &#39;mtu&#39;: &#39;9000&#39;,
            &#39;bootproto4&#39;: &#39;none&#39;, &#39;netmask&#39;: &#39;255.255.255.0&#39;,
            &#39;ipv4addrs&#39;: [&#39;<a href="http://10.10.10.6/24%27" target="_blank">10.10.10.6/24&#39;</a>],
            &#39;interface&#39;: u&#39;bond1&#39;, &#39;ipv6gateway&#39;: &#39;::&#39;, &#39;gateway&#39;: &#39;&#39;},
            &#39;VMNetwork&#39;: {&#39;iface&#39;: &#39;VMNetwork&#39;, &#39;addr&#39;: &#39;&#39;, &#39;cfg&#39;:
            {&#39;DEFROUTE&#39;: &#39;no&#39;, &#39;HOTPLUG&#39;: &#39;no&#39;, &#39;MTU&#39;: &#39;1500&#39;, &#39;DELAY&#39;:
            &#39;0&#39;, &#39;NM_CONTROLLED&#39;: &#39;no&#39;, &#39;BOOTPROTO&#39;: &#39;none&#39;, &#39;STP&#39;:
            &#39;off&#39;, &#39;DEVICE&#39;: &#39;VMNetwork&#39;, &#39;TYPE&#39;: &#39;Bridge&#39;, &#39;ONBOOT&#39;:
            &#39;no&#39;}, &#39;bridged&#39;: True, &#39;ipv6addrs&#39;:
            [&#39;fe80::62eb:69ff:fe20:b46c/64&#39;], &#39;gateway&#39;: &#39;&#39;,
            &#39;bootproto4&#39;: &#39;none&#39;, &#39;netmask&#39;: &#39;&#39;, &#39;stp&#39;: &#39;off&#39;,
            &#39;ipv4addrs&#39;: [], &#39;mtu&#39;: &#39;1500&#39;, &#39;ipv6gateway&#39;: &#39;::&#39;,
            &#39;ports&#39;: [&#39;bond0.36&#39;]}}, &#39;bridges&#39;: {&#39;Internal&#39;: {&#39;addr&#39;:
            &#39;&#39;, &#39;cfg&#39;: {&#39;DEFROUTE&#39;: &#39;no&#39;, &#39;HOTPLUG&#39;: &#39;no&#39;, &#39;MTU&#39;:
            &#39;9000&#39;, &#39;DELAY&#39;: &#39;0&#39;, &#39;NM_CONTROLLED&#39;: &#39;no&#39;, &#39;BOOTPROTO&#39;:
            &#39;none&#39;, &#39;STP&#39;: &#39;off&#39;, &#39;DEVICE&#39;: &#39;Internal&#39;, &#39;TYPE&#39;:
            &#39;Bridge&#39;, &#39;ONBOOT&#39;: &#39;no&#39;}, &#39;ipv6addrs&#39;:
            [&#39;fe80::210:18ff:fecd:daac/64&#39;], &#39;mtu&#39;: &#39;9000&#39;, &#39;netmask&#39;:
            &#39;&#39;, &#39;stp&#39;: &#39;off&#39;, &#39;ipv4addrs&#39;: [], &#39;ipv6gateway&#39;: &#39;::&#39;,
            &#39;gateway&#39;: &#39;&#39;, &#39;opts&#39;: {&#39;topology_change_detected&#39;: &#39;0&#39;,
            &#39;multicast_last_member_count&#39;: &#39;2&#39;, &#39;hash_elasticity&#39;: &#39;4&#39;,
            &#39;multicast_query_response_interval&#39;: &#39;999&#39;,
            &#39;multicast_snooping&#39;: &#39;1&#39;,
            &#39;multicast_startup_query_interval&#39;: &#39;3124&#39;, &#39;hello_timer&#39;:
            &#39;31&#39;, &#39;multicast_querier_interval&#39;: &#39;25496&#39;, &#39;max_age&#39;:
            &#39;1999&#39;, &#39;hash_max&#39;: &#39;512&#39;, &#39;stp_state&#39;: &#39;0&#39;, &#39;root_id&#39;:
            &#39;8000.001018cddaac&#39;, &#39;priority&#39;: &#39;32768&#39;,
            &#39;multicast_membership_interval&#39;: &#39;25996&#39;, &#39;root_path_cost&#39;:
            &#39;0&#39;, &#39;root_port&#39;: &#39;0&#39;, &#39;multicast_querier&#39;: &#39;0&#39;,
            &#39;multicast_startup_query_count&#39;: &#39;2&#39;, &#39;hello_time&#39;: &#39;199&#39;,
            &#39;topology_change&#39;: &#39;0&#39;, &#39;bridge_id&#39;: &#39;8000.001018cddaac&#39;,
            &#39;topology_change_timer&#39;: &#39;0&#39;, &#39;ageing_time&#39;: &#39;29995&#39;,
            &#39;gc_timer&#39;: &#39;31&#39;, &#39;group_addr&#39;: &#39;1:80:c2:0:0:0&#39;,
            &#39;tcn_timer&#39;: &#39;0&#39;, &#39;multicast_query_interval&#39;: &#39;12498&#39;,
            &#39;multicast_last_member_interval&#39;: &#39;99&#39;, &#39;multicast_router&#39;:
            &#39;1&#39;, &#39;forward_delay&#39;: &#39;0&#39;}, &#39;ports&#39;: [&#39;bond1.100&#39;]},
            &#39;VMNetwork&#39;: {&#39;addr&#39;: &#39;&#39;, &#39;cfg&#39;: {&#39;DEFROUTE&#39;: &#39;no&#39;,
            &#39;HOTPLUG&#39;: &#39;no&#39;, &#39;MTU&#39;: &#39;1500&#39;, &#39;DELAY&#39;: &#39;0&#39;,
            &#39;NM_CONTROLLED&#39;: &#39;no&#39;, &#39;BOOTPROTO&#39;: &#39;none&#39;, &#39;STP&#39;: &#39;off&#39;,
            &#39;DEVICE&#39;: &#39;VMNetwork&#39;, &#39;TYPE&#39;: &#39;Bridge&#39;, &#39;ONBOOT&#39;: &#39;no&#39;},
            &#39;ipv6addrs&#39;: [&#39;fe80::62eb:69ff:fe20:b46c/64&#39;], &#39;mtu&#39;:
            &#39;1500&#39;, &#39;netmask&#39;: &#39;&#39;, &#39;stp&#39;: &#39;off&#39;, &#39;ipv4addrs&#39;: [],
            &#39;ipv6gateway&#39;: &#39;::&#39;, &#39;gateway&#39;: &#39;&#39;, &#39;opts&#39;:
            {&#39;topology_change_detected&#39;: &#39;0&#39;,
            &#39;multicast_last_member_count&#39;: &#39;2&#39;, &#39;hash_elasticity&#39;: &#39;4&#39;,
            &#39;multicast_query_response_interval&#39;: &#39;999&#39;,
            &#39;multicast_snooping&#39;: &#39;1&#39;,
            &#39;multicast_startup_query_interval&#39;: &#39;3124&#39;, &#39;hello_timer&#39;:
            &#39;131&#39;, &#39;multicast_querier_interval&#39;: &#39;25496&#39;, &#39;max_age&#39;:
            &#39;1999&#39;, &#39;hash_max&#39;: &#39;512&#39;, &#39;stp_state&#39;: &#39;0&#39;, &#39;root_id&#39;:
            &#39;8000.60eb6920b46c&#39;, &#39;priority&#39;: &#39;32768&#39;,
            &#39;multicast_membership_interval&#39;: &#39;25996&#39;, &#39;root_path_cost&#39;:
            &#39;0&#39;, &#39;root_port&#39;: &#39;0&#39;, &#39;multicast_querier&#39;: &#39;0&#39;,
            &#39;multicast_startup_query_count&#39;: &#39;2&#39;, &#39;hello_time&#39;: &#39;199&#39;,
            &#39;topology_change&#39;: &#39;0&#39;, &#39;bridge_id&#39;: &#39;8000.60eb6920b46c&#39;,
            &#39;topology_change_timer&#39;: &#39;0&#39;, &#39;ageing_time&#39;: &#39;29995&#39;,
            &#39;gc_timer&#39;: &#39;31&#39;, &#39;group_addr&#39;: &#39;1:80:c2:0:0:0&#39;,
            &#39;tcn_timer&#39;: &#39;0&#39;, &#39;multicast_query_interval&#39;: &#39;12498&#39;,
            &#39;multicast_last_member_interval&#39;: &#39;99&#39;, &#39;multicast_router&#39;:
            &#39;1&#39;, &#39;forward_delay&#39;: &#39;0&#39;}, &#39;ports&#39;: [&#39;bond0.36&#39;]}}, &#39;uuid&#39;:
            &#39;44454C4C-4C00-1057-8053-B7C04F504E31&#39;, &#39;lastClientIface&#39;:
            &#39;bond1&#39;, &#39;nics&#39;: {&#39;eth3&#39;: {&#39;permhwaddr&#39;:
            &#39;00:10:18:cd:da:ae&#39;, &#39;addr&#39;: &#39;&#39;, &#39;cfg&#39;: {&#39;SLAVE&#39;: &#39;yes&#39;,
            &#39;NM_CONTROLLED&#39;: &#39;no&#39;, &#39;MTU&#39;: &#39;9000&#39;, &#39;HWADDR&#39;:
            &#39;00:10:18:cd:da:ae&#39;, &#39;MASTER&#39;: &#39;bond1&#39;, &#39;DEVICE&#39;: &#39;eth3&#39;,
            &#39;ONBOOT&#39;: &#39;no&#39;}, &#39;ipv6addrs&#39;: [], &#39;mtu&#39;: &#39;9000&#39;, &#39;netmask&#39;:
            &#39;&#39;, &#39;ipv4addrs&#39;: [], &#39;hwaddr&#39;: &#39;00:10:18:cd:da:ac&#39;, &#39;speed&#39;:
            1000}, &#39;eth2&#39;: {&#39;permhwaddr&#39;: &#39;00:10:18:cd:da:ac&#39;, &#39;addr&#39;:
            &#39;&#39;, &#39;cfg&#39;: {&#39;SLAVE&#39;: &#39;yes&#39;, &#39;NM_CONTROLLED&#39;: &#39;no&#39;, &#39;MTU&#39;:
            &#39;9000&#39;, &#39;HWADDR&#39;: &#39;00:10:18:cd:da:ac&#39;, &#39;MASTER&#39;: &#39;bond1&#39;,
            &#39;DEVICE&#39;: &#39;eth2&#39;, &#39;ONBOOT&#39;: &#39;no&#39;}, &#39;ipv6addrs&#39;: [], &#39;mtu&#39;:
            &#39;9000&#39;, &#39;netmask&#39;: &#39;&#39;, &#39;ipv4addrs&#39;: [], &#39;hwaddr&#39;:
            &#39;00:10:18:cd:da:ac&#39;, &#39;speed&#39;: 1000}, &#39;eth1&#39;: {&#39;permhwaddr&#39;:
            &#39;60:eb:69:20:b4:6d&#39;, &#39;addr&#39;: &#39;&#39;, &#39;cfg&#39;: {&#39;SLAVE&#39;: &#39;yes&#39;,
            &#39;NM_CONTROLLED&#39;: &#39;no&#39;, &#39;MTU&#39;: &#39;1500&#39;, &#39;HWADDR&#39;:
            &#39;60:eb:69:20:b4:6d&#39;, &#39;MASTER&#39;: &#39;bond0&#39;, &#39;DEVICE&#39;: &#39;eth1&#39;,
            &#39;ONBOOT&#39;: &#39;yes&#39;}, &#39;ipv6addrs&#39;: [], &#39;mtu&#39;: &#39;1500&#39;, &#39;netmask&#39;:
            &#39;&#39;, &#39;ipv4addrs&#39;: [], &#39;hwaddr&#39;: &#39;60:eb:69:20:b4:6c&#39;, &#39;speed&#39;:
            1000}, &#39;eth0&#39;: {&#39;permhwaddr&#39;: &#39;60:eb:69:20:b4:6c&#39;, &#39;addr&#39;:
            &#39;&#39;, &#39;cfg&#39;: {&#39;SLAVE&#39;: &#39;yes&#39;, &#39;NM_CONTROLLED&#39;: &#39;no&#39;, &#39;MTU&#39;:
            &#39;1500&#39;, &#39;HWADDR&#39;: &#39;60:eb:69:20:b4:6c&#39;, &#39;MASTER&#39;: &#39;bond0&#39;,
            &#39;DEVICE&#39;: &#39;eth0&#39;, &#39;ONBOOT&#39;: &#39;yes&#39;}, &#39;ipv6addrs&#39;: [], &#39;mtu&#39;:
            &#39;1500&#39;, &#39;netmask&#39;: &#39;&#39;, &#39;ipv4addrs&#39;: [], &#39;hwaddr&#39;:
            &#39;60:eb:69:20:b4:6c&#39;, &#39;speed&#39;: 1000}}, &#39;software_revision&#39;:
            &#39;1&#39;, &#39;clusterLevels&#39;: [&#39;3.0&#39;, &#39;3.1&#39;, &#39;3.2&#39;, &#39;3.3&#39;, &#39;3.4&#39;,
            &#39;3.5&#39;], &#39;cpuFlags&#39;:
            u&#39;fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270&#39;,
            &#39;ISCSIInitiatorName&#39;: &#39;iqn.1994-05.com.redhat:32151ce183c8&#39;,
            &#39;netConfigDirty&#39;: &#39;False&#39;, &#39;supportedENGINEs&#39;: [&#39;3.0&#39;,
            &#39;3.1&#39;, &#39;3.2&#39;, &#39;3.3&#39;, &#39;3.4&#39;, &#39;3.5&#39;], &#39;autoNumaBalancing&#39;: 2,
            &#39;reservedMem&#39;: &#39;321&#39;, &#39;bondings&#39;: {&#39;bond4&#39;: {&#39;addr&#39;: &#39;&#39;,
            &#39;cfg&#39;: {}, &#39;mtu&#39;: &#39;1500&#39;, &#39;netmask&#39;: &#39;&#39;, &#39;slaves&#39;: [],
            &#39;hwaddr&#39;: &#39;00:00:00:00:00:00&#39;}, &#39;bond0&#39;: {&#39;addr&#39;: &#39;&#39;, &#39;cfg&#39;:
            {&#39;HOTPLUG&#39;: &#39;no&#39;, &#39;MTU&#39;: &#39;1500&#39;, &#39;NM_CONTROLLED&#39;: &#39;no&#39;,
            &#39;BONDING_OPTS&#39;: &#39;mode=4 miimon=100&#39;, &#39;DEVICE&#39;: &#39;bond0&#39;,
            &#39;ONBOOT&#39;: &#39;yes&#39;}, &#39;ipv6addrs&#39;:
            [&#39;fe80::62eb:69ff:fe20:b46c/64&#39;], &#39;mtu&#39;: &#39;1500&#39;, &#39;netmask&#39;:
            &#39;&#39;, &#39;ipv4addrs&#39;: [], &#39;hwaddr&#39;: &#39;60:eb:69:20:b4:6c&#39;,
            &#39;slaves&#39;: [&#39;eth0&#39;, &#39;eth1&#39;], &#39;opts&#39;: {&#39;miimon&#39;: &#39;100&#39;,
            &#39;mode&#39;: &#39;4&#39;}}, &#39;bond1&#39;: {&#39;addr&#39;: &#39;10.10.10.6&#39;, &#39;cfg&#39;:
            {&#39;DEFROUTE&#39;: &#39;no&#39;, &#39;IPADDR&#39;: &#39;10.10.10.6&#39;, &#39;HOTPLUG&#39;: &#39;no&#39;,
            &#39;MTU&#39;: &#39;9000&#39;, &#39;NM_CONTROLLED&#39;: &#39;no&#39;, &#39;NETMASK&#39;:
            &#39;255.255.255.0&#39;, &#39;BOOTPROTO&#39;: &#39;none&#39;, &#39;BONDING_OPTS&#39;:
            &#39;mode=4 miimon=100&#39;, &#39;DEVICE&#39;: &#39;bond1&#39;, &#39;ONBOOT&#39;: &#39;no&#39;},
            &#39;ipv6addrs&#39;: [&#39;fe80::210:18ff:fecd:daac/64&#39;], &#39;mtu&#39;: &#39;9000&#39;,
            &#39;netmask&#39;: &#39;255.255.255.0&#39;, &#39;ipv4addrs&#39;: [&#39;<a href="http://10.10.10.6/24%27" target="_blank">10.10.10.6/24&#39;</a>],
            &#39;hwaddr&#39;: &#39;00:10:18:cd:da:ac&#39;, &#39;slaves&#39;: [&#39;eth2&#39;, &#39;eth3&#39;],
            &#39;opts&#39;: {&#39;miimon&#39;: &#39;100&#39;, &#39;mode&#39;: &#39;4&#39;}}, &#39;bond2&#39;: {&#39;addr&#39;:
            &#39;&#39;, &#39;cfg&#39;: {}, &#39;mtu&#39;: &#39;1500&#39;, &#39;netmask&#39;: &#39;&#39;, &#39;slaves&#39;: [],
            &#39;hwaddr&#39;: &#39;00:00:00:00:00:00&#39;}, &#39;bond3&#39;: {&#39;addr&#39;: &#39;&#39;, &#39;cfg&#39;:
            {}, &#39;mtu&#39;: &#39;1500&#39;, &#39;netmask&#39;: &#39;&#39;, &#39;slaves&#39;: [], &#39;hwaddr&#39;:
            &#39;00:00:00:00:00:00&#39;}}, &#39;software_version&#39;: &#39;4.16&#39;,
            &#39;memSize&#39;: &#39;24019&#39;, &#39;cpuSpeed&#39;: &#39;2667.000&#39;, &#39;numaNodes&#39;:
            {u&#39;1&#39;: {&#39;totalMemory&#39;: &#39;12288&#39;, &#39;cpus&#39;: [6, 7, 8, 9, 10, 11,
            18, 19, 20, 21, 22, 23]}, u&#39;0&#39;: {&#39;totalMemory&#39;: &#39;12278&#39;,
            &#39;cpus&#39;: [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]}},
            &#39;version_name&#39;: &#39;Snow Man&#39;, &#39;vlans&#39;: {&#39;bond0.10&#39;: {&#39;iface&#39;:
            &#39;bond0&#39;, &#39;addr&#39;: &#39;43.25.17.16&#39;, &#39;cfg&#39;: {&#39;DEFROUTE&#39;: &#39;yes&#39;,
            &#39;VLAN&#39;: &#39;yes&#39;, &#39;IPADDR&#39;: &#39;43.25.17.16&#39;, &#39;HOTPLUG&#39;: &#39;no&#39;,
            &#39;GATEWAY&#39;: &#39;43.25.17.1&#39;, &#39;NM_CONTROLLED&#39;: &#39;no&#39;, &#39;NETMASK&#39;:
            &#39;255.255.255.0&#39;, &#39;BOOTPROTO&#39;: &#39;none&#39;, &#39;DEVICE&#39;: &#39;bond0.10&#39;,
            &#39;MTU&#39;: &#39;1500&#39;, &#39;ONBOOT&#39;: &#39;yes&#39;}, &#39;ipv6addrs&#39;:
            [&#39;fe80::62eb:69ff:fe20:b46c/64&#39;], &#39;vlanid&#39;: 10, &#39;mtu&#39;:
            &#39;1500&#39;, &#39;netmask&#39;: &#39;255.255.255.0&#39;, &#39;ipv4addrs&#39;: [&#39;<a href="http://43.25.17.16/24%27%5D" target="_blank">43.25.17.16/24&#39;]</a>},
            &#39;bond0.36&#39;: {&#39;iface&#39;: &#39;bond0&#39;, &#39;addr&#39;: &#39;&#39;, &#39;cfg&#39;: {&#39;BRIDGE&#39;:
            &#39;VMNetwork&#39;, &#39;VLAN&#39;: &#39;yes&#39;, &#39;HOTPLUG&#39;: &#39;no&#39;, &#39;MTU&#39;: &#39;1500&#39;,
            &#39;NM_CONTROLLED&#39;: &#39;no&#39;, &#39;DEVICE&#39;: &#39;bond0.36&#39;, &#39;ONBOOT&#39;:
            &#39;no&#39;}, &#39;ipv6addrs&#39;: [&#39;fe80::62eb:69ff:fe20:b46c/64&#39;],
            &#39;vlanid&#39;: 36, &#39;mtu&#39;: &#39;1500&#39;, &#39;netmask&#39;: &#39;&#39;, &#39;ipv4addrs&#39;:
            []}, &#39;bond1.100&#39;: {&#39;iface&#39;: &#39;bond1&#39;, &#39;addr&#39;: &#39;&#39;, &#39;cfg&#39;:
            {&#39;BRIDGE&#39;: &#39;Internal&#39;, &#39;VLAN&#39;: &#39;yes&#39;, &#39;HOTPLUG&#39;: &#39;no&#39;,
            &#39;MTU&#39;: &#39;9000&#39;, &#39;NM_CONTROLLED&#39;: &#39;no&#39;, &#39;DEVICE&#39;: &#39;bond1.100&#39;,
            &#39;ONBOOT&#39;: &#39;no&#39;}, &#39;ipv6addrs&#39;:
            [&#39;fe80::210:18ff:fecd:daac/64&#39;], &#39;vlanid&#39;: 100, &#39;mtu&#39;:
            &#39;9000&#39;, &#39;netmask&#39;: &#39;&#39;, &#39;ipv4addrs&#39;: []}}, &#39;cpuCores&#39;: &#39;12&#39;,
            &#39;kvmEnabled&#39;: &#39;true&#39;, &#39;guestOverhead&#39;: &#39;65&#39;, &#39;cpuThreads&#39;:
            &#39;24&#39;, &#39;emulatedMachines&#39;: [u&#39;rhel6.5.0&#39;, u&#39;pc&#39;,
            u&#39;rhel6.4.0&#39;, u&#39;rhel6.3.0&#39;, u&#39;rhel6.2.0&#39;, u&#39;rhel6.1.0&#39;,
            u&#39;rhel6.0.0&#39;, u&#39;rhel5.5.0&#39;, u&#39;rhel5.4.4&#39;, u&#39;rhel5.4.0&#39;],
            &#39;operatingSystem&#39;: {&#39;release&#39;: &#39;5.el6.centos.11.1&#39;,
            &#39;version&#39;: &#39;6&#39;, &#39;name&#39;: &#39;RHEL&#39;}, &#39;lastClient&#39;:
            &#39;10.10.10.2&#39;}}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client
            [10.10.10.2]::call getHardwareInfo with () {}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return
            getHardwareInfo with {&#39;status&#39;: {&#39;message&#39;: &#39;Done&#39;, &#39;code&#39;:
            0}, &#39;info&#39;: {&#39;systemProductName&#39;: &#39;CS24-TY&#39;,
            &#39;systemSerialNumber&#39;: &#39;7LWSPN1&#39;, &#39;systemFamily&#39;: &#39;Server&#39;,
            &#39;systemVersion&#39;: &#39;A00&#39;, &#39;systemUUID&#39;:
            &#39;44454c4c-4c00-1057-8053-b7c04f504e31&#39;,
            &#39;systemManufacturer&#39;: &#39;Dell&#39;}}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client
            [10.10.10.2]::call hostsList with () {} flowID [222e8036]</div>
          <div>Thread-13::ERROR::2014-11-24
            21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm
            exception occured</div>
          <div>Traceback (most recent call last):</div>
          <div>  File &quot;/usr/share/vdsm/rpc/BindingXMLRPC.py&quot;, line 1135,
            in wrapper</div>
          <div>    res = f(*args, **kwargs)</div>
          <div>  File &quot;/usr/share/vdsm/gluster/api.py&quot;, line 54, in
            wrapper</div>
          <div>    rv = func(*args, **kwargs)</div>
          <div>  File &quot;/usr/share/vdsm/gluster/api.py&quot;, line 251, in
            hostsList</div>
          <div>    return {&#39;hosts&#39;: self.svdsmProxy.glusterPeerStatus()}</div>
          <div>  File &quot;/usr/share/vdsm/supervdsm.py&quot;, line 50, in
            __call__</div>
          <div>    return callMethod()</div>
          <div>  File &quot;/usr/share/vdsm/supervdsm.py&quot;, line 48, in
            &lt;lambda&gt;</div>
          <div>    **kwargs)</div>
          <div>  File &quot;&lt;string&gt;&quot;, line 2, in glusterPeerStatus</div>
          <div>  File
            &quot;/usr/lib64/python2.6/multiprocessing/managers.py&quot;, line
            740, in _callmethod</div>
          <div>    raise convert_to_error(kind, result)</div>
          <div>GlusterCmdExecFailedException: Command execution failed</div>
          <div>error: Connection failed. Please check if gluster daemon
            is operational.</div>
          <div>return code: 1</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState)
            Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from
            state init -&gt; state preparing</div>
          <div>Thread-13::<a>INFO::2014-11-24</a>
            21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and
            protect: repoStats(options=None)</div>
          <div>Thread-13::<a>INFO::2014-11-24</a>
            21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and
            protect: repoStats, Return response: {}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare)
            Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState)
            Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from
            state preparing -&gt; state finished</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
            Owner.releaseAll requests {} resources {}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
            Owner.cancelAll requests {}</div>
          <div>Thread-13::DEBUG::2014-11-24
            21:41:50,951::task::993::Storage.TaskManager.Task::(_decref)
            Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting
            False</div>
        </div>
        <div>-------------------------------</div>
        <div><br>
        </div>
        <div>
          <div>[root@compute4 ~]# service glusterd status</div>
          <div>glusterd is stopped</div>
          <div>[root@compute4 ~]# chkconfig --list | grep glusterd</div>
          <div>glusterd        0:off   1:off   2:on    3:on    4:on  
             5:on    6:off</div>
          <div>[root@compute4 ~]#<br>
          </div>
        </div>
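
The chkconfig output above shows glusterd enabled for runlevels 2-5 yet stopped after boot, which points at a service-ordering or init problem rather than a disabled service. One hedged workaround sketch (an assumption, not a confirmed fix): on the CentOS 7 host, add a systemd drop-in so vdsmd only starts after glusterd is up. The unit names glusterd.service and vdsmd.service are assumed to match the stock packages.

```ini
# Hypothetical drop-in: /etc/systemd/system/vdsmd.service.d/99-glusterd.conf
# Assumes the stock glusterd.service and vdsmd.service unit names.
[Unit]
Requires=glusterd.service
After=glusterd.service
```

After creating the drop-in, run `systemctl daemon-reload` and reboot to verify. On the CentOS 6 host, the analogous check is the SysV start order: `ls /etc/rc3.d | grep -Ei 'glusterd|vdsmd'` should show glusterd's S-number sorting before vdsmd's.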
        <div><br>
        </div>
        <div>Thanks,</div>
        <div>Punit</div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Mon, Nov 24, 2014 at 6:36 PM,
          Kanagaraj <span dir="ltr">&lt;<a href="mailto:kmayilsa@redhat.com" target="_blank">kmayilsa@redhat.com</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000"> Can you send the
              corresponding error in vdsm.log from the host?<br>
              <br>
              Also check whether the glusterd service is running.<br>
              <br>
              Thanks,<br>
              Kanagaraj
              <div>
                <div><br>
                  <br>
                  <div>On 11/24/2014 03:39 PM, Punit Dambiwal wrote:<br>
                  </div>
                </div>
              </div>
              <blockquote type="cite">
                <div>
                  <div>
                    <div dir="ltr">
                      <div>Hi,</div>
                      <div><br>
                      </div>
                      <div>After a reboot, my hypervisor host cannot be
                        activated again in the cluster; it fails with
                        the following error:</div>
                      <div><br>
                      </div>
                      <div>Gluster command [&lt;UNKNOWN&gt;] failed on
                        server...<br>
                      </div>
                      <div><br>
                      </div>
                      <div>Engine logs:</div>
                      <div><br>
                      </div>
                      <div>2014-11-24 18:05:28,397 INFO
                         [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
                        (DefaultQuartzScheduler_Worker-64) START,
                        GlusterVolumesListVDSCommand(HostName =
                        Compute4, HostId =
                        33648a90-200c-45ca-89d5-1ce305d79a6a), log id:
                        5f251c90</div>
                      <div>2014-11-24 18:05:30,609 INFO
                         [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
                        (DefaultQuartzScheduler_Worker-64) FINISH,
                        GlusterVolumesListVDSCommand, return:
                        {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0},

                        log id: 5f251c90</div>
                      <div>2014-11-24 18:05:33,768 INFO
                         [org.ovirt.engine.core.bll.ActivateVdsCommand]
                        (ajp--127.0.0.1-8702-8) [287d570d] Lock Acquired
                        to object EngineLock [exclusiveLocks= key:
                        0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS</div>
                      <div>, sharedLocks= ]</div>
                      <div>2014-11-24 18:05:33,795 INFO
                         [org.ovirt.engine.core.bll.ActivateVdsCommand]
                        (org.ovirt.thread.pool-8-thread-45) [287d570d]
                        Running command: ActivateVdsCommand internal:
                        false. Entities affected :  ID:
                        0bf6b00f-7947-4411-b55a-cc5eea2b381a Type:
                        VDSAction group MANIPULATE_HOST with role type
                        ADMIN</div>
                      <div>2014-11-24 18:05:33,796 INFO
                         [org.ovirt.engine.core.bll.ActivateVdsCommand]
                        (org.ovirt.thread.pool-8-thread-45) [287d570d]
                        Before acquiring lock in order to prevent
                        monitoring for host Compute5 from data-center
                        SV_WTC</div>
                      <div>2014-11-24 18:05:33,797 INFO
                         [org.ovirt.engine.core.bll.ActivateVdsCommand]
                        (org.ovirt.thread.pool-8-thread-45) [287d570d]
                        Lock acquired, from now a monitoring of host
                        will be skipped for host Compute5 from
                        data-center SV_WTC</div>
                      <div>2014-11-24 18:05:33,817 INFO
                         [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
                        (org.ovirt.thread.pool-8-thread-45) [287d570d]
                        START, SetVdsStatusVDSCommand(HostName =
                        Compute5, HostId =
                        0bf6b00f-7947-4411-b55a-cc5eea2b381a,
                        status=Unassigned, nonOperationalReason=NONE,
                        stopSpmFailureLogged=false), log id: 1cbc7311</div>
                      <div>2014-11-24 18:05:33,820 INFO
                         [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
                        (org.ovirt.thread.pool-8-thread-45) [287d570d]
                        FINISH, SetVdsStatusVDSCommand, log id: 1cbc7311</div>
                      <div>2014-11-24 18:05:34,086 INFO
                         [org.ovirt.engine.core.bll.ActivateVdsCommand]
                        (org.ovirt.thread.pool-8-thread-45) Activate
                        finished. Lock released. Monitoring can run now
                        for host Compute5 from data-center SV_WTC</div>
                      <div>2014-11-24 18:05:34,088 INFO
                         [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
                        (org.ovirt.thread.pool-8-thread-45) Correlation
                        ID: 287d570d, Job ID:
                        5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call
                        Stack: null, Custom Event ID: -1, Message: Host
                        Compute5 was activated by admin.</div>
                      <div>2014-11-24 18:05:34,090 INFO
                         [org.ovirt.engine.core.bll.ActivateVdsCommand]
                        (org.ovirt.thread.pool-8-thread-45) Lock freed
                        to object EngineLock [exclusiveLocks= key:
                        0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS</div>
                      <div>, sharedLocks= ]</div>
                      <div>2014-11-24 18:05:35,792 INFO
                         [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
                        (DefaultQuartzScheduler_Worker-55) [3706e836]
                        START, GlusterVolumesListVDSCommand(HostName =
                        Compute4, HostId =
                        33648a90-200c-45ca-89d5-1ce305d79a6a), log id:
                        48a0c832</div>
                      <div>2014-11-24 18:05:37,064 INFO
                         [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
                        (DefaultQuartzScheduler_Worker-69) START,
                        GetHardwareInfoVDSCommand(HostName = Compute5,
                        HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a,
                        vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]),
                        log id: 6d560cc2</div>
                      <div>2014-11-24 18:05:37,074 INFO
                         [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
                        (DefaultQuartzScheduler_Worker-69) FINISH,
                        GetHardwareInfoVDSCommand, log id: 6d560cc2</div>
                      <div>2014-11-24 18:05:37,093 WARN
                         [org.ovirt.engine.core.vdsbroker.VdsManager]
                        (DefaultQuartzScheduler_Worker-69) Host Compute5
                        is running with disabled SELinux.</div>
                      <div>2014-11-24 18:05:37,127 INFO
                         [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand]
                        (DefaultQuartzScheduler_Worker-69) [2b4a51cf]
                        Running command:
                        HandleVdsCpuFlagsOrClusterChangedCommand
                        internal: true. Entities affected :  ID:
                        0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS</div>
                      <div>2014-11-24 18:05:37,147 INFO
                         [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
                        (DefaultQuartzScheduler_Worker-69) [2b4a51cf]
                        START, GlusterServersListVDSCommand(HostName =
                        Compute5, HostId =
                        0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id:
                        4faed87</div>
                      <div>2014-11-24 18:05:37,164 INFO
                         [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
                        (DefaultQuartzScheduler_Worker-69) [2b4a51cf]
                        FINISH, GlusterServersListVDSCommand, log id:
                        4faed87</div>
                      <div>2014-11-24 18:05:37,189 INFO
                         [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
                        (DefaultQuartzScheduler_Worker-69) [4a84c4e5]
                        Running command: SetNonOperationalVdsCommand
                        internal: true. Entities affected :  ID:
                        0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS</div>
                      <div>2014-11-24 18:05:37,206 INFO
                         [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
                        (DefaultQuartzScheduler_Worker-69) [4a84c4e5]
                        START, SetVdsStatusVDSCommand(HostName =
                        Compute5, HostId =
                        0bf6b00f-7947-4411-b55a-cc5eea2b381a,
                        status=NonOperational,
                        nonOperationalReason=GLUSTER_COMMAND_FAILED,
                        stopSpmFailureLogged=false), log id: fed5617</div>
                      <div>2014-11-24 18:05:37,209 INFO
                         [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
                        (DefaultQuartzScheduler_Worker-69) [4a84c4e5]
                        FINISH, SetVdsStatusVDSCommand, log id: fed5617</div>
                      <div>2014-11-24 18:05:37,223 ERROR
                        [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
                        (DefaultQuartzScheduler_Worker-69) [4a84c4e5]
                        Correlation ID: 4a84c4e5, Job ID:
                        4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call
                        Stack: null, Custom Event ID: -1, Message:
                        Gluster command [&lt;UNKNOWN&gt;] failed on
                        server Compute5.</div>
                      <div>2014-11-24 18:05:37,243 INFO
                         [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
                        (DefaultQuartzScheduler_Worker-69) [4a84c4e5]
                        Correlation ID: null, Call Stack: null, Custom
                        Event ID: -1, Message: Status of host Compute5
                        was set to NonOperational.</div>
                      <div>2014-11-24 18:05:37,272 INFO
                         [org.ovirt.engine.core.bll.HandleVdsVersionCommand]
                        (DefaultQuartzScheduler_Worker-69) [a0c8a7f]
                        Running command: HandleVdsVersionCommand
                        internal: true. Entities affected :  ID:
                        0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS</div>
                      <div>2014-11-24 18:05:37,274 INFO
                         [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
                        (DefaultQuartzScheduler_Worker-69) [a0c8a7f]
                        Host 0bf6b00f-7947-4411-b55a-cc5eea2b381a :
                        Compute5 is already in NonOperational status for
                        reason GLUSTER_COMMAND_FAILED.
                        SetNonOperationalVds command is skipped.</div>
                      <div>2014-11-24 18:05:38,065 INFO
                         [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
                        (DefaultQuartzScheduler_Worker-55) [3706e836]
                        FINISH, GlusterVolumesListVDSCommand, return:
                        {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1},

                        log id: 48a0c832</div>
                      <div>2014-11-24 18:05:43,243 INFO
                         [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
                        (DefaultQuartzScheduler_Worker-35) START,
                        GlusterVolumesListVDSCommand(HostName =
                        Compute4, HostId =
                        33648a90-200c-45ca-89d5-1ce305d79a6a), log id:
                        3ce13ebc</div>
                      <div>^C</div>
                      <div>[root@ccr01 ~]#</div>
                      <div><br>
                      </div>
                      <div>Thanks,</div>
                      <div>Punit</div>
                    </div>
                    <br>
                    <fieldset></fieldset>
                    <br>
                  </div>
                </div>
                <pre>_______________________________________________
Users mailing list
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>
<a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a>
</pre>
              </blockquote>
              <br>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
    <br>
  </div></div></div>

</blockquote></div><br></div>