<div dir="ltr">Dear Kaushal,<div><br></div><div>I tried various methods, but I still get the same error. It seems to be a Gluster bug. Can anybody suggest a workaround here?</div><div><br></div><div>Thanks,</div><div>Punit</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Dec 7, 2014 at 8:40 PM, Punit Dambiwal <span dir="ltr"><<a href="mailto:hypunit@gmail.com" target="_blank">hypunit@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Kaushal,<div><br></div><div>Still the same error, even after trying your suggested workaround: </div><div><br></div><div>-------------------</div><div><span style="font-family:arial,sans-serif;font-size:13px">Can you replace 'Before=network-online.target' with</span><br style="font-family:arial,sans-serif;font-size:13px"><span style="font-family:arial,sans-serif;font-size:13px">'Wants=network-online.target' and try the boot again? This should</span><br style="font-family:arial,sans-serif;font-size:13px"><span style="font-family:arial,sans-serif;font-size:13px">force the network to be online before starting GlusterD.</span><br></div><div><span style="font-family:arial,sans-serif;font-size:13px">-------------------</span></div><div><span style="font-family:arial,sans-serif;font-size:13px"><br></span></div><div><font face="arial, sans-serif">Thanks,</font></div><div><font face="arial, sans-serif">Punit</font></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Dec 6, 2014 at 11:44 AM, Punit Dambiwal <span dir="ltr"><<a href="mailto:hypunit@gmail.com" target="_blank">hypunit@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Kaushal,<div><br></div><div>I already have entries for all the hosts in /etc/hosts for easy resolution. I will try your method in glusterd.service and let you 
know whether it solves the problem.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Dec 5, 2014 at 9:50 PM, Kaushal M <span dir="ltr"><<a href="mailto:kshlmster@gmail.com" target="_blank">kshlmster@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Can you replace 'Before=network-online.target' with<br>
'Wants=network-online.target' and try the boot again? This should<br>
force the network to be online before starting GlusterD.<br>
<br>
If even that fails, you could try adding an entry into /etc/hosts with<br>
the hostname of the system. This should prevent any more failures.<br>
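For example, an /etc/hosts entry for one of the hosts might look like this (the IP address below is illustrative; use each host's real address and the exact names used during peer probe):<br>

```
# /etc/hosts -- illustrative entry; substitute the host's real IP and names
10.10.0.5   cpu05.zne01.hkg1.ovt.36stack.com   cpu05
```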
<br>
I still don't believe it's a problem with Gluster. Gluster uses apis<br>
provided by the system to perform name resolution. These definitely<br>
work correctly because you can start GlusterD later. Since the<br>
resolution failure only happens during boot, it points to system or<br>
network setup issues during boot. To me it seems like the network<br>
isn't completely set up at that point in time.<br>
<br>
~kaushal<br>
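For reference, the suggested change could also be applied as a systemd drop-in rather than by editing the packaged unit file (a sketch; the path is an assumption and distribution details vary). Note that Wants= only pulls network-online.target into the boot transaction; it is the After= line that actually delays glusterd until that target is reached:<br>

```ini
# /etc/systemd/system/glusterd.service.d/wait-online.conf (hypothetical drop-in)
[Unit]
# Pull in network-online.target and order glusterd after it, so name
# resolution is available when glusterd restores bricks and peers at boot.
Wants=network-online.target
After=network.target network-online.target
```

Running `systemctl daemon-reload` afterwards makes systemd pick up the drop-in.<br>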
<div><div><br>
On Fri, Dec 5, 2014 at 12:47 PM, Punit Dambiwal <<a href="mailto:hypunit@gmail.com" target="_blank">hypunit@gmail.com</a>> wrote:<br>
> Hi Kaushal,<br>
><br>
> It seems it's a bug in glusterfs 3.6. Even though I configured systemd to start the<br>
> network service before glusterd, it still fails...<br>
><br>
> ---------------<br>
> [Unit]<br>
> Description=GlusterFS, a clustered file-system server<br>
> After=network.target rpcbind.service<br>
> Before=network-online.target<br>
><br>
> [Service]<br>
> Type=forking<br>
> PIDFile=/var/run/glusterd.pid<br>
> LimitNOFILE=65536<br>
> ExecStartPre=/etc/rc.d/init.d/network start<br>
> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid<br>
> KillMode=process<br>
><br>
> [Install]<br>
> WantedBy=multi-user.target<br>
> ----------------<br>
><br>
> Thanks,<br>
> Punit<br>
><br>
> On Wed, Dec 3, 2014 at 8:56 PM, Kaushal M <<a href="mailto:kshlmster@gmail.com" target="_blank">kshlmster@gmail.com</a>> wrote:<br>
>><br>
>> I just remembered this.<br>
>><br>
>> There was another user having a similar issue of GlusterD failing to<br>
>> start on the mailing list a while back. The cause of his problem was<br>
>> the way his network was brought up.<br>
>> IIRC, he was using a static network configuration. The problem<br>
>> vanished when he began using dhcp. Or it might have been that he was using<br>
>> dhcp.service and it got solved after switching to NetworkManager.<br>
>><br>
>> This could be one more thing you could look at.<br>
>><br>
>> I'll try to find the mail thread to see if it was the same problem as you.<br>
>><br>
>> ~kaushal<br>
>><br>
>> On Wed, Dec 3, 2014 at 6:22 PM, Kaushal M <<a href="mailto:kshlmster@gmail.com" target="_blank">kshlmster@gmail.com</a>> wrote:<br>
>> > I don't know much about how the network target is brought up in<br>
>> > CentOS7, but I'll try as much as I can.<br>
>> ><br>
>> > It seems to me that, after the network has been brought up and by the<br>
>> > time GlusterD is started,<br>
>> > a. The machine hasn't yet received its hostname, or<br>
>> > b. It hasn't yet registered with the name server.<br>
>> ><br>
>> > This is causing name resolution failures.<br>
>> ><br>
>> > I don't know if the network target could come up without the machine<br>
>> > getting its hostname, so I'm pretty sure it's not a.<br>
>> ><br>
>> > So it seems to be b. But that kind of registration happens only in DDNS<br>
>> > systems, which doesn't seem to be the case for you.<br>
>> ><br>
>> > Both of these reasons might be wrong (most likely wrong). You'd do<br>
>> > good if you could ask for help from someone with more experience in<br>
>> > systemd + networking.<br>
>> ><br>
>> > ~kaushal<br>
>> ><br>
>> > On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <<a href="mailto:hypunit@gmail.com" target="_blank">hypunit@gmail.com</a>><br>
>> > wrote:<br>
>> >> Hi Kaushal,<br>
>> >><br>
>> >> This is the host which I rebooted. Would you mind letting me know how<br>
>> >> I can make the glusterd service come up after the network, if the<br>
>> >> network is the issue? I am using CentOS 7.<br>
>> >><br>
>> >> On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <<a href="mailto:kshlmster@gmail.com" target="_blank">kshlmster@gmail.com</a>> wrote:<br>
>> >>><br>
>> >>> This peer cannot be identified.<br>
>> >>><br>
>> >>> " [2014-12-03 02:29:25.998153] D<br>
>> >>> [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname]<br>
>> >>> 0-management:<br>
>> >>> Unable to find friend: <a href="http://cpu05.zne01.hkg1.ovt.36stack.com" target="_blank">cpu05.zne01.hkg1.ovt.36stack.com</a>"<br>
>> >>><br>
>> >>> I don't know why this address is not being resolved during boot time.<br>
>> >>> If<br>
>> >>> this is a valid peer, the only reason I can think of is that the<br>
>> >>> network is not up.<br>
>> >>><br>
>> >>> If you had previously detached the peer forcefully, then that could<br>
>> >>> have<br>
>> >>> left stale entries in some volumes. In this case as well, GlusterD<br>
>> >>> will fail<br>
>> >>> to identify the peer.<br>
>> >>><br>
>> >>> Do either of these reasons seem a possibility to you?<br>
>> >>><br>
>> >>> On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <<a href="mailto:hypunit@gmail.com" target="_blank">hypunit@gmail.com</a>> wrote:<br>
>> >>>><br>
>> >>>> Hi Kaushal,<br>
>> >>>><br>
>> >>>> Please find the logs here :- <a href="http://ur1.ca/iyoe5" target="_blank">http://ur1.ca/iyoe5</a> and<br>
>> >>>> <a href="http://ur1.ca/iyoed" target="_blank">http://ur1.ca/iyoed</a><br>
>> >>>><br>
>> >>>> On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <<a href="mailto:kshlmster@gmail.com" target="_blank">kshlmster@gmail.com</a>><br>
>> >>>> wrote:<br>
>> >>>>><br>
>> >>>>> Hey Punit,<br>
>> >>>>> In the logs you've provided, GlusterD appears to be running<br>
>> >>>>> correctly.<br>
>> >>>>> Could you provide the logs for the time period when GlusterD<br>
>> >>>>> attempts to<br>
>> >>>>> start but fails?<br>
>> >>>>><br>
>> >>>>> ~kaushal<br>
>> >>>>><br>
>> >>>>> On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <<a href="mailto:hypunit@gmail.com" target="_blank">hypunit@gmail.com</a>> wrote:<br>
>> >>>>>><br>
>> >>>>>> Hi Kaushal,<br>
>> >>>>>><br>
>> >>>>>> Please find the logs here :- <a href="http://ur1.ca/iyhs5" target="_blank">http://ur1.ca/iyhs5</a> and<br>
>> >>>>>> <a href="http://ur1.ca/iyhue" target="_blank">http://ur1.ca/iyhue</a><br>
>> >>>>>><br>
>> >>>>>> Thanks,<br>
>> >>>>>> punit<br>
>> >>>>>><br>
>> >>>>>><br>
>> >>>>>> On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <<a href="mailto:kshlmster@gmail.com" target="_blank">kshlmster@gmail.com</a>><br>
>> >>>>>> wrote:<br>
>> >>>>>>><br>
>> >>>>>>> Hey Punit,<br>
>> >>>>>>> Could you start Glusterd in debug mode and provide the logs here?<br>
>> >>>>>>> To start it in debug mode, append '-LDEBUG' to the ExecStart line<br>
>> >>>>>>> in<br>
>> >>>>>>> the service file.<br>
>> >>>>>>><br>
>> >>>>>>> ~kaushal<br>
>> >>>>>>><br>
>> >>>>>>> On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal <<a href="mailto:hypunit@gmail.com" target="_blank">hypunit@gmail.com</a>><br>
>> >>>>>>> wrote:<br>
>> >>>>>>> > Hi,<br>
>> >>>>>>> ><br>
>> >>>>>>> > Can anybody help me on this ??<br>
>> >>>>>>> ><br>
>> >>>>>>> > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal<br>
>> >>>>>>> > <<a href="mailto:hypunit@gmail.com" target="_blank">hypunit@gmail.com</a>><br>
>> >>>>>>> > wrote:<br>
>> >>>>>>> >><br>
>> >>>>>>> >> Hi Kaushal,<br>
>> >>>>>>> >><br>
>> >>>>>>> >> Thanks for the detailed reply....let me explain my setup first<br>
>> >>>>>>> >> :-<br>
>> >>>>>>> >><br>
>> >>>>>>> >> 1. Ovirt Engine<br>
>> >>>>>>> >> 2. 4* host as well as storage machine (Host and gluster<br>
>> >>>>>>> >> combined)<br>
>> >>>>>>> >> 3. Every host has 24 bricks...<br>
>> >>>>>>> >><br>
>> >>>>>>> >> Now whenever the host machine reboots, it comes up but cannot<br>
>> >>>>>>> >> rejoin the cluster, and throws the following error: "Gluster command<br>
>> >>>>>>> >> [<UNKNOWN>]<br>
>> >>>>>>> >> failed on server.."<br>
>> >>>>>>> >><br>
>> >>>>>>> >> Please check my comment in line :-<br>
>> >>>>>>> >><br>
>> >>>>>>> >> 1. Use the same string for doing the peer probe and for the<br>
>> >>>>>>> >> brick<br>
>> >>>>>>> >> address<br>
>> >>>>>>> >> during volume create/add-brick. Ideally, we suggest you use<br>
>> >>>>>>> >> properly<br>
>> >>>>>>> >> resolvable FQDNs everywhere. If that is not possible, then use<br>
>> >>>>>>> >> only<br>
>> >>>>>>> >> IP<br>
>> >>>>>>> >> addresses. Try to avoid short names.<br>
>> >>>>>>> >> ---------------<br>
>> >>>>>>> >> [root@cpu05 ~]# gluster peer status<br>
>> >>>>>>> >> Number of Peers: 3<br>
>> >>>>>>> >><br>
>> >>>>>>> >> Hostname: <a href="http://cpu03.stack.com" target="_blank">cpu03.stack.com</a><br>
>> >>>>>>> >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb<br>
>> >>>>>>> >> State: Peer in Cluster (Connected)<br>
>> >>>>>>> >><br>
>> >>>>>>> >> Hostname: <a href="http://cpu04.stack.com" target="_blank">cpu04.stack.com</a><br>
>> >>>>>>> >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0<br>
>> >>>>>>> >> State: Peer in Cluster (Connected)<br>
>> >>>>>>> >> Other names:<br>
>> >>>>>>> >> 10.10.0.8<br>
>> >>>>>>> >><br>
>> >>>>>>> >> Hostname: <a href="http://cpu02.stack.com" target="_blank">cpu02.stack.com</a><br>
>> >>>>>>> >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25<br>
>> >>>>>>> >> State: Peer in Cluster (Connected)<br>
>> >>>>>>> >> [root@cpu05 ~]#<br>
>> >>>>>>> >> ----------------<br>
>> >>>>>>> >> 2. During boot up, make sure to launch glusterd only after the<br>
>> >>>>>>> >> network is<br>
>> >>>>>>> >> up. This will allow the new peer identification mechanism to do<br>
>> >>>>>>> >> its<br>
>> >>>>>>> >> job correctly.<br>
>> >>>>>>> >> >> I think the service itself is doing the same job....<br>
>> >>>>>>> >><br>
>> >>>>>>> >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service<br>
>> >>>>>>> >> [Unit]<br>
>> >>>>>>> >> Description=GlusterFS, a clustered file-system server<br>
>> >>>>>>> >> After=network.target rpcbind.service<br>
>> >>>>>>> >> Before=network-online.target<br>
>> >>>>>>> >><br>
>> >>>>>>> >> [Service]<br>
>> >>>>>>> >> Type=forking<br>
>> >>>>>>> >> PIDFile=/var/run/glusterd.pid<br>
>> >>>>>>> >> LimitNOFILE=65536<br>
>> >>>>>>> >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid<br>
>> >>>>>>> >> KillMode=process<br>
>> >>>>>>> >><br>
>> >>>>>>> >> [Install]<br>
>> >>>>>>> >> WantedBy=multi-user.target<br>
>> >>>>>>> >> [root@cpu05 ~]#<br>
>> >>>>>>> >> --------------------<br>
>> >>>>>>> >><br>
>> >>>>>>> >> gluster logs :-<br>
>> >>>>>>> >><br>
>> >>>>>>> >> [2014-11-24 09:22:22.147471] I [MSGID: 100030]<br>
>> >>>>>>> >> [glusterfsd.c:2018:main]<br>
>> >>>>>>> >> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd<br>
>> >>>>>>> >> version<br>
>> >>>>>>> >> 3.6.1<br>
>> >>>>>>> >> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)<br>
>> >>>>>>> >> [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init]<br>
>> >>>>>>> >> 0-management:<br>
>> >>>>>>> >> Maximum allowed open file descriptors set to 65536<br>
>> >>>>>>> >> [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init]<br>
>> >>>>>>> >> 0-management:<br>
>> >>>>>>> >> Using<br>
>> >>>>>>> >> /var/lib/glusterd as working directory<br>
>> >>>>>>> >> [2014-11-24 09:22:22.155216] W<br>
>> >>>>>>> >> [rdma.c:4195:__gf_rdma_ctx_create]<br>
>> >>>>>>> >> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No<br>
>> >>>>>>> >> such device)<br>
>> >>>>>>> >> [2014-11-24 09:22:22.155264] E [rdma.c:4483:init]<br>
>> >>>>>>> >> 0-rdma.management:<br>
>> >>>>>>> >> Failed to initialize IB Device<br>
>> >>>>>>> >> [2014-11-24 09:22:22.155285] E<br>
>> >>>>>>> >> [rpc-transport.c:333:rpc_transport_load]<br>
>> >>>>>>> >> 0-rpc-transport: 'rdma' initialization failed<br>
>> >>>>>>> >> [2014-11-24 09:22:22.155354] W<br>
>> >>>>>>> >> [rpcsvc.c:1524:rpcsvc_transport_create]<br>
>> >>>>>>> >> 0-rpc-service: cannot create listener, initing the transport<br>
>> >>>>>>> >> failed<br>
>> >>>>>>> >> [2014-11-24 09:22:22.156290] I<br>
>> >>>>>>> >> [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd:<br>
>> >>>>>>> >> geo-replication<br>
>> >>>>>>> >> module not installed in the system<br>
>> >>>>>>> >> [2014-11-24 09:22:22.161318] I<br>
>> >>>>>>> >> [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd:<br>
>> >>>>>>> >> retrieved<br>
>> >>>>>>> >> op-version: 30600<br>
>> >>>>>>> >> [2014-11-24 09:22:22.821800] I<br>
>> >>>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo]<br>
>> >>>>>>> >> 0-management:<br>
>> >>>>>>> >> connect returned 0<br>
>> >>>>>>> >> [2014-11-24 09:22:22.825810] I<br>
>> >>>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo]<br>
>> >>>>>>> >> 0-management:<br>
>> >>>>>>> >> connect returned 0<br>
>> >>>>>>> >> [2014-11-24 09:22:22.828705] I<br>
>> >>>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo]<br>
>> >>>>>>> >> 0-management:<br>
>> >>>>>>> >> connect returned 0<br>
>> >>>>>>> >> [2014-11-24 09:22:22.828771] I<br>
>> >>>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init]<br>
>> >>>>>>> >> 0-management: setting frame-timeout to 600<br>
>> >>>>>>> >> [2014-11-24 09:22:22.832670] I<br>
>> >>>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init]<br>
>> >>>>>>> >> 0-management: setting frame-timeout to 600<br>
>> >>>>>>> >> [2014-11-24 09:22:22.835919] I<br>
>> >>>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init]<br>
>> >>>>>>> >> 0-management: setting frame-timeout to 600<br>
>> >>>>>>> >> [2014-11-24 09:22:22.840209] E<br>
>> >>>>>>> >> [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd:<br>
>> >>>>>>> >> resolve<br>
>> >>>>>>> >> brick failed in restore<br>
>> >>>>>>> >> [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init]<br>
>> >>>>>>> >> 0-management:<br>
>> >>>>>>> >> Initialization of volume 'management' failed, review your<br>
>> >>>>>>> >> volfile<br>
>> >>>>>>> >> again<br>
>> >>>>>>> >> [2014-11-24 09:22:22.840245] E<br>
>> >>>>>>> >> [graph.c:322:glusterfs_graph_init]<br>
>> >>>>>>> >> 0-management: initializing translator failed<br>
>> >>>>>>> >> [2014-11-24 09:22:22.840264] E<br>
>> >>>>>>> >> [graph.c:525:glusterfs_graph_activate]<br>
>> >>>>>>> >> 0-graph: init failed<br>
>> >>>>>>> >> [2014-11-24 09:22:22.840754] W<br>
>> >>>>>>> >> [glusterfsd.c:1194:cleanup_and_exit]<br>
>> >>>>>>> >> (--><br>
>> >>>>>>> >> 0-: received signum (0), shutting down<br>
>> >>>>>>> >><br>
>> >>>>>>> >> Thanks,<br>
>> >>>>>>> >> Punit<br>
>> >>>>>>> >><br>
>> >>>>>>> >><br>
>> >>>>>>> >><br>
>> >>>>>>> >><br>
>> >>>>>>> >> On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M<br>
>> >>>>>>> >> <<a href="mailto:kshlmster@gmail.com" target="_blank">kshlmster@gmail.com</a>><br>
>> >>>>>>> >> wrote:<br>
>> >>>>>>> >>><br>
>> >>>>>>> >>> Based on the logs I can guess that glusterd is being started<br>
>> >>>>>>> >>> before<br>
>> >>>>>>> >>> the network has come up and that the addresses given to bricks<br>
>> >>>>>>> >>> do<br>
>> >>>>>>> >>> not<br>
>> >>>>>>> >>> directly match the addresses used during peer probe.<br>
>> >>>>>>> >>><br>
>> >>>>>>> >>> The gluster_after_reboot log has the line "[2014-11-25<br>
>> >>>>>>> >>> 06:46:09.972113] E<br>
>> >>>>>>> >>> [glusterd-store.c:2632:glusterd_resolve_all_bricks]<br>
>> >>>>>>> >>> 0-glusterd: resolve brick failed in restore".<br>
>> >>>>>>> >>><br>
>> >>>>>>> >>> Brick resolution fails when glusterd cannot match the address<br>
>> >>>>>>> >>> for<br>
>> >>>>>>> >>> the<br>
>> >>>>>>> >>> brick, with one of the peers. Brick resolution happens in two<br>
>> >>>>>>> >>> phases,<br>
>> >>>>>>> >>> 1. We first try to identify the peer by performing string<br>
>> >>>>>>> >>> comparisons<br>
>> >>>>>>> >>> with the brick address and the peer addresses (The peer names<br>
>> >>>>>>> >>> will<br>
>> >>>>>>> >>> be<br>
>> >>>>>>> >>> the names/addresses that were given when the peer was probed).<br>
>> >>>>>>> >>> 2. If we don't find a match from step 1, we will then resolve<br>
>> >>>>>>> >>> all<br>
>> >>>>>>> >>> the<br>
>> >>>>>>> >>> brick address and the peer addresses into addrinfo structs,<br>
>> >>>>>>> >>> and<br>
>> >>>>>>> >>> then<br>
>> >>>>>>> >>> compare these structs to find a match. This process should<br>
>> >>>>>>> >>> generally<br>
>> >>>>>>> >>> find a match if available. This will fail only if the network<br>
>> >>>>>>> >>> is<br>
>> >>>>>>> >>> not<br>
>> >>>>>>> >>> up yet as we cannot resolve addresses.<br>
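The two-phase matching described above can be sketched roughly as follows (a simplified Python illustration, not GlusterD's actual C implementation; the peer data layout here is invented for the example):

```python
import socket

def resolve_brick(brick_host, peers):
    """Return the UUID of the peer that hosts a brick, or None.

    Illustrates the two-phase matching described above; `peers` is a
    hypothetical list of dicts like {"uuid": ..., "names": [...]}.
    """
    # Phase 1: plain string comparison against every name each peer is known by.
    for peer in peers:
        if brick_host in peer["names"]:
            return peer["uuid"]
    # Phase 2: resolve the brick host and all peer names to addresses and
    # compare those. This fails if name resolution is unavailable, e.g. when
    # the network is not yet up during early boot.
    try:
        brick_addrs = {info[4][0] for info in socket.getaddrinfo(brick_host, None)}
    except socket.gaierror:
        return None  # cannot resolve: brick resolution fails, as in the logs
    for peer in peers:
        for name in peer["names"]:
            try:
                peer_addrs = {info[4][0] for info in socket.getaddrinfo(name, None)}
            except socket.gaierror:
                continue
            if brick_addrs & peer_addrs:
                return peer["uuid"]
    return None
```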
>> >>>>>>> >>><br>
>> >>>>>>> >>> The above steps are applicable only to glusterfs versions<br>
>> >>>>>>> >>> >=3.6.<br>
>> >>>>>>> >>> They<br>
>> >>>>>>> >>> were introduced to reduce problems with peer identification,<br>
>> >>>>>>> >>> like<br>
>> >>>>>>> >>> the<br>
>> >>>>>>> >>> one you encountered.<br>
>> >>>>>>> >>><br>
>> >>>>>>> >>> Since both of the steps failed to find a match in one run, but<br>
>> >>>>>>> >>> succeeded later, we can come to the conclusion that,<br>
>> >>>>>>> >>> a) the bricks don't have the exact same string used in peer<br>
>> >>>>>>> >>> probe<br>
>> >>>>>>> >>> for<br>
>> >>>>>>> >>> their addresses as step 1 failed, and<br>
>> >>>>>>> >>> b) the network was not up in the initial run, as step 2 failed<br>
>> >>>>>>> >>> during<br>
>> >>>>>>> >>> the initial run, but passed in the second run.<br>
>> >>>>>>> >>><br>
>> >>>>>>> >>> Please let me know if my conclusion is correct.<br>
>> >>>>>>> >>><br>
>> >>>>>>> >>> If it is, you can solve your problem in two ways.<br>
>> >>>>>>> >>> 1. Use the same string for doing the peer probe and for the<br>
>> >>>>>>> >>> brick<br>
>> >>>>>>> >>> address during volume create/add-brick. Ideally, we suggest<br>
>> >>>>>>> >>> you<br>
>> >>>>>>> >>> use<br>
>> >>>>>>> >>> properly resolvable FQDNs everywhere. If that is not possible,<br>
>> >>>>>>> >>> then<br>
>> >>>>>>> >>> use only IP addresses. Try to avoid short names.<br>
>> >>>>>>> >>> 2. During boot up, make sure to launch glusterd only after the<br>
>> >>>>>>> >>> network<br>
>> >>>>>>> >>> is up. This will allow the new peer identification mechanism<br>
>> >>>>>>> >>> to do<br>
>> >>>>>>> >>> its<br>
>> >>>>>>> >>> job correctly.<br>
>> >>>>>>> >>><br>
>> >>>>>>> >>><br>
>> >>>>>>> >>> If you have already followed these steps and yet still hit the<br>
>> >>>>>>> >>> problem, then please provide more information (setup, logs,<br>
>> >>>>>>> >>> etc.).<br>
>> >>>>>>> >>> It<br>
>> >>>>>>> >>> could be a much different problem that you are facing.<br>
>> >>>>>>> >>><br>
>> >>>>>>> >>> ~kaushal<br>
>> >>>>>>> >>><br>
>> >>>>>>> >>> On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal<br>
>> >>>>>>> >>> <<a href="mailto:hypunit@gmail.com" target="_blank">hypunit@gmail.com</a>><br>
>> >>>>>>> >>> wrote:<br>
>> >>>>>>> >>> > Is there anyone who can help on this ??<br>
>> >>>>>>> >>> ><br>
>> >>>>>>> >>> > Thanks,<br>
>> >>>>>>> >>> > punit<br>
>> >>>>>>> >>> ><br>
>> >>>>>>> >>> > On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal<br>
>> >>>>>>> >>> > <<a href="mailto:hypunit@gmail.com" target="_blank">hypunit@gmail.com</a>><br>
>> >>>>>>> >>> > wrote:<br>
>> >>>>>>> >>> >><br>
>> >>>>>>> >>> >> Hi,<br>
>> >>>>>>> >>> >><br>
>> >>>>>>> >>> >> My Glusterfs version is :- glusterfs-3.6.1-1.el7<br>
>> >>>>>>> >>> >><br>
>> >>>>>>> >>> >> On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy<br>
>> >>>>>>> >>> >> <<a href="mailto:kmayilsa@redhat.com" target="_blank">kmayilsa@redhat.com</a>><br>
>> >>>>>>> >>> >> wrote:<br>
>> >>>>>>> >>> >>><br>
>> >>>>>>> >>> >>> [+<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>]<br>
>> >>>>>>> >>> >>><br>
>> >>>>>>> >>> >>> "Initialization of volume 'management' failed, review your<br>
>> >>>>>>> >>> >>> volfile<br>
>> >>>>>>> >>> >>> again", glusterd throws this error when the service is<br>
>> >>>>>>> >>> >>> started<br>
>> >>>>>>> >>> >>> automatically<br>
>> >>>>>>> >>> >>> after the reboot. But the service is successfully started<br>
>> >>>>>>> >>> >>> later<br>
>> >>>>>>> >>> >>> manually by<br>
>> >>>>>>> >>> >>> the user.<br>
>> >>>>>>> >>> >>><br>
>> >>>>>>> >>> >>> can somebody from gluster-users please help on this?<br>
>> >>>>>>> >>> >>><br>
>> >>>>>>> >>> >>> glusterfs version: 3.5.1<br>
>> >>>>>>> >>> >>><br>
>> >>>>>>> >>> >>> Thanks,<br>
>> >>>>>>> >>> >>> Kanagaraj<br>
>> >>>>>>> >>> >>><br>
>> >>>>>>> >>> >>> ----- Original Message -----<br>
>> >>>>>>> >>> >>> > From: "Punit Dambiwal" <<a href="mailto:hypunit@gmail.com" target="_blank">hypunit@gmail.com</a>><br>
>> >>>>>>> >>> >>> > To: "Kanagaraj" <<a href="mailto:kmayilsa@redhat.com" target="_blank">kmayilsa@redhat.com</a>><br>
>> >>>>>>> >>> >>> > Cc: <a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a><br>
>> >>>>>>> >>> >>> > Sent: Tuesday, November 25, 2014 7:24:45 PM<br>
>> >>>>>>> >>> >>> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>]<br>
>> >>>>>>> >>> >>> > failed on<br>
>> >>>>>>> >>> >>> > server...<br>
>> >>>>>>> >>> >>> ><br>
>> >>>>>>> >>> >>> > Hi Kanagraj,<br>
>> >>>>>>> >>> >>> ><br>
>> >>>>>>> >>> >>> > Please check the attached log files....I didn't find anything<br>
>> >>>>>>> >>> >>> > special....<br>
>> >>>>>>> >>> >>> ><br>
>> >>>>>>> >>> >>> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj<br>
>> >>>>>>> >>> >>> > <<a href="mailto:kmayilsa@redhat.com" target="_blank">kmayilsa@redhat.com</a>><br>
>> >>>>>>> >>> >>> > wrote:<br>
>> >>>>>>> >>> >>> ><br>
>> >>>>>>> >>> >>> > > Do you see any errors in<br>
>> >>>>>>> >>> >>> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or<br>
>> >>>>>>> >>> >>> > > vdsm.log<br>
>> >>>>>>> >>> >>> > > when<br>
>> >>>>>>> >>> >>> > > the<br>
>> >>>>>>> >>> >>> > > service is trying to start automatically after the<br>
>> >>>>>>> >>> >>> > > reboot?<br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> > > Thanks,<br>
>> >>>>>>> >>> >>> > > Kanagaraj<br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote:<br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> > > Hi Kanagaraj,<br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> > > Yes, once I start the gluster service and then vdsmd, the<br>
>> >>>>>>> >>> >>> > > host can connect to the cluster. But the question is why it's<br>
>> >>>>>>> >>> >>> > > not started even though it has chkconfig enabled...<br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> > > I have tested it in a two-host cluster environment (CentOS 6.6<br>
>> >>>>>>> >>> >>> > > and CentOS 7.0); on both hypervisor clusters it failed to<br>
>> >>>>>>> >>> >>> > > reconnect to the cluster after reboot....<br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> > > In both environments glusterd is enabled for the next boot, but<br>
>> >>>>>>> >>> >>> > > it fails with the same error. It seems to be a bug in either<br>
>> >>>>>>> >>> >>> > > gluster or oVirt ??<br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> > > Please help me find a workaround if this can not be resolved,<br>
>> >>>>>>> >>> >>> > > as without it the host machine can not connect after reboot. That<br>
>> >>>>>>> >>> >>> > > means the engine will consider it down, and every time the gluster<br>
>> >>>>>>> >>> >>> > > service and vdsmd need to be started manually... ??<br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> > > Thanks,<br>
>> >>>>>>> >>> >>> > > Punit<br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj<br>
>> >>>>>>> >>> >>> > > <<a href="mailto:kmayilsa@redhat.com" target="_blank">kmayilsa@redhat.com</a>><br>
>> >>>>>>> >>> >>> > > wrote:<br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> > >> From vdsm.log "error: Connection failed. Please<br>
>> >>>>>>> >>> >>> > >> check if<br>
>> >>>>>>> >>> >>> > >> gluster<br>
>> >>>>>>> >>> >>> > >> daemon<br>
>> >>>>>>> >>> >>> > >> is operational."<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> Starting glusterd service should fix this issue.<br>
>> >>>>>>> >>> >>> > >> 'service<br>
>> >>>>>>> >>> >>> > >> glusterd<br>
>> >>>>>>> >>> >>> > >> start'<br>
>> >>>>>>> >>> >>> > >> But I am wondering why glusterd was not started<br>
>> >>>>>>> >>> >>> > >> automatically<br>
>> >>>>>>> >>> >>> > >> after<br>
>> >>>>>>> >>> >>> > >> the reboot.<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> Thanks,<br>
>> >>>>>>> >>> >>> > >> Kanagaraj<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote:<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> Hi Kanagaraj,<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> Please find the attached VDSM logs :-<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> ----------------<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)<br>
>> >>>>>>> >>> >>> > >> Owner.cancelAll requests {}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref)<br>
>> >>>>>>> >>> >>> > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0<br>
>> >>>>>>> >>> >>> > >> aborting<br>
>> >>>>>>> >>> >>> > >> False<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState)<br>
>> >>>>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving<br>
>> >>>>>>> >>> >>> > >> from<br>
>> >>>>>>> >>> >>> > >> state<br>
>> >>>>>>> >>> >>> > >> init<br>
>> >>>>>>> >>> >>> > >> -><br>
>> >>>>>>> >>> >>> > >> state preparing<br>
>> >>>>>>> >>> >>> > >> Thread-13::INFO::2014-11-24<br>
>> >>>>>>> >>> >>> > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run<br>
>> >>>>>>> >>> >>> > >> and<br>
>> >>>>>>> >>> >>> > >> protect:<br>
>> >>>>>>> >>> >>> > >> repoStats(options=None)<br>
>> >>>>>>> >>> >>> > >> Thread-13::INFO::2014-11-24<br>
>> >>>>>>> >>> >>> > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run<br>
>> >>>>>>> >>> >>> > >> and<br>
>> >>>>>>> >>> >>> > >> protect:<br>
>> >>>>>>> >>> >>> > >> repoStats, Return response: {}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare)<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState)<br>
>> >>>>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving<br>
>> >>>>>>> >>> >>> > >> from<br>
>> >>>>>>> >>> >>> > >> state<br>
>> >>>>>>> >>> >>> > >> preparing<br>
>> >>>>>>> >>> >>> > >> -><br>
>> >>>>>>> >>> >>> > >> state finished<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)<br>
>> >>>>>>> >>> >>> > >> Owner.releaseAll requests {} resources {}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)<br>
>> >>>>>>> >>> >>> > >> Owner.cancelAll requests {}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref)<br>
>> >>>>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0<br>
>> >>>>>>> >>> >>> > >> aborting<br>
>> >>>>>>> >>> >>> > >> False<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
>> >>>>>>> >>> >>> > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper)<br>
>> >>>>>>> >>> >>> > >> client<br>
>> >>>>>>> >>> >>> > >> [10.10.10.2]::call<br>
>> >>>>>>> >>> >>> > >> getCapabilities with () {}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
>> >>>>>>> >>> >>> > >> 21:41:41,553::utils::738::root::(execCmd)<br>
>> >>>>>>> >>> >>> > >> /sbin/ip route show to <a href="http://0.0.0.0/0" target="_blank">0.0.0.0/0</a> table all (cwd None)<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
>> >>>>>>> >>> >>> > >> 21:41:41,560::utils::758::root::(execCmd)<br>
>> >>>>>>> >>> >>> > >> SUCCESS: <err> = ''; <rc> = 0<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
>> >>>>>>> >>> >>> > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper)<br>
>> >>>>>>> >>> >>> > >> return<br>
>> >>>>>>> >>> >>> > >> getCapabilities<br>
>> >>>>>>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0},<br>
>> >>>>>>> >>> >>> > >> 'info':<br>
>> >>>>>>> >>> >>> > >> {'HBAInventory':<br>
>> >>>>>>> >>> >>> > >> {'iSCSI': [{'InitiatorName':<br>
>> >>>>>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8'}],<br>
>> >>>>>>> >>> >>> > >> 'FC':<br>
>> >>>>>>> >>> >>> > >> []}, 'packages2': {'kernel': {'release':<br>
>> >>>>>>> >>> >>> > >> '431.el6.x86_64',<br>
>> >>>>>>> >>> >>> > >> 'buildtime':<br>
>> >>>>>>> >>> >>> > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma':<br>
>> >>>>>>> >>> >>> > >> {'release':<br>
>> >>>>>>> >>> >>> > >> '1.el6',<br>
>> >>>>>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'},<br>
>> >>>>>>> >>> >>> > >> 'glusterfs-fuse':<br>
>> >>>>>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L,<br>
>> >>>>>>> >>> >>> > >> 'version':<br>
>> >>>>>>> >>> >>> > >> '3.5.1'},<br>
>> >>>>>>> >>> >>> > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime':<br>
>> >>>>>>> >>> >>> > >> 1402324637L,<br>
>> >>>>>>> >>> >>> > >> 'version': '0.12.4'}, 'vdsm': {'release':<br>
>> >>>>>>> >>> >>> > >> '1.gitdb83943.el6',<br>
>> >>>>>>> >>> >>> > >> 'buildtime':<br>
>> >>>>>>> >>> >>> > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm':<br>
>> >>>>>>> >>> >>> > >> {'release':<br>
>> >>>>>>> >>> >>> > >> '2.415.el6_5.10', 'buildtime': 1402435700L,<br>
>> >>>>>>> >>> >>> > >> 'version':<br>
>> >>>>>>> >>> >>> > >> '0.12.1.2'},<br>
>> >>>>>>> >>> >>> > >> 'qemu-img': {'release': '2.415.el6_5.10',<br>
>> >>>>>>> >>> >>> > >> 'buildtime':<br>
>> >>>>>>> >>> >>> > >> 1402435700L,<br>
>> >>>>>>> >>> >>> > >> 'version': '0.12.1.2'}, 'libvirt': {'release':<br>
>> >>>>>>> >>> >>> > >> '29.el6_5.9',<br>
>> >>>>>>> >>> >>> > >> 'buildtime':<br>
>> >>>>>>> >>> >>> > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs':<br>
>> >>>>>>> >>> >>> > >> {'release':<br>
>> >>>>>>> >>> >>> > >> '1.el6',<br>
>> >>>>>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom':<br>
>> >>>>>>> >>> >>> > >> {'release':<br>
>> >>>>>>> >>> >>> > >> '2.el6',<br>
>> >>>>>>> >>> >>> > >> 'buildtime': 1403794344L, 'version': '0.4.1'},<br>
>> >>>>>>> >>> >>> > >> 'glusterfs-server':<br>
>> >>>>>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L,<br>
>> >>>>>>> >>> >>> > >> 'version':<br>
>> >>>>>>> >>> >>> > >> '3.5.1'}},<br>
>> >>>>>>> >>> >>> > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]},<br>
>> >>>>>>> >>> >>> > >> 'cpuModel':<br>
>> >>>>>>> >>> >>> > >> 'Intel(R)<br>
>> >>>>>>> >>> >>> > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge':<br>
>> >>>>>>> >>> >>> > >> 'false',<br>
>> >>>>>>> >>> >>> > >> 'hooks':<br>
>> >>>>>>> >>> >>> > >> {},<br>
>> >>>>>>> >>> >>> > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux':<br>
>> >>>>>>> >>> >>> > >> {'mode': '1'},<br>
>> >>>>>>> >>> >>> > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2',<br>
>> >>>>>>> >>> >>> > >> '2.3'],<br>
>> >>>>>>> >>> >>> > >> 'networks':<br>
>> >>>>>>> >>> >>> > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr':<br>
>> >>>>>>> >>> >>> > >> '43.252.176.16',<br>
>> >>>>>>> >>> >>> > >> 'bridged':<br>
>> >>>>>>> >>> >>> > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'],<br>
>> >>>>>>> >>> >>> > >> 'mtu':<br>
>> >>>>>>> >>> >>> > >> '1500',<br>
>> >>>>>>> >>> >>> > >> 'bootproto4': 'none', 'netmask': '255.255.255.0',<br>
>> >>>>>>> >>> >>> > >> 'ipv4addrs':<br>
>> >>>>>>> >>> >>> > >> ['<br>
>> >>>>>>> >>> >>> > >> <a href="http://43.252.176.16/24" target="_blank">43.252.176.16/24</a>'],<br>
>> >>>>>>> >>> >>> > >> 'interface':<br>
>> >>>>>>> >>> >>> > >> u'bond0.10', 'ipv6gateway': '::', 'gateway':<br>
>> >>>>>>> >>> >>> > >> '43.25.17.1'},<br>
>> >>>>>>> >>> >>> > >> 'Internal':<br>
>> >>>>>>> >>> >>> > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE':<br>
>> >>>>>>> >>> >>> > >> 'no',<br>
>> >>>>>>> >>> >>> > >> 'HOTPLUG':<br>
>> >>>>>>> >>> >>> > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED':<br>
>> >>>>>>> >>> >>> > >> 'no',<br>
>> >>>>>>> >>> >>> > >> 'BOOTPROTO':<br>
>> >>>>>>> >>> >>> > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE':<br>
>> >>>>>>> >>> >>> > >> 'Bridge',<br>
>> >>>>>>> >>> >>> > >> 'ONBOOT':<br>
>> >>>>>>> >>> >>> > >> 'no'}, 'bridged': True, 'ipv6addrs':<br>
>> >>>>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'],<br>
>> >>>>>>> >>> >>> > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '',<br>
>> >>>>>>> >>> >>> > >> 'stp':<br>
>> >>>>>>> >>> >>> > >> 'off',<br>
>> >>>>>>> >>> >>> > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::',<br>
>> >>>>>>> >>> >>> > >> 'ports':<br>
>> >>>>>>> >>> >>> > >> ['bond1.100']}, 'storage': {'iface': u'bond1',<br>
>> >>>>>>> >>> >>> > >> 'addr':<br>
>> >>>>>>> >>> >>> > >> '10.10.10.6',<br>
>> >>>>>>> >>> >>> > >> 'bridged': False, 'ipv6addrs':<br>
>> >>>>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'],<br>
>> >>>>>>> >>> >>> > >> 'mtu':<br>
>> >>>>>>> >>> >>> > >> '9000', 'bootproto4': 'none', 'netmask':<br>
>> >>>>>>> >>> >>> > >> '255.255.255.0',<br>
>> >>>>>>> >>> >>> > >> 'ipv4addrs': ['<br>
>> >>>>>>> >>> >>> > >> <a href="http://10.10.10.6/24" target="_blank">10.10.10.6/24</a>'],<br>
>> >>>>>>> >>> >>> > >> 'interface':<br>
>> >>>>>>> >>> >>> > >> u'bond1',<br>
>> >>>>>>> >>> >>> > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork':<br>
>> >>>>>>> >>> >>> > >> {'iface':<br>
>> >>>>>>> >>> >>> > >> 'VMNetwork',<br>
>> >>>>>>> >>> >>> > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG':<br>
>> >>>>>>> >>> >>> > >> 'no',<br>
>> >>>>>>> >>> >>> > >> 'MTU':<br>
>> >>>>>>> >>> >>> > >> '1500',<br>
>> >>>>>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO':<br>
>> >>>>>>> >>> >>> > >> 'none',<br>
>> >>>>>>> >>> >>> > >> 'STP':<br>
>> >>>>>>> >>> >>> > >> 'off',<br>
>> >>>>>>> >>> >>> > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT':<br>
>> >>>>>>> >>> >>> > >> 'no'},<br>
>> >>>>>>> >>> >>> > >> 'bridged':<br>
>> >>>>>>> >>> >>> > >> True,<br>
>> >>>>>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'],<br>
>> >>>>>>> >>> >>> > >> 'gateway':<br>
>> >>>>>>> >>> >>> > >> '',<br>
>> >>>>>>> >>> >>> > >> 'bootproto4':<br>
>> >>>>>>> >>> >>> > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [],<br>
>> >>>>>>> >>> >>> > >> 'mtu':<br>
>> >>>>>>> >>> >>> > >> '1500',<br>
>> >>>>>>> >>> >>> > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}},<br>
>> >>>>>>> >>> >>> > >> 'bridges':<br>
>> >>>>>>> >>> >>> > >> {'Internal':<br>
>> >>>>>>> >>> >>> > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG':<br>
>> >>>>>>> >>> >>> > >> 'no',<br>
>> >>>>>>> >>> >>> > >> 'MTU':<br>
>> >>>>>>> >>> >>> > >> '9000',<br>
>> >>>>>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO':<br>
>> >>>>>>> >>> >>> > >> 'none',<br>
>> >>>>>>> >>> >>> > >> 'STP':<br>
>> >>>>>>> >>> >>> > >> 'off',<br>
>> >>>>>>> >>> >>> > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT':<br>
>> >>>>>>> >>> >>> > >> 'no'},<br>
>> >>>>>>> >>> >>> > >> 'ipv6addrs':<br>
>> >>>>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000',<br>
>> >>>>>>> >>> >>> > >> 'netmask': '',<br>
>> >>>>>>> >>> >>> > >> 'stp':<br>
>> >>>>>>> >>> >>> > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::',<br>
>> >>>>>>> >>> >>> > >> 'gateway':<br>
>> >>>>>>> >>> >>> > >> '',<br>
>> >>>>>>> >>> >>> > >> 'opts':<br>
>> >>>>>>> >>> >>> > >> {'topology_change_detected': '0',<br>
>> >>>>>>> >>> >>> > >> 'multicast_last_member_count':<br>
>> >>>>>>> >>> >>> > >> '2',<br>
>> >>>>>>> >>> >>> > >> 'hash_elasticity': '4',<br>
>> >>>>>>> >>> >>> > >> 'multicast_query_response_interval':<br>
>> >>>>>>> >>> >>> > >> '999',<br>
>> >>>>>>> >>> >>> > >> 'multicast_snooping': '1',<br>
>> >>>>>>> >>> >>> > >> 'multicast_startup_query_interval':<br>
>> >>>>>>> >>> >>> > >> '3124',<br>
>> >>>>>>> >>> >>> > >> 'hello_timer': '31', 'multicast_querier_interval':<br>
>> >>>>>>> >>> >>> > >> '25496',<br>
>> >>>>>>> >>> >>> > >> 'max_age':<br>
>> >>>>>>> >>> >>> > >> '1999', 'hash_max': '512', 'stp_state': '0',<br>
>> >>>>>>> >>> >>> > >> 'root_id':<br>
>> >>>>>>> >>> >>> > >> '8000.001018cddaac', 'priority': '32768',<br>
>> >>>>>>> >>> >>> > >> 'multicast_membership_interval':<br>
>> >>>>>>> >>> >>> > >> '25996', 'root_path_cost': '0', 'root_port': '0',<br>
>> >>>>>>> >>> >>> > >> 'multicast_querier':<br>
>> >>>>>>> >>> >>> > >> '0',<br>
>> >>>>>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time':<br>
>> >>>>>>> >>> >>> > >> '199',<br>
>> >>>>>>> >>> >>> > >> 'topology_change': '0', 'bridge_id':<br>
>> >>>>>>> >>> >>> > >> '8000.001018cddaac',<br>
>> >>>>>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995',<br>
>> >>>>>>> >>> >>> > >> 'gc_timer':<br>
>> >>>>>>> >>> >>> > >> '31',<br>
>> >>>>>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0',<br>
>> >>>>>>> >>> >>> > >> 'multicast_query_interval': '12498',<br>
>> >>>>>>> >>> >>> > >> 'multicast_last_member_interval':<br>
>> >>>>>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'},<br>
>> >>>>>>> >>> >>> > >> 'ports':<br>
>> >>>>>>> >>> >>> > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg':<br>
>> >>>>>>> >>> >>> > >> {'DEFROUTE':<br>
>> >>>>>>> >>> >>> > >> 'no',<br>
>> >>>>>>> >>> >>> > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0',<br>
>> >>>>>>> >>> >>> > >> 'NM_CONTROLLED':<br>
>> >>>>>>> >>> >>> > >> 'no',<br>
>> >>>>>>> >>> >>> > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE':<br>
>> >>>>>>> >>> >>> > >> 'VMNetwork',<br>
>> >>>>>>> >>> >>> > >> 'TYPE':<br>
>> >>>>>>> >>> >>> > >> 'Bridge',<br>
>> >>>>>>> >>> >>> > >> 'ONBOOT': 'no'}, 'ipv6addrs':<br>
>> >>>>>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'],<br>
>> >>>>>>> >>> >>> > >> 'mtu':<br>
>> >>>>>>> >>> >>> > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [],<br>
>> >>>>>>> >>> >>> > >> 'ipv6gateway':<br>
>> >>>>>>> >>> >>> > >> '::',<br>
>> >>>>>>> >>> >>> > >> 'gateway': '', 'opts': {'topology_change_detected':<br>
>> >>>>>>> >>> >>> > >> '0',<br>
>> >>>>>>> >>> >>> > >> 'multicast_last_member_count': '2',<br>
>> >>>>>>> >>> >>> > >> 'hash_elasticity':<br>
>> >>>>>>> >>> >>> > >> '4',<br>
>> >>>>>>> >>> >>> > >> 'multicast_query_response_interval': '999',<br>
>> >>>>>>> >>> >>> > >> 'multicast_snooping':<br>
>> >>>>>>> >>> >>> > >> '1',<br>
>> >>>>>>> >>> >>> > >> 'multicast_startup_query_interval': '3124',<br>
>> >>>>>>> >>> >>> > >> 'hello_timer':<br>
>> >>>>>>> >>> >>> > >> '131',<br>
>> >>>>>>> >>> >>> > >> 'multicast_querier_interval': '25496', 'max_age':<br>
>> >>>>>>> >>> >>> > >> '1999',<br>
>> >>>>>>> >>> >>> > >> 'hash_max':<br>
>> >>>>>>> >>> >>> > >> '512', 'stp_state': '0', 'root_id':<br>
>> >>>>>>> >>> >>> > >> '8000.60eb6920b46c',<br>
>> >>>>>>> >>> >>> > >> 'priority':<br>
>> >>>>>>> >>> >>> > >> '32768', 'multicast_membership_interval': '25996',<br>
>> >>>>>>> >>> >>> > >> 'root_path_cost':<br>
>> >>>>>>> >>> >>> > >> '0',<br>
>> >>>>>>> >>> >>> > >> 'root_port': '0', 'multicast_querier': '0',<br>
>> >>>>>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time':<br>
>> >>>>>>> >>> >>> > >> '199',<br>
>> >>>>>>> >>> >>> > >> 'topology_change': '0', 'bridge_id':<br>
>> >>>>>>> >>> >>> > >> '8000.60eb6920b46c',<br>
>> >>>>>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995',<br>
>> >>>>>>> >>> >>> > >> 'gc_timer':<br>
>> >>>>>>> >>> >>> > >> '31',<br>
>> >>>>>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0',<br>
>> >>>>>>> >>> >>> > >> 'multicast_query_interval': '12498',<br>
>> >>>>>>> >>> >>> > >> 'multicast_last_member_interval':<br>
>> >>>>>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'},<br>
>> >>>>>>> >>> >>> > >> 'ports':<br>
>> >>>>>>> >>> >>> > >> ['bond0.36']}}, 'uuid':<br>
>> >>>>>>> >>> >>> > >> '44454C4C-4C00-1057-8053-B7C04F504E31',<br>
>> >>>>>>> >>> >>> > >> 'lastClientIface': 'bond1', 'nics': {'eth3':<br>
>> >>>>>>> >>> >>> > >> {'permhwaddr':<br>
>> >>>>>>> >>> >>> > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE':<br>
>> >>>>>>> >>> >>> > >> 'yes',<br>
>> >>>>>>> >>> >>> > >> 'NM_CONTROLLED':<br>
>> >>>>>>> >>> >>> > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae',<br>
>> >>>>>>> >>> >>> > >> 'MASTER':<br>
>> >>>>>>> >>> >>> > >> 'bond1',<br>
>> >>>>>>> >>> >>> > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [],<br>
>> >>>>>>> >>> >>> > >> 'mtu':<br>
>> >>>>>>> >>> >>> > >> '9000',<br>
>> >>>>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr':<br>
>> >>>>>>> >>> >>> > >> '00:10:18:cd:da:ac',<br>
>> >>>>>>> >>> >>> > >> 'speed':<br>
>> >>>>>>> >>> >>> > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac',<br>
>> >>>>>>> >>> >>> > >> 'addr': '',<br>
>> >>>>>>> >>> >>> > >> 'cfg':<br>
>> >>>>>>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU':<br>
>> >>>>>>> >>> >>> > >> '9000',<br>
>> >>>>>>> >>> >>> > >> 'HWADDR':<br>
>> >>>>>>> >>> >>> > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE':<br>
>> >>>>>>> >>> >>> > >> 'eth2',<br>
>> >>>>>>> >>> >>> > >> 'ONBOOT':<br>
>> >>>>>>> >>> >>> > >> 'no'},<br>
>> >>>>>>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '',<br>
>> >>>>>>> >>> >>> > >> 'ipv4addrs': [],<br>
>> >>>>>>> >>> >>> > >> 'hwaddr':<br>
>> >>>>>>> >>> >>> > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1':<br>
>> >>>>>>> >>> >>> > >> {'permhwaddr':<br>
>> >>>>>>> >>> >>> > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE':<br>
>> >>>>>>> >>> >>> > >> 'yes',<br>
>> >>>>>>> >>> >>> > >> 'NM_CONTROLLED':<br>
>> >>>>>>> >>> >>> > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d',<br>
>> >>>>>>> >>> >>> > >> 'MASTER':<br>
>> >>>>>>> >>> >>> > >> 'bond0',<br>
>> >>>>>>> >>> >>> > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [],<br>
>> >>>>>>> >>> >>> > >> 'mtu':<br>
>> >>>>>>> >>> >>> > >> '1500',<br>
>> >>>>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr':<br>
>> >>>>>>> >>> >>> > >> '60:eb:69:20:b4:6c',<br>
>> >>>>>>> >>> >>> > >> 'speed':<br>
>> >>>>>>> >>> >>> > >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c',<br>
>> >>>>>>> >>> >>> > >> 'addr': '',<br>
>> >>>>>>> >>> >>> > >> 'cfg':<br>
>> >>>>>>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU':<br>
>> >>>>>>> >>> >>> > >> '1500',<br>
>> >>>>>>> >>> >>> > >> 'HWADDR':<br>
>> >>>>>>> >>> >>> > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE':<br>
>> >>>>>>> >>> >>> > >> 'eth0',<br>
>> >>>>>>> >>> >>> > >> 'ONBOOT':<br>
>> >>>>>>> >>> >>> > >> 'yes'},<br>
>> >>>>>>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '',<br>
>> >>>>>>> >>> >>> > >> 'ipv4addrs': [],<br>
>> >>>>>>> >>> >>> > >> 'hwaddr':<br>
>> >>>>>>> >>> >>> > >> '60:eb:69:20:b4:6c', 'speed': 1000}},<br>
>> >>>>>>> >>> >>> > >> 'software_revision': '1',<br>
>> >>>>>>> >>> >>> > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4',<br>
>> >>>>>>> >>> >>> > >> '3.5'],<br>
>> >>>>>>> >>> >>> > >> 'cpuFlags':<br>
>> >>>>>>> >>> >>> > >> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270',<br>
>> >>>>>>> >>> >>> > >> 'ISCSIInitiatorName':<br>
>> >>>>>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8',<br>
>> >>>>>>> >>> >>> > >> 'netConfigDirty': 'False', 'supportedENGINEs':<br>
>> >>>>>>> >>> >>> > >> ['3.0',<br>
>> >>>>>>> >>> >>> > >> '3.1',<br>
>> >>>>>>> >>> >>> > >> '3.2',<br>
>> >>>>>>> >>> >>> > >> '3.3',<br>
>> >>>>>>> >>> >>> > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem':<br>
>> >>>>>>> >>> >>> > >> '321',<br>
>> >>>>>>> >>> >>> > >> 'bondings':<br>
>> >>>>>>> >>> >>> > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500',<br>
>> >>>>>>> >>> >>> > >> 'netmask': '',<br>
>> >>>>>>> >>> >>> > >> 'slaves':<br>
>> >>>>>>> >>> >>> > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr':<br>
>> >>>>>>> >>> >>> > >> '',<br>
>> >>>>>>> >>> >>> > >> 'cfg':<br>
>> >>>>>>> >>> >>> > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED':<br>
>> >>>>>>> >>> >>> > >> 'no',<br>
>> >>>>>>> >>> >>> > >> 'BONDING_OPTS':<br>
>> >>>>>>> >>> >>> > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT':<br>
>> >>>>>>> >>> >>> > >> 'yes'},<br>
>> >>>>>>> >>> >>> > >> 'ipv6addrs':<br>
>> >>>>>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500',<br>
>> >>>>>>> >>> >>> > >> 'netmask': '',<br>
>> >>>>>>> >>> >>> > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c',<br>
>> >>>>>>> >>> >>> > >> 'slaves':<br>
>> >>>>>>> >>> >>> > >> ['eth0',<br>
>> >>>>>>> >>> >>> > >> 'eth1'],<br>
>> >>>>>>> >>> >>> > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1':<br>
>> >>>>>>> >>> >>> > >> {'addr':<br>
>> >>>>>>> >>> >>> > >> '10.10.10.6',<br>
>> >>>>>>> >>> >>> > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6',<br>
>> >>>>>>> >>> >>> > >> 'HOTPLUG':<br>
>> >>>>>>> >>> >>> > >> 'no',<br>
>> >>>>>>> >>> >>> > >> 'MTU':<br>
>> >>>>>>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK':<br>
>> >>>>>>> >>> >>> > >> '255.255.255.0',<br>
>> >>>>>>> >>> >>> > >> 'BOOTPROTO':<br>
>> >>>>>>> >>> >>> > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100',<br>
>> >>>>>>> >>> >>> > >> 'DEVICE':<br>
>> >>>>>>> >>> >>> > >> 'bond1',<br>
>> >>>>>>> >>> >>> > >> 'ONBOOT':<br>
>> >>>>>>> >>> >>> > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'],<br>
>> >>>>>>> >>> >>> > >> 'mtu':<br>
>> >>>>>>> >>> >>> > >> '9000',<br>
>> >>>>>>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs':<br>
>> >>>>>>> >>> >>> > >> ['<a href="http://10.10.10.6/24" target="_blank">10.10.10.6/24</a>'], 'hwaddr':<br>
>> >>>>>>> >>> >>> > >> '00:10:18:cd:da:ac',<br>
>> >>>>>>> >>> >>> > >> 'slaves':<br>
>> >>>>>>> >>> >>> > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode':<br>
>> >>>>>>> >>> >>> > >> '4'}},<br>
>> >>>>>>> >>> >>> > >> 'bond2':<br>
>> >>>>>>> >>> >>> > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '',<br>
>> >>>>>>> >>> >>> > >> 'slaves':<br>
>> >>>>>>> >>> >>> > >> [],<br>
>> >>>>>>> >>> >>> > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '',<br>
>> >>>>>>> >>> >>> > >> 'cfg': {},<br>
>> >>>>>>> >>> >>> > >> 'mtu':<br>
>> >>>>>>> >>> >>> > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr':<br>
>> >>>>>>> >>> >>> > >> '00:00:00:00:00:00'}},<br>
>> >>>>>>> >>> >>> > >> 'software_version': '4.16', 'memSize': '24019',<br>
>> >>>>>>> >>> >>> > >> 'cpuSpeed':<br>
>> >>>>>>> >>> >>> > >> '2667.000',<br>
>> >>>>>>> >>> >>> > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus':<br>
>> >>>>>>> >>> >>> > >> [6,<br>
>> >>>>>>> >>> >>> > >> 7, 8,<br>
>> >>>>>>> >>> >>> > >> 9,<br>
>> >>>>>>> >>> >>> > >> 10, 11,<br>
>> >>>>>>> >>> >>> > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory':<br>
>> >>>>>>> >>> >>> > >> '12278',<br>
>> >>>>>>> >>> >>> > >> 'cpus':<br>
>> >>>>>>> >>> >>> > >> [0,<br>
>> >>>>>>> >>> >>> > >> 1, 2,<br>
>> >>>>>>> >>> >>> > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name':<br>
>> >>>>>>> >>> >>> > >> 'Snow<br>
>> >>>>>>> >>> >>> > >> Man',<br>
>> >>>>>>> >>> >>> > >> 'vlans':<br>
>> >>>>>>> >>> >>> > >> {'bond0.10': {'iface': 'bond0', 'addr':<br>
>> >>>>>>> >>> >>> > >> '43.25.17.16',<br>
>> >>>>>>> >>> >>> > >> 'cfg':<br>
>> >>>>>>> >>> >>> > >> {'DEFROUTE':<br>
>> >>>>>>> >>> >>> > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16',<br>
>> >>>>>>> >>> >>> > >> 'HOTPLUG':<br>
>> >>>>>>> >>> >>> > >> 'no',<br>
>> >>>>>>> >>> >>> > >> 'GATEWAY':<br>
>> >>>>>>> >>> >>> > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK':<br>
>> >>>>>>> >>> >>> > >> '255.255.255.0',<br>
>> >>>>>>> >>> >>> > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU':<br>
>> >>>>>>> >>> >>> > >> '1500',<br>
>> >>>>>>> >>> >>> > >> 'ONBOOT':<br>
>> >>>>>>> >>> >>> > >> 'yes'},<br>
>> >>>>>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'],<br>
>> >>>>>>> >>> >>> > >> 'vlanid':<br>
>> >>>>>>> >>> >>> > >> 10,<br>
>> >>>>>>> >>> >>> > >> 'mtu':<br>
>> >>>>>>> >>> >>> > >> '1500',<br>
>> >>>>>>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs':<br>
>> >>>>>>> >>> >>> > >> ['<a href="http://43.25.17.16/24" target="_blank">43.25.17.16/24</a>']}, 'bond0.36': {'iface':<br>
>> >>>>>>> >>> >>> > >> 'bond0',<br>
>> >>>>>>> >>> >>> > >> 'addr':<br>
>> >>>>>>> >>> >>> > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes',<br>
>> >>>>>>> >>> >>> > >> 'HOTPLUG':<br>
>> >>>>>>> >>> >>> > >> 'no',<br>
>> >>>>>>> >>> >>> > >> 'MTU':<br>
>> >>>>>>> >>> >>> > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36',<br>
>> >>>>>>> >>> >>> > >> 'ONBOOT':<br>
>> >>>>>>> >>> >>> > >> 'no'},<br>
>> >>>>>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'],<br>
>> >>>>>>> >>> >>> > >> 'vlanid':<br>
>> >>>>>>> >>> >>> > >> 36,<br>
>> >>>>>>> >>> >>> > >> 'mtu':<br>
>> >>>>>>> >>> >>> > >> '1500',<br>
>> >>>>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100':<br>
>> >>>>>>> >>> >>> > >> {'iface':<br>
>> >>>>>>> >>> >>> > >> 'bond1',<br>
>> >>>>>>> >>> >>> > >> 'addr':<br>
>> >>>>>>> >>> >>> > >> '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes',<br>
>> >>>>>>> >>> >>> > >> 'HOTPLUG':<br>
>> >>>>>>> >>> >>> > >> 'no',<br>
>> >>>>>>> >>> >>> > >> 'MTU':<br>
>> >>>>>>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100',<br>
>> >>>>>>> >>> >>> > >> 'ONBOOT':<br>
>> >>>>>>> >>> >>> > >> 'no'},<br>
>> >>>>>>> >>> >>> > >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'],<br>
>> >>>>>>> >>> >>> > >> 'vlanid':<br>
>> >>>>>>> >>> >>> > >> 100,<br>
>> >>>>>>> >>> >>> > >> 'mtu':<br>
>> >>>>>>> >>> >>> > >> '9000',<br>
>> >>>>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12',<br>
>> >>>>>>> >>> >>> > >> 'kvmEnabled':<br>
>> >>>>>>> >>> >>> > >> 'true',<br>
>> >>>>>>> >>> >>> > >> 'guestOverhead': '65', 'cpuThreads': '24',<br>
>> >>>>>>> >>> >>> > >> 'emulatedMachines':<br>
>> >>>>>>> >>> >>> > >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0',<br>
>> >>>>>>> >>> >>> > >> u'rhel6.2.0',<br>
>> >>>>>>> >>> >>> > >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0',<br>
>> >>>>>>> >>> >>> > >> u'rhel5.4.4',<br>
>> >>>>>>> >>> >>> > >> u'rhel5.4.0'],<br>
>> >>>>>>> >>> >>> > >> 'operatingSystem': {'release': '5.el6.centos.11.1',<br>
>> >>>>>>> >>> >>> > >> 'version':<br>
>> >>>>>>> >>> >>> > >> '6',<br>
>> >>>>>>> >>> >>> > >> 'name':<br>
>> >>>>>>> >>> >>> > >> 'RHEL'}, 'lastClient': '10.10.10.2'}}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getHardwareInfo with () {}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': 'Dell'}}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [222e8036]<br>
>> >>>>>>> >>> >>> > >> Thread-13::ERROR::2014-11-24 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured<br>
>> >>>>>>> >>> >>> > >> Traceback (most recent call last):<br>
>> >>>>>>> >>> >>> > >> File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper<br>
>> >>>>>>> >>> >>> > >> res = f(*args, **kwargs)<br>
>> >>>>>>> >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper<br>
>> >>>>>>> >>> >>> > >> rv = func(*args, **kwargs)<br>
>> >>>>>>> >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList<br>
>> >>>>>>> >>> >>> > >> return {'hosts': self.svdsmProxy.glusterPeerStatus()}<br>
>> >>>>>>> >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 50, in __call__<br>
>> >>>>>>> >>> >>> > >> return callMethod()<br>
>> >>>>>>> >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda><br>
>> >>>>>>> >>> >>> > >> **kwargs)<br>
>> >>>>>>> >>> >>> > >> File "<string>", line 2, in glusterPeerStatus<br>
>> >>>>>>> >>> >>> > >> File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod<br>
>> >>>>>>> >>> >>> > >> raise convert_to_error(kind, result)<br>
>> >>>>>>> >>> >>> > >> GlusterCmdExecFailedException: Command execution failed<br>
>> >>>>>>> >>> >>> > >> error: Connection failed. Please check if gluster daemon is operational.<br>
>> >>>>>>> >>> >>> > >> return code: 1<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init -> state preparing<br>
>> >>>>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)<br>
>> >>>>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state preparing -> state finished<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
</div></div>>> >>>>>>> >>> >>> > >> 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)<br>
<span>>> >>>>>>> >>> >>> > >> Owner.cancelAll requests {}<br>
>> >>>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24<br>
</span><span>>> >>>>>>> >>> >>> > >> 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False<br>
>> >>>>>>> >>> >>> > >> -------------------------------<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >> [root@compute4 ~]# service glusterd status<br>
>> >>>>>>> >>> >>> > >> glusterd is stopped<br>
>> >>>>>>> >>> >>> > >> [root@compute4 ~]# chkconfig --list | grep glusterd<br>
>> >>>>>>> >>> >>> > >> glusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off<br>
>> >>>>>>> >>> >>> > >> [root@compute4 ~]#<br>
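[Editor's note: the transcript above shows glusterd enabled for runlevels 2-5 yet stopped after the reboot, i.e. it failed to come up at boot. On systemd-based hosts the workaround discussed in this thread is to order GlusterD after the network is online; a minimal sketch (the drop-in path and the use of a drop-in rather than editing glusterd.service directly are my assumptions, not from the thread):

```shell
# Hypothetical drop-in for systemd hosts only; the host above uses SysV init.
# Delays glusterd until network-online.target, per the workaround in this thread.
mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/network-online.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload
```

On the SysV host shown here, the equivalent would be ensuring the network init script runs before glusterd's start priority.]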
>> >>>>>>> >>> >>> > >><br>
</span>>> >>>>>>> >>> >>> > >> Thanks,<br>
>> >>>>>>> >>> >>> > >> Punit<br>
>> >>>>>>> >>> >>> > >><br>
<span>>> >>>>>>> >>> >>> > >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj<br>
>> >>>>>>> >>> >>> > >> <<a href="mailto:kmayilsa@redhat.com" target="_blank">kmayilsa@redhat.com</a>><br>
>> >>>>>>> >>> >>> > >> wrote:<br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >>> Can you send the corresponding error in vdsm.log from the host?<br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >>> Also check if glusterd service is running.<br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >>> Thanks,<br>
>> >>>>>>> >>> >>> > >>> Kanagaraj<br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >>> On 11/24/2014 03:39 PM, Punit Dambiwal wrote:<br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >>> Hi,<br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >>> After reboot, my Hypervisor host cannot activate again in the cluster, and it fails with the following error :-<br>
>> >>>>>>> >>> >>> > >>><br>
</span><span>>> >>>>>>> >>> >>> > >>> Gluster command [<UNKNOWN>] failed on server...<br>
>> >>>>>>> >>> >>> > >>><br>
</span><div><div>>> >>>>>>> >>> >>> > >>> Engine logs :-<br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:28,397 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 5f251c90<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:30,609 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) FINISH, GlusterVolumesListVDSCommand, return: {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0}, log id: 5f251c90<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:33,768 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-8) [287d570d] Lock Acquired to object EngineLock [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS , sharedLocks= ]<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:33,795 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Running command: ActivateVdsCommand internal: false. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction group MANIPULATE_HOST with role type ADMIN<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:33,796 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Before acquiring lock in order to prevent monitoring for host Compute5 from data-center SV_WTC<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:33,797 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock acquired, from now a monitoring of host will be skipped for host Compute5 from data-center SV_WTC<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:33,817 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] START, SetVdsStatusVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=Unassigned, nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: 1cbc7311<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:33,820 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] FINISH, SetVdsStatusVDSCommand, log id: 1cbc7311<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:34,086 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) Activate finished. Lock released. Monitoring can run now for host Compute5 from data-center SV_WTC<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:34,088 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-45) Correlation ID: 287d570d, Job ID: 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: null, Custom Event ID: -1, Message: Host Compute5 was activated by admin.<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:34,090 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) Lock freed to object EngineLock [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS , sharedLocks= ]<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:35,792 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-55) [3706e836] START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 48a0c832<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,064 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-69) START, GetHardwareInfoVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log id: 6d560cc2<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,074 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-69) FINISH, GetHardwareInfoVDSCommand, log id: 6d560cc2<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,093 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-69) Host Compute5 is running with disabled SELinux.<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,127 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,147 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, GlusterServersListVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: 4faed87<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,164 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] FINISH, GlusterServersListVDSCommand, log id: 4faed87<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,189 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,206 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, SetVdsStatusVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=NonOperational, nonOperationalReason=GLUSTER_COMMAND_FAILED, stopSpmFailureLogged=false), log id: fed5617<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,209 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] FINISH, SetVdsStatusVDSCommand, log id: fed5617<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,223 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: 4a84c4e5, Job ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call Stack: null, Custom<br>
>> >>>>>>> >>> >>> > >>> Event<br>
</div></div>>> >>>>>>> >>> >>> > >>> ID: -1, Message: Gluster command [<UNKNOWN>] failed<br>
>> >>>>>>> >>> >>> > >>> on<br>
<div><div>>> >>>>>>> >>> >>> > >>> server<br>
>> >>>>>>> >>> >>> > >>> Compute5.<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,243 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Status of host Compute5 was set to NonOperational.<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,272 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,274 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is already in NonOperational status for reason GLUSTER_COMMAND_FAILED. SetNonOperationalVds command is skipped.<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:38,065 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-55) [3706e836] FINISH, GlusterVolumesListVDSCommand, return: {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1}, log id: 48a0c832<br>
>> >>>>>>> >>> >>> > >>> 2014-11-24 18:05:43,243 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-35) START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 3ce13ebc<br>
>> >>>>>>> >>> >>> > >>> ^C<br>
>> >>>>>>> >>> >>> > >>> [root@ccr01 ~]#<br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >>> Thanks,<br>
>> >>>>>>> >>> >>> > >>> Punit<br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >>> _______________________________________________<br>
>> >>>>>>> >>> >>> > >>> Users mailing list<br>
>> >>>>>>> >>> >>> > >>> Users@ovirt.org<br>
>> >>>>>>> >>> >>> > >>> http://<a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">lists.ovirt.org/mailman/listinfo/users</a><br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >>><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > >><br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> > ><br>
>> >>>>>>> >>> >>> ><br>
>> >>>>>>> >>> >><br>
>> >>>>>>> >>> >><br>
>> >>>>>>> >>> ><br>
>> >>>>>>> >>> ><br>
>> >>>>>>> >>> > _______________________________________________<br>
>> >>>>>>> >>> > Gluster-users mailing list<br>
>> >>>>>>> >>> > <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
>> >>>>>>> >>> ><br>
>> >>>>>>> >>> > <a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br>
>> >>>>>>> >><br>
>> >>>>>>> >><br>
>> >>>>>>> ><br>
>> >>>>>><br>
>> >>>>>><br>
>> >>>><br>
>> >><br>
><br>
><br>
</div></div></blockquote></div><br></div>
</blockquote></div><br></div>
</blockquote></div><br></div>