<div dir="ltr">Hi Atin,<div><br></div><div>What if I use GlusterFS 3.5 instead? Will this bug affect 3.5 as well?</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jan 13, 2015 at 3:00 PM, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
<br>
On 01/13/2015 12:12 PM, Punit Dambiwal wrote:<br>
> Hi Atin,<br>
><br>
> Please find the output from here :- <a href="http://ur1.ca/jf4bs" target="_blank">http://ur1.ca/jf4bs</a><br>
><br>
</span>Looks like <a href="http://review.gluster.org/#/c/9269/" target="_blank">http://review.gluster.org/#/c/9269/</a> should solve this issue.<br>
Please note this patch has not been included in the 3.6 release. Would you be<br>
able to apply it to the source and re-test?<br>
<span class="HOEnZb"><font color="#888888"><br>
~Atin<br>
</font></span><div class="HOEnZb"><div class="h5">> On Tue, Jan 13, 2015 at 12:37 PM, Atin Mukherjee <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>><br>
> wrote:<br>
><br>
>> Punit,<br>
>><br>
>> The cli log wouldn't help much here. To debug this issue further, could<br>
>> you please let us know the following:<br>
>><br>
>> 1. gluster peer status output<br>
>> 2. gluster volume status output<br>
>> 3. gluster --version output.<br>
>> 4. Which command failed<br>
>> 5. glusterd log file of all the nodes<br>
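The five items above can be gathered in one pass with a small script along these lines (a sketch, not a gluster tool; the glusterd log path is the usual default and may differ on your install):

```shell
# Collect the requested gluster diagnostics into one file per node.
out="gluster-diag-$(hostname).txt"
{
    echo "== gluster peer status ==";   gluster peer status
    echo "== gluster volume status =="; gluster volume status
    echo "== gluster --version ==";     gluster --version
    echo "== glusterd log (tail) ==";   tail -n 200 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
} > "$out" 2>&1
echo "wrote $out"
```

Running it on every node and attaching the resulting files covers points 1-3 and 5 in one step.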
>><br>
>> ~Atin<br>
>><br>
>><br>
>> On 01/13/2015 07:48 AM, Punit Dambiwal wrote:<br>
>>> Hi,<br>
>>><br>
>>> Please find more details on this below. Can anybody from the Gluster team<br>
>>> help me here :-<br>
>>><br>
>>><br>
>>> Gluster CLI Logs :- /var/log/glusterfs/cli.log<br>
>>><br>
>>> [2015-01-13 02:06:23.071969] T [cli.c:264:cli_rpc_notify] 0-glusterfs:<br>
>> got<br>
>>> RPC_CLNT_CONNECT<br>
>>> [2015-01-13 02:06:23.072012] T [cli-quotad-client.c:94:cli_quotad_notify]<br>
>>> 0-glusterfs: got RPC_CLNT_CONNECT<br>
>>> [2015-01-13 02:06:23.072024] I [socket.c:2344:socket_event_handler]<br>
>>> 0-transport: disconnecting now<br>
>>> [2015-01-13 02:06:23.072055] T<br>
>> [cli-quotad-client.c:100:cli_quotad_notify]<br>
>>> 0-glusterfs: got RPC_CLNT_DISCONNECT<br>
>>> [2015-01-13 02:06:23.072131] T [rpc-clnt.c:1381:rpc_clnt_record]<br>
>>> 0-glusterfs: Auth Info: pid: 0, uid: 0, gid: 0, owner:<br>
>>> [2015-01-13 02:06:23.072176] T<br>
>>> [rpc-clnt.c:1238:rpc_clnt_record_build_header] 0-rpc-clnt: Request<br>
>> fraglen<br>
>>> 128, payload: 64, rpc hdr: 64<br>
>>> [2015-01-13 02:06:23.072572] T [socket.c:2863:socket_connect] (--><br>
>>> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fed02f15420]<br>
>> (--><br>
>>><br>
>> /usr/lib64/glusterfs/3.6.1/rpc-transport/socket.so(+0x7293)[0x7fed001a4293]<br>
>>> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_submit+0x468)[0x7fed0266df98] (--><br>
>>> /usr/sbin/gluster(cli_submit_request+0xdb)[0x40a9bb] (--><br>
>>> /usr/sbin/gluster(cli_cmd_submit+0x8e)[0x40b7be] ))))) 0-glusterfs:<br>
>> connect<br>
>>> () called on transport already connected<br>
>>> [2015-01-13 02:06:23.072616] T [rpc-clnt.c:1573:rpc_clnt_submit]<br>
>>> 0-rpc-clnt: submitted request (XID: 0x1 Program: Gluster CLI, ProgVers:<br>
>> 2,<br>
>>> Proc: 27) to rpc-transport (glusterfs)<br>
>>> [2015-01-13 02:06:23.072633] D [rpc-clnt-ping.c:231:rpc_clnt_start_ping]<br>
>>> 0-glusterfs: ping timeout is 0, returning<br>
>>> [2015-01-13 02:06:23.075930] T [rpc-clnt.c:660:rpc_clnt_reply_init]<br>
>>> 0-glusterfs: received rpc message (RPC XID: 0x1 Program: Gluster CLI,<br>
>>> ProgVers: 2, Proc: 27) from rpc-transport (glusterfs)<br>
>>> [2015-01-13 02:06:23.075976] D [cli-rpc-ops.c:6548:gf_cli_status_cbk]<br>
>>> 0-cli: Received response to status cmd<br>
>>> [2015-01-13 02:06:23.076025] D [cli-cmd.c:384:cli_cmd_submit] 0-cli:<br>
>>> Returning 0<br>
>>> [2015-01-13 02:06:23.076049] D [cli-rpc-ops.c:6811:gf_cli_status_volume]<br>
>>> 0-cli: Returning: 0<br>
>>> [2015-01-13 02:06:23.076192] D [cli-xml-output.c:84:cli_begin_xml_output]<br>
>>> 0-cli: Returning 0<br>
>>> [2015-01-13 02:06:23.076244] D<br>
>> [cli-xml-output.c:131:cli_xml_output_common]<br>
>>> 0-cli: Returning 0<br>
>>> [2015-01-13 02:06:23.076256] D<br>
>>> [cli-xml-output.c:1375:cli_xml_output_vol_status_begin] 0-cli: Returning<br>
>> 0<br>
>>> [2015-01-13 02:06:23.076437] D [cli-xml-output.c:108:cli_end_xml_output]<br>
>>> 0-cli: Returning 0<br>
>>> [2015-01-13 02:06:23.076459] D<br>
>>> [cli-xml-output.c:1398:cli_xml_output_vol_status_end] 0-cli: Returning 0<br>
>>> [2015-01-13 02:06:23.076490] I [input.c:36:cli_batch] 0-: Exiting with: 0<br>
>>><br>
>>> Command log :- /var/log/glusterfs/.cmd_log_history<br>
>>><br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> [2015-01-13 01:10:35.836676] : volume status all tasks : FAILED :<br>
>> Staging<br>
>>> failed on 00000000-0000-0000-0000-000000000000. Please check log file for<br>
>>> details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> [2015-01-13 01:16:25.956514] : volume status all tasks : FAILED :<br>
>> Staging<br>
>>> failed on 00000000-0000-0000-0000-000000000000. Please check log file for<br>
>>> details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> [2015-01-13 01:17:36.977833] : volume status all tasks : FAILED :<br>
>> Staging<br>
>>> failed on 00000000-0000-0000-0000-000000000000. Please check log file for<br>
>>> details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> [2015-01-13 01:21:07.048053] : volume status all tasks : FAILED :<br>
>> Staging<br>
>>> failed on 00000000-0000-0000-0000-000000000000. Please check log file for<br>
>>> details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> [2015-01-13 01:26:57.168661] : volume status all tasks : FAILED :<br>
>> Staging<br>
>>> failed on 00000000-0000-0000-0000-000000000000. Please check log file for<br>
>>> details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> [2015-01-13 01:28:07.194428] : volume status all tasks : FAILED :<br>
>> Staging<br>
>>> failed on 00000000-0000-0000-0000-000000000000. Please check log file for<br>
>>> details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> [2015-01-13 01:30:27.256667] : volume status vol01 : FAILED : Locking<br>
>>> failed on <a href="http://cpu02.zne01.hkg1.stack.com" target="_blank">cpu02.zne01.hkg1.stack.com</a>. Please check log file for details.<br>
>>> Locking failed on <a href="http://cpu03.zne01.hkg1.stack.com" target="_blank">cpu03.zne01.hkg1.stack.com</a>. Please check log file for<br>
>>> details.<br>
>>> Locking failed on <a href="http://cpu04.zne01.hkg1.stack.com" target="_blank">cpu04.zne01.hkg1.stack.com</a>. Please check log file for<br>
>>> details.<br>
>>> [2015-01-13 01:34:58.350748] : volume status all tasks : FAILED :<br>
>> Staging<br>
>>> failed on 00000000-0000-0000-0000-000000000000. Please check log file for<br>
>>> details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> [2015-01-13 01:36:08.375326] : volume status all tasks : FAILED :<br>
>> Staging<br>
>>> failed on 00000000-0000-0000-0000-000000000000. Please check log file for<br>
>>> details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> [2015-01-13 01:36:08.386470] : volume status vol01 : FAILED : Locking<br>
>>> failed on <a href="http://cpu02.zne01.hkg1.stack.com" target="_blank">cpu02.zne01.hkg1.stack.com</a>. Please check log file for details.<br>
>>> Locking failed on <a href="http://cpu03.zne01.hkg1.stack.com" target="_blank">cpu03.zne01.hkg1.stack.com</a>. Please check log file for<br>
>>> details.<br>
>>> Locking failed on <a href="http://cpu04.zne01.hkg1.stack.com" target="_blank">cpu04.zne01.hkg1.stack.com</a>. Please check log file for<br>
>>> details.<br>
>>> [2015-01-13 01:42:59.524215] : volume stop vol01 : FAILED : Locking<br>
>> failed<br>
>>> on <a href="http://cpu02.zne01.hkg1.stack.com" target="_blank">cpu02.zne01.hkg1.stack.com</a>. Please check log file for details.<br>
>>> Locking failed on <a href="http://cpu03.zne01.hkg1.stack.com" target="_blank">cpu03.zne01.hkg1.stack.com</a>. Please check log file for<br>
>>> details.<br>
>>> Locking failed on <a href="http://cpu04.zne01.hkg1.stack.com" target="_blank">cpu04.zne01.hkg1.stack.com</a>. Please check log file for<br>
>>> details.<br>
>>> [2015-01-13 01:45:10.550659] : volume status all tasks : FAILED :<br>
>> Staging<br>
>>> failed on 00000000-0000-0000-0000-000000000000. Please check log file for<br>
>>> details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log<br>
>>> file for details.<br>
>>> [2015-01-13 01:46:10.656802] : volume status all tasks : SUCCESS<br>
>>> [2015-01-13 01:51:02.796031] : volume status all tasks : SUCCESS<br>
>>> [2015-01-13 01:52:02.897804] : volume status all tasks : SUCCESS<br>
>>> [2015-01-13 01:55:25.841070] : system:: uuid get : SUCCESS<br>
>>> [2015-01-13 01:55:26.752084] : system:: uuid get : SUCCESS<br>
>>> [2015-01-13 01:55:32.499049] : system:: uuid get : SUCCESS<br>
>>> [2015-01-13 01:55:38.716907] : system:: uuid get : SUCCESS<br>
>>> [2015-01-13 01:56:52.905899] : volume status all tasks : SUCCESS<br>
>>> [2015-01-13 01:58:53.109613] : volume status all tasks : SUCCESS<br>
>>> [2015-01-13 02:03:26.769430] : system:: uuid get : SUCCESS<br>
>>> [2015-01-13 02:04:22.859213] : volume status all tasks : SUCCESS<br>
>>> [2015-01-13 02:05:22.970393] : volume status all tasks : SUCCESS<br>
>>> [2015-01-13 02:06:23.075823] : volume status all tasks : SUCCESS<br>
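A log this repetitive is easier to scan after filtering; a throwaway script like the following (an ad-hoc helper, not part of gluster) pulls out just the FAILED entries from a `.cmd_log_history`-style log:

```python
import re

# Matches lines of the form: [timestamp] : command : SUCCESS|FAILED ...
LINE = re.compile(r'^\[(?P<ts>[^\]]+)\] : (?P<cmd>.+?) : (?P<status>SUCCESS|FAILED)')

def failed_commands(lines):
    """Return (timestamp, command) pairs for every FAILED entry."""
    out = []
    for line in lines:
        m = LINE.match(line)
        if m and m.group('status') == 'FAILED':
            out.append((m.group('ts'), m.group('cmd')))
    return out

sample = [
    "[2015-01-13 01:30:27.256667] : volume status vol01 : FAILED : Locking",
    "[2015-01-13 01:46:10.656802] : volume status all tasks : SUCCESS",
]
print(failed_commands(sample))
# -> [('2015-01-13 01:30:27.256667', 'volume status vol01')]
```

Applied to the log above, this makes it obvious that the same `volume status all tasks` staging failure repeats until roughly 01:46, after which the command starts succeeding.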
>>><br>
>>><br>
>>> On Mon, Jan 12, 2015 at 10:53 PM, Kanagaraj Mayilsamy <<br>
>> <a href="mailto:kmayilsa@redhat.com">kmayilsa@redhat.com</a>><br>
>>> wrote:<br>
>>><br>
>>>> I can see the failures in glusterd log.<br>
>>>><br>
>>>> Can someone from the glusterfs dev team please help with this?<br>
>>>><br>
>>>> Thanks,<br>
>>>> Kanagaraj<br>
>>>><br>
>>>> ----- Original Message -----<br>
>>>>> From: "Punit Dambiwal" <<a href="mailto:hypunit@gmail.com">hypunit@gmail.com</a>><br>
>>>>> To: "Kanagaraj" <<a href="mailto:kmayilsa@redhat.com">kmayilsa@redhat.com</a>><br>
>>>>> Cc: "Martin Pavlík" <<a href="mailto:mpavlik@redhat.com">mpavlik@redhat.com</a>>, "Vijay Bellur" <<br>
>>>> <a href="mailto:vbellur@redhat.com">vbellur@redhat.com</a>>, "Kaushal M" <<a href="mailto:kshlmster@gmail.com">kshlmster@gmail.com</a>>,<br>
>>>>> <a href="mailto:users@ovirt.org">users@ovirt.org</a>, <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>>>>> Sent: Monday, January 12, 2015 3:36:43 PM<br>
>>>>> Subject: Re: Failed to create volume in OVirt with gluster<br>
>>>>><br>
>>>>> Hi Kanagaraj,<br>
>>>>><br>
>>>>> Please find the logs from here :- <a href="http://ur1.ca/jeszc" target="_blank">http://ur1.ca/jeszc</a><br>
>>>>><br>
>>>>> [image: Inline image 1]<br>
>>>>><br>
>>>>> [image: Inline image 2]<br>
>>>>><br>
>>>>> On Mon, Jan 12, 2015 at 1:02 PM, Kanagaraj <<a href="mailto:kmayilsa@redhat.com">kmayilsa@redhat.com</a>><br>
>> wrote:<br>
>>>>><br>
>>>>>> Looks like there are some failures in gluster.<br>
>>>>>> Can you send the log output from the glusterd log file on the relevant<br>
>>>> hosts?<br>
>>>>>><br>
>>>>>> Thanks,<br>
>>>>>> Kanagaraj<br>
>>>>>><br>
>>>>>><br>
>>>>>> On 01/12/2015 10:24 AM, Punit Dambiwal wrote:<br>
>>>>>><br>
>>>>>> Hi,<br>
>>>>>><br>
>>>>>> Is there anyone from Gluster who can help me here :-<br>
>>>>>><br>
>>>>>> Engine logs :-<br>
>>>>>><br>
>>>>>> 2015-01-12 12:50:33,841 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:34,725 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:36,824 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:36,853 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:36,866 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:37,751 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:39,849 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:39,878 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:39,890 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:40,776 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:42,878 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:42,903 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:42,916 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>> (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait<br>
>> lock<br>
>>>>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300<br>
>>>>>> value: GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>> 2015-01-12 12:50:43,771 INFO<br>
>>>>>><br>
>>>> [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]<br>
>>>>>> (ajp--127.0.0.1-8702-1) [330ace48] FINISH,<br>
>>>> CreateGlusterVolumeVDSCommand,<br>
>>>>>> log id: 303e70a4<br>
>>>>>> 2015-01-12 12:50:43,780 ERROR<br>
>>>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]<br>
>>>>>> (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID:<br>
>>>>>> 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event<br>
>>>> ID:<br>
>>>>>> -1, Message: Creation of Gluster Volume vol01 failed.<br>
>>>>>> 2015-01-12 12:50:43,785 INFO<br>
>>>>>> [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]<br>
>>>>>> (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object EngineLock<br>
>>>>>> [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value:<br>
>>>> GLUSTER<br>
>>>>>> , sharedLocks= ]<br>
>>>>>><br>
>>>>>> [image: Inline image 2]<br>
>>>>>><br>
>>>>>><br>
>>>>>> On Sun, Jan 11, 2015 at 6:48 PM, Martin Pavlík <<a href="mailto:mpavlik@redhat.com">mpavlik@redhat.com</a>><br>
>>>> wrote:<br>
>>>>>><br>
>>>>>>> Hi Punit,<br>
>>>>>>><br>
>>>>>>> unfortunately I'm not that good with gluster; I was just following<br>
>>>>>>> the obvious clue from the log. Could you check on the nodes whether<br>
>>>>>>> the packages are even available for installation:<br>
>>>>>>><br>
>>>>>>> yum install gluster-swift gluster-swift-object gluster-swift-plugin<br>
>>>>>>> gluster-swift-account<br>
>>>>>>> gluster-swift-proxy gluster-swift-doc gluster-swift-container<br>
>>>>>>> glusterfs-geo-replication<br>
>>>>>>><br>
>>>>>>> If not, you could try to get them from the official gluster repo:<br>
>>>>>>><br>
>>>><br>
>> <a href="http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo" target="_blank">http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo</a><br>
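One way to run that check across the package list in one go (a sketch assuming RPM-based nodes; "missing" is also reported if `rpm` itself is unavailable):

```shell
# Report which of the packages vdsm looks for are present on this node.
report=""
for pkg in gluster-swift gluster-swift-object gluster-swift-plugin \
           gluster-swift-account gluster-swift-proxy gluster-swift-doc \
           gluster-swift-container glusterfs-geo-replication; do
    if rpm -q "$pkg" >/dev/null 2>&1; then
        report="$report installed: $pkg\n"
    else
        report="$report missing:   $pkg\n"
    fi
done
printf "%b" "$report"
```

Any package reported as missing here should line up with the "rpm package ... not found" lines in the vdsm log further down the thread.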
>>>>>>><br>
>>>>>>> HTH<br>
>>>>>>><br>
>>>>>>> M.<br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> On 10 Jan 2015, at 04:35, Punit Dambiwal <<a href="mailto:hypunit@gmail.com">hypunit@gmail.com</a>><br>
>> wrote:<br>
>>>>>>><br>
>>>>>>> Hi Martin,<br>
>>>>>>><br>
>>>>>>> I installed gluster from the oVirt repo. Is it required to install<br>
>>>>>>> those packages manually?<br>
>>>>>>><br>
>>>>>>> On Fri, Jan 9, 2015 at 7:19 PM, Martin Pavlík <<a href="mailto:mpavlik@redhat.com">mpavlik@redhat.com</a>><br>
>>>> wrote:<br>
>>>>>>><br>
>>>>>>>> Hi Punit,<br>
>>>>>>>><br>
>>>>>>>> can you verify whether the nodes contain the gluster packages from<br>
>>>>>>>> the following log?<br>
>>>>>>>><br>
>>>>>>>> Thread-14::DEBUG::2015-01-09<br>
>>>>>>>> 18:06:28,823::caps::716::root::(_getKeyPackages) rpm package<br>
>>>>>>>> ('gluster-swift',) not found<br>
>>>>>>>> Thread-14::DEBUG::2015-01-09<br>
>>>>>>>> 18:06:28,825::caps::716::root::(_getKeyPackages) rpm package<br>
>>>>>>>> ('gluster-swift-object',) not found<br>
>>>>>>>> Thread-14::DEBUG::2015-01-09<br>
>>>>>>>> 18:06:28,826::caps::716::root::(_getKeyPackages) rpm package<br>
>>>>>>>> ('gluster-swift-plugin',) not found<br>
>>>>>>>> Thread-14::DEBUG::2015-01-09<br>
>>>>>>>> 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package<br>
>>>>>>>> ('gluster-swift-account',) not found<br>
>>>>>>>> Thread-14::DEBUG::2015-01-09<br>
>>>>>>>> 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package<br>
>>>>>>>> ('gluster-swift-proxy',) not found<br>
>>>>>>>> Thread-14::DEBUG::2015-01-09<br>
>>>>>>>> 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package<br>
>>>>>>>> ('gluster-swift-doc',) not found<br>
>>>>>>>> Thread-14::DEBUG::2015-01-09<br>
>>>>>>>> 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package<br>
>>>>>>>> ('gluster-swift-container',) not found<br>
>>>>>>>> Thread-14::DEBUG::2015-01-09<br>
>>>>>>>> 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package<br>
>>>>>>>> ('glusterfs-geo-replication',) not found<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> M.<br>
>>>>>>>><br>
>>>>>>>> On 09 Jan 2015, at 11:13, Punit Dambiwal <<a href="mailto:hypunit@gmail.com">hypunit@gmail.com</a>><br>
>>>> wrote:<br>
>>>>>>>><br>
>>>>>>>> Hi Kanagaraj,<br>
>>>>>>>><br>
>>>>>>>> Please find the attached logs :-<br>
>>>>>>>><br>
>>>>>>>> Engine Logs :- <a href="http://ur1.ca/jdopt" target="_blank">http://ur1.ca/jdopt</a><br>
>>>>>>>> VDSM Logs :- <a href="http://ur1.ca/jdoq9" target="_blank">http://ur1.ca/jdoq9</a><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> On Thu, Jan 8, 2015 at 6:05 PM, Kanagaraj <<a href="mailto:kmayilsa@redhat.com">kmayilsa@redhat.com</a>><br>
>>>> wrote:<br>
>>>>>>>><br>
>>>>>>>>> Do you see any errors in the UI?<br>
>>>>>>>>><br>
>>>>>>>>> Also please provide the engine.log and vdsm.log when the failure<br>
>>>>>>>>> occurred.<br>
>>>>>>>>><br>
>>>>>>>>> Thanks,<br>
>>>>>>>>> Kanagaraj<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> On 01/08/2015 02:25 PM, Punit Dambiwal wrote:<br>
>>>>>>>>><br>
>>>>>>>>> Hi Martin,<br>
>>>>>>>>><br>
>>>>>>>>> The steps are below :-<br>
>>>>>>>>><br>
>>>>>>>>> 1. Set up the oVirt engine on one server...<br>
>>>>>>>>> 2. Installed CentOS 7 on 4 host node servers...<br>
>>>>>>>>> 3. I am using host nodes (compute+storage)....now I have added all 4<br>
>>>>>>>>> nodes to the engine...<br>
>>>>>>>>> 4. Created the gluster volume from the GUI...<br>
>>>>>>>>><br>
>>>>>>>>> Network :-<br>
>>>>>>>>> eth0 :- public network (1G)<br>
>>>>>>>>> eth1+eth2=bond0= VM public network (1G)<br>
>>>>>>>>> eth3+eth4=bond1=ovirtmgmt+storage (10G private network)<br>
>>>>>>>>><br>
>>>>>>>>> every hostnode has 24 bricks=24*4(distributed replicated)<br>
>>>>>>>>><br>
>>>>>>>>> Thanks,<br>
>>>>>>>>> Punit<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> On Thu, Jan 8, 2015 at 3:20 PM, Martin Pavlík <<a href="mailto:mpavlik@redhat.com">mpavlik@redhat.com</a>><br>
>>>>>>>>> wrote:<br>
>>>>>>>>><br>
>>>>>>>>>> Hi Punit,<br>
>>>>>>>>>><br>
>>>>>>>>>> can you please also provide the errors from /var/log/vdsm/vdsm.log<br>
>>>>>>>>>> and /var/log/vdsm/vdsmd.log?<br>
>>>>>>>>>><br>
>>>>>>>>>> It would be really helpful if you provided the exact steps to<br>
>>>>>>>>>> reproduce the problem.<br>
>>>>>>>>>><br>
>>>>>>>>>> regards<br>
>>>>>>>>>><br>
>>>>>>>>>> Martin Pavlik - rhev QE<br>
>>>>>>>>>> > On 08 Jan 2015, at 03:06, Punit Dambiwal <<a href="mailto:hypunit@gmail.com">hypunit@gmail.com</a>><br>
>>>> wrote:<br>
>>>>>>>>>>><br>
>>>>>>>>>>> Hi,<br>
>>>>>>>>>>><br>
>>>>>>>>>>> I tried to add a gluster volume but it failed...<br>
>>>>>>>>>>><br>
>>>>>>>>>>> Ovirt :- 3.5<br>
>>>>>>>>>>> VDSM :- vdsm-4.16.7-1.gitdb83943.el7<br>
>>>>>>>>>>> KVM :- 1.5.3 - 60.el7_0.2<br>
>>>>>>>>>>> libvirt-1.1.1-29.el7_0.4<br>
>>>>>>>>>>> Glusterfs :- glusterfs-3.5.3-1.el7<br>
>>>>>>>>>>><br>
>>>>>>>>>>> Engine Logs :-<br>
>>>>>>>>>>><br>
>>>>>>>>>>> 2015-01-08 09:57:52,569 INFO<br>
>>>>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>>>>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait<br>
>>>> lock<br>
>>>>>>>>>> EngineLock [exclusiveLocks= key:<br>
>>>> 00000001-0001-0001-0001-000000000300<br>
>>>>>>>>>> value: GLUSTER<br>
>>>>>>>>>>> , sharedLocks= ]<br>
>>>>>>>>>>> 2015-01-08 09:57:52,609 INFO<br>
>>>>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>>>>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait<br>
>>>> lock<br>
>>>>>>>>>> EngineLock [exclusiveLocks= key:<br>
>>>> 00000001-0001-0001-0001-000000000300<br>
>>>>>>>>>> value: GLUSTER<br>
>>>>>>>>>>> , sharedLocks= ]<br>
>>>>>>>>>>> 2015-01-08 09:57:55,582 INFO<br>
>>>>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>>>>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait<br>
>>>> lock<br>
>>>>>>>>>> EngineLock [exclusiveLocks= key:<br>
>>>> 00000001-0001-0001-0001-000000000300<br>
>>>>>>>>>> value: GLUSTER<br>
>>>>>>>>>>> , sharedLocks= ]<br>
>>>>>>>>>>> 2015-01-08 09:57:55,591 INFO<br>
>>>>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>>>>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait<br>
>>>> lock<br>
>>>>>>>>>> EngineLock [exclusiveLocks= key:<br>
>>>> 00000001-0001-0001-0001-000000000300<br>
>>>>>>>>>> value: GLUSTER<br>
>>>>>>>>>>> , sharedLocks= ]<br>
>>>>>>>>>>> 2015-01-08 09:57:55,596 INFO<br>
>>>>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>>>>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait<br>
>>>> lock<br>
>>>>>>>>>> EngineLock [exclusiveLocks= key:<br>
>>>> 00000001-0001-0001-0001-000000000300<br>
>>>>>>>>>> value: GLUSTER<br>
>>>>>>>>>>> , sharedLocks= ]<br>
>>>>>>>>>>> 2015-01-08 09:57:55,633 INFO<br>
>>>>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>
>>>>>>>>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait<br>
>>>> lock<br>
>>>>>>>>>> EngineLock [exclusiveLocks= key:<br>
>>>> 00000001-0001-0001-0001-000000000300<br>
>>>>>>>>>> value: GLUSTER<br>
>>>>>>>>>>> , sharedLocks= ]<br>
>>>>>>>>>>> ^C<br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>> <216 09-Jan-15.jpg><217 09-Jan-15.jpg><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>><br>
>>>>>><br>
>>>>><br>
>>>><br>
>>><br>
>>><br>
>>><br>
>>> _______________________________________________<br>
>>> Gluster-users mailing list<br>
>>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>>> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
>>><br>
>><br>
><br>
</div></div></blockquote></div><br></div>