Hi Atin,
It seems GlusterFS is not a good and stable product for storage use... I am
still facing the same issue. The oVirt wiki says oVirt has native support
for GlusterFS, but even when I try to use it from oVirt it fails. I have
kept trying to make it work for the last 7 days and it still fails, and no
one in the community has a clue about it. Maybe it's a bug in Gluster
3.6.1, but I also tried Gluster 3.5.3 and it's the same.
I hope someone from Gluster can help me get rid of these errors and make it
work.
Thanks,
Punit
On Thu, Jan 15, 2015 at 2:58 PM, Punit Dambiwal <hypunit(a)gmail.com> wrote:
Hi,
Can anyone help me here?
On Wed, Jan 14, 2015 at 3:25 PM, Punit Dambiwal <hypunit(a)gmail.com> wrote:
> Hi Donny,
>
> I am not using Gluster with an NFS mount... no volume has been created
> because of those errors.
>
> On Wed, Jan 14, 2015 at 9:47 AM, Donny Davis <donny(a)cloudspin.me> wrote:
>
>> And
>>
>>
>>
>> rpcbind is running
>>
>>
>>
>> Can you do a regular NFS mount of the gluster volume?
>>
>> gluster volume info {your volume name here}
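>>
>> For example, something along these lines should work as a quick test (a
>> rough sketch only; the volume name vol01 and the host
>> cpu02.zne01.hkg1.stack.com are just taken from elsewhere in this thread,
>> substitute your own):
>>
>>   mkdir -p /mnt/glustertest
>>   # Gluster's built-in NFS server speaks NFSv3 only
>>   mount -t nfs -o vers=3 cpu02.zne01.hkg1.stack.com:/vol01 /mnt/glustertest
>>   gluster volume info vol01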
>>
>>
>>
>>
>>
>> Just gathering intel to hopefully provide a solution. I just deployed
>> gluster with hosted engine today, and I did get some of the same errors as
>> you when I was bringing everything up.
>>
>> Did you follow a guide, or are you carving out your own path?
>>
>> Are you using Swift for anything? That is usually for OpenStack, to my
>> knowledge. I guess you could use it for oVirt, but I didn't.
>>
>>
>>
>> Donny D
>>
>>
>>
>> *From:* Punit Dambiwal [mailto:hypunit@gmail.com]
>> *Sent:* Tuesday, January 13, 2015 6:41 PM
>> *To:* Donny Davis
>> *Cc:* users(a)ovirt.org
>>
>> *Subject:* Re: [ovirt-users] Failed to create volume in OVirt with
>> gluster
>>
>>
>>
>> Hi Donny,
>>
>>
>>
>> No, I am not using CTDB... it's a totally new deployment.
>>
>>
>>
>> On Wed, Jan 14, 2015 at 1:50 AM, Donny Davis <donny(a)cloudspin.me> wrote:
>>
>> Are you using CTDB? And did you specify Lock=False in
>> /etc/nfsmount.conf?
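>>
>> (For reference, that setting would go into /etc/nfsmount.conf roughly
>> like this; just a sketch of the stock nfs-utils config format:)
>>
>>   [ NFSMount_Global_Options ]
>>   Lock=False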
>>
>>
>>
>> Can you give a full rundown of the topology, and has this ever worked,
>> or is it a new deployment?
>>
>>
>>
>> Donny D
>>
>>
>>
>> *From:* users-bounces(a)ovirt.org [mailto:users-bounces@ovirt.org] *On
>> Behalf Of *Punit Dambiwal
>> *Sent:* Monday, January 12, 2015 7:18 PM
>> *To:* Kanagaraj Mayilsamy
>> *Cc:* gluster-users(a)gluster.org; Kaushal M; users(a)ovirt.org
>> *Subject:* Re: [ovirt-users] Failed to create volume in OVirt with
>> gluster
>>
>>
>>
>> Hi,
>>
>>
>>
>> Please find more details on this below... can anybody from Gluster
>> help me here? :-
>>
>>
>>
>>
>>
>> Gluster CLI Logs :- /var/log/glusterfs/cli.log
>>
>>
>>
>> [2015-01-13 02:06:23.071969] T [cli.c:264:cli_rpc_notify] 0-glusterfs:
>> got RPC_CLNT_CONNECT
>>
>> [2015-01-13 02:06:23.072012] T
>> [cli-quotad-client.c:94:cli_quotad_notify] 0-glusterfs: got RPC_CLNT_CONNECT
>>
>> [2015-01-13 02:06:23.072024] I [socket.c:2344:socket_event_handler]
>> 0-transport: disconnecting now
>>
>> [2015-01-13 02:06:23.072055] T
>> [cli-quotad-client.c:100:cli_quotad_notify] 0-glusterfs: got
>> RPC_CLNT_DISCONNECT
>>
>> [2015-01-13 02:06:23.072131] T [rpc-clnt.c:1381:rpc_clnt_record]
>> 0-glusterfs: Auth Info: pid: 0, uid: 0, gid: 0, owner:
>>
>> [2015-01-13 02:06:23.072176] T
>> [rpc-clnt.c:1238:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen
>> 128, payload: 64, rpc hdr: 64
>>
>> [2015-01-13 02:06:23.072572] T [socket.c:2863:socket_connect] (-->
>> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fed02f15420] (-->
>> /usr/lib64/glusterfs/3.6.1/rpc-transport/socket.so(+0x7293)[0x7fed001a4293]
>> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_submit+0x468)[0x7fed0266df98] (-->
>> /usr/sbin/gluster(cli_submit_request+0xdb)[0x40a9bb] (-->
>> /usr/sbin/gluster(cli_cmd_submit+0x8e)[0x40b7be] ))))) 0-glusterfs: connect
>> () called on transport already connected
>>
>> [2015-01-13 02:06:23.072616] T [rpc-clnt.c:1573:rpc_clnt_submit]
>> 0-rpc-clnt: submitted request (XID: 0x1 Program: Gluster CLI, ProgVers: 2,
>> Proc: 27) to rpc-transport (glusterfs)
>>
>> [2015-01-13 02:06:23.072633] D [rpc-clnt-ping.c:231:rpc_clnt_start_ping]
>> 0-glusterfs: ping timeout is 0, returning
>>
>> [2015-01-13 02:06:23.075930] T [rpc-clnt.c:660:rpc_clnt_reply_init]
>> 0-glusterfs: received rpc message (RPC XID: 0x1 Program: Gluster CLI,
>> ProgVers: 2, Proc: 27) from rpc-transport (glusterfs)
>>
>> [2015-01-13 02:06:23.075976] D [cli-rpc-ops.c:6548:gf_cli_status_cbk]
>> 0-cli: Received response to status cmd
>>
>> [2015-01-13 02:06:23.076025] D [cli-cmd.c:384:cli_cmd_submit] 0-cli:
>> Returning 0
>>
>> [2015-01-13 02:06:23.076049] D [cli-rpc-ops.c:6811:gf_cli_status_volume]
>> 0-cli: Returning: 0
>>
>> [2015-01-13 02:06:23.076192] D
>> [cli-xml-output.c:84:cli_begin_xml_output] 0-cli: Returning 0
>>
>> [2015-01-13 02:06:23.076244] D
>> [cli-xml-output.c:131:cli_xml_output_common] 0-cli: Returning 0
>>
>> [2015-01-13 02:06:23.076256] D
>> [cli-xml-output.c:1375:cli_xml_output_vol_status_begin] 0-cli: Returning 0
>>
>> [2015-01-13 02:06:23.076437] D [cli-xml-output.c:108:cli_end_xml_output]
>> 0-cli: Returning 0
>>
>> [2015-01-13 02:06:23.076459] D
>> [cli-xml-output.c:1398:cli_xml_output_vol_status_end] 0-cli: Returning 0
>>
>> [2015-01-13 02:06:23.076490] I [input.c:36:cli_batch] 0-: Exiting with: 0
>>
>>
>>
>> Command log :- /var/log/glusterfs/.cmd_log_history
>>
>>
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> [2015-01-13 01:10:35.836676] : volume status all tasks : FAILED :
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> [2015-01-13 01:16:25.956514] : volume status all tasks : FAILED :
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> [2015-01-13 01:17:36.977833] : volume status all tasks : FAILED :
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> [2015-01-13 01:21:07.048053] : volume status all tasks : FAILED :
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> [2015-01-13 01:26:57.168661] : volume status all tasks : FAILED :
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> [2015-01-13 01:28:07.194428] : volume status all tasks : FAILED :
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> [2015-01-13 01:30:27.256667] : volume status vol01 : FAILED : Locking
>> failed on cpu02.zne01.hkg1.stack.com. Please check log file for details.
>>
>> Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for
>> details.
>>
>> Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for
>> details.
>>
>> [2015-01-13 01:34:58.350748] : volume status all tasks : FAILED :
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> [2015-01-13 01:36:08.375326] : volume status all tasks : FAILED :
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> [2015-01-13 01:36:08.386470] : volume status vol01 : FAILED : Locking
>> failed on cpu02.zne01.hkg1.stack.com. Please check log file for details.
>>
>> Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for
>> details.
>>
>> Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for
>> details.
>>
>> [2015-01-13 01:42:59.524215] : volume stop vol01 : FAILED : Locking
>> failed on cpu02.zne01.hkg1.stack.com. Please check log file for details.
>>
>> Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for
>> details.
>>
>> Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for
>> details.
>>
>> [2015-01-13 01:45:10.550659] : volume status all tasks : FAILED :
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> Staging failed on 00000000-0000-0000-0000-000000000000. Please check log
>> file for details.
>>
>> [2015-01-13 01:46:10.656802] : volume status all tasks : SUCCESS
>>
>> [2015-01-13 01:51:02.796031] : volume status all tasks : SUCCESS
>>
>> [2015-01-13 01:52:02.897804] : volume status all tasks : SUCCESS
>>
>> [2015-01-13 01:55:25.841070] : system:: uuid get : SUCCESS
>>
>> [2015-01-13 01:55:26.752084] : system:: uuid get : SUCCESS
>>
>> [2015-01-13 01:55:32.499049] : system:: uuid get : SUCCESS
>>
>> [2015-01-13 01:55:38.716907] : system:: uuid get : SUCCESS
>>
>> [2015-01-13 01:56:52.905899] : volume status all tasks : SUCCESS
>>
>> [2015-01-13 01:58:53.109613] : volume status all tasks : SUCCESS
>>
>> [2015-01-13 02:03:26.769430] : system:: uuid get : SUCCESS
>>
>> [2015-01-13 02:04:22.859213] : volume status all tasks : SUCCESS
>>
>> [2015-01-13 02:05:22.970393] : volume status all tasks : SUCCESS
>>
>> [2015-01-13 02:06:23.075823] : volume status all tasks : SUCCESS
>>
>>
>>
>>
>>
>> On Mon, Jan 12, 2015 at 10:53 PM, Kanagaraj Mayilsamy <
>> kmayilsa(a)redhat.com> wrote:
>>
>> I can see the failures in the glusterd log.
>>
>> Can someone from glusterfs dev please help with this?
>>
>> Thanks,
>> Kanagaraj
>>
>>
>> ----- Original Message -----
>> > From: "Punit Dambiwal" <hypunit(a)gmail.com>
>> > To: "Kanagaraj" <kmayilsa(a)redhat.com>
>> > Cc: "Martin Pavlík" <mpavlik(a)redhat.com>, "Vijay
Bellur" <
>> vbellur(a)redhat.com>, "Kaushal M" <kshlmster(a)gmail.com>,
>> > users(a)ovirt.org, gluster-users(a)gluster.org
>> > Sent: Monday, January 12, 2015 3:36:43 PM
>> > Subject: Re: Failed to create volume in OVirt with gluster
>> >
>> > Hi Kanagaraj,
>> >
>> > Please find the logs from here :- http://ur1.ca/jeszc
>> >
>> > [image: Inline image 1]
>> >
>> > [image: Inline image 2]
>> >
>> > On Mon, Jan 12, 2015 at 1:02 PM, Kanagaraj <kmayilsa(a)redhat.com>
>> wrote:
>> >
>> > > Looks like there are some failures in gluster.
>> > > Can you send the log output from the glusterd log file on the relevant
>> > > hosts?
>> > >
>> > > Thanks,
>> > > Kanagaraj
>> > >
>> > >
>> > > On 01/12/2015 10:24 AM, Punit Dambiwal wrote:
>> > >
>> > > Hi,
>> > >
>> > > Is there anyone from Gluster who can help me here? :-
>> > >
>> > > Engine logs :-
>> > >
>> > > 2015-01-12 12:50:33,841 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:34,725 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:36,824 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:36,853 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:36,866 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:37,751 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:39,849 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:39,878 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:39,890 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:40,776 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:42,878 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:42,903 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:42,916 INFO
>> > > [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > > (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait
>> lock
>> > > EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300
>> > > value: GLUSTER
>> > > , sharedLocks= ]
>> > > 2015-01-12 12:50:43,771 INFO
>> > >
>> [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
>> > > (ajp--127.0.0.1-8702-1) [330ace48] FINISH,
>> CreateGlusterVolumeVDSCommand,
>> > > log id: 303e70a4
>> > > 2015-01-12 12:50:43,780 ERROR
>> > >
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> > > (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID:
>> > > 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event
>> ID:
>> > > -1, Message: Creation of Gluster Volume vol01 failed.
>> > > 2015-01-12 12:50:43,785 INFO
>> > > [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
>> > > (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object EngineLock
>> > > [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value:
>> GLUSTER
>> > > , sharedLocks= ]
>> > >
>> > > [image: Inline image 2]
>> > >
>> > >
>> > > On Sun, Jan 11, 2015 at 6:48 PM, Martin Pavlík <mpavlik(a)redhat.com> wrote:
>> > >
>> > >> Hi Punit,
>> > >>
>> > >> unfortunately I'm not that good with Gluster, I was just following
>> > >> the obvious clue from the log. Could you try on the nodes whether the
>> > >> packages are even available for installation:
>> > >>
>> > >> yum install gluster-swift gluster-swift-object gluster-swift-plugin
>> > >> gluster-swift-account gluster-swift-proxy gluster-swift-doc
>> > >> gluster-swift-container glusterfs-geo-replication
>> > >>
>> > >> if not you could try to get them in the official gluster repo:
>> > >>
>> > >> http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs...
>> > >>
>> > >> HTH
>> > >>
>> > >> M.
>> > >>
>> > >>
>> > >>
>> > >>
>> > >> On 10 Jan 2015, at 04:35, Punit Dambiwal <hypunit(a)gmail.com> wrote:
>> > >>
>> > >> Hi Martin,
>> > >>
>> > >> I installed Gluster from the oVirt repo... is it required to install
>> > >> those packages manually?
>> > >>
>> > >> On Fri, Jan 9, 2015 at 7:19 PM, Martin Pavlík <mpavlik(a)redhat.com> wrote:
>> > >>
>> > >>> Hi Punit,
>> > >>>
>> > >>> can you verify that the nodes contain the gluster packages from the
>> > >>> following log?
>> > >>>
>> > >>> Thread-14::DEBUG::2015-01-09
>> > >>> 18:06:28,823::caps::716::root::(_getKeyPackages) rpm package
>> > >>> ('gluster-swift',) not found
>> > >>> Thread-14::DEBUG::2015-01-09
>> > >>> 18:06:28,825::caps::716::root::(_getKeyPackages) rpm package
>> > >>> ('gluster-swift-object',) not found
>> > >>> Thread-14::DEBUG::2015-01-09
>> > >>> 18:06:28,826::caps::716::root::(_getKeyPackages) rpm package
>> > >>> ('gluster-swift-plugin',) not found
>> > >>> Thread-14::DEBUG::2015-01-09
>> > >>> 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package
>> > >>> ('gluster-swift-account',) not found
>> > >>> Thread-14::DEBUG::2015-01-09
>> > >>> 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package
>> > >>> ('gluster-swift-proxy',) not found
>> > >>> Thread-14::DEBUG::2015-01-09
>> > >>> 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package
>> > >>> ('gluster-swift-doc',) not found
>> > >>> Thread-14::DEBUG::2015-01-09
>> > >>> 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package
>> > >>> ('gluster-swift-container',) not found
>> > >>> Thread-14::DEBUG::2015-01-09
>> > >>> 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package
>> > >>> ('glusterfs-geo-replication',) not found
>> > >>>
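>> > >>> (If it helps, a quick way to check for these directly on each node
>> > >>> would be something like the command below; just a sketch, with the
>> > >>> package names taken from the log above:)
>> > >>>
>> > >>>   rpm -q gluster-swift gluster-swift-object gluster-swift-plugin \
>> > >>>       gluster-swift-account gluster-swift-proxy gluster-swift-doc \
>> > >>>       gluster-swift-container glusterfs-geo-replication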
>> > >>>
>> > >>> M.
>> > >>>
>> > >>> On 09 Jan 2015, at 11:13, Punit Dambiwal <hypunit(a)gmail.com> wrote:
>> > >>>
>> > >>> Hi Kanagaraj,
>> > >>>
>> > >>> Please find the attached logs :-
>> > >>>
>> > >>> Engine Logs :- http://ur1.ca/jdopt
>> > >>> VDSM Logs :- http://ur1.ca/jdoq9
>> > >>>
>> > >>>
>> > >>>
>> > >>> On Thu, Jan 8, 2015 at 6:05 PM, Kanagaraj <kmayilsa(a)redhat.com> wrote:
>> > >>>
>> > >>>> Do you see any errors in the UI?
>> > >>>>
>> > >>>> Also please provide the engine.log and vdsm.log from when the failure
>> > >>>> occurred.
>> > >>>>
>> > >>>> Thanks,
>> > >>>> Kanagaraj
>> > >>>>
>> > >>>>
>> > >>>> On 01/08/2015 02:25 PM, Punit Dambiwal wrote:
>> > >>>>
>> > >>>> Hi Martin,
>> > >>>>
>> > >>>> The steps are below :-
>> > >>>>
>> > >>>> 1. Set up the oVirt engine on one server...
>> > >>>> 2. Installed CentOS 7 on the 4 host node servers...
>> > >>>> 3. I am using the hosts as combined compute+storage nodes... I have
>> > >>>> added all 4 nodes to the engine...
>> > >>>> 4. Created the gluster volume from the GUI (rough CLI sketch below)...
>> > >>>>
>> > >>>> Network :-
>> > >>>> eth0 :- public network (1G)
>> > >>>> eth1+eth2=bond0= VM public network (1G)
>> > >>>> eth3+eth4=bond1=ovirtmgmt+storage (10G private network)
>> > >>>>
>> > >>>> Every host node has 24 bricks (24*4 in total, distributed-replicated)
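>> > >>>>
>> > >>>> (Roughly, the CLI equivalent of step 4 would look like the sketch
>> > >>>> below, assuming a replica count of 2; the brick paths and the cpu01
>> > >>>> hostname are made up for illustration, and only one brick per host is
>> > >>>> shown instead of the full 24:)
>> > >>>>
>> > >>>>   gluster volume create vol01 replica 2 \
>> > >>>>     cpu01.zne01.hkg1.stack.com:/bricks/b01/vol01 \
>> > >>>>     cpu02.zne01.hkg1.stack.com:/bricks/b01/vol01 \
>> > >>>>     cpu03.zne01.hkg1.stack.com:/bricks/b01/vol01 \
>> > >>>>     cpu04.zne01.hkg1.stack.com:/bricks/b01/vol01
>> > >>>>   gluster volume start vol01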
>> > >>>>
>> > >>>> Thanks,
>> > >>>> Punit
>> > >>>>
>> > >>>>
>> > >>>> On Thu, Jan 8, 2015 at 3:20 PM, Martin Pavlík <mpavlik(a)redhat.com> wrote:
>> > >>>>
>> > >>>>> Hi Punit,
>> > >>>>>
>> > >>>>> can you please also provide the errors from /var/log/vdsm/vdsm.log and
>> > >>>>> /var/log/vdsm/vdsmd.log
>> > >>>>>
>> > >>>>> it would be really helpful if you provided the exact steps to
>> > >>>>> reproduce the problem.
>> > >>>>>
>> > >>>>> regards
>> > >>>>>
>> > >>>>> Martin Pavlik - rhev QE
>> > >>>>> > On 08 Jan 2015, at 03:06, Punit Dambiwal <hypunit(a)gmail.com> wrote:
>> > >>>>> >
>> > >>>>> > Hi,
>> > >>>>> >
>> > >>>>> > I tried to add a gluster volume but it failed...
>> > >>>>> >
>> > >>>>> > Ovirt :- 3.5
>> > >>>>> > VDSM :- vdsm-4.16.7-1.gitdb83943.el7
>> > >>>>> > KVM :- 1.5.3 - 60.el7_0.2
>> > >>>>> > libvirt-1.1.1-29.el7_0.4
>> > >>>>> > Glusterfs :- glusterfs-3.5.3-1.el7
>> > >>>>> >
>> > >>>>> > Engine Logs :-
>> > >>>>> >
>> > >>>>> > 2015-01-08 09:57:52,569 INFO
>> > >>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > >>>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock
>> > >>>>> EngineLock [exclusiveLocks= key:
>> 00000001-0001-0001-0001-000000000300
>> > >>>>> value: GLUSTER
>> > >>>>> > , sharedLocks= ]
>> > >>>>> > 2015-01-08 09:57:52,609 INFO
>> > >>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > >>>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock
>> > >>>>> EngineLock [exclusiveLocks= key:
>> 00000001-0001-0001-0001-000000000300
>> > >>>>> value: GLUSTER
>> > >>>>> > , sharedLocks= ]
>> > >>>>> > 2015-01-08 09:57:55,582 INFO
>> > >>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > >>>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock
>> > >>>>> EngineLock [exclusiveLocks= key:
>> 00000001-0001-0001-0001-000000000300
>> > >>>>> value: GLUSTER
>> > >>>>> > , sharedLocks= ]
>> > >>>>> > 2015-01-08 09:57:55,591 INFO
>> > >>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > >>>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock
>> > >>>>> EngineLock [exclusiveLocks= key:
>> 00000001-0001-0001-0001-000000000300
>> > >>>>> value: GLUSTER
>> > >>>>> > , sharedLocks= ]
>> > >>>>> > 2015-01-08 09:57:55,596 INFO
>> > >>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > >>>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock
>> > >>>>> EngineLock [exclusiveLocks= key:
>> 00000001-0001-0001-0001-000000000300
>> > >>>>> value: GLUSTER
>> > >>>>> > , sharedLocks= ]
>> > >>>>> > 2015-01-08 09:57:55,633 INFO
>> > >>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> > >>>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock
>> > >>>>> EngineLock [exclusiveLocks= key:
>> 00000001-0001-0001-0001-000000000300
>> > >>>>> value: GLUSTER
>> > >>>>> > , sharedLocks= ]
>> > >>>>> > ^C
>> > >>>>> >
>> > >>>>> >
>> > >>>>>
>> > >>>>>
>> > >>>>
>> > >>>>
>> > >>> <216 09-Jan-15.jpg><217 09-Jan-15.jpg>
>> > >>>
>> > >>>
>> > >>>
>> > >>
>> > >>
>> > >
>> > >
>> >
>>
>>
>>
>>
>>
>
>