Failed to create volume in oVirt with Gluster

Hi,

I tried to add a Gluster volume but it failed...

oVirt :- 3.5
VDSM :- vdsm-4.16.7-1.gitdb83943.el7
KVM :- 1.5.3 - 60.el7_0.2
libvirt :- libvirt-1.1.1-29.el7_0.4
GlusterFS :- glusterfs-3.5.3-1.el7

Engine Logs :-

2015-01-08 09:57:52,569 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER, sharedLocks= ]
2015-01-08 09:57:52,609 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER, sharedLocks= ]
2015-01-08 09:57:55,582 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER, sharedLocks= ]
2015-01-08 09:57:55,591 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER, sharedLocks= ]
2015-01-08 09:57:55,596 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER, sharedLocks= ]
2015-01-08 09:57:55,633 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER, sharedLocks= ]
^C
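For anyone reproducing this, the recurring lock messages can be isolated with a plain grep on the engine host; the log path below is the oVirt default and may differ on a custom install:

    # Follow the engine log and show only the InMemoryLockManager failures
    # (default engine.log location on an oVirt 3.5 engine host)
    tail -f /var/log/ovirt-engine/engine.log | grep 'Failed to acquire lock'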

Hi Punit,

can you please also provide the errors from /var/log/vdsm/vdsm.log and /var/log/vdsm/vdsmd.log? It would be really helpful if you provided the exact steps to reproduce the problem.

regards

Martin Pavlik - RHEV QE
On 08 Jan 2015, at 03:06, Punit Dambiwal <hypunit@gmail.com> wrote:
[original report and engine log snipped; quoted in full at the top of the thread]

Hi Martin,

The steps are below :-

1. Set up the oVirt engine on one server...
2. Installed CentOS 7 on 4 host node servers..
3. I am using the host nodes as compute+storage....now I have added all 4 nodes to the engine...
4. Create the gluster volume from the GUI...

Network :-
eth0 :- public network (1G)
eth1+eth2 = bond0 = VM public network (1G)
eth3+eth4 = bond1 = ovirtmgmt+storage (10G private network)

Every host node has 24 bricks, so 24*4 = 96 bricks in total (distributed replicated).

Thanks,
Punit
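For readers following along, a volume of this shape would correspond roughly to the following Gluster CLI call. This is only a sketch: the volume name, brick paths, and replica count are hypothetical, since the thread never states them.

    # Hypothetical sketch of what the GUI volume-create corresponds to on the CLI:
    # a distribute-replicate volume; bricks are consumed in replica pairs.
    gluster volume create vol01 replica 2 \
        node1:/bricks/b01 node2:/bricks/b01 \
        node3:/bricks/b01 node4:/bricks/b01
    # ...in the real setup the brick list would continue through b24 on each node
    gluster volume start vol01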
On Thu, Jan 8, 2015 at 3:20 PM, Martin Pavlík <mpavlik@redhat.com> wrote:

Hi Punit,
can you please also provide the errors from /var/log/vdsm/vdsm.log and /var/log/vdsm/vdsmd.log?
It would be really helpful if you provided the exact steps to reproduce the problem.
regards
Martin Pavlik - RHEV QE
On 08 Jan 2015, at 03:06, Punit Dambiwal <hypunit@gmail.com> wrote:
[original report and engine log snipped; quoted in full at the top of the thread]

Do you see any errors in the UI?

Also please provide the engine.log and vdsm.log from when the failure occurred.

Thanks,
Kanagaraj

On 01/08/2015 02:25 PM, Punit Dambiwal wrote:
[quoted thread snipped]

Hi Kanagaraj,

Please find the attached logs :-

Engine Logs :- http://ur1.ca/jdopt
VDSM Logs :- http://ur1.ca/jdoq9
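As an aside, when sharing VDSM logs it can help to pull out just the failure window first; a minimal sketch, assuming the default VDSM log path mentioned earlier in the thread:

    # Show the most recent errors/warnings/tracebacks around the failed volume create
    grep -Ei 'error|warn|traceback' /var/log/vdsm/vdsm.log | tail -n 50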
On Thu, Jan 8, 2015 at 6:05 PM, Kanagaraj <kmayilsa@redhat.com> wrote:

Do you see any errors in the UI?
Also please provide the engine.log and vdsm.log from when the failure occurred.
Thanks, Kanagaraj
On 01/08/2015 02:25 PM, Punit Dambiwal wrote:
[earlier thread snipped]

Hi Punit,

can you verify that the nodes contain the gluster packages from the following log?

Thread-14::DEBUG::2015-01-09 18:06:28,823::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,825::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,826::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found

M.
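A direct way to check this on each node is a plain rpm query for exactly the packages VDSM probes for:

    # On each host node: which of the key gluster packages are installed?
    rpm -q gluster-swift gluster-swift-object gluster-swift-plugin \
        gluster-swift-account gluster-swift-proxy gluster-swift-doc \
        gluster-swift-container glusterfs-geo-replication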
On 09 Jan 2015, at 11:13, Punit Dambiwal <hypunit@gmail.com> wrote:

[quoted thread snipped; attachments: <216 09-Jan-15.jpg> <217 09-Jan-15.jpg>]

Hi Martin,

I installed gluster from the oVirt repo....is it required to install those packages manually??

On Fri, Jan 9, 2015 at 7:19 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
can you verify that the nodes contain the gluster packages from the following log?
[quoted log and earlier thread snipped]

Hi Punit,

unfortunately I'm not that good with gluster, I was just following the obvious clue from the log. Could you try on the nodes whether the packages are even available for installation:

yum install gluster-swift gluster-swift-object gluster-swift-plugin gluster-swift-account gluster-swift-proxy gluster-swift-doc gluster-swift-container glusterfs-geo-replication

If not, you could try to get them from the official gluster repo: http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

HTH

M.
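If the upstream repo isn't configured yet, a minimal sketch of enabling it first and then installing the packages (using the repo URL above; run as root on each node) would be:

    # Enable the upstream GlusterFS repo, then install the missing packages
    curl -o /etc/yum.repos.d/glusterfs-epel.repo \
        http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
    yum install gluster-swift gluster-swift-object gluster-swift-plugin \
        gluster-swift-account gluster-swift-proxy gluster-swift-doc \
        gluster-swift-container glusterfs-geo-replication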
On 10 Jan 2015, at 04:35, Punit Dambiwal <hypunit@gmail.com> wrote:

Hi Martin,

I installed gluster from the oVirt repo....is it required to install those packages manually??
[earlier thread snipped]

acquire lock and wait lock EngineLock [exclusiveLocks=3D key: 00000001-0001-0001-0001-000000000300 value: GLUSTER<br = class=3D""> > , sharedLocks=3D ]<br class=3D""> > ^C<br class=3D""> ><br class=3D""> ><br class=3D""> <br class=3D""> </div> </div> </blockquote> </div> <br class=3D""> </div> </blockquote> <br class=3D""> </div></div></div> </blockquote></div><br class=3D""></div> </div></div><span class=3D""><216 09-Jan-15.jpg></span><span = class=3D""><217 09-Jan-15.jpg></span></div></blockquote></div><br = class=3D""></div></blockquote></div><br class=3D""></div> </div></blockquote></div><br class=3D""></div></body></html>= --Apple-Mail=_85F39CB7-E7A3-490B-8F0F-71B4F0A5A05C--

Hi,

Is there anyone from gluster who can help me here :-

Engine logs :-

2015-01-12 12:50:33,841 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:34,725 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,824 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,853 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,866 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:37,751 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,849 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,890 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:40,776 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,903 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,916 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:43,771 INFO [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] (ajp--127.0.0.1-8702-1) [330ace48] FINISH, CreateGlusterVolumeVDSCommand, log id: 303e70a4
2015-01-12 12:50:43,780 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID: 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event ID: -1, Message: Creation of Gluster Volume vol01 failed.
2015-01-12 12:50:43,785 INFO [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]

[image: Inline image 2]

On Sun, Jan 11, 2015 at 6:48 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
unfortunately I'm not that good with gluster, I was just following the obvious clue from the log. Could you try on the nodes whether the packages are even available for installation?
yum install gluster-swift gluster-swift-object gluster-swift-plugin gluster-swift-account gluster-swift-proxy gluster-swift-doc gluster-swift-container glusterfs-geo-replication
If not, you could try to get them from the official gluster repo: http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
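A minimal sketch of what that could look like on a node, assuming CentOS 7 and the repo file above:

  # drop the repo file into yum's config dir (same URL as above)
  wget -O /etc/yum.repos.d/glusterfs-epel.repo \
      http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
  yum clean metadata                                                # refresh repo metadata
  yum list available 'gluster-swift*' glusterfs-geo-replication    # check the packages are now visible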
HTH
M.
On 10 Jan 2015, at 04:35, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Martin,
I installed gluster from the ovirt repo... is it required to install those packages manually?
On Fri, Jan 9, 2015 at 7:19 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
can you verify from the following log that the nodes contain the gluster packages?
Thread-14::DEBUG::2015-01-09 18:06:28,823::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift',) not found Thread-14::DEBUG::2015-01-09 18:06:28,825::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found Thread-14::DEBUG::2015-01-09 18:06:28,826::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found
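The same check can be run by hand on each node with plain rpm queries — a minimal sketch, package names taken from the log above:

  for p in gluster-swift gluster-swift-object gluster-swift-plugin \
           gluster-swift-account gluster-swift-proxy gluster-swift-doc \
           gluster-swift-container glusterfs-geo-replication; do
      rpm -q "$p" || echo "$p is MISSING"    # rpm -q exits non-zero when a package is not installed
  done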
M.
On 09 Jan 2015, at 11:13, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kanagaraj,
Please find the attached logs :-
Engine Logs :- http://ur1.ca/jdopt
VDSM Logs :- http://ur1.ca/jdoq9
On Thu, Jan 8, 2015 at 6:05 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Do you see any errors in the UI?
Also please provide the engine.log and vdsm.log from when the failure occurred.
Thanks, Kanagaraj
On 01/08/2015 02:25 PM, Punit Dambiwal wrote:
Hi Martin,
The steps are below :-
1. Set up the ovirt engine on one server...
2. Installed CentOS 7 on 4 host node servers..
3. I am using the host nodes as compute+storage... now I have added all 4 nodes to the engine...
4. Create the gluster volume from the GUI...
Network :-
eth0 :- public network (1G)
eth1+eth2 = bond0 = VM public network (1G)
eth3+eth4 = bond1 = ovirtmgmt+storage (10G private network)
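As an aside, on CentOS 7 a bond like bond1 above usually comes down to ifcfg files along these lines — a sketch only, the bonding mode is a placeholder to adjust, and in an ovirt setup the engine/VDSM would normally generate these:

  # /etc/sysconfig/network-scripts/ifcfg-bond1  (hypothetical)
  DEVICE=bond1
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=802.3ad miimon=100"
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eth3   (and the same for eth4)
  DEVICE=eth3
  MASTER=bond1
  SLAVE=yes
  ONBOOT=yes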
every host node has 24 bricks = 24*4 (distributed replicated)
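For illustration only — host names and brick paths below are hypothetical — trying the same kind of layout straight from the gluster CLI can help tell a gluster problem from an ovirt one. With replica 2, consecutive bricks in the list form the replica pairs, so they should sit on different hosts:

  gluster peer status          # every other node should report: State: Peer in Cluster (Connected)
  gluster volume create vol01 replica 2 \
      host1:/bricks/b1 host2:/bricks/b1 \
      host3:/bricks/b1 host4:/bricks/b1
  gluster volume start vol01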
Thanks, Punit
On Thu, Jan 8, 2015 at 3:20 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
can you please provide also errors from /var/log/vdsm/vdsm.log and /var/log/vdsm/vdsmd.log
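A quick, generic way to pull just the failures out of those files (the grep patterns are only a guess, adjust as needed):

  grep -iE 'error|traceback|failed' /var/log/vdsm/vdsm.log | tail -n 50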
it would be really helpful if you provided the exact steps to reproduce the problem.
regards
Martin Pavlik - rhev QE
On 08 Jan 2015, at 03:06, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi,
I try to add gluster volume but it failed...
Ovirt :- 3.5 VDSM :- vdsm-4.16.7-1.gitdb83943.el7 KVM :- 1.5.3 - 60.el7_0.2 libvirt-1.1.1-29.el7_0.4 Glusterfs :- glusterfs-3.5.3-1.el7
Engine Logs :-
2015-01-08 09:57:52,569 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-08 09:57:52,609 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-08 09:57:55,582 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-08 09:57:55,591 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-08 09:57:55,596 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-08 09:57:55,633 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
^C
<216 09-Jan-15.jpg><217 09-Jan-15.jpg>

Looks like there are some failures in gluster. Can you send the log output from the glusterd log file on the relevant hosts?

Thanks,
Kanagaraj

On 01/12/2015 10:24 AM, Punit Dambiwal wrote:
Hi,
Is there anyone from gluster who can help me here :-
Engine logs :-
2015-01-12 12:50:33,841 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:34,725 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,824 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,853 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,866 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:37,751 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,849 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,890 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:40,776 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,903 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,916 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:43,771 INFO [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] (ajp--127.0.0.1-8702-1) [330ace48] FINISH, CreateGlusterVolumeVDSCommand, log id: 303e70a4
2015-01-12 12:50:43,780 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID: 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event ID: -1, Message: Creation of Gluster Volume vol01 failed.
2015-01-12 12:50:43,785 INFO [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
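Since the engine log only reports that the creation failed, the detail will be on the gluster side. For reference, glusterd normally writes its log under /var/log/glusterfs/ on each host (the file is named after the .vol file), and it may also be worth checking whether the volume got half-created on some nodes, which can block a retry:

  tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
  gluster volume info vol01    # run on every host; output should match everywhere, or the volume should not exist at all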
Inline image 2
On Sun, Jan 11, 2015 at 6:48 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
unfortunately I'm not that good with gluster, I was just following the obvious clue from the log. Could you try on the nodes whether the packages are even available for installation?
yum install gluster-swift gluster-swift-object gluster-swift-plugin gluster-swift-account gluster-swift-proxy gluster-swift-doc gluster-swift-container glusterfs-geo-replication
If not, you could try to get them from the official gluster repo: http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
HTH
M.
On 10 Jan 2015, at 04:35, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Martin,
I installed gluster from the ovirt repo... is it required to install those packages manually?
On Fri, Jan 9, 2015 at 7:19 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
can you verify from the following log that the nodes contain the gluster packages?
Thread-14::DEBUG::2015-01-09 18:06:28,823::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift',) not found Thread-14::DEBUG::2015-01-09 18:06:28,825::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found Thread-14::DEBUG::2015-01-09 18:06:28,826::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found
M.
On 09 Jan 2015, at 11:13, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kanagaraj,
Please find the attached logs :-
Engine Logs :- http://ur1.ca/jdopt
VDSM Logs :- http://ur1.ca/jdoq9
On Thu, Jan 8, 2015 at 6:05 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Do you see any errors in the UI?
Also please provide the engine.log and vdsm.log from when the failure occurred.
Thanks, Kanagaraj
On 01/08/2015 02:25 PM, Punit Dambiwal wrote:
Hi Martin,
The steps are below :-
1. Set up the ovirt engine on one server...
2. Installed CentOS 7 on 4 host node servers..
3. I am using the host nodes as compute+storage... now I have added all 4 nodes to the engine...
4. Create the gluster volume from the GUI...
Network :-
eth0 :- public network (1G)
eth1+eth2 = bond0 = VM public network (1G)
eth3+eth4 = bond1 = ovirtmgmt+storage (10G private network)
every host node has 24 bricks = 24*4 (distributed replicated)
Thanks, Punit
On Thu, Jan 8, 2015 at 3:20 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
can you please provide also errors from /var/log/vdsm/vdsm.log and /var/log/vdsm/vdsmd.log
it would be really helpful if you provided the exact steps to reproduce the problem.
regards
Martin Pavlik - rhev QE

> On 08 Jan 2015, at 03:06, Punit Dambiwal <hypunit@gmail.com> wrote:
>
> Hi,
>
> I try to add gluster volume but it failed...
>
> Ovirt :- 3.5
> VDSM :- vdsm-4.16.7-1.gitdb83943.el7
> KVM :- 1.5.3 - 60.el7_0.2
> libvirt-1.1.1-29.el7_0.4
> Glusterfs :- glusterfs-3.5.3-1.el7
>
> Engine Logs :-
>
> 2015-01-08 09:57:52,569 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
> 2015-01-08 09:57:52,609 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
> 2015-01-08 09:57:55,582 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
> 2015-01-08 09:57:55,591 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
> 2015-01-08 09:57:55,596 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
> 2015-01-08 09:57:55,633 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
> ^C
<216 09-Jan-15.jpg><217 09-Jan-15.jpg>
DuTebdoWgcoegI0tglgEtAC9zBE8QG37TO4AN1U2Ly2QFkgLDCoLxAvDGi3ei/0oxod6kv/S Av3MAgls+1mHzam6MTNrL2gJWIIZQItipla+NAA3wC7+vGOfU4snf1ogLZAW6B8W8EWbiPk0 bge0Gfv7Rx+mlv9tgQS2/22TAZUiOAlWQY4FMnfnPv8l36/MjR07tv64hUA3J9T16wcz+0rC nMhM3rRAWiAtkBbofQv4pKbND+/4cR+fvPTpRuOCF8tiAqT3Ncka0gI9Z4EG2Jq58/hhMO57 zpDzV9LM+6+jwwxsC9jGHbhZWj/E4Vfa7rnnnvodWoDWDK3Z3FiWEI+lZtW2BLazsk7mpQXS AmmBvmsBP5FuE/N9o95PwvspewD3v5ehzeyF4JmPP4MTV6Q95ne/D3thhi2cdmDtp0/3U7R+ /nVo86MRE+vd6JQpfjwi2tl3g86caRbtefF+yNDmp4WbG5fJzY8uuBOfMGF8/bng++67rwLb +CWy5ZZbriyxxBJ1aUKsrTW76zgpLZAWSAukBfqfBbo+rYvJjWiJGVlP7kx0PPzww/WHmPzS qB/m8XPuCzQ/mW7SY1jz9QRjafxA0LBhrZ+7nz4diEMvHnfyPO0xP/1imF/8GsjkQnZx24C4 yZOnVrDmgm7lhQP2VyvM+nuxZl21dUjH9PorcuPHj6s/kSyIuSP3S2IbbLBBWWeddeodOyuw lXJmdv/7rr2/2in1TgukBdICg8sCEcu12jggrpusMEPr1yXFeOc33nhjueuuu8rdd99dwe1D Dz1Ux4uVVlqlzt6OH9/6esK0aa1f3Xx2/KRafuo0v4o56zFocFk8W9sXLDDMbOZAJxc3Mkvb aq9Z3BawHehtr6C2aaslAx45PfbYo832WLOeapGy/vrrl80226KusRLggF1367fffnu58847 6518LEsY6HbK9qUF0gJpgYFqgRjvgNhRo0aV1VZbrc7IrrjiihW4brLJJuUVr3hFufXWW8uF F15YxwrvXtx3//1l9OjRDYgdWU1jKDVrO2Vy60VkcpPSAn3NAg2wHRyOaXY2HsMAunGh97UO 6Wl9tFlbJzbfK2wB28fq3fqmm25abB41jRkzpvzrX/+qYBa/wNW+72mdUl5aIC2QFkgLvDQW iHcl4v0JSwtuu+22OkNrLFxyySXLNttsU5/eWX5gGcLvfve7+v7Fc80TzpEjRzZ5q9YnecYG Tz4tawOSBwt+eGl6KmvpKQsMmzYIllC2HsdML1OntDbn05vNY5UBTR2t5QQdHcPLE088UTeP n9Zdd93iDh1ddtll5aabbqqztQKYQDVu3LiaJ3glpQXSAmmBtED/tAAgCoB6ImcT3z2FM3EB 4PrygfHw/PPPr0vSXvWqV5WXvexl5c1vfnM5/fTTy8OPjK1jwyKLLFbT8dtGLDSyyps+CPBD /+z5wa31oJmxdfHaXNixd3E7Hrg0pHmENKQGtscfe7IuLVhrrTXK5ptvXh8/AbVXXXVVTW// xMvCCy9cgx+AKzAmpQXSAmmBtED/s4CXwkxY2ABS4Na4Z/bW5lNflqaJ+TfccEON+xtvvHF9 32L77bcv5/z+3Pqk75FHHileMJ40qTVbSwbKGdv+5xODQeNh8ZhiIDcWeNXO2FyMcdz/ge2s lpI0n+8aPqTO1FovJZBZU+WO3KOo66+/vrPb426ebdzJs0uC2k7z5EFaIC2QFuh3FrCe1ljn ywfIGCDGm8W1WVbg+7V+WtcYYEmaJ3rKebF4+eWXr+OH75wbQ0aOHFXL41U+KS3QFy0w4Gds Y1bWxdx1c8FL6980a/0t8n/qqafqXbpvFC6//Io1IFl+4E596tTpdb0UOwlUlioAtQJXPMbq 3/ZJ7dMCaYG0wOC0gLhui7HPmAfMxsSFPF9GMmNrPPD92r/85S9l5513rmlAri8kPP30M/XF 4lVWWbWODcOGjsiJj8HpUv2i1cN8m25gU7Oetnlq4ju2rbYCgkOaixOoHdaAucn1AnWBu9hd 9AHonAN71iQJDB7lyIvjsJs0ee6GBQ6gsOvXBMiKtU14yZBmU7d64hi4dIdtj4I/5ONH9tLM sOKNc2VbOrdmqt1pk+8lAW/DPvnk4/XLB3RA9EXK4ItjbUH0Yhe/TuPRlnNLF+gV6fbOyQg5 +Mj2uGv8+PH18ZaZAXKjbVFHtEl9YV8fDNe2SMMTNopj+oYu6iVPHhkBzqUpZ49Hvk1ZMxMC e/Rr2EK+x3fy6EF/Ooe+Iau9DjLYQZoyyuqfpLRAWiAt0BsWEIfEKXFVzANMxSlxzV5MEkPx 2ftRHjFKPIxYj0d5MUsMv/feeyuvc8D2iiuuaOL+s/XXKVdbbfUqS4xrxdKBjh96o9dSZm9b YNbTfb1dex+QHxc4MGgDRKS5cAUKF70twGU8ssHjwkYChSCBxwZoyQtQE2DKmlV1CBgBkCLI BJBSZ4A1PNKVlx71hF7kSxPY8Dqmn/PgHT6iBfbI1KbpzWp/M7j0p2t3dMghhxQbUCsossW+ ++5bPvGJT9T2aYsX05BASlebIKoOaXSSpj56so8ND73IRGFre4F51113LZ/73Ofq0gl8ZCjv hyQ++9nPlne9612d7dA28tgGn/arY++99y4f//jHa3rYiyx8dAP68ZFLhjw2V38s3yCLTOk2 5waQduCqbepH7B+61IT8lxZIC6QFesEC4liMW/GzuOKc2OTTjX5Z8sEHH6wvBIt3ntqZnRWb gxxHbJMmBlp6EHEy8p2Lk0hau4yamP/SAn3EAg2w5aiDdwuQCEwBJAF+zDIG8ANwXPh4BAcb cCPNxS0dv72NjADKeAQEJKioz+wfHvIBJhuZgG8AogBJysvHC2yFTuQ7HjbMTPKkRpep9XjU KJ9hIbcFKtUNkKoXIKSvZQj0p8PskpleIFb7BE0U4B8wJB/RW52CJ93xyFfOOTvhZVP2jnQB k12UxSdfYNZOPx4hzTGyJgyfb+2yG3nq0x7HZKkndPH5GraVhyfazQZsYi+PneljUz8ZSBvU Ty4+A4jAv/jii1c98JNJJ20lL46rgPyXFkgLpAV6wQLiERITxSEx1GcdxXwTGM5NLJh88J1y 5+K4G3MxDXUFqGKXH2qIeIfHsXT74G+VH7zYYTDjpr7e9kE1YxsXsgs1jl2oLnbnLlwbkCIo AF7Ao6ARs3DAj8fxQA3A41zgAPZc8OQ5FnAAI3LxAFmCC+BDPvATgI88+eoQcOggTzlEB7LV YzmBc/IDTAlo8oNHeYBvwsRWuxZfvAGCzS/EWEOlnDt49eHpjuhge8tb3lLbS1fn6rYHNkOO +smP5Qpkh03lsYt2KxNtD50BU8AdP7s6v+666zqDsDayD54111yz2vaaa66ptiCTbcjSLno5 D9D7wAMPdPZXgE666RsAFr969Z2NfdWFnIecmtD8i69F8BvtVRedAd4A7NLVlZQWSAukBXrT 
AmKieCneGLfEMvE14nHETjHKGITP2BQ37l11E/PM9CIyUMTGetL8kx5jaKTlPi3QVyww+1N2 fUXjHtbjrN+dUYHMO9/5znqhjhv3VPnnP/9ZAcpuu+1Who9o1h81ANGFLXjcd9995Re/+EV5 //vfX0FmA4fLAiNbwBJ47BjSzOI2Og4d2jwOmvBMBUXPTZ5YFl9i0Qqg8QBSypXmO7NTG7mW CwBPAsW4Z56qd95Dm8Dxy1/+st55B7BV7jWveU1d92RGdtq0KbU+Xz54ZvyzFYBPn2jGs1kW MaUFbgUzQBMBzdphJhQwJbc7AkJ9BsZs81prrVVnSQXNAIgCnGPLFejvJuCOO+4oJ554Yq1j v/32K8suu2z5+te/XpcU+H7ujjvuWE477bRy88031zJf+MIXCvB58sknV/0EYsEZYBRgN9xw w3LeeedVECl9s802q2W1hU3e/e53lxVWWKHajQ5f/epXKwCmO4AJpEpX9tOf/nQN7ieddFJN 32uvvcqqq65avva1r1X7HHDAAeXqq6+u7VhjjTUqCP/DH/5Qdtlll1qXQURZMxpI2w899NAa 6B37LI5+058BsruzceanBdICaYG5sYC4FhMoMcngSwZi9P3Nr4aJ/eLfSiutVMX72VzgV4w1 OSPGogCw9jazvvbkt+fHsW/AyxNfk9ICfc0Cg8or4+LVCXHs17eA1Zi1Bf58Bsu3/AAhG/Ak OJhxBGzagZILOwAUoOhiD8AoeDiPQFEBbXMumAhGysmTrh5AyGMioMi5NaS777575dljjz0K kDh6tJ83bD1+B6SQGUSgWxsENPLsyRsypKMCTPXQlT7RHvV3R+Sqz/dut95666qXcuQDluR+ +ctfri8cHH/88eWggw6qgP+DH/xgFW3WFQ9QTD+zmvTdaqutaj7wCGT7rfKwRbudgF/5L3/5 y2u+pQPaIUBrH6AKOB933HGdelh/qw/CjvijH7Un+kwae+gPdcrzCG+LLbYop556ajn22GNr 3W9605sqWD3mmGPqT07qi7gx0HY/QXzkkUeWb37zmxVg77nnnrVv+UlSWiAtkBboLQuIX8YS MUw8dC7uAK0Artho79x4hMS9iIdirjIxNsiXZmyxj3PHcR5pNTP/pQX6oAUG/Yyt77oCq2bg rOUEcl3UPkZ96aWX1vWoExpAu3Xziyzbb799XadkFlQwAK7OOuus8vnPf74GFbOUr3/96ysA Pvfcc8sqq6xSH/v7DW7LAMxK+oas2WGB5K9//Wu5tnmcvlgTdADsbbfdtt5NA03IHtC1B9LO PPPMsv7669ff+L788str2uqrr1Z+//vf18BFd98e9KsxZk4vvviiOrMJFC6y8IKdSwTIe+aZ 8c0M5aK1rbPyS3f87AFg+tnF1Zq1tnQX5Oz9BCOgGzMBlkpccskl5dWvfnVdv+oNW+Wtc/3P f/5TNtpoo/oNXUCXDZdeeukalM0uRPDEj/SL+skHOgVm9QPI5ALFSD8pA5xefPHFdTbdjYkb FOBbe8mwdx6AHL822OiiLjc23gJGUcbvp8fLFAYN/G5y1l577VrGLC2fAPZ9KocPmEE225zg tpoy/6UF0gK9ZAFxxxID8VNsNM44bn3ecflaq6d0xjgxTdxz4w8Q4xMPIwZiluY8yLkNiZNx HPvgy31aoK9YYNADW2DHms0xY8ZUoPTvf/+7vOENb6igBDD66Ec/Wi/mH/zgBzUQADMuaHfI wCawJagAMO6YBQ4ACSjbYYcdap7H6B5lA3set5vhE4D8SMKnP/OZWu6EE06oM50AH5BGrrtq dZELOAGRygKqgKalEn4JRn1vf/vbm/JL1ZnLG264seqsLWZOb775xvJkoyNwKDDZ6E5ud8GJ fQRO7TOrCkzSjY7kmEG1f+1rX1vbF46tnFkCyxKAQuUAW+lnnHFGOfjggyvYZXuPvYBDFLLt 6acNgLrlCMA8YO8mQaA2E4GUZ/eYTY81sNHOAJ2CdehND+lsHHUZCCLos4vgj99NQgDksJl8 NynKmsn2U5TkOcdLFzcl5CelBdICaYHeskCMEeK0OCgeAq1ma8Uo8VCa9f/ivphm/HHzHUSG uIUci4NdSTyNraPjBeDblS/P0wLz2wKDHthOfm5qefn6G5ZTTjmlvP51O5ebbvx32eDlG9WP Ui+/3IplWoNLhg8fUdOeeLx5y/SZCcXHqRdacJEaLDqmNy9xDWm+bVua2b9mi+OhHc1s66iF y7inninDG/4lFntZweu3tW1jH3msWZHfUY7//gkV/JD54AMPlzXXWLtMb9Kfm/T8dwKb46GN fHpsvtmW5bK//6PceceYymPdlLvzJhQ14GvxMqSpc9NNNiuPPjy2AqyO5hvFJxz/g7LFlptV QPnIIw81ujSfM2uWTU1p2i3INe/6z9IHBbi4y7/yyisrIJUGxNkD1QDg3//+93LhhRdW8CxA AoBmP4FJgNgHv80mu4FQb8y4ApN+2hcBm4CgMmSQ79iv4VibKxBr8x//+McaYAVsQRiAtNHJ Jk2QF8AjWMdNAuApOEdd6sCjHu2U59gWZch0jqrNGh5lAGx5Zomty5UWgBk/uzhPSgukBdIC vWUBMUtci8kWMc2TM0+VkBglFseaWvFRXMLv2+6+8e6HenzX3Xsb5HUYJJqxYcxdd5Rnxzdf lGnOm+d0ZWgDaJsSNTbW2NnwJqUF+poFBv1tlwvcY3J3uOecc05dguBcIABcBAwXsJlDs5Nm LgEYC/CVEQTM2FqbCcgANDF7S7byAgvQBBQBXMpY6iD4fOADHygHHnhg3VuOAIypz4wfeY6V BdzI9sHsa6+9tq5ZbcJMlSnvP7ffWQHdZZf9oyyz9HIVAApkZNNXwNKmCHahR3cOCdxphzbY AFg6SQNezWqbCTbTzB50fsc73lG++MUvVhuRb5ZZ+k477VRnDci09MAsJ5t6RKZt9NR+ugm+ 7CRtTAOGpZmpFoz9Eg4ZPveFx6N/MxDKWB9Lj4suuqhzxpRspEws16C72QxLIdTB3trhmByb 9uozNlNWmr5wrI+Bc7Y36ywPr69HeBlOWTxJaYG0QFqgtywgtomJYo1YKA65+fcug3TLqMzW GguMOeJ2jC9454ial52T0gL9wQKDHtgCI8DMlltuWQGW5QPAy+jmJS3rOb/73e+Wo446qmyw wQZlvfXWq6AG0AES66OcJjicdvrp5S8XXFCGNIDvuQbATWrAmW1oE0ScL9AEnRENMHP8dDPL OLypc+Nm3a0vIliXq44///nPNSgBcECaDdACkAA1QFogAmyBQiCP3oIXfYC8733ve2WNBmRt uPFGdb2wAOaFrgCiZlcXGNUsmZjc+s6qIDc7BJwHAalAoQ0oZYfDDjusgkQvclliwHZsRm/E nrHuyzd0A2Cq32wusOk49JGPtNexZRfsEEBY+wVx7frGN75RbeOFsc80yzos1zj88MMrEGZD 
gZ+M6GfLGZz7coIX2NyUAN3SDADBy670p7ulCPpFHiCLn+2BcACeH7DBhz70obru9ogjjqi6 40lKC6QF0gK9ZQExU/wXl9yci5linUkZS928+2DiwNMt8cgmnsc401t6pdy0wPy0QMfBBx82 aJ4lAHYCQIAdwAg4cXcLwAAzAJQAITjE3bBgId1eUCDDulHAJwCOvCBBg2xg1Kyg9aB4AygB ZchdtnqBROSuOmZs1SNgRaCyPgqIvOWWW+qj+Pe+971Vj/sfuLecffbZ5cP7f7TKmDix9cML w5sfblhooVENcHu8/OvqKxu5E+v61Le+9a3l6KOPrssg6DF9Nu7CtRGYUz9dtQ3IYxPHgitb aYu201nb2QlpkzRAvYLrhkdZ9lAWuNRex+oKuY7DRvqDPdSDz6a/7Nt1cywNH5n0JY9e2ktv OtMVX4DVkKUMfSMv2q6sPPyALrk2+mmb8upQDqCmb6RVI+S/tEBaIC3QwxYQj8Qg7xt4agjQ irPiVcRRMVFsEsvELPFLbDamjB/f+v43fjGzfkKyiWeWInzqU59qQPG9xYvQTz/dLKlr4t9W W25T452lC+LotKZcUlqgr1lg9qbs+prWPagPABIvf7nYgRppwAoQBnwBKvJc/NIFEMFEIJAG 8CmHAvQpK4AAUJY2ADnk4HWsnE2wUIZcMshURhAK+dLJ8skt6zn/53/+p86WClw2cvCQha+W bQITcOlcULNeShAU5PB3CGJTm8fuVeuZ/8MvULKDdtvTC9G5ynpedzbBTyfLLOyd00GbzRoo H+1kDzzyBVmy2IL8kK2MdoU9tIV9kH7SJmXIV5d+UpZsafjNLFu+4IaFLHUqG/UoF7KUC1Ab NqVztFNZfasOfNLpGH0FfCPn7CE/KS2QFkgL9IYFxCExUGwWj8Rd8TEmRSIe24tjJlrEOrGx Fau6GwF6Q+uUmRboXQsMKmArCATFsT0AEoBHPvACLEkPUGkvHWABngQTwQEfci5dGmpfvO8u WT0CjwCDF+AiE2BTD7kCTYAoMqIMcCTPp7IsiSAjQKu1rX4sgjx86lLPc81yA/oAjOpQXh34 6Ljooq2fg1XPrIhuwB77KEsWvckKYEc2cIpPnhlwadqJh67kCKjK09Ne+4BN7cFno3+ATHU4 tm/p3PqRC7KUUZ5OArl+cAx0KmMjjx6ApxsK5QwACFgnA6lT+9RjH0S3Tns27UJRH/31FZ8g K9oTdqGT+pPSAmmBtEBvWSDilRgrxhl3xFMaQDgAACAASURBVCIxTwyKsUbcEhPFTDHSJg7n 1w16q2dS7vy0QI68z1sfcBIQgB/BAGAJsONYHvDk2CZouPu1D2AE2AE2AFHcQTsXQIAcMhC5 Aow8AUd9AbLIRgEK6UUn4CmO6UFWldcsJaDHs+Nbv7IVgWpko4NfTFtksUXr46LWcovWbKIZ zKeeas2E1spm8S/aHQA+QKg2aLs2AHgBnNVDd/w2fM4RXvIiwJLlXJu1Maidn4wAq7EPOyjv OGzHzmxvD2CTQ6462NvMPF2d6zNEF/zqoZ9yzkNX6VE+0rRJGiIXf/QbHvrQg32UT0oLpAXS Ar1hAfEqbqBNYjg3oSHOeVIoTtmki5ViqL24JfZNntyKzb2hW8pMC8wvCwx6YAuEuMABkiAX vQ0wki4oIGAFSQc03SXLF1gApQBnAG7IBHhCtrrwkk1m+x4AUgYYAnYFI4BJkJKmTvyCkvyo 00/4Iufqwg9kWisVPH7Gd8pzrXVWeAXAkSNbANDP786K6GCjr702OtYW+tjUKy90aG8nWyjT zq890kK29pBBroAc5bVZOrurh3x5gCXeKBcgMuzjHA/5+GdkV23WL/pNOfbGi/Aj5wFMow/p Qw8U7VYPGcopg5d++OQlpQXSAmmB3rCAGCiGiT3ipWN7a23FIfErYpJzeRHTjBPDh1umlpQW GFgWGPDAFuAIcBF7XRjHLnIX+8wIXwQCYAUJEIJFABygqH1msj0dL1JHpLfLrJnNP2kBmvBG uXZwhZeM0Jde05oZ26lNWV8XlCewtfQFjFvgmKwAyuwB+E2Y0AqGSnZH1uMiCzkmT2lmP5vv HYJr0itsazJ8EaJZslu5hg7zfVxtavRtAueU5kUDZZD0rmnyGwnN1yIaANzIL813FSNNHS25 eNTZqnvI0KYP8DblJj3XvNjXpOMjo1WWtg3wbsrja9dJur9pjch2vchErfKtvFbdUp+3QejT 6Fg1IkpVDYXukU5+rcguKS2QFkgL9LAFxJwhQ5v3Bp6PoeKa44iT7cc1ljVxqxWXWnHYuGDc sLchIHl88+1aY0X7+BnH9saUGEN7uEkpLi0wzxYY8MA2Lr7Yt1tsRmnt+QPhGOAuzQ9DxOym NrlTH9EAwArqn5+lHAhtzTakBdICaYG0wOxbwBgAqAK1JkxGjmy9B+E4nlx1ldYaN5+/m++a medpgT5ggUEDbF28tqCeArXND4PNEz2/kmCeZCjcVY+QawZX0OoY0gSv5nbeN2wXXnSRMnVy 64Uy39dNSgukBdICaYHBZwHLwgBYT/E8kXRunIz3JmZmkc7xs21MnRlvpqcFXmoLDOt00Je6 5peovnYw215lAN14/NKeNyfH9WnznBTowutx/bxQswL2+eIv3rf6dXoNWmZtxz87rs7QWhu8 //7715/+NYs7uVkzmpQWSAukBdICg88CizUv+3qpFgG43s2wzGDChNaXcGZkkdaY2prRye/Y zshCmTa/LTBsZsBvfivWU/VH+wC9dhAfwDby57a+eb1hnccJ37ZZ6JDU2jfz03Xxp0dK1u4G tV4cG1nuvGNM54takZf7tEBaIC2QFhg8Fri7+blyyxGMjSZAJk1qfcJyzTVXr+9rdLVEjJvx 7kCMOl358jwtMD8tMOCXIszKuO1Ad1Z8/TmvdRfe+sSWlwScC07uypdaaqnOl9T6cxtT97RA WiAtkBaYcwtMayY+fP82Ptfoh3z8BK8lCV5c7kovTBAlpO1qmzzvOxYY8EsR2sFr3G1Km9cl CNGFsQAgzud031tLEaZaomCWulk3pb2T2gKVT4YBtpYi9JQd5rTdyZ8WSAukBdIC89cCw5rZ WmtrPdkzFnh5LMZJn45spwC1rX3rCwrzOv61y8/jtEBPWWDYtOkvPKbuKaF9SY6LEIhzsZbm 01h1axScXpoLufkc1jxTc2WT4mNQc7pX9zxr0Fl/87kWejRfQIg21lY236m1rvaxxyfUz7qE Ley9WGaflBZIC6QF0gKDzwKArR9t8MuRlqzFy8Z1MqQZH7qST0hOa74XFhMy03PitquJ8rwP WGDYQgsOnrfiR41c7AWTL9BzqzDctbq+53T/gjLzdtS13hekNd84BGAnNh/iHjqsDO1ovgk7 pQH6Hc3P1y7Q+oU1yxN8LyEpLZAWSAukBQaXBfxAT0cHoDq1mbV9YelBR/O929aI1rIHHtMw 
Cy3c+mGfwWWlbG1/s0CD7hLUzGunxU3rnO7ntd5WeT/NgHzM6wUKsOqb2+6+28kjJ78INmXK c2XKRHfl6QPt9snjtEBaIC0wGCxgYgNZlmYSxNb6gZ9mVDB4tFFrHJH24vGkjSUP0wJ9wgI9 N23ZJ5ozGJV4cfDpaoH24BTLMuIt2PoNwyHuzJPSAmmBtEBaYLBZwBhgPDBOALX2MWYEwB1s Nsn29n8LJLDt/304yxbEiwCYvCAA3ApYPu3im4XT6u8szhocz7KCzEwLpAXSAmmBfmmBkSNH 1vdPYrLD+BDAtr6X0i9blUoPdgsksB3gHiA4uRO3F7S8IBB35hXkNnlJaYG0QFogLTD4LGA8 iJlZ4wKyNzaYCElKC/RHCySw7Y+9Ngc6C1A2JGBF8HIO7OaMLUskpQXSAmmBwWkBM7QBYo0P xoXYBqdFstX93QIJbPt5D8bnVupLqzNpi8AF3Apa8chJ4PIth9jPpGgmpwXSAmmBtMAAtUA7 kI1ZWnvjQszkDtCmZ7MGsAUS2A7gztW0+jkvM7PPvxQgaL1whz6t+NhXUlogLZAWSAsMPgvE UgRAtv1pnnHClpQW6I8WSGDbH3ttDnQGYhGA67g1Q1vq3bgPbeeXW+bAmMnaJy3gU0VPPPFE /SGSeKQas005OPfJLkulZtMCATyxO47Pcvn1yNn95UjlgFab60M5X0MwJrQ/wYtrBZ/jGCtC 1a7nkZ77tEBfs0AC277WI6lPWiAtMEcW8JOgfv7TgGyg9utJBm5f/khKC/RnC/DhAJm+YDBu 3LjmZ29HVlAKsHYHNpWNSQ12iKd10l0vMfExOzZSJikt0B8skMC2P/TSLHSc1draWRTLrLTA gLCAwdYAb7B/9tlny4gRI+qAbTA3MzV58gu/pjQgGpyNGFQWcNMWN2z2Cy64YG2/G7jZ8W3A 1bWAlHd9BCkfTzYiLfdpgYFggQS2A6EXsw1pgUFqATNWQG08bmUGg7dBG9CNQX2Qmieb3c8t YMYWIOXfTz31VFlsscXK008/XW/a+HYsvZlZMwFX4Nb1YO9mT5lRo0bNFjCemdxMTwv0ZQsk sO3LvZO6pQXSArO0gIHbz0P/85//rGtsgVlg12YABwqS0gL91QIBTAFbN2z82lMKILX9Zm5m 7cMHHNvM/irj+JWvfGW9IczlBTOzXKb3ZwsksO3PvZe6pwUGuQUM3AZ8s7a77LJL57pagCBm uga5ibL5/dgCZlr5d/jyscceWxZZZJFOUAvozoqAWC9Wolius9RSS9U16bHMYVblMy8t0B8t kMC2P/Za6pwWSAtUC8TAbpD2mBUAcGwmKmdr00n6uwWsqeXPvoKwxBJL1KUIe++9dwW73S1D 0PaYsXU94D/55JPry5VkystrpL97SOo/IwsksJ2RVTItLZAW6BcWiBdgrDeMR7Nma+ORbb9o RCqZFpiJBQBaM62Aqa992Pg6YBr+PpOinclRdtFFF63Ldl72spd1Lk3oZMqDtMAAskAC237f mbGGsPW92jltjjt2d+42LyX87Gc/67yLlyaI7rrrrmX11Vev6e7646UF4AGwiBcSnnzyyfLT n/60fPrTn+58qSHkeyQmSAMd+CNgq4PMu+++u5x99tlVDwF7pZVWKu985ztrnepQV8zOxTE+ xwJ3gBmBP9aijR07tvz85z8vn/jEJ6ocdeGlk82x8iFHu5SX5/NRPq0jLanvWkD/xONa/Yv0 vzT9iPQzkh4zVcCCY8R3+KAyBn/rdMnis4gcPqKuqMNeOv9RDi8Z6opy5JOdlBaYWwvwH77H 19p9jq/J42+RHn7OJ+WH/8vHK51vOm/Pn13dlE9KC/QHC+So3R96qRd1NCgvuODICjQFOwP+ Bz7wgfrWrEAG6P3whz8sb3zjG8uqq65aNREYDdqCJJBKRlAEYkHWY7T4EPgzzzxTFl988Xru 2Fu58vBfd9115ZJLLikHHnhgDdTSLrjggvrYDLhFEZzJFbCVJYPOdIiXhqw/A0jJAETwB/iJ c2XoDOgg8uSREbMj5Gcgr+bp0//0JT/gU/qLjz322GO1H8MXpTv2kpn+1ddurOxR8Nnrd/7A p8I/+YUbHT7fDhb4v+tD/QGS5QMifCv8rU8bMJXrVxYQz8RCm2O+xv8idvFHFDdufN/WXka5 uaG5LTc3dWWZtMC8WCCB7bxYr0+UnbuZ2lDd4AscGJAFQIO/T8rYI4O3NV3nnXdeBbb4H3nk kfLb3/62ggNl3vKWt5SVV165AlEB9IQTTigHHXRQueOOO8ppp51WgYN04NjMr0DsPOq88MIL K5h2LjADGK9+9avL6aefXsaMGVPrVeY73/lOBRvA67777lv1ffzxx8v5559fllxyyXLrrbdW gEGf9ddfv3zrW9+qAf6YY44pBxxwQAWzP/jBD2q71A9Ia7uZakBa2kMPPVR1pwdwIi2pb1sg ZlP5D1AbINSThhNPPLEsu+yynaD24YcfLh/84AfLqaeeWvv39a9/fTnjjDM6fRJA2GOPPaoP nnnmmRU83HLLLeXQQw+tRgAYVlxxxerjfMTN26OPPloB9TXXXFP+9re/lY997GMvAs5923qp XV+2gPjaTs4jDdAUn92kSxOrxEk3bW60xK+YdEhQ2m7FPB7oFojn2AO9ndm+mVgAGDDDaQ9Q Aq4xWyloSgcagVnfURRALTd4xzveUT7+8Y9XQPr73/++BtU999yz1vKRj3ykyrS0YJ999imf /OQnKzh2DkiSoS7y77vvvrLccsvVOgRf6fLlARiAsNkz4GOrrbYqn//858uGG25YLr744hq8 BfP777+/eNMX2N1rr73KpZdeWgHyhz/84dqej370o1WuMq95zWvKfvvtV7bbbrvy17/+tXNm jR5bb7111VX7c7ZtJg7Tx5L5J5+yGdT5itlbgzo/MsADvmZy288B0vB7vhYzru9+97vLe97z nsKn8fPZk046qZx77rnlnHPOKb/61a+qb3o7PQCEG8Gzzjqrgl8+zHfIz5uiPuYs/Vwd/sjf bY6DgFc+7Brg064DxKfb+RzHOT9NsBsWzP1As0AC24HWo3PYHoEOCAAoBUaDsYAoUAqYQJ7N MoTIM3tlMBdgY7kBQIEM6sg5QAlwkge8ygvgQSZ56rCeFkhwjIcuATroZBbtrrvuKuuss04F rNtvv3254YYbav2CuDKbb755nalTFgBXxrHgHXIteVh66aUryNl0003Ltdde29lmdlhllVWq HmQGYKqNyX991gIBaPmSfudf8RhW3/NjfsDf+FgM7GZbzfwrF35ihvdd73pX3eQDxD/5yU/q kwI+xCcWXnjhcvTRR1fgy8fweWLgqcNhhx1WQQWdpONPSgv0tAX4NR/jk2Kr+G3P38L3o04+ bMOfQDaskvuBboFcijDQe7ib9gl6BnYDMVAACAiOwKJgaSAXQP/zn/+U173udVWaR/dXXHFF 
ufrqq2swjdkCj4DJca6sQHrRRRfVNbQKAhECrM/WAAUCsZm0v//972WnnXaq9QTYJQsIcU4e 3h//+Me1/gAn9FCfNgDCAjx9yQa4PXamg/bgodMpp5xS98ArXcxEx3chCaebGT82iXpqpfmv T1qAz+pvPqLP9R2fccyH9THSl3zN0hXHNr5jz7+/8IUvlP/3//5ffUoQs17XX399XT5DBvmA MR9z/oc//KH6kfotm9l5553LlVdeWZc8hM/yN36UlBboKQsEqOVbsXk5VnxD/C18nv//6Ec/ 6qw60iOB7yelBQaiBRLY9vNenf58bOqYu/cBngehEzvv/gXLCJIGc4P+bbfdVtZcc80KFphL sLSOdZtttqkf/7YWNmbEDPyOAccjjzyyAoXXvva1nV9cEFwBEXuBFxAFGO65556ywgor1LJA LLCpHoDX+l0g9pBDDqlApL3LAiCTA6ioG9ggP2YyHJOJx1pbgwO+mLEDgIFfbVUPvfBk4G+3 dN88jhsp/WXT9/bhxx/60Ieq4uHLTvgFngCdn/nMZ6qP63eztkCxvre2lhy8rgk+oj5+FC+i AbHk8GHHytEBX/hU37RcatVfLcDP4kadj4mTbsr4qY0PRvzlv0j8k4cyrlUz5L8BbIF8VjaA O3d2mmZARpYWIIN7BEHBE9j705/+VDbeeOMKCAA/gdXyAQDBIO4RMF4BE4gEIqx7FVxXW221 miZPOTNq6lBOGftNNtmkvngm8AINgIW1jWZtYwmDpQaWH5Bz7733Vn7HAIb6AFUAg+5Rl/rU RQ8Bfu21166zzHT0azzWCktXHznq1iZ6xSBQjZL/+qwFApyG/9nbwo8N+n/84x/ri4h8Kvo8 wK1+3nHHHev6WWA01ovze+u75fNRPuUphXXfXjhzY3fVVVdV/yEr/AbQcE3hJy8pLdBTFuBT sfFLGz8Vf/m52CfmSffibLuPS0PKJ6UFBroFcsZ2oPdwN+0DAIYO7aizqIIekGgNYYBCwCG+ YwsQelS/7bbbFi/PCJaAJ/AKKJpZtYb1+9//fn3z3Izud7/73SrLR8EBBW+QA5DAgjqAkC22 2KLKPe644yrwBWiXWWaZEi+jCd5mfb/3ve/VF77U6yUf+pIDSNCbrmQCFI49evYG+ze+8Y1i Vu4Nb3hDOeKII6oMAARIIUN5cmJ2o2WToXXQMFgk9V0L8AU+ZBDXf+FX+pQP6F/+g8I/4jiA 55ve9Kbq/5/61Kfqi4Vu2txI2b70pS/Vr2cos+WWW9aXxFwHli6QS6Zz9URd0vgjHdJ/WC5p bi3Av8U0fhTAVXzi8/L4HF/z5Aq49WKsOCmNf0c+GY6jDH2k8dXZJeWT0gL9wQIJbPtDL/Wi joKVICnIAZPve9/76uytcwO2wRsIRI4FV0B0s802q2l4YvB2/P73v78CDGkbbbRRnY0N4GFW FPAQmMkK2QKw2VQvbwnIQIqZYUEYRVD2GSWBWHoEZMsJLFFQB/IFB+eh19ve9rbOgK4MgGuv TnXRRRmf/kLKIXXKj/OamP/6nAX4WQzWZt7Dl/kF/9LX+pCfOwYAzODzLzdbiF/6YQa+cPjh h5evfvWrxU3W8ssvX1+A3G233eoThPAZTx1uuumm+qUPZTwtsOe36uA71umSr76ktMDcWkAM 4seI//Ev53zaMX93ztf43FFHHVV9MeKW68MNX1wnseevrhu+OrsU8Xh2+ZMvLTC/LJDAdn5Z vofqndu1tVG9wLjAAsMriBMAzaZ2nbmMgCoYxuAu4JqRAhIM3soBCJEf+wjKgIUygmMEWiDX eQRhssjEa9ZWukDs3DF++jonV2CPPHtBuj1YR92AuTqBGXKlq4vMWMJAdgAUckNPx0l91wL8 g8/oUz4Xfav/9J00fa3/Y7mJNL7s6QHiH3yY/5il9SKYT9aZ3ffCJNCw++67V5/Ar06f90Jk AcX2/Itc9QLW0vAmpQXm1gJ8KJYURIzjX2JX3HiLXV6K5L+eqAG4KGJXgFfn4mfENvLSP+e2 Z7JcX7ZAAtu+3DsvkW6CpGAn6BnkBUKbdCTNsXwDt2MbwAAMCrACsDwEXMSxPLKUFUQjOEtT Jzlm2uJckFYekRGy1AV8SMNLLrDinBx7sqRHGXU593KYgG+AwCOdPuQgdfqwPwAjPQB7zcx/ fdoC4Yd8lH+5wbLm0LmXGvU5v8EHbPIT6fwJIPV1A/4SvsGfvWC4//7713S+5fvGv/vd76ov yecv/CR8DZAgd4cddqi86grfDnDRp42YyvVZC0TM5GN8lq/ai2HxBAwP/5PG9xGeiM18kD8i e/6LlEtKCwxECySwHYi9OgdtagHJSRXkCY4BQIlwLBDGnb60mPGMGS5lgAIB1T6Cr3IBMAFI gz0gIPhGoMXjXLrA7Zz8ALnSkXPyQ3aAEIAYCNUGwJosOtMpgLbg7lu6ZIQ8upMVoEMeHul4 BPzQrSqQ//qsBfQ5P3OTEz7gRUh92fLtFmjVp/j0q77HCxjwJWk2fR8zu/wwfEnj5SmD8PJn FKDCubLqtAes6RA+VpnzX1pgDi3Av/g4f+KrNn5rmQwfFd/4Iz6+FjGc7yJ50sJnlefH/DXS 51ClZE8L9HkLJLDt813UuwoahJvwV4OioBdfLRD0YhZMkBRcBUdBFRnoBUfBVJog6/jJJ5+s g7vyMahHABV8I+BWIc0/MgMIqINMGwIMlLUpG18vkA+kkE9nfMAKPsfkOQdYlbOPWTXpwAcZ 0snAL41u9mTSK6nvW0D/urnhj/yVX+hHx5GuL6UH6Xe+Fv3MJwLkKofIIBOvPDzq4mPynPOb kC0dAQ3KKRP61Iz8lxaYCwvEzVT4M3/z63cALX8MHxXPIpbFDT8f5Zd8EfF5Puuphjyyw2/n QrUskhbosxZIYNtnu+alUczgPmXKcxXoGbCdC5gCpXN755EeAEGaQT8AgL28AIbxOF8gFXzl K4MEWMeCNLkx6xDp0hwrI1gHCB07dmwFC9LIJJsOAnUAWvJ9DcG5PLIEdo+hyaFj6G6wENzt 1WWmjy7kCvjqUCap71pA/1lCos/89G2QPuYnfEzf8gVp0ff2fCx8iw/5soc+5w/8QBkAQHm8 ZNmiDD48zvmZPP4VvilNflJaYG4tEL7Et/iic77I5/mWdHv+a1JC/EVeiHVt8Eexjy+6RsLv xcfgnVvdslxaoK9aIIFtX+2Zl0gvga+JkzVYGsAFwAACgiiyxxf5AQgCJMRMgYHeIG8DDAVS gRcFGHAcwVWa4Es+GeqQJ13ZAJzS2oFmgAnpygEhgrTy9pYtkKkM+dLw4pNGp9DZXrvsAWB7 AwgdbEl92wL62Utb1sHyF/2tf/Uzf0D63sZvo5/NaoXv6X99rd+VlR6+zA+lS1PWhl+90tSD lFcWL9DgCYanBK6npLTA3FqA/7lRF9P4mlgmLWIwudIjtvHLiI/S+aUyYlv4ousijvlsUlpg 
oFkggW2/79F4AWDuZoYESEsRDPzIoI0M4AKkdMHRvj0YKicoSo+BHh+SLr8lu/UyWQRc/PKj bPCTETKlqUv9kYa/nUJ2yAkdBW11IeW1AxkMAoQ4Vy5kh07BG+3Al9T3LRCDfQBN/Rf+En3q 3MYn+BZ/a/fx8GOt5SdxHfCl4JVHNr+JtKhHeRR+Rpe4XmpG/ksLzIUF+JAbpYhp/Avxu/C5 8GnpEbvkSbdFTAwf5782eXNCyiSlBfqDBQIV9QddU8e0QFogLfAiCwCYNoOuwd4gbsB2nsDy RabKk0FoAddAgFjXhmvEeVw3c2KSuImbkzLJmxaYHxbIGdv5YfUerXPO7rp7tOoUlhaYzxYw u2rANYADtAFy53Q2aj43I6tPC/SKBdpncF0nrhczt0Bu+xOsXqk8haYF5pMFcsZ2Phk+q00L pAV6xgIGaRtQa/C2xMCsVFJaIC3Qen/CNeHacI3E9ZK2SQsMVAvkjO1A7dlsV1pgEFjAgG39 ISBr4EaxxrY9bRCYIpuYFvgvC7gGPL2IawODNNeMaydmdP+rYCakBfqxBRLY9uPOS9XTAoPd Al6s8SUCg7dHqwbteGvcsS0pLTCYLQDUuj4s14ljoDY/+TWYvWJgtz2BbT/v3+nPj9sdL/5o QD9vVaqfFpg9CwCzvorw+OOP1xkon/EyiMdb5O0zVbMnMbnSAgPHAm7sYk2tY98XN0vrO7iW JCSlBQaiBRLYDsRezTalBQaJBQzaPkx/ww03dH4/1jc7gV0ANyktMNgt4Brx+TnfVnZtuPnb dNNNOz8XNtjtk+0feBZIYDvw+nSOWmTwnz79hV9OUtgsl7v7WKsoMDo2OxZpHmu545fmhQTk 8RZSPj4KbuZMHbYoT57ygqzHyD4urqx0ZfGZWYhHZ6GLdHLooFx8dFy+x8/4EX3Ikx7n9jPS L+qVL/jHR/4DGKkzfgBCPn6bOsiPuv1qmQ/yx3pP+jjOGUOW7T1iX/3BF9/73vd2zk5Jk6cP +IlzAzsQ3N4v4XP8CtnLD39ybuMHKPpTfcra+AKwwE/wOg7/Df8IPZ2HDPKUV0Z6+J98m3T5 8vi84yijfm2STt8g6XEdkCcvdMTvWHrIIhs/fUNHfNF+csPXHbODOvGE3s5t5IRe6m3//qqy SXNugegLdtdnxx13XO0DcUqfyZ8Vydd/SP+6FvSNrb1fZyUj8vhKUlqgP1gggW1/6KVe1BGA mzRpQh2UgUXBMgY9wU9AFQz9ZKM85zaDOD6DGzKoGQwjza9BCb5IQFSPwU/ak08+WYOq+qSR p3wMuGTHI2Vlpcs3UCpPLzqRiejl0ZrH0fJj0KWLYI5mpp/61WEfP82rbuc2sgEi7SRXnTE4 0DsGFrpF2w3uysSAUhXIf71igRhs2Vp/6Q97/a7PHesn/cY/3IDgla9Po1/tY8DXj+TwZ6QO /Po1+lj/RlpcN+pST4BC/HjIUg5FnfWk+Rc6q4NM/oYn/Cf4lbe5NvCGj4b/xfXA/4F3+QFe wibO5fFzNiCP34aO7BT1kRPtxUeGPPxkx7WOx0+20pt859pER7ZQJmnuLSD2sWP4El/be++9 6409O+u7WRGe6Et9/qtf/aqyh9w56Z/u6pqVHpmXFngpLZDA9qW0di/UNa9rawVMgxAQYHD0 840CYAywBlIBTZqBS7qBTrAUNG0Gvcg38MVvk2uudDxIEHW86KKL1uMYHMmXRxd62KSRRXYM uHRTxibA40fqe+KJJ+oAEMGXrvjQrPTTFnz0RO2DgAElAEb7wE2/4DdLGzzqiY1NyaVHUu9Z IPqPnfkJn4l+4xPSnQN0SyyxRL1Japj6ZAAAIABJREFUMqjrM74SvqffHPN3fqWMNDL1MTCn 3+XxPWn6GNGB/wKMyriO+HnwyydbmjLO2/dkqhc/2fL8hCp5ymhD6CyPrPBZspTht9FOezx8 lN7arr101B6kTmVdU1E3Pjqow42pm4CoR5qy6pJGrnbiZwu8Dz/8cFl66aWrzvqBTknzZgF9 ry/1lS38wE1a9El3NehX/S3uimP2+jL8sbvymZ8W6G8WyO/Y9rce62F9DVIGK8HznHPOKdde e23n4C3wmV394x//WNMMgAZIg7dy8g1q3/3ud2t5MuTHwC6IohjgDKwGRHx4YrA1AAu8zoOU pVcMpMoabA2Y+AL0Sv/Pf/5TfvCDH5Sjjz66Pqo79dRTXzTozkq/GOyBCGTgMIAYULTRoPDg gw9WuaGLdjumt30AIOnKkJkDe/Rk7+7ZG7jSZwZ6vqEf+Io0vqYv+B1e5wH8+DNe/SjNsb40 +0+m9JBPtn5VHh9+fc+341w9AKl6lQ95ypClfOgpLWTxfQDT3obXzZq622VFvj1Z2qR+9dIv npIogwd44b/2zgPARh79XYdkRZ72kxX6uA7UQRfy6ezcdc/W8pUhk93oEvZwLSXNmwX0hVho z/b8Sl841//dUZTjr0hf6Rfp+j0pLTAQLZAztgOxV+egTYKjwU/gW2ONNWrQFEADDNx88811 FsbgKKgKsngFRYOZQczgJlAakO3xkWHgE4TJwmMzkJKlXoOjQGu2R7rBNQZhMpDyjm3qAkAF ZuXQjTfeWP72t7+VAw88sOpFh9///vfl7LPPLnvssUe3+gW4IUsd2kd3crTTsYHcno4BNOil Pc6VoU/kkckOtqTetUD4hf4KP+NDfMINTdyoxCD+hS98oWy33XY1nS/px+hDe3wAnL4nm1zp 9uHnu+66a/VTaW6oVl555epnfJq8q666qlx++eXl4x//ePUbMvlClFdOmo1v/etf/ypf+tKX Og21zjrrlG9961s1j/477rhj9Wd62MjSRn6H6EmW641sefyVT5K/++67l9/85jedfqy8vLje lFVGWdeYNjjHs/POO9c6yHHjOHr06FqvfPa+8sor683wvvvuW3ULOewXcaIKyH9zZQF9w476 Qn/rH/3ElwKgzkpw9CW/UTZ8hX/EtTOr8pmXFuiPFsgZ2/7Yaz2osyAZA+aqq65agaKBL0Dp NddcUzbaaKM62AEKxxxzTB3gBFvBUYA1wJFhdld+pHk06WUHQdmA99vf/rbcdNNN5Tvf+U75 3ve+V99kv+eee+qgfvzxx5frr7++yjSICsLf/OY3KzjBqw5ARHAX7OWTazbZS0OCtECPdtll lzoY3HbbbZ26zEw/9QIk8v/9738X5wZwbTUoAO//+7//W+XFixsAgXbajjrqqKqHeqX/8Ic/ rO386U9/2qmPvKTesQB/CBDGBxzzjf/5n/8pZ555Zn0K8ac//al8/vOfrwoAtXzeBsDypSA+ wKf4A18iJ4CFvbo+85nPlC9/+cvltNNOK1/72tcqsA2wIP8Pf/hD+cpXvlL9KW5wAmS6rtRr 7/pR7qSTTqryzjjjjELPP//5z2X55ZcvALjyriUUwEZZaWaGyQjfxase/kp315C0hx56qKa5 
nrUn2hR6kCfN3sYGrjPHn/70p6tuv/vd78ohhxxSr2V2oLcZ5bPOOqscccQRtbz6lJFHLzet 7bYNG+d+ziygn/SJWMSu/NueraWx8aw2vsJ3xM/oZ32lrC0pLTAQLZDAdiD26hy0SdATPAW5 pZZaqg5MAKkgaOC3HsvgeeGFF5Ytt9yyglDA8cQTT6zpBlKBF38EXAN8rOMiV3my1EO2AfNd 73pXufTSS8sdd9xRQYcXIsyyCdhk/uUvf6kA1Uzsq1/96ppnxlS+YE0+kLzeeutVEG4wp4d8 gd4s1frrr19ldaefsnQ200s32w477FAuvvjiOijss88+VfdPfOITtZ1m47beeuty0EEHle23 37789a9/rQMO/bSXnd7//vfXsnPQFck6FxaIR/bsHn0P4CHAji/o3yOPPLJ8/etfr/0XN1O/ /OUvy5ve9KbqK+eff37n04QxY8bU2f43vvGN5W1ve1txgwQM8F116F/n6667bpV955131uuG /wCnhx9+eK0XmMQf+tDFNeCa45OuiVNOOaWCY/LkAYSf+tSnqv6xLIi/a8Njjz1WdttttwpS POUg7/Wvf325++676zXAb+n71re+td6YuR7deAFDr3vd6+q1cP/995c3v/nN5S1veUttu7Ku T8t53JR97GMfq3W4oSX/Fa94Rb3efB6KHvfdd1+V8/Of/7xcffXVta1sr430Z3Obtqs3ad4s YLlI3LCEjdnbpj+6I7FUP/I5Nx3OwwedJ6UFBqIFEtgOxF6djTYNmd6s92vG/6lTW2vxIuit ttpq5dFHH62Dp0HP7JEB0ndCt9pqqzoYA4yPPPJIHehVZVA2gAu2SMB1DOwKqECjgRhtttlm NdAC0TYDpkDrGMUXE8yerrDCClXWJptsUmeSBXa6kAsAGJCdG0DpL4DTQ50GV/xoVvrRFb8B hFwzXGRsvPHG5TWveU2VoQ4kna4GdDbAjw/ABjzoYGZkdPO4VpmovxbOf71iAb7F7nyAvfUn 0MgX9Lu873//+/Vxfjx5wGMJC5+zZMVNGh43S/px//33L4cddlidfTVbD+wBBICt68MNjHP+ t+aaa1afUbebKeCQj6iDLHrYguhms/wHmFx99dWrn/MlG53xA6RAJV51RXr4t+tLnc610Yzv csstV84999zaJgDbkoiPfvSjVS7Qzc8//OEP15tTM84/+clPykc+8pFO+WabzRT/+te/rrza Gu1QlnzXvfqAYzcKdI3rAy++6AvtT5o3C0QcNYuO2FefB4V/zWyPXz8ogyf2+oqfJqUFBqIF 8pa6j/dqBLEASbEPtSM/zmPfvEdbD4c0AS0GwJrQfLO2NNvQJntqE9g6moEzBi97g5nZ0gCT ZmfN8hisoi7B1rpC5wZj58o6liaA4hc4navfJl96BFhlYqCPIIvfIC4Ye0yrTPADswbV4DEr 5O1gMoLPMVn4Iq07/QAH5Qz6liIYRMaOHVtnZIGXIHwAMAJ4tCXqBna00Xm01Z6uSb1ngfAn A7h+Z3N9r6/4jTWgZmMtS3AefqqfAFHlllxyyQpQ3ci5sTEL+spXvrIq7YUoa2rJwRvyySLD dtddd5Utttiiyue36oh8/e+cPnR1Tkbo4XpD0vgr3ZWNa4d8FHt8oUfIdDO1yiqrVKB6wAEH VB/1Iig5rl2k7ksuuaQ+BTHjrMyyyy5b3vCGN9S2uZ7drLEF+crS1Z6ucQ15kdJNIzCrfnvX JflxDSsTddI3ae4twPbszf78I2KbGwz9xO7ve9/7agV8Bx+K/rAkSj+huAHhS/olfDF8Slky nONRn0094X/O9S++8FGyyZoRtaeTa0PkhX/NqFympQXmxQIJbOfFei9B2QgEUVUEmDjvmh/p sY/BRkDCay9YkuO8fRCSZxbL3gAmeBn8yJBmLzgavOUrG2v3BF9ypeEB9JwHKQMwCp72gmUE bHKBWXmCsD0AaaYM0TWCNt4I9vGJISDUW9zSlcd/8skn1xlms0x0n5l++NXNFmR7nCvgkmlt oSUS8kJ28B588MGVTxn5No+WlcXr3LF9Uu9ZQN/yJf5lWYKZUMcGdnnWw5qR1ReRxj/5qTQ8 KGb/LVMw23nBBRdUf+BL+FwHnl4op0/j+lF29OjR1WfJ4E+2kM2X6aUc3+Ev9JMvTX38hR70 MwPN9wFP8vBrH98KYBLXED3kKweIe1nNEoMgM7DK0JX/A0PSbGSrg2xr61dcccWqA10B1Wgj WXFMX5/zIi9sSYY0egeFbZRLmjcLhA31k1iL+A/iZ+KsmXcz85HO/vzixz/+ceeTLf4lhiL9 qCzfsMeP1IFPncrzUXl49C/fUEY+GVFO2dDT8eyQ8kgdSWmBnrZAy6N7WmrK6zELuPBjI1RA EHxsEXwif0Z7A5B0QcjA6tjAaBPoyJNmjxd5vOprCGaBzGAJZmZzYs0fUGuQ911QgVW+sh7l Iuvw1Ge2qD2gGpyjTkHUMT57wZN+kWbtrPrw3X777cWaPjpqs72BGSCwPOIXv/hFbYO65ZmZ Emg95u1OvwjS1vp6nKt+aYCtNpFHB4FdfQCCR8RXXHFFPX7ggQfqi0R4ABJ2Ndgrl9T7FtBf +sReH4X/6EPLAsxgugHiX/qIP+KLQVkfS+cngAGQZwnKeeedVzdLFXw+bs8996zruc3O6uMA nZYTuHkCOuK64sehE58gWx1e+AK8panXUxHXGZnqlkZ/PIcffnitl78j9YVMfkim61Y7ydYe 6769TGk5grW4p59+em23NpNtVtYTGMDd+mIA10y29bdk4fEERF2AvGtCWSTPdWiG2fUeT0/k h6+TERS62ec29zbgF+wX/mPPJ+JmiK/hiZdY+YH+4Pv2+lIf8U3EX/BIV448hEe6LfxJnztX l80xH3T9uJ5m1a8hqwp//h/+SI+y7fl5nBboKQsksO0pS/aSHMHJJiBFgBNwbIJN5M9sTy1B yxaDu+AieEkz6AtWZAt0eNZaa626Pg+Ytc7Vh+4NiJYo+EKBrxsY6OmElI+A6dNEZkuPPfbY CvDUJajS14CuzqhLGXrLd6xuJG2nnXYqF110Ufn2t79dAYb6yQIg1IfXHgjxqNhnl3xtAT9g 7eW0CNqz0g+QQCuttFJtq7e8fe3AjJ020pXeo5tZOTMgzr3MBtiqE0AIYKAd5Bn41a3epN61 AH+KmawAga4LgNRLXfxRnwTA5Gd8vt3v+L18fWtJgZcG3djgwf+5z32uAkD16Fcvk7kmfMVD X7sB1Nf4nTsmT1k68Vvp4Rvqke6asMbVS4h0BhiUsR4W4PUiGJ64ZgBoPuZmSnnLDXwaDLne zEyrCzCmA9BjJjbixOabb15Br7J00JZDDz20Xl+hq1k5dbjuyfISGb35O10sVVCeTHZjE4Ad D342oK9j5Dy3ubcBO+uTiN1s6twkgf7TB8ixeMTWJ5xwQt3zAX0pTX/g1e/hZ9LCF/WpfLyO 
2/d8w0YHPsCXbXhiC56qTPPPeTu156vTlpQW6C0L5FKE3rJsD8ltDwCCg0CCInAIaLMiQcqg Y5BrD1aCmkDlZS355Egz22g5gs8aCXRmhiLfAG/QEjDVL8gZAD/72c9WWfQAhs1ECaiCspfF tEE5j0nVoazACDiq17m92TV1ObdZFmAfgdaxNihLJvkG1bXXXrvOMrcHasf07E4/L3/RTb0+ BQUIRfvIsMn3TdwYGKTtt99+Fbwox0bSDPKf/OQnq44Gj3gZaVb9k3nzZoEYkPli+Lh+Ak7N OFpHKw+fPrKOmr/wHwBBP/EnhMdm/bRPyLk++Bzwad2t8h75+lIHn0A/+tGPan/ji2tIffLp EbJdV+q0me1E+N2U8ZvwL+kbbLBBnU0Nv1IvvfC95z3vqUt0yLFMxjXgmvjgBz9Yl124yUKu NV/2oJfvU/v6g3aZyXMc+pNndheQD1nkkQ9w83NEV20ljz7OgXfXeQAv7Q4bKB+8VUD+mysL sGH8qiKbhi/zB/6OpPE1G1AbsVdfSOM34cv8UT9Ll0+mPVlRTp3S7NWjrD5H9sqGrzgnA8Xe sbK2dppRWnuZdt48TgvMiwU6mrd/X+x98yIty/a4BQSOrgGivZJZ5QWfb2MKUGZizWQa1Cwx kCawkGEvsBmsBC7n6pbmsZV9HBv4HAN1jg3A1riSG4Oisogc8iKgGuAFzeCTD0gGABCk5SlD hg0/2Y7xxmwYvaUFbwRjadpGpqA8K/3wCtJmQNSNyFFOunrbBwIDAF6DBV1CX3LoHWXxAU3K JvWeBdidL5pR3GuvveqxNdzSzVzyXXv9wNdi4x/8R/8FL4AW/qXv9CX+6H99Lp1vkEsG/9DX ZPFDZZB0sukW+crJJ4+v4LGRS8fwazrQCR9+QDj8iU/LUzf59FOva1E5+dIcm5kmX760aE9c 7/bSlLEEQX10pAf91EGuNOXVKx0/mWyhDtdXXA/y8YbuZCTNvQXYnK35tL1Z2Q996EP1HQd9 qw/1D3sj9ucr4qw05fHYLM0ys7/MMsvUJxP6iUw+4GYv/NPSGE8h5Hv65V0DEyD6ffvtt++M /copE4RfPSj2eFDkhS418fn0OM59WqCnLJAztj1lyV6SI3ihCBSOBQkBa3bI4BeDu8AUgFRA EgRjoHJOZgQ3stVpkDNIqjOAbAyGBi3pHpHGrIL6pLfXI8CSJTgGIFAX3gja2kk+HvINlGSQ bwAVrMmRFrrRXV1AtUAOHFj3qw14Dbhkzko/9QIOBo6wR9iXbBu7kSdfPWwiPfSRz0b08IIO gIS0Q5mk3rOAvuYr4b+O9VH4Pd/RP/zNpp/4hDR9rx/5ltn1kBVAMvqXj5IXNzTKxADNF3we z9cT8ER/63t+RLZlNEj9KHgsl1CHepUNgIhHWfrRl286tsdLdsQD566p9vzw2XZATz6Z2qb9 jhH5NnqQw3ZkOVeH600bAd9YhoAfjyc7Znrjp3SVCRuwGT2cJ829BdiZb4qv+sGm79g3fICN +RZf0Af6OPoPX/hGXAvK6Rs8cW3oUzLw81mkHB8K+fjl00GeY76IyLTRLTbpeJTHRz4e/h88 cS3gTUoL9JQFcsa2pyzZS3IEBQFLgBCYbI6lRYCYVdUCEBKUIvAZ2ARKINDAJcgJZu2BCq+g pw6bwESWgCQ4IWnylBe06Eo/wI4svMiePAOnfOXwRn3KkkEWPfBGWXWSZ8mBNDLoTAflyBLI ARpEDpJn604/8rWTLuyi7miX8vQEOOilTnWHXZRl12i/uvArB6Ab8PEn9a4F2F3f6UO21z8A mr6Rxzf4iH6Vr0/wy9On0vW7AdsAH35Aa/ztpAy5yoX/qo8POOef9tLIIpvfhr8q71w6/yHf NQG04nGs/tCJP+Gjc8hWRro6Q39y8UW76EgPhF+d8snly66n8GkybKGXclEHuTY25NPk2hCb OQ4e8umFHEdeTch/c2WB6Jv2/uZb/EFfBukLfR/xDw9f0gf6Nfxav+pfZcMvyCIfr2N951rA J03ZyFPOcXvfh450IdMW8o0x9AqfDl462ZwnpQV62gIJbHvaonMoTxAQJCIYxCBhLzi4YxZg zAoJPgYk6fJjEOlaZXuwEDxmRrPKm1mZTE8L9CULtPt6X9IrdUkL9AcLdDcGdHd9yTd2IWOS 8wCsALKbsZh0MMPv2PhlzDN+4cEfY1pMEjiPvP5gx9Sxb1kgnxPN5/4IgOpCd5cNuHqk7S7Z Olh7d95moKyN8mKJfMFAXgSVGTUjgsyM8iKtu8AWfLlPC/RFC3Q38PZFnVOntEBfsUB38b+7 60t+bGQBq0ApgGr21xc8/PDJas1n4nyz2dgF4LaPW8obB5F0crqrt6/YL/XomxZIYDuf+8Vd qUeC9gKBlzGs0/T428VuyYAPoPvp2dc0n7byOSLU/ihoXprQXWCbF9lZNi3Q2xbIAbC3LZzy B7IFuov/3V1fMwKo7TL/8Y9/lIuazzb6RF4subGkBcA1hgHBAWzJCnnS2uUM5D7ItvW8BXIp Qs/bdI4kusONNUxxl2s9oDTLD/xYwj777FOBLcHSkYCAIhDUkxn8m1FgyoAxA0NlUlogLZAW 6GMWmFH8bldxfsdyY5axyDgUAJV+AVJDv3/+85/lN7/5TbnlllvquwfW6frGcoxfJnHwtsuJ su3tzeO0wOxYIGdsZ8dKvchjltbC+liKYPlBLD0wS+tnZT26iWUHMbPr4/A33XRT+dvf/vZf 2nUXDP+rQCakBdICaYG0QL+zwLyCv+7Giu7kW0PrayHGMb9W57vgft559OjR9WsOxjbkx0EA Wd9Svuqqq+qYdu+999YfxjG5A9DGcgQ62bqru991Vir8klkgZ2xfMlPPuCKA1Wd0rK11bO8n a31zsPnGcM2LC903BX2A2894CgaCiWUM7dQ1UEVw6JreXiaP0wJpgbRAWiAtMKcWMAaZfTUe mb2NLy5YUme5gYmZ+CaumV3LEIxrxjJfuzFpY2+citlfY5UJHDJz3JrTHkl+Fhja/Dzo4WmK +WcBwFQwQC52wNbPu/qFI3e9LnYX+FlnnVWOP/74+kKZoCFd2bjLjRbMbiAIvgC+UT73aYH+ ZIHw4/6kc+qaFphdC/T1+OyFZzq6Ds26Arn2xjQzufFEcb311qsvQPsm78orr1xuvPHG+v4I Xp8WM8YBs2S1y5tdOyVfWqDdArkUod0a8+EYMAVSLUXw2MYdrR8U8NO0voQg7eSTT66/547P XbBPgAkkXYNeR/NJwGHTmyDTfOGr+XXwZmsShsQPBDSfZJnWfLNzWpPe0aR1NIv27fMzgvOh 17PKnrJA12ugp+SmnLRAWqB7CwCjxqIAtnFsdtbm/IwzzqhjlZ95Bnh92Wf55ZcvDz/8cH1J 2rgWSxbUmNd093ZPjllb4MVfH581b+b2ggUAW3e97m59CcGjmZ122qnWJN2nUv70pz/VmVkX 
v8DgwnenW2l685H30pw32/AGtC7Q/D7ByKlNUJk+osGszQtmzd+U6ZPLhKbcyJELlqFTG97J Tc6wJr+C29avxkQwEWRQ/LqMO+mYFRao8DmPQOYO3XnoIz3063oHLl0aXpu6gledyqpPurrw 2jvH7zjqYgvpNnZyY0CWdHJiTTJ5HpfJC93URV60S3572/CSGeXUETMR6lE22kFm6OQ42oyH HNTO45xc+aFDzFZEW+1DDtkhJ8ra0wePfPKDyLJGmx3khe6hG9nt8qNc7tMCaYG0QFcLiBsR fxyLHc7FHue2mJSJNHEHiT1inZjkZ3lvv/32GleVee1rX1t/IEiZxx57rMolO2RGbOyqT56n BWbHAglsZ8dKvcgTFzIg5iL3a1Vma5GZ2Z///OcV8LrQgVqb4wBz+IY0wHWBKePLglOeqtuI yeOaidoWQJ36XOuTKoDquKfHN0Gp9TO5zz7b+uEH5YEkQUj9+JBPjqlDfQITArzVLQAFwIuf XFRewFPeTLMyePGRG+UCcAHyllTIV055xB70iXrl2fCxkeUXgqV68QUvXdVDR0Q2WUCeNVzy ED3oR56y5EVZOsgnX13S2/WRH/bxoXE6hn5ka7P2qUtwV4fyeMiVL92vTDlXB34600Ue3qiH nPaBJOQr68sZoTue0JtO2q4N0ulJPpn46YQn9KR3UlogLZAWmJEFxBCxRHxC4qk05+KRvVgi 3bEtYp2YI0+cEwd/+ctf1rgt3Qtmnj6KcREvZ1R/pqUF5sYCuRRhbqzWC2UED5ugAJgAOQCY Lx8IFoIDcowCBFlSMHTqM2XxqU+WhaY920zQTivjO5rffDeH2+yHleZncSc3gG1oM6M7opkJ bOZxJ05+ppnfnVaWa37wYewjj5YFFhpR6xWw1OlNVwA7AJC6/TRi1AkcCVjy8QpUAJkAJU/A slZYnuAXcgKMqidemAuAhQ9FkIt2Rlkgk2425QFMdUW9dMMjLYCfPbInV3BVD/0cA3wAonPH ZIa8WrD5py79Qk91q8M5fuvFyNF+JB+QpIM+pDt5dCPHMhOglh2csw9ZACf9AkjT0UChbNgv 9I82Aet0cG7DSx59yIy+Upd05/Rq96WQVZXPf2mBtEBaoIsFxAwxyF7MNi6Ik2KWWGOCAnk/ JOJRxCAxU1wSH8Uan/oyLliGIE7Klye2RXwjSxky7JPSAnNjgQS2c2O1HiwTFzHg4WK2R0DR mDFjagABRoAiAQUgEmSAFwHHutoFp04rKzw3rizVMa4pOa080szgjhu6aHmuY1SZOsXbpc1v gEeAmtbM7o5qZhXHTywPP/hQ/U7uUUcd1Ql61H3QQQfVX4mJgEZHPxQhEAFcAQQFK/raAC06 SgswShZdlRe4BEWB0LkgSJ4243GMnCuvDoA66jjwwAPrp2Te+c53dtaBnz3IswW/dLoDioju MbsZoFI5drWxJdv6EsXhhx9edZVO57333rvqhkcwVh4v/bRF2+moDTEjCtQK4Pi1JdrgHNFL Op1RyIhZ2Ajy+MnB/+1vf7usscYaZY899qiDAXuzGRnK46WPdoUfSSOLXb71rW+VVVddtbz3 ve+tfYVXG0OHqkj+SwukBdICbRYQY8QvMQ0gFQfvuOOOzplbsdDLzl4O88tilhWIKcqIQ4Br HaeaNDfYd955Z5Uj3SZ22fCijEdtxs/DubZALkWYa9P1TEEXsos6gkGAMYHBG6UAENAkHQ+Q EoHCutqOaU3AaALEypPGl/UnNluzX3XC+LJIU2bo1CZYDGmAT+1lj7lbwFl9Q5rj7bZ9VQFq f/azn1XA8/a3v7384he/KEcffXR5xSteUYOTwKY+QQnRQfBC0skCptpBGcAUIA+PfO0AprTL pj1Al2M8AYDJF0TNhu67776d8p3LUy4ee0VADBnybUiaGVT60UVdyikf4C+CNjk77rhj+cpX vlKOOeaY8oEPfKDstdde9bNqp556al33TC4+epAFxEszA2uvDnt56gU89VWATzrRhQ50w4/P cYBoNnbMFnQMEE1naWQbSMiMsnTC69zeOZlsCNDHwCKdfnQiVzvwJaUF0gJpgZlZQPzwtM5s 7V133VX+9a9/1dgnFtl8d13arbfeWmPhCiusUGMVefLFHTFNjBej/v73v3dOynSNP+KTLaj9 ONJynxaYHQvkyDY7VupFHoEDILFFsFAd8GIZgou/6+zeiy74BiwNb0DPUg1wXLHZr/jclOa4 WU/ZbB3TmjWaw5s1o1Ob2d0Rwxtg1XocD1wBXp846OPl5ptvLH/+859rsAGszjvvvHLdddeV Qw89tAYld+h0AIjUGzOF9AW2BCx6RxucC2ahs2NtMXsI2AFp5AHC0WZypSPHwOJxxx1XwZ+y 9D3kkEPqbKXgCBSSFTOSykUGB9F8AAAgAElEQVR96qcz0h56o6jXcfCaNQ77v+997yt/+ctf ip+AlE63L37xi+Ud73hHfXvXObnKahMd1IVXHdLoJIhrH9s4pq+65bETwEmOfPpJJ9OeLmFn dlCHcxtZ0tQH3OKPOpVzLJ+e8oBXs8rS5YdedMHDpvRJSgukBdICM7OAWOxJj5eYH3jggTo7 68mb2Cdu+XSX79SarQVyl2mWt7lpFnfEQTHJccTkf//73zVdffJsXUn8SkoLzIsFcinCvFiv h8sKAAFuXfACB/ADHEkXKAA9x3Hx4xvWrF4Y/lwzmze0mYltYsICzdKEYSObmdFmxe2zUxrQ NKw5amZuJ0xsXgJoAtKIEaPKKqNXbdKGlmuvvbbKEqhCrjtwvyBj1vbqq6+unxuzPkr+y1/+ 8tpqs5vSgC4AzIsBwBJ9/YDEZz/72ZpulvaUU04pN9xwQ9lwww1rkAMW11lnnfLNb36ztklb tOPDH/5wfWnt4IMPrnzbbbdd2XbbbetygCOOOKKsttpqxVIEQI0dPvnJT1YdQwfLBgJsn3TS SXUWYezYsWWbbbapgZgM7aUjGfYoXta74oorqj7aw+by6UYmMCjAH3nkkRUkSmcP7fSDGtru szba7nHbDjvsUPksIWBP7QM0f/zjH3cGfTcRJ554YrWTfDY0AGibx32f+9zn6rFz+uj/WKPr mF0Rv1EnIE4O8nUNdQOyF154YS0vna3cENgHCJaelBZIC6QFuloAUHWT7GXitdZaq+yyyy71 G7SXX355jY2Wb0k3flhna8JCnBQHxUdxKsYVN+Tim7gTaeKVWBpjgPq7nnfVKc/TAt1ZIGds u7NQL+ebSXORA08AjIvfhS8ACBYCBR5pAIxAYB+gpHl3rPkurZe3mq8XTPfKWLM1x83HVpoX xqxBbX3xAEgGaIY05+MbkARYojHNOl4gDwCKu2qP2ZE1VXQQaNZdd91y7rnnlj333LN+tsWv xwh68n25wWMqSwesAV1zzTXL1772tSrXDCV9gdoDDjigglR1AYgXXXRRBapk0gGQ0y7LIwRE 
M6jAqjRAjBy6sNOvfvWrCmrVt99++xUzAcAsHvnKA+FnnnlmrUN73va2t1X7mfWkQ8xaxuOz mBWli2N9wm5kSQMUBXT6Wqqg7ZYt2KtT3dp+zjnnVFs4f/Ob39ypN1B7UdNm5b/+9a+XnXfe uearB6i97bbbyvuamWOyzYR84xvfoHYFpQBtHKvrt7/9be0HvPvss08ZPXp0+epXv1r1ZKtj jz22zqK8+93vLqeffnpdA0cf7Y42BQiugvNfWiAtkBboYgFP6MR2s7SPPvpojVEmPUwWuHk3 +WEm15hhHDNrKz65EY94Ku6YJACQI5aK6QFqVRlg1j4pLTCvFkhgO68WnMfy7Rd03MUKBMCS veAgPQh/AJIG5pVJzTKDKUObF7SGN4+WhzVfVhjePOYeOb7ZTyoTpzXLCKY3s7VTvWSmq5u7 5+YHG6Y2h3E+fGjrpS1gR7ARlAQgpC6gS32WRVxzzTU1MAGLgJ4fkvAb4MpedtlltQx9gTR3 8vGygWB2880318BIPpkAnOUGwNhvfvObWtYjLbzRPns64Ufy2MQHvpF6EPv4ZTb5u+66a9VN OqAo6KrT8dprr1153TwAtUBed0QmsgdIf/jDH9a1uNbeIrO9bghseHyr0Swxm1h3ZjZDOwB7 ZKkDG5kJB9rPPvvsakN5V155ZdVdG9kESGbD0EE72UP78QDZQYA+fr/X/qpXvaomX3LJJVUv a23N6OozxLfIavermpH/0gJpgbRAmwXESBMBES8uuOCC+qSu+cXSsvXWW9cJhfPPP78CW0+6 TMS4CVdOTFQuZIg7YqEtxpUY58Q46caciHdtauRhWmCOLJBLEebIXL3H7KJ2QQMu9oCLi7w7 qvzN3fDT0xYvD/shseZJ9DMdS5TpCzYvWzVfQ5gytZkJbpYijGiOBaihw4eVhUctXG659faG eUhZapmlK/Ajx1038GM2V1Byhy4d2dNHvqAkIJnpdO5u3MxgzGLixwuouttHsV40gpeX1Byb 7TVTawYSCYwAKGKD2CvvnB7kovvvv78zGMoDJkN+2BC/dOd0DVAnXdAFMoFfe+2it+NYihDg DxAG6J3/9Kc/LYc3X0/49a9/Xe2EP0gdZKhPQFe3ukY3M6ry6BnLAdhYnmUDyBINNpSGBy29 9NIVKMdsh/ZZkkHWe97znrrRmw5kewzoZQ9pAK324rU5RmTTL24yamL+SwukBdICXSwg3sTy J/Eq4oxYI77JF2vM7MozxkTMi/gm9kSscSyG+tUx8WlGJH4mpQXmxQIJbOfFej1Q1kUsOAAa AI397JJPfZXmywdPT1mw3DR9mXLf5IWbn8wdWp4avngZN7n5+HXzFdvho5pZ3SnNR7WnNS9u DW8el09qZmGbH20Yc/e95bnmZbJtttu2/rIZPQAiAM6jJrpYF0u3IDyCmaCGxjTLGMx+ClSA qpfQ5AGFliCQ5UUC5fA4BtA8whIMLTmw5jWAoXVZHnl5nIXoIFAKmmSxjXJeZFCPGWM8ABsZ dFOPgIocK9NuU/z0QdomCGsn2nLLLas+6tQueQIynT16Q9YFW/ZAD2RdWXxmqyY0/9QngNOD TupjK+l0NVCEjvJihve0006rL+/RWzpdyaGnmwd7aW4W6AVgW6driUn7khWf3mEPgJmsaLP6 ndNJGoBrn5QWSAukBWZkAbFQLBJPxJlNN9203lj7uoEY7OmRmOKFY1/L8cRI3JQnRsmzORbL Qtbjjz9e02dUpzR8SWmBubXA7KOoua0hy82RBdqB5OwUHNLh3qQBn81P6DY/KNu8Lta8PW/Z QbMNGdJ8IaB+x9bj/wYgNW+WDW3Q8KhRC1QgdPR3v1dGNzOJ+++/fw06wKN1qB6ff+c736nA RzBC1thusskmNc1MoXQvOF111VU1CHnJC9Hf43rrXQUzIFca2cCcNEDRfquttqoB0PIBZCYW YFRGYFTOTK3AGOAXwAOGgTR64APqdttttyoDuA5edQiy5JABVAYgVS6Aprz/+7//q5/8opMA Ls0yCeDRI37ngCCAq+277757rdsMKQBJtmBsr744/v/s3Qe8ZkV5P/ADgmKkKUFjKC5dQEFA mgKuVFFAiiVRjPQqCFhi9K/hk3w+xkQpghBEBATsCgjSBSkKCBoVkCK9KBYUIyAiKP/zncvv 3eHN7t675d7dlXk+99w5M/P0mfPMc+ac933ZrE3gJ2+77bYrvvAKg9cZttpqq/KKh2TT+2ro 0JDt9QI0AE9y6eCVEMDn2uH7YJwPk7GLLHgZE3zzTjV+/Ava4lHc0P41DzQPTMMD4qcYZ3fV ZoVY6HUrr0F9+9vfLq94aROngVenxHoxTwwCYo74KVY51ydOodEGUpZK+9c8MIseaDu2s+jA WSV3QUswcrjgxwpeO/hL/20Hz+t3YFd/4vfdivM/3H8DwuPdrU/+vvvlfAt1v53/77r5/rxQ /2GyPun6y8jPGv752f0vcy3Q79L1O7dXX3Zld+gDv+0/Tf+hbvLkyUWsYOST/vfee2+pSw7p 5hsQ/AyiH2+QOPrBBDuVEs1dd921vO/q0Tz9Bba3v/3thV7CBuxSCm5o7ZB6V4vMTTfdtCRs EstddtmlBEavBnhvS9Knf7/99it88ZEU22X0oTEf5pIc4ps2+kpe6Zyk0u5r6vrpRw+JYoKt D6p5D/XQQw8d6ImvVyzQ2E327QabbLJJ59sa8PMerFcH8Oc3fEF48kUtb//99++OOeaY8tqF vssuu6zYSQ/fIWzHVjIbe7RZCBxZLPjbLopvh+DvvJ+MxqsMWSx8a4XjC1/4QvnuyBtuuKF8 mI5cO+d8ZIFp0DzQPNA8MC0PeHpmV9bNuydFPkDslTVxREyX4Pr8hK/7Wmqppcp6IPEF4hYQ a8UbdXETbYPmgfH0wHz9p9vbnv94engU3klsfdBIcrTGGmuUhESyI6kaDf7SZ7fLPPJAt9lD t3WrzPf77okFHutu6p7bXfE3K3f3PXfp/uNii3RP9l//teACf+6/J+HR7sln9R8u619LePKx BbsX/e2Luwd//5s+kRp5rC4pEoAkSYKRQOSO3fuvHr9LlCSq2XGFQ2e6SrjgSuQELgFMu3rw 8bdbKBHMKwbkJQGVnKpHNj3whw8HPeAzPOkCR6KGH3kAvn6vVki+6QM3gVVCJzj7+q2U6NHh TSZch3O0dKKDfvzoErvIpIcdDb8UloQxwTx19GRIUvlLyWY+AvjFdr6MfXD4ky7lmy16vexS 46WP3frpRg9y8dSHHzl4ATqwCyjbIlNc0f41DzQPTMUD4oldWK+IufEX3xK7lGKN+CMG+zwA sCmSmJs4JnaJRWKU19auu+667vjjjx98oDifqxDHxCVyE/emolZrah6YrgdGVrjporTO8faA izngwh879I915u8Trmct1N2zwCLddf1P6DruXnDh7uFn/035aq/uif6nXnueBvrxx/vvtO3f 
vF1ooZFE6pe//lUJIgKJoCNI0UUiJhCpa0+Q0iehgi9ZEnzgJhDpF8iAc+0SKXf2gG34Jdgp 8dCPT+RrQwf0OZdICqCSP/oIpErBT7CFkw8wSJ71e49LW4BO4S1QS2aVsZM+zvENSCTJyON7 7ZJGfkgffAefkMF3bJV4sle7fjbo45fogh9e2smxWLCT3cr4H7/4Aq5zEH5o4dONTeRpk9iT S2Z22PkgOhcm7V/zQPNA88BUPCAe+Q5bh00XrzWJSdrFEDHON754micWSWrFHfEqa4F4I76L eUAfnNQjVnsNw/11XztvHpieB9qzyOl5ZwL6XLwu6PqYEbF/7pOZR/vvpv35cxbtftf/vO6f 53+8+33/AwyPzvecPtHpvz9wgf4F/v5HGh57/I/dIosv0j3y2MPdY3/www/9h8uekOiNPI4X hASpJJcSpyR6deKaXUQJY3YYtWVXkD0OQU8wA84FwySH5OCdAJidTz6Ah06QREcf7QlyaCWx gqV2gI9A6z0wySYcsrRJ5rSRxw6HBBAPsshxjgY/fCWHSV7h1HrpT6DGh4540sE5Ov3a+ES7 Nj6SwDqXaGbnAw09w1M7HHbjET3pTxd86EwnJVo4cPHFxy60czTGKb6MH/gsesav2ho0DzQP NA/UHhBHxRYxxSsGnnL5vIV4KdY4bCD4OkjxTVwS0xLbsxaIXeKRvuC02FN7up3PTg+0xHZ2 enMmeAkAEpP6gsdG0iQAKJMgahcMBBOwYP9zufYW/9i/Q3v/c5fod2UX7Wt9Mth/aOzP8y/U f1Csf/T8ZP8Cfz/Kz+qTRb881n/JU//OLf52RH134MhdtMATmX1H0Yl8+nmPFCQgKeksOQLo 0LMhIBjGJjo7YkeStrSjdR6aJGrhRYfYDE8d0CMlmiR22uCQj3fkaQ99rWuSPP10gCMIxz7t gJ0B/OFFB+cgbcPjVvOTaOYGAU1sQxOZ4VPbATd82RQe8Uf0w4N92uEpHdHReXDxbNA80DzQ PDA1D4gVQDxx0+0YC4g1dbwRn8V/8VW8w895DcFPm9g63Ja+VjYPTM8DLbGdnncmoC8JTy74 JH+SExe2wKJMMNCfxMVFL7f7c4/z8LNGfh62VrkPC332l5Z+V7j/zoSng2RsgPD0rlZrHmge aB5oHmgemAUPWKMktXZ43dBbv6xnNhwaNA+MlwdaYjtenh0jXwmtC90drCCQ3bTUlfr1OXeA nA/f9Y5RbENrHmgeaB5oHmgeGFcP2ITxdMhrWTmX3Ep2rWkNmgfGwwMtsR0Pr84ATxe5XVsJ rSTVB5q0eYScR9RJfrN7m4CQR/gzIK6hNg80DzQPNA80D0yIB7yj691+a5kEV3Jr5zZr2YQo 0YQ84zzQEts5POQS2HwAyM6sD/tIdCWzvqNUEpsjCW52bdX7b+5q0DzQPNA80DzQPDDXeaBf 0vrdWQmtb0XwpNFTyf4zGo/3nxOZb/jVuLlO/abQPOqBltjO4YHLYxl3sBJYO7b5UJC6ZNfh jjf1ktA+9UqCwNGgeaB5oHmgeaB5YG70gKQW/Kn/ISHrnM91+KBv+wBrcUv7Nw4eaIntODh1 RlgmSbULK3GV6AIXvVcN/DyqIABPmxJk17bd9RZ3tH/NA80DzQPNA3OZB2zISGY9hfT92r7a 0c97p30uU7ep81figZbYzuGBdMH7xKiE1Qv1goCkVVKbVxP88ovXFSS+AkKSX6q3NxHm8AA2 8c0DzQPNA80DU/VAv2T169hI15///GT5BUvr28jO7VRJWmPzwCx74OlfJDfL7BqDGfVAXkXI TmxeN3DxS2wlu5JaoJ6kFl6D5oHmgeaB5oHmgbnVA0lq6Td//wXqEtps3rQ1bG4dtXlfr5bY zvtj2CxoHmgeaB5oHmgeaB5oHmge6D3QEts2DZoHmgeaB5oHmgeaB5oHmgf+KjzQEtu/imFs RjQPNA80DzQPNA80DzQPNA+0D4/N43OgfdvXPD6ATf3mgeaB5oFnhAf6r6/s+s+OVMczwuxm 5IR7oO3YTrjLm8DmgeaB5oHmgeaB5oHmgeaB8fBAS2zHw6uNZ/NA80DzQPNA80DzQPNA88CE e6AlthPu8iaweaB5oHmgeaB5oHmgeaB5YDw80BLb8fDqLPB8sv+iCm8ijZR+gEF95IcYpvbN tY8//vhAmh9vGAv4jtyJ/A7BWlbOoytdAvPiTyyyJwc7hu2LbeNZxm/xqXqtR3yc/lrfsegV /inRhKfz8HUOavkjLdP+Hz2VNZ16LWNaHIZxUq918iMoAM+0w4vsafFOe+wOba1ncEYrIys8 1NMW/r6zOpC22FNf58FJX+rjUQ7rGP3J0jesQ10PbcrYhLbmoz4vQ+yKTeqxeax2hQf82q+1 P8Nf24zyD9/oMzP0oZ2ZcmQNy7pmTRs5ZoZXo5m7POC79afMWSllP7ZP9mPdH/PN96xy+HEO 7b6vH64fUH3Ws3xCyPo/taNvfhqM8B35Iq/6/GlIgwqMBvOwB/xCWQLe8K+5JHjpt2hmcTS5 /NjDRICATZZSQE6gzg9SpIyuKf/4xz9OhHqzLINtOfg39hmL+HuWhYzCID7jSzIFmowvPbTD iU7O0z8K60KHn/FTxiY8JYyRXfMhB/86Uav76/P4S1nrbc5mbtT4w+fB+cMf/lC6wo8Ogec8 5zlPu0bYgm5GfIAXmmH/Rsb0SjSRRS9+UWejg93g2c9+dvGz8/g6dOp8DR84j+2lYZz+kZ+x Md7xq+szNhD96KOPFg2iU+aFsrYBEn74xJZCOA//i60zOz/4MnOAb/gLL/MEb/On9r3+2vfT cx366Gfex+e5TqZH2/qaB0bzQNZ1c0ycVZq3z3ve88p1br651v26qnkMH8DJ+WgyZqa/JbYz 47W5jEaQM2kCOU9AM7EsmpLggL4EubSNR5kFWSlYZ2EUWB3RgQ1wlCb8Qgst9DSbxkO32cGz vljpH/skWrW/Z4esafEgx3jynQPwbR08LIza4BoHfs/8mBZf7eyDxzbjEpu0CWTk4RdcMgLm 3GgQf6VMsBuLbnjDIzO/zkdPkEQr1wL+8PCPrII4hn/T8+9o5LUdfG4c4sPoigc8uqUNLjz+ pXfGlu7w8Mm1M5oOs9JPRuaL8c4Np+sTRF8LF6Brjuis3VwE7HTgO6PjUBjMhf9mZX4YxzrW 8V3A9cNHDr43lzPmfDgW/2X+4ElPNGQo8RhvGNmpHW8pjf+c8oDr3pw0z6x55qn5/NGPfrT7 xS9+UeaaOe76T5/49bznLdLTTNl8mKJ/dmOntIycZVd3uH3q9ZbYTt0v80yrBdzCk8VS8Eui oUxiEoMEswTPsQTG0M1smUmvrBdjsqMn3iZ7Fr/oF5tmVvZE0NHR4s42emexSKI1ETqQIbCY C0rAtxbG+FiwcZ6EL+NSkKfzL8kpfHYqATnOjZlz84wv+CB9GcfpsB90ZceVDDzHOvZksSt6 
JfFKooVP2owN3OhH57ECmqn5dzR6dLUtgrs2vjHncw1qc4C0synzKe18w0d4hrYQjdO/yIhu tS1Eaq91zHxDR3+2ADqD8NGf6710zOP/2DUz8yPXF7+aE/yk5DulI37iW34b9vlorsuYGAvX QuoZi9HoW3/zwLQ8YD6Zs5m3YoF5przmmmu6//3f/x3MV3PaNWI3N/F+Wnxntb0ltrPqwTlM bwEX/IBJlgUvwVBdAIOTCSewTVRQE4jJVVrUlfWCpy7BcGEI8vRUopkXwAUsqY1ttV/rBX+8 bIkMPjUX+FNb/Gesc06HzJex+th44I0vSOmcrer6zbMkXfrIzQKqPj1A50aALHMhi/30aIb7 opcEHh/Abr7Ijpg5Fn/Ap/NoMJp/R6PXHx5KB73oEZ35zxxSV9ITqOfaVc814lx7+KqPJ9Av sugUv2ZuRBdjB1cdwI0tGRPt6Z+ZcUY/N0H8wu6pXX+j6RpfwMv1giffqes3fzM/9PGlvsge TYZxQWMs8AJox0o/Gv/W/8z1gHmY+ZWEVVxd4FnP7i44/6LuvHMv6O7/+S8HMX3BBZ/Tz+Wn v1oz4r1p7dQO+3ZsO7ctsR322zxYN7kELkFLkmDhsWgIXCYdgCOwKQMTFdjq4E0/QdsR+c61 gyyAdGXH3A5sk9SypT7i9/HWP/6KL9UtsvznXHvtf32ZA3BGg8wbdMZICSy0+oxdZCZRUScT zlggCSafOUfvPDZNjwd9zJPMe7j0UmcfPeDgS5+a71j40wUEN7bGv6VzlH/hEX2U+IUnH2oD 5lLt47AOLlvZwMaJuD7IoJ+D/9RjR8Y/OmqHF13p6QD6oq/+7KKHdl4tM7axeUbnR+iNeXax 4kNtmb/mBVx1B3nKscKwfuEzVvqG1zwwNQ+Yl2Kt+eSatkHxyCOPlPn54hcvWXZtL7vssu7O O+8cxDZzcYkllngqHkzJR6bGf2bbxofrzGrT6GbKAwKepMLksmuwyCLeXxl5RPurX/2qe/jh hwd84SaYwhlvuOWWWwY7Y9EzFwL50TP6Z4FPYjLe+s0O/mywaMcWPJOszQ7+0+MhuQD8JrCo R7bzJBtJMpM0oanP1acGSUwstsZIefbZZ3eLLrposZfNG2+88SChISc6CXpjATzyfhb86J25 MD0e9IFHNyV9v/Wtb5XrAF/H5ptvXtrpwzd5TSTXwfT4xxa8p+bf6dHqI4OOOedz9SOOOOJp yam2c845p+jr3PynL3yyLQYO53xvvmWcC/Nx+kdeElL6qEenT3ziE4PElZ8BG+hvcaOfAz4e 4aX/BS94wYB2nFSfELazOj/it0033bSM61FHHVXG2fjzE1DmusqcNhdCO5qhuW7RAGNyxx13 jOn6H413639me0BMlHO4vpVJbvNk4PnPX6y7+OKLu2uv+UF35x13l2vefH788SmvKfURfyac OP2d25nhOBNKNJLx8oBgJbhaQAVDi8hvfvObEhy1vehFL+oWXnjhgfg6ICbQDTrH4STBGWuB mE5ZkF0M6lMDwXheAIk7G9kkqYp/2TXWhWdW7IwM4y6Y3H333YUdPRxJTJNk8mvasihPT37G ij1kHHPMMd12221X3pWKjFe84hXdhhtuWNhEjrk4FsADSDbxB8p77rlnmnOjID31LzThc9xx x3Uf/OAHC4/ozN7999+/UGgz77TVc7PmWZ+P5t8ad2rnfJybCrzIPPPMM7szzjij2JdEkf7/ +Z//2V144YWFTeZ/xggt3YHFA11sLo3j+I+s2EAMuRJYdmTea6fjv/3bv3UXXHBBeY8u8ww+ HiCxyrt2Y/F/IZqL/83q/Aj95ZdfXubkgQceWKx1HZmnGWN15zkyL0ZzDfxcI3DRaVOOlcdo Mlr/M9cD9XxyTYtb4pSNin6a9XOs65ZZ5u+78847r7viiiu6Bx54oODYbHvuc583bo5rie24 uXZiGAuMAiCwyAha2ZGysJh4t99+e1nYJR8WmEsuuaTbZZdduo022qj0S87uuuuusjAJoHje fPPNhedtt93W7b333gVXuzv9wJ577tkdffTRg0RJf3Yc6PHJT36yyHX+qU99qixkFjPJkTa6 kFcvmnir0xk/+islOuzJ4p7Hdj/96U9LP14O+grkkydP7tZff/1BULerqA7ghG940wdvi3L6 lCC66M8igQ6f3XffvbTBZZsSDj+9+tWv7vbZZ5+in7E58sgji91knXvuuUVffPCAe+uttxZ5 +QePf+EIGPjzb3TAQx+7JbXa3/a2txXy2KAvYEzDS+nxEJuU/LPHHnsMdLUbF4ADn/x3vetd 3Q033FDsNO/I/O///u/Sd9FFFxVfmS/40YGP2IU+h7FEh685gq8+uh522GGl/o53vKOI184P Evbg4Zv5qc28gEcfiYEPLaDRBvc73/lO95nPfKa79957n3atGFcy4TpAzduY4MG3u+22W+l/ 61vfWkrjyQa+g2N88HItacd733337V71qleV65EPtINtt9226JRrCb3z7373u91rXvOagsNv 2slJP1sfeuihIou+ZJMZvgjhuibId31/85vfLG3aQfRVdx6gN9vj45tuuql04aUtibYxI3Ny f31deeWVRQ4djYEx4vstt9yyjK1xJqe2gQwH0MduJb5A0k6eeEQn/rMoRjdy+IYO6OAA/nCe diXd45uML1w+wS+6acNTPBMj9dH9xhtvLDjwHPEXXe24x1dkuU5B5m341/7Gg73RKTZrz/wx h/FCr03pmtcePZTDMK3xg+e6RIPXXnvtVfR24zk1PsN8W715YCweMLdc5+apa801ETBdl1xy yZLYfueKK8vOrc0263zmNHqA1rlrJG3hMyPlCLcZoWi4c5UHBCiTSIJoUTBRBEyTJoGRwsce e2x3yimnlIkzadKk7tRTT+1OO+20MoFMyOWXX767/vrrCw/Bd/XVVy+B3ELy2c9+tjvppJPK pIWXwHzIIYd0n//858+vKRQAACAASURBVMuEtoiAL3zhC4PJ+qUvfamDY4K6m6OXhOSAAw4Y BFX86a6kh4mtvsoqq3Tnn39+kXXfffd1L3nJSwofvNgmeYe72mqrlUfP6L///e8XOguOx9ES ErpbhFxw3/ve94qOK620Ugn26NmKt35y7Eb+6Ec/Krw//vGPD5JjPia7vtj4+uSTTy5tfJ9+ 7XSw8K+44oqF13XXXde9973vLXZLciQ3EkQ68DXdndODn5xLXCVqzj3aNT4HHXRQ4W0h3mab bYo/+eNjH/tYN7lPNowpPdCED54SlXXWWacs4OqSvBVWWKG7q08g+E7ypx+dBf1DH/pQ8b2+ 2GyXTuJFX3OAT9hJDvo88j/hhBO6r371q2Wc+PblL395d9ZZZxXeEi5tGSPzAy+HxIiPgGQZ 0MdhzvIHPP5bY401Sj9bop/5wl9w6B3gHzyWWmqpck2k3bUDN9cMGnOPXebMqquuWvgY2698 
5SslITLfQa4714NHbXCMJ9/EJ3aPXXNw+RrfyKQnf9KL/nZw3eC4ZiVD5qi5yT5184jdXjOi Jzol/QNwyWaDMZEo8xlcN1Xs/Md//MdSd6P5lre8pZDSae211y72uU5dB64rvOmhjBy8HGTs 0t8cSxKBa5cNkkNAdzpLuOC77vngJz/5SemnJ33YwH4HGa5Zurtx5Ytrr722+/KXv1za3Gy5 7t/5zncWvxx++OHFZ5J9vOjsNRnyyHnZy15W/FEE9v/YKWF94xvfWJJe/vr3f//3cgMAx7z9 +te/Xl7dMlc8ibCDzifsiS8k2h/+8IfLmEeWeYyfOMkW7ewxP9wsqxtHPmA7gAf00Q39pZde 2r373e8u9qq7bl3z5o761GB64+ea2nrrrct1T47rhl7OGzQPzC4PDM8n12P/V+BPf/Kq5Pzd C1/4gvKUxzrsWhBDnnhi5Amaue0YoZuvPPGpX6GcUT1bYjujHpvL8AV/YGHNe4/aLBB2nCwS Fo/Xvva1JTBnUbQT8uIXv7j0WwwsqhYCE+vv//7vu1133bUkI/jY6RTU0QIBWbJg4Ydvkt7V J0h22yxEwGPKddddt5vUJyTgfe97X6GXsNoVkZjVgdrCiD9+Fkg7ZFtttVWh9aK5nYZvf/vb pe6CABIDi+tmm21W7LAQ2SUjGx+LjN3D97///SUBR/ONb3yj22+//Yo/XIzLLrts4S0JkOyz WyIGLFYuwiwEkkt8QRYnOjvnc19twt8ONNolauzkK/pZWPmbfS996UuLzeRILNhlAcuC94Y3 vKEsQGTwj4WbzpIFCQtd+ZOcgw8+uNwRkwWH/PDBU6L/pje9qZvUj4f2pZdeutgtMdHv4Duy lltuuW699dZ72m4cu+FYmI09mewkDz90SonHWmutVfwK39MBerIFSGqNyemnn95tscUWZd5G V2MB3AR4Xws9kBQbM4kzGXT4h3/4h+JHOHSRfJiHEiK6Gg86Ab4DaPkGjaBZv6KjD15uUIzN 5z73uYEOXu9xIxFe7DaOXnGY3N9Q8AUaO3bmJbBjKrmjH/7AucPcduNHV7T88eY3v7mcuymQ 3BgjvpnUj5nr05hLYtiAXw51+rDLwQZ4ziWwzvFTv+qqq4oMY/DDH/6w6CTxovfrX//6wtPu isTbjQL96IAWOE8bfZN06qO3G0Mg+TOfVl555bLLbJ4aQ/OQPnSv51H8Xoj7f2IZPHad3N88 stEcJ9+HUYAbZDvEkn02SHglcYBc9YwFnR387MmEhNM4fOADHyjXjWtPnHv+859f5jD9/+mf /qnMUX2uiV36WEOOufzb3/52MBfMR/a48aAnvmyzU8oHYi8wv/nAvDHmAbYD+rlxxAPgaU5p l6gbg4yDea4fTG/8xDs60BE4xyMxtDS2f80DM+0B83hkIybz1nwNmMoLLjgyT4Vj15E4cdWV I8ntSJwf+XyQb1VA60bV9SMGTBtG5E6rf0TitHpb+1zvAYE+C7jF2kLgEDgla4KfQ1IGTD6T R3CTtArCcC08+tAKehIb54KuEsAPPRwLjx0PE1UAtZjYsRFo1S0AZKGBKylQAnITpPWDJA2/ /OUvuxNPPLHQarNwHX/88WVHLLqwWyJjRzb2KO1M243Ef5lllimJhwXOQsZPPkxnJ02/w4WF t0VeAi0BBeTgByRJ5LnwABvU+S0JFBsWW2yxYhO7gqNdXSmRkoCxac011yz0zsnC+8EHHxz4 OvyVr3vd6wa+IN940l2ijG+SSn3qDv2ZF2Squ9FAyy66211Up6u6YEIf+pKrD2gD+Eqqsyiq hx/+eKAzT5yTY5eZv9GoS3yNrXFQ5z8JoPMciy+++EAn/HzRt3HFN/LchPzsZz8r85NuxnHS pEmFBxw2e28RT7op81ievzMWsVF//MUfEiRzu9aJHH2A3nbDvFbDX+xT2qG1K01vtMFX4k8X IIl00wTXrhz9JUB8ykfmK7/Dx8vNApkOwL+ADHId+CdZNJ+iAxr+wBu+dvhKcFd/U/rpT3+6 9NGDnz3+NtaAv8hzRD69+EiibDeTDXwp+YfvEFOUrl+yJfnhQVc+w4edZEY3Mt0kRhbf6cOL jNgem+G7UXYjC88BTz1PkvAgR7sbxNhDLwdfkMd+IElNDNKGTqyxm00PuHlVJXM7PsdLG9me PMCNrebx/fffX2Tknz5jTic2ARsPsUW7RR7fHPADo42fZCLA3xtssEGZl3g1aB6YVQ+4fkDK ml9/KfbzWAwZKeUTbthtVngtwXVrTqJNjBKbZ3VutsS2HoV58NykSDAUQNVNCkHRIpcgrA8o 9f/+978v9QR1Cywa9AKpR+P4mmQWBAuRdjgBuJv2n+a1Q/rjH/+4JGsWZHWLvnft4JDpSOC2 UJjgZOMNnJOhtKu38847l3M2pD27TmyCJwn1zmfq+FvA7OSw0Y4JvSxIdrvYY5da8o0m+HSx G6T0GB7AJQMOGwLqbNFvsQvkQqQrYGMWpuC4ubCIkU0v9OjwYqeEjkygze6WBcwjV+14Z4yc e/SNPj50rl2SQIZ2utPFDpPFOoCPnVFAj9rO+LMea7yNp5ulJAuh1Td58uTyuJsdbIwf7H57 z1gdX/PObpM2Y0JGXpFhY+3z6GRhtmOf5DB4doLZSj5cr3d4l5af2Wxuaofvnd/4id5JlvSD +KvGsZtXyzIOgm9wzHVzzQ4DHdiutNsJB09zBQ9l6LQDrzu4Aby0f/wsITTmxss89DoMoCd6 8yU+hMNv8MkLaNPH/3yAThsemY9w63mrPmnSpLKLCZ8NDueeJJCJHxqHdpDx9YqDVzE8st9h hx1KH73gupacs5ce5k3oC+JTfPBKe3xWzyG+A7FXPTT4AjbYDcXHwS/GynUdnvDc/NMDL7QO urLZvAR4S37tbuJlPDKPvMPuQzDoMm/JIgNvfOHyuadc4YF3dDNvyUTn0J4dWH7U52lB8OnE FmMLlPoyd9luF1rb8PjBF4vpE/DaEP3DL+2tbB6YWQ+Ye8Og6bHHxF/XlOvjkT4O+hDkyBOZ L37xi+Xm3gbVAgvIXUbWK3PV/EyMeTrf6e/UBnfKypyWVs5THhAYBTjBUHAF2oDFQfCqg5rJ YpHTl6DqkaKEIB8usuOpLlEQ1LOYJ5CGP97keoVBwuMVBu/w2f2xm2Nyki3YRkf4v/vd70qf fjqnLEr3/9B+/ql3GeEDO8gWUWDxcEhWPGpFTyelZMZuE90kC5ITj1Sze6yNfgnqLkgJmyTD u3fsluQDibGdk0n9wiEpsVDTxysXdlyBR3zk0idAFzaDjAkcPPhhxx13LAsXf9PDY88sNsFB Z7fTKwuRlQQXX4987TQbK+Dx5iabbFJ2YY0ZfdiWsfKomV8s6nSQ5NrR8hoHmfEpGuf0DdAl Y+2DLR6PGtf40BgYf4kQOvLNL3L4lr/NO/ja7Uz+13/9V9npk+BKACzsdrPQSxS18TU5Htuz 
FeBBR69u2FFjH7zoy0d2zgE87WyWjNArbZKjyMiuAdvsgtNRoiCZxp8dfIBGos4W7eYpGjt1 cOhmZzXzLzrRhR2ZC/zrmjKGbgIFeNdaEivz0I60HVMy7fCZl/yQcSVPws929gG6uE7sdua1 JHLxAORHTz427oAd5OnHT7vH8n4W03nsDT0aegBPE+jmlQSvusRf22+/fRmzu/rdYH5gi537 vKqA1lh4yqPfKzpoT+5fO/AeqDay6RO/0SOQOQqPTa5v84e96uhcD8YCTniIT/Bcy+SJDXYw jbmbJEAuX8MD+BkvTxb4yQ2h+eGJECAD8LndKPGVz/nVji07zQ26Te5vAPkriap2ugJ49DLv zE368Rkwx9gAB/BF9J3e+LnxJy9zRJwDmROl0v41D8wGD7jOHMA87S+b/rpyAzdSd12JO+KT uW2TSQy/sv8sSuKUG3JzHX14zYxqLbGdGa/NRTQCo8NEMHFMBrusmWReCxDETJzgWEDhJqi+ 8IUvLIlO3pmd1Cdy7qKycJpoErUsZngLzECA9SEUuxD4/d3f/V2Rs9NOO5XSY0iQ4O1cQB4O 1NEHbwHco03n6JQSPEG6BouRBA0/urFTUmDHDI2dWQuvRcq5NiChzMJILv29K+kdOLuIFhW2 oLv66quL7yRMeQdZn+TJgsd+CyPe2RHlZ681KMnJokpffnQD8LWvfW3Azxh5rGnHB6Aj4z3v eU9Z2PhAHUgKJbEWUIsyXcmWVEguQb79IgmbNuNiTCWldFJKNow5G8h0AHZFZ+exQZtElL5s iQ+9iqLPYQxCm7GzWGtPnb+992w3K6+FmA9AQuUVAiB5tWNNbzwyv5W77LJLeVeXX9hPJv19 0CavHKQvr9mYG3DRx2YfSJJc0U1b9PdBIgmMugNv8y/jbc55BE83POmvlPCYf64PdSCI41GD 5Mf1ZT7Q3WP6PF42pm5qjA0fm5fGkh3AmGQ+mJd2F8kiA77r31yi87BcdqJ3XbIl4xv/Zowk gL42zZymF3n6Ijf+oxOZ7PHVgmyF5/yuPqlloz46qrMTbcYGvtc5JvUxBx+xydziE7qxCw7I +DvX7kZFmXnrJg9teNtdNWap40cfNxNuYNglKXSNk4FPZBmXjC16PvVet3eTHT6kiZ6t7PFu LL5wJeb0MH/slsan/CDhl7jqB2gBO4C5Y+6pGztJwOQ+GXbjwD+hY0t8pH1a40efH/zgB+WD ofh5yoM3P+PRoHlgdngg8ze8XEv9Xx8Hn+jn2qPdXf21n/hhPm+55VZlXttwMS/hjyS3C/XJ 7yPl2jK/p8DYdmqDP99HPvKRkdUsLa2ccA8YVAuIYLP2Ouv2n7j9f32AlVDM3yd4ko9VBjqN hL+R5CeTSTAWwFKaECaRAwhgzskZptE/tfa0hTZ4ZAikoO4rDdW/0OdOLEE0OgnoFiqQ85pf fY6XOhvhkq8tNuHBrshkvzp8kPaUbEAbGvjRJXKV8VVKeGQPl/F7Edb/Qwtia6n0/9Luwy+S Bu8nAouSnUB8a/8Oy4EbfzoP1HaxWT18IpMuwzalHvvU41Ntw/VhecNya3zndElb7XN8JA5Z pNXhwTEOSpDxU885nziPztGhEPT/Ii/19MdveGnLHIZX889cTBtcR8Yy/IKHPuOUvuiQdjjT 4pO+2EMuiO/QxXbtNU9yQHRzDl97aKJL9NUP8MmcV695waEPn/GT81o/55FJ35zXONE/fXg6 Uq/tqH2NR+r0it7wAb5w3DhIFEFsDM/oXzr7f2lPXYkGXu0nfIdtqHVBF14pI0s9uqUNfn2u DqJvyszN8EwZ2hovY0bP4IWnkn9Dhy/82FTrCDd4zscL6uSid3e/qXBLuUGT9NvN/s0DvyqJ jZtLQCc2xO/qDeYND8w335T19uijjul8VuaJJ0Y+I+Fmzw2sjSZPQW0Gucmc/1lT4lXG3nx9 7LFHy4aVOTsC07oJG8lxhj009dZhrFafqz2Q4JzSYpQFhOI5T4DTFlznU2tPW2iDVycEdZ/+ GkKfHSO4NX4CNJqc1/31OV7RN4E6/foiKyUdg49/2lPqy7ky8uGGr1Jf8PTF9uGylhUe4WNR CmhTt+vqkXX4S2p96lk9vAT2YTn4xJ/hqYyOoVUPLZnRRRmZ6FJ3DoIXfsP1Eawp8oIXuTX+ cJt68PGpk1p1tBkHuKHXV5+zq+ZTn4ePMpD++A2v+CY4Nf9aB/3oY1fqyuA5D7/ICn7a4UyL T/qUgC7RB5+cj/ROkaWuP7LST05Nk/7oq9+ReuhqXvoBn2lPXRve4Zl6eKqDyK/x4NT12jfB j5zU8Yqe8B3BSVILJ3zDMzj6QNpHaiP/0dRy1Gu69KUMbXilDI16zlOiqc/DI/qmzNwMz5Sh rfG0pT14+MIJXvrxzTmcWkf1uk99PMBMymbMePBvPOesB8whX9lV3pGdf8HukYcf7Z7z7Of2 NybeM3+87NR60vm2t+3c/fM//0vBPeWU07pLL718cG33U7e/oXFD7wOqNndGNq+mJLVslKpO 7dD3f6Eltv/XJ62leWC2eSCLTRjmYvXDBO5QHXalvBrgfCRQjOw4h6aVzQPNA80DzQPNA3Ob B558cuQHbP746MhnCPxUttdnbFz4Nhvv/PvaPK/f6fNajK8TXGzR5/evW418p/V42NQS2/Hw auPZPPCUB+zQJoHV5N3MPBJUl9RKfrVlB8XOynBCDLdB80DzQPPAvOQBLxK0lwnmpRGbMV2t Wd51X/DZ/WsI8/2l++Njf+gWW3yR/jMoi/Yfel6j/BLm3/7t35ZXCzbYYL3BtxiddPJn+1fx Rj4TMmMSx4Y95aPcY8NvWM0DzQMz4IEkqBJXSa7Hg3mEqM2j1rwv6PUDia7kt0HzQPNA80Dz QPPA3OwBiW3WOKXPTljffPg0mzo+ZOnbDpbr36n1uaEH+veqn/OckW9mGi/bWmI7Xp5tfJsH Kg8kmXWxAwEhbXlfUJukNh+ckPgGp2LVTpsHmgeaB5oHmgfmuAceeeSh8v3r1qrHHx/5sKL1 6y9P9mvXgiPfs/3gg78tX4X36KN/6Hbaacf+FwH/qa8v1v+q3+bjpn9LbMfNtY1x88CIB7yK ACSuzn0gRSBQ+uSy95HyCWZ4kl93vy2p5Y0GzQPNA80DzQNzowf8BLUf3bG2Wccee2zk2448 mbSO2ajRZ32znjn3tXk2c/rv7ikfFBsPu9o7tuPh1cazeeApD3i1wMXsAClzkQsGcPIJZomv 1xMEhezuPsWqFc0DzQPNA80DzQNzjQcefvj3fUK7YL9+2Z31409+Ot0v4D3ar19+afJ3/XfS 9q/W9e/f/vkv/fdXL9B/z/4if9M99qdHy7o3Xoa0xHa8PNv4Ng/0HsjXFXk8A+zE2q3NLq7k NV8pBEfim7a8u1QI27/mgeaB5oHmgeaBucgD2aCxZlm/rFnOldY1PwTjWxKc54PTeQ93PNe3 
ltjORZNkVlSx1Q/s/plYQAJVg4lXt6k7AnVffY5noD6PnPwKGZyaLjTK0EnoQkd2zuv2JH3D JT5pi73aQPikrPtDkz7409I57Wjgxx76h0/68AHw6r60lc7qX3wgGIRvAgG07OZqq8dFn2BQ 6++TqCByh/HTF5nqcIIfO6NHeA+X6ILjPPR1m/b4IPRpUwY3fdE1Y5T26JqyxgsOfiA8ndd2 qQe0T40ufPGILOexDU3oUtbytIWOrOBErjLj49zYBWq6tEXu1Pqia3AjKzTah8/reuhmVzk1 HYd1iny6D+s/rAfcGj/9oQvvzBW4aYM7rE/w4IRvjVOPc2SNVkbeMJ/Q1e3RWx+6YdroVOPV c6XWO7jho163sSV8Uk5PpxpnmFf4pozeNU1412XNp8atz2v8dv7X5wFzxdplXfvTn/wa4MiP bGh3bYy8liDW+mCZlNO1OfK9tePljZbYjpdnJ4hvFlyTR1C3Qygxyl2RNkHGJHPXZPIBAUnd AfwEZ/rwdG5Sos2jcXjOE4hzx0U2fD8yEB5w8Yl+4UEPdHiQ7TwXhotDu58P9SMG2kESPufw JWVk1sEzi0utE3ry4Sn1JRB7x8cvvfmePT+H+vGPf7zI1k4HMuGzJ/LQRn99sc15rWOtM3xy 2Epnv9keHvTCMz7DJwAHGEd4XlnQH/3xgkOufvz5IPZq08fvaPTByXk+sEZ2fGOuRIfYRA4e d/U/iQjI9Ct52tDi6Sd2gx/68EyJVl9sVs8YhgZPvlcCvOkEDw5eDhDZbHRuzrANxHcXXnhh +YndJPFkhw5f52SZt34imY/wiJ54+VlVv5RDFwBHPzpycujDP3W7E8b9oosu6g444ADdA9vI jq74scE4x2426scvcvXX4+McsAFeAB1+swr4xNfxpzod45/IzdiQS6/g0z26wEV/zz33lF8d iv36Yy984w+cw8EbxDfhVxqfao8vgp8SbvxKhoO/tMOJnuE1tTI+R5OfTc75kUceWXjSzUF3 etfziP546I+/yIkO9DBXAtodATrGB+ljh7bMZTzIBbEp/rr99ttL7AgOWhBezvGqZaa/Lmd2 /PBo0Dww0R6YspJOtOQmb7Z4QHBLYJcA1ImQIKZN0BO4BCeBL0etgN9CDyTI4Z2gqsQ7gVjp 0IY/XMlXAmsWEe1pwyP8BHMLAIAbHO2C8oMPPjgI1nAsGvQnU1KGhlx1QDbAM/7Ak/wc+mtZ vobkxz/+cUnq3/e+9w0WmCw04YOeHLqjB+Roh5M27Vm84m805MAxFnTSR/daf/3hk8UYP3bB y2KJ1hFfxidw+Y1ONW9645uFDl18hmd8hD72OH/FK17RnX/++eVrWr71rW91b3nLW4oOeK++ +uqFhzGhq59JtIBmfqDPeOIZmeRFN3o71GMDPfleiRfgs4ceeqic4+UAG2+8cfeNb3yj6HHu ued2e+65Z/GL8SDvE5/4RLfNNtuUx2DmCxk1xB9s8LhMEkpWxi39GXfttR/xyxhEJ7YEXzJ9 6qmn9r+JvmVp0842dJFDn9Aqa5/pwy9t5gE69PRQl3SrhzY68fOsAnvDFy881clkS8bROX0y jnR0rg1k7LXhaVyDqx89X+sDfMRmtgD98YE253hmfmTc6BedoxN6+gC6kwvwgRuZpXEa/2p6 cwst/uR/5Stf6S6++OJCyd7EDQ140xUNHmgctcz63DzEQ1t855ytAbwAPgAe+wN0Cz4c/eqx V8kPgG5w8Iw87XAA2eEHb2bHrzBr/54BHjBP62POmjzlqpmzejTps+ABQchOoABqV8FCcccd d5Qgdsstt3R777132SWRYNlB2muvvbpNN920BD0JyfXXX18CGT6Ou57andMHVxKh/Tvf+U6R IRAGNwmlQJ9kIAHULjCgA3xBlY52/IJvt09iIXDqF2gtUosvvvjAI35Xmu4JrjrY++pXv7ro F952XSPjm9/8ZpGljw1w6aEfnyyEWXjwPProo4ue7LAg+C1zkIUXTRa62Or3sPmAnCQb8IAd Q+2OAw88cJAYZqG2mJFDL7QWRrh4gti4++67lz56+8UywMf4xG+SO7wiD85NN91U7D322GMH ffELmRI+NPTFiw533313kWsHc6uttiq2b7bZZqU0LyS5vqMwY0z+5MmTuxtuuKHQ42Ns4yd4 Rx111GBcyHNcfvnlZZfV3Hrve99b2viUrsaILnZh2fOiF72olHb78KOHOShxBVtssUWRefPN Nxe5fpP8zDPP7M4+++zBPMrinIUcX8DnxgEcd9xxxU7y6QwXXsYTDn31azd3lXywyy67lJ1d /gN4nXLKKeV6Ywv5vssRbsY//KObaxa/8LSTzFfrr79+kYWvPvYrjUPa4OEN+LFOeErjDP6j k2sGTzHl3e9+d9H7TW96U+EUmw477LBuiSWWKHryC98bf+fmJD3pFpu0qyvJ2GijjUqfc2ML 9KujcUgetZlTjsxV473ooosWHc3lG2+8sdCbJ/iST3+xwxhLyvEj27XlfKyQ5A8N3QD/k5m6 NmN6//33D+SIbXDQ0eWII44oc44O2nI986ebIG3GT7xgJ97iMLvh8MOvf/3r8hQJD7ah4bs6 3sDXz27nuQbhGiN8nI8lfgWX/niqK+k22vjxSYPmgYn2QEtsJ9rjs1me4CcQrrLKKiWQ2ykS 8FdYYYUiSTA8/vjjyyJrwVlmmWW6k08+ufvc5z5XFmwBUeIo0CeIrrjiiqWO94knnljwLe5e DxAo60BOiLogqQ+NAGghsAuM78tf/vKikzYJ0Morr1yCvwXabp9dUzzsspHBHrrCf+UrX9l9 6EMfGgR58uhsIbvqqqsKbzLJgUcPifO2225bFgR8/Vb19773vcJXP1vIQCfo85lF0GIC3+Jk MTnooIOIK/iCORp9NSy77LIlgUJnAWMb++nz+te/vuiiz3go8dEPJFWSNz+nK4lhL7qXvOQl RT5dr7zyym7dddctNsOhEz54JBHQvtNOOxUe+j71qU9166yzTrlhYKNFCC5Z/CLhxVufdnbR K3wlvXZl6QPHWBgH7ZtvvnmZO7EBn0svvbSMA17ateGFXt2NFZ9rk3ACCyLfu1mSuJLD5/vv v385h7PSSiuVcXGDRHd+wc/8XnPNNQt/MtjHXgkNGXhccsklRY/f/e53WBUcZRZ8eCA64utD DmzF/+CDDy648JIYmZv/8i9+7/yJMq7ZqWb3Wmut1V199dWD+bHvvvt23/3ud8vcMr/4V4II nLOXLkr+4ttVV111kCi52bRrzicf/vCHu7POOqvQ+nfOOed0HoN//vOfL210pANQRufSMJP/ 3GCIKXjZMXeTJ3H66le/WjiyyU2SJx0///nPC57E8V3veldJwCTpb3jDG4qf6GROmsd8p66U OP/rv/7rwAfa+QNkPooREj5jot/Nqxst14k5qnTD4NpTJ9c1eu2113Zf//rXCz/+c6O26667 
Fj3dwLpZZNtoQCYwVsBYOYB4IbaljVx45qmbKraId/QCbuYOOeSQ8oRIrDS25hk8/hUHyKMX f4lJ+sRj14122msYcwAAIABJREFU82SppZYq8Q/P+Awv17brBB4/bbjhhuXahsPf2l37Xo3J OIwlfqGf2fGjY4O/Zg9kl3busrEltnPXeMyUNgKghdRCIUmRXNml9CjZzs0mm2xSFinMJVOS lBe/+MUlSbAbIimSRAjMyy+/fElEBFLJo4RBUiZ4W2wkAGRIVARdNPq0q0syUsKxq7XbbrsN 5Fss99tvv6Lbl770pbLACOhoBPkrrriiJMgScomEHQ47c4IyOdkpyAJgIdW+XP+rJhYZC915 551X/CFo08ECZFGVtNCX/vjhoc4nklBt9MDPDQB7JY3wAXwLF1uBBIN8O4Zg0qRJxe+SKjp4 PG5RwvM973lP8Scd1MkFHqdLxPhAm3GwAJ9xxhkDOfigk/i85jWvGSTL0Ys8u0f68cbPb3Jb 8PlLYhkf8b1de+3scwTYpx1fugTHWEj6LJrxGRqyzDlJlnnDL/zNR+ShB87Jkfi88Y1vLCUZ cOF84AMfKHjG0E2WRZpvzeGtt966yDEHLcjaoxt56I2bRMuiLWEASUQkpYH4XJ0dIPTq//zP /1z0NEfNe/zoiCc93DiZJ0CSx6/GnHw7mkkk+MWcAnTMuWQCP8Af8JTazJd3vvOdZQzVJUPm gXY75xLNzFlJrnng+gRJnJ3zdW2ntpkB185pp51WSPEkWyIfoLdESnsSdjcoQDJ6+umnl2vQ /Ge3sZRswgeS00MPPbTYgT//Z8xip3bJ6R577DGIH+b2t7/97bKL6zfozRn+NT9ci+YHesdn P/vZIsvNCJ+IdXwOzw1j5kBBmsa/XOvo2UynlJJR8x+QZ06Ld2Jxnia4nvfZZ58S78wPeOYc W10zzt2I8zd+eAPXOx740dc5ffkTP9cwneiHxjwxH8Ut4Fpx00hfNpufZIszngDYVQ7v0eLX rIxfUab9ax6YYA9MWdUmWHATN3s8IKhJGARFj70EModdWnf3FnYBUVATCAVfwTCLiD67X0Cb ICYYe8Rnp8biaWFHUz/exIdsNAInOvIB+QAOEIzJJwuuxAEvC6KdLqBPMMfTwm7hkaBLyvWF JzkWgyw48PHS78ui1clgA5zoQIYd5OgLH58A/fCePHnywD/4eCWCXvodgD5AEm0ng6/C76ST TiqvdvC1GwhAJ3XJknNHfE0Hi3Lw6LD22muXcSOfDfAll+RKYoxpxkIbf/kQnHO4SrrCs4jR TV2fREBpTOHpA7Vt2skjH8DFy40DffQD5zvuuGNZNEtD/4+/0TnoxW624rHzzjuXxXRSnwyq w2UvXPLpApd+ktsTTjihtMHT5xFqPadrOjR25s3X6KfE3/gBvMnRFru1q2cuxC/mJvvY7SZB wiC5BZkPq622WqnjpQ2P2KGDfDoCvtDv0KaPzkr0/C2Z1cde/NRzzUpmJC8eHdsRtJvrZgMt m5UZQ7zQzQpIYg8//PCn8RcP6E/vmr+5R75dUf0Zc9dvztnEt/DsbPOdhD268jVcvAF8vsm8 4xc+MYauAf50PeHnnO3iChl4ZJ6iCUR3OPDVxwL440cHOirR2/XNzUX0l6SKxfTKIRa7SaI7 2rzapd/cY6vd7fqVocxj14EYgz96OnvNhRx1PnJI7m1WaKMj/OhEjnMloIOnINaN0eLXrI7f WPzbcJoHZrcHWmI7uz06wfwENYmjXVFBUsBNoHeXbhFIgBX0BLQsHlQV+AVPdA4B0OM8u31Z dARNvC0yeAP1AP4Cbj7kkwCKH7x77723BFO8yRbknQuwdnGA4E6eNgmohduOi50egRzoQ6+0 OJGZxU/529/+tvDF26PcAD3Y7hdSAnjUIGkgn8wsXmRJHrTjQZ4y9lm47SbxMX76JJx2ni1M Xn+IHLQeTbMzi238IUHWBsfx/e9/f+Bn9OER/NwUaNeGJxrndFDSMckvO6KzBVEinV22+CB2 8SO7vB4SXxt/j1TtKhnPvEPIZ3ai+SuAniy0bMlC+/73v7/bYYcdymNluHhmLsF14ONALwG3 U00vPJX87FEuPSSbsRONOTWpT5hjP95kw8m80uegl/b4lf/YpY9e5GU88JZAeV/WTpx3IvED 5hi90WpTxv7oQY5xICN97NOurqRPXrdAB/BjU+YL33n07ynHW9/61qK7Pvi53uN3bWyfFcCb vXTkA0nc5/tXHzLX8acP/b/whS8UPNckfPbQyesD8Go/s8vTE0mcpwfe/YSDLjqzA+CNZ/yM Fi/XFv65xulKpjkLh/3GAr16/KwExjh+Lg3T+YcP/qF1DtRzbq7QiyzXlpiFvwOeww6qXX46 uflNP1455+8k3e94xztKguw64Cc84l+y4KLDzyFO/epXv8KutCvpRUf+jAy04jgavEeLX7M6 fvRo0Dww0R5oie1Ee3w2yxOgPNbzziwQiAX5PCIWbC0CAprgKKg7F+yUHicKtHbCtElyLTp2 4gD+QGCEL1AmoJaO/h/+5AqY+gBZ+G233XYlMbXIAY+zJaoeSXu85pPjZAOJkq9WSiDWJiGy KAA86SCZoId69FOyle2b9h+M8xgVX/p4BOn9Nv0BtIAsQC8LkpsEvOxWkVXTwMMvNnosbVcx 7Uo7zN4V9vjw5JNPLv7UbhcHSGDCk2y7VvDsngB6GEt+ix/ow7+AfQ4QGzxKlHhJWoEF1Dus fCS59qoAne/qHxHbPbLzKDnEH43xM+YelQI+wNt48LWSfLvgZFu8PfK1qwcvix9/4enQnpJM yZ0kGE4guPFnxhCOefnpT3+6oJINxzy3w2RBNo/zHuOl/Tu++umFJ/zMU7pJxtihL7Iwjv9S 6gdJsIIbXvR30wKfD71/brz4hM89ZcjYxk48owP74iNyIo+c7bffvowBfkDJfjt55Hu32Xz+ 2te+VuYMGuMUwKuWaUxnBewQS574DV86SJTsigLyzVl4dpDhmUMB1zf93SSDXNv04g8+5EuH xJlvtDky18k1D1xj9913X7l2PII33103bkITV8xteK5JvjAmeAWcR0b8pE97xlk958FBU0Pw tcOlqzHPuXlhvgNt/CKR/8xnPtNdd911xae5vsxl/XjyNV9mTuDPV8ZfvMwY4CtGusEKLnof 6vPusPhNHzcd5kxshuugL15485VYI2EOr8n9Eyuvf7m2ze+xjB+e8RNdANudu6JGrqrS3P41 D0yIB1piOyFuHn8hAprgZDdDaYfHB40SeAVQoE/AEdy02Z3zaDAfgrBwWUQ81tKfBdLCZucV vQNfgI+FSmCbNGlSodFvR1CCJ9mws+LdSe2SIwuixVBy5V1ej5D1CbR0wTOP4jwCTsC3WJFD Nt0ET4stYJO6IO0Rrnfz1lhjjYL35S9/uTz+Cy5a9gBy0XkP1wKCvzY2SVLzYSdt8VkhfOqf 
XTX8JFpwvK/sU/6A3fypHR54wQteUHTOGIzmf7zhsj27ZfTQxga640FPpUXNou/9OgkDX9FD +3J9QmieADr5gAn99MWn2h1uDCQVzo2LnUJyfQqej9JHBzjee1TCAXBSfvGLX+xO7pN3Nrj5 gefGIWOozg40dv3VjeNdfbKiDQ3cN7/5zeXJBJmSB76OvRIHPmEHOUDdh8fMpQA+wLzGh0+N nXM3HTWEl6cXdDJfvceIv3nsXVfvLOqTcLoxMMfYog3Er3QwHoH4KTi5LsJP3U1H3vWGZ1w9 yQDxr3dtzVUJXni6iUh/QZ6Jf64Fu/TswIvungDxm0fp/OhDXZLZvOJDrqTL+7WT+lggmXX9 85eb2DzBwEubBE9CZwwBOQ595JCRmyh+WHLJJcsNBb5iBv8kbpnbbgbgG0t+xyvzCn9jow1f R8aGjeYBcI4G8Kc6PHMEoKv76Rq/O9fHb7EFvd1XMUxccIMLz8EPXuNwUyQpFSOMJRr0kn7n PhiXdnU79mJ75ML1vq75KAbAcX36vAIZ/KGkGz5K15Trg671q1SuKXLxo9do47f7U69j4I8f XfgyNhSntX/NAxPsgfk+8pGPjFzFEyy4iZviAYFGkLYwrL3Ouv2noP9fHyR8EMtje5+WHtkl QZG7XzSCRyBBxYKgXeAMSBISTIOnT2AUqOs2fPEQAC0EWXTgC14JjGTAxUMwq/u1WQzgBtTh C7L4KwPhqx6dnNd4dCSTvtpBeAzTSLL+4z/+o7zPSQeLvmQXD8DPAjudQO0rdfzAsA30F7TR ph8tfEf4wHPQNcBG/XVb+sbi/xondPwQmZkL8ZnSo8mll1666Jb++IptaOmZPvPEAl63RVba okfqKcM3+PDYSo5xUqrX9qMNHR3COzyU6Y+t0VXf1GwIfk3rnKyMXXwU+lquPjwy72Nf5qg6 HE8AfCjpmmuuKf5z/frwHp6gvua04ZnrITwLYvUvtMaltiP6Qo2ubgC80x4bwqamS9uMltPi MS29o9O05KS/5osXYBu/6INnrkQO26Y1Z+CIO+ZD/FPPX7SJDxm76DdcNy/ceNT+D278i4Yu ud7q9mmNa+yla+zEN/Y5Dx/nZOCfayR6xn/BIU8bPtEnsVo74JvQp6zp47OC3P+LDPrEr2lL GXvQkA20Jf6rRw/nZCxQxXkkN998S7mZtnstyf/NA78q4y8hB/iyPzIjp3S2f80DY/TAlJV3 jAQNbe7zgIs/AUAwT7ATpIBkRVuCUHD1OU9g0q+eQG1xF2C0KbMAwQugFcCAgKRPCVd7+uqA TcfwjTwlgJ9zeJGVhCY2ZdGCGxzn5NhJsuOQnQhJbV53EHiTmMINX7LpBLTFBjzRpD07N3DR 0xeuI/QWWPUAnflDW3yijd74T8//+kFw1NE5+ICuDnzpGb8o/chF+i1uAbh0B9HZeZ3UsgFE fvCiBx760s+2yFKyF252S8mLj9iODh6cjOkwb+3xWWxlR/DxxEc9+sFPv/PMDbJ8uFBJJ7Jy XTiX3MDFMz7kA/jaM5/VnXvdxYf8XCPaLMx23DJXwhuP2M6feNMLTX2wC57DDjGc2Bp9tNOV zfmgJjwQXtFHfWYhPlMa52HQHp9n/INT19HCDT86hZ9SnS/xordzbRlr4xT7jC8++vnA4Rxt /MPn4astczg86IgPOTnHwwdE8SMPHfzQaicXjbEZBu1kshuvADmRqz0yjWn8gQZP9Nqch4Zd aODQATiPPG3wyYn/4OAT/NAr4YHwj8/oA0KDJ321K/GLvup0BfFz6Epj/8840hNERqm0f80D E+iBKavvBAptomafBwQdAUdAAgk8diW1JSgJNglq8FN3nuCWYKxPsFQXuPQngOHhSD/+Ahi5 0SEytTvCP/LRkusIrtKRwAgHhCZyycBTf4KrNjqjxxN4344PtLPDY2RJB/6RBY9d6vEH3PCI nmgkPuygB9zg0MUiqD0+SjIHD0Q/5/DVHXg44p+p+R9NIPbGF/ABOfjSE8R3HtcGx+IG2Ao3 5/SATwe4eDkkCSB2wsvcwiOLJV0A36KDo8RLWxK9+JiPYntk0Sc+KMz6f+TCg6MfPSA3ftYH 9Gsz3gBdIPqpG0NjxV7ndCMX7yRR5EZWfBB/kuOcXHy9QyrZjG1eHzAG0QNeeEQnO63a46fQ xiZ6eo2HnPiKL+mdD/xlTOGCmp/XKdRjg/76XH00oCsZbGQzX4Hw0e6gH39Fd3KjG/3Rhoc6 em3wMh/xxSP+UGZMnIP4olT6f9rR4BO/4m088Qd4DPveuNAVfc2Db+mNRolXaCMnPGMfu8iP TOf0xAPEJ5lv2vDN/HXuAPEjnnztiF3hBzfnsZFstBkvvJzThVwl/dHizebg0hV9xgFv+AHt aBzkTW/8IgstPAeIr0ql/WsemEAPtMR2Ap09HqIELiBoAYFOILMrKVAJTCDBJsFL3bkywS0B E090eOFrsa6DlH4B0hH+CX6pkxlZNV9toaMnGYKqEl/9wRF4QcosOnSO/OCHF3z9sSE0bEzi R8fYg945fAc+QLAmA/CBxAcP52jwIAfEf2iA9vBJHQ1AHzuVcENf+wl/PgkdHdNfGPX/0o8n IBNe9NZGBv/h49CnRJOxCj5+2uCHV+zFK2OcNriRzbfGkbwAfDxBxqT2C9zg80FsjZ0ZF/S1 LDSSlNAnkcz4xr7IyrjgERx6xd7gsTv+C42+jA89+Irc2J16ZMLJtQcvNmUOS5giAw9+yfxH q44GLf3Ir/XWpx5c59oyhrEPPb7onc8oRLYky9ynKz50B9Eh9ehMDzYoyaYD0IYnwMeYoYnt +uOX8NJGLlnxA3rzI/poB3Bie+ro9eMHjAsZ6B3DNtQy4KvDzziZB7FBPzsyN6J76vq1Ge/4 P3roi+zYp805XzvgomeTuRj72OQA+vgPbsZLO3mRywZ1Bx5o4Rof9GQAdTh45V1zuKONH1qy yAHhUyrtX/PAHPJAS2znkONnl9gEoyQDdaAUbBIE9QtS8EHwBaIEbu1oHIKgoBc8fVmcnSeZ SADWhpd2tNoD+KROfoJgyiwG+ur+BO4EYAEZnywuwVdqswBktyPytOsH7E+gZhf5+pyjQ+Oc P9KPjj0B7Vkg2Itf7BXgQYJ75NIhPJzH7uBOz/94xOaMZej5NfrjRQb+IIkZ3vwHL/rU45ox NQbksIcdeIV36HylENsyHnD4I4AH+WSSET31Z4zjh4yDvuhADhq0YNgWsjKuFv+AhAUP+LVP yFLPuGTOhre+4CvZFR6hwYNMNpER+bGbzuZOfIU3G+CD2IR3/BG78HDwDfkg/oGTuYVHLcc5 MBYgNHaPA2jxDW3aZ7Ss/UwOvfggPoktsTfXH1vhZ9zjz/gmyX/66RVesUfJP8q0xYfa6aBd mWsjfuQbR+jw1xe/4hM74sfYBjf6azN2cCJDP75w+NfXCNIH8AO+AI5xiF0p8YGnHtlo8Igf 
1fWTEd/hCd8RO/kPDr+Tl3kC13xNm3otj17oopN+gMaHAQHajE90QEPHjB9b4tf4B8+0FUbt X/PABHtgyoo9wYKbuNnrgQREQTggyNWBU5BK8EnwhSugBq8OhOhBgnB4C34CmyAHB03wtIP0 CaB4w4vsgtD/i8zU0STAa3OewJogr8Qr9sJzTifBN7sdcMgWYCMbr2EfkEEvdPAsEPwROQnQ 8PAD8OlOX/wiA05siEy6OQf6+crhPDA9/4cP3Pg5/OiIT+pw6OLgCyXe9I28jAdcbcY044IP noDe5LHHAXwBP8A3uqiH3jlcPMJHW2Q7jw8zrmgzr5zXfCM3i2r4KNOHJ33CIz7RD08dX+eZ m6GPninxMo/CAx7gF34kg1/SXjr7f+YOHYA+R+YFPWJTrTM+IG3xIVzy1OMjePDJ0a6fDDIj Fx+vKii10RcM61oax/CPH+iCHs/w005+3eYmKrrSUR/a6EZn52nHK+f4O4ZjQeZfeEzNjvhB GXpjh3cAnX6Qkq7xs7bMSTih1e/cEbrIyFjBB97lTxvbcq6PrPAPP3zgsRFv7fF1/Jx69MAL hDea1NnM74C9OTdfp+Y3ePFB/Bt5udbif7hkTmv82BIb4h91/Bo0D8wpD0x5GW1OadDkzrIH BM4s7GEmoNXBRbDRlkCXOnwLUwKauqCWwKeeYJsATJZgJ5Dhh1fw0MLPAoMPSLvzBNEE1fDR ji9+6JWRWcvBIzppT3DVbuFNAhLZ2mtekUdG+pR0zKIQfeNDeoQferqHPiUZjgC8Guq+8K37 cx684MTW1NMPvz5Xjy71ea1HbNAf2ro/9MMy4QeCk3pNHx315TxytNXy1WvanId/6OGBmk9w tNfnwatxw1cf0Ff3j7SO/M91NK3+Yf1DGx3Q5TxlcGo9hv0buVPD1Rb88Kjl6I+vUmoDw/WR 1tH/Rx9yal+kvbatjh21zOBE51pq9Kp5p19b7A2P4IWuxs15yhondPrq9uAq6zGNPO3Two89 dX/a0EX3yA7/Gr/GS3vw9YHU068tciIjdX0gNCO1/1tPe8ra3po2/OENy9BW61Sf6wM1r5GW 9r95YOI80G6rJs7X4yYpi43kK3f0w4FF8qtNwKp/IQx+vTBJ6ARiCWIST211gskQwS7Jpn64 DvzxdAh44aEdDyCRpgsejuisTx0d3qHVDic26csuiDYHHZLUwq8DtjqcBGC4eATg0o2OaXdO Prp8Gh1+9ApNeLSyeaB5oHmgeaB5oHlgznugJbZzfgxmSQOJVhJaiVsSziR+kjiQXQPn+aog tPBDL5FTB5LlJIKSuJzDHT7QwHEkAdXmXBu+ZIVHEmn90UECCbyv5tyBVj/AL7ZoY0/6JMoS UTonGSZLvzq61PGCq66PDiB0SWS1RXePePXn5kB7g+aB5oHmgeaB5oHmgbnPAy2xnfvGZIY0 kohJ0urET1sSWe1AcichTXKYJE1f6CVsaOFJCpUSP/3oJHdw1B051x6+2hxoI1sd3wDe+Op3 Hlrvt3pfDTgHaAMS0kB2lMlOokwmiO7q6NGRR4ckqPD0RUf1QL5uCX30hhuf4kUGHzRoHmge aB5oHmgeaB6YezzQVua5ZyxmSpMkcRIwiZZ6IImeenYpkxxK0iR5DiBZc44GHz/VKek78sgj Cy06yR05tYwkqOGLh0QVj/D307To8KoTanLhhTal9nwIxTkZ9IqtaCSzfvqXTrEBrj6ytOnD E71z9Ep20aO2A786eYeDlwM9yKfq9aHFo0HzQPNA80DzQPNA88Dc44Gnf7pl7tGraTJGD0iw JF9A0ga0JblLsigxdEjKJHBwg5/ktBA/9c/vjCfxO/DAA0srOglgklslfiC4qWsL/6uvvrrI hg8knfCSfNp9ZQNdtUkYJbbagotuWE/61O/V6pewp+32229/Gk3a8c3uK774kElfdsSW6Bdd 86n6fNiu5oFPg+aB5oHmgeaB5oHmgTnrgbZjO2f9P8vSJV9J6JTqfj5Wkig5lEyqS+YkaGec cUZJ6iRle+65Z7fhhht299xzT0ns0IMkw0lEvbsqKdYuqSND32233VZKCSH+P/3pTwu9/uyw wpM46geSTQmiduWtt95a3o21+7r33nuXnWLvANNZgpnk+Pzzzy/y6b377rsXfvrpdtdddxV+ XiFQv/vuu4tP3vGOdxSZseOkk04qdOoS9COOOKLYwi6+wY/eZHoVQvmqV72q8NafBD8JLdwG zQPNA80DzQPNA80Dc48HWmI794zFTGmSpCyPxe+///7u5S9/efc///M/JbmTSK666qol6bzx xhu7N7/5zSUhlcSuttpqnd1UiahkLcldvgUgO5cUk5iikRQm2Vx55ZU7Caf2O++8s8hJ0ptE NjyUEuEVV1yxu+6660oSqcQDrd3U448/vjvttNNK8rzKKqsUWWRLcrfeeuuSsPpw2VprrVXo I2O55ZbrLrzwwpJA/+AHPyh6sOWUU05BXnDPOeecbt999y220oV/DjnkkGJL7Ipt6m4KvDrB l/AlsUcffXR39tlnF1+oR34R0v41DzQPNA80DzQPNA/McQ+0xHaOD8GsKSDBknhJspRnnXVW t9tuu3Wrr756Ybz88st3u+66a3fuued2F198cTlfYYUVyg7twQcf3G2wwQaDBBJBktzsumZX UrInWQSSaInkXnvt1W211VaFZtllly07wJdccklJApPgwk8CSLd99tmnk7TaxVW+853v7C64 4IKCs/7663eTJk16mj7oJZPoyLDTuv/++w9sPu+884q9m2++OdTuZS97WdGDvdFdu8SYfwDZ knpgd1d7+rRFX/p95jOf0VSATdtuu205z42ECtrISqkNfoPmgeaB5oFnqge8fDblY8PPVC80 uyfaAy2xnWiPz2Z5Eig7jQ67tz7gJCGTnEm+JFrrrLNO2Zl0noTXDq0ET6Lo9YIkdkleqYmH I7vCcBxof/KTn3Sf/vSni9zFF1+88LfjevPNN5cEOLu6ZNBNYowX3QCeQGKdn2oNDXwgMUTn 1YSVVlqpnGvXv9566xXbvP5w6qmnFvleRdBHL7vAtS38wh8bbbTR4FUIuH4XPbokEeUnx5Zb btkddthhhSfd8woCPH6Lfvjopyuw+6wt9pTG9q95oHmgeaB5oHmgeWDcPdAS23F38fgKkDxJ pJKUqXv9IAmohOvaa68tOBK466+/fpDEqkv4JGwSMZDkzDkeXktQapfsJbmTaNpF1ffggw+W 0vkBBxxQzpP0keFckknGTTfdhHXhRW+vDkhI8aY7faKLukNyfMMNNxQe9MDvhz/8YZFDj7e/ /e2F/je/+U3Rkx52deMTvL164PWCE088sfDTBy9JLZ3UAZlJTO1q5wbB7vInP/nJggMXXmTQ i438XX+fbkFu/5oHmgeaB56BHhBRp3xPzzPQAc3kOeKBltjOEbfPXqESKUmVndcddtihO/bY 
Y8t7qaRIck8++eRu++23L68N+ADVHXfcURTw3qnEUsKHFuCTZE1yl+901S6Rk+Tp32abbbrj jjtukBxK9NZdd93uqKOOGiSFaGrYYostynu05OP9wAMPdCeccELRS0Iogc03D0hGs9vr3Vjv 3qLD85hjjil9ktLXve51ZceWHLuodNx44427j370o2WXlxy8r7nmmvLaQ3aMDz/88IIrUZe8 g+jLPnR77LFHSd7poq5fAuvcoZ3v+SS0eIW+MG3/mgeaB5oHmgeaB5oHJswDLbGdMFePjyDJ FZBwSey8h+rDVt4hleT5sJb3SH3Ayru13j3VJjE788wzu7XXXrsktWglaECSqj87mKWx+oev pFOiKWmEK7HzLQS+OcDjfQCn5uFDbPfee2/RA/7SSy9dvhXBris8CWGSSDyzm0qeD5rR37nd W/2SSPLzrQ/aHDvttFP3wQ9+sMj3yoK27bbbrnyYTCKqLqH1jQf8IXkP0CGvGXi/1g73wgsv XGjQHXTQQUUmfLLh40V/uuNFR30NmgeaB5oHmgeaB5oHJtYDT99Sm1jZTdps9IDESiIpGfRN A84lWEDCJdE6/fTTOzuVSTbzjQkSXSBB83qA5E9yKpED8POoHV8gMV1qqaVKn34JnjbnEkFA Ph5JmJ07lHuhAAAgAElEQVQvscQSA/na6YvWh9zsqgL6kkOP2LTMMssUOjT0tFuc5BEtvZMI 4wfI960PATbYmZa4Al8vRmcy4hM0Xq2I7y6//PLSBy+Aj35ygsc2dTrhRU/6N2geaB5oHmge aB5oHpg4D7RtpYnz9bhKkgwmsXMu4ZKAOZKg7rjjjt2aa6452H2UANsJrUFShsYrAUn26gQO rkQQTwmmPudJMtWB0oFHkj/tkr3goEerX0luknB9aNmk3U4oOu1AWxLm4NELpIxO3tsNJEHF iy61bujIl/jGdv01DT50wtvP/yrpXvsAD7zDI7Jb2TzQPNA80DzQPNA8ML4eaInt+Pp33LlL rCRpSomUhM85kJA5JF2SLeD9VHgOdC996UtLf2gldZI5fNCBJH/a8JPYoU0fWjLTn3Z46dOW pBU/5/qil6QS7+gOX38N0UubBDdJbmiiZ14lgCf5rd/bJYPu0YH82KMMxHZ1dgE0dEKvfOih h0qZNno4jx6xrRC3f80DzQPNA80DzQPNA+PugSnPV8ddVBMwXh5IIiUZk7hNDZL8Ta1PW01b nwd/uC0yQ6ucmuxa7tTO8QV1UqmuPX013/ocXg01/+iX5Bde+tOXur6ptWkHtcxar9DACa/o rK3uV2/QPNA80DzQPNA80Dwwvh5oO7bj69/GvXmgeaB5oHmgeaB5oHmgeWCCPNAS2wlydBPT PNA80DzQPNA80DzQPNA8ML4eaInt+Pq3cW8eaB5oHmgeaB5oHmgeaB6YIA+0xHaCHN3ENA80 DzQPNA80DzQPNA80D4yvB1piO77+bdybB5oHmgeaB5oHmgeaB5oHJsgDLbGdIEc3Mc0DzQPN A80DzQPNA80DzQPj64GW2I6vfxv35oHmgeaB5oHmgeaB5oHmgQnyQEtsJ8jR4y3G96fedttt 5QcCyPJDAaD+kYD8sEDpGPqXHxnQfNNNN3UbbbRRoa3p82MGdVt+qAG9I335UQP86nP1/GKY 81e96lXl+2o/9alPqQ5++CH6p2RfePte2chNWYif+le31bLxuOuuuwa+CU2NExn6yA6v6BEa Jdy6f5i2xs2voaWtxg2vyIg+9HX4wYlbbrll4Mf0X3DBBeVngcNTGR7DZfSEE/rgpC04xmcY B250Dt7s0I/sm2++efCdxWSHv7477rij22CDDZwOgC7ByVyiW3ROG4JvfvOb3V577TWgDZ2f Vgap1+MTPiFSj69Shi7fX1zLpEvwwmNmy8hBH578HhiWm/bQoYk99Lr77ru7DTfcMGgDnsHR ETnOaz7xUUr9gbRljmivZatPDUf7WCBzTen7of3oyic+8YlCSk7tE42RFd7D45322Proo4+m aVDySezBr/ZR2gfIT50Yj7rP/KWbtsiCmnFLm58EB7XefK8/cvEYHr/Iyjjh4Tz+SH/aI099 GMKjphnGmdG6mTplts4odcNvHpg5D7TEdub8Ns9QJcAJmBYEdQGsDqACp4CnX8B97nOfWwKx RVuQS8BTt0AM8+QMbQ44+IVGX37gwE/b4udHEyL/qquuKrKTfNABPV5wIotcvOlZB97onIBN bo5h2XCzwOljK9zoRy4ZIDqgsehEj9iFFq7+Rx55pOCr000ffOdZsPLrZoV5/w8u+8ihuzoa +Hhuuumm3dlnn136vva1r3V77LHHQDf6HnbYYd3rXve6sJtmGb1jB/3Jio5kO/CEwz/GRz04 Fn180MXPbkjOPffcWdaPj8JTSTY96KnOP+Zj/J6xgAPg049udOZTbYCPdthhh8H4xrfK/Gxy +Bgf7ejDBw/y1ZWAH+iFTtIS3cmMjhnL1AvhTP6Lfsjp5qbTzzXTFZDLh/QjFzgPHX3pD0d/ ncBFdzRwcm2knf74kItPfkRFqQ0oHemDB8iL7PCNfko0cEaD0MKjl4NeDz74YLlpcXM3zIf9 0Sf0xhuwB496PLWbY+Z7PWbxW+yr50Hwan+Kb8YjdsIJrTb+yM9704Pe0SP6pkQLJz6Ei0ct L+MUu8Jbe3TPeNAj/KIXOoB3rg205NCr5j+C2f43D8wbHmiJ7bwxTrOkpd0+i6Eg5/jFL34x CL5nnXVWCeoJegKzxUBdABQgJ0+eXOhSx+PGG2/stthii7IbJhgLhocffnhZHNBcdtllA3n7 7LNPt/7663e//vWvCx7+aBJ00ZJ79NFHl7b0nXjiicVuQff5z39+OUcj4IbWLrVFCQ9t9913 X+FF//POO6+0af/ABz5QzuMHwTyL0E9/+tNCH7l2ENkgcbFz/a53vavQajvuuOOKX9ACujzv ec/rrrzyymITuYsttljxod3hRRZZpDvyyCMH9HRCY0wWXnjhIie2KH/+858XuQ8//HD3hje8 oeC+/vWvL4vPL3/5y7JQ2qmS9Bq70cACdcQRR5Txt7g72HnppZd29Hv1q189sI8Pjz/++MFi e8899xS92cOuJHK33npr8d2WW245y/rRn4+An3vmA2OUn36mUxZlJZnGwbmkBj76XXfdtez+ 8x8wF7/85S93bgr4wHjjaw4rgTZHgH/14W98b7/99nL+mte8pvgLnuThzjvvLPPlLW95S8Gj QxKBmp9x/sY3vhH2M1zaqcbbse+++xZZu+++e0lCkpibW8Ymc9fcoL8k7fzzzx/Q6weuFXoB vsgTE3U82cHnAF/gWnYeXbSp84Prkq/0uY6U/LbJJpsM5EemazU8+HosQCcQm5yzzTjtuOOO JQ7RBZ44wSbXb/yx6KKLlnlrvgD+yvjS5aijjir+QOd6iy177rnnIJbZJYXLL+RkXuDHtsAV 

Hi Kanagaraj, Please find the logs from here :- http://ur1.ca/jeszc On Mon, Jan 12, 2015 at 1:02 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Looks like there are some failures in gluster. Can you send the log output from the glusterd log file on the relevant hosts?
Thanks, Kanagaraj
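For reference, on a stock GlusterFS 3.5 install the management daemon writes to /var/log/glusterfs/ on each storage node; the filename below is the usual default, so adjust the path if your build logs somewhere else:

# run as root on each gluster host to pull recent glusterd activity
tail -n 200 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log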
On 01/12/2015 10:24 AM, Punit Dambiwal wrote:
Hi,
Is there anyone from gluster who can help me here :-
Engine logs :-
2015-01-12 12:50:33,841 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:34,725 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,824 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,853 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,866 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:37,751 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,849 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,890 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:40,776 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,903 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,916 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:43,771 INFO [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] (ajp--127.0.0.1-8702-1) [330ace48] FINISH, CreateGlusterVolumeVDSCommand, log id: 303e70a4
2015-01-12 12:50:43,780 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID: 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event ID: -1, Message: Creation of Gluster Volume vol01 failed.
2015-01-12 12:50:43,785 INFO [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
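The FINISH/ERROR pair above only shows that the engine saw the create fail, not why gluster rejected it. A quick way to narrow that down is to attempt an equivalent create straight from the gluster CLI on one of the peers and read the error it prints there. The host names and brick paths below are placeholders, not the real ones from this cluster:

# run from any one peer; substitute the real hosts and brick paths
gluster volume create vol01 replica 2 \
  host1:/bricks/b1 host2:/bricks/b1 \
  host3:/bricks/b1 host4:/bricks/b1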
On Sun, Jan 11, 2015 at 6:48 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
unfortunately I’m not that good with gluster; I was just following the obvious clue from the log. Could you check on the nodes whether the packages are even available for installation?
yum install gluster-swift gluster-swift-object gluster-swift-plugin gluster-swift-account gluster-swift-proxy gluster-swift-doc gluster-swift-container glusterfs-geo-replication
if not, you could try to get them from the official gluster repo: http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-ep...
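Before trying the install, plain yum can confirm whether those packages are visible in the enabled repos at all (standard yum syntax, nothing specific to this setup):

# lists any matching packages from the enabled repos
yum list available 'gluster-swift*' glusterfs-geo-replication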
HTH
M.
On 10 Jan 2015, at 04:35, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Martin,
I installed gluster from the ovirt repo... is it required to install those packages manually?
On Fri, Jan 9, 2015 at 7:19 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
can you verify that the nodes contain the gluster packages referenced in the following log?
Thread-14::DEBUG::2015-01-09 18:06:28,823::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,825::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,826::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found
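Those _getKeyPackages entries are vdsm probing the rpm database, so the same check can be repeated by hand on a node; rpm -q prints "package ... is not installed" for each one that is missing:

rpm -q gluster-swift gluster-swift-object gluster-swift-plugin \
  gluster-swift-account gluster-swift-proxy gluster-swift-doc \
  gluster-swift-container glusterfs-geo-replication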
M.
On 09 Jan 2015, at 11:13, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kanagaraj,
Please find the attached logs :-
Engine Logs :- http://ur1.ca/jdopt
VDSM Logs :- http://ur1.ca/jdoq9
On Thu, Jan 8, 2015 at 6:05 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Do you see any errors in the UI?
Also please provide the engine.log and vdsm.log from when the failure occurred.
Thanks, Kanagaraj
On 01/08/2015 02:25 PM, Punit Dambiwal wrote:
Hi Martin,
The steps are below :-
1. Set up the ovirt engine on one server... 2. Installed centos 7 on 4 host node servers... 3. I am using host nodes (compute+storage)... now I have added all 4 nodes to the engine... 4. Create the gluster volume from the GUI...
Network :-
eth0 :- public network (1G)
eth1+eth2 = bond0 = VM public network (1G)
eth3+eth4 = bond1 = ovirtmgmt+storage (10G private network)
every host node has 24 bricks = 24*4 = 96 bricks total (distributed replicated)
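With a layout like that, one sanity check worth running from any single node before creating the volume in the GUI is the peer status, since a create will fail if any peer hosting a brick is disconnected or rejected (standard gluster CLI):

# every other node should report "State: Peer in Cluster (Connected)"
gluster peer status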
Thanks, Punit
On Thu, Jan 8, 2015 at 3:20 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
can you please also provide the errors from /var/log/vdsm/vdsm.log and /var/log/vdsm/vdsmd.log
it would be really helpful if you provided the exact steps to reproduce the problem.
regards
Martin Pavlik - rhev QE
On 08 Jan 2015, at 03:06, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi,
I tried to add a gluster volume but it failed...
Ovirt :- 3.5 VDSM :- vdsm-4.16.7-1.gitdb83943.el7 KVM :- 1.5.3 - 60.el7_0.2 libvirt-1.1.1-29.el7_0.4 Glusterfs :- glusterfs-3.5.3-1.el7
Engine Logs :-
2015-01-08 09:57:52,569 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-08 09:57:52,609 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-08 09:57:55,582 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-08 09:57:55,591 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-08 09:57:55,596 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-08 09:57:55,633 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
^C
<216 09-Jan-15.jpg><217 09-Jan-15.jpg>

I can see the failures in glusterd log.

Can someone from glusterfs dev pls help on this?

Thanks,
Kanagaraj

----- Original Message -----
From: "Punit Dambiwal" <hypunit@gmail.com>
To: "Kanagaraj" <kmayilsa@redhat.com>
Cc: "Martin Pavlík" <mpavlik@redhat.com>, "Vijay Bellur" <vbellur@redhat.com>, "Kaushal M" <kshlmster@gmail.com>, users@ovirt.org, gluster-users@gluster.org
Sent: Monday, January 12, 2015 3:36:43 PM
Subject: Re: Failed to create volume in OVirt with gluster
Hi Kanagaraj,
Please find the logs from here :- http://ur1.ca/jeszc
[image: Inline image 1]
[image: Inline image 2]
On Mon, Jan 12, 2015 at 1:02 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Looks like there are some failures in gluster. Can you send the log output from the glusterd log file on the relevant hosts?
Thanks, Kanagaraj
On 01/12/2015 10:24 AM, Punit Dambiwal wrote:
Hi,
Is there anyone from gluster who can help me here :-
Engine logs :-
2015-01-12 12:50:33,841 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:34,725 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,824 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,853 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,866 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:37,751 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,849 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,890 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:40,776 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,903 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,916 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:43,771 INFO [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] (ajp--127.0.0.1-8702-1) [330ace48] FINISH, CreateGlusterVolumeVDSCommand, log id: 303e70a4
2015-01-12 12:50:43,780 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID: 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event ID: -1, Message: Creation of Gluster Volume vol01 failed.
2015-01-12 12:50:43,785 INFO [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
[image: Inline image 2]
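If it helps to isolate the failing create from the full logs, the correlation ID in the ERROR line above (330ace48) can be grepped for; a minimal sketch, assuming the standard oVirt and VDSM log locations:

# on the engine host
grep 330ace48 /var/log/ovirt-engine/engine.log
# on the host that executed the create
grep -i gluster /var/log/vdsm/vdsm.log | tail -n 50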
On Sun, Jan 11, 2015 at 6:48 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
unfortunately I’m not that good with gluster, I was just following the obvious clue from the log. Could you check on the nodes whether the packages are even available for installation:
yum install gluster-swift gluster-swift-object gluster-swift-plugin gluster-swift-account gluster-swift-proxy gluster-swift-doc gluster-swift-container glusterfs-geo-replication
if not, you could try to get them from the official gluster repo. http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-ep...
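A quick sanity check on each node (a sketch; the package list is the one vdsm reports as missing in the log above):

# are the packages vdsm looks for installed?
rpm -q gluster-swift gluster-swift-object gluster-swift-plugin \
    gluster-swift-account gluster-swift-proxy gluster-swift-doc \
    gluster-swift-container glusterfs-geo-replication
# can any enabled repo still provide them?
yum list available 'gluster-swift*' glusterfs-geo-replication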
HTH
M.
On 10 Jan 2015, at 04:35, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Martin,
I installed gluster from the ovirt repo.... is it required to install those packages manually?

Hi,

Please find more details on this.... can anybody from gluster help me here :-

Gluster CLI Logs :- /var/log/glusterfs/cli.log

[2015-01-13 02:06:23.071969] T [cli.c:264:cli_rpc_notify] 0-glusterfs: got RPC_CLNT_CONNECT
[2015-01-13 02:06:23.072012] T [cli-quotad-client.c:94:cli_quotad_notify] 0-glusterfs: got RPC_CLNT_CONNECT
[2015-01-13 02:06:23.072024] I [socket.c:2344:socket_event_handler] 0-transport: disconnecting now
[2015-01-13 02:06:23.072055] T [cli-quotad-client.c:100:cli_quotad_notify] 0-glusterfs: got RPC_CLNT_DISCONNECT
[2015-01-13 02:06:23.072131] T [rpc-clnt.c:1381:rpc_clnt_record] 0-glusterfs: Auth Info: pid: 0, uid: 0, gid: 0, owner:
[2015-01-13 02:06:23.072176] T [rpc-clnt.c:1238:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 128, payload: 64, rpc hdr: 64
[2015-01-13 02:06:23.072572] T [socket.c:2863:socket_connect] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fed02f15420] (--> /usr/lib64/glusterfs/3.6.1/rpc-transport/socket.so(+0x7293)[0x7fed001a4293] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_submit+0x468)[0x7fed0266df98] (--> /usr/sbin/gluster(cli_submit_request+0xdb)[0x40a9bb] (--> /usr/sbin/gluster(cli_cmd_submit+0x8e)[0x40b7be] ))))) 0-glusterfs: connect () called on transport already connected
[2015-01-13 02:06:23.072616] T [rpc-clnt.c:1573:rpc_clnt_submit] 0-rpc-clnt: submitted request (XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 27) to rpc-transport (glusterfs)
[2015-01-13 02:06:23.072633] D [rpc-clnt-ping.c:231:rpc_clnt_start_ping] 0-glusterfs: ping timeout is 0, returning
[2015-01-13 02:06:23.075930] T [rpc-clnt.c:660:rpc_clnt_reply_init] 0-glusterfs: received rpc message (RPC XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 27) from rpc-transport (glusterfs)
[2015-01-13 02:06:23.075976] D [cli-rpc-ops.c:6548:gf_cli_status_cbk] 0-cli: Received response to status cmd
[2015-01-13 02:06:23.076025] D [cli-cmd.c:384:cli_cmd_submit] 0-cli: Returning 0
[2015-01-13 02:06:23.076049] D [cli-rpc-ops.c:6811:gf_cli_status_volume] 0-cli: Returning: 0
[2015-01-13 02:06:23.076192] D [cli-xml-output.c:84:cli_begin_xml_output] 0-cli: Returning 0
[2015-01-13 02:06:23.076244] D [cli-xml-output.c:131:cli_xml_output_common] 0-cli: Returning 0
[2015-01-13 02:06:23.076256] D [cli-xml-output.c:1375:cli_xml_output_vol_status_begin] 0-cli: Returning 0
[2015-01-13 02:06:23.076437] D [cli-xml-output.c:108:cli_end_xml_output] 0-cli: Returning 0
[2015-01-13 02:06:23.076459] D [cli-xml-output.c:1398:cli_xml_output_vol_status_end] 0-cli: Returning 0
[2015-01-13 02:06:23.076490] I [input.c:36:cli_batch] 0-: Exiting with: 0

Command log :- /var/log/glusterfs/.cmd_log_history

Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:10:35.836676] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:16:25.956514] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:17:36.977833] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:21:07.048053] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:26:57.168661] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:28:07.194428] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:30:27.256667] : volume status vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details.
[2015-01-13 01:34:58.350748] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:36:08.375326] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:36:08.386470] : volume status vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details.
[2015-01-13 01:42:59.524215] : volume stop vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details.
[2015-01-13 01:45:10.550659] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:46:10.656802] : volume status all tasks : SUCCESS
[2015-01-13 01:51:02.796031] : volume status all tasks : SUCCESS
[2015-01-13 01:52:02.897804] : volume status all tasks : SUCCESS
[2015-01-13 01:55:25.841070] : system:: uuid get : SUCCESS
[2015-01-13 01:55:26.752084] : system:: uuid get : SUCCESS
[2015-01-13 01:55:32.499049] : system:: uuid get : SUCCESS
[2015-01-13 01:55:38.716907] : system:: uuid get : SUCCESS
[2015-01-13 01:56:52.905899] : volume status all tasks : SUCCESS
[2015-01-13 01:58:53.109613] : volume status all tasks : SUCCESS
[2015-01-13 02:03:26.769430] : system:: uuid get : SUCCESS
[2015-01-13 02:04:22.859213] : volume status all tasks : SUCCESS
[2015-01-13 02:05:22.970393] : volume status all tasks : SUCCESS
[2015-01-13 02:06:23.075823] : volume status all tasks : SUCCESS
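The "Locking failed on cpu0X" entries above point at glusterd on the named peers; a minimal way to pull the matching errors from each of them (assuming the default gluster 3.x log path) would be:

# run on cpu02, cpu03 and cpu04
grep -iE 'lock|staging' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 20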

Punit,

cli log wouldn't help much here. To debug this issue further can you please let us know the following:

1. gluster peer status output
2. gluster volume status output
3. gluster --version output
4. Which command failed
5. glusterd log file of all the nodes

~Atin
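Collecting that on this setup would look roughly like the following sketch (the glusterd log path is the usual default for gluster 3.x packages; adjust if your build differs):

# on any one node
gluster peer status
gluster volume status
gluster --version
# on each of the four nodes
cat /var/log/glusterfs/etc-glusterfs-glusterd.vol.log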

Hi Atin,

Please find the output from here :- http://ur1.ca/jf4bs

On Tue, Jan 13, 2015 at 12:37 PM, Atin Mukherjee <amukherj@redhat.com> wrote:
Punit,
cli log wouldn't help much here. To debug this issue further can you please let us know the following:
1. gluster peer status output
2. gluster volume status output
3. gluster --version output
4. Which command failed
5. glusterd log file of all the nodes
~Atin
On 01/13/2015 07:48 AM, Punit Dambiwal wrote:
Hi,
Please find the more details on this ....can anybody from gluster will help me here :-
Gluster CLI Logs :- /var/log/glusterfs/cli.log
[2015-01-13 02:06:23.071969] T [cli.c:264:cli_rpc_notify] 0-glusterfs: got RPC_CLNT_CONNECT [2015-01-13 02:06:23.072012] T [cli-quotad-client.c:94:cli_quotad_notify] 0-glusterfs: got RPC_CLNT_CONNECT [2015-01-13 02:06:23.072024] I [socket.c:2344:socket_event_handler] 0-transport: disconnecting now [2015-01-13 02:06:23.072055] T [cli-quotad-client.c:100:cli_quotad_notify] 0-glusterfs: got RPC_CLNT_DISCONNECT [2015-01-13 02:06:23.072131] T [rpc-clnt.c:1381:rpc_clnt_record] 0-glusterfs: Auth Info: pid: 0, uid: 0, gid: 0, owner: [2015-01-13 02:06:23.072176] T [rpc-clnt.c:1238:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 128, payload: 64, rpc hdr: 64 [2015-01-13 02:06:23.072572] T [socket.c:2863:socket_connect] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fed02f15420] (-->
(--> /usr/lib64/libgfrpc.so.0(rpc_clnt_submit+0x468)[0x7fed0266df98] (--> /usr/sbin/gluster(cli_submit_request+0xdb)[0x40a9bb] (--> /usr/sbin/gluster(cli_cmd_submit+0x8e)[0x40b7be] ))))) 0-glusterfs: connect () called on transport already connected [2015-01-13 02:06:23.072616] T [rpc-clnt.c:1573:rpc_clnt_submit] 0-rpc-clnt: submitted request (XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 27) to rpc-transport (glusterfs) [2015-01-13 02:06:23.072633] D [rpc-clnt-ping.c:231:rpc_clnt_start_ping] 0-glusterfs: ping timeout is 0, returning [2015-01-13 02:06:23.075930] T [rpc-clnt.c:660:rpc_clnt_reply_init] 0-glusterfs: received rpc message (RPC XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 27) from rpc-transport (glusterfs) [2015-01-13 02:06:23.075976] D [cli-rpc-ops.c:6548:gf_cli_status_cbk] 0-cli: Received response to status cmd [2015-01-13 02:06:23.076025] D [cli-cmd.c:384:cli_cmd_submit] 0-cli: Returning 0 [2015-01-13 02:06:23.076049] D [cli-rpc-ops.c:6811:gf_cli_status_volume] 0-cli: Returning: 0 [2015-01-13 02:06:23.076192] D [cli-xml-output.c:84:cli_begin_xml_output] 0-cli: Returning 0 [2015-01-13 02:06:23.076244] D [cli-xml-output.c:131:cli_xml_output_common] 0-cli: Returning 0 [2015-01-13 02:06:23.076256] D [cli-xml-output.c:1375:cli_xml_output_vol_status_begin] 0-cli: Returning 0 [2015-01-13 02:06:23.076437] D [cli-xml-output.c:108:cli_end_xml_output] 0-cli: Returning 0 [2015-01-13 02:06:23.076459] D [cli-xml-output.c:1398:cli_xml_output_vol_status_end] 0-cli: Returning 0 [2015-01-13 02:06:23.076490] I [input.c:36:cli_batch] 0-: Exiting with: 0
Command log :- /var/log/glusterfs/.cmd_log_history
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. [2015-01-13 01:10:35.836676] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. [2015-01-13 01:16:25.956514] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. [2015-01-13 01:17:36.977833] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. [2015-01-13 01:21:07.048053] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. [2015-01-13 01:26:57.168661] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. [2015-01-13 01:28:07.194428] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. [2015-01-13 01:30:27.256667] : volume status vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details. [2015-01-13 01:34:58.350748] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. [2015-01-13 01:36:08.375326] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. [2015-01-13 01:36:08.386470] : volume status vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details. 
[2015-01-13 01:42:59.524215] : volume stop vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details. [2015-01-13 01:45:10.550659] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. [2015-01-13 01:46:10.656802] : volume status all tasks : SUCCESS [2015-01-13 01:51:02.796031] : volume status all tasks : SUCCESS [2015-01-13 01:52:02.897804] : volume status all tasks : SUCCESS [2015-01-13 01:55:25.841070] : system:: uuid get : SUCCESS [2015-01-13 01:55:26.752084] : system:: uuid get : SUCCESS [2015-01-13 01:55:32.499049] : system:: uuid get : SUCCESS [2015-01-13 01:55:38.716907] : system:: uuid get : SUCCESS [2015-01-13 01:56:52.905899] : volume status all tasks : SUCCESS [2015-01-13 01:58:53.109613] : volume status all tasks : SUCCESS [2015-01-13 02:03:26.769430] : system:: uuid get : SUCCESS [2015-01-13 02:04:22.859213] : volume status all tasks : SUCCESS [2015-01-13 02:05:22.970393] : volume status all tasks : SUCCESS [2015-01-13 02:06:23.075823] : volume status all tasks : SUCCESS
On Mon, Jan 12, 2015 at 10:53 PM, Kanagaraj Mayilsamy < kmayilsa@redhat.com> wrote:
I can see the failures in glusterd log.
Can someone from glusterfs dev pls help on this?
Thanks, Kanagaraj
From: "Punit Dambiwal" <hypunit@gmail.com> To: "Kanagaraj" <kmayilsa@redhat.com> Cc: "Martin Pavlík" <mpavlik@redhat.com>, "Vijay Bellur" < vbellur@redhat.com>, "Kaushal M" <kshlmster@gmail.com>, users@ovirt.org, gluster-users@gluster.org Sent: Monday, January 12, 2015 3:36:43 PM Subject: Re: Failed to create volume in OVirt with gluster
Hi Kanagaraj,
Please find the logs from here :- http://ur1.ca/jeszc
[image: Inline image 1]
[image: Inline image 2]
On Mon, Jan 12, 2015 at 1:02 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Looks like there are some failures in gluster. Can you send the log output from glusterd log file from the relevant hosts?
Thanks, Kanagaraj
On 01/12/2015 10:24 AM, Punit Dambiwal wrote:
Hi,
Is there any one from gluster can help me here :-
Engine logs :-
2015-01-12 12:50:33,841 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:34,725 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,824 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,853 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,866 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:37,751 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,849 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,890 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:40,776 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,903 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,916 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:43,771 INFO [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] (ajp--127.0.0.1-8702-1) [330ace48] FINISH, CreateGlusterVolumeVDSCommand, log id: 303e70a4
2015-01-12 12:50:43,780 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID: 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event ID: -1, Message: Creation of Gluster Volume vol01 failed.
2015-01-12 12:50:43,785 INFO [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
On Sun, Jan 11, 2015 at 6:48 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
unfortunately I’m not that good with gluster, I was just following the obvious clue from the log. Could you try on the nodes whether the packages are even available for installation:
yum install gluster-swift gluster-swift-object gluster-swift-plugin gluster-swift-account gluster-swift-proxy gluster-swift-doc gluster-swift-container glusterfs-geo-replication
If not, you could try to get them from the official gluster repo:
http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
HTH
M.
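For reference, a quick way to check each node for the packages vdsm complains about is a small loop like this (a sketch only; the package list is taken from the vdsm log quoted further below):

for p in gluster-swift gluster-swift-object gluster-swift-plugin gluster-swift-account \
         gluster-swift-proxy gluster-swift-doc gluster-swift-container glusterfs-geo-replication; do
    rpm -q "$p" || yum list available "$p"   # installed? if not, visible in any enabled repo?
done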
On 10 Jan 2015, at 04:35, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Martin,
I installed gluster from the ovirt repo....is it required to install those packages manually ??
On Fri, Jan 9, 2015 at 7:19 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,

can you verify that the nodes contain the gluster packages from the following log?

Thread-14::DEBUG::2015-01-09 18:06:28,823::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,825::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,826::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found

M.

On 09 Jan 2015, at 11:13, Punit Dambiwal <hypunit@gmail.com> wrote:

Hi Kanagaraj,

Please find the attached logs :-

Engine Logs :- http://ur1.ca/jdopt
VDSM Logs :- http://ur1.ca/jdoq9

On Thu, Jan 8, 2015 at 6:05 PM, Kanagaraj <kmayilsa@redhat.com> wrote:

Do you see any errors in the UI?

Also please provide the engine.log and vdsm.log from when the failure occurred.

Thanks, Kanagaraj

On 01/08/2015 02:25 PM, Punit Dambiwal wrote:

Hi Martin,

The steps are below :-

1. Set up the ovirt engine on the one server...
2. Installed centos 7 on 4 host node servers..
3. I am using host nodes (compute+storage)....now I have added all 4 nodes to the engine...
4. Create the gluster volume from the GUI...

Network :-
eth0 :- public network (1G)
eth1+eth2=bond0= VM public network (1G)
eth3+eth4=bond1=ovirtmgmt+storage (10G private network)

every hostnode has 24 bricks=24*4 (distributed replicated)

Thanks, Punit
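One way to isolate whether the failure is on the oVirt side or inside gluster itself is to attempt the same volume creation directly from the gluster CLI on one of the nodes, for example (a sketch only; the brick paths and replica count here are illustrative, not taken from the thread):

gluster peer status    # every peer should report "Peer in Cluster (Connected)"
gluster volume create vol01 replica 2 \
    cpu01:/bricks/b1 cpu02:/bricks/b1 \
    cpu03:/bricks/b1 cpu04:/bricks/b1
gluster volume start vol01

If the CLI create fails the same way, the problem is in glusterd rather than in the engine.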

On 01/13/2015 12:12 PM, Punit Dambiwal wrote:
Hi Atin,
Please find the output from here :- http://ur1.ca/jf4bs
Looks like http://review.gluster.org/#/c/9269/ should solve this issue. Please note this patch has not been taken into the 3.6 release. Would you be able to apply this patch to the source and re-test?

~Atin
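For reference, applying a Gerrit change like that one to a source tree is typically done along these lines (a sketch; the repository URL, the /1 patchset suffix, and the build steps are assumptions, not taken from the thread):

git clone https://github.com/gluster/glusterfs.git
cd glusterfs
git checkout v3.6.1                                    # match the installed version
git fetch http://review.gluster.org/glusterfs refs/changes/69/9269/1
git cherry-pick FETCH_HEAD                             # apply the proposed fix
./autogen.sh && ./configure && make && make install    # typical autotools build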
On Tue, Jan 13, 2015 at 12:37 PM, Atin Mukherjee <amukherj@redhat.com> wrote:
Punit,
cli log wouldn't help much here. To debug this issue further, can you please let us know the following:

1. gluster peer status output
2. gluster volume status output
3. gluster --version output
4. Which command failed
5. glusterd log file of all the nodes
~Atin
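The first three items map directly onto CLI invocations on each node, e.g. (the glusterd log path shown is the default location):

gluster peer status     # peer membership and connection state
gluster volume status   # per-brick and per-daemon status
gluster --version       # confirm every node runs the same build
less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log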
On 01/13/2015 07:48 AM, Punit Dambiwal wrote:
Hi,
Please find more details on this....can anybody from gluster help me here :-
Gluster CLI Logs :- /var/log/glusterfs/cli.log
[2015-01-13 02:06:23.071969] T [cli.c:264:cli_rpc_notify] 0-glusterfs: got RPC_CLNT_CONNECT
[2015-01-13 02:06:23.072012] T [cli-quotad-client.c:94:cli_quotad_notify] 0-glusterfs: got RPC_CLNT_CONNECT
[2015-01-13 02:06:23.072024] I [socket.c:2344:socket_event_handler] 0-transport: disconnecting now
[2015-01-13 02:06:23.072055] T [cli-quotad-client.c:100:cli_quotad_notify] 0-glusterfs: got RPC_CLNT_DISCONNECT
[2015-01-13 02:06:23.072131] T [rpc-clnt.c:1381:rpc_clnt_record] 0-glusterfs: Auth Info: pid: 0, uid: 0, gid: 0, owner:
[2015-01-13 02:06:23.072176] T [rpc-clnt.c:1238:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 128, payload: 64, rpc hdr: 64
[2015-01-13 02:06:23.072572] T [socket.c:2863:socket_connect] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fed02f15420] (--> /usr/lib64/glusterfs/3.6.1/rpc-transport/socket.so(+0x7293)[0x7fed001a4293] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_submit+0x468)[0x7fed0266df98] (--> /usr/sbin/gluster(cli_submit_request+0xdb)[0x40a9bb] (--> /usr/sbin/gluster(cli_cmd_submit+0x8e)[0x40b7be] ))))) 0-glusterfs: connect () called on transport already connected
[2015-01-13 02:06:23.072616] T [rpc-clnt.c:1573:rpc_clnt_submit] 0-rpc-clnt: submitted request (XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 27) to rpc-transport (glusterfs)
[2015-01-13 02:06:23.072633] D [rpc-clnt-ping.c:231:rpc_clnt_start_ping] 0-glusterfs: ping timeout is 0, returning
[2015-01-13 02:06:23.075930] T [rpc-clnt.c:660:rpc_clnt_reply_init] 0-glusterfs: received rpc message (RPC XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 27) from rpc-transport (glusterfs)
[2015-01-13 02:06:23.075976] D [cli-rpc-ops.c:6548:gf_cli_status_cbk] 0-cli: Received response to status cmd
[2015-01-13 02:06:23.076025] D [cli-cmd.c:384:cli_cmd_submit] 0-cli: Returning 0
[2015-01-13 02:06:23.076049] D [cli-rpc-ops.c:6811:gf_cli_status_volume] 0-cli: Returning: 0
[2015-01-13 02:06:23.076192] D [cli-xml-output.c:84:cli_begin_xml_output] 0-cli: Returning 0
[2015-01-13 02:06:23.076244] D [cli-xml-output.c:131:cli_xml_output_common] 0-cli: Returning 0
[2015-01-13 02:06:23.076256] D [cli-xml-output.c:1375:cli_xml_output_vol_status_begin] 0-cli: Returning 0
[2015-01-13 02:06:23.076437] D [cli-xml-output.c:108:cli_end_xml_output] 0-cli: Returning 0
[2015-01-13 02:06:23.076459] D [cli-xml-output.c:1398:cli_xml_output_vol_status_end] 0-cli: Returning 0
[2015-01-13 02:06:23.076490] I [input.c:36:cli_batch] 0-: Exiting with: 0
Command log :- /var/log/glusterfs/.cmd_log_history
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:10:35.836676] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:16:25.956514] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:17:36.977833] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:21:07.048053] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:26:57.168661] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:28:07.194428] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:30:27.256667] : volume status vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details.
[2015-01-13 01:34:58.350748] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:36:08.375326] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:36:08.386470] : volume status vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details.
[2015-01-13 01:42:59.524215] : volume stop vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details. Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details.
[2015-01-13 01:45:10.550659] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details. Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:46:10.656802] : volume status all tasks : SUCCESS
[2015-01-13 01:51:02.796031] : volume status all tasks : SUCCESS
[2015-01-13 01:52:02.897804] : volume status all tasks : SUCCESS
[2015-01-13 01:55:25.841070] : system:: uuid get : SUCCESS
[2015-01-13 01:55:26.752084] : system:: uuid get : SUCCESS
[2015-01-13 01:55:32.499049] : system:: uuid get : SUCCESS
[2015-01-13 01:55:38.716907] : system:: uuid get : SUCCESS
[2015-01-13 01:56:52.905899] : volume status all tasks : SUCCESS
[2015-01-13 01:58:53.109613] : volume status all tasks : SUCCESS
[2015-01-13 02:03:26.769430] : system:: uuid get : SUCCESS
[2015-01-13 02:04:22.859213] : volume status all tasks : SUCCESS
[2015-01-13 02:05:22.970393] : volume status all tasks : SUCCESS
[2015-01-13 02:06:23.075823] : volume status all tasks : SUCCESS
On Mon, Jan 12, 2015 at 10:53 PM, Kanagaraj Mayilsamy <kmayilsa@redhat.com> wrote:
I can see the failures in glusterd log.
Can someone from glusterfs dev pls help on this?
Thanks, Kanagaraj
From: "Punit Dambiwal" <hypunit@gmail.com> To: "Kanagaraj" <kmayilsa@redhat.com> Cc: "Martin Pavlík" <mpavlik@redhat.com>, "Vijay Bellur" < vbellur@redhat.com>, "Kaushal M" <kshlmster@gmail.com>, users@ovirt.org, gluster-users@gluster.org Sent: Monday, January 12, 2015 3:36:43 PM Subject: Re: Failed to create volume in OVirt with gluster
Hi Kanagaraj,
Please find the logs from here :- http://ur1.ca/jeszc
On Mon, Jan 12, 2015 at 1:02 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Looks like there are some failures in gluster. Can you send the log output from glusterd log file from the relevant hosts?
Thanks, Kanagaraj
On 01/12/2015 10:24 AM, Punit Dambiwal wrote:
Hi,
Is there any one from gluster can help me here :-
Engine logs :-
2015-01-12 12:50:33,841 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:34,725 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:36,824 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:36,853 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:36,866 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:37,751 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:39,849 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:39,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:39,890 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:40,776 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:42,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:42,903 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:42,916 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ] 2015-01-12 12:50:43,771 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--127.0.0.1-8702-1) [330ace48] FINISH, CreateGlusterVolumeVDSCommand, log id: 303e70a4 2015-01-12 12:50:43,780 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID: 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event ID: -1, Message: Creation of Gluster Volume vol01 failed. 2015-01-12 12:50:43,785 INFO [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
[image: Inline image 2]
On Sun, Jan 11, 2015 at 6:48 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
> Hi Punit, > > unfortunately I’am not that good with the gluster, I was just following > the obvious clue from the log. Could you try on the nodes if the
----- Original Message ----- packages
> are even available for installation > > yum install gluster-swift gluster-swift-object gluster-swift-plugin > gluster-swift-account > gluster-swift-proxy gluster-swift-doc gluster-swift-container > glusterfs-geo-replication > > if not you could try to get them in official gluster repo. >
http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-ep...
> > HTH > > M. > > > > > On 10 Jan 2015, at 04:35, Punit Dambiwal <hypunit@gmail.com> wrote: > > Hi Martin, > > I installed gluster from ovirt repo....is it require to install
/usr/lib64/glusterfs/3.6.1/rpc-transport/socket.so(+0x7293)[0x7fed001a4293] those
> packages manually ?? > > On Fri, Jan 9, 2015 at 7:19 PM, Martin Pavlík <mpavlik@redhat.com> wrote: > >> Hi Punit, >> >> can you verify that nodes contain cluster packages from the following >> log? >> >> Thread-14::DEBUG::2015-01-09 >> 18:06:28,823::caps::716::root::(_getKeyPackages) rpm package >> ('gluster-swift',) not found >> Thread-14::DEBUG::2015-01-09 >> 18:06:28,825::caps::716::root::(_getKeyPackages) rpm package >> ('gluster-swift-object',) not found >> Thread-14::DEBUG::2015-01-09 >> 18:06:28,826::caps::716::root::(_getKeyPackages) rpm package >> ('gluster-swift-plugin',) not found >> Thread-14::DEBUG::2015-01-09 >> 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package >> ('gluster-swift-account',) not found >> Thread-14::DEBUG::2015-01-09 >> 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package >> ('gluster-swift-proxy',) not found >> Thread-14::DEBUG::2015-01-09 >> 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package >> ('gluster-swift-doc',) not found >> Thread-14::DEBUG::2015-01-09 >> 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package >> ('gluster-swift-container',) not found >> Thread-14::DEBUG::2015-01-09 >> 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package >> ('glusterfs-geo-replication',) not found >> >> >> M. >> >> On 09 Jan 2015, at 11:13, Punit Dambiwal <hypunit@gmail.com> wrote: >> >> Hi Kanagaraj, >> >> Please find the attached logs :- >> >> Engine Logs :- http://ur1.ca/jdopt >> VDSM Logs :- http://ur1.ca/jdoq9 >> >> >> >> On Thu, Jan 8, 2015 at 6:05 PM, Kanagaraj <kmayilsa@redhat.com> wrote: >> >>> Do you see any errors in the UI? >>> >>> Also please provide the engine.log and vdsm.log when the failure >>> occured. >>> >>> Thanks, >>> Kanagaraj >>> >>> >>> On 01/08/2015 02:25 PM, Punit Dambiwal wrote: >>> >>> Hi Martin, >>> >>> The steps are below :- >>> >>> 1. Step the ovirt engine on the one server... >>> 2. Installed centos 7 on 4 host node servers.. >>> 3. I am using host node (compute+storage)....now i have added all 4 >>> nodes to engine... >>> 4. Create the gluster volume from GUI... >>> >>> Network :- >>> eth0 :- public network (1G) >>> eth1+eth2=bond0= VM public network (1G) >>> eth3+eth4=bond1=ovirtmgmt+storage (10G private network) >>> >>> every hostnode has 24 bricks=24*4(distributed replicated) >>> >>> Thanks, >>> Punit >>> >>> >>> On Thu, Jan 8, 2015 at 3:20 PM, Martin Pavlík <mpavlik@redhat.com> >>> wrote: >>> >>>> Hi Punit, >>>> >>>> can you please provide also errors from /var/log/vdsm/vdsm.log and >>>> /var/log/vdsm/vdsmd.log >>>> >>>> it would be really helpful if you provided exact steps how to >>>> reproduce the problem. >>>> >>>> regards >>>> >>>> Martin Pavlik - rhev QE >>>> > On 08 Jan 2015, at 03:06, Punit Dambiwal <hypunit@gmail.com> wrote: >>>>> >>>>> Hi, >>>>> >>>>> I try to add gluster volume but it failed... 
>>>>> >>>>> Ovirt :- 3.5 >>>>> VDSM :- vdsm-4.16.7-1.gitdb83943.el7 >>>>> KVM :- 1.5.3 - 60.el7_0.2 >>>>> libvirt-1.1.1-29.el7_0.4 >>>>> Glusterfs :- glusterfs-3.5.3-1.el7 >>>>> >>>>> Engine Logs :- >>>>> >>>>> 2015-01-08 09:57:52,569 INFO >>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] >>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock >>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 >>>> value: GLUSTER >>>>> , sharedLocks= ] >>>>> 2015-01-08 09:57:52,609 INFO >>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] >>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock >>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 >>>> value: GLUSTER >>>>> , sharedLocks= ] >>>>> 2015-01-08 09:57:55,582 INFO >>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] >>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock >>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 >>>> value: GLUSTER >>>>> , sharedLocks= ] >>>>> 2015-01-08 09:57:55,591 INFO >>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] >>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock >>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 >>>> value: GLUSTER >>>>> , sharedLocks= ] >>>>> 2015-01-08 09:57:55,596 INFO >>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] >>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock >>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 >>>> value: GLUSTER >>>>> , sharedLocks= ] >>>>> 2015-01-08 09:57:55,633 INFO >>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] >>>> (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock >>>> EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 >>>> value: GLUSTER >>>>> , sharedLocks= ] >>>>> ^C >>>>> >>>>> >>>> >>>> >>> >>> >> <216 09-Jan-15.jpg><217 09-Jan-15.jpg> >> >> >> > >
_______________________________________________ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users

Hi Atin,

What about if I use glusterfs 3.5 ?? Will this bug affect 3.5 also ??

On Tue, Jan 13, 2015 at 3:00 PM, Atin Mukherjee <amukherj@redhat.com> wrote:
Looks like http://review.gluster.org/#/c/9269/ should solve this issue. Please note this patch has not been taken into the 3.6 release. Would you be able to apply this patch to the source and re-test?
From: "Punit Dambiwal" <hypunit@gmail.com> To: "Kanagaraj" <kmayilsa@redhat.com> Cc: "Martin Pavl=C3=ADk" <mpavlik@redhat.com>, "Vijay Bellur" = <vbellur@redhat.com>, "Kaushal M" <kshlmster@gmail.com>, users@ovirt.org, gluster-users@gluster.org Sent: Monday, January 12, 2015 3:36:43 PM Subject: Re: Failed to create volume in OVirt with gluster
Hi Kanagaraj,
Please find the logs from here :- http://ur1.ca/jeszc
[image: Inline image 1]
[image: Inline image 2]
On Mon, Jan 12, 2015 at 1:02 PM, Kanagaraj <kmayilsa@redhat.com> = wrote:
Looks like there are some failures in gluster. Can you send the log output from glusterd log file from the relevant = hosts?
Thanks, Kanagaraj
On 01/12/2015 10:24 AM, Punit Dambiwal wrote:
Hi,
Is there any one from gluster can help me here :-
Engine logs :-
2015-01-12 12:50:33,841 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:34,725 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:36,824 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:36,853 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:36,866 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:37,751 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:39,849 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:39,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:39,890 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:40,776 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:42,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:42,903 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:42,916 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks=3D ] 2015-01-12 12:50:43,771 INFO = [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] 
(ajp--127.0.0.1-8702-1) [330ace48] FINISH, = CreateGlusterVolumeVDSCommand, log id: 303e70a4 2015-01-12 12:50:43,780 ERROR = [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID: 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event = ID: -1, Message: Creation of Gluster Volume vol01 failed. 2015-01-12 12:50:43,785 INFO [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object EngineLock [exclusiveLocks=3D key: 00000001-0001-0001-0001-000000000300 value: = GLUSTER , sharedLocks=3D ]
[image: Inline image 2]
On Sun, Jan 11, 2015 at 6:48 PM, Martin Pavl=C3=ADk = <mpavlik@redhat.com> wrote:
Hi Punit,
unfortunately I=E2=80=99am not that good with the gluster, I was = just following the obvious clue from the log. Could you try on the nodes if the =
are even available for installation
yum install gluster-swift gluster-swift-object = gluster-swift-plugin gluster-swift-account gluster-swift-proxy gluster-swift-doc gluster-swift-container glusterfs-geo-replication
if not you could try to get them in official gluster repo. = http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs= -epel.repo
HTH
M.
On 10 Jan 2015, at 04:35, Punit Dambiwal <hypunit@gmail.com> = wrote:
Hi Martin,
I installed gluster from ovirt repo....is it require to install =
Are you using ctdb??? and did you specify Lock=False in /etc/nfsmount.conf

Can you give a full run down of topology, and has this ever been working or is it a new deployment?

Donny D

From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of Punit Dambiwal
Sent: Monday, January 12, 2015 7:18 PM
To: Kanagaraj Mayilsamy
Cc: gluster-users@gluster.org; Kaushal M; users@ovirt.org
Subject: Re: [ovirt-users] Failed to create volume in OVirt with gluster
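For reference, the Lock option Donny mentions above would be set roughly like this (a sketch; the section name follows nfsmount.conf(5)):

# disable NLM locking by default for NFS mounts
cat >> /etc/nfsmount.conf <<'EOF'
[ NFSMount_Global_Options ]
Lock=False
EOF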
Network :- eth0 :- public network (1G) eth1+eth2=3Dbond0=3D VM public network (1G) eth3+eth4=3Dbond1=3Dovirtmgmt+storage (10G private network)
every hostnode has 24 bricks=3D24*4(distributed replicated)
Thanks, Punit
On Thu, Jan 8, 2015 at 3:20 PM, Martin Pavl=C3=ADk = <mpavlik@redhat.com> wrote:
Hi Punit,
can you please provide also errors from /var/log/vdsm/vdsm.log = and /var/log/vdsm/vdsmd.log
it would be really helpful if you provided exact steps how to reproduce the problem.
regards
Martin Pavlik - rhev QE > On 08 Jan 2015, at 03:06, Punit Dambiwal <hypunit@gmail.com> = wrote: > > Hi, > > I try to add gluster volume but it failed... > > Ovirt :- 3.5 > VDSM :- vdsm-4.16.7-1.gitdb83943.el7 > KVM :- 1.5.3 - 60.el7_0.2 > libvirt-1.1.1-29.el7_0.4 > Glusterfs :- glusterfs-3.5.3-1.el7 > > Engine Logs :- > > 2015-01-08 09:57:52,569 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and = wait lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER > , sharedLocks=3D ] > 2015-01-08 09:57:52,609 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and = wait lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER > , sharedLocks=3D ] > 2015-01-08 09:57:55,582 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and = wait lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER > , sharedLocks=3D ] > 2015-01-08 09:57:55,591 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and = wait lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER > , sharedLocks=3D ] > 2015-01-08 09:57:55,596 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and = wait lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER > , sharedLocks=3D ] > 2015-01-08 09:57:55,633 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and = wait lock EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER > , sharedLocks=3D ] > ^C > >
<216 09-Jan-15.jpg><217 09-Jan-15.jpg>
> > (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, = Job ID:<br>> > 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: = null, Custom Event ID:<br>> > -1, Message: Creation of Gluster = Volume vol01 failed.<br>> > 2015-01-12 12:50:43,785 INFO<br>> = > = [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]<br>> = > (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object = EngineLock<br>> > [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300 value: GLUSTER<br>> > , = sharedLocks=3D ]<br>> ><br>> > [image: Inline image = 2]<br>> ><br>> ><br>> > On Sun, Jan 11, 2015 at 6:48 = PM, Martin Pavl=C3=ADk <<a =
=20 ------=_NextPart_000_00B5_01D02F1E.B7FDC3D0 Content-Type: text/html; charset="utf-8" Content-Transfer-Encoding: quoted-printable <html xmlns:v=3D"urn:schemas-microsoft-com:vml" = xmlns:o=3D"urn:schemas-microsoft-com:office:office" = xmlns:w=3D"urn:schemas-microsoft-com:office:word" = xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" = xmlns=3D"http://www.w3.org/TR/REC-html40"><head><meta = http-equiv=3DContent-Type content=3D"text/html; charset=3Dutf-8"><meta = name=3DGenerator content=3D"Microsoft Word 14 (filtered = medium)"><style><!-- /* Font Definitions */ @font-face {font-family:Calibri; panose-1:2 15 5 2 2 2 4 3 2 4;} @font-face {font-family:Tahoma; panose-1:2 11 6 4 3 5 4 4 2 4;} /* Style Definitions */ p.MsoNormal, li.MsoNormal, div.MsoNormal {margin:0in; margin-bottom:.0001pt; font-size:12.0pt; font-family:"Times New Roman","serif";} a:link, span.MsoHyperlink {mso-style-priority:99; color:blue; text-decoration:underline;} a:visited, span.MsoHyperlinkFollowed {mso-style-priority:99; color:purple; text-decoration:underline;} span.EmailStyle17 {mso-style-type:personal-reply; font-family:"Calibri","sans-serif"; color:#1F497D;} .MsoChpDefault {mso-style-type:export-only; font-family:"Calibri","sans-serif";} @page WordSection1 {size:8.5in 11.0in; margin:1.0in 1.0in 1.0in 1.0in;} div.WordSection1 {page:WordSection1;} --></style><!--[if gte mso 9]><xml> <o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" /> </xml><![endif]--><!--[if gte mso 9]><xml> <o:shapelayout v:ext=3D"edit"> <o:idmap v:ext=3D"edit" data=3D"1" /> </o:shapelayout></xml><![endif]--></head><body lang=3DEN-US link=3Dblue = vlink=3Dpurple><div class=3DWordSection1><p class=3DMsoNormal><span = style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497= D'>Are you using ctdb??? 
and did you specify Lock=3DFalse in = /etc/nfsmount.conf<o:p></o:p></span></p><p class=3DMsoNormal><span = style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497= D'><o:p> </o:p></span></p><p class=3DMsoNormal><span = style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497= D'>Can you give a full run down of topology, and has this ever been = working or is it a new deployment?<o:p></o:p></span></p><p = class=3DMsoNormal><span = style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497= D'><o:p> </o:p></span></p><p class=3DMsoNormal><span = style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497= D'>Donny D<o:p></o:p></span></p><p class=3DMsoNormal><span = style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497= D'><o:p> </o:p></span></p><p class=3DMsoNormal><b><span = style=3D'font-size:10.0pt;font-family:"Tahoma","sans-serif"'>From:</span>= </b><span style=3D'font-size:10.0pt;font-family:"Tahoma","sans-serif"'> = users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] <b>On Behalf Of = </b>Punit Dambiwal<br><b>Sent:</b> Monday, January 12, 2015 7:18 = PM<br><b>To:</b> Kanagaraj Mayilsamy<br><b>Cc:</b> = gluster-users@gluster.org; Kaushal M; users@ovirt.org<br><b>Subject:</b> = Re: [ovirt-users] Failed to create volume in OVirt with = gluster<o:p></o:p></span></p><p = class=3DMsoNormal><o:p> </o:p></p><div><p = class=3DMsoNormal>Hi,<o:p></o:p></p><div><p = class=3DMsoNormal><o:p> </o:p></p></div><div><p = class=3DMsoNormal>Please find the more details on this ....can anybody = from gluster will help me here :- <o:p></o:p></p><div><p = class=3DMsoNormal><o:p> </o:p></p></div><div><p = class=3DMsoNormal><o:p> </o:p></p></div><div><p = class=3DMsoNormal>Gluster CLI Logs = :- /var/log/glusterfs/cli.log<o:p></o:p></p></div><div><p = class=3DMsoNormal><o:p> </o:p></p></div><div><div><p = class=3DMsoNormal>[2015-01-13 02:06:23.071969] T = [cli.c:264:cli_rpc_notify] 0-glusterfs: got = RPC_CLNT_CONNECT<o:p></o:p></p></div><div><p = class=3DMsoNormal>[2015-01-13 02:06:23.072012] T = [cli-quotad-client.c:94:cli_quotad_notify] 0-glusterfs: got = RPC_CLNT_CONNECT<o:p></o:p></p></div><div><p = class=3DMsoNormal>[2015-01-13 02:06:23.072024] I = [socket.c:2344:socket_event_handler] 0-transport: disconnecting = now<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:06:23.072055] T [cli-quotad-client.c:100:cli_quotad_notify] = 0-glusterfs: got RPC_CLNT_DISCONNECT<o:p></o:p></p></div><div><p = class=3DMsoNormal>[2015-01-13 02:06:23.072131] T = [rpc-clnt.c:1381:rpc_clnt_record] 0-glusterfs: Auth Info: pid: 0, uid: = 0, gid: 0, owner:<o:p></o:p></p></div><div><p = class=3DMsoNormal>[2015-01-13 02:06:23.072176] T = [rpc-clnt.c:1238:rpc_clnt_record_build_header] 0-rpc-clnt: Request = fraglen 128, payload: 64, rpc hdr: 64<o:p></o:p></p></div><div><p = class=3DMsoNormal>[2015-01-13 02:06:23.072572] T = [socket.c:2863:socket_connect] (--> = /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fed02f15420] = (--> = /usr/lib64/glusterfs/3.6.1/rpc-transport/socket.so(+0x7293)[0x7fed001a429= 3] (--> = /usr/lib64/libgfrpc.so.0(rpc_clnt_submit+0x468)[0x7fed0266df98] (--> = /usr/sbin/gluster(cli_submit_request+0xdb)[0x40a9bb] (--> = /usr/sbin/gluster(cli_cmd_submit+0x8e)[0x40b7be] ))))) 0-glusterfs: = connect () called on transport already = connected<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:06:23.072616] T [rpc-clnt.c:1573:rpc_clnt_submit] 0-rpc-clnt: = submitted request (XID: 0x1 Program: 
Gluster CLI, ProgVers: 2, Proc: 27) = to rpc-transport (glusterfs)<o:p></o:p></p></div><div><p = class=3DMsoNormal>[2015-01-13 02:06:23.072633] D = [rpc-clnt-ping.c:231:rpc_clnt_start_ping] 0-glusterfs: ping timeout is = 0, returning<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:06:23.075930] T [rpc-clnt.c:660:rpc_clnt_reply_init] 0-glusterfs: = received rpc message (RPC XID: 0x1 Program: Gluster CLI, ProgVers: 2, = Proc: 27) from rpc-transport (glusterfs)<o:p></o:p></p></div><div><p = class=3DMsoNormal>[2015-01-13 02:06:23.075976] D = [cli-rpc-ops.c:6548:gf_cli_status_cbk] 0-cli: Received response to = status cmd<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:06:23.076025] D [cli-cmd.c:384:cli_cmd_submit] 0-cli: Returning = 0<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:06:23.076049] D [cli-rpc-ops.c:6811:gf_cli_status_volume] 0-cli: = Returning: 0<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:06:23.076192] D [cli-xml-output.c:84:cli_begin_xml_output] 0-cli: = Returning 0<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:06:23.076244] D [cli-xml-output.c:131:cli_xml_output_common] 0-cli: = Returning 0<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:06:23.076256] D = [cli-xml-output.c:1375:cli_xml_output_vol_status_begin] 0-cli: Returning = 0<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:06:23.076437] D [cli-xml-output.c:108:cli_end_xml_output] 0-cli: = Returning 0<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:06:23.076459] D [cli-xml-output.c:1398:cli_xml_output_vol_status_end] = 0-cli: Returning 0<o:p></o:p></p></div><div><p = class=3DMsoNormal>[2015-01-13 02:06:23.076490] I [input.c:36:cli_batch] = 0-: Exiting with: 0<o:p></o:p></p></div></div><div><p = class=3DMsoNormal><o:p> </o:p></p></div></div><div><p = class=3DMsoNormal>Command log = :- /var/log/glusterfs/.cmd_log_history<o:p></o:p></p></div><div><p = class=3DMsoNormal><o:p> </o:p></p></div><div><div><p = class=3DMsoNormal>Staging failed on = 00000000-0000-0000-0000-000000000000. Please check log file for = details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging failed on = 00000000-0000-0000-0000-000000000000. Please check log file for = details.<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:10:35.836676] : volume status all tasks : FAILED : Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:16:25.956514] : volume status all tasks : FAILED : Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:17:36.977833] : volume status all tasks : FAILED : Staging = failed on 00000000-0000-0000-0000-000000000000. 
Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:21:07.048053] : volume status all tasks : FAILED : Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:26:57.168661] : volume status all tasks : FAILED : Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:28:07.194428] : volume status all tasks : FAILED : Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:30:27.256667] : volume status vol01 : FAILED : Locking failed = on <a = href=3D"http://cpu02.zne01.hkg1.stack.com">cpu02.zne01.hkg1.stack.com</a>= . Please check log file for details.<o:p></o:p></p></div><div><p = class=3DMsoNormal>Locking failed on <a = href=3D"http://cpu03.zne01.hkg1.stack.com">cpu03.zne01.hkg1.stack.com</a>= . Please check log file for details.<o:p></o:p></p></div><div><p = class=3DMsoNormal>Locking failed on <a = href=3D"http://cpu04.zne01.hkg1.stack.com">cpu04.zne01.hkg1.stack.com</a>= . Please check log file for details.<o:p></o:p></p></div><div><p = class=3DMsoNormal>[2015-01-13 01:34:58.350748] : volume status all = tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. = Please check log file for details.<o:p></o:p></p></div><div><p = class=3DMsoNormal>Staging failed on = 00000000-0000-0000-0000-000000000000. Please check log file for = details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging failed on = 00000000-0000-0000-0000-000000000000. Please check log file for = details.<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:36:08.375326] : volume status all tasks : FAILED : Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging = failed on 00000000-0000-0000-0000-000000000000. 
Please check log file = for details.<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:36:08.386470] : volume status vol01 : FAILED : Locking failed = on <a = href=3D"http://cpu02.zne01.hkg1.stack.com">cpu02.zne01.hkg1.stack.com</a>= . Please check log file for details.<o:p></o:p></p></div><div><p = class=3DMsoNormal>Locking failed on <a = href=3D"http://cpu03.zne01.hkg1.stack.com">cpu03.zne01.hkg1.stack.com</a>= . Please check log file for details.<o:p></o:p></p></div><div><p = class=3DMsoNormal>Locking failed on <a = href=3D"http://cpu04.zne01.hkg1.stack.com">cpu04.zne01.hkg1.stack.com</a>= . Please check log file for details.<o:p></o:p></p></div><div><p = class=3DMsoNormal>[2015-01-13 01:42:59.524215] : volume stop vol01 = : FAILED : Locking failed on <a = href=3D"http://cpu02.zne01.hkg1.stack.com">cpu02.zne01.hkg1.stack.com</a>= . Please check log file for details.<o:p></o:p></p></div><div><p = class=3DMsoNormal>Locking failed on <a = href=3D"http://cpu03.zne01.hkg1.stack.com">cpu03.zne01.hkg1.stack.com</a>= . Please check log file for details.<o:p></o:p></p></div><div><p = class=3DMsoNormal>Locking failed on <a = href=3D"http://cpu04.zne01.hkg1.stack.com">cpu04.zne01.hkg1.stack.com</a>= . Please check log file for details.<o:p></o:p></p></div><div><p = class=3DMsoNormal>[2015-01-13 01:45:10.550659] : volume status all = tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. = Please check log file for details.<o:p></o:p></p></div><div><p = class=3DMsoNormal>Staging failed on = 00000000-0000-0000-0000-000000000000. Please check log file for = details.<o:p></o:p></p></div><div><p class=3DMsoNormal>Staging failed on = 00000000-0000-0000-0000-000000000000. Please check log file for = details.<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:46:10.656802] : volume status all tasks : = SUCCESS<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:51:02.796031] : volume status all tasks : = SUCCESS<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:52:02.897804] : volume status all tasks : = SUCCESS<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:55:25.841070] : system:: uuid get : = SUCCESS<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:55:26.752084] : system:: uuid get : = SUCCESS<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:55:32.499049] : system:: uuid get : = SUCCESS<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:55:38.716907] : system:: uuid get : = SUCCESS<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:56:52.905899] : volume status all tasks : = SUCCESS<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 01:58:53.109613] : volume status all tasks : = SUCCESS<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:03:26.769430] : system:: uuid get : = SUCCESS<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:04:22.859213] : volume status all tasks : = SUCCESS<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:05:22.970393] : volume status all tasks : = SUCCESS<o:p></o:p></p></div><div><p class=3DMsoNormal>[2015-01-13 = 02:06:23.075823] : volume status all tasks : = SUCCESS<o:p></o:p></p></div></div><div><p = class=3DMsoNormal><o:p> </o:p></p></div></div><div><p = class=3DMsoNormal><o:p> </o:p></p><div><p class=3DMsoNormal>On Mon, = Jan 12, 2015 at 10:53 PM, Kanagaraj Mayilsamy <<a = href=3D"mailto:kmayilsa@redhat.com" = target=3D"_blank">kmayilsa@redhat.com</a>> wrote:<o:p></o:p></p><p = 
class=3DMsoNormal>I can see the failures in glusterd log.<br><br>Can = someone from glusterfs dev pls help on = this?<br><br>Thanks,<br>Kanagaraj<o:p></o:p></p><div><div><p = class=3DMsoNormal><br>----- Original Message -----<br>> From: = "Punit Dambiwal" <<a = href=3D"mailto:hypunit@gmail.com">hypunit@gmail.com</a>><br>> To: = "Kanagaraj" <<a = href=3D"mailto:kmayilsa@redhat.com">kmayilsa@redhat.com</a>><br>> = Cc: "Martin Pavl=C3=ADk" <<a = href=3D"mailto:mpavlik@redhat.com">mpavlik@redhat.com</a>>, = "Vijay Bellur" <<a = href=3D"mailto:vbellur@redhat.com">vbellur@redhat.com</a>>, = "Kaushal M" <<a = href=3D"mailto:kshlmster@gmail.com">kshlmster@gmail.com</a>>,<br>> = <a href=3D"mailto:users@ovirt.org">users@ovirt.org</a>, <a = href=3D"mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><b= r>> Sent: Monday, January 12, 2015 3:36:43 PM<br>> Subject: Re: = Failed to create volume in OVirt with gluster<br>><br>> Hi = Kanagaraj,<br>><br>> Please find the logs from here :- <a = href=3D"http://ur1.ca/jeszc" = target=3D"_blank">http://ur1.ca/jeszc</a><br>><br>> [image: Inline = image 1]<br>><br>> [image: Inline image 2]<br>><br>> On Mon, = Jan 12, 2015 at 1:02 PM, Kanagaraj <<a = href=3D"mailto:kmayilsa@redhat.com">kmayilsa@redhat.com</a>> = wrote:<br>><br>> > Looks like there are some failures in = gluster.<br>> > Can you send the log output from glusterd log file = from the relevant hosts?<br>> ><br>> > Thanks,<br>> > = Kanagaraj<br>> ><br>> ><br>> > On 01/12/2015 10:24 AM, = Punit Dambiwal wrote:<br>> ><br>> > Hi,<br>> ><br>> = > Is there any one from gluster can help me here :-<br>> = ><br>> > Engine logs :-<br>> ><br>> > = 2015-01-12 12:50:33,841 INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 2015-01-12 12:50:34,725 = INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 2015-01-12 12:50:36,824 = INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 2015-01-12 12:50:36,853 = INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 2015-01-12 12:50:36,866 = INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 2015-01-12 12:50:37,751 = INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 
2015-01-12 12:50:39,849 = INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 2015-01-12 12:50:39,878 = INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 2015-01-12 12:50:39,890 = INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 2015-01-12 12:50:40,776 = INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 2015-01-12 12:50:42,878 = INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 2015-01-12 12:50:42,903 = INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 2015-01-12 12:50:42,916 = INFO<br>> > = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> > = (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait = lock<br>> > EngineLock [exclusiveLocks=3D key: = 00000001-0001-0001-0001-000000000300<br>> > value: GLUSTER<br>> = > , sharedLocks=3D ]<br>> > 2015-01-12 12:50:43,771 = INFO<br>> > = [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]<b= r>> > (ajp--127.0.0.1-8702-1) [330ace48] FINISH, = CreateGlusterVolumeVDSCommand,<br>> > log id: 303e70a4<br>> = > 2015-01-12 12:50:43,780 ERROR<br>> > = [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]<br= href=3D"mailto:mpavlik@redhat.com">mpavlik@redhat.com</a>> = wrote:<br>> ><br>> >> Hi Punit,<br>> >><br>> = >> unfortunately I=E2=80=99am not that good with the = gluster, I was just following<br>> >> the obvious clue from the = log. 
Could you try on the nodes if the packages<br>> >> are = even available for installation<br>> >><br>> >> = yum install gluster-swift gluster-swift-object = gluster-swift-plugin<br>> >> = gluster-swift-account<br>> >> gluster-swift-proxy = gluster-swift-doc gluster-swift-container<br>> >> = glusterfs-geo-replication<br>> >><br>> >> if not = you could try to get them in official gluster repo.<br>> >> <a = href=3D"http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/g= lusterfs-epel.repo" = target=3D"_blank">http://download.gluster.org/pub/gluster/glusterfs/LATES= T/CentOS/glusterfs-epel.repo</a><br>> >><br>> >> = HTH<br>> >><br>> >> M.<br>> >><br>> = >><br>> >><br>> >><br>> >> = On 10 Jan 2015, at 04:35, Punit Dambiwal <<a = href=3D"mailto:hypunit@gmail.com">hypunit@gmail.com</a>> = wrote:<br>> >><br>> >> Hi Martin,<br>> = >><br>> >> I installed gluster from ovirt = repo....is it require to install those<br>> >> packages = manually ??<br>> >><br>> >> On Fri, Jan 9, 2015 at = 7:19 PM, Martin Pavl=C3=ADk <<a = href=3D"mailto:mpavlik@redhat.com">mpavlik@redhat.com</a>> = wrote:<br>> >><br>> >>> Hi Punit,<br>> = >>><br>> >>> can you verify that nodes contain = cluster packages from the following<br>> >>> log?<br>> = >>><br>> >>> Thread-14::DEBUG::2015-01-09<br>> = >>> 18:06:28,823::caps::716::root::(_getKeyPackages) rpm = package<br>> >>> ('gluster-swift',) not found<br>> = >>> Thread-14::DEBUG::2015-01-09<br>> >>> = 18:06:28,825::caps::716::root::(_getKeyPackages) rpm package<br>> = >>> ('gluster-swift-object',) not found<br>> >>> = Thread-14::DEBUG::2015-01-09<br>> >>> = 18:06:28,826::caps::716::root::(_getKeyPackages) rpm package<br>> = >>> ('gluster-swift-plugin',) not found<br>> >>> = Thread-14::DEBUG::2015-01-09<br>> >>> = 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package<br>> = >>> ('gluster-swift-account',) not found<br>> >>> = Thread-14::DEBUG::2015-01-09<br>> >>> = 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package<br>> = >>> ('gluster-swift-proxy',) not found<br>> >>> = Thread-14::DEBUG::2015-01-09<br>> >>> = 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package<br>> = >>> ('gluster-swift-doc',) not found<br>> >>> = Thread-14::DEBUG::2015-01-09<br>> >>> = 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package<br>> = >>> ('gluster-swift-container',) not found<br>> >>> = Thread-14::DEBUG::2015-01-09<br>> >>> = 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package<br>> = >>> ('glusterfs-geo-replication',) not found<br>> = >>><br>> >>><br>> >>> M.<br>> = >>><br>> >>> On 09 Jan 2015, at 11:13, = Punit Dambiwal <<a = href=3D"mailto:hypunit@gmail.com">hypunit@gmail.com</a>> = wrote:<br>> >>><br>> >>> Hi = Kanagaraj,<br>> >>><br>> >>> Please find = the attached logs :-<br>> >>><br>> >>> = Engine Logs :- <a href=3D"http://ur1.ca/jdopt" = target=3D"_blank">http://ur1.ca/jdopt</a><br>> >>> VDSM Logs = :- <a href=3D"http://ur1.ca/jdoq9" = target=3D"_blank">http://ur1.ca/jdoq9</a><br>> >>><br>> = >>><br>> >>><br>> >>> On Thu, Jan 8, = 2015 at 6:05 PM, Kanagaraj <<a = href=3D"mailto:kmayilsa@redhat.com">kmayilsa@redhat.com</a>> = wrote:<br>> >>><br>> >>>> Do you see = any errors in the UI?<br>> >>>><br>> >>>> = Also please provide the engine.log and vdsm.log when the failure<br>> = >>>> occured.<br>> >>>><br>> = >>>> Thanks,<br>> >>>> Kanagaraj<br>> = >>>><br>> >>>><br>> >>>> On = 01/08/2015 02:25 PM, Punit Dambiwal wrote:<br>> = >>>><br>> >>>> Hi Martin,<br>> = >>>><br>> >>>> The steps are below = :-<br>> >>>><br>> >>>> 1. 
Step the = ovirt engine on the one server...<br>> >>>> 2. Installed = centos 7 on 4 host node servers..<br>> >>>> 3. I am using = host node (compute+storage)....now i have added all 4<br>> = >>>> nodes to engine...<br>> >>>> 4. Create = the gluster volume from GUI...<br>> >>>><br>> = >>>> Network :-<br>> >>>> eth0 :- = public network (1G)<br>> >>>> eth1+eth2=3Dbond0=3D VM = public network (1G)<br>> >>>> = eth3+eth4=3Dbond1=3Dovirtmgmt+storage (10G private network)<br>> = >>>><br>> >>>> every hostnode has 24 = bricks=3D24*4(distributed replicated)<br>> >>>><br>> = >>>> Thanks,<br>> >>>> Punit<br>> = >>>><br>> >>>><br>> >>>> On = Thu, Jan 8, 2015 at 3:20 PM, Martin Pavl=C3=ADk <<a = href=3D"mailto:mpavlik@redhat.com">mpavlik@redhat.com</a>><br>> = >>>> wrote:<br>> >>>><br>> = >>>>> Hi Punit,<br>> >>>>><br>> = >>>>> can you please provide also errors from = /var/log/vdsm/vdsm.log and<br>> >>>>> = /var/log/vdsm/vdsmd.log<br>> >>>>><br>> = >>>>> it would be really helpful if you provided exact = steps how to<br>> >>>>> reproduce the problem.<br>> = >>>>><br>> >>>>> regards<br>> = >>>>><br>> >>>>> Martin Pavlik - rhev = QE<br>> >>>>> > On 08 Jan 2015, at 03:06, = Punit Dambiwal <<a = href=3D"mailto:hypunit@gmail.com">hypunit@gmail.com</a>> = wrote:<br>> >>>>> ><br>> >>>>> = > Hi,<br>> >>>>> ><br>> >>>>> = > I try to add gluster volume but it failed...<br>> = >>>>> ><br>> >>>>> > Ovirt :- = 3.5<br>> >>>>> > VDSM :- = vdsm-4.16.7-1.gitdb83943.el7<br>> >>>>> > KVM :- = 1.5.3 - 60.el7_0.2<br>> >>>>> > = libvirt-1.1.1-29.el7_0.4<br>> >>>>> > Glusterfs :- = glusterfs-3.5.3-1.el7<br>> >>>>> ><br>> = >>>>> > Engine Logs :-<br>> >>>>> = ><br>> >>>>> > 2015-01-08 09:57:52,569 = INFO<br>> >>>>> = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> = >>>>> (DefaultQuartzScheduler_Worker-16) Failed to = acquire lock and wait lock<br>> >>>>> EngineLock = [exclusiveLocks=3D key: 00000001-0001-0001-0001-000000000300<br>> = >>>>> value: GLUSTER<br>> >>>>> > , = sharedLocks=3D ]<br>> >>>>> > 2015-01-08 = 09:57:52,609 INFO<br>> >>>>> = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> = >>>>> (DefaultQuartzScheduler_Worker-16) Failed to = acquire lock and wait lock<br>> >>>>> EngineLock = [exclusiveLocks=3D key: 00000001-0001-0001-0001-000000000300<br>> = >>>>> value: GLUSTER<br>> >>>>> > , = sharedLocks=3D ]<br>> >>>>> > 2015-01-08 = 09:57:55,582 INFO<br>> >>>>> = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> = >>>>> (DefaultQuartzScheduler_Worker-16) Failed to = acquire lock and wait lock<br>> >>>>> EngineLock = [exclusiveLocks=3D key: 00000001-0001-0001-0001-000000000300<br>> = >>>>> value: GLUSTER<br>> >>>>> > , = sharedLocks=3D ]<br>> >>>>> > 2015-01-08 = 09:57:55,591 INFO<br>> >>>>> = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> = >>>>> (DefaultQuartzScheduler_Worker-16) Failed to = acquire lock and wait lock<br>> >>>>> EngineLock = [exclusiveLocks=3D key: 00000001-0001-0001-0001-000000000300<br>> = >>>>> value: GLUSTER<br>> >>>>> > , = sharedLocks=3D ]<br>> >>>>> > 2015-01-08 = 09:57:55,596 INFO<br>> >>>>> = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> = >>>>> (DefaultQuartzScheduler_Worker-16) Failed to = acquire lock and wait lock<br>> >>>>> EngineLock = [exclusiveLocks=3D key: 00000001-0001-0001-0001-000000000300<br>> = >>>>> value: GLUSTER<br>> >>>>> > , = sharedLocks=3D ]<br>> >>>>> > 2015-01-08 = 09:57:55,633 INFO<br>> >>>>> = [org.ovirt.engine.core.bll.lock.InMemoryLockManager]<br>> = >>>>> (DefaultQuartzScheduler_Worker-16) Failed to = acquire lock 
and wait lock<br>> >>>>> EngineLock = [exclusiveLocks=3D key: 00000001-0001-0001-0001-000000000300<br>> = >>>>> value: GLUSTER<br>> >>>>> > , = sharedLocks=3D ]<br>> >>>>> > ^C<br>> = >>>>> ><br>> >>>>> ><br>> = >>>>><br>> >>>>><br>> = >>>><br>> >>>><br>> >>> = <216 09-Jan-15.jpg><217 09-Jan-15.jpg><br>> = >>><br>> >>><br>> >>><br>> = >><br>> >><br>> ><br>> = ><br>><o:p></o:p></p></div></div></div><p = class=3DMsoNormal><o:p> </o:p></p></div></div></body></html> ------=_NextPart_000_00B5_01D02F1E.B7FDC3D0--

Hi Donny,

No, I am not using CTDB... it's a totally new deployment...

On Wed, Jan 14, 2015 at 1:50 AM, Donny Davis <donny@cloudspin.me> wrote:
Are you using ctdb? And did you specify Lock=False in /etc/nfsmount.conf?
Can you give a full rundown of the topology? And has this ever worked, or is it a new deployment?
Donny D
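For reference, the setting Donny asks about lives in /etc/nfsmount.conf (part of nfs-utils). A minimal sketch, assuming the stock EL7 layout — the section name comes from nfsmount.conf(5), not from this thread:

    # /etc/nfsmount.conf
    [ NFSMount_Global_Options ]
    # Lock=False disables NLM side-band locking for every NFS mount,
    # the same effect as passing -o nolock on the mount command line
    Lock=False

This only matters if the volume is consumed over NFS; for a pure GlusterFS-native (FUSE) setup it should be irrelevant.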
From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of Punit Dambiwal
Sent: Monday, January 12, 2015 7:18 PM
To: Kanagaraj Mayilsamy
Cc: gluster-users@gluster.org; Kaushal M; users@ovirt.org
Subject: Re: [ovirt-users] Failed to create volume in OVirt with gluster
Hi,
Please find more details on this below... can anybody from gluster help me here?
Gluster CLI Logs :- /var/log/glusterfs/cli.log
[2015-01-13 02:06:23.071969] T [cli.c:264:cli_rpc_notify] 0-glusterfs: got RPC_CLNT_CONNECT
[2015-01-13 02:06:23.072012] T [cli-quotad-client.c:94:cli_quotad_notify] 0-glusterfs: got RPC_CLNT_CONNECT
[2015-01-13 02:06:23.072024] I [socket.c:2344:socket_event_handler] 0-transport: disconnecting now
[2015-01-13 02:06:23.072055] T [cli-quotad-client.c:100:cli_quotad_notify] 0-glusterfs: got RPC_CLNT_DISCONNECT
[2015-01-13 02:06:23.072131] T [rpc-clnt.c:1381:rpc_clnt_record] 0-glusterfs: Auth Info: pid: 0, uid: 0, gid: 0, owner:
[2015-01-13 02:06:23.072176] T [rpc-clnt.c:1238:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 128, payload: 64, rpc hdr: 64
[2015-01-13 02:06:23.072572] T [socket.c:2863:socket_connect] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fed02f15420] (--> /usr/lib64/glusterfs/3.6.1/rpc-transport/socket.so(+0x7293)[0x7fed001a4293] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_submit+0x468)[0x7fed0266df98] (--> /usr/sbin/gluster(cli_submit_request+0xdb)[0x40a9bb] (--> /usr/sbin/gluster(cli_cmd_submit+0x8e)[0x40b7be] ))))) 0-glusterfs: connect () called on transport already connected
[2015-01-13 02:06:23.072616] T [rpc-clnt.c:1573:rpc_clnt_submit] 0-rpc-clnt: submitted request (XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 27) to rpc-transport (glusterfs)
[2015-01-13 02:06:23.072633] D [rpc-clnt-ping.c:231:rpc_clnt_start_ping] 0-glusterfs: ping timeout is 0, returning
[2015-01-13 02:06:23.075930] T [rpc-clnt.c:660:rpc_clnt_reply_init] 0-glusterfs: received rpc message (RPC XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 27) from rpc-transport (glusterfs)
[2015-01-13 02:06:23.075976] D [cli-rpc-ops.c:6548:gf_cli_status_cbk] 0-cli: Received response to status cmd
[2015-01-13 02:06:23.076025] D [cli-cmd.c:384:cli_cmd_submit] 0-cli: Returning 0
[2015-01-13 02:06:23.076049] D [cli-rpc-ops.c:6811:gf_cli_status_volume] 0-cli: Returning: 0
[2015-01-13 02:06:23.076192] D [cli-xml-output.c:84:cli_begin_xml_output] 0-cli: Returning 0
[2015-01-13 02:06:23.076244] D [cli-xml-output.c:131:cli_xml_output_common] 0-cli: Returning 0
[2015-01-13 02:06:23.076256] D [cli-xml-output.c:1375:cli_xml_output_vol_status_begin] 0-cli: Returning 0
[2015-01-13 02:06:23.076437] D [cli-xml-output.c:108:cli_end_xml_output] 0-cli: Returning 0
[2015-01-13 02:06:23.076459] D [cli-xml-output.c:1398:cli_xml_output_vol_status_end] 0-cli: Returning 0
[2015-01-13 02:06:23.076490] I [input.c:36:cli_batch] 0-: Exiting with: 0
Command log :- /var/log/glusterfs/.cmd_log_history
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:10:35.836676] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:16:25.956514] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:17:36.977833] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:21:07.048053] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:26:57.168661] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:28:07.194428] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:30:27.256667] : volume status vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details.
Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details.
Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details.
[2015-01-13 01:34:58.350748] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:36:08.375326] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:36:08.386470] : volume status vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details.
Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details.
Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details.
[2015-01-13 01:42:59.524215] : volume stop vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details.
Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details.
Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details.
[2015-01-13 01:45:10.550659] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:46:10.656802] : volume status all tasks : SUCCESS
[2015-01-13 01:51:02.796031] : volume status all tasks : SUCCESS
[2015-01-13 01:52:02.897804] : volume status all tasks : SUCCESS
[2015-01-13 01:55:25.841070] : system:: uuid get : SUCCESS
[2015-01-13 01:55:26.752084] : system:: uuid get : SUCCESS
[2015-01-13 01:55:32.499049] : system:: uuid get : SUCCESS
[2015-01-13 01:55:38.716907] : system:: uuid get : SUCCESS
[2015-01-13 01:56:52.905899] : volume status all tasks : SUCCESS
[2015-01-13 01:58:53.109613] : volume status all tasks : SUCCESS
[2015-01-13 02:03:26.769430] : system:: uuid get : SUCCESS
[2015-01-13 02:04:22.859213] : volume status all tasks : SUCCESS
[2015-01-13 02:05:22.970393] : volume status all tasks : SUCCESS
[2015-01-13 02:06:23.075823] : volume status all tasks : SUCCESS
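A side note on reading the command log above: the all-zeros UUID in the "Staging failed" lines suggests glusterd could not match the failing response to a known peer UUID, and the "Locking failed on cpu02/cpu03/cpu04" lines mean the cluster-wide lock phase never completed on those peers. A quick sanity check on each node, using standard gluster 3.x paths (a sketch, not commands taken from this thread):

    # all peers should show "Peer in Cluster (Connected)"
    gluster peer status

    # this node's own UUID
    cat /var/lib/glusterd/glusterd.info

    # UUIDs this node has recorded for its peers (one file per peer)
    ls /var/lib/glusterd/peers/

If a host was reinstalled and came back with a new UUID, a stale entry under /var/lib/glusterd/peers/ on the other nodes is a common cause of exactly this kind of lock/staging failure.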
On Mon, Jan 12, 2015 at 10:53 PM, Kanagaraj Mayilsamy <kmayilsa@redhat.com> wrote:
I can see the failures in the glusterd log.

Can someone from glusterfs dev please help with this?
Thanks, Kanagaraj
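For anyone following along: on EL7 the glusterd log Kanagaraj refers to normally lives at /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on each node. Something like the sketch below pulls out the recent error and warning lines (the filename can differ if glusterd was started with a non-default volfile):

    # E = error, W = warning in gluster's log format
    grep -E '\] (E|W) \[' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 50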
From: "Punit Dambiwal" <hypunit@gmail.com> To: "Kanagaraj" <kmayilsa@redhat.com> Cc: "Martin Pavlík" <mpavlik@redhat.com>, "Vijay Bellur" < vbellur@redhat.com>, "Kaushal M" <kshlmster@gmail.com>, users@ovirt.org, gluster-users@gluster.org Sent: Monday, January 12, 2015 3:36:43 PM Subject: Re: Failed to create volume in OVirt with gluster
Hi Kanagaraj,
Please find the logs from here :- http://ur1.ca/jeszc
On Mon, Jan 12, 2015 at 1:02 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Looks like there are some failures in gluster. Can you send the log output from the glusterd log file on the relevant hosts?
Thanks, Kanagaraj
On 01/12/2015 10:24 AM, Punit Dambiwal wrote:
Hi,
Is there anyone from gluster who can help me here?
Engine logs :-
2015-01-12 12:50:33,841 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:34,725 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,824 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,853 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,866 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:37,751 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,849 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,890 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:40,776 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,903 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,916 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:43,771 INFO [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] (ajp--127.0.0.1-8702-1) [330ace48] FINISH, CreateGlusterVolumeVDSCommand, log id: 303e70a4
2015-01-12 12:50:43,780 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID: 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event ID: -1, Message: Creation of Gluster Volume vol01 failed.
2015-01-12 12:50:43,785 INFO [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
On Sun, Jan 11, 2015 at 6:48 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
unfortunately I'm not that good with gluster, I was just following the obvious clue from the log. Could you try on the nodes if the packages are even available for installation?
yum install gluster-swift gluster-swift-object gluster-swift-plugin gluster-swift-account gluster-swift-proxy gluster-swift-doc gluster-swift-container glusterfs-geo-replication
if not, you could try to get them from the official gluster repo.
http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
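A sketch of how that repo file would be put in place on the CentOS 7 nodes (the destination path is the usual yum convention, not something stated in the thread):

    curl -o /etc/yum.repos.d/glusterfs-epel.repo \
        http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
    yum install glusterfs-geo-replication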
HTH
M.
On 10 Jan 2015, at 04:35, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Martin,
I installed gluster from the ovirt repo... is it required to install those packages manually?
On Fri, Jan 9, 2015 at 7:19 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
can you verify that the nodes contain the gluster packages from the following log?
Thread-14::DEBUG::2015-01-09 18:06:28,823::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,825::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,826::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found
M.
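Martin's check can also be run directly on each node; a sketch that loops over the same package names vdsm probes for:

    for p in gluster-swift gluster-swift-object gluster-swift-plugin \
             gluster-swift-account gluster-swift-proxy gluster-swift-doc \
             gluster-swift-container glusterfs-geo-replication; do
        # rpm -q exits non-zero when a package is not installed
        rpm -q "$p" > /dev/null 2>&1 || echo "MISSING: $p"
    done

Every name vdsm logged as "not found" should come back as MISSING.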
On 09 Jan 2015, at 11:13, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kanagaraj,
Please find the attached logs :-
Engine Logs :- http://ur1.ca/jdopt VDSM Logs :- http://ur1.ca/jdoq9
On Thu, Jan 8, 2015 at 6:05 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Do you see any errors in the UI?
Also please provide the engine.log and vdsm.log from when the failure occurred.
Thanks, Kanagaraj
On 01/08/2015 02:25 PM, Punit Dambiwal wrote:
Hi Martin,
The steps are below :-
1. Set up the ovirt engine on one server...
2. Installed centos 7 on 4 host node servers..
3. I am using host nodes (compute+storage)... now I have added all 4 nodes to the engine...
4. Created the gluster volume from the GUI...
Network :-
eth0 :- public network (1G)
eth1+eth2=bond0= VM public network (1G)
eth3+eth4=bond1=ovirtmgmt+storage (10G private network)
every host node has 24 bricks (24*4 = 96 bricks total, distributed replicated)
Thanks, Punit
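For context, the CLI equivalent of what the oVirt GUI attempts for a distributed-replicated volume on this topology would look roughly like the sketch below. The cpu01 hostname, the brick paths, and the replica count of 2 are illustrative assumptions — the thread only names cpu02–cpu04 and never states the replica count:

    # with replica 2, consecutive bricks form replica pairs;
    # the pairs are then distributed across the volume
    gluster volume create vol01 replica 2 \
        cpu01.zne01.hkg1.stack.com:/bricks/b01/vol01 cpu02.zne01.hkg1.stack.com:/bricks/b01/vol01 \
        cpu03.zne01.hkg1.stack.com:/bricks/b01/vol01 cpu04.zne01.hkg1.stack.com:/bricks/b01/vol01
    # ...repeated across the remaining 23 bricks on each host
    gluster volume start vol01

Since this create is failing in the lock/staging phase before any bricks are touched, the peer state checks above are the first thing to rule out.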

From: "Punit Dambiwal" <hypunit@gmail.com> To: "Kanagaraj" <kmayilsa@redhat.com> Cc: "Martin Pavl=C3=ADk" <mpavlik@redhat.com>, "Vijay Bellur" = <vbellur@redhat.com>, "Kaushal M" <kshlmster@gmail.com>, users@ovirt.org, gluster-users@gluster.org Sent: Monday, January 12, 2015 3:36:43 PM Subject: Re: Failed to create volume in OVirt with gluster
Hi Kanagaraj,
Please find the logs from here :- http://ur1.ca/jeszc
[image: Inline image 1]
[image: Inline image 2]
On Mon, Jan 12, 2015 at 1:02 PM, Kanagaraj <kmayilsa@redhat.com> = wrote:
Looks like there are some failures in gluster. Can you send the log output from glusterd log file from the relevant = hosts?
Thanks, Kanagaraj
On 01/12/2015 10:24 AM, Punit Dambiwal wrote:
Hi,
Is there any one from gluster can help me here :-
Engine logs :-
2015-01-12 12:50:33,841 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:34,725 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,824 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,853 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,866 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:37,751 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,849 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,890 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:40,776 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,903 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,916 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:43,771 INFO [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] (ajp--127.0.0.1-8702-1) [330ace48] FINISH, CreateGlusterVolumeVDSCommand, log id: 303e70a4
2015-01-12 12:50:43,780 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID: 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event ID: -1, Message: Creation of Gluster Volume vol01 failed.
2015-01-12 12:50:43,785 INFO [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
[image: Inline image 2]
On Sun, Jan 11, 2015 at 6:48 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
unfortunately I'm not that good with gluster, I was just following the obvious clue from the log. Could you try on the nodes if the packages are even available for installation
yum install gluster-swift gluster-swift-object gluster-swift-plugin gluster-swift-account gluster-swift-proxy gluster-swift-doc gluster-swift-container glusterfs-geo-replication
if not, you could try to get them from the official gluster repo: http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
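For example, a minimal sketch of wiring up that repo and checking before installing (assuming CentOS 7 hosts and that the repo URL above is still current):

# drop the repo definition where yum can see it
curl -o /etc/yum.repos.d/glusterfs-epel.repo http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# confirm the packages are now visible, then run the install above
yum clean metadata
yum list available 'gluster-swift*' glusterfs-geo-replication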
HTH
M.
On 10 Jan 2015, at 04:35, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Martin,
I installed gluster from ovirt repo....is it required to install those packages manually ??
On Fri, Jan 9, 2015 at 7:19 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
can you verify that the nodes contain the gluster packages from the following log?
Thread-14::DEBUG::2015-01-09 18:06:28,823::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,825::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,826::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found
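A quick way to confirm the same thing directly on a node is to query rpm for exactly those names (a minimal sketch; the package list is taken from the log above):

rpm -q gluster-swift gluster-swift-object gluster-swift-plugin gluster-swift-account gluster-swift-proxy gluster-swift-doc gluster-swift-container glusterfs-geo-replication

Any missing package is reported as "is not installed", which matches what VDSM is logging here.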
M.
On 09 Jan 2015, at 11:13, Punit Dambiwal <hypunit@gmail.com> = wrote:
Hi Kanagaraj,
Please find the attached logs :-
Engine Logs :- http://ur1.ca/jdopt
VDSM Logs :- http://ur1.ca/jdoq9
On Thu, Jan 8, 2015 at 6:05 PM, Kanagaraj <kmayilsa@redhat.com> = wrote:
Do you see any errors in the UI?
Also please provide the engine.log and vdsm.log from when the failure occurred.
Thanks, Kanagaraj
On 01/08/2015 02:25 PM, Punit Dambiwal wrote:
Hi Martin,
The steps are below :-
1. Setup the ovirt engine on the one server...
2. Installed centos 7 on 4 host node servers..
3. I am using host nodes (compute+storage)....now I have added all 4 nodes to the engine...
4. Create the gluster volume from the GUI...
Network :-
eth0 :- public network (1G)
eth1+eth2=bond0= VM public network (1G)
eth3+eth4=bond1=ovirtmgmt+storage (10G private network)
every host node has 24 bricks = 24*4 (distributed replicated)
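For reference, a distributed-replicate layout like that maps onto a volume create call along these lines (a sketch only, not the exact command oVirt issues; the replica count, the cpu01 hostname and the /bricks/bNN paths are assumptions, and only the first two replica pairs of the 24*4 bricks are shown):

gluster volume create vol01 replica 2 cpu01.zne01.hkg1.stack.com:/bricks/b01 cpu02.zne01.hkg1.stack.com:/bricks/b01 cpu03.zne01.hkg1.stack.com:/bricks/b02 cpu04.zne01.hkg1.stack.com:/bricks/b02
gluster volume start vol01

With replica 2, each consecutive pair of bricks forms one replica set, so bricks should be ordered to pair across hosts rather than within one host.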
Thanks, Punit
On Thu, Jan 8, 2015 at 3:20 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
can you please provide also errors from /var/log/vdsm/vdsm.log and /var/log/vdsm/vdsmd.log
it would be really helpful if you provided exact steps how to reproduce the problem.
regards
Martin Pavlik - rhev QE

Hi Donny,
I am not using gluster for the NFS mount...no volume has been created because of those errors....
On Wed, Jan 14, 2015 at 9:47 AM, Donny Davis <donny@cloudspin.me> wrote:
And
rpcbind is running
can you do a regular nfs mount of the gluster volume?
gluster volume info {your volume name here}
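In practice that boils down to something like the following on one of the hosts (a sketch; the vol01 name comes from the engine log above, and the server name and mount point are placeholders):

gluster volume info vol01
mkdir -p /mnt/nfstest
mount -t nfs -o vers=3,nolock cpu02.zne01.hkg1.stack.com:/vol01 /mnt/nfstest

Gluster's built-in NFS server only speaks NFSv3, hence vers=3.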
Just gathering intel to hopefully provide a solution. I just deployed gluster with hosted engine today, and I did get some of the same errors as you when I was bringing everything up
Did you follow a guide, or are you carving your own?
Are you using swift for anything… that is usually for openstack to my knowledge? I guess you could use it for ovirt, but I didn’t
Donny D
*From:* Punit Dambiwal [mailto:hypunit@gmail.com]
*Sent:* Tuesday, January 13, 2015 6:41 PM
*To:* Donny Davis
*Cc:* users@ovirt.org
*Subject:* Re: [ovirt-users] Failed to create volume in OVirt with gluster
Hi Donny,
No I am not using CTDB....it's totally new deployment...
On Wed, Jan 14, 2015 at 1:50 AM, Donny Davis <donny@cloudspin.me> wrote:
Are you using CTDB? And did you specify Lock=False in /etc/nfsmount.conf?
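(For reference, a minimal sketch of that setting, assuming the stock CentOS 7 /etc/nfsmount.conf layout:)

# /etc/nfsmount.conf -- global options applied to every NFS mount
[ NFSMount_Global_Options ]
# Lock=False is the nfsmount.conf equivalent of the "nolock" mount option,
# commonly used with CTDB-managed gluster NFS exports
Lock=False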
Can you give a full rundown of the topology, and has this ever worked or is it a new deployment?
Donny D
*From:* users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] *On Behalf Of *Punit Dambiwal *Sent:* Monday, January 12, 2015 7:18 PM *To:* Kanagaraj Mayilsamy *Cc:* gluster-users@gluster.org; Kaushal M; users@ovirt.org *Subject:* Re: [ovirt-users] Failed to create volume in OVirt with gluster
Hi,
Please find more details on this below....can anybody from gluster help me here :-
Gluster CLI Logs :- /var/log/glusterfs/cli.log
[2015-01-13 02:06:23.071969] T [cli.c:264:cli_rpc_notify] 0-glusterfs: got RPC_CLNT_CONNECT
[2015-01-13 02:06:23.072012] T [cli-quotad-client.c:94:cli_quotad_notify] 0-glusterfs: got RPC_CLNT_CONNECT
[2015-01-13 02:06:23.072024] I [socket.c:2344:socket_event_handler] 0-transport: disconnecting now
[2015-01-13 02:06:23.072055] T [cli-quotad-client.c:100:cli_quotad_notify] 0-glusterfs: got RPC_CLNT_DISCONNECT
[2015-01-13 02:06:23.072131] T [rpc-clnt.c:1381:rpc_clnt_record] 0-glusterfs: Auth Info: pid: 0, uid: 0, gid: 0, owner:
[2015-01-13 02:06:23.072176] T [rpc-clnt.c:1238:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 128, payload: 64, rpc hdr: 64
[2015-01-13 02:06:23.072572] T [socket.c:2863:socket_connect] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fed02f15420] (--> /usr/lib64/glusterfs/3.6.1/rpc-transport/socket.so(+0x7293)[0x7fed001a4293] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_submit+0x468)[0x7fed0266df98] (--> /usr/sbin/gluster(cli_submit_request+0xdb)[0x40a9bb] (--> /usr/sbin/gluster(cli_cmd_submit+0x8e)[0x40b7be] ))))) 0-glusterfs: connect () called on transport already connected
[2015-01-13 02:06:23.072616] T [rpc-clnt.c:1573:rpc_clnt_submit] 0-rpc-clnt: submitted request (XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 27) to rpc-transport (glusterfs)
[2015-01-13 02:06:23.072633] D [rpc-clnt-ping.c:231:rpc_clnt_start_ping] 0-glusterfs: ping timeout is 0, returning
[2015-01-13 02:06:23.075930] T [rpc-clnt.c:660:rpc_clnt_reply_init] 0-glusterfs: received rpc message (RPC XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 27) from rpc-transport (glusterfs)
[2015-01-13 02:06:23.075976] D [cli-rpc-ops.c:6548:gf_cli_status_cbk] 0-cli: Received response to status cmd
[2015-01-13 02:06:23.076025] D [cli-cmd.c:384:cli_cmd_submit] 0-cli: Returning 0
[2015-01-13 02:06:23.076049] D [cli-rpc-ops.c:6811:gf_cli_status_volume] 0-cli: Returning: 0
[2015-01-13 02:06:23.076192] D [cli-xml-output.c:84:cli_begin_xml_output] 0-cli: Returning 0
[2015-01-13 02:06:23.076244] D [cli-xml-output.c:131:cli_xml_output_common] 0-cli: Returning 0
[2015-01-13 02:06:23.076256] D [cli-xml-output.c:1375:cli_xml_output_vol_status_begin] 0-cli: Returning 0
[2015-01-13 02:06:23.076437] D [cli-xml-output.c:108:cli_end_xml_output] 0-cli: Returning 0
[2015-01-13 02:06:23.076459] D [cli-xml-output.c:1398:cli_xml_output_vol_status_end] 0-cli: Returning 0
[2015-01-13 02:06:23.076490] I [input.c:36:cli_batch] 0-: Exiting with: 0
Command log :- /var/log/glusterfs/.cmd_log_history
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:10:35.836676] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:16:25.956514] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:17:36.977833] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:21:07.048053] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:26:57.168661] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:28:07.194428] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:30:27.256667] : volume status vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details.
Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details.
Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details.
[2015-01-13 01:34:58.350748] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:36:08.375326] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:36:08.386470] : volume status vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details.
Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details.
Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details.
[2015-01-13 01:42:59.524215] : volume stop vol01 : FAILED : Locking failed on cpu02.zne01.hkg1.stack.com. Please check log file for details.
Locking failed on cpu03.zne01.hkg1.stack.com. Please check log file for details.
Locking failed on cpu04.zne01.hkg1.stack.com. Please check log file for details.
[2015-01-13 01:45:10.550659] : volume status all tasks : FAILED : Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
Staging failed on 00000000-0000-0000-0000-000000000000. Please check log file for details.
[2015-01-13 01:46:10.656802] : volume status all tasks : SUCCESS
[2015-01-13 01:51:02.796031] : volume status all tasks : SUCCESS
[2015-01-13 01:52:02.897804] : volume status all tasks : SUCCESS
[2015-01-13 01:55:25.841070] : system:: uuid get : SUCCESS
[2015-01-13 01:55:26.752084] : system:: uuid get : SUCCESS
[2015-01-13 01:55:32.499049] : system:: uuid get : SUCCESS
[2015-01-13 01:55:38.716907] : system:: uuid get : SUCCESS
[2015-01-13 01:56:52.905899] : volume status all tasks : SUCCESS
[2015-01-13 01:58:53.109613] : volume status all tasks : SUCCESS
[2015-01-13 02:03:26.769430] : system:: uuid get : SUCCESS
[2015-01-13 02:04:22.859213] : volume status all tasks : SUCCESS
[2015-01-13 02:05:22.970393] : volume status all tasks : SUCCESS
[2015-01-13 02:06:23.075823] : volume status all tasks : SUCCESS
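(The all-zero peer UUIDs in the staging failures above usually point at a glusterd peer problem; a few generic checks worth running on every node — paths assume stock gluster 3.x on CentOS 7:)

# all other nodes should show "Peer in Cluster (Connected)"
gluster peer status
# this node's own UUID -- should be unique and non-zero
cat /var/lib/glusterd/glusterd.info
# the glusterd log where these staging/locking failures are recorded
tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
# a stale cluster-wide lock can sometimes be cleared by restarting glusterd
systemctl restart glusterd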
On Mon, Jan 12, 2015 at 10:53 PM, Kanagaraj Mayilsamy <kmayilsa@redhat.com> wrote:
I can see the failures in the glusterd log.
Can someone from glusterfs dev please help on this?
Thanks, Kanagaraj
From: "Punit Dambiwal" <hypunit@gmail.com> To: "Kanagaraj" <kmayilsa@redhat.com> Cc: "Martin Pavlík" <mpavlik@redhat.com>, "Vijay Bellur" < vbellur@redhat.com>, "Kaushal M" <kshlmster@gmail.com>, users@ovirt.org, gluster-users@gluster.org Sent: Monday, January 12, 2015 3:36:43 PM Subject: Re: Failed to create volume in OVirt with gluster
Hi Kanagaraj,
Please find the logs from here :- http://ur1.ca/jeszc
[image: Inline image 1]
[image: Inline image 2]
On Mon, Jan 12, 2015 at 1:02 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Looks like there are some failures in gluster. Can you send the log output from the glusterd log file on the relevant hosts?
Thanks, Kanagaraj
On 01/12/2015 10:24 AM, Punit Dambiwal wrote:
Hi,
Is there anyone from gluster who can help me here :-
Engine logs :-
2015-01-12 12:50:33,841 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:34,725 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,824 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,853 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:36,866 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:37,751 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,849 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:39,890 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:40,776 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,878 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,903 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:42,916 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
2015-01-12 12:50:43,771 INFO [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] (ajp--127.0.0.1-8702-1) [330ace48] FINISH, CreateGlusterVolumeVDSCommand, log id: 303e70a4
2015-01-12 12:50:43,780 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID: 896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event ID: -1, Message: Creation of Gluster Volume vol01 failed.
2015-01-12 12:50:43,785 INFO [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-1) [330ace48] Lock freed to object EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-000000000300 value: GLUSTER , sharedLocks= ]
[image: Inline image 2]
On Sun, Jan 11, 2015 at 6:48 PM, Martin Pavlík <mpavlik@redhat.com> wrote:
Hi Punit,
unfortunately I'm not that good with gluster, I was just following the obvious clue from the log. Could you try on the nodes if the packages are even available for installation:
yum install gluster-swift gluster-swift-object gluster-swift-plugin gluster-swift-account gluster-swift-proxy gluster-swift-doc gluster-swift-container glusterfs-geo-replication
if not, you could try to get them from the official gluster repo.
http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-ep...
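(A non-destructive way to check whether those packages are even visible to yum, without installing anything — just a sketch:)

# what the enabled repos offer
yum list available 'gluster-swift*' glusterfs-geo-replication
# what is already installed (prints "not installed" for each missing one)
rpm -q gluster-swift gluster-swift-object gluster-swift-plugin \
      gluster-swift-account gluster-swift-proxy gluster-swift-doc \
      gluster-swift-container glusterfs-geo-replication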
HTH
M.
On 10 Jan 2015, at 04:35, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Martin,
I installed gluster from the ovirt repo....is it required to install those packages manually??
On Fri, Jan 9, 2015 at 7:19 PM, Martin Pavlík <mpavlik@redhat.com>
wrote:
Hi Punit,
can you verify that the nodes contain the gluster packages from the following log?
Thread-14::DEBUG::2015-01-09 18:06:28,823::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,825::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,826::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,829::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
Thread-14::DEBUG::2015-01-09 18:06:28,830::caps::716::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found
M.
On 09 Jan 2015, at 11:13, Punit Dambiwal <hypunit@gmail.com>
wrote:
Hi Kanagaraj,
Please find the attached logs :-
Engine Logs :- http://ur1.ca/jdopt VDSM Logs :- http://ur1.ca/jdoq9
On Thu, Jan 8, 2015 at 6:05 PM, Kanagaraj <kmayilsa@redhat.com>
wrote:
Do you see any errors in the UI?
Also please provide the engine.log and vdsm.log from when the failure occurred.
Thanks, Kanagaraj
On 01/08/2015 02:25 PM, Punit Dambiwal wrote:
Hi Martin,
The steps are below :-
1. Set up the ovirt engine on one server...
2. Installed centos 7 on 4 host node servers..
3. I am using host nodes (compute+storage)....now i have added all 4 nodes to the engine...
4. Created the gluster volume from the GUI...

Network :-
eth0 :- public network (1G)
eth1+eth2=bond0= VM public network (1G)
eth3+eth4=bond1=ovirtmgmt+storage (10G private network)

every host node has 24 bricks, 24*4 = 96 bricks total (distributed replicated)
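(For context, the CLI equivalent of step 4 would look roughly like the sketch below — the hostnames, brick paths and replica count here are illustrative assumptions, not taken from this setup:)

# 4 bricks with replica 2 gives a 2x2 distributed-replicate volume;
# the real setup would list 24 bricks per node in the same pattern
gluster volume create vol01 replica 2 \
    cpu01:/gluster/brick01 cpu02:/gluster/brick01 \
    cpu03:/gluster/brick01 cpu04:/gluster/brick01
gluster volume start vol01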
Thanks, Punit

Hi,

Can anyone help me here??

Hi Atin,

It seems glusterfs is not a good and stable product for storage use....I am still facing the same issue...the ovirt wiki says ovirt has native support for glusterfs, but even when I try to use it from ovirt it fails, and I have kept trying to make it work for the last 7 days...it still fails and no one in the community has a clue about it....maybe it's a bug in gluster 3.6.1, but even when I try with gluster 3.5.3....it is the same....

Hope someone from gluster can help me get rid of those errors and make it work..

Thanks,
Punit
participants (6)
- Atin Mukherjee
- Donny Davis
- Kanagaraj
- Kanagaraj Mayilsamy
- Martin Pavlík
- Punit Dambiwal