Adding gluster ml
On Mon, Mar 4, 2019 at 7:17 AM Guillaume Pavese
<guillaume.pavese(a)interactiv-group.com> wrote:
>
> I got that too, so I upgraded to gluster6-rc0, but still, this morning one engine brick
is down:
>
> [2019-03-04 01:33:22.492206] E [MSGID: 101191] [event-epoll.c:765:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
> [2019-03-04 01:38:34.601381] I [addr.c:54:compare_addr_and_update] 0-/gluster_bricks/engine/engine: allowed = "*", received addr = "10.199.211.5"
> [2019-03-04 01:38:34.601410] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 9e360b5b-34d3-4076-bc7e-ed78e4e0dc01
> [2019-03-04 01:38:34.601421] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-engine-server: accepted client from CTX_ID:f7603ec6-9914-408b-85e6-e64e9844e326-GRAPH_ID:0-PID:300490-HOST:ps-inf-int-kvm-fr-305-210.hostics.fr-PC_NAME:engine-client-0-RECON_NO:-0 (version: 6.0rc0) with subvol /gluster_bricks/engine/engine
> [2019-03-04 01:38:34.610400] I [MSGID: 115036] [server.c:498:server_rpc_notify] 0-engine-server: disconnecting connection from CTX_ID:f7603ec6-9914-408b-85e6-e64e9844e326-GRAPH_ID:0-PID:300490-HOST:ps-inf-int-kvm-fr-305-210.hostics.fr-PC_NAME:engine-client-0-RECON_NO:-0
> [2019-03-04 01:38:34.610531] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-engine-server: Shutting down connection CTX_ID:f7603ec6-9914-408b-85e6-e64e9844e326-GRAPH_ID:0-PID:300490-HOST:ps-inf-int-kvm-fr-305-210.hostics.fr-PC_NAME:engine-client-0-RECON_NO:-0
> [2019-03-04 01:38:34.610574] E [MSGID: 101191] [event-epoll.c:765:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
> [2019-03-04 01:39:18.520347] I [addr.c:54:compare_addr_and_update] 0-/gluster_bricks/engine/engine: allowed = "*", received addr = "10.199.211.5"
> [2019-03-04 01:39:18.520373] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 9e360b5b-34d3-4076-bc7e-ed78e4e0dc01
> [2019-03-04 01:39:18.520383] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-engine-server: accepted client from CTX_ID:f3be82ea-6340-4bd4-afb3-aa9db432f779-GRAPH_ID:0-PID:300885-HOST:ps-inf-int-kvm-fr-305-210.hostics.fr-PC_NAME:engine-client-0-RECON_NO:-0 (version: 6.0rc0) with subvol /gluster_bricks/engine/engine
> [2019-03-04 01:39:19.711947] I [MSGID: 115036] [server.c:498:server_rpc_notify] 0-engine-server: disconnecting connection from CTX_ID:f3be82ea-6340-4bd4-afb3-aa9db432f779-GRAPH_ID:0-PID:300885-HOST:ps-inf-int-kvm-fr-305-210.hostics.fr-PC_NAME:engine-client-0-RECON_NO:-0
> [2019-03-04 01:39:19.712431] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-engine-server: Shutting down connection CTX_ID:f3be82ea-6340-4bd4-afb3-aa9db432f779-GRAPH_ID:0-PID:300885-HOST:ps-inf-int-kvm-fr-305-210.hostics.fr-PC_NAME:engine-client-0-RECON_NO:-0
> [2019-03-04 01:39:19.712484] E [MSGID: 101191] [event-epoll.c:765:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
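[Editor's note: the brick log entries quoted above share a fixed prefix of the form "[timestamp] LEVEL [MSGID: nnn] ...". When triaging a large brick log for the "Failed to dispatch handler" errors discussed in this thread, a small filter script can help. This is a hypothetical sketch, not part of the original thread; the pattern is derived only from the excerpt above.]

```python
import re

# Gluster brick log lines begin with "[timestamp] LEVEL [MSGID: nnn] ..."
# (the MSGID field is optional; see the addr.c lines in the excerpt above).
LINE_RE = re.compile(
    r"^\[(?P<ts>[^\]]+)\] (?P<level>[EWI]) (?:\[MSGID: (?P<msgid>\d+)\] )?"
)

def errors(lines):
    """Yield (timestamp, msgid) for E-level entries only."""
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group("level") == "E":
            yield m.group("ts"), m.group("msgid")

sample = [
    '[2019-03-04 01:33:22.492206] E [MSGID: 101191] [event-epoll.c:765:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler',
    '[2019-03-04 01:38:34.601381] I [addr.c:54:compare_addr_and_update] 0-/gluster_bricks/engine/engine: allowed = "*", received addr = "10.199.211.5"',
]
print(list(errors(sample)))  # only the E-level entry survives
```

Running it against a full brick log (`for ts, msgid in errors(open(path)): ...`) makes it easy to see whether the epoll dispatch errors cluster around the disconnect times.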
>
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Mon, Mar 4, 2019 at 3:56 AM Endre Karlson <endre.karlson(a)gmail.com> wrote:
>>
>> I have tried bumping to 5.4 now and am still getting a lot of "Failed to dispatch
handler" errors in the logs; any ideas, guys?
>>
>> On Sun, Mar 3, 2019 at 09:03, Guillaume Pavese
<guillaume.pavese(a)interactiv-group.com> wrote:
>>>
>>> Gluster 5.4 has been released but is not yet in the official repository.
>>> If, like me, you cannot wait for the official release of Gluster 5.4 with the
fixes for the instability bugs (hopefully planned for around March 12), you can use the
following repository:
>>>
>>> For Gluster 5.4-1:
>>>
>>> #/etc/yum.repos.d/Gluster5-Testing.repo
>>> [Gluster5-Testing]
>>> name=Gluster5-Testing $basearch
>>> baseurl=https://cbs.centos.org/repos/storage7-gluster-5-testing/os/$basea...
>>> enabled=1
>>> #metadata_expire=60m
>>> gpgcheck=0
>>>
>>>
>>> If you are adventurous ;), Gluster 6-rc0:
>>>
>>> #/etc/yum.repos.d/Gluster6-Testing.repo
>>> [Gluster6-Testing]
>>> name=Gluster6-Testing $basearch
>>> baseurl=https://cbs.centos.org/repos/storage7-gluster-6-testing/os/$basea...
>>> enabled=1
>>> #metadata_expire=60m
>>> gpgcheck=0
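[Editor's note: yum `.repo` files like the two above are plain INI, so they can be sanity-checked before being dropped into /etc/yum.repos.d. The sketch below is not from the original thread, and the `baseurl` in it is a placeholder, since the full URLs above are truncated in the archive.]

```python
import configparser

# Sketch: validate a .repo file's INI structure before installing it.
# The baseurl below is a placeholder, NOT the (truncated) URL from the post.
repo_text = """\
[Gluster5-Testing]
name=Gluster5-Testing $basearch
baseurl=https://example.invalid/storage7-gluster-5-testing/os/
enabled=1
gpgcheck=0
"""

# Disable interpolation so values like $basearch are kept literally for yum.
cfg = configparser.ConfigParser(interpolation=None)
cfg.read_string(repo_text)
repo = cfg["Gluster5-Testing"]
print(repo.getboolean("enabled"), repo.getboolean("gpgcheck"))
```

Note that `gpgcheck=0` means yum will install unsigned packages from the testing repo without verification, which is one more reason to treat these builds as pre-release.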
>>>
>>>
>>> GLHF
>>>
>>> Guillaume Pavese
>>> Ingénieur Système et Réseau
>>> Interactiv-Group
>>>
>>>
>>> On Sun, Mar 3, 2019 at 6:16 AM Endre Karlson <endre.karlson(a)gmail.com>
wrote:
>>>>
>>>> Hi, should we downgrade / reinstall our cluster? We have a 4-node cluster that's
breaking apart daily due to the issues with GlusterFS after upgrading from 4.2.8, which
was rock solid. I am wondering why 4.3 was released as a stable version at all??
**FRUSTRATION**
>>>>
>>>> Endre
>>>> _______________________________________________
>>>> Users mailing list -- users(a)ovirt.org
>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3TJKJGGWCAN...
>