[ovirt-users] Gluster command [<UNKNOWN>] failed on server - SOLVED

suporte at logicworks.pt suporte at logicworks.pt
Tue May 12 13:59:12 UTC 2015


Thanks Sahina, you were right. vdsm-gluster was not installed. Weird, I don't remember removing it, but after installing it I now have the 2 nodes green. 
Thanks a lot 
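
For anyone hitting this later, the fix boiled down to roughly the following on the node (a sketch - installing vdsm-gluster is what the thread identified; the vdsmd restart and host re-activation steps are assumptions about a stock CentOS 7.1 / oVirt 3.5 setup): 

# rpm -q vdsm-gluster || yum install -y vdsm-gluster 
# systemctl restart vdsmd 

and then activate the host again from the engine web UI. 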

----- Original Message -----

From: "Sahina Bose" <sabose at redhat.com> 
To: suporte at logicworks.pt, "Daniel Helgenberger" <daniel.helgenberger at m-box.de> 
Cc: users at ovirt.org 
Sent: Tuesday, May 12, 2015 11:45:53 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 

Is vdsm-gluster installed on your node? 

From the logs, it seems that it is not. 

On 05/12/2015 03:02 PM, suporte at logicworks.pt wrote: 




This is the engine log: 
2015-05-12 10:27:44,012 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-2) [76c5a7e7] Lock Acquired to object EngineLock [exclusiveLocks= key: b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS 
, sharedLocks= ] 
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Running command: ActivateVdsCommand internal: false. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDSAction group MANIPULATE_HOST with role type ADMIN 
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Before acquiring lock in order to prevent monitoring for host ovserver1 from data-center Default 
2015-05-12 10:27:44,186 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock acquired, from now a monitoring of host will be skipped for host ovserver1 from data-center Default 
2015-05-12 10:27:44,189 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START, SetVdsStatusVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, status=Unassigned, nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: dca9241 
2015-05-12 10:27:44,236 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH, SetVdsStatusVDSCommand, log id: dca9241 
2015-05-12 10:27:44,320 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] START, SetHaMaintenanceModeVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 3106a21a 
2015-05-12 10:27:44,324 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetHaMaintenanceModeVDSCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] FINISH, SetHaMaintenanceModeVDSCommand, log id: 3106a21a 
2015-05-12 10:27:44,324 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Activate finished. Lock released. Monitoring can run now for host ovserver1 from data-center Default 
2015-05-12 10:27:44,369 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Correlation ID: 76c5a7e7, Job ID: 41492531-353a-41e7-96ab-ca4a09651fbc, Call Stack: null, Custom Event ID: -1, Message: Host ovserver1 was activated by admin at internal. 
2015-05-12 10:27:44,411 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-40) [76c5a7e7] Lock freed to object EngineLock [exclusiveLocks= key: b505a91a-38b2-48c9-a161-06f1360a3d6f value: VDS 
, sharedLocks= ] 
2015-05-12 10:27:45,047 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] START, GetHardwareInfoVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, vds=Host[ovserver1,b505a91a-38b2-48c9-a161-06f1360a3d6f]), log id: 633e992b 
2015-05-12 10:27:45,051 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-51) [4d2b49f] FINISH, GetHardwareInfoVDSCommand, log id: 633e992b 
2015-05-12 10:27:45,052 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-51) [4d2b49f] Host ovserver1 is running with disabled SELinux. 
2015-05-12 10:27:45,137 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS 
2015-05-12 10:27:45,139 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] START, GlusterServersListVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f), log id: 770f2d6e 
2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected return value: StatusForXmlRpc [mCode=-32601, mMessage=The method does not exist / is not available.] 
2015-05-12 10:27:45,142 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Unexpected return value: StatusForXmlRpc [mCode=-32601, mMessage=The method does not exist / is not available.] 
2015-05-12 10:27:45,142 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Failed in GlusterServersListVDS method 
2015-05-12 10:27:45,143 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] Command GlusterServersListVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterServersListVDS, error = The method does not exist / is not available., code = -32601 
2015-05-12 10:27:45,143 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-51) [211ecca6] FINISH, GlusterServersListVDSCommand, log id: 770f2d6e 
2015-05-12 10:27:45,311 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS 
2015-05-12 10:27:45,312 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] START, SetVdsStatusVDSCommand(HostName = ovserver1, HostId = b505a91a-38b2-48c9-a161-06f1360a3d6f, status=NonOperational, nonOperationalReason=GLUSTER_COMMAND_FAILED, stopSpmFailureLogged=false), log id: 9dbd40f 
2015-05-12 10:27:45,353 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-51) [7e3688d2] FINISH, SetVdsStatusVDSCommand, log id: 9dbd40f 
2015-05-12 10:27:45,355 ERROR [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (org.ovirt.thread.pool-8-thread-41) [7e3688d2] ResourceManager::vdsMaintenance - There is not host capable of running the hosted engine VM 
2015-05-12 10:27:45,394 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Correlation ID: 7e3688d2, Job ID: 2e6c4d5a-c1c3-4713-b103-2e20c2892e6b, Call Stack: null, Custom Event ID: -1, Message: Gluster command [<UNKNOWN>] failed on server ovserver1. 
2015-05-12 10:27:45,561 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-51) [7e3688d2] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Status of host ovserver1 was set to NonOperational. 
2015-05-12 10:27:45,696 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (DefaultQuartzScheduler_Worker-51) [b01e893] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: b505a91a-38b2-48c9-a161-06f1360a3d6f Type: VDS 
2015-05-12 10:27:45,697 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-51) [b01e893] Host b505a91a-38b2-48c9-a161-06f1360a3d6f : ovserver1 is already in NonOperational status for reason GLUSTER_COMMAND_FAILED. SetNonOperationalVds command is skipped. 
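
A note on the failures above: XML-RPC code -32601 is the generic "method not found" error, so the engine's GlusterServersListVDS call is reaching a VDSM that does not expose the gluster verbs at all - exactly the symptom of a missing vdsm-gluster package. A quick way to confirm on the host (a sketch; vdsClient ships with VDSM, but the exact gluster verb name below is an assumption for 3.5-era VDSM): 

# rpm -q vdsm-gluster 
# vdsClient -s 0 glusterHostsList 

If the second command also fails with "The method does not exist / is not available", the gluster bindings are simply not loaded. 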

VDSM log: 
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::1191::Storage.TaskManager.Task::(prepare) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::finished: {'75e6fd87-b38b-4280-b676-08c16748ff97': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000110247', 'lastCheck': '6.5', 'valid': True}} 
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::595::Storage.TaskManager.Task::(_updateState) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::moving from state preparing -> state finished 
Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} 
Thread-84704::DEBUG::2015-05-12 10:27:49,884::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 
Thread-84704::DEBUG::2015-05-12 10:27:49,884::task::993::Storage.TaskManager.Task::(_decref) Task=`a7b984ad-2390-4b5f-8ea6-c95b0d0e8c37`::ref 0 aborting False 
JsonRpc (StompReactor)::DEBUG::2015-05-12 10:27:49,914::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'> 
JsonRpcServer::DEBUG::2015-05-12 10:27:49,915::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request 
Thread-84705::DEBUG::2015-05-12 10:27:49,916::stompReactor::163::yajsonrpc.StompServer::(send) Sending response 
Detector thread::DEBUG::2015-05-12 10:27:49,974::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:49510 
Detector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:49510 
Detector thread::DEBUG::2015-05-12 10:27:49,980::protocoldetector::246::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:49510 
Detector thread::DEBUG::2015-05-12 10:27:49,980::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 49510) 
Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXMLRPC::1133::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('09546d15-6679-4a99-9fe6-3fa4730811d4',) {} 
Thread-84706::DEBUG::2015-05-12 10:27:49,982::BindingXMLRPC::1140::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': u'5900'}], 'memUsage': '0', 'acpiEnable': 'true', 'guestFQDN': '', 'pid': '5587', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': '0', 'balloonInfo': {}, 'pauseCode': 'NOERR', 'network': {u'vnet0': {'macAddr': '00:16:3e:42:95:b9', 'rxDropped': '29', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmType': 'kvm', 'cpuUser': '1.64', 'elapsedTime': '69926', 'vmJobs': {}, 'cpuSys': '0.27', 'appsList': [], 'displayType': 'vnc', 'vcpuCount': '2', 'clientIp': '', 'hash': '-3724559636060176164', 'vmId': '09546d15-6679-4a99-9fe6-3fa4730811d4', 'displayIp': '0', 'vcpuPeriod': 100000L, 'displayPort': u'5900', 'vcpuQuota': '-1', 'kvmEnable': 'true', 'disks': {u'vda': {'readLatency': '0', 'apparentsize': '32212254720', 'writeLatency': '0', 'imageID': '39f6830c-8fa1-4abd-9259-90654e91ff2d', 'flushLatency': '0', 'truesize': '15446843392'}, u'hdc': {'flushLatency': '0', 'readLatency': '0', 'truesize': '0', 'apparentsize': '0', 'writeLatency': '0'}}, 'monitorResponse': '0', 'statsAge': '1.83', 'username': 'Unknown', 'status': 'Up', 'guestCPUCount': -1, 'ioTune': [], 'guestIPs': ''}]} 
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state init -> state preparing 
clientIFinit:: INFO::2015-05-12 10:27:50,809::logUtils::44::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList(options=None) 
clientIFinit:: INFO::2015-05-12 10:27:50,809::logUtils::47::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList, Return response: {'poollist': []} 
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::1191::Storage.TaskManager.Task::(prepare) Task=`decf270c-4715-432c-a01d-942181f61e80`::finished: {'poollist': []} 
clientIFinit::DEBUG::2015-05-12 10:27:50,809::task::595::Storage.TaskManager.Task::(_updateState) Task=`decf270c-4715-432c-a01d-942181f61e80`::moving from state preparing -> state finished 
clientIFinit::DEBUG::2015-05-12 10:27:50,809::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} 
clientIFinit::DEBUG::2015-05-12 10:27:50,810::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 
clientIFinit::DEBUG::2015-05-12 10:27:50,810::task::993::Storage.TaskManager.Task::(_decref) Task=`decf270c-4715-432c-a01d-942181f61e80`::ref 0 aborting False 



Is something wrong with GlusterFS? Or with CentOS 7.1? 


----- Original Message -----

From: suporte at logicworks.pt 
To: "Daniel Helgenberger" <daniel.helgenberger at m-box.de> 
Cc: users at ovirt.org 
Sent: Tuesday, May 12, 2015 10:14:11 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 

Hi Daniel, 

Well, I have glusterfs up and running: 

# service glusterd status 
Redirecting to /bin/systemctl status glusterd.service 
glusterd.service - GlusterFS, a clustered file-system server 
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled) 
Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 19h ago 
Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS) 
Main PID: 3061 (glusterd) 
CGroup: /system.slice/glusterd.service 
├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid 
└─3202 /usr/sbin/glusterfsd -s ovserver2.domain.com --volfile-id gv... 

May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste.... 
May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster.... 
Hint: Some lines were ellipsized, use -l to show in full. 

# gluster volume info 
Volume Name: gv0 
Type: Distribute 
Volume ID: 6ccd1831-6c4c-41c3-a695-8c7b57cf1261 
Status: Started 
Number of Bricks: 1 
Transport-type: tcp 
Bricks: 
Brick1: ovserver2.domain.com:/home2/brick1 

I stopped iptables, but still cannot bring the nodes up. 
Everything was working until I needed to do a restart. 

Any more ideas? 



----- Original Message -----

From: "Daniel Helgenberger" <daniel.helgenberger at m-box.de> 
To: users at ovirt.org 
Sent: Monday, May 11, 2015 18:17:47 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 



On Mon, 2015-05-11 at 16:05 +0100, suporte at logicworks.pt wrote: 
> Hi, 
> 
> I just restarted it again, and now I start the gluster service before starting the hosted engine, but I still get the same error message. 
> 
> Any more ideas? 
I just had the same problem. 
My <unknown> error was indeed due to the fact that glusterd / glusterfsd were 
not running. 

After starting them, it turned out the host setup had not automatically 
added the iptables rules for gluster. I added the following to iptables: 

# gluster 
-A INPUT -p tcp --dport 24007:24011 -j ACCEPT 
-A INPUT -p tcp --dport 38465:38485 -j ACCEPT 

Afterwards 'gluster peer status' worked and my host was operational 
again. 
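
If you add the rules with the iptables command at runtime, remember to 
persist them (a sketch, assuming the iptables-services package on CentOS 7; 
note also that Gluster >= 3.4 allocates brick ports from 49152 upwards, so a 
range like the one below may be needed as well - the upper bound is an 
assumption): 

# iptables -A INPUT -p tcp --dport 24007:24011 -j ACCEPT 
# iptables -A INPUT -p tcp --dport 38465:38485 -j ACCEPT 
# iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT 
# service iptables save 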

Hint: Sometimes this is due to gluster itself. Restarting glusterd fixes 
this most of the time. 

> 
> Thanks 
> 
> Jose 
> 
> # hosted-engine --vm-status 
> 
> --== Host 1 status ==-- 
> 
> Status up-to-date : True 
> Hostname : ovserver1.domain.com 
> Host ID : 1 
> Engine status : {"health": "good", "vm": "up", "detail": "up"} 
> Score : 2400 
> Local maintenance : False 
> Host timestamp : 4998 
> Extra metadata (valid at timestamp): 
> metadata_parse_version=1 
> metadata_feature_version=1 
> timestamp=4998 (Mon May 11 16:03:48 2015) 
> host-id=1 
> score=2400 
> maintenance=False 
> state=EngineUp 
> 
> 
> # service glusterd status 
> Redirecting to /bin/systemctl status glusterd.service 
> glusterd.service - GlusterFS, a clustered file-system server 
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled) 
> Active: active (running) since Mon 2015-05-11 14:37:14 WEST; 1h 27min ago 
> Process: 3060 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS) 
> Main PID: 3061 (glusterd) 
> CGroup: /system.slice/glusterd.service 
> ├─3061 /usr/sbin/glusterd -p /var/run/glusterd.pid 
> └─3202 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv... 
> 
> May 11 14:37:11 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste.... 
> May 11 14:37:14 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster.... 
> Hint: Some lines were ellipsized, use -l to show in full. 
> 
> 
> ----- Original Message ----- 
> 
> From: suporte at logicworks.pt 
> To: "knarra" <knarra at redhat.com> 
> Cc: Users at ovirt.org 
> Sent: Monday, May 11, 2015 13:15:14 
> Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 
> 
> Hi, 
> 
> I have 2 nodes, but only one is working with glusterfs. 
> 
> But you were right, glusterfs was not running; I just started the service - I hadn't checked it :( : 
> # service glusterd status 
> Redirecting to /bin/systemctl status glusterd.service 
> glusterd.service - GlusterFS, a clustered file-system server 
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled) 
> Active: active (running) since Mon 2015-05-11 13:06:24 WEST; 3s ago 
> Process: 4482 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS) 
> Main PID: 4483 (glusterd) 
> CGroup: /system.slice/glusterd.service 
> ├─4483 /usr/sbin/glusterd -p /var/run/glusterd.pid 
> └─4618 /usr/sbin/glusterfsd -s ovserver2.acloud.pt --volfile-id gv... 
> 
> May 11 13:06:22 ovserver2.domain.com systemd[1]: Starting GlusterFS, a cluste.... 
> May 11 13:06:24 ovserver2.domain.com systemd[1]: Started GlusterFS, a cluster.... 
> Hint: Some lines were ellipsized, use -l to show in full. 
> 
> But still the problem remains 
> 
> Should I start glusterfs first, before the hosted engine? 
> 
> Thanks 
> 
> ----- Original Message ----- 
> 
> From: "knarra" <knarra at redhat.com> 
> To: suporte at logicworks.pt , Users at ovirt.org 
> Sent: Monday, May 11, 2015 12:45:19 
> Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 
> 
> On 05/11/2015 05:00 PM, suporte at logicworks.pt wrote: 
> 
> 
> 
> Hi, 
> 
> I'm testing oVirt 3.5.1, with hosted engine, on CentOS 7.1. I have installed some VMs with no problem. I needed to shut down the machines (following this procedure: http://lists.ovirt.org/pipermail/users/2014-April/023861.html ), and after rebooting I could not get it 
> working again; when trying to activate the hosts this message comes up: Gluster command [<UNKNOWN>] failed on server 
> I have tried a lot of things, including updating to Version 3.5.2-1.el7.centos, but with no success. 
> Gluster version: 
> glusterfs-3.6.3-1.el7.x86_64 
> glusterfs-libs-3.6.3-1.el7.x86_64 
> glusterfs-fuse-3.6.3-1.el7.x86_64 
> glusterfs-cli-3.6.3-1.el7.x86_64 
> glusterfs-rdma-3.6.3-1.el7.x86_64 
> glusterfs-api-3.6.3-1.el7.x86_64 
> 
> Any help? 
> 

-- 
Daniel Helgenberger 
m box bewegtbild GmbH 

P: +49/30/2408781-22 
F: +49/30/2408781-10 

ACKERSTR. 19 
D-10115 BERLIN 


www.m-box.de www.monkeymen.tv 

Geschäftsführer: Martin Retschitzegger / Michaela Göllner 
Handelsregister: Amtsgericht Charlottenburg / HRB 112767 



_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



