[ovirt-users] on oVirt and Gluster hyper-converged environment vmmemory show up to 97~99%

胡茂荣 maorong.hu at horebdata.cn
Tue Apr 12 07:36:40 UTC 2016


I changed the environment:
      glusterfs version: glusterfs-3.7.10
      ovirt-engine: 3.6.4
      gluster volume bricks use SSD disks




     hosted-engine volume: replica 3 arbiter 1 (only for the ovirt-engine VM)
     data volume: replica 2 (for the other VMs; replica 2 write performance is better than replica 3; create commands are sketched below)
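
 For reference, a rough sketch of how these two volumes can be created (host names and brick paths here are only examples, not my real ones; with "replica 3 arbiter 1" the last brick of the set stores metadata only):

gluster volume create hosted-engine replica 3 arbiter 1 host1:/bricks/ssd/engine host2:/bricks/ssd/engine host3:/bricks/ssd/engine
gluster volume create data replica 2 host1:/bricks/ssd/data host2:/bricks/ssd/data
gluster volume start hosted-engine
gluster volume start data

 Note that plain replica 2 has no quorum, so split-brain is possible; that is the trade-off for the better write performance.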


 and VM memory now shows normal values; it no longer jumps to a high value suddenly.


 I think there are several possible causes:
    1. The gluster volume (replica 3) is on HDD (and I have only one HDD on each server as the gluster volume brick), so performance is poor (a profile check is sketched below the list).
    2. It may be a version problem (glusterfs 3.7.6 / ovirt 3.6.1): with glusterfs and ovirt-engine on the same host, memory or CPU resource management is not very good.
    3. Maybe it is only an oVirt display problem.
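
 For point 1, gluster's built-in profiler is a simple way to check whether the brick is the bottleneck (the volume name below is only an example):

gluster volume profile data start
# run the VM workload for a few minutes, then:
gluster volume profile data info
gluster volume profile data stop

 "profile info" prints per-brick FOP latencies, which should show whether the single HDD brick is saturated.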


 
 
------------------ Original ------------------
From:  "胡茂荣"<maorong.hu at horebdata.cn>;
Date:  Tue, Apr 12, 2016 10:47 AM
To:  "Sahina Bose"<sabose at redhat.com>; 

Subject:  Re: [ovirt-users] on oVirt and Gluster hyper-converged environment vmmemory show up to 97~99%

 
 glusterfs version: glusterfs-3.7.6
 ovirt-engine: 3.6.1

 glusterfs volume: hosted-engine, replica 3 arbiter 1
   features.shard is on
 configured as:
gluster volume set hosted-engine group virt
gluster volume set hosted-engine storage.owner-uid 36
gluster volume set hosted-engine storage.owner-gid 36
gluster volume set hosted-engine cluster.quorum-type auto
gluster volume set hosted-engine network.ping-timeout 10
gluster volume set hosted-engine auth.allow \*
gluster volume set hosted-engine server.allow-insecure on
gluster volume set hosted-engine features.shard on
gluster volume set hosted-engine cluster.data-self-heal-algorithm full
gluster volume set hosted-engine nfs.disable on
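
 To confirm the options above were applied (gluster volume get is available on glusterfs 3.7 and later):
gluster volume info hosted-engine
gluster volume get hosted-engine features.shard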



 and the three servers' memory and CPU usage is low; CPU% stays at most around 40% (each host has 16 GB of memory and 4 CPU cores). Even with only 1~3 VMs started, the problem still occurs.




 
------------------ Original ------------------
From:  "Sahina Bose"<sabose at redhat.com>;
Date:  Mon, Apr 11, 2016 10:25 PM
To:  "胡茂荣"<maorong.hu at horebdata.cn>; "users"<users at ovirt.org>; 

Subject:  Re: [ovirt-users] on oVirt and Gluster hyper-converged environment vmmemory show up to 97~99%

 
                   
     
     On 04/06/2016 02:53 PM, 胡茂荣 wrote:

         my problem:
         On an oVirt and Gluster hyper-converged environment (glusterfs and oVirt on the same hosts, three hosts in total, glusterfs volume is replica 3 arbiter 1),
         the storage domain uses a glusterfs volume mounted via glusterfs or NFS. With VMs running, a VM's memory suddenly shows 97%~99%. This happens with high
         probability on VMs on the same host, but the VMs' memory is not actually at 97%~99%, and the reported value later drops back to the actual value. All of the above can be seen in the
          
     Is sharding enabled on the gluster volume?
     Which version of glusterfs are you using?
     
     Is the CPU usage on the servers consistently high? 
     
                     
         
         ovirt-engine webadmin UI.

