[ovirt-users] ovirt+gluster+NFS : storage hicups
Nicolas Ecarnot
nicolas at ecarnot.net
Thu Sep 3 13:28:57 UTC 2015
On 06/08/2015 16:36, Tim Macy wrote:
> Nicolas, I have the same setup dedicated physical system running engine
> on CentOS 6.6 three hosts running CentOS 7.1 with Gluster and KVM, and
> firewall is disabled on all hosts. I also followed the same documents
> to build my environment so I assume they are very similar. I have on
> occasion had the same errors and have also found that "ctdb rebalanceip
> <floating ip>" is the only way to resolve the problem. I intend to
> remove ctdb since it is not needed with the configuration we are
> running. CTDB is only needed for hosted engine on a floating NFS mount,
> so you should be able to change the gluster storage domain mount paths to
> "localhost:<name>". The only thing that has prevented me from making
> this change is that my environment is live with running VMs. Please
> let me know if you go this route.
>
> Thank you,
> Tim Macy
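The floating-IP workaround Tim mentions can be sketched as a short CTDB session; `ctdb rebalanceip` is the command quoted above, and `ctdb ip` / `ctdb status` are the usual checks around it. The floating IP is a placeholder for your own public address:

```shell
# Sketch only, assuming a working CTDB cluster with a configured public IP.
ctdb ip                           # list public IPs and which node hosts each one
ctdb rebalanceip <floating ip>    # ask CTDB to re-home the stuck floating IP
ctdb status                       # confirm all nodes report OK afterwards
```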
This week, I eventually took the time to change this, as this DC is not
in production.
- Our big NFS storage domain was the master, it contained some VMs
- I wiped all my VMs
- I created a very small temporary NFS master domain, because I did not
want to deal with any issue related to erasing the last master
storage domain
- I removed the big NFS SD
- I wiped all that was inside, on a filesystem level
- I disabled ctdb, and removed the "meta" gluster volume that ctdb used
for its locks
- I added a new storage domain, using your advice:
- gluster type
- localhost:<name>
- I removed the temp SD, and everything switched over correctly to the big glusterFS one
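The gluster-side part of the steps above could look roughly like this; the volume names are assumptions for illustration ("meta" for the ctdb lock volume, as in this thread, and "data" for the storage domain volume), and the storage domain changes themselves are done in the oVirt web UI:

```shell
# Sketch only: retire ctdb and its lock volume on each host.
systemctl stop ctdb
systemctl disable ctdb

# Remove the gluster volume that only existed for ctdb's lock file.
gluster volume stop meta
gluster volume delete meta

# The new storage domain is then added in oVirt as type GlusterFS,
# with a path like "localhost:/data" instead of the old floating-IP NFS export.
```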
I then spent some time playing with P2V, and storing new VMs on this
new-style glusterFS storage domain.
I'm watching the CPU and I/O on the hosts, and yes, they are working,
but the load stays sane.
On this particular change (NFS to glusterFS), everything was very smooth.
Regards,
--
Nicolas ECARNOT