[ovirt-users] Hyperconverged Setup and Gluster healing

Sven Achtelik Sven.Achtelik at eps.aero
Mon Apr 24 12:06:06 UTC 2017


Hi Kasturi,

I'll try that. Will this setting be persistent across a host reboot, or even a stop of the complete cluster?


Thank you
From: knarra [mailto:knarra at redhat.com]
Sent: Monday, 24 April 2017 13:44
To: Sven Achtelik <Sven.Achtelik at eps.aero>; users at ovirt.org
Subject: Re: [ovirt-users] Hyperconverged Setup and Gluster healing

On 04/24/2017 05:03 PM, Sven Achtelik wrote:
Hi All,

my oVirt setup is 3 hosts with Gluster and replica 3. I always try to stay on the current version and apply updates/upgrades whenever there are any. For this I put a host into maintenance and also check the "Stop Gluster Service" checkbox. After it's done updating I set it back to active, wait until the engine sees all bricks again, and then go for the next host.

This worked fine for me over the last months, but now that I have more and more VMs running, the changes written to the Gluster volume while a host is in maintenance have grown considerably, and it takes pretty long for the healing to complete. What I don't understand is that I don't really see much network usage in the GUI during that time, and it feels quite slow. The Gluster network is 10G and I'm quite happy with its performance in general; it's just the healing that takes long. I noticed this because I couldn't update the third host due to unsynced Gluster volumes.

Is there any limiting variable that throttles traffic during healing and needs to be configured? Or should I maybe change my update process somehow to avoid having so many changes queued up?

Thank you,

Sven





Hi Sven,

    Do you have granular entry heal enabled on the volume? If not, there is a feature called granular entry self-heal which should be enabled on sharded volumes to get its benefits. Then, when a brick goes down and, say, only 1 of a million entries is created or deleted, self-heal is done only for that entry; it won't crawl the entire directory.

    You can run the gluster volume set VOLNAME cluster.granular-entry-heal enable / disable command only while the volume is in the Created state. If the volume is in any other state, for example Started or Stopped, run gluster volume heal VOLNAME granular-entry-heal enable / disable instead to enable or disable the granular-entry-heal option.
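    For example, assuming the volume is named "engine" (substitute your actual volume name), the two variants look like this:

        # volume still in Created state (not yet started)
        gluster volume set engine cluster.granular-entry-heal enable

        # volume already Started (or in any state other than Created)
        gluster volume heal engine granular-entry-heal enable

    You can check the current value afterwards with "gluster volume get engine cluster.granular-entry-heal".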

Thanks

kasturi

