[ovirt-users] Is oVirt 3.6 with GlusterFS 3.7 recommended for production usage?
Sahina Bose
sabose at redhat.com
Fri Jun 24 07:15:55 UTC 2016
On 06/24/2016 11:25 AM, Dewey Du wrote:
> I prefer deploying as a hyperconverged setup, but it is still
> experimental, right?
Hyperconverged deployment with oVirt and Gluster has been tested and is
currently offered as a preview feature, with guidance on dos and don'ts
and recommended gluster volume settings. We're working on making it
easier to set up and on integrating it better into the oVirt UI in 4.1.
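
For reference, the commonly recommended volume settings for VM storage
can be applied through the "virt" group profile. A minimal sketch,
assuming a volume named "vmstore" (the name, and the exact contents of
the profile, vary by gluster version):

    # Apply the virt profile shipped with glusterfs
    # (/var/lib/glusterd/groups/virt)
    gluster volume set vmstore group virt

    # Roughly what the profile sets, option by option:
    gluster volume set vmstore cluster.quorum-type auto
    gluster volume set vmstore cluster.server-quorum-type server
    gluster volume set vmstore cluster.eager-lock enable
    gluster volume set vmstore network.remote-dio enable
    gluster volume set vmstore performance.quick-read off
    gluster volume set vmstore performance.read-ahead off
    gluster volume set vmstore performance.io-cache off
    gluster volume set vmstore performance.stat-prefetch off

    # For hyperconverged setups, sharding is typically enabled as well:
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 512MB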
>
> So I am trying to separate the VM service and storage. I added a new
> storage domain (Domain Type "Data", Storage Type "GlusterFS", Use Host
> "host-01"). But then I can't add another new storage domain (Domain
> Type "Data", Storage Type "GlusterFS", Use Host "host-02"): the "Path"
> input field is not writable (grayed out) in the New Domain popup window.
>
> My question is, should we add a new storage domain for each ovirt-node?
No, you don't need to. A single GlusterFS storage domain is shared by
all hosts in the data center; the "Use Host" field only selects which
host performs the operation.
That said, this looks like a bug in the Create Storage Domain UI. Does
refreshing your browser fix the grayed-out input field? Do you see any
errors in the engine logs?
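
For example, a single storage domain created with the values below is
mounted by every host in the data center (hostnames and volume name are
placeholders):

    Name:          vmstore
    Domain Type:   Data
    Storage Type:  GlusterFS
    Use Host:      host-01
    Path:          host-01:/vmstore
    Mount Options: backup-volfile-servers=host-02:host-03

The backup-volfile-servers mount option lets hosts fetch the volume
info from another server if host-01 is unreachable at mount time.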
>
>
>
> On Tue, Jun 21, 2016 at 7:15 PM, Sahina Bose <sabose at redhat.com
> <mailto:sabose at redhat.com>> wrote:
>
> Make sure that you use replica 3 gluster volumes for storing VM
> images. Are you planning to deploy as a hyperconverged setup?
> Either way, try to use the latest oVirt 3.6 and GlusterFS 3.7
> (3.7.12, which addresses bugs related to sharding and O_DIRECT,
> is due to be released soon).
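
To illustrate, creating such a replica 3 volume looks roughly like
this (hostnames and brick paths are placeholders):

    gluster volume create vmstore replica 3 \
        host-01:/bricks/vmstore/brick \
        host-02:/bricks/vmstore/brick \
        host-03:/bricks/vmstore/brick
    gluster volume set vmstore group virt
    gluster volume start vmstore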
>
> On 06/21/2016 07:08 AM, Dewey Du wrote:
>> I want to deploy oVirt 3.6 with GlusterFS 3.7 to my online
>> servers. Is it recommended for production usage?
>>
>> Thx.