[ovirt-users] storage redundancy in Ovirt
Nir Soffer
nsoffer at redhat.com
Sun Apr 16 11:29:01 UTC 2017
On Sun, Apr 16, 2017 at 2:05 PM Dan Yasny <dyasny at redhat.com> wrote:
>
>
> On Apr 16, 2017 7:01 AM, "Nir Soffer" <nsoffer at redhat.com> wrote:
>
> On Sun, Apr 16, 2017 at 4:17 AM Dan Yasny <dyasny at gmail.com> wrote:
>
>> When you set up a storage domain, you need to specify a host to perform
>> the initial storage operations, but once the SD is defined, its details
>> are stored in the engine database and all hosts connect to it directly.
>> If the first host you used to define the SD goes down, all other hosts
>> still remain connected and keep working. SPM is an HA service, and if the
>> current SPM host goes down, SPM is started on another host in the DC. In
>> short, unless the actual NFS-exporting host goes down, there is no outage.
>>
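(For reference, a minimal sketch of creating an NFS data domain with the
Python ovirt-engine-sdk4; the engine URL, credentials, host name and export
path below are placeholders, not values from this thread:)

    # Sketch only, assuming ovirt-engine-sdk4. The host given here is used
    # just for the initial storage operations; once the SD is created its
    # details live in the engine database and every host in the DC mounts
    # the NFS export directly.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    sds_service = connection.system_service().storage_domains_service()
    sd = sds_service.add(
        types.StorageDomain(
            name='nfs_data',
            type=types.StorageDomainType.DATA,
            host=types.Host(name='host1'),
            storage=types.HostStorage(
                type=types.StorageType.NFS,
                address='nfs.example.com',
                path='/exports/data',
            ),
        ),
    )

    connection.close()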
>
> There is no storage outage, but if you shut down the SPM host, the SPM role
> will not move to another host until the original SPM host is online again,
> or you
> confirm manually that the SPM host was rebooted.
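(The manual confirmation is the "Confirm 'Host has been Rebooted'" action in
the UI; an untested sketch of the equivalent call with the Python
ovirt-engine-sdk4, assuming the host fence action accepts the 'manual' fence
type and with 'host1' as a placeholder name:)

    # Sketch only: confirm a dead SPM host was rebooted so the engine can
    # move the SPM role. Assumes ovirt-engine-sdk4 and that
    # HostService.fence() accepts fence_type='manual'.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    hosts_service = connection.system_service().hosts_service()
    host = hosts_service.list(search='name=host1')[0]
    hosts_service.host_service(host.id).fence(fence_type='manual')

    connection.close()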
>
>
> In a properly configured setup the SBA should take care of that. That's
> the whole point of HA services.
>
In some cases, like power loss or hardware failure, there is no way to start
the SPM host, and the system cannot recover automatically.
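(Automatic recovery after such a failure depends on power management /
fencing being configured on the hosts; a hedged sketch of enabling a fence
agent with the Python ovirt-engine-sdk4 — the agent type, address and
credentials are placeholders:)

    # Sketch only: enable power management on a host so the engine can
    # fence it and recover the SPM role automatically after a hard failure.
    # Assumes ovirt-engine-sdk4; 'ipmilan', the address and the credentials
    # are example values.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    hosts_service = connection.system_service().hosts_service()
    host = hosts_service.list(search='name=host1')[0]
    host_service = hosts_service.host_service(host.id)

    # Add a fence agent, then enable power management on the host.
    host_service.fence_agents_service().add(
        types.Agent(
            type='ipmilan',
            address='host1-ipmi.example.com',
            username='fenceuser',
            password='fencepass',
            order=1,
        ),
    )
    host_service.update(
        types.Host(
            power_management=types.PowerManagement(enabled=True),
        ),
    )

    connection.close()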
Nir
>
>
> Nir
>
>
>>
>> On Sat, Apr 15, 2017 at 1:53 PM, Konstantin Raskoshnyi <
>> konrasko at gmail.com> wrote:
>>
>>> Hi Fernando,
>>> I see each host has a direct NFS mount, but yes, if the main host through
>>> which I connected the NFS storage goes down, the storage becomes
>>> unavailable and all VMs go down
>>>
>>>
>>> On Sat, Apr 15, 2017 at 10:37 AM FERNANDO FREDIANI <
>>> fernando.frediani at upx.com> wrote:
>>>
>>>> Hello Konstantin.
>>>>
>>>> It doesn't make much sense to make a whole cluster depend on a single
>>>> host. From what I know, every host talks directly to the NFS storage
>>>> array or whatever other shared storage you have.
>>>> Have you tested whether that host going down affects the others when the
>>>> NFS export is mounted directly from an NFS storage array?
>>>>
>>>> Fernando
>>>>
>>>> 2017-04-15 12:42 GMT-03:00 Konstantin Raskoshnyi <konrasko at gmail.com>:
>>>>
>>>>> In oVirt you have to attach storage through a specific host.
>>>>> If that host goes down, the storage is not available.
>>>>>
>>>>> On Sat, Apr 15, 2017 at 7:31 AM FERNANDO FREDIANI <
>>>>> fernando.frediani at upx.com> wrote:
>>>>>
>>>>>> Well, make it not go through host1: dedicate a storage server to
>>>>>> running NFS and have both hosts connect to it.
>>>>>> In my view NFS is much easier to manage than any other type of
>>>>>> storage, especially FC and iSCSI, and performance is pretty much the
>>>>>> same, so moving to another type won't buy you much beyond different
>>>>>> management.
>>>>>>
>>>>>> Fernando
>>>>>>
>>>>>> 2017-04-15 5:25 GMT-03:00 Konstantin Raskoshnyi <konrasko at gmail.com>:
>>>>>>
>>>>>>> Hi guys,
>>>>>>> I have one NFS storage domain; it's connected through host1.
>>>>>>> host2 also has access to it, and I can easily migrate VMs between them.
>>>>>>>
>>>>>>> The question is: if host1 is down, the whole infrastructure is down,
>>>>>>> since all traffic goes through host1.
>>>>>>> Is there any way in oVirt to use redundant storage?
>>>>>>>
>>>>>>> Only GlusterFS?
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>