[ovirt-users] Storage questions

Yaniv Kaul ykaul at redhat.com
Thu Nov 24 13:26:11 UTC 2016


On Thu, Nov 24, 2016 at 1:39 PM, Fernando Frediani <
fernando.frediani at upx.com.br> wrote:

> I have similar frustrations with oVirt, Oscar, especially regarding ways
> to manage local and shared storage.
>
> Instead of making things easier for the many scenarios people use, the
> design process seems to have made them a bit difficult. Just look at one of
> the market leaders, VMware vSphere, and one can easily see the flexibility
> it has to move things around, even between hosts that don't belong to the
> same cluster. As we are talking about Linux 'under the hood', it shouldn't
> take much to do similar things.
>
> Perhaps the people who work on the design can make some of these things a
> bit more flexible.
>

I'd like to take this opportunity to encourage the community to send
patches, both to the design and to the implementation.
The value of open source is not only in consumption, but also in
participation.
Active contribution is not only the best way to influence the project, but
it is also a rewarding and joyful experience for the contributor.
Getting into the internal bits of a project, understanding why some key
design decisions were made, and suggesting and implementing enhancements and
changes isn't easy.
It is a journey, with ups and downs, but certainly a great ride.

Feel free to reach the developers on the devel mailing list, and we'll be
happy to assist with onboarding, consulting, advice and reviews of your code.
Y.


> On 23/11/2016 10:03, Oscar Segarra wrote:
>
> Hi Pavel, users,
>
> Thanks a lot for your clarifications:
>
> I'm surprised because this system is very rigid... I don't understand why
> oVirt has been designed with these limitations.
>
> Regarding my performance worry (without configuring any kind of backup):
>
> Do you mean that 1000 VDIs against a shared Gluster volume provided by 10
> physical hosts (the same hosts that run KVM) won't have performance
> problems? Do you know of any similar experience?
>
> And related to rsync, as Gluster geo-replication is fully supported, do you
> have experience backing up VMs using this feature?
>
> Thanks a lot.
>
>
> 2016-11-23 12:36 GMT+01:00 Pavel Gashev <Pax at acronis.com>:
>
>> 1. You can create a datacenter per host, but you can't have storage
>> shared among datacenters.
>>
>>
>>
>> 2. I mean that backups would add performance problems. When you rsync a disk
>> image, in order to find the differences rsync reads both the source and the
>> destination images. In other words, if you want to make daily backups,
>> rsync will read everything located on local storage, plus everything
>> located on Gluster, every day. Plus, in order to make consistent backups,
>> you have to make VM snapshots and merge them after rsync.
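[Editor's note: the scale of that daily read can be estimated with a quick back-of-envelope calculation. The image size and throughput figures below are assumptions for illustration only, not numbers from the thread:]

```shell
# Back-of-envelope estimate of one rsync backup cycle over 1000 VDI images.
# Assumed figures (adjust to your environment): 1000 VDIs of 50 GiB each,
# ~500 MiB/s sustained read throughput across the whole farm.
vdis=1000
gib_per_image=50
mib_per_sec=500

total_read_gib=$((vdis * gib_per_image))            # rsync reads every image in full
seconds=$((total_read_gib * 1024 / mib_per_sec))    # GiB -> MiB, divided by throughput
hours=$((seconds / 3600))

echo "data read per cycle: ${total_read_gib} GiB"
echo "cycle length at ${mib_per_sec} MiB/s: ~${hours} hours"
```

Under these assumed numbers the cycle alone exceeds a day, and this counts only the source side; rsync also reads the destination copy on Gluster, roughly doubling the I/O.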
>>
>>
>>
>>
>>
>> *From: *Oscar Segarra <oscar.segarra at gmail.com>
>> *Date: *Wednesday 23 November 2016 at 13:42
>> *To: *Pavel Gashev <Pax at acronis.com>
>>
>> *Cc: *users <users at ovirt.org>
>> *Subject: *Re: [ovirt-users] Storage questions
>>
>>
>>
>> Hi Pavel,
>>
>>
>>
>> 1. A local storage datacenter doesn’t support multiple hosts. If you have
>> multiple hosts, you have to have shared storage, even if it’s a
>> hyper-converged setup.
>>
>>
>>
>> Is it not possible to create a datacenter for each node and set up
>> shared storage (spanning all hosts) for storing the engine and other
>> infrastructure virtual servers?
>>
>>
>>
>> 2. In your case most of the disk and network performance would be used by
>> backups, and a backup cycle would take more than 24 hours. Even rsync would
>> consume considerable resources, since it has to at least read the whole disk
>> images.
>>
>>
>>
>> Do you mean that 1000 VDIs against a shared Gluster volume provided by 10
>> physical hosts (the same hosts that run KVM) won't have performance
>> problems? Do you know of any similar experience?
>>
>>
>>
>> Related to rsync, the idea is to launch one rsync process per physical node
>> to back up the virtual machines it hosts. But if you expect rsync to
>> require the whole day... do you mean Gluster geo-replication will require 24
>> hours too?
>>
>>
>>
>> Thanks a lot
>>
>>
>>
>>
>>
>> 2016-11-23 11:02 GMT+01:00 Pavel Gashev <Pax at acronis.com>:
>>
>> Oscar,
>>
>>
>>
>> I’d make two notes:
>>
>>
>>
>> 1. A local storage datacenter doesn’t support multiple hosts. If you have
>> multiple hosts, you have to have shared storage, even if it’s a
>> hyper-converged setup.
>>
>>
>>
>> 2. In your case most of the disk and network performance would be used by
>> backups, and a backup cycle would take more than 24 hours. Even rsync would
>> consume considerable resources, since it has to at least read the whole disk
>> images.
>>
>>
>>
>> I’d recommend a scenario with a dedicated shared storage that supports
>> snapshots.
>>
>>
>>
>>
>>
>> *From: *<users-bounces at ovirt.org> on behalf of Oscar Segarra <
>> oscar.segarra at gmail.com>
>> *Date: *Wednesday 23 November 2016 at 03:11
>> *To: *Yaniv Dary <ydary at redhat.com>
>> *Cc: *users <users at ovirt.org>
>> *Subject: *Re: [ovirt-users] Storage questions
>>
>>
>>
>> Hi,
>>
>>
>>
>> As it is possible to attach local storage in oVirt, I suppose it can be
>> used to run virtual machines.
>>
>>
>>
>> I have drawn a couple of diagrams in order to find out whether it is
>> possible to set up this configuration:
>>
>>
>>
>> 1.- In on-going scenario:
>>
>> Every host runs 100 VDI virtual machines whose disks are placed on local
>> storage. There is a common Gluster volume shared among all nodes.
>>
>>
>>
>> [image: inline image 1]
>>
>>
>>
>> 2.- If one node fails:
>>
>>
>>
>> [image: inline image 2]
>>
>>
>>
>> oVirt has to be able to inventory the copies of the machines (in our
>> example vdi201 ... vdi300) and start them on the remaining nodes.
>>
>>
>>
>> Is it possible to achieve this configuration with oVirt, or something
>> similar?
>>
>>
>>
>> Making backups with the snapshot-based import-export procedure can take a
>> lot of time and resources. Incremental rsync is cheaper in terms of
>> resources.
>>
>>
>>
>> Thanks a lot.
>>
>>
>>
>>
>>
>> 2016-11-22 10:49 GMT+01:00 Yaniv Dary <ydary at redhat.com>:
>>
>> I suggest you set up that environment, test the performance, and
>> report if you have issues.
>>
>> Please note that currently there is no data locality guarantee, so a VM
>> might be running on a host that doesn't have its disks.
>>
>>
>>
>> We have APIs to do backup/restore, and that is the only supported option
>> for backup:
>>
>> https://www.ovirt.org/develop/release-management/features/storage/backup-restore-api-integration/
>>
>> You can look at the Gluster DR option that was posted a while back.
>>
>> It uses geo-replication and the import storage domain feature to do the DR.
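[Editor's note: for reference, setting up such a geo-replication link looks roughly like the sketch below. The volume and host names are placeholders, not from the thread; see the GlusterFS geo-replication documentation for the full procedure, including passwordless SSH prerequisites:]

```shell
# On the primary cluster: create and start a geo-replication session
# from the local volume "vmstore" to volume "vmstore-backup" on a
# remote backup node (all names are hypothetical).
gluster system:: execute gsec_create
gluster volume geo-replication vmstore backupnode::vmstore-backup create push-pem
gluster volume geo-replication vmstore backupnode::vmstore-backup start
gluster volume geo-replication vmstore backupnode::vmstore-backup status
```

On the DR side, the replicated volume can then be attached to oVirt via the import storage domain feature, which registers the existing disks and VMs.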
>>
>>
>>
>>
>> Yaniv Dary
>>
>> Technical Product Manager
>>
>> Red Hat Israel Ltd.
>>
>> 34 Jerusalem Road
>>
>> Building A, 4th floor
>>
>> Ra'anana, Israel 4350109
>>
>>
>>
>> Tel : +972 (9) 7692306
>>
>>         8272306
>>
>> Email: ydary at redhat.com
>>
>> IRC : ydary
>>
>>
>>
>> On Mon, Nov 21, 2016 at 5:17 PM, Oscar Segarra <oscar.segarra at gmail.com>
>> wrote:
>>
>> Hi,
>>
>>
>>
>> I'm planning to deploy a scalable VDI infrastructure where each physical
>> host can run over 100 VDIs, and I'd like to deploy 10 physical hosts (1000
>> VDIs).
>>
>>
>>
>> In order to avoid performance problems (I think replicating the changes of
>> 1000 VDIs over the Gluster network can cause performance problems), I have
>> thought of using local storage for the VDIs, accepting that VDIs cannot be
>> migrated between physical hosts.
>>
>>
>>
>> Is my performance worry founded?
>>
>> Is it possible to use local SSD storage for VDIs?
>>
>>
>>
>> I'd like to configure a Gluster volume for backups on rotational disks
>> (tiered + replica 2 + stripe 2), just to provide HA if a physical host
>> fails.
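[Editor's note: the replica 2 part of that layout could be created along the lines below. Brick paths and host names are made up for illustration; also note that for VM images, sharding is generally recommended over stripe volumes:]

```shell
# Hypothetical 4-node replica 2 backup volume on rotational disks
gluster volume create backupvol replica 2 \
  node1:/bricks/hdd/backup node2:/bricks/hdd/backup \
  node3:/bricks/hdd/backup node4:/bricks/hdd/backup
gluster volume start backupvol
```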
>>
>>
>>
>> Is it possible to use rsync for backing up VDIs?
>>
>> If not, how can I sync/back up the VDIs running on local storage to the
>> shared Gluster storage?
>>
>> If a physical host fails, how can I start the latest backup of the VDIs on
>> the shared Gluster volume?
>>
>>
>>
>> Thanks a lot
>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>>
>>
>>
>>
>
>
>
>
>
>
>
>

