[ovirt-users] Storage questions

Oscar Segarra oscar.segarra at gmail.com
Wed Nov 23 12:08:41 UTC 2016


Sorry, I had not seen your email. I'll continue the thread here:

Hi Pavel, users,

Thanks a lot for your clarifications:

I'm surprised, because this seems very rigid... I don't understand why
oVirt was designed with these limitations.

Regarding my performance worry (without configuring any kind of backup):

Do you mean that 1,000 VDIs on a shared Gluster volume served by 10
physical hosts (the same hosts that run KVM) won't have performance
problems? Do you know of any similar experience?

Краснобаев Михаил: have you tried backing up VMs with a simple rsync or
with Gluster geo-replication?
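To make the rsync idea concrete, the per-VM cycle I have in mind would look roughly like this. This is only a sketch: the VM name, disk target and paths are invented placeholders, and the commands are printed rather than executed:

```shell
#!/bin/sh
# Dry sketch of one backup pass for a single VM. All names and paths are
# hypothetical examples, not values from a real deployment.
VM=vdi042
SRC=/var/lib/libvirt/images/$VM.qcow2
DST=/mnt/gluster-backup/$VM/

# 1. Freeze the base image with a disk-only external snapshot, so the copy
#    is consistent while the VM keeps writing to the overlay file.
echo "virsh snapshot-create-as $VM backup-snap --disk-only --atomic --no-metadata"
# 2. Copy the now-stable base image; rsync still has to read it in full.
echo "rsync -aS --inplace $SRC $DST"
# 3. Merge the overlay back into the base image and drop the snapshot.
echo "virsh blockcommit $VM vda --active --pivot"
```

The middle step is the expensive one Pavel describes: rsync reads the whole image even when little has changed.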

My reason for not using a classic shared Gluster datacenter is that 1,000
users hitting Gluster at once could put heavy load on the storage, and if
there were any bug or corruption in Gluster it would be possible to lose
1,000 virtual machines, which would mean 1,000 people unable to work.
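For comparison, the Gluster geo-replication route would be set up roughly like this. Again, only a dry sketch that prints the commands; the volume and slave host names are made up:

```shell
#!/bin/sh
# Dry sketch of Gluster geo-replication from a local backup volume to a
# remote slave volume. Both names are hypothetical placeholders.
MASTER_VOL=vdi-backup
SLAVE=backuphost::vdi-backup-dr

# Create the session (distributes SSH keys), start it, then check status.
echo "gluster volume geo-replication $MASTER_VOL $SLAVE create push-pem"
echo "gluster volume geo-replication $MASTER_VOL $SLAVE start"
echo "gluster volume geo-replication $MASTER_VOL $SLAVE status"
```

Unlike rsync, geo-replication works from Gluster's changelog, so it should not need to re-read every image on every cycle.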

I would be glad if somebody could share their experience with this kind of
large environment.

Thanks a lot.


2016-11-23 12:58 GMT+01:00 Краснобаев Михаил <milo1 at ya.ru>:

> Good day,
>
> >>you have to make VM snapshots and merge them
>
> This operation also takes a lot of time (in my experience, about as long
> as reading the whole virtual disk).
>
> Pavel, why don't you consider building a classic datacenter, where you
> have shared storage?
>
>
>
>
> 23.11.2016, 14:36, "Pavel Gashev" <pax at acronis.com>:
>
> 1. You can create a datacenter per host, but you can't have a storage
> shared among datacenters.
>
>
>
> 2. I mean that the backups would add performance problems. When you rsync
> a disk image, rsync reads both the source and the destination images in
> order to find the differences. In other words, if you want to make daily
> backups, rsync will read everything located on the local storages, plus
> everything located on Gluster, every day. Plus, in order to make
> consistent backups, you have to take VM snapshots and merge them after the
> rsync.
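A quick back-of-the-envelope supports the 24-hour claim. All the numbers below are my assumptions, not figures from this thread:

```shell
#!/bin/sh
# Assume 1000 VDIs of 40 GiB each, and an aggregate sequential read
# throughput of 400 MiB/s across the whole cluster for backup traffic.
# Both figures are invented for illustration.
VDIS=1000
GIB_PER_VDI=40
MIB_PER_SEC=400

TOTAL_MIB=$((VDIS * GIB_PER_VDI * 1024))
SECS=$((TOTAL_MIB / MIB_PER_SEC))
HOURS=$((SECS / 3600))
echo "Reading every image once takes about $HOURS hours"
```

Under these assumptions a single full read of all images already takes about 28 hours, before any write traffic or snapshot merging.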
>
>
>
>
>
> *From: *Oscar Segarra <oscar.segarra at gmail.com>
> *Date: *Wednesday 23 November 2016 at 13:42
> *To: *Pavel Gashev <Pax at acronis.com>
> *Cc: *users <users at ovirt.org>
> *Subject: *Re: [ovirt-users] Storage questions
>
>
>
> Hi Pavel,
>
>
>
> 1. A local-storage datacenter doesn't support multiple hosts. If you have
> multiple hosts, you have to have shared storage, even if it's a
> hyper-converged setup.
>
>
>
> Is it not possible to create a datacenter for each node and set up
> shared storage (spanning all hosts) for the engine and other
> infrastructure virtual servers?
>
>
>
> 2. In your case most of the disk and network performance would be consumed
> by backups, and a backup cycle would take more than 24 hours. Even rsync
> would take considerable resources, since it has to at least read the disk
> images in full.
>
>
>
> Do you mean that 1,000 VDIs on a shared Gluster volume served by 10
> physical hosts (the same hosts that run KVM) won't have performance
> problems? Do you know of any similar experience?
>
>
>
> Regarding rsync, the idea is to launch one rsync process per physical node
> to back up the virtual machines it hosts. But if you expect rsync to
> take the whole day... do you mean Gluster geo-replication would need 24
> hours too?
>
>
>
> Thanks a lot
>
>
>
>
>
> 2016-11-23 11:02 GMT+01:00 Pavel Gashev <Pax at acronis.com>:
>
> Oscar,
>
>
>
> I’d make two notes:
>
>
>
> 1. A local-storage datacenter doesn't support multiple hosts. If you have
> multiple hosts, you have to have shared storage, even if it's a
> hyper-converged setup.
>
>
>
> 2. In your case most of the disk and network performance would be consumed
> by backups, and a backup cycle would take more than 24 hours. Even rsync
> would take considerable resources, since it has to at least read the disk
> images in full.
>
>
>
> I’d recommend a scenario with a dedicated shared storage that supports
> snapshots.
>
>
>
>
>
> *From: *<users-bounces at ovirt.org> on behalf of Oscar Segarra <
> oscar.segarra at gmail.com>
> *Date: *Wednesday 23 November 2016 at 03:11
> *To: *Yaniv Dary <ydary at redhat.com>
> *Cc: *users <users at ovirt.org>
> *Subject: *Re: [ovirt-users] Storage questions
>
>
>
> Hi,
>
>
>
> Since oVirt makes it possible to attach local storage, I suppose it can be
> used to run virtual machines:
>
>
>
> I have drawn a couple of diagrams in order to find out whether it is
> possible to set up this configuration:
>
>
>
> 1.- In the normal, on-going scenario:
>
> Every host runs 100 VDI virtual machines whose disks are placed on local
> storage. There is a common Gluster volume shared by all nodes.
>
>
>
> [image: inline image 1]
>
>
>
> 2.- If one node fails:
>
>
>
> [image: inline image 2]
>
>
>
> oVirt would have to be able to inventory the copies of the machines (in
> our example vdi201 ... vdi300) and start them on the remaining nodes.
>
>
>
> Is it possible to reach this configuration with oVirt, or something
> similar?
>
>
>
> Making backups with the snapshot-based import/export procedure can take a
> lot of time and resources. Incremental rsync is cheaper in terms of
> resources.
>
>
>
> Thanks a lot.
>
>
>
>
>
> 2016-11-22 10:49 GMT+01:00 Yaniv Dary <ydary at redhat.com>:
>
> I suggest you set up that environment, test the performance, and report if
> you have issues.
>
> Please note that currently there is no data locality guarantee, so a VM
> might be running on a host that doesn't have its disks.
>
>
>
> We have APIs to do backup/restore, and that is the only supported option
> for backup:
>
> https://www.ovirt.org/develop/release-management/features/storage/backup-restore-api-integration/
>
> You can look at the Gluster DR option that was posted a while back.
>
> It used geo-replication and the import-storage-domain feature to do the DR.
>
>
>
>
>
> Yaniv Dary
>
> Technical Product Manager
>
> Red Hat Israel Ltd.
>
> 34 Jerusalem Road
>
> Building A, 4th floor
>
> Ra'anana, Israel 4350109
>
>
>
> Tel : +972 (9) 7692306
>
>         8272306
>
> Email: ydary at redhat.com
>
> IRC : ydary
>
>
>
> On Mon, Nov 21, 2016 at 5:17 PM, Oscar Segarra <oscar.segarra at gmail.com>
> wrote:
>
> Hi,
>
>
>
> I'm planning to deploy a scalable VDI infrastructure where each physical
> host can run over 100 VDIs, and I'd like to deploy 10 physical hosts
> (1,000 VDIs).
>
>
>
> In order to avoid performance problems (I think replicating the changes of
> 1,000 VDIs over the Gluster network could cause them), I have thought of
> using local storage for the VDIs, accepting that VDIs cannot be migrated
> between physical hosts.
>
>
>
> Is my worry founded in terms of performance?
>
> Is it possible to use local SSD storage for VDIs?
>
>
>
> I'd like to configure a Gluster volume for backups on rotational disks
> (tiered + replica 2 + stripe 2) just to provide HA if a physical host
> fails.
>
>
>
> Is it possible to use rsync for backing up VDIs?
>
> If not, how can I sync/back up the VDIs running on local storage to the
> shared Gluster storage?
>
> If a physical host fails, how can I start the latest backup of a VDI from
> the shared Gluster volume?
>
>
>
> Thanks a lot
>
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
>
>
>
>
>
> --
> Best regards, Краснобаев Михаил.
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.ovirt.org/pipermail/users/attachments/20161123/77b11edb/attachment-0001.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 36850 bytes
Desc: not available
URL: <http://lists.ovirt.org/pipermail/users/attachments/20161123/77b11edb/attachment-0002.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image002.png
Type: image/png
Size: 57919 bytes
Desc: not available
URL: <http://lists.ovirt.org/pipermail/users/attachments/20161123/77b11edb/attachment-0003.png>


More information about the Users mailing list