Hi,
Thanks a lot for your clarifications. I will keep working with oVirt to
check whether it ultimately fits our requirements. If not, I will have to
look at what's on the market.
Óscar
2016-11-24 14:54 GMT+01:00 Yaniv Kaul <ykaul(a)redhat.com>:
On Thu, Nov 24, 2016 at 3:33 PM, Fernando Frediani <
fernando.frediani(a)upx.com.br> wrote:
> As many may think, contributions are certainly not limited to writing
> lines of code. Thanks for the invitation. I'm sure it can be an
> interesting exercise.
>
Right - and thanks for pointing it out. There are plenty of ways one can
contribute, ranging from bug reports, enhancement requests and sharing best
practices, to improving documentation, translating, educating other users,
blog posts, website updates and feature design reviews. Attending or
lecturing on the project in various meetups is also a great way.
Lastly, I'd also like to mention specifically that extending the project by
connecting it to other projects is a powerful way to contribute and develop,
even if not directly within the project.
We are seeing development in everything from bash to Ruby, connecting oVirt
with monitoring solutions, cloud solutions, backup and more! From scripts to
full-blown apps, this is a growing ecosystem around oVirt.
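For example, a minimal monitoring-style script against the engine's REST API
might look like this (hostname and credentials are placeholders, assuming an
oVirt 4.x engine):

    #!/bin/bash
    # List all VMs known to the engine, returned as XML.
    # -k skips TLS verification; acceptable only for a quick test.
    curl -k -u 'admin@internal:PASSWORD' \
         -H 'Accept: application/xml' \
         'https://engine.example.com/ovirt-engine/api/vms'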
Y.
> On 24/11/2016 11:26, Yaniv Kaul wrote:
>
>
>
> On Thu, Nov 24, 2016 at 1:39 PM, Fernando Frediani <
> fernando.frediani(a)upx.com.br> wrote:
>
>> I have similar frustrations with oVirt, Oscar, especially regarding the
>> way local and shared storage are managed.
>>
>> Instead of making the many scenarios people actually use easier, the
>> design process seems to have made things a bit difficult. Just look at one
>> of the market leaders, VMware vSphere, and one can easily see the
>> flexibility it has to move things around, even when they don't belong to
>> the same cluster. As we are talking about Linux 'under the hood', it
>> shouldn't take much to do similar things.
>>
>> Perhaps the people who work on the design can make some of these things a
>> bit more flexible.
>>
>
> I'd like to use the opportunity to encourage the community to send
> patches, both to the design and to the implementation.
> The value of open source is not only in consumption, but also in
> participation.
> Active contribution is not only the best way to influence the project;
> it is also a rewarding and joyful experience for the contributor.
> Getting into the internal bits of a project, understanding why some key
> design decisions were made, suggesting and implementing enhancements and
> changes isn't easy.
> It is a journey, with ups and downs, but certainly a great ride.
>
> Feel free to reach the developers on the devel mailing list, and we'll be
> happy to assist with onboarding, consulting, advice and reviews of your code.
> Y.
>
>
>> On 23/11/2016 10:03, Oscar Segarra wrote:
>>
>> Hi Pavel, users,
>>
>> Thanks a lot for your clarifications:
>>
>> I'm surprised, because this system is very rigid... I don't understand
>> why oVirt has been designed with these limitations.
>>
>> Regarding my performance worry (without configuring any kind of
>> backup):
>>
>> Do you mean that 1000 VDIs on a shared gluster volume provided by the
>> 10 physical hosts (the same hosts that run KVM) won't have performance
>> problems? Do you know of any similar experience?
>>
>> And related to rsync, as Gluster geo-replication is fully supported, do
>> you have experience backing up VMs with it?
>>
>> Thanks a lot.
>>
>>
>> 2016-11-23 12:36 GMT+01:00 Pavel Gashev <Pax(a)acronis.com>:
>>
>>> 1. You can create a datacenter per host, but you can't have storage
>>> shared among datacenters.
>>>
>>>
>>>
>>> 2. I mean the backups themselves would add performance problems. When you
>>> rsync a disk image, rsync reads both the source and the destination images
>>> in full in order to find the differences. In other words, if you want daily
>>> backups, rsync will read everything located on the local storages, plus
>>> everything located on gluster, every day. On top of that, to make
>>> consistent backups you have to take VM snapshots and merge them back after
>>> the rsync.
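>>>
>>> As a rough sketch of that cycle (VM name, disk target and paths below are
>>> placeholders, and the snapshot step is shown with plain libvirt tools just
>>> to illustrate the sequence, not as the oVirt-supported way):
>>>
>>>     #!/bin/bash
>>>     # 1. Take a live, disk-only snapshot so the base image stops changing
>>>     #    while it is being copied.
>>>     virsh snapshot-create-as vdi001 backup-snap --disk-only --atomic
>>>     # 2. rsync the now-stable base image. Both ends are read in full to
>>>     #    compute the delta, which is what makes this so I/O-heavy.
>>>     rsync -a --inplace /var/lib/images/vdi001.img backup:/backup/vdi001.img
>>>     # 3. Merge the snapshot overlay back into the base image ('vda' is
>>>     #    the assumed disk target).
>>>     virsh blockcommit vdi001 vda --active --pivot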
>>>
>>>
>>>
>>>
>>>
>>> *From: *Oscar Segarra <oscar.segarra(a)gmail.com>
>>> *Date: *Wednesday 23 November 2016 at 13:42
>>> *To: *Pavel Gashev <Pax(a)acronis.com>
>>>
>>> *Cc: *users <users(a)ovirt.org>
>>> *Subject: *Re: [ovirt-users] Storage questions
>>>
>>>
>>>
>>> Hi Pavel,
>>>
>>>
>>>
>>> 1. A local storage datacenter doesn't support multiple hosts. If you have
>>> multiple hosts, you have to have shared storage, even if it's a
>>> hyper-converged setup.
>>>
>>>
>>>
>>> Is it not possible to create a datacenter for each node and set up a
>>> shared storage (spanning all hosts) for storing the engine and other
>>> infrastructure virtual servers?
>>>
>>>
>>>
>>> 2. In your case most of the disk and network performance would be used by
>>> backups, and a backup cycle would take more than 24 hours. Even rsync would
>>> take considerable resources, since it at least has to read the whole disk
>>> images.
>>>
>>>
>>>
>>> Do you mean that 1000 VDIs on a shared gluster volume provided by the
>>> 10 physical hosts (the same hosts that run KVM) won't have performance
>>> problems? Do you know of any similar experience?
>>>
>>>
>>>
>>> Related to rsync, the idea is to launch one rsync process per physical
>>> node to back up the virtual machines it contains. But if you expect rsync
>>> to take the whole day... do you mean gluster geo-replication will take
>>> 24 hours too?
>>>
>>>
>>>
>>> Thanks a lot
>>>
>>>
>>>
>>>
>>>
>>> 2016-11-23 11:02 GMT+01:00 Pavel Gashev <Pax(a)acronis.com>:
>>>
>>> Oscar,
>>>
>>>
>>>
>>> I’d make two notes:
>>>
>>>
>>>
>>> 1. A local storage datacenter doesn't support multiple hosts. If you have
>>> multiple hosts, you have to have shared storage, even if it's a
>>> hyper-converged setup.
>>>
>>>
>>>
>>> 2. In your case most of the disk and network performance would be used by
>>> backups, and a backup cycle would take more than 24 hours. Even rsync would
>>> take considerable resources, since it at least has to read the whole disk
>>> images.
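>>>
>>> (A rough back-of-envelope, assuming 100 GB per VDI: 100 VDIs per host is
>>> about 10 TB of image data per host. Reading that once from rotational
>>> disks at ~150 MB/s already takes ~18 hours, and rsync also has to read
>>> the copy on gluster, so a daily cycle easily exceeds 24 hours.)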
>>>
>>>
>>>
>>> I’d recommend a scenario with a dedicated shared storage that supports
>>> snapshots.
>>>
>>>
>>>
>>>
>>>
>>> *From: *<users-bounces(a)ovirt.org> on behalf of Oscar Segarra <
>>> oscar.segarra(a)gmail.com>
>>> *Date: *Wednesday 23 November 2016 at 03:11
>>> *To: *Yaniv Dary <ydary(a)redhat.com>
>>> *Cc: *users <users(a)ovirt.org>
>>> *Subject: *Re: [ovirt-users] Storage questions
>>>
>>>
>>>
>>> Hi,
>>>
>>>
>>>
>>> As oVirt makes it possible to attach local storage, I suppose it can be
>>> used to run virtual machines.
>>>
>>>
>>>
>>> I have drawn a couple of diagrams in order to find out whether it is
>>> possible to set up this configuration:
>>>
>>>
>>>
>>> 1.- The ongoing scenario:
>>>
>>> Every host runs 100 VDI virtual machines whose disks are placed on
>>> local storage. There is a common gluster volume shared between all nodes.
>>>
>>>
>>>
>>> [inline image 1: diagram of the ongoing scenario]
>>>
>>>
>>>
>>> 2.- If one node fails:
>>>
>>>
>>>
>>> [inline image 2: diagram of a node failure]
>>>
>>>
>>>
>>> oVirt would have to be able to inventory the copies of the machines (in
>>> our example vdi201 ... vdi300) and start them on the remaining nodes.
>>>
>>>
>>>
>>> Is it possible to achieve this configuration with oVirt, or something
>>> similar?
>>>
>>>
>>>
>>> Making backups with the snapshot-based import-export procedure can take a
>>> lot of time and resources. Incremental rsync is cheaper in terms of
>>> resources.
>>>
>>>
>>>
>>> Thanks a lot.
>>>
>>>
>>>
>>>
>>>
>>> 2016-11-22 10:49 GMT+01:00 Yaniv Dary <ydary(a)redhat.com>:
>>>
>>> I suggest you set up that environment, test the performance, and report
>>> if you have issues.
>>>
>>> Please note that currently there is no data locality guarantee, so a VM
>>> might be running on a host that doesn't have its disks.
>>>
>>>
>>>
>>> We have APIs to do backup/restore, and that is the only supported option
>>> for backup:
>>> https://www.ovirt.org/develop/release-management/features/storage/backup-restore-api-integration/
>>>
>>> You can also look at the Gluster DR option that was posted a while back.
>>> It used geo-replication and the import storage domain feature to do the DR.
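>>>
>>> In outline, that approach looks something like this (volume and host names
>>> are placeholders; it assumes the slave volume already exists and
>>> passwordless SSH is set up between the sites):
>>>
>>>     #!/bin/bash
>>>     # One-time setup: create and start a geo-replication session from the
>>>     # production volume to the remote backup volume.
>>>     gluster volume geo-replication vdistore backuphost::vdibackup create push-pem
>>>     gluster volume geo-replication vdistore backuphost::vdibackup start
>>>     # Verify that changes are flowing to the remote site.
>>>     gluster volume geo-replication vdistore backuphost::vdibackup status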
>>>
>>>
>>>
>>>
>>> Yaniv Dary
>>>
>>> Technical Product Manager
>>>
>>> Red Hat Israel Ltd.
>>>
>>> 34 Jerusalem Road
>>>
>>> Building A, 4th floor
>>>
>>> Ra'anana, Israel 4350109
>>>
>>>
>>>
>>> Tel : +972 (9) 7692306
>>>
>>> 8272306
>>>
>>> Email: ydary(a)redhat.com
>>>
>>> IRC : ydary
>>>
>>>
>>>
>>> On Mon, Nov 21, 2016 at 5:17 PM, Oscar Segarra <oscar.segarra(a)gmail.com>
>>> wrote:
>>>
>>> Hi,
>>>
>>>
>>>
>>> I'm planning to deploy a scalable VDI infrastructure where each physical
>>> host can run over 100 VDIs, and I'd like to deploy 10 physical hosts
>>> (1000 VDIs in total).
>>>
>>>
>>>
>>> In order to avoid performance problems (I think replicating the changes of
>>> 1000 VDIs over the gluster network could cause them), I have thought of
>>> using local storage for the VDIs, accepting that VDIs then cannot be
>>> migrated between physical hosts.
>>>
>>>
>>>
>>> Is my performance worry well founded?
>>>
>>> Is it possible to use local SSD storage for VDIs?
>>>
>>>
>>>
>>> I'd like to configure a gluster volume for backup on rotational disks
>>> (tiered + replica 2 + stripe 2), just to provide HA if a physical host fails.
>>>
>>>
>>>
>>> Is it possible to use rsync for backing up VDIs?
>>>
>>> If not, how can I sync/back up the VDIs running on local storage to the
>>> gluster shared storage?
>>>
>>> If a physical host fails, how can I start the latest backup of the VDIs
>>> from the shared gluster volume?
>>>
>>>
>>>
>>> Thanks a lot
>>>
>>>
>>>
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users