On Mon, Jan 23, 2017 at 7:33 PM, Marina Kalinin <mkalinin(a)redhat.com> wrote:
Hi Christian,
Indeed, LVM adds some overhead when scanning the domains, so there is a
practical limit of around 300 LVs per domain, where each snapshot of each
disk counts as a new LV.
We are currently testing 1000 LVs per storage domain. It is slower than 750;
I'm not sure yet how it compares to 100 or 300 LVs.
But basically, the number of LVs is limited, so once you pass the limit that
makes the system too slow for you, you need to add a new storage domain.
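For what it's worth, a quick way to see how close a block domain is to that
limit is counting the LVs in its volume group with `vgs`. This is only a
sketch: the VG name, the 300 threshold, and the helper function below are
placeholders for illustration, not anything oVirt ships.

```shell
#!/bin/sh
# Count the LVs in a block domain's VG. On a host, the VG is named after
# the storage domain UUID; "my_domain_vg" below is a placeholder:
#   vgs --noheadings -o lv_count my_domain_vg
#
# Illustrative headroom check against the ~300-LV soft limit discussed
# above; the function name and default threshold are just for this sketch.
check_lv_headroom() {
    lv_count=$1
    limit=${2:-300}
    if [ "$lv_count" -ge "$limit" ]; then
        echo "over soft limit ($lv_count/$limit): consider adding a storage domain"
    else
        echo "ok ($lv_count/$limit)"
    fi
}

check_lv_headroom 280   # -> ok (280/300)
```

You could feed the helper the `vgs` output for each domain's VG to see which
domains are approaching the point where scanning slows down.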
If you are managing fewer than 300 LVs, and you don't have any other reason
to have multiple storage domains (e.g. separate storage servers, separate
users, etc.), fewer storage domains are best.
I don't know if having more physical devices on the host will lead to better
performance; you will have to test this in your environment.
On the other hand, having too many domains can also affect overall
performance, since oVirt has to manage all of them.
I suggest that if you have 7 now and they work fine, you stick with them.
Or reduce to 3 or 5 and monitor your performance.
Also, afaik, there should be a lot of performance improvements coming in the
new release, so try 4.1 and see.
I hope this helps.
Marina.
On Mon, Jan 23, 2017 at 9:53 AM, Maor Lipchuk <mlipchuk(a)redhat.com> wrote:
>
> I know that there is a monitoring process which monitors each Storage
> Domain, so I guess that with multiple storage domains the monitor will run
> multiple times each cycle, but I think the effect on the Host or the
> Storage Domain is insignificant.
>
> You should also consider the limit on logical volumes in a volume group;
> IINM, LVM has a default limit of 255 volumes (I'm not sure how it behaves
> regarding efficiency if you use more volumes).
> oVirt also uses LVs for snapshots, so with one storage domain you might hit
> this limit pretty quickly.
>
> On Mon, Jan 23, 2017 at 4:35 PM, Grundmann, Christian
> <Christian.Grundmann(a)fabasoft.com> wrote:
>>
>> Hi,
>>
>> Thanks for your input.
>>
>> Both aren't problems for me; all domains are from the same storage and so
>> can't be maintained independently.
>>
>>
>>
>> Are there performance problems with only one Domain (like waiting for
>> locks etc.) which I don’t have that much with multiple?
>>
>>
>>
>> Thx Christian
>>
>>
>>
>>
>>
>> From: Maor Lipchuk [mailto:mlipchuk@redhat.com]
>> Sent: Monday, 23 January 2017 15:32
>> To: Grundmann, Christian <Christian.Grundmann(a)fabasoft.com>
>> Cc: users(a)ovirt.org
>> Subject: Re: [ovirt-users] Storage Domain sizing
>>
>>
>>
>> There are many factors that can be discussed on this issue;
>>
>> two things that pop into my mind are that many storage domains will make
>> your Data Center more robust and flexible, since you can put part of the
>> storage domains into maintenance, which could help with upgrading your
>> storage server in the future or fixing issues that might occur in your
>> storage, without moving your Data Center to a non-operational state.
>>
>>
>>
>> One storage domain is preferable if you want to use large disks with
>> your VMs that small storage domains do not have the capacity for.
>>
>>
>>
>> Regards,
>>
>> Maor
>>
>>
>>
>> On Mon, Jan 23, 2017 at 9:52 AM, Grundmann, Christian
>> <Christian.Grundmann(a)fabasoft.com> wrote:
>>
>> Hi,
>>
>> I am about to migrate to a new storage.
>>
>> What's the best practice in sizing?
>>
>> 1 big Storage Domain or multiple smaller ones?
>>
>>
>>
>> My current Setup:
>>
>> 11 Hosts
>>
>> 7 FC Storage Domains 1 TB each
>>
>>
>>
>> Can anyone tell me the pro and cons of 1 vs. many?
>>
>>
>>
>>
>>
>> Thx Christian
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>>
http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
>
>
--
Thanks,
Marina.