On 04/30/2012 07:30 PM, Itamar Heim wrote:
> On 04/29/2012 04:19 PM, Dan Kenigsberg wrote:
>> On Sun, Apr 29, 2012 at 07:24:52AM -0400, Andrew Cathrow wrote:
>>>
>>>
>>> ----- Original Message -----
>>>> From: "Dan Kenigsberg"<danken(a)redhat.com>
>>>> To: "Gal Hammer"<ghammer(a)redhat.com>
>>>> Cc: vdsm-devel(a)lists.fedorahosted.org
>>>> Sent: Sunday, April 29, 2012 7:19:10 AM
>>>> Subject: Re: [vdsm] reserve virtio-balloon device created by libvirt
>>>>
>>>> On Mon, Apr 23, 2012 at 04:00:55PM +0300, Gal Hammer wrote:
>>>>> On 23/04/2012 12:26, Mark Wu wrote:
>>>>>> Hi guys,
>>>>>>
>>>>>> I saw that an option to create a balloon device was added by Gal in
>>>>>> http://gerrit.ovirt.org/1573
>>>>>> I have a question about it. Why don't we preserve the old default
>>>>>> behaviour? I know it's not supported by ovirt-engine now, but I can't
>>>>>> figure out what will break if it's not disabled explicitly. So do you
>>>>>> think we can just make use of the balloon device added by libvirt?
>>>>>
>>>>> We didn't change the old behavior.
>>>>>
>>>>> Libvirt creates a memory-balloon device by default, so vdsm's
>>>>> default was to disable it by adding a "none"-type device. This was
>>>>> done because vdsm didn't include an option to add such a device.
>>>>>
>>>>> My patch added an option to create a memory-balloon through vdsm. If
>>>>> the user didn't request the device, the behavior is the same as
>>>>> before, disabling the memory-balloon.
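For reference, the difference boils down to the memballoon model in the
generated domain XML (just a sketch of the two cases):

    <!-- libvirt's default when nothing is specified: a virtio balloon
         that consumes a guest PCI slot -->
    <memballoon model='virtio'/>

    <!-- what vdsm emits today unless a balloon device is requested -->
    <memballoon model='none'/>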
>>>>
>>>> I feel that it would be best not to flip Vdsm's default at the
>>>> moment, even though it is the opposite of libvirt's. I would consider
>>>> flipping them only after your (Mark's) patches are in, tested, and
>>>> proven worthwhile for the common case.
>>>>
>>>> Currently, without any management for the balloon, reserving a guest
>>>> PCI device was deemed wasteful.
>>>
>>> On the other side of the fence:
>>> - We know that we do need to do ballooning.
>>> - In the (next?) release we'll end up adding this support.
>>> - There's no harm (see next point) in adding the device now; in fact,
>>>   it saves a config change on upgrade.
>>
>> Well, there is a surprise factor for someone running a guest generated
>> in a previous version. Suddenly, after a Vdsm upgrade, it would see an
>> additional device.
surprise == another minor reason for a Windows guest to re-activate.
>> At the least, I would like Vdsm to have a
>> configurable option to keep the old behavior.
In qemu we have a compatibility level controlled by the -M flag.
VDSM should have a similar compatibility level, and defaults shouldn't
normally change in minor releases.
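A minimal sketch of what such a knob could look like in /etc/vdsm/vdsm.conf
(the option name below is hypothetical, not an existing setting):

    [vars]
    # hypothetical option: when false, keep today's behaviour and emit
    # <memballoon model='none'/> unless a balloon device was requested
    create_balloon_device = false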
>
> please take into consideration that the engine has an algorithm checking
> the max number of devices; it should be aware of devices newly introduced
> by vdsm, or it will overflow.
It is good timing to change the algorithm too. IIRC the algorithm has
some hard-coded assumptions about qemu. Instead, it would be better to
consult qemu at run time and get the number of free PCI slots and any
other limits.
The engine uses the hard-coded limitation, for example, when adding a disk
(using a PCI slot) to a VM while the VM is down (no qemu to consult with).
Soon qemu will support PCI bridges, which will increase the number of
possible PCI devices, and virtio-scsi will add yet another factor to the
calculation.
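As a rough sketch of what "consulting at run time" could look like (using
the libvirt python bindings; it only counts PCI addresses already present
in the domain XML and knows nothing about bridges or multifunction
devices):

    import libvirt
    import xml.etree.ElementTree as ET

    # Assumption: a single root PCI bus as on qemu's 'pc' machine type,
    # with roughly 31 slots usable for devices.
    PC_USABLE_SLOTS = 31

    def used_pci_slots(domain_name):
        """Count PCI slots already occupied in a domain's XML."""
        conn = libvirt.open('qemu:///system')
        try:
            dom = conn.lookupByName(domain_name)
            root = ET.fromstring(dom.XMLDesc(0))
            addrs = root.findall(".//address[@type='pci']")
            return len(set((a.get('bus'), a.get('slot')) for a in addrs))
        finally:
            conn.close()

    def free_pci_slots(domain_name):
        return PC_USABLE_SLOTS - used_pci_slots(domain_name)

The engine could keep a fallback constant only for the offline case, where
there is no qemu to ask.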
>
>>
>>> - While it takes up a PCI slot, only very, very rare deployments will
>>>   ever hit the limit; libvirt/virt-manager/virt-install have done this
>>>   forever without seeing push back.
>>
>>
_______________________________________________
Engine-devel mailing list
Engine-devel(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel