----- Original Message -----
> On 11/20/2013 09:07 AM, Mike Kolesnik wrote:
>> ----- Original Message -----
>>> On 11/11/2013 11:48 AM, Mike Kolesnik wrote:
>>>>
>>>> ----- Original Message -----
>>>>> Hi Mike,
>>>>>
>>>>> ----- Original Message -----
>>>>>> From: "Mike Kolesnik" <mkolesni(a)redhat.com>
>>>>>> To: "engine-devel" <engine-devel(a)ovirt.org>
>>>>>> Cc: "Barak Azulay" <bazulay(a)redhat.com>, "Martin Perina"
>>>>>> <mperina(a)redhat.com>, "Livnat Peer" <lpeer(a)redhat.com>,
>>>>>> "Itamar Heim" <iheim(a)redhat.com>
>>>>>> Sent: Monday, November 11, 2013 8:49:33 AM
>>>>>> Subject: Custom properties per device + vNIC profile = not
>>>>>> working (< 3.3)
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I came across a situation where I wanted to define custom
>>>>>> properties on a vNIC profile sitting under a network in a 3.2
>>>>>> data center.
>>>>>> From what I saw, the configuration value for custom properties
>>>>>> (CustomDeviceProperties) is split into 4, one per version (3.0,
>>>>>> 3.1, 3.2, 3.3).
>>>>>> Since the vNIC profile is located under the DC tree, it takes
>>>>>> the DC version - 3.2 in this specific case.
>>>>>
>>>>> Custom Device Properties were designed to be specified for each
>>>>> cluster version independently; the DC version is irrelevant here.
>>>>> AFAIK the cluster version defines what features are available ...
>>>>>
>>>>>>
>>>>>> I tried to set the config value for 3.2 but got:
>>>>>> $ engine-config -s CustomDeviceProperties="{type=interface;prop={myProp=[a-zA-Z0-9-]+}}" --cver=3.2
>>>>>> Cannot set value {type=interface;prop={myProp=[a-zA-Z0-9-]+}} to
>>>>>> key CustomDeviceProperties. Device custom properties are not
>>>>>> supported in version 3.2
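The value syntax used in that command can be sketched as follows. This is an illustrative Python reading of the `{type=<device>;prop={<name>=<regex>}}` format shown above, not the engine's actual parser; the function names are made up for the example:

```python
import re

# Illustrative sketch only: mimics the CustomDeviceProperties value
# syntax quoted above; the real engine-config parser may differ.
def parse_spec(spec):
    """Extract the device type and property-name -> value-regex pairs."""
    m = re.fullmatch(r"\{type=(\w+);prop=\{(.*)\}\}", spec)
    if m is None:
        raise ValueError("malformed spec: %s" % spec)
    dev_type, props = m.group(1), {}
    for pair in m.group(2).split(";"):
        name, _, pattern = pair.partition("=")
        props[name] = pattern
    return dev_type, props

def value_allowed(props, name, value):
    """A property value is accepted only if its regex fully matches."""
    return name in props and re.fullmatch(props[name], value) is not None

dev, props = parse_spec("{type=interface;prop={myProp=[a-zA-Z0-9-]+}}")
print(dev)                                          # interface
print(value_allowed(props, "myProp", "abc-123"))    # True
print(value_allowed(props, "myProp", "bad value"))  # False (space rejected)
```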
>>>>>>
>>>>>> This is already not very good, since in a 3.2 DC there can be
>>>>>> 3.3 clusters with 3.3 hosts that do support custom device
>>>>>> properties.
>>>>>
>>>>> Specify your properties for version 3.3, since they will be used
>>>>> in 3.3 clusters ...
>>>>>
>>>>
>>>> But the effective version is the DC version, as I explained.
>>>>
>>>> In a DC of version 3.0-3.3 I can have clusters whose version
>>>> ranges from the DC version (the minimum) up to 3.3 (the maximum).
>>>> For example I can have the following:
>>>> DC - version 3.0
>>>> + Cluster 1 - version 3.0
>>>> + Cluster 2 - version 3.1
>>>> + Cluster 3 - version 3.2
>>>> + Cluster 4 - version 3.3
>>>>
>>>> In this constellation, I could use custom device properties only
>>>> on Cluster 4, but it's not possible to define them since the vNIC
>>>> profile is using the DC version, 3.0.
>>>> So effectively this feature is not usable to me unless I use a
>>>> 3.3 DC.
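The constellation above can be sketched like this. It is a hypothetical Python illustration (all names invented, not engine code) of why the profile check fails in a 3.0 DC even when the VM's cluster is 3.3:

```python
# Custom device properties exist from 3.3 on, per this thread.
SUPPORTED_SINCE = "3.3"

def version_at_least(v, minimum):
    return tuple(map(int, v.split("."))) >= tuple(map(int, minimum.split(".")))

def profile_supports_custom_props(dc_version, cluster_version):
    # Behaviour described above: vNIC profiles live under the DC tree,
    # so only the DC version is consulted; the cluster version is
    # effectively ignored by the check.
    return version_at_least(dc_version, SUPPORTED_SINCE)

# DC 3.0 containing a 3.3 cluster: the cluster could run the feature,
# but the profile check fails because it only sees the DC version.
print(profile_supports_custom_props("3.0", "3.3"))  # False
print(profile_supports_custom_props("3.3", "3.3"))  # True
```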
>>>>
>>>>>>
>>>>>> I also tried to alter the config value in the DB directly, but
>>>>>> the custom properties code ignored it since custom properties
>>>>>> are not supported in 3.2.
>>>>>> So, de facto, I have no reasonable way as a user to define
>>>>>> custom device properties to use for my vNIC profiles in a
>>>>>> DC < 3.3.
>>>>>
>>>>> There are two configuration properties for Custom Device
>>>>> Properties:
>>>>>
>>>>> 1) SupportCustomDeviceProperties
>>>>> - defines in which versions the properties are supported
>>>>> - cannot be altered by users, of course
>>>>>
>>>>> 2) CustomDeviceProperties
>>>>> - holds the properties specification for each version
>>>>> - can be defined using engine-config
>>>>>
>>>>>>
>>>>>> I opened the bug
>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1028757 for this,
>>>>>> however I also want to discuss the situation:
>>>>>
>>>>> I looked at the bug, and the problem is that the management
>>>>> network profile is bound to the DC and not the Cluster. And
>>>>> that's something we never thought of ...
>>>>>
>>>>>>
>>>>>> 1. As a user, I can't set custom properties for level < 3.3,
>>>>>> which is not good.
>>>>>
>>>>> Well, it's a 3.3 feature, so it looks OK to me
>>>>>
>>>>>> Removing the blocking, and loading custom properties for all
>>>>>> versions, would fix the bug and allow using custom device
>>>>>> properties for older versions; the reasonable place to block
>>>>>> this would be when running a VM (or plugging a device).
>>>>>> Basically this is the lesser issue..
>>>>>>
>>>>>> 2. I just don't see the added value of splitting the definition
>>>>>> of the properties per level..
>>>>>
>>>>> The idea behind the version splitting was:
>>>>>
>>>>> 1) We have a device with a feature that doesn't work correctly
>>>>> with version 3.3, but it's fixed in 3.4
>>>>> 2) By specifying the custom property per version we can disable
>>>>> this feature for 3.3 and enable it for 3.4
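That rationale can be sketched as one spec per cluster version, so a property can be withheld in 3.3 and exposed in 3.4. This is a hypothetical Python illustration; the property names and version keys are made up for the example:

```python
# Hypothetical per-version CustomDeviceProperties map: the
# "broken_feature" knob is withheld for 3.3 and exposed for 3.4.
custom_device_properties = {
    "3.3": "{type=interface;prop={queues=[0-9]+}}",
    "3.4": "{type=interface;prop={queues=[0-9]+;broken_feature=true|false}}",
}

def property_names(version):
    """Return the set of property names defined for a given version."""
    spec = custom_device_properties.get(version, "")
    # Strip down to the contents of prop={...} (the spec ends in "}}").
    inner = spec[spec.find("prop={") + len("prop={"):-2] if spec else ""
    return {p.split("=", 1)[0] for p in inner.split(";") if p}

print("broken_feature" in property_names("3.3"))  # False
print("broken_feature" in property_names("3.4"))  # True
```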
>>>>
>>>> Custom properties are not for specifying which features are
>>>> enabled; there is a whole other mechanism for that..
>>>>
>>>> Custom properties are for hooks (and other possible extensions),
>>>> which by definition are not something that is guaranteed to
>>>> exist, so I see no point in forcing the user to update multiple
>>>> configurations and causing confusion for him..
>>>
>>> as martin explained, we have predefined custom properties, which
>>> are based on the vdsm version, and hence are actually features we
>>> know whether or not to expose.
>>> user-defined custom properties are up to the admin, but we let
>>> these be at cluster level as well to allow more granularity.
>>
>> There are no predefined properties here, only user defined properties.
>>
>>>
>>>>
>>>>>
>>>>>> The custom properties are extensions which might or might not
>>>>>> be available to a certain VM. I don't see how having different
>>>>>> sets of custom properties per version (what version - DC
>>>>>> version, cluster version?) would make any difference - either
>>>>>> the VM can utilize the extension given some state of the
>>>>>> system, or it can't, but the determining factor is not the
>>>>>> version but rather the availability of the extension.
>>>>>> For example, I can have a hook for a vNIC altering some
>>>>>> property installed on host A and not host B; if the VM runs on
>>>>>> host A it will get this capability and on host B it won't,
>>>>>> regardless of the DC version the VM is in.
>>>>>>
>>>>>> This is not to say we shouldn't block custom properties on the
>>>>>> engine-VDSM API level, since it's only available since 3.3, but
>>>>>> this is handled by another config value
>>>>>> (SupportCustomDeviceProperties) which is not alterable by the
>>>>>> user.
>>>>>> So basically, I think splitting the value per version is
>>>>>> over-complication, and I see no added value to the users, just
>>>>>> more maintenance should they choose to use this feature.
>>>>>>
>>>>>> Your thoughts please.
>>>>>
>>>>> AFAIK only the network and storage teams wanted to use device
>>>>> custom properties in version 3.3, but I'm not sure what the
>>>>> current usage status is.
>>>>>
>>>>> But IMHO it's too late for 3.3 to change the specification ...
>>>>
>>>> Since I can have a 3.3 cluster in a DC < 3.3, and this is the
>>>> upgrade path for existing users, I'd argue that the bug is severe
>>>> enough and should be fixed asap, even for 3.3 versions.
>>>
>>> please note that if you expose this at cluster level and not DC
>>> level, you need to make sure to verify it when moving a VM between
>>> clusters in the same DC.
>>> also, if this is somehow related to logical networks, not vnic
>>> specific, then note that logical networks are DC level entities.
>>>
>>>
>>
>> OK but my point was that custom properties are not meant to be
>> split by versions, since by definition a hook might or might not
>> exist on a given host (regardless of the host version).
>> It only imposes more strain on the user to define possible custom
>> properties by version..
>>
>> I see no value to users in this approach, only more work and
>> unclearness..
>>
>> Mind you, a hook is not a "feature" that is explicitly supported on
>> a given version, but an extension mechanism which can have 3rd
>> party extensions that might or might not exist on a given host, and
>> this won't stop an action from occurring (i.e. a VM would still
>> start if a hook is missing but some custom property was sent).
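That semantic can be sketched as follows. It is an illustrative Python model of the behaviour described above (names invented, not engine or vdsm code): a custom property targeting a hook that isn't installed on the chosen host is simply inert there and does not block the VM from starting:

```python
# Illustrative model: hooks present on a host consume matching custom
# properties; unmatched properties are ignored rather than fatal.
def start_vm(host_hooks, custom_props):
    applied = {k: v for k, v in custom_props.items() if k in host_hooks}
    ignored = set(custom_props) - set(applied)
    return {"started": True, "applied": applied, "ignored": sorted(ignored)}

# Host A has the hook, host B doesn't; the VM starts on both, and the
# property only takes effect on host A.
print(start_vm({"vnic_tweak"}, {"vnic_tweak": "on"}))
print(start_vm(set(), {"vnic_tweak": "on"}))
```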
>>
>> Also, the original bug still exists: even though the vNIC sits at
>> the VM, which is in a cluster (thus in effect having access to the
>> cluster version), the profile sits under the network (which, as you
>> mention, is a DC level entity).
>> So for a user using a DC < 3.3 there is no option to use this
>> feature, even though he can have 3.3 clusters in his DC.
>>
>
> except some hooks are shipped as required, and some custom properties
> are supported by vdsm even without hooks.
> so allowing one to specify that they are 'there' for a specific vdsm
> version is useful.
>
Seems to me you're referring to things that should be in a predefined
properties list, which, as I mentioned, doesn't exist for this feature.
1. maybe it should.
2. still, i think the granularity isn't hurting in this case - more
likely a hook will be needed, or not needed, in specific compat
versions as features are added/removed from the product, even for
custom hooks.