[Engine-devel] oVirt upstream meeting: VM Version

Hi,

Today we discussed the Stable Device Addresses feature. One of the questions that arose from the meeting (and is actually listed as an open issue in the feature wiki) is: what happens to a 3.1 VM running on a 3.1 cluster when it is moved to a 3.0 cluster? We found that the VM may lose some configuration data, and it may even become corrupted.

From that point we came to the conclusion that we need to maintain some kind of VM version that will allow us to block moving a VM if its version is not fully supported by, or compatible with, the target cluster. One idea for getting the VM version is the OVF, which already holds an OvfVersion field in its header. The question is: is the OVF good enough for all our needs, or should we persist the version somewhere else (for example in the DB)? Also, are there any other issues/difficulties we may run into when implementing and storing a VM version?

Keep in mind that this is a new feature that impacts the Stable Device Addresses feature, but it may be useful/relevant for other features as well.

Thanks,
Eli
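To make the kind of check being discussed concrete, here is a rough Python sketch (illustrative only; the persisted per-VM version string and the function names are assumptions, not existing engine code):

# Illustrative sketch, not engine code: refuse to move a VM whose recorded
# version is newer than the target cluster's compatibility version, since
# the older cluster cannot honour everything the VM was created with.

def parse_level(level: str):
    """Turn a compatibility level such as '3.1' into a comparable tuple (3, 1)."""
    return tuple(int(part) for part in level.split("."))

def can_move_vm(vm_version: str, target_cluster_version: str) -> bool:
    """Allow the move only if the target cluster is at least as new as the VM."""
    return parse_level(target_cluster_version) >= parse_level(vm_version)

if __name__ == "__main__":
    print(can_move_vm("3.1", "3.0"))  # False -> block: 3.1-only data would be lost
    print(can_move_vm("3.1", "3.1"))  # True
    print(can_move_vm("3.0", "3.1"))  # True -> moving to a newer cluster is fine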

On 02/02/2012 02:56 AM, Eli Mesika wrote:
Hi,

Today we discussed the Stable Device Addresses feature. One of the questions that arose from the meeting (and is actually listed as an open issue in the feature wiki) is: what happens to a 3.1 VM running on a 3.1 cluster when it is moved to a 3.0 cluster? We found that the VM may lose some configuration data, and it may even become corrupted. [...]
Can you give some examples of what would cause an issue when moving a VM from a 3.1 cluster to a 3.0 one?

On 02/02/2012 08:46 AM, Itamar Heim wrote:
On 02/02/2012 02:56 AM, Eli Mesika wrote:
Hi,

Today we discussed the Stable Device Addresses feature. [...] From that point we came to the conclusion that we need to maintain some kind of VM version. [...]
What do you mean by VM version? Is that the guest hardware abstraction version (which is the kvm hypervisor release + the '-M' flag for compatibility)? I think it's the above plus the metadata/devices you keep for it.

----- Original Message -----
On 02/02/2012 08:46 AM, Itamar Heim wrote:
On 02/02/2012 02:56 AM, Eli Mesika wrote:
[...]
What do you mean by VM version? Is that the guest hardware abstraction version (which is the kvm hypervisor release + the '-M' flag for compatibility)?
I think it's the above plus the metadata/devices you keep for it.
Correct. There are several issues here:

1. You lose the stable device addresses (there is no point in keeping the data in the DB, as the next time the VM is run the devices can get different addresses).
2. If you move the VM to an older cluster where the hosts don't support the VM's compatibility mode (-M), then the VM would be started with different virtual hardware, which might cause problems.
3. Once we support S4 (suspend to disk), running the VM again with different hardware might be even more problematic than just starting it from shutdown (e.g. once we have a balloon device with memory assigned to it that suddenly disappears, what would happen to the VM?).
4. The same applies to migrate-to-file, but this can be dealt with by not allowing a VM to be moved between incompatible clusters if it has a migrate-to-file state (or by deleting the file).

A side note: I'm not sure whether exporting a VM also exports the state file after migrate-to-file; if not, then it probably should...

I'm sure there are additional scenarios we're not thinking of.

On 02/02/2012 12:15 PM, Ayal Baron wrote:
[...]
Correct. There are several issues here: 1. You lose the stable device addresses (there is no point in keeping the data in the DB, as the next time the VM is run the devices can get different addresses). [...]
The same would apply to a direct LUN attached to the VM, custom properties defined for it, multiple monitors for SPICE on Linux guests, etc. I think we should add validations for the things we know are not supported, but otherwise allow it.

On 03/02/12 17:00, Itamar Heim wrote:
[...]
The same would apply to a direct LUN attached to the VM, custom properties defined for it, multiple monitors for SPICE on Linux guests, etc. I think we should add validations for the things we know are not supported, but otherwise allow it.
IIUC, you suggest using per-feature granularity to decide on which cluster (version) the VM can be started. Note that *all* VMs that were started on a 3.1 cluster will lose functionality when running on a 3.0 cluster (the stable device addresses will be lost).

I would go with a simple approach here: derive the VM version from the cluster version, so that the VM can be executed on all hosts in the cluster without losing any functionality; when we change the VM's cluster, we effectively change the VM version. I would require a force flag to execute the VM on a lower cluster version.

What we are missing today is saving this version as part of the OVF, to support version-compatibility checks during import/export VM flows and snapshots of the VM configuration.
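As a rough illustration of that last point, stamping the version into the OVF and reading it back could look something like the sketch below (the bare-bones XML and the "vm_version" attribute name are placeholders, not oVirt's actual OVF layout):

# Illustrative sketch only: record the VM/cluster version in the OVF so that
# import/export can check compatibility later.  The "vm_version" attribute is
# a placeholder name, not part of oVirt's real OVF schema.
import xml.etree.ElementTree as ET
from typing import Optional

def stamp_vm_version(ovf_xml: str, version: str) -> str:
    """Return the OVF with a version attribute set on its root element."""
    root = ET.fromstring(ovf_xml)
    root.set("vm_version", version)
    return ET.tostring(root, encoding="unicode")

def read_vm_version(ovf_xml: str) -> Optional[str]:
    """Return the stamped version, or None for OVFs written before the change."""
    return ET.fromstring(ovf_xml).get("vm_version")

if __name__ == "__main__":
    ovf = '<Envelope version="0.9"><VirtualSystem id="vm1"/></Envelope>'
    stamped = stamp_vm_version(ovf, "3.1")
    print(read_vm_version(stamped))  # 3.1
    print(read_vm_version(ovf))      # None -> treat as a pre-existing (older) VM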

----- Original Message -----
[...]
IIUC, you suggest using per-feature granularity to decide on which cluster (version) the VM can be started. Note that *all* VMs that were started on a 3.1 cluster will lose functionality when running on a 3.0 cluster (the stable device addresses will be lost).
I would go with a simple approach here: derive the VM version from the cluster version, so that the VM can be executed on all hosts in the cluster without losing any functionality; when we change the VM's cluster, we effectively change the VM version. I would require a force flag to execute the VM on a lower cluster version.
Isn't the VM version derived from the version of the cluster on which it was last edited? For example: you've created a VM on a v3.0 cluster. When it is running on a v3.2 cluster, is there any reason to change its version? When it is edited, then perhaps yes, because it may have changed/added properties/features that are only applicable to v3.2. But until then, let it stay at the version it was created with.
(BTW, how does this map, if at all, to the '-m' qemu command line switch?)
Y.

On 05/02/12 10:34, Yaniv Kaul wrote:
[...]
Isn't the VM version derived from the version of the cluster on which it was last edited? [...] But until then, let it stay at the version it was created with. (BTW, how does this map, if at all, to the '-m' qemu command line switch?)
Y.
Currently we do not persist the VM version at all; it is derived from the version of the cluster the VM belongs to (that's why I suggested saving it as part of the OVF, so that we can be aware of the VM version when exporting/importing a VM, etc.).

The VM does not have to be edited to be influenced by the cluster version. For example, if you start a VM on a 3.1 cluster you get the stable device addresses feature with no manual editing.

Livnat

On 05/02/12 10:45, Livnat Peer wrote:
On 05/02/12 10:34, Yaniv Kaul wrote:
[...] (BTW, how does this map, if at all, to the '-m' qemu command line switch?)
Y.
About the -m switch: the engine derives it from the cluster level.
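In other words, the derivation can be pictured as a simple per-cluster-level lookup; the sketch below is illustrative only, and the machine-type strings are examples rather than the engine's real configuration values:

# Illustrative only: the emulated machine type passed to qemu via -M is
# derived from the cluster compatibility level, not stored per VM today.
# The values below are examples, not oVirt's actual mapping.
CLUSTER_LEVEL_TO_MACHINE = {
    "3.0": "pc-0.14",
    "3.1": "pc-0.15",
}

def emulated_machine_for(cluster_level: str) -> str:
    return CLUSTER_LEVEL_TO_MACHINE[cluster_level]

print(emulated_machine_for("3.1"))  # pc-0.15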

----- Original Message -----
From: "Livnat Peer" <lpeer@redhat.com> To: "Yaniv Kaul" <ykaul@redhat.com> Cc: dlaor@redhat.com, engine-devel@ovirt.org Sent: Sunday, February 5, 2012 10:46:56 AM Subject: Re: [Engine-devel] oVirt upstream meeting : VM Version
[...]
However, I do agree with Yaniv that changing the VM version "under the hood" is a bit problematic. The version is a parameter associated with the create/update operations, and less with the Run command.

On 02/05/2012 02:57 PM, Miki Kenneth wrote: ...
[...]
However, I do agree with Yaniv that changing the VM version "under the hood" is a bit problematic. The version is a parameter associated with the create/update operations, and less with the Run command.
But the engine currently has no logic to detect the need to increase the emulated machine to support feature X, and it currently does not save this parameter at the VM level. It would also need to compare it to the list of emulated machines supported by the cluster, and prevent running the VM if there isn't a match.

It also increases the matrix of possible emulated machines being run on different versions of the hypervisor to N * cluster_levels, instead of just the number of cluster levels.

Plus, if a cluster is upgraded to a new version of hosts which doesn't support an older emulated machine level, will the user need to upgrade all VMs one by one? (Or will the engine block upgrading the cluster level if the new cluster level doesn't provide an emulated machine that is in use by one of the virtual machines?)

It also means the engine needs to handle validation logic for this field when exporting/importing (the point of this discussion), as well as when simply moving a VM between clusters.

So before introducing all this logic: were issues actually observed where changing the cluster level (i.e., -M at the host level) resulted in problematic changes at the guest level that are worth all of this?
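For the sake of discussion, the validation logic that a per-VM emulated machine (or per-VM version) would require might amount to checks like the following sketch (illustrative only; none of these fields or checks exist in the engine today):

# Illustrative sketch of the extra validation a per-VM emulated machine
# (or per-VM version) would force on the engine.  Names and values are
# made up for the example.

def can_run_vm(vm_machine: str, cluster_machines: set) -> bool:
    """Refuse to start a VM whose emulated machine the cluster cannot provide."""
    return vm_machine in cluster_machines

def can_upgrade_cluster(vm_machines: list, new_level_machines: set) -> bool:
    """Block raising the cluster level while any VM still needs a machine type
    that the new level no longer offers (otherwise VMs must be upgraded one by one)."""
    return all(machine in new_level_machines for machine in vm_machines)

if __name__ == "__main__":
    print(can_run_vm("pc-0.14", {"pc-0.15"}))                        # False -> block run
    print(can_upgrade_cluster(["pc-0.14", "pc-0.15"], {"pc-0.15"}))  # False -> block upgrade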

----- Original Message -----
On 02/05/2012 02:57 PM, Miki Kenneth wrote: ...
[...]
However, I do agree with Yaniv that changing the VM version "under the hood" is a bit problematic. The version is a parameter associated with the create/update operations, and less with the Run command.
It's not under the hood; the user effectively chose to change it when she changed the cluster level.

Going forward, we could check the version before running the VM and warn the user (so that the change would take effect per VM and not per cluster), but that would be annoying, and to mitigate it we would need to add a checkbox when changing the cluster level, "Automatically upgrade VMs" or something (to keep the current simple behaviour).

----- Original Message -----
----- Original Message -----
On 02/05/2012 02:57 PM, Miki Kenneth wrote: ...
[...]
Another thing that would require a VM version is Unicode support in the OVF.

On 07/02/12 17:43, Ayal Baron wrote:
[...]
Another thing that would require a VM version is Unicode support in the OVF.
Why isn't Unicode an OVF version issue?
-- /d "The answer, my friend, is blowing in the wind" --Bob Dylan, Blowin' in the Wind (1963)
participants (8)
- Ayal Baron
- Dor Laor
- Doron Fediuck
- Eli Mesika
- Itamar Heim
- Livnat Peer
- Miki Kenneth
- Yaniv Kaul