Re: [Users] from irc

Hi,

I had a question on IRC regarding sysprep/sealing of Windows VMs and use in Pools. Basically, if you follow the Quick Start Guide, it says to seal the VM and shut it down before making the Template.

My problem with this is that when you start a VM from the Pool, it takes forever to unseal - i.e. to repersonalize itself. That's a bad experience from a VDI perspective - you want the user to get a desktop they can start using ASAP.

Itamar responded to me directly via e-mail:
bobdrad: on your question of windows VMs from pool - you can start them once with an admin for the sysprep to happen, then shut them down. admin launch of VMs doesn't create a stateless snapshot and manipulates the VM itself.
This raises some questions. I'd love to understand this better. He's asked me to cross this conversation onto the Users list now.

1. My understanding is that a Pool clones VMs on demand from a template. So how does the admin "launch" the template? I thought the only way to exercise a pool is from the User Portal. Is it sufficient to do that as Admin? I thought the persistence only came when launching a VM from the Admin Portal.

2. My understanding of "sealing" a system is that this depersonalizes it - e.g. removes hostname, prepares network for reinitialization, etc. And that the next time the system boots up it re-personalizes. So if one were to restart it, even as admin, this would reverse the sealing process, which would seem to make sealing in the first place pointless.

What am I missing? At the moment I don't see the point of sealing a VM before putting it into the Pool (assuming you're using DHCP, anyway). What happens if you don't?

Thanks,
Bob

P.S. I note the behavior of Fedora vs RHEL 6 is quite different in this regard. If you follow the "sealing" process on the Quick Start page for Fedora it seems to have no visible effect, but on RHEL 6 it puts you through a re-personalization dialog which is rather extensive (and again, not really suitable for VDI use).

On 11/15/2013 08:41 AM, Bob Doolittle wrote:
Hi,
I had a question on IRC regarding sysprep/sealing of Windows VMs and use in Pools. Basically, if you follow the Quick Start Guide, it says to seal the VM and shut it down before making the Template.
My problem with this is that when you start a VM from the Pool, it takes forever to unseal - i.e. to repersonalize itself. That's a bad experience from a VDI perspective - you want the user to get a desktop they can start using ASAP.
That's why Pools have "auto start VMs" - you can define that you want the pool to always keep X VMs up, running, and ready for users.
Itamar responded to me directly via e-mail:
bobdrad: on your question of windows VMs from pool - you can start them once with an admin for the sysprep to happen, then shut them down. admin launch of VMs doesn't create a stateless snapshot and manipulates the VM itself.
This raises some questions. I'd love to understand this better.
He's asked me to cross this conversation onto the Users list now.
1. My understanding is that a Pool clones VMs on demand from a template. So how does the admin "launch" the template? I thought the only way to exercise a pool is from the User Portal. Is it sufficient to do that as Admin? I thought the persistence only came when launching a VM from the Admin Portal.
The VMs are created as part of pool creation, and an admin can start them from the webadmin as well. Unless the admin flags the runvm action as stateless, it will change the VM - i.e., the admin will be starting the VMs created in the pool, not the template itself. Caveat: for Windows VMs, if the sysprep run is done by the admin (persistently), you need to change the domain policy to avoid the computer account password change (which would otherwise cause the VM to lose connectivity to the domain after 90 days). Hence auto-starting VMs is better for some use cases.
2. My understanding of "sealing" a system is that this depersonalizes it - e.g. removes hostname, prepares network for reinitialization, etc. And that the next time the system boots up it re-personalizes. So if one were to restart it, even as admin, this would reverse the sealing process, which would seem to make sealing in the first place pointless.
What am I missing? At the moment I don't see the point of sealing a VM before putting it into the Pool (assuming you're using DHCP, anyway). What happens if you don't?
It's considered Microsoft best practice. There is also a security concern if the VMs are not part of a domain, since they would all have the same SID. Other caveats may apply, but it may be good enough for your use case.
Thanks, Bob
P.S. I note the behavior of Fedora vs RHEL 6 is quite different in this regard. If you follow the "sealing" process on the Quick Start page for Fedora it seems to have no visible effect, but on RHEL 6 it puts you through a re-personalization dialog which is rather extensive (and again, not really suitable for VDI use).

On 11/15/2013 08:51 AM, Itamar Heim wrote:
On 11/15/2013 08:41 AM, Bob Doolittle wrote:
Hi,
I had a question on IRC regarding sysprep/sealing of Windows VMs and use in Pools. Basically, if you follow the Quick Start Guide, it says to seal the VM and shut it down before making the Template.
My problem with this is that when you start a VM from the Pool, it takes forever to unseal - i.e. to repersonalize itself. That's a bad experience from a VDI perspective - you want the user to get a desktop they can start using ASAP.
That's why Pools have "auto start VMs" - you can define that you want the pool to always keep X VMs up, running, and ready for users.
Yes I can see this becomes more important when there's such a long startup time due to unsealing.
Itamar responded to me directly via e-mail:
bobdrad: on your question of windows VMs from pool - you can start them once with an admin for the sysprep to happen, then shut them down. admin launch of VMs doesn't create a stateless snapshot and manipulates the VM itself.
This raises some questions. I'd love to understand this better.
He's asked me to cross this conversation onto the Users list now.
1. My understanding is that a Pool clones VMs on demand from a template. So how does the admin "launch" the template? I thought the only way to exercise a pool is from the User Portal. Is it sufficient to do that as Admin? I thought the persistence only came when launching a VM from the Admin Portal.
The VMs are created as part of pool creation, and an admin can start them from the webadmin as well.
OK so there seems to be a serious scalability issue here. It is not unusual for a large VDI deployment to utilize hundreds of VMs. From what you're telling me, if we don't want users to experience the unsealing delay (which takes several minutes) every time, an admin would have to start all of those VMs after creating the Pool, and then cleanly shut them all down again once they'd all finished their unsealing process. Either that, or set them all to auto-start, but this puts unnecessary load on the host since they have to unseal after snapshot-revert every time they come up.

There is perhaps an attractive feature that might help here. At the moment you can set an absolute number of "auto-start" VMs for the pool. I think what's needed is something more like an HA "standby" model which is applied to hosts and disks - you would like a certain number of not-yet-allocated, auto-started ("waiting-auto-started"?) VMs to be in the pool at all times (unless of course the pool is exhausted). In other words, say you have a large pool of VMs, and you set a waiting-auto-started count to 10. If 20 users connect to VMs, 10 more would be started up and waiting in the pool. You could shut down/return any extras as VMs are returned to the pool (optionally?). The goal here is to eat the startup delays in advance to have ready-to-use VMs available for users at all times, without needing to start them all (which would consume excessive host resources). I think this would be more valuable than a fixed count, although maybe that's helpful in some use cases.
Unless the admin flags the runvm action as stateless, it will change the VM - i.e., the admin will be starting the VMs created in the pool, not the template itself.
Excellent - so that's what the "stateless" radiobox is for. :)
Caveat: for Windows VMs, if the sysprep run is done by the admin (persistently), you need to change the domain policy to avoid the computer account password change (which would otherwise cause the VM to lose connectivity to the domain after 90 days).
Hence auto-starting VMs is better for some use cases.
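To make that concrete, here is a rough, untested sketch of the two ways to start a pool VM over the plain REST API - persistent for the admin warm-up, and with a stateless override like a user allocation. The engine URL, credentials, VM id, and the <vm><stateless> element inside the start action body are assumptions; double-check them against your engine version before relying on this.

# Untested sketch: persistent vs. stateless start of a pool VM via the REST API.
# The <vm><stateless> override inside the start action is an assumption based on
# how run-once exposes overrides; engine URL, credentials and VM id are placeholders.
import requests

ENGINE = "https://engine.example.com/api"
AUTH = ("admin@internal", "password")
CA = "/etc/pki/ovirt-engine/ca.pem"
VM_ID = "00000000-0000-0000-0000-000000000000"      # placeholder VM id

def start(vm_id, stateless=False):
    # A plain <action/> starts the VM persistently, so the sysprep/"unseal" run
    # is written to the VM's disk - this is the admin warm-up case.
    body = "<action/>"
    if stateless:
        # With the override, changes made during the run are discarded on shutdown,
        # which is effectively what a user allocation from the pool does.
        body = "<action><vm><stateless>true</stateless></vm></action>"
    requests.post("%s/vms/%s/start" % (ENGINE, vm_id), auth=AUTH, data=body,
                  headers={"Content-Type": "application/xml"},
                  verify=CA).raise_for_status()

start(VM_ID)                    # admin warm-up: let sysprep run and persist
# start(VM_ID, stateless=True)  # throw-away run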
2. My understanding of "sealing" a system is that this depersonalizes it - e.g. removes hostname, prepares network for reinitialization, etc. And that the next time the system boots up it re-personalizes. So if one were to restart it, even as admin, this would reverse the sealing process, which would seem to make sealing in the first place pointless.
What am I missing? At the moment I don't see the point of sealing a VM before putting it into the Pool (assuming you're using DHCP, anyway). What happens if you don't?
It's considered Microsoft best practice. There is also a security concern if the VMs are not part of a domain, since they would all have the same SID.
Other caveats may apply, but it may be good enough for your use case.
I see, that makes sense. I was also wondering from a security perspective. I know that when Linux first boots up it generates public/private key pairs - the SSH host keys, for example. So it would be bad to clone a VM that had already generated a (single) private key. I wouldn't be surprised if Windows does something similar at first OS boot.

-Bob

On Fri, Nov 15, 2013 at 6:32 PM, Bob Doolittle wrote:
OK so there seems to be a serious scalability issue here. It is not unusual for a large VDI deployment to utilize hundreds of VMs. From what you're telling me, if we don't want users to experience the unsealing delay (which takes several minutes) every time, an admin would have to start all of those VMs after creating the Pool, and then cleanly shut them all down again once they'd all finished their unsealing process. Either that, or set them all to auto-start, but this puts unnecessary load on the host since they have to unseal after snapshot-revert every time they come up.
There is perhaps an attractive feature that might help here. At the moment you can set an absolute number of "auto-start" VMs for the pool. I think what's needed is something more like an HA "standby" model which is applied to hosts and disks - you would like a certain number of not-yet-allocated, auto-started ("waiting-auto-started"?) VMs to be in the pool at all times (unless of course the pool is exhausted). In other words, say you have a large pool of VMs, and you set a waiting-auto-started count to 10. If 20 users connect to VMs, 10 more would be started up and waiting in the pool. You could shut down/return any extras as VMs are returned to the pool (optionally?). The goal here is to eat the startup delays in advance to have ready-to-use VMs available for users at all times, without needing to start them all (which would consume excessive host resources). I think this would be more valuable than a fixed count, although maybe that's helpful in some use cases.
For sure it is interesting, and it maps to what VMware has had for a long time, where at pool definition time you can specify a limit of free started VMs below which new ones are auto-started.

I asked a similar question back in RHEV 2.x times, for a POC I had in place when the engine was Windows based and there was the command-line API shell. I was advised - and verified it worked as expected - to use an at job that every X minutes checked the number of running VMs and, when the number of free pre-started VMs dropped below a pre-defined limit, pre-started Y new VMs. I think it should be even easier now, and it could be a good proposal for 3.4.

Gianluca
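Roughly, such a periodic job could look like the untested Python sketch below. The engine URL, credentials, pool-name prefix, and the XML field names are placeholders/assumptions for the plain /api REST interface of that era - adjust for your setup before using it.

#!/usr/bin/env python
# Untested sketch of the cron/at-job approach described above: keep at least
# MIN_FREE pool VMs running by starting more whenever the count drops.
# Engine URL, credentials, pool-name prefix and XML field names are assumptions.
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/api"      # hypothetical engine URL
AUTH = ("admin@internal", "password")          # hypothetical credentials
CA = "/etc/pki/ovirt-engine/ca.pem"            # engine CA certificate
POOL_PREFIX = "mypool-"                        # pool VMs are usually named <pool>-N
MIN_FREE = 10                                  # how many "warm" VMs to keep around

def pool_vms():
    # Yield (id, state) for every VM whose name matches the pool prefix.
    resp = requests.get(ENGINE + "/vms", auth=AUTH, verify=CA)
    resp.raise_for_status()
    for vm in ET.fromstring(resp.content).findall("vm"):
        if vm.findtext("name", "").startswith(POOL_PREFIX):
            yield vm.get("id"), vm.findtext("status/state", "")

def start_vm(vm_id):
    # Plain start; the engine still takes the stateless snapshot later,
    # when a user actually allocates the VM from the pool.
    requests.post(ENGINE + "/vms/%s/start" % vm_id, auth=AUTH, data="<action/>",
                  headers={"Content-Type": "application/xml"},
                  verify=CA).raise_for_status()

vms = list(pool_vms())
running = [v for v, state in vms if state == "up"]    # crude: a real job would also skip VMs already attached to a user
stopped = [v for v, state in vms if state == "down"]
for vm_id in stopped[:max(0, MIN_FREE - len(running))]:
    start_vm(vm_id)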

On 11/15/2013 12:45 PM, Gianluca Cecchi wrote:
On Fri, Nov 15, 2013 at 6:32 PM, Bob Doolittle wrote:
OK so there seems to be a serious scalability issue here. It is not unusual for a large VDI deployment to utilize hundreds of VMs. From what you're telling me, if we don't want users to experience the unsealing delay (which takes several minutes) every time, an admin would have to start all of those VMs after creating the Pool, and then cleanly shut them all down again once they'd all finished their unsealing process. Either that, or set them all to auto-start, but this puts unnecessary load on the host since they have to unseal after snapshot-revert every time they come up.
There is perhaps an attractive feature that might help here. At the moment you can set an absolute number of "auto-start" VMs for the pool. I think what's needed is something more like an HA "standby" model which is applied to hosts and disks - you would like a certain number of not-yet-allocated, auto-started ("waiting-auto-started"?) VMs to be in the pool at all times (unless of course the pool is exhausted). In other words, say you have a large pool of VMs, and you set a waiting-auto-started count to 10. If 20 users connect to VMs, 10 more would be started up and waiting in the pool. You could shut down/return any extras as VMs are returned to the pool (optionally?). The goal here is to eat the startup delays in advance to have ready-to-use VMs available for users at all times, without needing to start them all (which would consume excessive host resources). I think this would be more valuable than a fixed count, although maybe that's helpful in some use cases.
For sure it is interesting, and it maps to what VMware has had for a long time, where at pool definition time you can specify a limit of free started VMs below which new ones are auto-started.
I asked a similar question back in RHEV 2.x times, for a POC I had in place when the engine was Windows based and there was the command-line API shell. I was advised - and verified it worked as expected - to use an at job that every X minutes checked the number of running VMs and, when the number of free pre-started VMs dropped below a pre-defined limit, pre-started Y new VMs. I think it should be even easier now, and it could be a good proposal for 3.4.
Gianluca
This *is* what the feature is doing. Define a pool of 500 VMs and specify that you want 10 auto-started VMs. The engine will make sure there are always 10 launched VMs for users to get, and will launch new ones as needed (up to 10). If 50 users all ask for VMs at the same time, they will have to start them / wait for the sysprep - but if that's common, set auto-start to 50. Oh, and the right terminology is "prestart VMs".
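If I read the REST API right, the same prestart count can also be set programmatically on the pool. An untested sketch follows; the /vmpools path, the prestarted_vms element, and the ids/credentials are all assumptions to verify against your engine's API description.

# Untested sketch: set the pool's prestarted-VM count via the REST API.
# The /vmpools path and the prestarted_vms element are my reading of the API;
# the engine URL, credentials and pool id are placeholders.
import requests

ENGINE = "https://engine.example.com/api"
AUTH = ("admin@internal", "password")
POOL_ID = "00000000-0000-0000-0000-000000000000"    # placeholder pool id

body = "<vmpool><prestarted_vms>10</prestarted_vms></vmpool>"
requests.put("%s/vmpools/%s" % (ENGINE, POOL_ID), auth=AUTH, data=body,
             headers={"Content-Type": "application/xml"},
             verify="/etc/pki/ovirt-engine/ca.pem").raise_for_status()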

Hi Bob,
----- Original Message ----- From: "Bob Doolittle" <bob@doolittle.us.com> Sent: Friday, November 15, 2013 8:41:47 AM
Hi,
I had a question on IRC regarding sysprep/sealing of Windows VMs and use in Pools. Basically, if you follow the Quick Start Guide, it says to seal the VM and shut it down before making the Template.
My problem with this is that when you start a VM from the Pool, it takes forever to unseal - i.e. to repersonalize itself. That's a bad experience from a VDI perspective - you want the user to get a desktop they can start using ASAP.
Itamar responded to me directly via e-mail:
bobdrad: on your question of windows VMs from pool - you can start them once with an admin for the sysprep to happen, then shut them down. admin launch of VMs doesn't create a stateless snapshot and manipulates the VM itself.
This raises some questions. I'd love to understand this better.
He's asked me to cross this conversation onto the Users list now.
1. My understanding is that a Pool clones VMs on demand from a template. So how does the admin "launch" the template? I thought the only way to exercise a pool is from the User Portal. Is it sufficient to do that as Admin? I thought the persistence only came when launching a VM from the Admin Portal.
Unless something dramatic has changed lately in this feature's implementation: AFAIK, the Pool doesn't clone VMs from a Template on demand; the Pool VMs are provisioned in advance (upon Pool creation), which of course allows the admin to access them, launch them, prepare them for use, etc., as Itamar explained.
2. My understanding of "sealing" a system is that this depersonalizes it - e.g. removes hostname, prepares network for reinitialization, etc. And that the next time the system boots up it re-personalizes. So if one were to restart it, even as admin, this would reverse the sealing process, which would seem to make sealing in the first place pointless.
What am I missing? At the moment I don't see the point of sealing a VM before putting it into the Pool (assuming you're using DHCP, anyway). What happens if you don't?
There is a difference between the way that an admin runs the VM and the way that a user runs the VM (by allocating himself a VM from the pool via the User Portal): the admin typically runs the VM like any "regular" VM, i.e., not in "stateless" mode, which ensures that all changes done on the guest will be persisted for the VM's next run. This is necessary for the initial OS installation of the VM, for example, as well as initial configuration, application installation, etc.

When the user runs the VM (again - by allocating himself a VM from the pool via the User Portal), the VM actually runs in stateless mode: right before the VM is run, a snapshot is taken of it; once the VM is shut down/returned to the pool, it reverts to that snapshot, clearing all changes made during that run (but not changes that the admin made in the initial run - those are "sealed" within the VM), leaving the VM ready for the next allocation.
Thanks, Bob
P.S. I note the behavior of Fedora vs RHEL 6 is quite different in this regard. If you follow the "sealing" process on the Quick Start page for Fedora it seems to have no visible effect, but on RHEL 6 it puts you through a re-personalization dialog which is rather extensive (and again, not really suitable for VDI use).

Thanks, this answers a lot of questions, Einav.

Let's be careful about terminology, however. "Sealing" in this context is the process of un-sysconfig'ing (aka depersonalizing) the machine to make it generic before cloning/copying. After unconfiguring, the next time the system boots up it goes through sysconfig again. For Windows this involves the 'sysprep' command; for Linux it involves the /.unconfigured file (a rough sketch of the Linux-side steps follows at the end of this message). I'm still trying to understand the value of sealing in the VDI context, and what the adverse effects would be if we didn't take this step. So in this context the term "sealing" is quite different from snapshot/restore.

I'll respond more in the next mail (to Itamar).

Thanks again,
Bob

On 11/15/2013 09:28 AM, Einav Cohen wrote:
Hi Bob,
----- Original Message ----- From: "Bob Doolittle" <bob@doolittle.us.com> Sent: Friday, November 15, 2013 8:41:47 AM
Hi,
I had a question on IRC regarding sysprep/sealing of Windows VMs and use in Pools. Basically, if you follow the Quick Start Guide, it says to seal the VM and shut it down before making the Template.
My problem with this is that when you start a VM from the Pool, it takes forever to unseal - i.e. to repersonalize itself. That's a bad experience from a VDI perspective - you want the user to get a desktop they can start using ASAP.
Itamar responded to me directly via e-mail:
bobdrad: on your question of windows VMs from pool - you can start them once with an admin for the sysprep to happen, then shut them down. admin launch of VMs doesn't create a stateless snapshot and manipulates the VM itself.
This raises some questions. I'd love to understand this better.
He's asked me to cross this conversation onto the Users list now.
1. My understanding is that a Pool clones VMs on demand from a template. So how does the admin "launch" the template? I thought the only way to exercise a pool is from the User Portal. Is it sufficient to do that as Admin? I thought the persistence only came when launching a VM from the Admin Portal.

Unless something dramatic has changed lately in this feature's implementation: AFAIK, the Pool doesn't clone VMs from a Template on demand; the Pool VMs are provisioned in advance (upon Pool creation), which of course allows the admin to access them, launch them, prepare them for use, etc., as Itamar explained.
2. My understanding of "sealing" a system is that this depersonalizes it - e.g. removes hostname, prepares network for reinitialization, etc. And that the next time the system boots up it re-personalizes. So if one were to restart it, even as admin, this would reverse the sealing process, which would seem to make sealing in the first place pointless.
What am I missing? At the moment I don't see the point of sealing a VM before putting it into the Pool (assuming you're using DHCP, anyway). What happens if you don't? there is a difference between the way that an admin runs the VM and the way that a user runs the VM (by allocating himself a VM from the pool via his user portal): The admin typically runs the VM like any "regular" VM, i.e., not in a "stateless" mode, which ensures that all changes done on the guest will be persisted for the VMs next run. This is necessary for the initial OS installation of the VM, for example, initial configuration, application installation, etc.
When the user runs the VM (again - by allocating himself a VM from the pool via the user portal), the VM actually runs in a stateless mode: right before the VM is run, a snapshot is taken from it; once the VM is being shutdown/returned to the pool, the VM reverts itself to that snapshot, clearing all changes done in this run (but not changes that the admin did in the initial run! those are "sealed" within the VM), leaving the VM and ready for the next allocation.
Thanks, Bob
P.S. I note the behavior of Fedora vs RHEL 6 is quite different in this regard. If you follow the "sealing" process on the Quick Start page for Fedora it seems to have no visible effect, but on RHEL 6 it puts you through a re-personalization dialog which is rather extensive (and again, not really suitable for VDI use).
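For reference, a rough, untested sketch of the Linux-side sealing steps mentioned above, assuming the /.unconfigured convention from the Quick Start Guide; exact files vary by distro and release, and it must run as root inside the guest right before the final shutdown.

# Untested sketch of the Linux-guest "sealing" steps (run as root inside the
# guest, right before shutting it down and making the template). Mirrors the
# Quick Start / sys-unconfig style steps; exact files vary by distro and release.
import glob
import os

# Forget the host identity: SSH host keys are regenerated on the next boot.
for key in glob.glob("/etc/ssh/ssh_host_*"):
    os.remove(key)

# Drop the cached NIC naming so clones don't inherit this VM's MAC address.
rules = "/etc/udev/rules.d/70-persistent-net.rules"
if os.path.exists(rules):
    os.remove(rules)

# Flag the system for reconfiguration on the first boot of the clone.
open("/.unconfigured", "w").close()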

On Fri, Nov 15, 2013 at 3:28 PM, Einav Cohen wrote:
When the user runs the VM (again - by allocating himself a VM from the pool via the User Portal), the VM actually runs in stateless mode: right before the VM is run, a snapshot is taken of it; once the VM is shut down/returned to the pool, it reverts to that snapshot, clearing all changes made during that run (but not changes that the admin made in the initial run - those are "sealed" within the VM), leaving the VM ready for the next allocation.
Actually, AFAIK in RHEV 3.2 (and in oVirt 3.2) pools can be automatic or manual (https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtua...):

"In the Pool tab, select one of the following pool types: Manual - The administrator is responsible for explicitly returning the virtual machine to the pool. The virtual machine reverts to the original base image after the administrator returns it to the pool. Automatic - When the virtual machine is shut down, it automatically reverts to its base image and is returned to the virtual machine pool."

In RHEV 2.x there was also a third option that was time based, which in my opinion was interesting. Dunno why it was removed.

In my opinion there is something misleading between the above and the introduction of pools in the guide, because it is stated this way (https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtua...):

"... Virtual machines in pools are stateless, data is not persistent across reboots. Virtual machines in a pool are started when there is a user request, and shut down when the user is finished. ..."

So if one doesn't read further, he/she thinks only stateless VMs are allowed...

Gianluca

On 11/15/2013 12:36 PM, Gianluca Cecchi wrote:
Actually, AFAIK in RHEV 3.2 (and in oVirt 3.2) pools can be automatic or manual (https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtua...): "In the Pool tab, select one of the following pool types: Manual - The administrator is responsible for explicitly returning the virtual machine to the pool. The virtual machine reverts to the original base image after the administrator returns it to the pool. Automatic - When the virtual machine is shut down, it automatically reverts to its base image and is returned to the virtual machine pool."
In RHEV 2.x there was also a third option that was time based, which in my opinion was interesting. Dunno why it was removed.
In my opinion there is something misleading between the above and the introduction of pools in the guide, because it is stated this way (https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtua...): "... Virtual machines in pools are stateless, data is not persistent across reboots. Virtual machines in a pool are started when there is a user request, and shut down when the user is finished. ..."
So if one doesn't read further, he/she thinks only stateless VMs are allowed...
That is very interesting. After reading the RHEV docs I had wondered about this myself - i.e. how does one create/manage a stateful model where people always return to VMs in the same state they were left? I thought perhaps you had to detach it from the Pool for this, but haven't had time to experiment with it yet.

With VMware there is a stateful pool model where the user-to-machine binding remains in effect, but the machine can be rebooted, powered off, etc. Is this what's implied by the "Manual" model?

-Bob

On 11/15/2013 01:15 PM, Bob Doolittle wrote:
On 11/15/2013 12:36 PM, Gianluca Cecchi wrote:
Actually, AFAIK in RHEV 3.2 (and in oVirt 3.2) pools can be automatic or manual (https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtua...):
" In the Pool tab, select one of the following pool types: Manual - The administrator is responsible for explicitly returning the virtual machine to the pool. The virtual machine reverts to the original base image after the administrator returns it to the pool. Automatic - When the virtual machine is shut down, it automatically reverts to its base image and is returned to the virtual machine pool. "
In RHEV 2.x there was also a third option that was time based, which in my opinion was interesting. Dunno why it was removed.
In my opinion there is something misleading between the above and the introduction of pools in the guide, because it is stated this way (https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtua...):
" ... Virtual machines in pools are stateless, data is not persistent across reboots. Virtual machines in a pool are started when there is a user request, and shut down when the user is finished. ..."
So if one doesn't read further, he/she thinks only stateless VMs are allowed...
That is very interesting. After reading the RHEV docs I had wondered about this myself - i.e. how does one create/manage a stateful model where people always return to VMs in the same state they were left? I thought perhaps you had to detach it from the Pool for this, but haven't had time to experiment with it yet.
With VMware there is a stateful pool model where the user-to-machine binding remains in effect, but the machine can be rebooted, powered off, etc. Is this what's implied by the "Manual" model?
Yes - until the admin returns it to the pool, at which point it loses the snapshot. This allows you to easily create a pool of VMs and assign it to a group, and then each user gets their own VM (even "forever"), instead of creating/assigning them one by one.

On Fri, Nov 15, 2013 at 7:15 PM, Bob Doolittle wrote:
That is very interesting. After reading the RHEV docs I had wondered about this myself - i.e. how does one create/manage a stateful model where people always return to VMs in the same state they were left? I thought perhaps you had to detach it from the Pool for this, but haven't had time to experiment with it yet.
Yes, you can detach a VM from a pool and it will become a normal VM; its icon will change too. But to do this it has to be in the down state (dunno if this limitation is going away in 3.3 - not tested myself...).

Gianluca
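An untested sketch of doing that detach through the REST API; the /detach action on the VM resource, plus the engine URL, credentials and VM id, are assumptions to verify against your engine's API description.

# Untested sketch: detach a (down) pool VM so it becomes a regular VM.
# The /detach action on the VM resource is my reading of the API; engine URL,
# credentials and VM id are placeholders.
import requests

ENGINE = "https://engine.example.com/api"
AUTH = ("admin@internal", "password")
VM_ID = "00000000-0000-0000-0000-000000000000"      # placeholder; VM must be down

requests.post("%s/vms/%s/detach" % (ENGINE, VM_ID), auth=AUTH, data="<action/>",
              headers={"Content-Type": "application/xml"},
              verify="/etc/pki/ovirt-engine/ca.pem").raise_for_status()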
participants (4)
- Bob Doolittle
- Einav Cohen
- Gianluca Cecchi
- Itamar Heim