On Wed, 2012-01-25 at 12:02 +0200, Itamar Heim wrote:
On 01/25/2012 11:09 AM, Dor Laor wrote:
> On 01/22/2012 08:42 PM, Ayal Baron wrote:
...
>>>>> The following wiki page contains a description page of the
>>>>> feature.
>>>>>
>>>>> http://www.ovirt.org/wiki/Features/VMPayload
>>>>>
...
>
> Currently it seems that there are too many options for it - floppy, cd,
> nfs and maybe more. It would be nice to have a single option that works
> for all cases. How about creating something like s3 compatible storage
> access that the guest can access? If you need boot time access then I'll
> recommend cdrom or plain virtio-blk.

I agree with Dor that there seems to be a large number of options here.
From the Aeolus and Deltacloud perspective, we only need something that
makes that information available fairly late during boot (it can
certainly wait until after file systems have been mounted; even after
network start isn't a deal killer).

The payload data that the VM sees should not change when the VM is
rebooted or stopped/started.

> I think there are different use cases here:
> 1. floppy and iso cover the same use case, for similar needs (and behave
> the same). this would cover windows sysprep approach and basic
> attachment of files

Just picking one or the other should be sufficient.
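
Just for concreteness, something like this is all it takes on the host
side to turn a handful of payload files into an ISO (an illustrative
sketch assuming genisoimage is available, not how oVirt would
necessarily implement it):

# Minimal host-side sketch: pack payload files into an ISO that gets
# attached to the VM as a CD. Assumes genisoimage is installed; purely
# illustrative, not the oVirt implementation.
import subprocess
import tempfile
from pathlib import Path

def build_payload_iso(files, iso_path):
    """files maps file names (as the guest will see them) to contents."""
    with tempfile.TemporaryDirectory() as staging:
        for name, content in files.items():
            Path(staging, name).write_text(content)
        # -J/-R add Joliet and Rock Ridge so both Windows and Linux
        # guests see sane file names.
        subprocess.run(
            ["genisoimage", "-quiet", "-J", "-R", "-o", iso_path, staging],
            check=True,
        )

build_payload_iso({"user-data": "ssh-rsa AAAA... user@host\n"}, "payload.iso")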

> 2. http://192.169.192.168 - this would provide compatibility to cloud
> approaches iiuc

Except the address is 169.254.169.254 (link-local) ;)
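
For reference, the guest side of the cloud-style approach looks roughly
like this (169.254.169.254 and the /latest/meta-data/ paths are the EC2
conventions; whatever oVirt served there would define its own layout):

# Guest-side sketch of the cloud-style metadata fetch. 169.254.169.254
# is the link-local address EC2 uses; the /latest/meta-data/ paths below
# are EC2 conventions, not something oVirt defines.
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/meta-data/"

def fetch_metadata(key, timeout=2.0):
    with urllib.request.urlopen(METADATA_URL + key, timeout=timeout) as resp:
        return resp.read().decode()

print(fetch_metadata("instance-id"))
print(fetch_metadata("public-keys/0/openssh-key"))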

> 3. injecting into the file system - this covers various other needs,
> like injecting ssh key, and is relevant not only during bootstrap,
> should we want to allow editing a guest when it is down to troubleshoot
> issues.

You don't need that as a feature for troubleshooting; I've unmangled EBS
root volumes in AWS before simply by mounting the EBS disk from another
machine.

The thing I don't like about file injection is that it's inherently
fragile, since oVirt needs to understand the intricacies of disk layout,
volume manager (as in lvm), and filesystem.
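
To make that concrete, here is roughly what offline injection looks
like with libguestfs (one common way to do it, not necessarily what
oVirt would use); note how much inspection has to succeed before a
single file can be written, and how the final path assumes a particular
guest layout:

# Sketch of offline file injection with libguestfs. The point is how
# many layers have to be understood - partitions, LVM, filesystem,
# guest OS layout - before one file can be written.
import guestfs

def inject_file(disk_image, guest_path, content):
    g = guestfs.GuestFS(python_return_dict=True)
    g.add_drive_opts(disk_image, readonly=0)
    g.launch()
    roots = g.inspect_os()                  # find the guest OS, if we can
    if not roots:
        raise RuntimeError("no operating system found in %s" % disk_image)
    root = roots[0]
    # Mount every filesystem the inspection found, in the guest's layout.
    mountpoints = g.inspect_get_mountpoints(root)
    for mp in sorted(mountpoints):
        g.mount(mountpoints[mp], mp)
    g.write(guest_path, content)            # breaks if the guest differs
    g.shutdown()
    g.close()

inject_file("guest.img", "/root/.ssh/authorized_keys", b"ssh-rsa AAAA...\n")
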
Even worse if it is exposed via the API so that you can provide target
paths - now you've tightly coupled your API user to the OS inside the
VM.
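
E.g. a hypothetical API call along these lines (the endpoint and field
names are made up purely for illustration) only works if the caller
already knows the guest is Linux and keeps keys under /root/.ssh:

# Hypothetical API call (endpoint and field names are made up for
# illustration; this is not the real oVirt REST API). The caller has to
# know the guest is Linux, has /root, and keeps keys in
# .ssh/authorized_keys - none of which the engine can verify.
import requests

requests.post(
    "https://engine.example.com/api/vms/1234/payloads",  # hypothetical
    auth=("admin@internal", "password"),
    json={
        "type": "file-injection",
        "target_path": "/root/.ssh/authorized_keys",     # guest-OS specific
        "content": "ssh-rsa AAAA... user@host\n",
    },
)
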
I would only entertain (3) if there is an absolutely compelling use case
to do it.
David