[Users] Obtain VM name from within VM?

Hi,

I'm interested to know whether it is possible to obtain a VM's name from within a VM. In particular, I'd like to obtain VM names from within Linux guests for the purpose of setting up dynamic DNS based on the VM name. I don't see the VM name in the output of dmidecode, and there doesn't appear to be an entry in the ovirt guest API to obtain it. (In any case I can't run the ovirt guest agent on my VMs, so I'd like some other way to get it.)

Thanks!

--
Jonathan Daugherty
Software Engineer
Galois, Inc.

On 05/25/2013 02:07 AM, Jonathan Daugherty wrote:
Hi,
I'm interested to know whether it is possible to obtain a VM's name from within a VM. In particular I'd like to obtain VM names from within Linux guests for the purpose of setting up dynamic DNS based on the VM name. I don't see the VM name in the output of dmidecode and there doesn't appear to be an entry in the ovirt guest API to obtain it. (In any case I can't run the ovirt guest agent on my VMs, so I'd like some other way to get it.)
Thanks!
Greg is working on cloud-init support [1], which should take care of passing this info to Linux guests (and maybe solve the DDNS issue as well while at it - Greg?).

In the meantime, you could try and do this manually by passing it via the VM payload field, which is mounted to the guest as a block device.

[1] http://gerrit.ovirt.org/#/q/status:open+project:ovirt-engine+branch:master+t...

in the meantime, you could try and do this manually by passing it via the vm payload field, which is mounted to the guest as a block device.
Thanks for the tip! I'm not familiar with this feature, and I don't see any information about it in the Red Hat RHEV documentation. All I've found is this:

http://www.ovirt.org/Features/VMPayload

That suggests I can use the API to set this up, but is this possible via the administrative UI instead? Could I do this in a creation-time VDSM hook?

Thanks,

--
Jonathan Daugherty
Software Engineer
Galois, Inc.

On 05/25/2013 08:12 PM, Jonathan Daugherty wrote:
in the meantime, you could try and do this manually by passing it via the vm payload field, which is mounted to the guest as a block device.
Thanks for the tip! I'm not familiar with this feature and I don't see any information about it in the Red Hat RHEV documentation. All I've found is this:
http://www.ovirt.org/Features/VMPayload
That suggests I can use the API to set this up, but is this possible via the administrative UI instead? Could I do this in a creation-time VDSM hook?
Thanks,
1. Yes, you can do it via a hook using custom properties (that's what the floppy/iso inject hooks were for, but they are now deprecated as the VM payload makes them redundant).

2. RHEV has this in 3.1; iirc, it's only available via the API though [1]. Cloud-init should be available from the UI via the RunOnce dialog.

[1] https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtua...
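(For reference, a rough sketch of what setting a payload through the REST API could look like. The element layout follows the example on the VMPayload feature page; the payload schema changed between engine versions, so treat the XML below, the engine URL, the credentials, and whether update vs. create is required as assumptions to check against your release.)

import requests

# Hypothetical engine URL, VM id and credentials - adjust for your setup.
ENGINE = "https://engine.example.com/api"
VM_ID = "00000000-0000-0000-0000-000000000000"
AUTH = ("admin@internal", "password")

# Payload layout as sketched on http://www.ovirt.org/Features/VMPayload:
# one file on a CD-ROM payload whose content is the VM name.
payload_xml = """\
<vm>
  <payloads>
    <payload type="cdrom">
      <file name="vmname.txt">
        <content>myvm</content>
      </file>
    </payload>
  </payloads>
</vm>
"""

# Some versions may only honour payloads at VM creation (POST /api/vms);
# here we try an update of an existing VM.
resp = requests.put("%s/vms/%s" % (ENGINE, VM_ID),
                    data=payload_xml,
                    headers={"Content-Type": "application/xml"},
                    auth=AUTH,
                    verify=False)  # lab setting; verify the CA properly in production
print(resp.status_code)
print(resp.text)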

Hi,

Is VMpayload supported by oVirt 3.2.2 el6? I'm trying to set up a startup hook to pass the VM name via a payload and getting nothing... My hook code is quite simple (and maybe crude):
import hooking

# read the domain XML the engine passed to vdsm
domxml = hooking.read_domxml()
domain = domxml.getElementsByTagName('domain')[0]

# build <payloads><payload type="cdrom"><file name="vmname.txt"><content>...</content></file></payload></payloads>
payloads = domxml.createElement('payloads')
npayload = domxml.createElement('payload')
npayload.setAttribute('type', 'cdrom')
filename = domxml.createElement('file')
filename.setAttribute('name', 'vmname.txt')
content = domxml.createElement('content')

# the VM name, taken from the domain's <name> element
names = domxml.getElementsByTagName('name')[0]
vmname = names.childNodes[0].nodeValue
text = domxml.createTextNode(vmname)

content.appendChild(text)
filename.appendChild(content)
npayload.appendChild(filename)
payloads.appendChild(npayload)
domain.appendChild(payloads)

hooking.write_domxml(domxml)

and from vdsm.log I can see that the 'payloads' element is created successfully:

Thread-56974::DEBUG::2013-06-18 11:43:49,612::libvirtvm::1520::vm.Vm::(_run) vmId=`a21a2ec8-c7fd-4a0a-8cbb-5f0ab4faa960`::
<?xml version="1.0" encoding="utf-8"?>
<domain type="kvm">
  <name>virt-gate</name>
  <uuid>a21a2ec8-c7fd-4a0a-8cbb-5f0ab4faa960</uuid>
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>
  <vcpu>1</vcpu>
  <devices>
    <channel type="unix"><target name="com.redhat.rhevm.vdsm" type="virtio"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/virt-gate.com.redhat.rhevm.vdsm"/></channel>
    <channel type="unix"><target name="org.qemu.guest_agent.0" type="virtio"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/virt-gate.org.qemu.guest_agent.0"/></channel>
    <input bus="ps2" type="mouse"/>
    <channel type="spicevmc"><target name="com.redhat.spice.0" type="virtio"/></channel>
    <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice"><channel mode="secure" name="main"/><channel mode="secure" name="inputs"/><channel mode="secure" name="cursor"/><channel mode="secure" name="playback"/><channel mode="secure" name="record"/><channel mode="secure" name="display"/><channel mode="secure" name="usbredir"/><channel mode="secure" name="smartcard"/></graphics>
    <video><address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"/><model heads="1" type="qxl" vram="65536"/></video>
    <interface type="bridge"><address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/><mac address="00:1a:4a:a8:7a:07"/><model type="virtio"/><source bridge="vlan1"/><filterref filter="vdsm-no-mac-spoofing"/><link state="up"/></interface>
    <interface type="bridge"><address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"/><mac address="00:1a:4a:a8:7a:09"/><model type="virtio"/><source bridge="vlan11"/><filterref filter="vdsm-no-mac-spoofing"/><link state="up"/></interface>
    <disk device="cdrom" snapshot="no" type="file"><address bus="1" controller="0" target="0" type="drive" unit="0"/><source file="" startupPolicy="optional"/><target bus="ide" dev="hdc"/><readonly/><serial/></disk>
    <disk device="disk" snapshot="no" type="block"><address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/><source dev="/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/d786e2d5-05ab-4da6-95fc-1af791a3c113/images/4927f83f-425e-41ef-aa12-3bc249d7064b/caa56376-deab-41cd-a27c-d35fe8f33946"/><target bus="virtio" dev="vda"/><serial>4927f83f-425e-41ef-aa12-3bc249d7064b</serial><boot order="1"/><driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/></disk>
    <memballoon model="virtio"/>
  </devices>
  <os><type arch="x86_64" machine="pc">hvm</type><smbios mode="sysinfo"/></os>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">oVirt</entry>
      <entry name="product">oVirt Node</entry>
      <entry name="version">6-4.el6.centos.10</entry>
      <entry name="serial">A6902500-EA00-0706-0025-90A600EA0E0F</entry>
      <entry name="uuid">a21a2ec8-c7fd-4a0a-8cbb-5f0ab4faa960</entry>
    </system>
  </sysinfo>
  <clock adjustment="-2" offset="variable"><timer name="rtc" tickpolicy="catchup"/></clock>
  <features><acpi/></features>
  <cpu match="exact"><model>Nehalem</model><topology cores="1" sockets="1" threads="1"/></cpu>
  <payloads><payload type="cdrom"><file name="vmname.txt"><content>virt-gate</content></file></payload></payloads>
</domain>

I tried with both 'cdrom' and 'floppy' types. However, inside the guest OS (CentOS 6.4) I couldn't see any payload: /dev/sr0 contains nothing, and 'modprobe floppy' does not find any floppy devices.
Am I missing something, or is it just not implemented in 3.2.2-el6?

Yuriy Demchenko

On 05/25/2013 10:54 PM, Itamar Heim wrote:
http://www.ovirt.org/Features/VMPayload
That suggests I can use the API to set this up, but is this possible via the administrative UI instead? Could I do this in a creation-time VDSM hook?
Thanks,
1. Yes, you can do it via a hook using custom properties (that's what the floppy/iso inject hooks were for, but they are now deprecated as the VM payload makes them redundant).
2. RHEV has this in 3.1; iirc, it's only available via the API though [1]. Cloud-init should be available from the UI via the RunOnce dialog.
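(A small guest-side sketch for when the payload does appear: assuming, as in the attempt above, that the CD-ROM payload shows up as /dev/sr0, something like the following would read the file back. The device name, mount point and file name are assumptions.)

import subprocess

DEVICE = "/dev/sr0"          # assumed payload device
MOUNTPOINT = "/mnt/payload"  # assumed mount point

subprocess.check_call(["mkdir", "-p", MOUNTPOINT])
subprocess.check_call(["mount", "-o", "ro", DEVICE, MOUNTPOINT])
try:
    # vmname.txt is the file name used in the hook above
    with open(MOUNTPOINT + "/vmname.txt") as f:
        print(f.read().strip())
finally:
    subprocess.check_call(["umount", MOUNTPOINT])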

On 06/18/2013 10:59 AM, Yuriy Demchenko wrote:
Hi,
Is VMpayload supported by oVirt 3.2.2 el6?
it should be. have you tried first passing payload via the engine?

On 06/18/2013 10:59 AM, Yuriy Demchenko wrote:
Hi,
Is VMpayload supported by oVirt 3.2.2 el6?
On 06/19/2013 02:31 PM, Itamar Heim wrote:
it should be. have you tried first passing payload via the engine?

Erm, I don't quite understand what you mean - via the admin web interface? There are no fields/options related to payload in the web interface: not in the 'Edit VM' dialog, nor in the 'Run Once' dialog (only 'Attach CD/Floppy', 'Custom Properties' with a fixed choice list, and general options).

However, I've achieved my goal another way - I wrote a hook that fills the 'SKU Number' BIOS field (not used by default) with the VM name. It turned out to be an even simpler and more convenient solution than vmpayload/fileinject. The hook is attached, in case anyone is interested.
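(The attached hook isn't reproduced in the archive; below is a minimal sketch of what such a before_vm_start hook could look like, assuming libvirt's SMBIOS sysinfo support: the VM name is written into an <entry name="sku"> element of the engine-generated <sysinfo><system> block, which then appears as "SKU Number" in the guest's dmidecode output.)

import hooking

domxml = hooking.read_domxml()

# VM name as set by the engine
vmname = domxml.getElementsByTagName('name')[0].childNodes[0].nodeValue

# Assumes the engine already generated <sysinfo type="smbios"><system>
# (as in the domain XML above); add an SMBIOS "SKU Number" entry
# carrying the VM name.
system = domxml.getElementsByTagName('system')[0]
sku = domxml.createElement('entry')
sku.setAttribute('name', 'sku')
sku.appendChild(domxml.createTextNode(vmname))
system.appendChild(sku)

hooking.write_domxml(domxml)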

On 06/19/2013 03:03 PM, Yuriy Demchenko wrote:
On 06/18/2013 10:59 AM, Yuriy Demchenko wrote:
Hi,
Is VMpayload supported by oVirt 3.2.2 el6?
On 06/19/2013 02:31 PM, Itamar Heim wrote:
it should be. have you tried first passing payload via the engine?
Erm, I don't quite understand what you mean - via the admin web interface? There are no fields/options related to payload in the web interface: not in the 'Edit VM' dialog, nor in the 'Run Once' dialog (only 'Attach CD/Floppy', 'Custom Properties' with a fixed choice list, and general options).
it's not exposed in the UI, but it is in the engine REST API.
However, I've achieved my goal another way - I wrote a hook that fills the 'SKU Number' BIOS field (not used by default) with the VM name. It turned out to be an even simpler and more convenient solution than vmpayload/fileinject. The hook is attached, in case anyone is interested.
works as well :)
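(For the guest side of the SKU approach, a minimal sketch of reading the value back; it assumes dmidecode is installed and the field was populated as described - "SKU Number" is the label used in dmidecode's System Information section.)

import subprocess

# Dump the SMBIOS System Information table (needs root) and pick out the
# SKU Number line.
out = subprocess.check_output(["dmidecode", "-t", "system"])
for line in out.decode("utf-8", "replace").splitlines():
    line = line.strip()
    if line.startswith("SKU Number:"):
        print(line.split(":", 1)[1].strip())
        break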

On 05/25/2013 04:01 AM, Itamar Heim wrote:
On 05/25/2013 02:07 AM, Jonathan Daugherty wrote:
Hi,
I'm interested to know whether it is possible to obtain a VM's name from within a VM. In particular I'd like to obtain VM names from within Linux guests for the purpose of setting up dynamic DNS based on the VM name. I don't see the VM name in the output of dmidecode and there doesn't appear to be an entry in the ovirt guest API to obtain it. (In any case I can't run the ovirt guest agent on my VMs, so I'd like some other way to get it.)
Thanks!
Greg is working on cloud-init support [1], which should take care of passing this info to Linux guests (and maybe solve the DDNS issue as well while at it - Greg?)
I'm not aware of any DDNS support in cloud-init, just basic DNS setup. Cloud-init provides methods to run custom scripts (I'm not sure we'll do that in the first cut, but you can set it up in the image), so I'm sure you could use that or some other method to run scripts for the DDNS update once you've got the VM name propagated.

Thanks,
Greg
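(To illustrate that last point, a hedged sketch of the kind of script a guest could run - from cloud-init, rc.local, or a DHCP hook - to push a dynamic DNS update once the VM name is available. The /etc/vmname file, key file, server, zone and address are all placeholders, and nsupdate with a TSIG key is just one possible mechanism.)

import subprocess

# Placeholder: assume the VM name has already been propagated into
# /etc/vmname (e.g. from a payload file or an SMBIOS field as discussed above).
with open("/etc/vmname") as f:
    vmname = f.read().strip()

commands = "\n".join([
    "server ns1.example.com",
    "zone example.com",
    "update delete %s.example.com A" % vmname,
    "update add %s.example.com 300 A 192.0.2.10" % vmname,  # placeholder address
    "send",
]) + "\n"

# -k points at a TSIG key file the DNS server has been configured to accept.
proc = subprocess.Popen(["nsupdate", "-k", "/etc/ddns.key"], stdin=subprocess.PIPE)
proc.communicate(commands.encode())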
in the meantime, you could try and do this manually by passing it via the vm payload field, which is mounted to the guest as a block device.
[1] http://gerrit.ovirt.org/#/q/status:open+project:ovirt-engine+branch:master+t...
Participants (4):
- Greg Padgett
- Itamar Heim
- Jonathan Daugherty
- Yuriy Demchenko