[Users] Post-Install Engine VM Changes Feasible?
Sandro Bonazzola
sbonazzo at redhat.com
Mon Mar 17 05:01:41 EDT 2014
On 15/03/2014 12:44, Giuseppe Ragusa wrote:
> Hi Joshua,
>
> ------------------------------------------------------------------------------------------------------------------------------------------------------
> Date: Sat, 15 Mar 2014 02:32:59 -0400
> From: josh at wrale.com
> To: users at ovirt.org
> Subject: [Users] Post-Install Engine VM Changes Feasible?
>
> Hi,
>
> I'm in the process of installing 3.4 RC(2?) on Fedora 19. I'm using hosted engine, introspective GlusterFS+keepalived+NFS à la [1], across six nodes.
>
> I have a layered networking topology ((V)LANs for public, internal, storage, compute and ipmi). I am comfortable doing the bridging for each
> interface myself via /etc/sysconfig/network-scripts/ifcfg-*.
>
> Here's my desired topology: http://www.asciiflow.com/#Draw6325992559863447154
>
> Here's my keepalived setup: https://gist.github.com/josh-at-knoesis/98618a16418101225726
>
> I'm writing a lot of documentation of the many steps I'm taking. I hope to eventually release a distributed introspective all-in-one (including
> distributed storage) guide.
>
> Looking at vm.conf.in, it looks like I'd by default end up with one interface on my engine, probably on my internal VLAN, as
> that's where I'd like the control traffic to flow. I definitely could do NAT, but I'd be most happy to see the engine have a presence on all of the
> LANs, if for no other reason than because I want to send backups directly over the storage VLAN.
>
> I'll cut to it: I believe I could successfully alter the vdsm template (vm.conf.in) to give me the extra interfaces I require.
> It hit me, however, that I could just take the defaults for the initial install. Later, I think I'll be able to come back with virsh and make my
> changes to the gracefully disabled VM. Is this true?
>
> [1] http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/
>
> Thanks,
> Joshua
>
>
> I started from the same reference[1] and ended up "statically" modifying vm.conf.in before launching setup, like this:
>
> cp -a /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in.orig
> cat << EOM > /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in
> vmId=@VM_UUID@
> memSize=@MEM_SIZE@
> display=@CONSOLE_TYPE@
> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
> devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
> devices={device:scsi,model:virtio-scsi,type:controller}
> devices={index:4,nicModel:pv,macAddr:@MAC_ADDR@,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
> devices={index:8,nicModel:pv,macAddr:02:16:3e:4f:c4:b0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
> devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
> vmName=@NAME@
> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
> smp=@VCPUS@
> cpuType=@CPU_TYPE@
> emulatedMachine=@EMULATED_MACHINE@
> EOM
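For context, the @...@ tokens in the template above are placeholders that ovirt-hosted-engine-setup fills in at deploy time. A toy stand-in for that substitution (the function name and sample values here are assumptions for illustration, not the setup tool's actual code) can be sketched with sed:

```shell
#!/bin/sh
# Toy stand-in for the setup tool's @MACRO@ substitution in vm.conf.in.
# Reads template text on stdin, writes the filled-in text on stdout.
substitute() {
    sed -e "s/@VM_UUID@/$1/" -e "s/@MEM_SIZE@/$2/"
}

# prints:
#   vmId=1234-abcd
#   memSize=4096
printf 'vmId=@VM_UUID@\nmemSize=@MEM_SIZE@\n' | substitute 1234-abcd 4096
```

The real tool handles many more macros (and the conditional @BOOT_*@ suffixes), but the mechanism is the same plain text replacement, which is why hand-editing the template before setup works at all.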
Note that you should also be able to edit /etc/ovirt-hosted-engine/vm.conf after setup:
- put the system in global maintenance
- edit the vm.conf file on all the hosts running the hosted engine
- shut down the VM: hosted-engine --vm-shutdown
- start the VM again: hosted-engine --vm-start
- exit global maintenance
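The steps above can be sketched as a script. This is a dry-run sketch (the wrapper only prints each command and records it) assuming the hosted-engine CLI flags of oVirt 3.4; check `hosted-engine --help` on your version before running for real:

```shell
#!/bin/sh
# Dry-run sketch of the global-maintenance edit cycle described above.
# run() only echoes and records each command; drop the wrapper to execute.
ran=""
run() { echo "would run: $*"; ran="$ran $*;"; }

run hosted-engine --set-maintenance --mode=global   # 1. enter global maintenance
# 2. edit /etc/ovirt-hosted-engine/vm.conf on every host running the HA agent
run hosted-engine --vm-shutdown                     # 3. stop the engine VM
run hosted-engine --vm-start                        # 4. start it with the new config
run hosted-engine --set-maintenance --mode=none     # 5. leave global maintenance
```

Global maintenance matters here: without it, the HA agents would treat the deliberate shutdown as a failure and restart the engine VM with the old configuration.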
Giuseppe, Joshua: can you share your changes in a guide for Hosted engine users on ovirt.org wiki?
>
> I simply added a second nic (with a fixed MAC address from the locally-administered pool, since I didn't know how to auto-generate one) and added an
> index for nics too (mimicking the storage devices setup already present).
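Since auto-generating a MAC came up, here is a minimal sketch for producing a random locally-administered unicast address. The 02:16:3e prefix mirrors the fixed address used above (02: sets the locally-administered bit and clears the multicast bit); the gen_mac helper is an assumption for illustration, not part of the oVirt tooling:

```shell
#!/bin/sh
# Generate a random MAC in the locally-administered, unicast range.
# Keeping the first three octets fixed (02:16:3e here) and randomizing
# the last three avoids collisions with real hardware OUIs.
gen_mac() {
    awk 'BEGIN {
        srand()
        printf "02:16:3e:%02x:%02x:%02x\n",
               int(rand() * 256), int(rand() * 256), int(rand() * 256)
    }'
}
gen_mac
```

Each run prints one address such as 02:16:3e:4f:c4:b0; pin the result in vm.conf rather than regenerating it, so the engine VM keeps a stable MAC across restarts.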
>
> My network setup is much simpler than yours: the ovirtmgmt bridge is on an isolated oVirt-management-only network without a gateway; my actual LAN with a
> gateway and Internet access (for package updates/installation) is connected to the lan bridge; and the SAN/migration LAN is a further, non-bridged 10
> Gb/s isolated network for which I do not expect to need Engine/VM reachability (so no third interface for the Engine), since all actions should be
> performed from the Engine only through the vdsm hosts (I use a "split-DNS" setup by means of carefully crafted hosts files on the Engine and vdsm hosts).
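A split-DNS hosts-file arrangement of the kind described might look like the following; the hostnames and addresses are purely illustrative assumptions, not taken from the original setup:

```
# /etc/hosts on the vdsm hosts: resolve the engine via the management network
192.0.2.10   engine.example.local engine

# /etc/hosts on the engine VM: resolve each host via the management network,
# even where site DNS would return their LAN addresses
192.0.2.21   node1.example.local node1
192.0.2.22   node2.example.local node2
```

The effect is that engine-to-host and host-to-engine traffic stays on the management network regardless of what the site's real DNS answers.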
>
> I can confirm that the engine vm gets created as expected and that network connectivity works.
>
> Unfortunately I cannot validate the whole design yet, since I'm still debugging HA-agent problems that prevent a reliable Engine/SD startup.
>
> Hope it helps.
>
> Greetings,
> Giuseppe
>
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com