On Tue, Apr 8, 2014 at 8:52 PM, Andrew Lau
<andrew(a)andrewklau.com> wrote:
> On Mon, Mar 17, 2014 at 8:01 PM, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>> On 15/03/2014 12:44, Giuseppe Ragusa wrote:
>>> Hi Joshua,
>>>
>>>
>>> ------------------------------------------------------------
>>> Date: Sat, 15 Mar 2014 02:32:59 -0400
>>> From: josh(a)wrale.com
>>> To: users(a)ovirt.org
>>> Subject: [Users] Post-Install Engine VM Changes Feasible?
>>>
>>> Hi,
>>>
>>> I'm in the process of installing 3.4 RC(2?) on Fedora 19. I'm using hosted
>>> engine, introspective GlusterFS+keepalived+NFS a la [1], across six nodes.
>>>
>>> I have a layered networking topology ((V)LANs for public, internal, storage,
>>> compute and ipmi). I am comfortable doing the bridging for each
>>> interface myself via /etc/sysconfig/network-scripts/ifcfg-*.
>>>
>>> Here's my desired topology:
>>> http://www.asciiflow.com/#Draw6325992559863447154
>>>
>>> Here's my keepalived setup:
>>> https://gist.github.com/josh-at-knoesis/98618a16418101225726
>>>
>>> I'm writing a lot of documentation of the many steps I'm taking. I hope to
>>> eventually release a distributed introspective all-in-one (including
>>> distributed storage) guide.
>>>
>>> Looking at vm.conf.in, it looks like I'd by default end up with one
>>> interface on my engine, probably on my internal VLAN, as
>>> that's where I'd like the control traffic to flow. I definitely could do
>>> NAT, but I'd be most happy to see the engine have a presence on all of the
>>> LANs, if for no other reason than that I want to send backups directly
>>> over the storage VLAN.
>>>
>>> I'll cut to it: I believe I could successfully alter the vdsm template
>>> (vm.conf.in) to give me the extra interfaces I require.
>>> It hit me, however, that I could just take the defaults for the initial
>>> install. Later, I think I'll be able to come back with virsh and make my
>>> changes to the gracefully disabled VM. Is this true?
>>>
>>> [1] http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/
>>>
>>> Thanks,
>>> Joshua
>>>
>>>
>>> I started from the same reference [1] and ended up "statically"
>>> modifying vm.conf.in before launching setup, like this:
>>>
>>> cp -a /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in.orig
>>> cat << EOM > /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in
>>> vmId=@VM_UUID@
>>> memSize=@MEM_SIZE@
>>> display=@CONSOLE_TYPE@
>>> devices={index:2,iface:ide,address:{controller:0, target:0, unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
>>> devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
>>> devices={device:scsi,model:virtio-scsi,type:controller}
>>> devices={index:4,nicModel:pv,macAddr:@MAC_ADDR@,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
>>> devices={index:8,nicModel:pv,macAddr:02:16:3e:4f:c4:b0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
>>> devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
>>> vmName=@NAME@
>>> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
>>> smp=@VCPUS@
>>> cpuType=@CPU_TYPE@
>>> emulatedMachine=@EMULATED_MACHINE@
>>> EOM
>>
>>
>> Note that you should also be able to edit /etc/ovirt-hosted-engine/vm.conf
>> after setup:
>> - put the system in global maintenance
>> - edit the vm.conf file on all the hosts running the hosted engine
>> - shut down the vm: hosted-engine --vm-shutdown
>> - start the vm again: hosted-engine --vm-start
>> - exit global maintenance
>>
>> Giuseppe, Joshua: can you share your changes in a guide for Hosted Engine
>> users on the ovirt.org wiki?
>>
>>
>
> So would you simply add a new line under the original devices line? I.e.:
>
> devices={nicModel:pv,macAddr:00:16:3e:6d:34:78,linkActive:true,network:ovirtmgmt,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:0c8a1710-casd-407a-94e8-5b09e55fa141,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}
>
> Are there any good practices for generating the MAC address so it can't
> clash with ones vdsm would generate? I assume the same applies for the
> deviceId?
> Did you also change the slot?
>
This worked successfully:
yum -y install python-virtinst
# generate uuid and mac address
echo 'import virtinst.util ; print virtinst.util.uuidToString(virtinst.util.randomUUID())' | python
echo 'import virtinst.util ; print virtinst.util.randomMAC()' | python
hosted-engine --set-maintenance --mode=global
nano /etc/ovirt-hosted-engine/vm.conf
# insert under earlier nicModel
# replace macaddress and uuid from above
# increment slot (so it's not the same as above nicModel)
# modify ovirtmgmt to desired bridge interface
devices={nicModel:pv,macAddr:00:16:3e:35:87:7d,linkActive:true,network:storage_network,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4c32c036-0e5a-e0b3-9ba7-bg3dfzbs40ae,address:{bus:0x00, slot:0x04, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}
hosted-engine --vm-shutdown
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none
However, this won't get propagated to any additional hosts that are
installed. I'm guessing it may be this file
/usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in that gets
copied for new installs, but I don't have any new hosts to test with
right now.
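For anyone without python-virtinst available: the same two values can be generated with the Python standard library alone. A minimal sketch (the `random_mac` helper is illustrative, not part of any oVirt tool; it uses the 00:16:3e prefix seen elsewhere in this thread):

```python
import random
import uuid

def random_mac(prefix=(0x00, 0x16, 0x3e)):
    """Return a random MAC address under the given prefix
    (00:16:3e here, matching the examples above)."""
    octets = list(prefix) + [random.randint(0x00, 0xFF) for _ in range(3)]
    return ':'.join('%02x' % o for o in octets)

print(uuid.uuid4())    # a random device UUID
print(random_mac())    # a random MAC, e.g. 00:16:3e:..:..:..
```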
A few comments on the above discussion:
The MAC address for the Hosted Engine VM is already generated randomly by the
hosted-engine --deploy command and proposed as the default in the question:
'You may specify a MAC address for the VM or accept a randomly generated default
[@DEFAULT@]: '
So it shouldn't be necessary to generate another one.
However, you can set the MAC address during interactive setup or by creating an
answer.conf file with the following content:
[environment:default]
OVEHOSTED_VM/vmMACAddr=str:00:16:3e:72:85:3f
changing the above MAC address to whatever you want, and then running:
hosted-engine --deploy --config-append=answer.conf
Note that if you change /etc/ovirt-hosted-engine/vm.conf after deployment, you must
also update /etc/ovirt-hosted-engine/answers.conf in order for additional hosts to
get the same configuration when you run hosted-engine --deploy.
Additional hosts already deployed need to be updated manually.
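The answer-file steps described above can be scripted; a minimal sketch (the MAC value is just the example from this thread, so substitute your own):

```shell
# Write an answer file pre-seeding the engine VM MAC address.
# The MAC below is the example value from this thread.
cat > answer.conf << 'EOF'
[environment:default]
OVEHOSTED_VM/vmMACAddr=str:00:16:3e:72:85:3f
EOF

# Then deploy with:
#   hosted-engine --deploy --config-append=answer.conf
```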
Remember also that if there's interest in adding configuration steps while deploying,
you can open an RFE on Bugzilla.
I created an etherpad where you can list post-install changes to vm.conf, so I can
get a sense of what we're missing there.
>>
>>>
>>> I simply added a second NIC (with a fixed MAC address from the
>>> locally-administered pool, since I didn't know how to auto-generate one) and
>>> added an index for NICs too (mimicking the storage devices setup already
>>> present).
>>>
>>> My network setup is much simpler than yours: the ovirtmgmt bridge is on an
>>> isolated oVirt-management-only network without a gateway; my actual LAN with
>>> gateway and Internet access (for package updates/installation) is connected
>>> to the lan bridge; and the SAN/migration LAN is a further (not bridged)
>>> 10 Gb/s isolated network for which I do not expect to need Engine/VM
>>> reachability (so no third interface for the Engine), since all actions
>>> should be performed from the Engine only through the vdsm hosts (I use a
>>> "split-DNS" setup by means of carefully crafted hosts files on the Engine
>>> and vdsm hosts).
>>>
>>> I can confirm that the engine vm gets created as expected and that network
>>> connectivity works.
>>>
>>> Unfortunately I cannot validate the whole design yet, since I'm still
>>> debugging HA-agent problems that prevent a reliable Engine/SD startup.
>>>
>>> Hope it helps.
>>>
>>> Greetings,
>>> Giuseppe
>>>
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>>
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com