Hi Joshua,

Date: Sat, 15 Mar 2014 02:32:59 -0400
From: josh(a)wrale.com
To: users(a)ovirt.org
Subject: [Users] Post-Install Engine VM Changes Feasible?

Hi,

I'm in the process of installing 3.4 RC(2?) on Fedora 19. I'm using hosted engine, introspective GlusterFS+keepalived+NFS a la [1], across six nodes.

I have a layered networking topology ((V)LANs for public, internal, storage, compute and IPMI). I am comfortable doing the bridging for each interface myself via /etc/sysconfig/network-scripts/ifcfg-*.
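
For illustration, the kind of per-interface bridging I mean looks roughly like this (the device name, bridge name and addressing here are placeholders, not my actual values):

# /etc/sysconfig/network-scripts/ifcfg-em1 -- physical NIC enslaved to the bridge
DEVICE=em1
ONBOOT=yes
BRIDGE=ovirtmgmt
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt -- the bridge itself
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.11
NETMASK=255.255.255.0
DELAY=0
NM_CONTROLLED=no
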
Here's my desired topology: http://www.asciiflow.com/#Draw6325992559863447154

Here's my keepalived setup: https://gist.github.com/josh-at-knoesis/98618a16418101225726
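
The gist has the full details, but the heart of it is a VRRP instance floating a virtual IP for the NFS endpoint, roughly like this (the interface, VIP and priority shown are illustrative, not my real values):

# /etc/keepalived/keepalived.conf -- illustrative fragment
vrrp_instance VI_STORAGE {
    state BACKUP              # every node starts as BACKUP; priority elects the MASTER
    interface em2             # NIC on the storage VLAN
    virtual_router_id 51
    priority 100              # the highest-priority live node holds the VIP
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        10.0.1.10/24          # floating VIP that the NFS clients mount from
    }
}
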
I'm writing a lot of documentation of the many steps I'm taking. I hope to eventually release a distributed introspective all-in-one (including distributed storage) guide.

Looking at vm.conf.in, it looks like I'd by default end up with one interface on my engine, probably on my internal VLAN, as that's where I'd like the control traffic to flow. I definitely could do NAT, but I'd be most happy to see the engine have a presence on all of the LANs, if for no other reason than because I want to send backups directly over the storage VLAN.

I'll cut to it: I believe I could successfully alter the vdsm template (vm.conf.in) to give me the extra interfaces I require. It hit me, however, that I could just take the defaults for the initial install. Later, I think I'll be able to come back with virsh and make my changes to the gracefully disabled VM. Is this true?
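
Concretely, I'm picturing something along these lines once the engine VM is shut down (the VM name "HostedEngine" and the bridge name "storage" are my guesses, and on a vdsm host virsh may additionally require the vdsm SASL credentials):

# list defined domains, then add a second virtio NIC on the storage bridge
virsh -c qemu:///system list --all
virsh -c qemu:///system attach-interface HostedEngine bridge storage --model virtio --config

# or edit the domain XML directly and add another <interface> element
virsh -c qemu:///system edit HostedEngine
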
[1] http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/

Thanks,
Joshua

I started from the same reference [1] and ended up "statically" modifying vm.conf.in before launching setup, like this:

cp -a /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in.orig

cat << EOM > /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in
vmId=@VM_UUID@
memSize=@MEM_SIZE@
display=@CONSOLE_TYPE@
devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
devices={device:scsi,model:virtio-scsi,type:controller}
devices={index:4,nicModel:pv,macAddr:@MAC_ADDR@,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={index:8,nicModel:pv,macAddr:02:16:3e:4f:c4:b0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
vmName=@NAME@
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
smp=@VCPUS@
cpuType=@CPU_TYPE@
emulatedMachine=@EMULATED_MACHINE@
EOM

I simply added a second NIC (with a fixed MAC address from the locally-administered pool, since I didn't know how to auto-generate one) and added an index for the NICs too (mimicking the storage devices setup already present).
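
For the record, if you did want to auto-generate one, a small sketch (not what I actually used): keeping the 02: prefix sets the locally-administered bit and keeps the address unicast, so the remaining five octets can simply be random.

# print a random locally-administered unicast MAC, e.g. 02:a3:0f:5c:11:9e
printf '02:%02x:%02x:%02x:%02x:%02x\n' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
    $((RANDOM % 256)) $((RANDOM % 256))
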
My network setup is much simpler than yours: the ovirtmgmt bridge is on an isolated, oVirt-management-only network with no gateway; my actual LAN, with gateway and Internet access (for package updates/installation), is connected to the lan bridge; and the SAN/migration LAN is a further (not bridged) 10 Gb/s isolated network for which I do not expect to need Engine/VM reachability (so no third interface for the Engine), since all actions should be performed from the Engine but only through the vdsm hosts (I use a "split-DNS" setup by means of carefully crafted hosts files on the Engine and the vdsm hosts).
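
By "split-DNS" I just mean that the same names resolve to whichever network is appropriate on the machine doing the asking; the hostnames and addresses below are invented for the example:

# /etc/hosts on the vdsm hosts -- Engine reached over the management network
192.168.100.10  engine.example.com  engine

# /etc/hosts on the Engine VM -- hosts reached by their management addresses
192.168.100.11  node1.example.com   node1
192.168.100.12  node2.example.com   node2
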
I can confirm that the Engine VM gets created as expected and that network connectivity works.

Unfortunately, I cannot validate the whole design yet, since I'm still debugging HA-agent problems that prevent a reliable Engine/SD startup.

Hope it helps.

Greetings,
Giuseppe