[Users] Post-Install Engine VM Changes Feasible?

Hi,

I'm in the process of installing 3.4 RC(2?) on Fedora 19. I'm using hosted engine, introspective GlusterFS+keepalived+NFS ala [1], across six nodes.

I have a layered networking topology ((V)LANs for public, internal, storage, compute and ipmi). I am comfortable doing the bridging for each interface myself via /etc/sysconfig/network-scripts/ifcfg-*.

Here's my desired topology: http://www.asciiflow.com/#Draw6325992559863447154
Here's my keepalived setup: https://gist.github.com/josh-at-knoesis/98618a16418101225726

I'm writing a lot of documentation of the many steps I'm taking. I hope to eventually release a distributed introspective all-in-one (including distributed storage) guide.

Looking at vm.conf.in, it looks like I'd by default end up with one interface on my engine, probably on my internal VLAN, as that's where I'd like the control traffic to flow. I definitely could do NAT, but I'd be most happy to see the engine have a presence on all of the LANs, if for no other reason than that I want to send backups directly over the storage VLAN.

I'll cut to it: I believe I could successfully alter the vdsm template (vm.conf.in) to give me the extra interfaces I require. It hit me, however, that I could just take the defaults for the initial install. Later, I think I'll be able to come back with virsh and make my changes to the gracefully disabled VM. Is this true?

[1] http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/

Thanks,
Joshua
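For reference, the kind of manual bridging mentioned above is usually a pair of ifcfg files per network; a minimal sketch, with hypothetical device, VLAN and address values (the actual topology uses its own names):

```ini
# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt -- the bridge itself
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.10.11
NETMASK=255.255.255.0
DELAY=0

# /etc/sysconfig/network-scripts/ifcfg-em1.10 -- VLAN 10 enslaved to the bridge
DEVICE=em1.10
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
BRIDGE=ovirtmgmt
```

One pair like this per (V)LAN gives each network its own bridge for the engine VM to attach to.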


On 15/03/2014 12:44, Giuseppe Ragusa wrote:
Hi Joshua,
------------------------------------------------------------
Date: Sat, 15 Mar 2014 02:32:59 -0400
From: josh@wrale.com
To: users@ovirt.org
Subject: [Users] Post-Install Engine VM Changes Feasible?
Hi,
I'm in the process of installing 3.4 RC(2?) on Fedora 19. I'm using hosted engine, introspective GlusterFS+keepalived+NFS ala [1], across six nodes.
I have a layered networking topology ((V)LANs for public, internal, storage, compute and ipmi). I am comfortable doing the bridging for each interface myself via /etc/sysconfig/network-scripts/ifcfg-*.
Here's my desired topology: http://www.asciiflow.com/#Draw6325992559863447154
Here's my keepalived setup: https://gist.github.com/josh-at-knoesis/98618a16418101225726
I'm writing a lot of documentation of the many steps I'm taking. I hope to eventually release a distributed introspective all-in-one (including distributed storage) guide.
Looking at vm.conf.in, it looks like I'd by default end up with one interface on my engine, probably on my internal VLAN, as that's where I'd like the control traffic to flow. I definitely could do NAT, but I'd be most happy to see the engine have a presence on all of the LANs, if for no other reason than because I want to send backups directly over the storage VLAN.
I'll cut to it: I believe I could successfully alter the vdsm template (vm.conf.in) to give me the extra interfaces I require. It hit me, however, that I could just take the defaults for the initial install. Later, I think I'll be able to come back with virsh and make my changes to the gracefully disabled VM. Is this true?
[1] http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/
Thanks, Joshua
I started from the same reference[1] and ended up "statically" modifying vm.conf.in before launching setup, like this:
cp -a /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in.orig
cat << EOM > /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in
vmId=@VM_UUID@
memSize=@MEM_SIZE@
display=@CONSOLE_TYPE@
devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
devices={device:scsi,model:virtio-scsi,type:controller}
devices={index:4,nicModel:pv,macAddr:@MAC_ADDR@,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={index:8,nicModel:pv,macAddr:02:16:3e:4f:c4:b0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
vmName=@NAME@
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
smp=@VCPUS@
cpuType=@CPU_TYPE@
emulatedMachine=@EMULATED_MACHINE@
EOM
Note that you should also be able to edit /etc/ovirt-hosted-engine/vm.conf after setup:
- put the system in global maintenance
- edit the vm.conf file on all the hosts running the hosted engine
- shut down the VM: hosted-engine --vm-shutdown
- start the VM again: hosted-engine --vm-start
- exit global maintenance

Giuseppe, Joshua: can you share your changes in a guide for hosted engine users on the ovirt.org wiki?
I simply added a second nic (with a fixed MAC address from the locally-administered pool, since I didn't know how to auto-generate one) and added an index for nics too (mimicking the storage devices setup already present).
My network setup is much simpler than yours: the ovirtmgmt bridge is on an isolated oVirt-management-only network without a gateway; my actual LAN, with a gateway and Internet access (for package updates/installation), is connected to the lan bridge; and the SAN/migration LAN is a further (not bridged) 10 Gb/s isolated network for which I do not expect to need Engine/VM reachability (so no third interface for the Engine), since all actions should be performed from the Engine but only through the vdsm hosts. (I use a "split-DNS" setup by means of carefully crafted hosts files on the Engine and the vdsm hosts.)
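The "split-DNS" trick described above amounts to giving the same hostnames different addresses depending on where resolution happens; a sketch with invented names and subnets (purely illustrative):

```text
# /etc/hosts on the Engine VM: peers resolve to management-network addresses
10.0.10.21  node1.example.com node1
10.0.10.22  node2.example.com node2

# /etc/hosts on each vdsm host: the same names resolve to SAN/migration addresses
10.0.20.21  node1.example.com node1
10.0.20.22  node2.example.com node2
```

Since /etc/hosts wins over DNS by default (per the hosts line in /etc/nsswitch.conf), each machine reaches its peers over the intended network without running a split-horizon DNS server.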
I can confirm that the engine vm gets created as expected and that network connectivity works.
Unfortunately I cannot validate the whole design yet, since I'm still debugging HA-agent problems that prevent a reliable Engine/SD startup.
Hope it helps.
Greetings, Giuseppe
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com

On Mon, Mar 17, 2014 at 8:01 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
On 15/03/2014 12:44, Giuseppe Ragusa wrote:
Hi Joshua,
------------------------------------------------------------
Date: Sat, 15 Mar 2014 02:32:59 -0400
From: josh@wrale.com
To: users@ovirt.org
Subject: [Users] Post-Install Engine VM Changes Feasible?
Hi,
I'm in the process of installing 3.4 RC(2?) on Fedora 19. I'm using hosted engine, introspective GlusterFS+keepalived+NFS ala [1], across six nodes.
I have a layered networking topology ((V)LANs for public, internal, storage, compute and ipmi). I am comfortable doing the bridging for each interface myself via /etc/sysconfig/network-scripts/ifcfg-*.
Here's my desired topology: http://www.asciiflow.com/#Draw6325992559863447154
Here's my keepalived setup: https://gist.github.com/josh-at-knoesis/98618a16418101225726
I'm writing a lot of documentation of the many steps I'm taking. I hope to eventually release a distributed introspective all-in-one (including distributed storage) guide.
Looking at vm.conf.in, it looks like I'd by default end up with one interface on my engine, probably on my internal VLAN, as that's where I'd like the control traffic to flow. I definitely could do NAT, but I'd be most happy to see the engine have a presence on all of the LANs, if for no other reason than because I want to send backups directly over the storage VLAN.
I'll cut to it: I believe I could successfully alter the vdsm template (vm.conf.in) to give me the extra interfaces I require. It hit me, however, that I could just take the defaults for the initial install. Later, I think I'll be able to come back with virsh and make my changes to the gracefully disabled VM. Is this true?
[1] http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/
Thanks, Joshua
I started from the same reference[1] and ended up "statically" modifying vm.conf.in before launching setup, like this:
[snip: vm.conf.in template replacement quoted in full earlier in the thread]
Note that you should also be able to edit /etc/ovirt-hosted-engine/vm.conf after setup:
- put the system in global maintenance
- edit the vm.conf file on all the hosts running the hosted engine
- shut down the VM: hosted-engine --vm-shutdown
- start the VM again: hosted-engine --vm-start
- exit global maintenance
Giuseppe, Joshua: can you share your changes in a guide for Hosted engine users on ovirt.org wiki?
So would you simply add a new line under the original devices line? I.e.:

devices={nicModel:pv,macAddr:00:16:3e:6d:34:78,linkActive:true,network:ovirtmgmt,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:0c8a1710-casd-407a-94e8-5b09e55fa141,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}

Are there any good practices for getting the MAC address so it can't clash with ones vdsm would generate? I assume the same applies for the deviceId? Did you also change the slot?
I simply added a second nic (with a fixed MAC address from the locally-administered pool, since I didn't know how to auto-generate one) and added an index for nics too (mimicking the storage devices setup already present).
My network setup is much simpler than yours: the ovirtmgmt bridge is on an isolated oVirt-management-only network without a gateway; my actual LAN, with a gateway and Internet access (for package updates/installation), is connected to the lan bridge; and the SAN/migration LAN is a further (not bridged) 10 Gb/s isolated network for which I do not expect to need Engine/VM reachability (so no third interface for the Engine), since all actions should be performed from the Engine but only through the vdsm hosts. (I use a "split-DNS" setup by means of carefully crafted hosts files on the Engine and the vdsm hosts.)
I can confirm that the engine vm gets created as expected and that network connectivity works.
Unfortunately I cannot validate the whole design yet, since I'm still debugging HA-agent problems that prevent a reliable Engine/SD startup.
Hope it helps.
Greetings, Giuseppe

On Tue, Apr 8, 2014 at 8:52 PM, Andrew Lau <andrew@andrewklau.com> wrote:
On Mon, Mar 17, 2014 at 8:01 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
On 15/03/2014 12:44, Giuseppe Ragusa wrote:
Hi Joshua,
------------------------------------------------------------
Date: Sat, 15 Mar 2014 02:32:59 -0400
From: josh@wrale.com
To: users@ovirt.org
Subject: [Users] Post-Install Engine VM Changes Feasible?
Hi,
I'm in the process of installing 3.4 RC(2?) on Fedora 19. I'm using hosted engine, introspective GlusterFS+keepalived+NFS ala [1], across six nodes.
I have a layered networking topology ((V)LANs for public, internal, storage, compute and ipmi). I am comfortable doing the bridging for each interface myself via /etc/sysconfig/network-scripts/ifcfg-*.
Here's my desired topology: http://www.asciiflow.com/#Draw6325992559863447154
Here's my keepalived setup: https://gist.github.com/josh-at-knoesis/98618a16418101225726
I'm writing a lot of documentation of the many steps I'm taking. I hope to eventually release a distributed introspective all-in-one (including distributed storage) guide.
Looking at vm.conf.in, it looks like I'd by default end up with one interface on my engine, probably on my internal VLAN, as that's where I'd like the control traffic to flow. I definitely could do NAT, but I'd be most happy to see the engine have a presence on all of the LANs, if for no other reason than because I want to send backups directly over the storage VLAN.
I'll cut to it: I believe I could successfully alter the vdsm template (vm.conf.in) to give me the extra interfaces I require. It hit me, however, that I could just take the defaults for the initial install. Later, I think I'll be able to come back with virsh and make my changes to the gracefully disabled VM. Is this true?
[1] http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/
Thanks, Joshua
I started from the same reference[1] and ended up "statically" modifying vm.conf.in before launching setup, like this:
[snip: vm.conf.in template replacement quoted in full earlier in the thread]
Note that you should also be able to edit /etc/ovirt-hosted-engine/vm.conf after setup:
- put the system in global maintenance
- edit the vm.conf file on all the hosts running the hosted engine
- shut down the VM: hosted-engine --vm-shutdown
- start the VM again: hosted-engine --vm-start
- exit global maintenance
Giuseppe, Joshua: can you share your changes in a guide for Hosted engine users on ovirt.org wiki?
So would you simply add a new line under the original devices line? I.e.:

devices={nicModel:pv,macAddr:00:16:3e:6d:34:78,linkActive:true,network:ovirtmgmt,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:0c8a1710-casd-407a-94e8-5b09e55fa141,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}

Are there any good practices for getting the MAC address so it can't clash with ones vdsm would generate? I assume the same applies for the deviceId? Did you also change the slot?
This worked successfully:

yum -y install python-virtinst

# generate uuid and mac address
echo 'import virtinst.util ; print virtinst.util.uuidToString(virtinst.util.randomUUID())' | python
echo 'import virtinst.util ; print virtinst.util.randomMAC()' | python

hosted-engine --set-maintenance --mode=global
nano /etc/ovirt-hosted-engine/vm.conf
# insert under earlier nicModel
# replace macaddress and uuid from above
# increment slot (so it's not the same as above nicModel)
# modify ovirtmgmt to desired bridge interface
devices={nicModel:pv,macAddr:00:16:3e:35:87:7d,linkActive:true,network:storage_network,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4c32c036-0e5a-e0b3-9ba7-bg3dfzbs40ae,address:{bus:0x00, slot:0x04, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}
hosted-engine --vm-shutdown
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none

However, this won't get propagated to any additional hosts that are installed. I'm guessing it may be this file, /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in, that gets copied for new installs, but I don't have any new hosts to test with right now.
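For what it's worth, the virtinst one-liners above can also be done with nothing but the Python standard library; here's a sketch (the function name is mine) that generates a unicast, locally-administered MAC, which by construction cannot collide with the globally-unique vendor-range MACs vdsm hands out, plus a random UUID usable as a deviceId:

```python
import random
import uuid

def random_laa_mac() -> str:
    """Random MAC in the locally-administered, unicast range.

    Setting bit 0x02 of the first octet marks the address as locally
    administered; clearing bit 0x01 keeps it unicast.  Vendor-assigned
    (globally unique) MACs never have the 0x02 bit set, so no clash.
    """
    first = (random.randint(0x00, 0xFF) | 0x02) & 0xFE
    octets = [first] + [random.randint(0x00, 0xFF) for _ in range(5)]
    return ":".join("%02x" % o for o in octets)

print(random_laa_mac())  # a fresh locally-administered MAC
print(uuid.uuid4())      # a random UUID, usable as a deviceId
```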
I simply added a second nic (with a fixed MAC address from the locally-administered pool, since I didn't know how to auto-generate one) and added an index for nics too (mimicking the storage devices setup already present).
My network setup is much simpler than yours: the ovirtmgmt bridge is on an isolated oVirt-management-only network without a gateway; my actual LAN, with a gateway and Internet access (for package updates/installation), is connected to the lan bridge; and the SAN/migration LAN is a further (not bridged) 10 Gb/s isolated network for which I do not expect to need Engine/VM reachability (so no third interface for the Engine), since all actions should be performed from the Engine but only through the vdsm hosts. (I use a "split-DNS" setup by means of carefully crafted hosts files on the Engine and the vdsm hosts.)
I can confirm that the engine vm gets created as expected and that network connectivity works.
Unfortunately I cannot validate the whole design yet, since I'm still debugging HA-agent problems that prevent a reliable Engine/SD startup.
Hope it helps.
Greetings, Giuseppe

Hi,

On 10/04/2014 02:40, Andrew Lau wrote:
On Tue, Apr 8, 2014 at 8:52 PM, Andrew Lau <andrew@andrewklau.com> wrote:
On Mon, Mar 17, 2014 at 8:01 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
On 15/03/2014 12:44, Giuseppe Ragusa wrote:
Hi Joshua,
------------------------------------------------------------
Date: Sat, 15 Mar 2014 02:32:59 -0400
From: josh@wrale.com
To: users@ovirt.org
Subject: [Users] Post-Install Engine VM Changes Feasible?
Hi,
I'm in the process of installing 3.4 RC(2?) on Fedora 19. I'm using hosted engine, introspective GlusterFS+keepalived+NFS ala [1], across six nodes.
I have a layered networking topology ((V)LANs for public, internal, storage, compute and ipmi). I am comfortable doing the bridging for each interface myself via /etc/sysconfig/network-scripts/ifcfg-*.
Here's my desired topology: http://www.asciiflow.com/#Draw6325992559863447154
Here's my keepalived setup: https://gist.github.com/josh-at-knoesis/98618a16418101225726
I'm writing a lot of documentation of the many steps I'm taking. I hope to eventually release a distributed introspective all-in-one (including distributed storage) guide.
I hope you'll publish it also on ovirt.org wiki :-)
Looking at vm.conf.in, it looks like I'd by default end up with one interface on my engine, probably on my internal VLAN, as that's where I'd like the control traffic to flow. I definitely could do NAT, but I'd be most happy to see the engine have a presence on all of the LANs, if for no other reason than because I want to send backups directly over the storage VLAN.
I'll cut to it: I believe I could successfully alter the vdsm template (vm.conf.in) to give me the extra interfaces I require. It hit me, however, that I could just take the defaults for the initial install. Later, I think I'll be able to come back with virsh and make my changes to the gracefully disabled VM. Is this true?
[1] http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/
Thanks, Joshua
I started from the same reference[1] and ended up "statically" modifying vm.conf.in before launching setup, like this:
cp -a /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in.orig
cat << EOM > /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in
vmId=@VM_UUID@
memSize=@MEM_SIZE@
display=@CONSOLE_TYPE@
devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
devices={device:scsi,model:virtio-scsi,type:controller}
devices={index:4,nicModel:pv,macAddr:@MAC_ADDR@,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={index:8,nicModel:pv,macAddr:02:16:3e:4f:c4:b0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
vmName=@NAME@
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
smp=@VCPUS@
cpuType=@CPU_TYPE@
emulatedMachine=@EMULATED_MACHINE@
EOM
Note that you should also be able to edit /etc/ovirt-hosted-engine/vm.conf after setup:
- put the system in global maintenance
- edit the vm.conf file on all the hosts running the hosted engine
- shut down the vm: hosted-engine --vm-shutdown
- start the vm again: hosted-engine --vm-start
- exit global maintenance
Giuseppe, Joshua: can you share your changes in a guide for Hosted engine users on ovirt.org wiki?
So would you simply just add a new line under the original devices line? i.e.
devices={nicModel:pv,macAddr:00:16:3e:6d:34:78,linkActive:true,network:ovirtmgmt,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:0c8a1710-casd-407a-94e8-5b09e55fa141,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}
Are there any good practices for getting the MAC address so it won't clash with the ones vdsm would generate? I assume the same applies for the deviceId? Did you also change the slot?
This worked successfully:
yum -y install python-virtinst
# generate uuid and mac address
echo 'import virtinst.util ; print virtinst.util.uuidToString(virtinst.util.randomUUID())' | python
echo 'import virtinst.util ; print virtinst.util.randomMAC()' | python
hosted-engine --set-maintenance --mode=global
nano /etc/ovirt-hosted-engine/vm.conf
# insert under earlier nicModel line
# replace macaddress and uuid from above
# increment slot (so it's not the same as the nicModel line above)
# modify ovirtmgmt to desired bridge interface
devices={nicModel:pv,macAddr:00:16:3e:35:87:7d,linkActive:true,network:storage_network,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4c32c036-0e5a-e0b3-9ba7-bg3dfzbs40ae,address:{bus:0x00,slot:0x04, domain:0x0000, type:pci,function:0x0},device:bridge,type:interface}
hosted-engine --vm-shutdown
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none
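If python-virtinst isn't available (it was later dropped from Fedora), the UUID/MAC generation step above can be done with just the Python standard library. This is only a sketch, not what hosted-engine itself uses; the 00:16:3e prefix mirrors the examples in this thread, and collisions with existing NICs are still possible, so check the result against the MACs already in vm.conf:

```python
import random
import uuid

def random_mac(prefix=(0x00, 0x16, 0x3e)):
    # 00:16:3e is the prefix used by the examples in this thread;
    # only the last three octets are randomized.
    tail = tuple(random.randint(0x00, 0xff) for _ in range(3))
    return ':'.join('%02x' % o for o in prefix + tail)

print(uuid.uuid4())    # use as deviceId
print(random_mac())    # use as macAddr
```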
However, this won't get propagated to any additional hosts that are installed. I'm guessing it may be this file /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in that gets copied for new installs, but I don't have any new hosts to test with right now.
A few comments on the above discussion:

The MAC address for the Hosted Engine VM is already generated randomly by the hosted-engine --deploy command and proposed as default in the question:

'You may specify a MAC address for the VM or accept a randomly generated default [@DEFAULT@]: '

So it shouldn't be necessary to generate another one. However, you can set the MAC address at interactive setup, or by creating an answer.conf file with the following content:

[environment:default]
OVEHOSTED_VM/vmMACAddr=str:00:16:3e:72:85:3f

changing the above MAC address to whatever you want and running:

hosted-engine --deploy --config-append=answer.conf

Note that if you change /etc/ovirt-hosted-engine/vm.conf after deployment, you must also update /etc/ovirt-hosted-engine/answers.conf in order for additional hosts to get the same configuration when you run hosted-engine --deploy. Additional hosts already deployed need to be updated manually.

Remember also that if there's interest in adding configuration steps while deploying, you can open an RFE on bugzilla.

I created an etherpad to let you list post-install changes to vm.conf, so I can get a clue on what we're missing there: http://etherpad.ovirt.org/p/hosted-engine-post-install-changes
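As a minimal sketch of the answer-file approach Sandro describes (the section header and the OVEHOSTED_VM/vmMACAddr key are copied verbatim from his message; the MAC value is just his example):

```python
# Build the one-key answer file to pass to:
#   hosted-engine --deploy --config-append=answer.conf
mac = "00:16:3e:72:85:3f"  # example value from Sandro's message
content = "[environment:default]\nOVEHOSTED_VM/vmMACAddr=str:%s\n" % mac
with open("answer.conf", "w") as f:
    f.write(content)
print(content)
```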
I simply added a second nic (with a fixed MAC address from the locally-administered pool, since I didn't know how to auto-generate one) and added an index for nics too (mimicking the storage devices setup already present).

Hi all, sorry for the late reply.

I noticed that I missed the deviceId property on my additional-nic line below, but I can confirm that the engine vm (installed with my previously modified template in /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in as outlined below) is still up and running (apparently) ok without it (I verified that the deviceId property has not been added automatically in /etc/ovirt-hosted-engine/vm.conf).

I admit that modifying a package file not marked as configuration (under /usr/share... may the FHS forgive me... :) is not best practice, but modifying the configuration one (under /etc...) afterwards seemed more error prone (needs propagation to further nodes).

In order to have a clear picture of the matter (and write/add to a wiki page on engine vm customization), I'd like to read more on the syntax of these vm.conf files (they are neither libvirt XML files nor OTOPI files) and which properties are default/needed/etc.

From simple analogy, as an example, I thought that a unique index property would be needed (as in ide/virtio disk devices) for adding a nic, but Andrew's example does not add it...

Any pointers to doc/code for further enlightenment?

Many thanks in advance, Giuseppe
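On Giuseppe's question about the vm.conf syntax: the devices= lines read as flat key:value maps with one level of nested braces. As a rough, unofficial sketch (this is not VDSM's actual parser), the lines can be picked apart by splitting items on top-level commas and keys on the first colon only, so that MAC addresses and hex values survive intact:

```python
def parse_value(s):
    # A value is either a nested {...} map or a bare string token.
    s = s.strip()
    if s.startswith('{') and s.endswith('}'):
        return parse_map(s[1:-1])
    return s

def parse_map(body):
    # Split on commas at brace depth 0, then split each item on its
    # FIRST colon only (macAddr:00:16:3e:... keeps its colons).
    items, depth, start = [], 0, 0
    for i, ch in enumerate(body):
        if ch == '{':
            depth += 1
        elif ch == '}':
            depth -= 1
        elif ch == ',' and depth == 0:
            items.append(body[start:i])
            start = i + 1
    items.append(body[start:])
    out = {}
    for item in items:
        item = item.strip()
        if not item:
            continue
        key, _, val = item.partition(':')
        out[key.strip()] = parse_value(val)
    return out

line = ("devices={nicModel:pv,macAddr:00:16:3e:6d:34:78,linkActive:true,"
        "network:ovirtmgmt,specParams:{},address:{bus:0x00, slot:0x03, "
        "domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}")
key, _, raw = line.partition('=')
dev = parse_value(raw)
print(dev)
```

Run against the sample line, this yields a dict where dev['macAddr'] is the full MAC and dev['address'] is itself a dict, which matches how the ide/virtio disk entries in the template are structured too.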
Date: Thu, 10 Apr 2014 08:40:25 +0200 From: sbonazzo@redhat.com To: andrew@andrewklau.com CC: giuseppe.ragusa@hotmail.com; josh@wrale.com; users@ovirt.org Subject: Re: [Users] Post-Install Engine VM Changes Feasible?
participants (4)
- Andrew Lau
- Giuseppe Ragusa
- Joshua Dotson
- Sandro Bonazzola