[ovirt-users] Trying to make ovirt-hosted-engine-setup create a customized Engine-vm on 3.6 HC HE
Simone Tiraboschi
stirabos at redhat.com
Mon Oct 26 08:48:03 UTC 2015
On Mon, Oct 26, 2015 at 12:14 AM, Giuseppe Ragusa <giuseppe.ragusa at hotmail.com> wrote:
> Hi all,
> I'm experiencing some difficulties with the latest oVirt 3.6 snapshot.
>
> I'm trying to trick the self-hosted-engine setup into creating a custom
> engine vm with 3 nics (with fixed MACs/UUIDs).
>
> The GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine vm)
> and the network bridges (ovirtmgmt plus two other bridges, called nfs and
> lan, for the engine vm) have been preconfigured on the initial
> fully-patched CentOS 7.1 host (plus two other identical hosts which are
> waiting to be added).
>
> I'm stuck at the point where the engine vm starts successfully, but with
> only one nic present (connected to the ovirtmgmt bridge).
>
> I'm trying to obtain the modified engine vm by means of a trick which used
> to work in a previous oVirt 3.5 test setup (about a year ago, maybe more;
> it was abandoned for lack of GlusterFS-by-libgfapi support): I'm replacing
> the standard /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in
> with the following:
>
> vmId=@VM_UUID@
> memSize=@MEM_SIZE@
> display=@CONSOLE_TYPE@
> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
> devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
> devices={device:scsi,model:virtio-scsi,type:controller}
> devices={index:4,nicModel:pv,macAddr:02:50:56:3f:c4:b0,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
> devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-1113f4bfefee,address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
> devices={index:16,nicModel:pv,macAddr:02:50:56:3f:c4:c0,linkActive:true,network:nfs,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4d8e0705-8cb4-45b7-b960-7f98bb59858d,address:{bus:0x00, slot:0x0c, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
> devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
> vmName=@NAME@
> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
> smp=@VCPUS@
> cpuType=@CPU_TYPE@
> emulatedMachine=@EMULATED_MACHINE@
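>
> As an aside, the @TOKEN@ placeholders above get filled in by the setup
> before the result is handed to VDSM; a minimal Python sketch of that
> substitution (only a subset of the tokens, values taken from this run,
> and a plain string replacement assumed; the real
> ovirt-hosted-engine-setup code may differ):
>
> # Minimal sketch of the @TOKEN@ substitution applied to vm.conf.in;
> # subset of tokens only, values from this run, plain replacement assumed.
> subst = {
>     '@VM_UUID@': 'f49da721-8aa6-4422-8b91-e91a0e38aa4a',
>     '@MEM_SIZE@': '4096',
>     '@NAME@': 'HostedEngine',
>     '@VCPUS@': '2',
>     '@CPU_TYPE@': 'Westmere',
>     '@EMULATED_MACHINE@': 'pc-i440fx-rhel7.1.0',
>     '@CONSOLE_TYPE@': 'vnc',
> }
>
> with open('/usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in') as f:
>     conf = f.read()
> for token, value in subst.items():
>     conf = conf.replace(token, value)
> print(conf)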
>
> but unfortunately the vm gets created like this (output from "ps"; note
> that I'm attaching a CentOS 7.1 Netinstall ISO with an embedded kickstart:
> the installation should proceed over HTTP on the lan network, but it
> obviously fails since that nic is missing):
>
> /usr/libexec/qemu-kvm -name HostedEngine -S
> -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off
> -cpu Westmere -m 4096 -realtime mlock=off
> -smp 2,sockets=2,cores=1,threads=1
> -uuid f49da721-8aa6-4422-8b91-e91a0e38aa4a
> -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2.8,serial=2a1855a9-18fb-4d7a-b8b8-6fc898a8e827,uuid=f49da721-8aa6-4422-8b91-e91a0e38aa4a
> -no-user-config -nodefaults
> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/HostedEngine.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control
> -rtc base=2015-10-25T11:22:22,driftfix=slew
> -global kvm-pit.lost_tick_policy=discard
> -no-hpet -no-reboot -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
> -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5
> -drive file=/var/tmp/engine.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
> -drive file=/var/run/vdsm/storage/be4434bf-a5fd-44d7-8011-d5e4ac9cf523/b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc/8d075a8d-730a-4925-8779-e0ca2b3dbcf4,if=none,id=drive-virtio-disk0,format=raw,serial=b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc,cache=none,werror=stop,rerror=stop,aio=threads
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0
> -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27
> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:50:56:3f:c4:b0,bus=pci.0,addr=0x3
> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.ovirt.hosted-engine-setup.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0
> -chardev socket,id=charconsole0,path=/var/run/ovirt-vmconsole-console/f49da721-8aa6-4422-8b91-e91a0e38aa4a.sock,server,nowait
> -device virtconsole,chardev=charconsole0,id=console0
> -vnc 0:0,password
> -device cirrus-vga,id=video0,bus=pci.0,addr=0x2
> -msg timestamp=on
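>
> To double-check what VDSM actually received for the VM, the device list
> it holds can be dumped from the host; a minimal sketch using vdsm's
> vdscli bindings (assuming they are installed on the host and that the
> verbose list verb returns the devices; error handling omitted):
>
> # Minimal sketch: ask VDSM for the engine VM's full definition and
> # report the nic devices it is actually carrying.
> from vdsm import vdscli
>
> server = vdscli.connect()
> res = server.list(True, ['f49da721-8aa6-4422-8b91-e91a0e38aa4a'])
> for vm in res.get('vmList', []):
>     nics = [d for d in vm.get('devices', [])
>             if d.get('device') == 'bridge']
>     print(vm.get('vmName'), 'nics:',
>           [(n.get('network'), n.get('macAddr')) for n in nics])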
>
> There seem to be no errors in the logs.
>
> I've tried reading through some of the (admittedly limited) Python setup
> code, but I haven't found any obvious reason why the trick should no
> longer work.
>
> I know that 3.6 changed network configuration/management, and this could
> be the crux of the problem.
>
> Does anyone have any further suggestions or clues (code/logs to read)?
>
The VM creation path is now a bit different because we now use just the
vdscli library instead of vdsClient.
Please take a look at mixins.py
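
For reference, the creation through vdscli looks roughly like this (a
minimal sketch, not the literal mixins.py code; the devices list there is
built from the parsed vm.conf, which is where additional entries could get
lost):

# Minimal sketch of a VM creation through vdscli, loosely modelled on
# what ovirt-hosted-engine-setup does in mixins.py; not the literal code.
from vdsm import vdscli

conf = {
    'vmId': 'f49da721-8aa6-4422-8b91-e91a0e38aa4a',
    'vmName': 'HostedEngine',
    'memSize': '4096',
    'display': 'vnc',
    'devices': [
        # each devices={...} line from vm.conf becomes one dict like this
        {
            'device': 'bridge',
            'type': 'interface',
            'nicModel': 'pv',
            'network': 'ovirtmgmt',
            'macAddr': '02:50:56:3f:c4:b0',
            'linkActive': 'true',
        },
    ],
}

server = vdscli.connect()
print(server.create(conf))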
>
> Many thanks in advance.
>
> Kind regards,
> Giuseppe
>
> PS: please also keep my address in your replies, because I'm experiencing
> some problems between Hotmail and the oVirt mailing list