On Mon, Nov 23, 2015 at 10:17 PM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
On Tue, Oct 27, 2015, at 00:10, Giuseppe Ragusa wrote:
> On Mon, Oct 26, 2015, at 09:48, Simone Tiraboschi wrote:
> >
> >
> > On Mon, Oct 26, 2015 at 12:14 AM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
> >> Hi all,
> >> I'm experiencing some difficulties with the latest oVirt 3.6 snapshot.
> >>
> >> I'm trying to trick the self-hosted-engine setup into creating a custom
> >> engine vm with 3 nics (with fixed MACs/UUIDs).
> >>
> >> The GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine
> >> vm) and the network bridges (ovirtmgmt plus two other bridges, called
> >> nfs and lan, for the engine vm) have been preconfigured on the initial
> >> fully-patched CentOS 7.1 host (plus two other identical hosts which are
> >> waiting to be added).
> >>
> >> I'm stuck at a point where the engine vm starts successfully but with
> >> only one nic present (connected to the ovirtmgmt bridge).
> >>
> >> I'm trying to obtain the modified engine vm by means of a trick which
> >> used to work in a previous oVirt 3.5 test setup about a year ago, maybe
> >> more (a setup aborted for lack of GlusterFS-by-libgfapi support): I'm
> >> substituting the standard
> >> /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in with the
> >> following:
> >>
> >> vmId=@VM_UUID@
> >> memSize=@MEM_SIZE@
> >> display=@CONSOLE_TYPE@
> >> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
> >> devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
> >> devices={device:scsi,model:virtio-scsi,type:controller}
> >> devices={index:4,nicModel:pv,macAddr:02:50:56:3f:c4:b0,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
> >> devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-1113f4bfefee,address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
> >> devices={index:16,nicModel:pv,macAddr:02:50:56:3f:c4:c0,linkActive:true,network:nfs,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4d8e0705-8cb4-45b7-b960-7f98bb59858d,address:{bus:0x00, slot:0x0c, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
> >> devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
> >> vmName=@NAME@
> >> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
> >> smp=@VCPUS@
> >> cpuType=@CPU_TYPE@
> >> emulatedMachine=@EMULATED_MACHINE@
> >>
> >> but unfortunately the vm gets created like this (output from "ps"; note
> >> that I'm attaching a CentOS 7.1 Netinstall ISO with an embedded
> >> kickstart: the installation should proceed by HTTP on the lan network
> >> but obviously fails):
> >>
> >> /usr/libexec/qemu-kvm -name HostedEngine -S
> >> -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Westmere -m 4096
> >> -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1
> >> -uuid f49da721-8aa6-4422-8b91-e91a0e38aa4a
> >> -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2.8,serial=2a1855a9-18fb-4d7a-b8b8-6fc898a8e827,uuid=f49da721-8aa6-4422-8b91-e91a0e38aa4a
> >> -no-user-config -nodefaults
> >> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/HostedEngine.monitor,server,nowait
> >> -mon chardev=charmonitor,id=monitor,mode=control
> >> -rtc base=2015-10-25T11:22:22,driftfix=slew
> >> -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on
> >> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> >> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
> >> -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5
> >> -drive file=/var/tmp/engine.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> >> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
> >> -drive file=/var/run/vdsm/storage/be4434bf-a5fd-44d7-8011-d5e4ac9cf523/b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc/8d075a8d-730a-4925-8779-e0ca2b3dbcf4,if=none,id=drive-virtio-disk0,format=raw,serial=b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc,cache=none,werror=stop,rerror=stop,aio=threads
> >> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0
> >> -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27
> >> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:50:56:3f:c4:b0,bus=pci.0,addr=0x3
> >> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.com.redhat.rhevm.vdsm,server,nowait
> >> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >> -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.qemu.guest_agent.0,server,nowait
> >> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >> -chardev socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.ovirt.hosted-engine-setup.0,server,nowait
> >> -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0
> >> -chardev socket,id=charconsole0,path=/var/run/ovirt-vmconsole-console/f49da721-8aa6-4422-8b91-e91a0e38aa4a.sock,server,nowait
> >> -device virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password
> >> -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -msg timestamp=on
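> >>
> >> (Note that only a single virtio-net-pci device appears above, the one
> >> with mac=02:50:56:3f:c4:b0 connected to ovirtmgmt: the lan and nfs nics
> >> from the template are missing.)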
> >>
> >> There seem to be no errors in the logs.
> >>
> >> I've tried reading some of the (limited) Python setup code, but I've
> >> not found any obvious reason why the trick should no longer work.
> >>
> >> I know that 3.6 handles network configuration/management differently,
> >> and this could be the sticking point.
> >>
> >> Does anyone have any further suggestion or clue (code/logs to read)?
> >
> > The VM creation path is now a bit different because we use the vdscli
> > library directly instead of vdsClient.
> > Please take a look at mixins.py.
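> >
> > For reference, this is roughly the shape of the new flow (a sketch only,
> > assuming the 3.6-era vdscli API; names are approximate, not the exact
> > mixins.py code):
> >
> >     from vdsm import vdscli
> >
> >     # connect() returns an XML-RPC proxy to the local vdsmd
> >     cli = vdscli.connect()
> >     # the VM description is now assembled as a plain Python dict
> >     # instead of being rendered from the vm.conf.in template
> >     conf = {
> >         'vmId': 'f49da721-8aa6-4422-8b91-e91a0e38aa4a',
> >         'vmName': 'HostedEngine',
> >         'devices': [],  # disks and nics are appended here
> >     }
> >     status = cli.create(conf)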
>
> Many thanks for your very valuable hint:
>
> I've restored the original
/usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in and I've
managed to obtain the 3-nics-customized vm by modifying
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/mixins.py like
this ("diff -Naur" output):
Hi Simone,
it seems that I spoke too soon :(
A separate network issue (already reported to the list) prevented me from
successfully completing the setup in its final phase (registering the host
inside the Engine), so everything seemed fine while I was stuck there :)
Now that I've solved that (I'll inform the list asap in a separate message)
and the setup has ended successfully, it seems that the last step of the HE
setup (shutting down the Engine vm to place it under HA agent control)
starts/creates a "different vm", and my virtual hardware customizations are
gone (only one NIC present, connected to ovirtmgmt).
My wild guess: maybe I need BOTH the mixins.py AND the vm.conf.in
customizations? ;)
Yes, you are right: the final configuration is still generated from the
template, so you need to fix both.
It seems (from /etc/ovirt-hosted-engine/hosted-engine.conf) that the
Engine vm definition is now in /var/run/ovirt-hosted-engine-ha/vm.conf
The vm configuration is now read from the shared storage domain, converting
it back from the OVF_STORE; the idea is to let you edit it from the engine
without having to write it on each host.
/var/run/ovirt-hosted-engine-ha/vm.conf is just a temporary copy.
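(If you want to check what the agent is actually using at any given moment,
you can simply inspect that temporary copy, e.g. with
'cat /var/run/ovirt-hosted-engine-ha/vm.conf'.)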
As you are probably still not able to import the hosted-engine storage
domain in the engine due to a well-known bug, your system will fall back to
the initial vm.conf configuration, which is still on the shared storage, and
you can fix it manually.
Please follow this procedure, substituting '192.168.1.115:_Virtual_ext35u36'
with the mount point of the hosted-engine storage domain on your system:
mntpoint=/rhev/data-center/mnt/192.168.1.115:_Virtual_ext35u36
dir=`mktemp -d` && cd $dir
sdUUID_line=$(grep sdUUID /etc/ovirt-hosted-engine/hosted-engine.conf)
sdUUID=${sdUUID_line:7:36}
conf_volume_UUID_line=$(grep conf_volume_UUID /etc/ovirt-hosted-engine/hosted-engine.conf)
conf_volume_UUID=${conf_volume_UUID_line:17:36}
conf_image_UUID_line=$(grep conf_image_UUID /etc/ovirt-hosted-engine/hosted-engine.conf)
conf_image_UUID=${conf_image_UUID_line:16:36}
dd if=$mntpoint/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID 2>/dev/null | tar -xvf -
# directly edit vm.conf as you need
tar -cO * | dd of=$mntpoint/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID
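The :N:36 substring expansions above simply strip the key= prefix
(len('sdUUID=') is 7, len('conf_volume_UUID=') is 17, len('conf_image_UUID=')
is 16) and keep the 36-character UUID that follows; assuming the plain
key=value format of hosted-engine.conf, an equivalent extraction would be:

sdUUID=$(grep ^sdUUID= /etc/ovirt-hosted-engine/hosted-engine.conf | cut -d= -f2)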
Once your engine imports the hosted-engine storage domain, it will generate
an OVF_STORE with the configuration of the engine VM; you will then be able
to edit some parameters from the engine, and the agent will read the VM
configuration from there.
Many thanks for your assistance (and obviously I just sent a related
wishlist item on the HE setup ;)
Regards,
Giuseppe
>
************************************************************************************
>
> --- mixins.py.orig 2015-10-20 16:57:40.000000000 +0200
> +++ mixins.py 2015-10-26 22:22:58.351223922 +0100
> @@ -25,6 +25,7 @@
> import random
> import string
> import time
> +import uuid
>
>
> from ovirt_hosted_engine_setup import constants as ohostedcons
> @@ -247,6 +248,44 @@
> ]['@BOOT_PXE@'] == ',bootOrder:1':
> nic['bootOrder'] = '1'
> conf['devices'].append(nic)
> + nic2 = {
> + 'nicModel': 'pv',
> + 'macAddr': '02:50:56:3f:c4:a0',
> + 'linkActive': 'true',
> + 'network': 'lan',
> + 'filter': 'vdsm-no-mac-spoofing',
> + 'specParams': {},
> + 'deviceId': str(uuid.uuid4()),
> + 'address': {
> + 'bus': '0x00',
> + 'slot': '0x09',
> + 'domain': '0x0000',
> + 'type': 'pci',
> + 'function': '0x0'
> + },
> + 'device': 'bridge',
> + 'type': 'interface',
> + }
> + conf['devices'].append(nic2)
> + nic3 = {
> + 'nicModel': 'pv',
> + 'macAddr': '02:50:56:3f:c4:c0',
> + 'linkActive': 'true',
> + 'network': 'nfs',
> + 'filter': 'vdsm-no-mac-spoofing',
> + 'specParams': {},
> + 'deviceId': str(uuid.uuid4()),
> + 'address': {
> + 'bus': '0x00',
> + 'slot': '0x0c',
> + 'domain': '0x0000',
> + 'type': 'pci',
> + 'function': '0x0'
> + },
> + 'device': 'bridge',
> + 'type': 'interface',
> + }
> + conf['devices'].append(nic3)
>
> cli = self.environment[ohostedcons.VDSMEnv.VDS_CLI]
> status = cli.create(conf)
>
>
************************************************************************************
>
> Obviously this is a horrible ad-hoc hack that I'm not able to
> generalize/clean up now: doing so would require (apart from a deeper
> understanding of the whole setup code/workflow) some well-thought-out
> design decisions, and, given the effective deprecation of the
> aforementioned easy-to-modify vm.conf.in template in favor of hardwired
> Python program logic, it seems that such functionality is not very high
> on the development priority list atm ;)
>
> Many thanks again!
>
> Kind regards,
> Giuseppe
>
> >> Many thanks in advance.
> >>
> >> Kind regards,
> >> Giuseppe
> >>
> >> PS: please also keep my address in the replies because I'm experiencing
> >> some problems between Hotmail and the oVirt mailing list
> >>
> >> _______________________________________________
> >> Users mailing list
> >> Users(a)ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users