Greetings and observations from an oVirt noob
by Kenneth Marsh
Hello all,
I do development operations for part of a software division of a large
multinational. I'm an experienced user of VMware and Amazon AWS, soon to be
pushed onto Azure, and I've found a common thread among all these solutions -
they are all expensive enough that my budget will certainly not be approved
for them. Instead I'm deferred to the IT part of the organisation, which
operates too slowly and inefficiently (in terms of both cost and time) for my
requirements. This is what led me to RHEV, and ultimately to oVirt. This is
a feasibility study for what may ultimately be a RHEV-based data centre in
a new office, and if I succeed we will be doing more on a fixed budget by
using more RHEV and less Azure.
I spent the weekend working with oVirt and I'm very impressed. I had no
idea such a comprehensive enterprise-class solution was even available.
Being a complete newcomer, I started without a clue and after a weekend had
set up a nearly-working data centre including an oVirt hypervisor node, all
on old Dell notebooks loaned to me temporarily by our IT group. I started
with RHEV but decided to use oVirt for two reasons - one being to see
what's possible with the latest and greatest, the other because RHEV
required some licensing I've not yet purchased. Long term it'll have to be
RHEV for enterprise support reasons I'm sure many are familiar with.
There are a few things I found, from a newcomer's perspective, very unclear.
- What is oVirt, vs oVirt Engine, vs oVirt Node, vs an oVirt host? Search
for documentation on any one of these and you get spammed with references to
the others. I think I've worked out that these are, respectively, the
collective suite of products, the management centre, the bare-metal
hypervisor, and a participating member server.
- Which versions of CentOS/Fedora/oVirt Node are compatible at which
oVirt compatibility level? This would normally be addressed in the release
notes. It was also confusing to discover that oVirt Node 3.2.1 is compatible
at the 3.5 level. The answer to this remains unclear, but I'm now trying to
use Fedora 22 across the board with oVirt Node 3.2.1 and this seems to be
working, although I haven't gotten a plain server into a cluster yet, only
oVirt Nodes.
- Storage domains - much doco about them being needed and how to
configure them, but nothing about what they are or why they are needed. I
would have expected an oVirt node to be capable of both data and ISO
storage, but apparently there needs to be an NFS or iSCSI filesystem there
first? And there's local storage vs shared - another concept with much
written about how to prepare and add it, but not about why one would want
it or what it means. (My current working notes on preparing an NFS data
domain are sketched just below.)
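For what it's worth, here is how I currently understand the NFS preparation
step, pieced together from the wiki and trial and error, so please treat it
as a sketch rather than gospel (the paths and where the export lives are my
own invention):

# on whichever box will serve NFS (in my lab it's one of the loaned notebooks)
mkdir -p /exports/data
chown 36:36 /exports/data    # vdsm:kvm - the UID/GID the hosts access the share as
chmod 0755 /exports/data
echo '/exports/data *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
exportfs -ra
# then add it in the webadmin as a new NFS data domain: server:/exports/data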
I think with further internet combing and trial and error I'm very
likely to figure it all out. I hope all goes well so I can implement this
stuff in our new data centre, and then I'd be keen to contribute some of my
own tech writing.
Meanwhile, I hope to be active on this mailing list, and I thank everyone in
advance for sharing their oVirt experience. For any who are working on the
doco: thanks very much for the plethora of material out there already, and I
hope the above bullet points help you see where the doco most needs more
attention - at least from the perspective of one who has just come across
oVirt.
Kind Regards,
Ken Marsh
Brisbane, Australia
Engine upgrade error
by Frank Rothenstein
Hello,
I tried several times to upgrade my ovirt-engine 3.5.5 to 3.6. Every
time the setup stops at updating the DB schema. The log reveals what
happens: the setup is complaining about a duplicated key - the relevant
part of the log is below. Of course there is only one network on the
interface.
Can anybody help me find a solution?
Thanks, Frank
Running upgrade sql script '/usr/share/ovirt-engine/dbscripts/upgrade/03_06_1670_unique_vds_id_network_name_vds_interface.sql'...

2015-11-30 08:15:26 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:941 execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20151130073216-erdh9f.log', '-c', 'apply'] stderr:
psql:/usr/share/ovirt-engine/dbscripts/upgrade/03_06_1670_unique_vds_id_network_name_vds_interface.sql:1: ERROR: could not create unique index "vds_interface_vds_id_network_name_unique"
DETAIL: Key (vds_id, network_name)=(ba954f0f-6ecb-4ec8-b169-7e0a1147b4cd, KHnetz) is duplicated.
CONTEXT: SQL statement "ALTER TABLE vds_interface ADD CONSTRAINT vds_interface_vds_id_network_name_unique unique (vds_id, network_name)"
PL/pgSQL function fn_db_create_constraint(character varying,character varying,text) line 4 at EXECUTE statement
FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/03_06_1670_unique_vds_id_network_name_vds_interface.sql

2015-11-30 08:15:26 DEBUG otopi.context context._executeMethod:156 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", line 291, in _misc
    oenginecons.EngineDBEnv.PGPASS_FILE
  File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 946, in execute
    command=args[0],
RuntimeError: Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
2015-11-30 08:15:26 ERROR otopi.context context._executeMethod:165 Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
2015-11-30 08:15:26 DEBUG otopi.transaction transaction.abort:134 aborting 'Yum Transaction'
2015-11-30 08:15:26 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:95 Yum Performing yum transaction rollback
hosted engine vm.conf missing
by Thomas Scofield
I have recently set up an oVirt hosted engine on iSCSI storage, and after a
reboot of the system I am unable to start the hosted engine. The agent.log
gives errors indicating there is a missing value in the vm.conf file, but
the vm.conf file does not appear in the location indicated. There is no
error shown at the point where the agent attempts to reload vm.conf. Any
ideas on how to get the hosted engine up and running?
MainThread::INFO::2015-11-26
21:31:13,071::hosted_engine::699::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file:
/var/run/vdsm/storage/ddf4a26b-61ff-49a4-81db-9f82da35e44b/6ed6d868-aaf3-4b3f-bdf0-a4ad262709ae/1fe5b7fc-eae7-4f07-a2fe-5a082e14c876)
MainThread::INFO::2015-11-26
21:31:13,072::upgrade::836::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(upgrade)
Host configuration is already up-to-date
MainThread::INFO::2015-11-26
21:31:13,072::hosted_engine::422::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Reloading vm.conf from the shared storage domain
MainThread::ERROR::2015-11-26
21:31:13,100::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Error: ''Configuration value not found:
file=/var/run/ovirt-hosted-engine-ha/vm.conf,
key=memSize'' - trying to restart agent
MainThread::WARNING::2015-11-26
21:31:18,105::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '9'
MainThread::ERROR::2015-11-26
21:31:18,106::agent::210::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Too many errors occurred, giving up. Please review the log and consider
filing a bug.
MainThread::INFO::2015-11-26
21:31:18,106::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
Agent shutting down
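In case it helps, would something like this be a sane way to check what is
actually inside the config volume on the shared storage? It's adapted from a
dd/tar procedure I saw on this list, so very much a guess for iSCSI; the
config keys come from my /etc/ovirt-hosted-engine/hosted-engine.conf and the
storage path pattern from the agent.log above:

sdUUID=$(grep ^sdUUID /etc/ovirt-hosted-engine/hosted-engine.conf | cut -d= -f2)
conf_image=$(grep ^conf_image_UUID /etc/ovirt-hosted-engine/hosted-engine.conf | cut -d= -f2)
conf_volume=$(grep ^conf_volume_UUID /etc/ovirt-hosted-engine/hosted-engine.conf | cut -d= -f2)
# list the files in the config archive - vm.conf should be one of them
dd if=/var/run/vdsm/storage/$sdUUID/$conf_image/$conf_volume 2>/dev/null | tar -tvf -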
Windows Networking Issues -
by Matt Wells
Hi all, I have a question about Windows VMs and a networking issue I'm
having.
Here's the setup:
* oVirt - 3.5.1.1-1
* Hypervisors are CentOS 6.7 boxes with 2 NICs in bond0 ('mode=4 miimon=100
lacp_rate=1')
* On bond0 I have a few networks using VLAN tagging.
* Networks are 5, 10, 15, 20 - all on an external switch
Network 15 has a Windows 2012 R2 server and a CentOS 6.7 server on it.
The rest of the networks have a few Linux boxes each.
Every Linux box on every network is happy. However, any and all Windows
boxes I bring online are incapable of patching or hitting the web. I
pointed the Windows box to the Linux box next to it as a proxy (after
installing squid on it). When I do that, the Windows box has no issues at
all; it's only when it's attempting to leave on its own.
On my firewall I put in a 'permit any any' for the M$ box's IP, however all
I see is TCP resets in PCAPs.
I've been playing with this for some time but can't seem to find the issue.
It would be one thing if everything on network 15 was bad, but the Linux box
on that network is fine. Here's the rub: I'm 99.999% sure this used to work.
gggrrr...
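For completeness, this is roughly the kind of capture I've been doing on the
hypervisor to see whether the resets already show up on the tagged interface
before they ever reach the firewall (the interface name matches my bond setup
above, but the VM's IP here is made up - substitute your own):

# watch for TCP RSTs to/from the Windows VM on the vlan 15 sub-interface
tcpdump -nn -i bond0.15 'host 192.168.15.10 and tcp[tcpflags] & tcp-rst != 0'
# and rule out checksum/segmentation offload weirdness on the bond slaves
ethtool -k eth0 | egrep 'checksum|segmentation'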
Any assistance anyone can offer would be amazingly appreciated.
Thank you for taking the time to read this.
Re: [ovirt-users] [ANN] oVirt 3.6.1 First Release Candidate is now available for testing
by jvdwege
Sandro Bonazzola schreef op 2015-11-30 15:10:
> On Mon, Nov 30, 2015 at 1:56 PM, Joop <jvdwege(a)xs4all.nl> wrote:
>
>> On 30-11-2015 13:06, Sandro Bonazzola wrote:
>>
>> On Sat, Nov 28, 2015 at 1:34 PM, Joop <jvdwege(a)xs4all.nl> wrote:
>> On 25-11-2015 15:28, Sandro Bonazzola wrote:
>>> The oVirt Project is pleased to announce the availability
>>> of the First Release Candidate of oVirt 3.6.1 for testing, as of
>>> November 25th, 2015.
>>>
>>> This release is available now for Fedora 22,
>>> Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
>>> Red Hat Enterprise Linux >= 7.1, CentOS Linux >= 7.1 (or similar).
>>>
>>> This release supports Hypervisor Hosts running
>>> Red Hat Enterprise Linux >= 7.1, CentOS Linux >= 7.1 (or similar)
>> and
>>> Fedora 22.
>>> Highly experimental support for Debian 8.1 Jessie has been added
>> too.
>>>
>>> This release of oVirt 3.6.1 includes numerous bug fixes.
>>> See the release notes [1] for an initial list of the new features
>> and bugs
>>> fixed.
>>>
>> Tried the 3.6.1 prerelease but the sanlock error 22 is still there
>> and it's not possible to activate the imported hosted-engine storage
>> domain.
>> Host: F22, hosted-engine: CentOS 7.1, storage domain(s): NFS.
>>
>> Sanlock error 22 shows up because BZ 1269768 hasn't been fixed yet.
>>
>> But if you don't import the hosted engine storage everything else
>> should still work fine.
> Ah, but that won't work for my use case since I need to import an
> existing data domain and that won't work without a working data
> domain.
> Will see if creating a dummy small data domain will let me import the
> real one.
>
> it should, let me know if it didn't work.
Thanks that worked :-)
> Adding Simone and Roy so they have better insight into why people
> keep trying to import the hosted-engine domain even though it's not
> fixed yet :-)
Sorry for being pushy; I just wanted to get on with my oVirt stuff and felt
a little frustrated that such a basic feature didn't work (IMHO).
Again, apologies - you're all working hard to get things fixed and I
shouldn't complain.
Regards,
Joop
Windows 10
by Koen Vanoppen
Dear all,
Yes, another question :-). This time it's about Windows 10.
I'm running oVirt 3.5.4 and I don't manage to install Windows 10 on it.
It keeps giving me a blue screen (yes, I know, it's still a Windows... ;-) )
on reboot.
Are there any special settings you need to enable when creating the VM?
Which OS do I need to select? Or shall I just wait until the release of
oVirt 3.6 :-) ?
Kind regards,
Koen
Corrupted disks
by Koen Vanoppen
Dear all,
Lately we have been experiencing some strange behaviour on our VMs...
Every now and then we have disks that go corrupt. Is there a chance that
oVirt is the issue here, or...? It happens (luckily) on our DEV/UAT cluster.
In the last 4 weeks we have already had 6 VMs go totally corrupt...
Kind regards,
Koen
Re: [ovirt-users] Engine upgrade error - 03_06_1670_unique_vds_id_network_name_vds_interface.sql
by Yedidyah Bar David
On Mon, Nov 30, 2015 at 3:34 PM, Frank Rothenstein
<f.rothenstein(a)bodden-kliniken.de> wrote:
> Hello,
>
> I tried several times to upgrade my ovirt-engine 3.5.5 to 3.6. Every
> time the setup stops at updating the DB schema. The log reveals what
> happens: the setup is complaining about a duplicated key - the relevant
> part of the log is below. Of course there is only one network on the
> interface.
>
> Can anybody help me find a solution?
>
> Thanks, Frank
>
> Running upgrade sql script '/usr/share/ovirt-engine/dbscripts/upgrade/03_06_1670_unique_vds_id_network_name_vds_interface.sql'...
>
> 2015-11-30 08:15:26 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:941 execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20151130073216-erdh9f.log', '-c', 'apply'] stderr:
> psql:/usr/share/ovirt-engine/dbscripts/upgrade/03_06_1670_unique_vds_id_network_name_vds_interface.sql:1: ERROR: could not create unique index "vds_interface_vds_id_network_name_unique"
> DETAIL: Key (vds_id, network_name)=(ba954f0f-6ecb-4ec8-b169-7e0a1147b4cd, KHnetz) is duplicated.
> CONTEXT: SQL statement "ALTER TABLE vds_interface ADD CONSTRAINT vds_interface_vds_id_network_name_unique unique (vds_id, network_name)"
> PL/pgSQL function fn_db_create_constraint(character varying,character varying,text) line 4 at EXECUTE statement
> FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/03_06_1670_unique_vds_id_network_name_vds_interface.sql
Can you post the output of 'select * from vds_interface' from the engine database?
Adding Eli and changing subject.
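If the table is too big to post, a grouped query along these lines should
narrow it down to just the duplicated keys first (a sketch - connection
options may differ on your setup; the setup log shows database and user
'engine'):

psql -U engine -d engine -c "select vds_id, network_name, count(*) from vds_interface group by vds_id, network_name having count(*) > 1;"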
>
> 2015-11-30 08:15:26 DEBUG otopi.context context._executeMethod:156 method exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in _executeMethod
>     method['method']()
>   File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", line 291, in _misc
>     oenginecons.EngineDBEnv.PGPASS_FILE
>   File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 946, in execute
>     command=args[0],
> RuntimeError: Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
> 2015-11-30 08:15:26 ERROR otopi.context context._executeMethod:165 Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
> 2015-11-30 08:15:26 DEBUG otopi.transaction transaction.abort:134 aborting 'Yum Transaction'
> 2015-11-30 08:15:26 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:95 Yum Performing yum transaction rollback
--
Didi
Re: [ovirt-users] [SOLVED] Re: Trying to make ovirt-hosted-engine-setup create a customized Engine-vm on 3.6 HC HE
by Giuseppe Ragusa
On Wed, Nov 25, 2015, at 12:10, Simone Tiraboschi wrote:
>
> On Mon, Nov 23, 2015 at 10:17 PM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
>> On Tue, Oct 27, 2015, at 00:10, Giuseppe Ragusa wrote:
> On Mon, Oct 26, 2015, at 09:48, Simone Tiraboschi wrote:
> >
> > On Mon, Oct 26, 2015 at 12:14 AM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
> >> Hi all,
> >> I'm experiencing some difficulties using oVirt 3.6 latest snapshot.
> >>
> >> I'm trying to trick the self-hosted-engine setup to create a custom engine vm with 3 nics (with fixed MACs/UUIDs).
> >>
> >> The GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine vm) and the network bridges (ovirtmgmt and other two bridges, called nfs and lan, for the engine vm) have been preconfigured on the initial fully-patched CentOS 7.1 host (plus other two identical hosts which are awaiting to be added).
> >>
> >> I'm stuck at a point with the engine vm successfully starting but with only one nic present (connected to the ovirtmgmt bridge).
> >>
> >> I'm trying to obtain the modified engine vm by means of a trick which used to work in a previous (aborted because of lacking GlusterFS-by-libgfapi support) oVirt 3.5 test setup (about a year ago, maybe more): I'm substituting the standard /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in with the following:
> >> vmId=@VM_UUID@
> >> memSize=@MEM_SIZE@
> >> display=@CONSOLE_TYPE@
> >> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
> >> devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
> >> devices={device:scsi,model:virtio-scsi,type:controller}
> >> devices={index:4,nicModel:pv,macAddr:02:50:56:3f:c4:b0,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
> >> devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-1113f4bfefee,address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
> >> devices={index:16,nicModel:pv,macAddr:02:50:56:3f:c4:c0,linkActive:true,network:nfs,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4d8e0705-8cb4-45b7-b960-7f98bb59858d,address:{bus:0x00, slot:0x0c, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
> >> devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
> >> vmName=@NAME@
> >> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
> >> smp=@VCPUS@
> >> cpuType=@CPU_TYPE@
> >> emulatedMachine=@EMULATED_MACHINE@
> >>
> >> but unfortunately the vm gets created like this (output from "ps"; note that I'm attaching a CentOS 7.1 Netinstall ISO with an embedded kickstart: the installation should proceed by HTTP on the lan network but obviously fails):
> >>
> >> /usr/libexec/qemu-kvm -name HostedEngine -S -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Westmere -m 4096 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid f49da721-8aa6-4422-8b91-e91a0e38aa4a -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2.8,serial=2a1855a9-18fb-4d7a-b8b8-6fc898a8e827,uuid=f49da721-8aa6-4422-8b91-e91a0e38aa4a -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/HostedEngine.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2015-10-25T11:22:22,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/var/tmp/engine.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/var/run/vdsm/storage/be4434bf-a5fd-44d7-8011-d5e4ac9cf523/b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc/8d075a8d-730a-4925-8779-e0ca2b3dbcf4,if=none,id=drive-virtio-disk0,format=raw,serial=b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:50:56:3f:c4:b0,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.ovirt.hosted-engine-setup.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0 -chardev socket,id=charconsole0,path=/var/run/ovirt-vmconsole-console/f49da721-8aa6-4422-8b91-e91a0e38aa4a.sock,server,nowait -device virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -msg timestamp=on
> >>
> >> There seem to be no errors in the logs.
> >>
> >> I've tried reading some (limited) Python setup code but I've not found any obvious reason why the trick should not work anymore.
> >>
> >> I know that 3.6 has different network configuration/management and this could be the hot point.
> >>
> >> Does anyone have any further suggestion or clue (code/logs to read)?
> >
> > The VM creation path is now a bit different because we use just the vdscli library instead of vdsClient.
> > Please take a look at mixins.py
> Many thanks for your very valuable hint:
>
> I've restored the original /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in and I've managed to obtain the 3-nics-customized vm by modifying /usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/mixins.py like this ("diff -Naur" output):
>> Hi Simone,
>> it seems that I spoke too soon :(
>>
>> A separate network issue (already reported to the list) prevented me from successfully closing up the setup in its final phase (registering the host inside the Engine), so all seemed well while being stuck there :)
>>
>> Now that I've solved that (for which I'm informing the list asap in a separate message) and the setup ended successfully, it seems that the last step of the HE setup (shutting down the Engine vm to place it under HA agent control) starts/creates a "different vm" and my virtual hardware customizations seem to be gone (only one NIC present, connected to ovirtmgmt).
>>
>> My wild guess: maybe I need BOTH the mixins.py AND the vm.conf.in customizations? ;)
> Yes, you are right: the final configuration is still generated from the template, so you need to fix both.
>
>> It seems (from /etc/ovirt-hosted-engine/hosted-engine.conf) that the Engine vm definition is now in /var/run/ovirt-hosted-engine-ha/vm.conf
>
> The vm configuration is now read from the shared domain, converting back from the OVF_STORE; the idea is to let you edit it from the engine without the need to write it on each host. /var/run/ovirt-hosted-engine-ha/vm.conf is just a temporary copy.
>
> As you are probably still not able to import the hosted-engine storage domain in the engine due to a well-known bug, your system will fall back to the initial vm.conf configuration still on the shared storage, and you can manually fix it. Please follow this procedure, substituting '192.168.1.115:_Virtual_ext35u36' with the mount point of the hosted-engine storage domain on your system:
>
> mntpoint=/rhev/data-center/mnt/192.168.1.115:_Virtual_ext35u36
> dir=`mktemp -d` && cd $dir
> sdUUID_line=$(grep sdUUID /etc/ovirt-hosted-engine/hosted-engine.conf)
> sdUUID=${sdUUID_line:7:36}
> conf_volume_UUID_line=$(grep conf_volume_UUID /etc/ovirt-hosted-engine/hosted-engine.conf)
> conf_volume_UUID=${conf_volume_UUID_line:17:36}
> conf_image_UUID_line=$(grep conf_image_UUID /etc/ovirt-hosted-engine/hosted-engine.conf)
> conf_image_UUID=${conf_image_UUID_line:16:36}
> dd if=$mntpoint/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID 2>/dev/null | tar -xvf -
> # directly edit vm.conf as you need
> tar -cO * | dd of=$mntpoint/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID
>
> When your engine imports the hosted-engine storage domain it will generate an OVF_STORE with the configuration of the engine VM; you will then be able to edit some parameters from the engine, and the agent will read the VM configuration from there.
Hi Simone, I followed your advice and modified the vm.conf inside the
conf_volume tar archive.
Then I put the system (still with only one physical host) in global
maintenance from the host:
hosted-engine --set-maintenance --mode=global
Then I powered off the Engine vm normally and confirmed it from the
host:
hosted-engine --vm-status
Then restarted it from the host:
hosted-engine --vm-start
Finally, I exited maintenance from the host:
hosted-engine --set-maintenance --mode=none
Waited a bit and verified it:
hosted-engine --vm-status
It all worked as expected!
Many thanks again.
Regards, Giuseppe
>> Many thanks for your assistance (and obviously I just sent a related wishlist item on the HE setup ;)
>>
>> Regards,
>> Giuseppe
> ************************************************************************
>
> --- mixins.py.orig	2015-10-20 16:57:40.000000000 +0200
> +++ mixins.py	2015-10-26 22:22:58.351223922 +0100
> @@ -25,6 +25,7 @@
>  import random
>  import string
>  import time
> +import uuid
>
>
>  from ovirt_hosted_engine_setup import constants as ohostedcons
> @@ -247,6 +248,44 @@
>              ]['@BOOT_PXE@'] == ',bootOrder:1':
>                  nic['bootOrder'] = '1'
>              conf['devices'].append(nic)
> +            nic2 = {
> +                'nicModel': 'pv',
> +                'macAddr': '02:50:56:3f:c4:a0',
> +                'linkActive': 'true',
> +                'network': 'lan',
> +                'filter': 'vdsm-no-mac-spoofing',
> +                'specParams': {},
> +                'deviceId': str(uuid.uuid4()),
> +                'address': {
> +                    'bus': '0x00',
> +                    'slot': '0x09',
> +                    'domain': '0x0000',
> +                    'type': 'pci',
> +                    'function': '0x0'
> +                },
> +                'device': 'bridge',
> +                'type': 'interface',
> +            }
> +            conf['devices'].append(nic2)
> +            nic3 = {
> +                'nicModel': 'pv',
> +                'macAddr': '02:50:56:3f:c4:c0',
> +                'linkActive': 'true',
> +                'network': 'nfs',
> +                'filter': 'vdsm-no-mac-spoofing',
> +                'specParams': {},
> +                'deviceId': str(uuid.uuid4()),
> +                'address': {
> +                    'bus': '0x00',
> +                    'slot': '0x0c',
> +                    'domain': '0x0000',
> +                    'type': 'pci',
> +                    'function': '0x0'
> +                },
> +                'device': 'bridge',
> +                'type': 'interface',
> +            }
> +            conf['devices'].append(nic3)
>
>              cli = self.environment[ohostedcons.VDSMEnv.VDS_CLI]
>              status = cli.create(conf)
>
> ************************************************************************
> Obviously this is a horrible ad-hoc hack that I'm not able to generalize/clean up now: doing so would involve (apart from a deeper understanding of the whole setup code/workflow) some well-thought-out design decisions and, given the effective deprecation of the aforementioned easy-to-modify vm.conf.in template substituted by hardwired Python program logic, it seems that such a functionality is not very high on the development priority list atm ;)
>
> Many thanks again!
>
> Kind regards,
> Giuseppe
>
> >> Many thanks in advance.
> >>
> >> Kind regards,
> >> Giuseppe
> >>
> >> PS: please keep also my address in replying because I'm experiencing some problems between Hotmail and the oVirt mailing list