[ovirt-devel] [ovirt-users] [QE] oVirt 3.6.1 Hosted Engine test day
Simone Tiraboschi
stirabos at redhat.com
Tue Dec 15 08:14:54 UTC 2015
On Mon, Dec 14, 2015 at 4:52 PM, Roy Golan <rgolan at redhat.com> wrote:
>
>
> On Mon, Dec 14, 2015 at 5:39 PM, Gianluca Cecchi <
> gianluca.cecchi at gmail.com> wrote:
>
>> On Mon, Dec 14, 2015 at 4:10 PM, Sandro Bonazzola wrote:
>>
>>>
>>>>
>>>> You should just activate it.
>>>> Adding Roy; it looks like this doesn't happen only to me.
>>>>
>>>>
>>> See *Bug 1290518* <https://bugzilla.redhat.com/show_bug.cgi?id=1290518>
>>> - failed Activating hosted engine domain during auto-import on NFS
>>>
>>>
>> I confirm that I hit the bug; I was able to simply activate the
>> storage domain and then see the engine VM
>>
>> Dec 14, 2015 4:22:23 PM Hosted Engine VM was imported successfully
>> Dec 14, 2015 4:22:23 PM Starting to import Vm HostedEngine to Data Center
>> Default, Cluster Default
>> Dec 14, 2015 4:22:19 PM Storage Domain hosted_storage (Data Center
>> Default) was activated by admin at internal
>>
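Just for reference, the same workaround doesn't require the web admin GUI: here is
a minimal sketch with curl against the 3.6 REST API, assuming the engine FQDN from
the preview below, admin@internal with a placeholder password, and the Default /
hosted_storage names from the events above (DC_ID and SD_ID are placeholders for the
ids returned by the first two calls; -k only because of a self-signed CA in a test setup):

# look up the data center and the hosted-engine storage domain
curl -s -k -u admin@internal:PASSWORD \
    "https://shengine.localdomain.local/ovirt-engine/api/datacenters?search=name%3DDefault"
curl -s -k -u admin@internal:PASSWORD \
    "https://shengine.localdomain.local/ovirt-engine/api/storagedomains?search=name%3Dhosted_storage"

# activate the domain inside the data center
curl -s -k -u admin@internal:PASSWORD \
    -H "Content-Type: application/xml" -d "<action/>" \
    "https://shengine.localdomain.local/ovirt-engine/api/datacenters/DC_ID/storagedomains/SD_ID/activate"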
>>
>> NOTES:
>> 1) During install I configured the engine VM with a VNC console (in 3.6.0,
>> since the appliance was the 3.6.0 one, as we noted)
>>
>> [ INFO ] Stage: Setup validation
>>
>> --== CONFIGURATION PREVIEW ==--
>>
>> Bridge interface : eth0
>> Engine FQDN : shengine.localdomain.local
>> Bridge name : ovirtmgmt
>> SSH daemon port : 22
>> Firewall manager : iptables
>> Gateway address : 192.168.122.1
>> Host name for web application : hosted_engine_1
>> Host ID : 1
>> Image size GB : 10
>> GlusterFS Share Name : hosted_engine_glusterfs
>> GlusterFS Brick Provisioning : False
>> Storage connection :
>> ovc71.localdomain.local:/SHE_DOMAIN
>> Console type : vnc
>> Memory size MB : 8192
>> MAC address : 00:16:3e:72:e7:26
>> Boot type : disk
>> Number of CPUs : 1
>> OVF archive (for disk boot) :
>> /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-3.6-20151211.1.el7.centos.ova
>> Restart engine VM after engine-setup: True
>> CPU Type : model_Nehalem
>>
>> Please confirm installation settings (Yes, No)[Yes]:
>>
>> but after the import into 3.6.1 I see it configured with a SPICE console in
>> the web admin GUI.
>> I can indeed access it via SPICE, but note that anyone who would rather
>> keep it on VNC has to edit the engine VM
>>
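To see what the setup actually recorded for the console, a quick check on the host
can help; a sketch assuming the default paths used by ovirt-hosted-engine-setup (on
3.6 the live VM definition may also sit in /var/run/ovirt-hosted-engine-ha/vm.conf,
fetched from the shared storage):

# the answer file written by hosted-engine --deploy and the hosted-engine configuration
grep -i console /etc/ovirt-hosted-engine/answers.conf /etc/ovirt-hosted-engine/hosted-engine.conf
# the VM definition the HA agent currently uses
grep -i -E 'vnc|spice|qxl' /var/run/ovirt-hosted-engine-ha/vm.conf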
>
> That smells like a setup bug... Simone, did you ever see that?
>
>
No, I'll try to reproduce.
>> after update to 3.6.1.....
>> BTW: is it possible/supported to edit engine vm from the gui and change
>> for example the console mode?
>>
>> yes, in 3.6.2
>
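Until then, the display type can also be flipped back through the REST API (it may
still be rejected for the hosted engine VM before the 3.6.2 changes land); a minimal
sketch with placeholder credentials, reusing the VM name seen in the import events:

# find the HostedEngine VM id
curl -s -k -u admin@internal:PASSWORD \
    "https://shengine.localdomain.local/ovirt-engine/api/vms?search=name%3DHostedEngine"

# switch the console back to vnc (VM_ID is a placeholder for the id returned above)
curl -s -k -u admin@internal:PASSWORD -X PUT \
    -H "Content-Type: application/xml" \
    -d "<vm><display><type>vnc</type></display></vm>" \
    "https://shengine.localdomain.local/ovirt-engine/api/vms/VM_ID"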
>> 2) During setup (see above) iptables was configured as the firewall
>> manager.
>> During the update to 3.6.1 this was the workflow shown on screen:
>>
>> [root at shengine ~]# engine-setup
>> [ INFO ] Stage: Initializing
>> [ INFO ] Stage: Environment setup
>> Configuration files:
>> ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf',
>> '/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
>> '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
>> Log file:
>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20151214134331-ga1lxy.log
>> Version: otopi-1.4.0 (otopi-1.4.0-1.el7.centos)
>> [ INFO ] Stage: Environment packages setup
>> [ INFO ] Stage: Programs detection
>> [ INFO ] Stage: Environment setup
>> [ INFO ] Stage: Environment customization
>>
>> --== PRODUCT OPTIONS ==--
>>
>>
>> --== PACKAGES ==--
>>
>> [ INFO ] Checking for product updates...
>> Setup has found updates for some packages:
>> PACKAGE: [updated] ovirt-engine-3.6.0.3-1.el7.centos.noarch
>> ...
>> PACKAGE: [update]
>> ovirt-engine-wildfly-overlay-8.0.4-1.el7.noarch
>> do you wish to update them now? (Yes, No) [Yes]:
>> [ INFO ] Checking for an update for Setup...
>>
>> --== ALL IN ONE CONFIGURATION ==--
>>
>> --== NETWORK CONFIGURATION ==--
>>
>> Setup can automatically configure the firewall on this system.
>> Note: automatic configuration of the firewall may overwrite
>> current settings.
>> Do you want Setup to configure the firewall? (Yes, No) [Yes]:
>> [ INFO ] firewalld will be configured as firewall manager.
>>
>> So the original setup used iptables during the hosted-engine setup in 3.6.0,
>> while the 3.6.1 engine-setup then switched to firewalld.
>>
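For what it's worth, the choice can be pinned when re-running engine-setup by passing
an answer file; a small sketch, with the caveat that the key name is from memory of
the 3.6 setup code and that the iptables-services package has to be installed on the
engine VM:

# keep iptables as the firewall manager on the next engine-setup run
cat > /root/keep-iptables.conf <<EOF
[environment:default]
OVESETUP_CONFIG/firewallManager=str:iptables
EOF
engine-setup --config-append=/root/keep-iptables.conf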
>> In fact on the engine I see
>>
>>
>> [root at shengine ~]# systemctl status iptables
>> iptables.service
>> Loaded: not-found (Reason: No such file or directory)
>> Active: inactive (dead)
>>
>> [root at shengine ~]# systemctl status firewalld
>> firewalld.service - firewalld - dynamic firewall daemon
>> Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
>> Active: active (running) since Mon 2015-12-14 14:19:56 UTC; 1h 15min
>> ago
>> Main PID: 431 (firewalld)
>> CGroup: /system.slice/firewalld.service
>> └─431 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
>>
>> Dec 14 14:19:53 shengine.localdomain.local systemd[1]: Starting firewalld
>> - dynamic firewall da.....
>> Dec 14 14:19:56 shengine.localdomain.local systemd[1]: Started firewalld
>> - dynamic firewall daemon.
>> Hint: Some lines were ellipsized, use -l to show in full.
>>
>>
>> Or does the firewall manager question asked during the initial hosted engine
>> setup perhaps refer to the host itself and not to the engine VM?
>>
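As far as I understand it, the firewall manager question asked by hosted-engine
--deploy refers to the host you are deploying on, while engine-setup inside the
engine VM makes its own choice (firewalld being the default on EL7), which would
explain the difference you saw. It is easy to verify on both sides, e.g.:

# on the engine VM
systemctl is-enabled firewalld iptables
# and the same on the host where hosted-engine --deploy ran
systemctl is-enabled firewalld iptables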
>
>
>
>
>>
>> Gianluca
>>
>
>