[ovirt-devel] [ovirt-announce] [ovirt-users] [ANN] oVirt 3.6.2 Third Release Candidate is now available for testing

Simone Tiraboschi stirabos at redhat.com
Mon Jan 25 14:16:53 UTC 2016


On Mon, Jan 25, 2016 at 2:18 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:

> On Wed, Jan 20, 2016 at 12:59 PM, Sandro Bonazzola <sbonazzo at redhat.com>
> wrote:
>
>> The oVirt Project is pleased to announce the availability
>> of the Third Release Candidate of oVirt 3.6.2 for testing, as of January
>> 20th, 2016
>>
>>
>>
> Tested successfully on CentOS 7.2 with the Self-Hosted Engine on NFS and
> verified correct auto-import of the SH Domain.
> The hypervisor is a CentOS 7.2 VM running under Qemu/KVM in virt-manager
> on my laptop, which runs Fedora 23.
> So it is a nested virtualization environment.
>
> Some NOTES
> 1) appliance
> I see that the appliance rpm pulled in was
>  ovirt-engine-appliance-3.6-20151216.1.el7.centos.noarch
>
> I presumed it was not populated with the correct rpms (its build date
> predates this RC).
> So in "hosted-engine --deploy" setup I used the appliance, but answered
>
> Automatically execute engine-setup on the engine appliance on first boot
> (Yes, No)[Yes]? No
>
> I then connected to the appliance OS, configured the 3.6-pre repo and ran
>
> yum update
>
> Strangely, this pulled in the package
>
> ovirt-release36-003-1.noarch
>
> so the repo file was overwritten and pointed again at 3.6 only, not
> 3.6-pre.
> I re-modified /etc/yum.repos.d/ovirt-3.6.repo, re-adding the pre repo
> (a possible way to re-enable it without hand-editing is sketched below).
>
> and finally ran
> engine-setup
>
> which completed successfully, and then finished the final part of the
> deployment on the host.
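>
> For the record, instead of hand-editing the repo file again, something
> along these lines should re-enable the pre-release repo (the repo id here
> is my assumption; check "yum repolist all" for the real one):
>
>   yum install yum-utils
>   yum repolist all                           # find the id of the *-pre repo
>   yum-config-manager --enable ovirt-3.6-pre  # assumed repo id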
>
> 2) The timezone of the Engine was not asked for during setup and was set
> to UTC.
> So
> [root@shengine ~]# ln -sf /usr/share/zoneinfo/Europe/Rome /etc/localtime
>
> I then shut down the engine OS and after some minutes the VM came back online.
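>
> For the record, the same result on a systemd-based guest:
>
>   timedatectl set-timezone Europe/Rome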
>
>
You are right here: we are not copying the host timezone to the appliance;
we could do it via cloud-init.
Good catch!
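
Just as a sketch of what the appliance could receive via cloud-init
user-data once we wire this in (Europe/Rome here is only the value taken
from your example above):

  #cloud-config
  # set the guest timezone at first boot (handled by cloud-init's timezone module)
  timezone: Europe/Rome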


> 3) As this is a nested environment with the NFS domain provided by the
> only host itself, in /usr/lib/systemd/system/ovirt-ha-broker.service
>
> I added, in the [Unit] section,
>
> After=nfs-server.service
>
> so that I can manage shutdown/restart of the host for complete maintenance
> operations (a drop-in variant is sketched below).
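>
> A sketch of the same change as a systemd drop-in, so that a package update
> does not overwrite it (assuming the NFS export really is served by this
> same host):
>
>   # /etc/systemd/system/ovirt-ha-broker.service.d/nfs-order.conf
>   [Unit]
>   After=nfs-server.service
>
>   # then reload systemd to pick it up
>   systemctl daemon-reload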
>
> 4) After adding the first data domain, the hosted_storage storage domain
> came up automatically without any problem. I was able to see the engine VM
> in the web admin portal, together with its disk, and to access its console
> (I used VNC).
> I was also able to add an ISO Domain.
> (both the data and ISO domains are NFS exports on the host itself)
>
> 5) The only "problem" (?) I see is this message, which I don't understand,
> on the hypervisor:
>
> [root@ovc72 ~]# systemctl status ovirt-ha-agent -l
> ● ovirt-ha-agent.service - oVirt Hosted Engine High Availability
> Monitoring Agent
>    Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service;
> enabled; vendor preset: disabled)
>    Active: active (running) since Mon 2016-01-25 11:02:56 CET; 1h 0min ago
>  Main PID: 17138 (ovirt-ha-agent)
>    CGroup: /system.slice/ovirt-ha-agent.service
>            └─17138 /usr/bin/python
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent --no-daemon
>
> Jan 25 12:03:27 ovc72.localdomain.local ovirt-ha-agent[17138]:
> INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting
> storage server
> Jan 25 12:03:27 ovc72.localdomain.local ovirt-ha-agent[17138]:
> INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Refreshing the
> storage domain
> Jan 25 12:03:27 ovc72.localdomain.local ovirt-ha-agent[17138]:
> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Preparing
> images
> Jan 25 12:03:27 ovc72.localdomain.local ovirt-ha-agent[17138]:
> INFO:ovirt_hosted_engine_ha.lib.image.Image:Preparing images
> Jan 25 12:03:27 ovc72.localdomain.local ovirt-ha-agent[17138]:
> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Reloading
> vm.conf from the shared storage domain
> Jan 25 12:03:27 ovc72.localdomain.local ovirt-ha-agent[17138]:
> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Trying
> to get a fresher copy of vm configuration from the OVF_STORE
> Jan 25 12:03:27 ovc72.localdomain.local ovirt-ha-agent[17138]:
> WARNING:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Unable to find
> OVF_STORE
> Jan 25 12:03:27 ovc72.localdomain.local ovirt-ha-agent[17138]:
> ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR Unable
> to get vm.conf from OVF_STORE, falling back to initial vm.conf
>
>
We will let you edit some of the engine VM parameters from the engine
itself.
All the modifications will be saved as an OVF file on a special volume
called OVF_STORE; when needed, ovirt-ha-agent tries to get the up-to-date
engine VM configuration from there, and if there is any issue it falls back
to the initial vm.conf from setup time.
The point is that by default the OVF_STORE gets populated only after one
hour, so if you wait less than that you'll see this message because the
OVF_STORE is not there yet.
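
If you don't want to wait for the first OVF_STORE update while testing, the
interval should be tunable with engine-config on the engine VM (the key name
here is from memory, please double check it):

  # show the current value, in minutes (60 by default)
  engine-config -g OvfUpdateIntervalInMinutes

  # lower it for testing and restart the engine to apply
  engine-config -s OvfUpdateIntervalInMinutes=5
  systemctl restart ovirt-engine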


> 6) ISO upload from the engine doesn't work by default, because its own
> host doesn't allow it:
> [root@shengine ~]# ovirt-iso-uploader -i ISO_DOMAIN upload
> /root/CentOS-7-x86_64-NetInstall-1511.iso
> Please provide the REST API password for the admin@internal oVirt Engine
> user (CTRL+D to abort):
> Uploading, please wait...
> ERROR: mount.nfs: Connection timed out
>
> If on the host I run (.73 is the IP of the engine VM; see below for a
> persistent variant):
> # iptables -I INPUT -s 192.168.122.73 -j ACCEPT
>
> Then on engine:
> [root@shengine ~]# ovirt-iso-uploader -i ISO_DOMAIN upload
> /root/CentOS-7-x86_64-NetInstall-1511.iso
> Please provide the REST API password for the admin@internal oVirt Engine
> user (CTRL+D to abort):
> Uploading, please wait...
> INFO: Start uploading /root/CentOS-7-x86_64-NetInstall-1511.iso
> Uploading: [########################################] 100%
> INFO: /root/CentOS-7-x86_64-NetInstall-1511.iso uploaded successfully
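>
> To make that rule survive a reboot, a sketch assuming firewalld is the
> active firewall on the host (adapt it if vdsm manages iptables directly):
>
>   firewall-cmd --permanent \
>     --add-rich-rule='rule family="ipv4" source address="192.168.122.73" accept'
>   firewall-cmd --reload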
>
> 7) Installed a CentOS 7.2 VM with the netinst ISO. Access to the SPICE
> console was OK.
> (85 MB/s when downloading over the nested network is not so bad... ;-)
>
> Very good starting point!
>
> Gianluca
>
>