Guilty, will roll back and try again!
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum@well.ox.ac.uk
On 29 Mar 2019, at 15:35, Simone Tiraboschi
<stirabos@redhat.com> wrote:
The error comes from here:
TASK [ovirt.hosted_engine_setup : Parse OVF]
***************************************************************************************************************************************
fatal: [virthyp04.virt.in.bmrc.ox.ac.uk]:
FAILED! => {"changed": false, "msg": "missing parameter(s)
required by 'attribute': value"}
But are you really using it with Ansible 2.8 alpha 1?
I'd strongly suggest switching back to a stable Ansible release, which is currently
2.7.9.
That one was due to:
https://github.com/ansible/ansible/issues/53459
In the next Ansible build it will be just a warning, as per:
https://github.com/ansible/ansible/pull/54336
and https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/pull/150/files already
addresses this in ovirt-ansible-hosted-engine-setup, keeping it compatible with future
Ansible releases.
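For reference, the failing task is a read-only use of Ansible's xml module, roughly of
this shape (a sketch, not the role's exact task; the xmlstring variable name is made up):

- name: Parse OVF
  xml:
    xmlstring: "{{ ovf_content }}"          # hypothetical variable name
    xpath: /ovf:Envelope/References/File
    namespaces:
      ovf: http://schemas.dmtf.org/ovf/envelope/1/
    content: attribute
    attribute: ovf:size   # read-only lookup, so no 'value' is supplied;
                          # 2.8a1's stricter validation rejects exactly that
  register: ovf_file

On 2.7.9 this runs fine; on 2.8 alpha 1 it fails with "missing parameter(s) required by
'attribute': value".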
On Fri, Mar 29, 2019 at 3:53 PM Callum Smith
<callum@well.ox.ac.uk> wrote:
The OVF in question is here:
<ovf:Envelope ovf:version="0.9"
    xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/"
    xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_Re...
    xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_Vi...
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><Re...
      ovf:description="074a62d4-44f9-4ffe-a172-2702a9fe96df"
      ovf:href="6f76686b-199c-4cb3-bbbe-86fc34365745/72bc3948-5d8d-4877-bac8-7db4995045b5"
      ovf:id="72bc3948-5d8d-4877-bac8-7db4995045b5"
      ovf:size="54760833024" /></References>
  <Section xsi:type="ovf:NetworkSection_Type">
    <Info>List of Networks</Info>
  </Section>
  <Section xsi:type="ovf:DiskSection_Type">
    <Disk ovf:actual_size="51" ovf:boot="true" ovf:disk-interface="VirtIO"
          ovf:disk-type="System"
          ovf:diskId="72bc3948-5d8d-4877-bac8-7db4995045b5"
          ovf:fileRef="6f76686b-199c-4cb3-bbbe-86fc34365745/72bc3948-5d8d-4877-bac8-7db4995045b5"
          ovf:format="http://www.vmware.com/specifications/vmdk.html#sparse"
          ovf:parentRef="" ovf:size="51"
          ovf:vm_snapshot_id="5f2be758-82d7-4c07-a220-9060e782dc7a"
          ovf:volume-format="COW" ovf:volume-type="Sparse"
          ovf:wipe-after-delete="false" />
  </Section>
  <Content ovf:id="out" xsi:type="ovf:VirtualSystem_Type">
    <Name>074a62d4-44f9-4ffe-a172-2702a9fe96df</Name>
    <TemplateId>074a62d4-44f9-4ffe-a172-2702a9fe96df</TemplateId>
    <Description>Created by OVABuilder</Description>
    <Domain />
    <CreationDate>2019/03/19 08:35:09</CreationDate>
    <TimeZone />
    <IsAutoSuspend>false</IsAutoSuspend>
    <VmType>1</VmType>
    <default_display_type>0</default_display_type>
    <default_boot_sequence>1</default_boot_sequence>
    <Section ovf:id="074a62d4-44f9-4ffe-a172-2702a9fe96df" ovf:required="false"
             xsi:type="ovf:OperatingSystemSection_Type">
      <Info>Guest OS</Info>
      <Description>OtherLinux</Description>
    </Section>
    <Section xsi:type="ovf:VirtualHardwareSection_Type">
      <Info>4 CPU, 16384 Memory</Info>
      <System>
        <vssd:VirtualSystemType>RHEVM 4.6.0.163</vssd:VirtualSystemType>
      </System>
      <Item>
        <rasd:Caption>4 virtual CPU</rasd:Caption>
        <rasd:Description>Number of virtual CPU</rasd:Description>
        <rasd:InstanceId>1</rasd:InstanceId>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:num_of_sockets>1</rasd:num_of_sockets>
        <rasd:cpu_per_socket>4</rasd:cpu_per_socket>
      </Item>
      <Item>
        <rasd:Caption>16384 MB of memory</rasd:Caption>
        <rasd:Description>Memory Size</rasd:Description>
        <rasd:InstanceId>2</rasd:InstanceId>
        <rasd:ResourceType>4</rasd:ResourceType>
        <rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits>
        <rasd:VirtualQuantity>16384</rasd:VirtualQuantity>
      </Item>
      <Item>
        <rasd:Caption>Drive 1</rasd:Caption>
        <rasd:InstanceId>72bc3948-5d8d-4877-bac8-7db4995045b5</rasd:InstanceId>
        <rasd:ResourceType>17</rasd:ResourceType>
        <rasd:HostResource>6f76686b-199c-4cb3-bbbe-86fc34365745/72bc3948-5d8d-4877-bac8-7db4995045b5</rasd:HostResource>
        <rasd:Parent>00000000-0000-0000-0000-000000000000</rasd:Parent>
        <rasd:Template>00000000-0000-0000-0000-000000000000</rasd:Template>
        <rasd:ApplicationList />
        <rasd:StorageId>00000000-0000-0000-0000-000000000000</rasd:StorageId>
        <rasd:StoragePoolId>00000000-0000-0000-0000-000000000000</rasd:StoragePoolId>
        <rasd:CreationDate>2019/03/19 08:35:09</rasd:CreationDate>
        <rasd:LastModified>2019/03/19 08:35:09</rasd:LastModified>
      </Item>
      <Item>
        <rasd:Caption>Ethernet 0 rhevm</rasd:Caption>
        <rasd:InstanceId>3</rasd:InstanceId>
        <rasd:ResourceType>10</rasd:ResourceType>
        <rasd:ResourceSubType>3</rasd:ResourceSubType>
        <rasd:Connection>rhevm</rasd:Connection>
        <rasd:Name>eth0</rasd:Name>
        <rasd:speed>1000</rasd:speed>
      </Item>
      <Item>
        <rasd:Caption>Graphics</rasd:Caption>
        <rasd:InstanceId>5</rasd:InstanceId>
        <rasd:ResourceType>20</rasd:ResourceType>
        <rasd:VirtualQuantity>1</rasd:VirtualQuantity>
      </Item>
    </Section>
  </Content>
</ovf:Envelope>
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum@well.ox.ac.uk
On 29 Mar 2019, at 14:42, Callum Smith
<callum@well.ox.ac.uk> wrote:
OK, so we're getting very close now; a weird OVF error.
Full ansible log attached.
The only error in the engine.log looks normal/expected to me:
2019-03-29 14:32:44,370Z ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator]
(EE-ManagedThreadFactory-engineScheduled-Thread-71) [4405d6db] Can not run fence action on
host 'virthyp04.virt.in.bmrc.ox.ac.uk', no suitable proxy host was found.
<ovirt.20190329133000.ansible.log>
Feeling damn close to success here, but I have managed to replicate this issue twice
when re-running the installer.
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum@well.ox.ac.uk
On 29 Mar 2019, at 11:50, Simone Tiraboschi
<stirabos@redhat.com> wrote:
On Fri, Mar 29, 2019 at 12:36 PM Callum Smith
<callum@well.ox.ac.uk> wrote:
ip link del ovirtmgmt has done the job....
Another issue, but this is likely due to randomised MAC addresses:
fatal: [virthyp04.virt.in.bmrc.ox.ac.uk]:
FAILED! => {"changed": true, "cmd": ["virt-install",
"-n", "HostedEngineLocal", "--os-variant",
"rhel7", "--virt-type", "kvm", "--memory",
"4096", "--vcpus", "64", "--network",
"network=default,mac=fe:58:6c:da:1e:cc,model=virtio", "--disk",
"/var/tmp/localvmOCYiyF/images/6f76686b-199c-4cb3-bbbe-86fc34365745/72bc3948-5d8d-4877-bac8-7db4995045b5",
"--import", "--disk",
"path=/var/tmp/localvmOCYiyF/seed.iso,device=cdrom",
"--noautoconsole", "--rng", "/dev/random",
"--graphics", "vnc", "--video", "vga",
"--sound", "none", "--controller",
"usb,model=none", "--memballoon", "none",
"--boot", "hd,menu=off", "--clock",
"kvmclock_present=yes"], "delta": "0:00:01.355834",
"end": "2019-03-29 11:31:02.100143", "msg": "non-zero
return code", "rc": 1, "start": "2019-03-29
11:31:00.744309", "stderr": "ERROR unsupported configuration:
Unable to use MAC address starting with reserved value 0xFE - 'fe:58:6c:da:1e:cc'
- \nDomain installation does not appear to have been successful.\nIf it was, you can
restart your domain by running:\n virsh --connect qemu:///system start
HostedEngineLocal\notherwise, please restart your installation.",
"stderr_lines": ["ERROR unsupported configuration: Unable to use MAC
address starting with reserved value 0xFE - 'fe:58:6c:da:1e:cc' - ",
"Domain installation does not appear to have been successful.", "If it was,
you can restart your domain by running:", " virsh --connect qemu:///system
start HostedEngineLocal", "otherwise, please restart your installation."],
"stdout": "\nStarting install...", "stdout_lines":
["", "Starting install..."]}
Seems it doesn't take into account reserved values when generating.
If not specified by the user,
a random unicast MAC address is generated here:
https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/ta...
It's certainly unicast, but we should probably make it more robust against reserved values.
Simply try again for now; I'll open an issue to track it, thanks!
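In the meantime, a possible workaround (a sketch, assuming the role's he_vm_mac_addr
variable) is to pin the MAC yourself in the vars file instead of relying on the generator:

# Workaround sketch: a fixed MAC under the QEMU/KVM OUI 52:54:00, which is
# always unicast and can never start with the reserved 0xFE first octet.
he_vm_mac_addr: "52:54:00:9a:4c:17"   # example value, pick your own suffix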
I hope this feedback is valuable; I have a good feeling about the current deploy
otherwise.
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum@well.ox.ac.uk
On 29 Mar 2019, at 11:01, Simone Tiraboschi
<stirabos@redhat.com> wrote:
On Fri, Mar 29, 2019 at 11:56 AM Callum Smith
<callum@well.ox.ac.uk> wrote:
Dear Simone,
It doesn't seem to want to work:
# Settings
he_fqdn: "he.virt.in.bmrc.ox.ac.uk"
he_ansible_host_name: "virthyp04.virt.in.bmrc.ox.ac.uk"
he_admin_password: <snip>
he_appliance_password: <snip>
# Resources
he_mem_size_MB: "4096"
# Storage
he_domain_type: "nfs"
he_storage_domain_addr: <snip>
he_storage_domain_path: <snip>
# Network
he_vm_ip_addr: "10.141.31.240"
he_vm_ip_prefix: "20"
he_dns_addr:
["10.141.31.251","10.141.31.252","10.141.31.253"]
he_default_gateway_4: "10.141.31.254"
he_gateway: he_default_gateway_4
he_force_ip4: true
he_bridge_if: bond0.910
#he_just_collect_network_interfaces: true
# Email
he_smtp_port: 25
he_smtp_server: smtp.ox.ac.uk
he_dest_email: rescomp-ops@well.ox.ac.uk
he_source_email: ovirt@bmrc.ox.ac.uk
# Ansible Stuff
ansible_ssh_user: root
ansible_become: false
host_key_checking: false
I've attached the output of the ansible command as a log file; this is what happens
when the interface bond0.910 is assigned the IP and `ovirtmgmt` is not defined on the host.
TASK [ovirt.hosted_engine_setup : debug]
*******************************************************************************************************************************************
ok: [virthyp04.virt.in.bmrc.ox.ac.uk] =>
{
"target_address_v4": {
"changed": true,
"cmd": "ip addr show ovirtmgmt | grep 'inet ' | cut -d'
' -f6 | cut -d'/' -f1",
"delta": "0:00:00.008744",
"end": "2019-03-29 10:26:07.510481",
"failed": false,
"rc": 0,
"start": "2019-03-29 10:26:07.501737",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": []
}
}
According to the logs, ovirtmgmt is still there: stderr is empty, so the device exists
even though it carries no IPv4 address. Can you please share the output of 'ip a'?
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum@well.ox.ac.uk
On 28 Mar 2019, at 16:23, Simone Tiraboschi
<stirabos@redhat.com> wrote:
On Thu, Mar 28, 2019 at 1:44 PM Callum Smith
<callum@well.ox.ac.uk> wrote:
Dear Simone,
This is my experience too, but I'm now hitting this error on the hosted-engine install
at the part where it registers the hypervisor as the first host in the engine:
2019-03-28 12:40:50,025Z INFO [org.ovirt.engine.core.bll.host.HostConnectivityChecker]
(EE-ManagedThreadFactory-engine-Thread-1) [49f371c1] Engine managed to communicate with
VDSM agent on host 'virthyp04.virt.in.bmrc.ox.ac.uk' with address
'virthyp04.virt.in.bmrc.ox.ac.uk' ('db571f8a-fc85-40d3-b86f-c0038e3cd7e7')
2019-03-28 12:40:53,111Z WARN [org.ovirt.engine.core.bll.network.NetworkConfigurator]
(EE-ManagedThreadFactory-engine-Thread-1) [49f371c1] Failed to find a valid interface for
the management network of host virthyp04.virt.in.bmrc.ox.ac.uk. If the interface
ovirtmgmt is a bridge, it should be torn-down manually.
2019-03-28 12:40:53,111Z ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [49f371c1] Exception:
org.ovirt.engine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException:
Interface ovirtmgmt is invalid for management network
The host's ovirtmgmt network connection is a statically assigned IP on a VLAN on a
bond; how should I be configuring this, if not manually?
If you need to deploy over VLAN 123 on bond0, simply configure a device named exactly
bond0.123 and statically assign your IP address there.
Then choose that device for hosted-engine deployment, nothing more: ovirtmgmt will be
created automatically over it, and the VLAN ID will be set at engine level for the whole
management network.
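For example, a sketch of that static configuration using Ansible's nmcli module (the
VLAN/bond names follow the example above; the addresses echo the vars file quoted
earlier in the thread and are illustrative):

# Sketch: create bond0.123 on top of bond0 with a static IPv4 address.
# hosted-engine deploy then only needs he_bridge_if: "bond0.123".
- name: Configure bond0.123 with a static IP
  nmcli:
    conn_name: bond0.123
    type: vlan
    vlandev: bond0
    vlanid: 123
    ip4: 10.141.31.240/20   # illustrative address
    gw4: 10.141.31.254
    state: present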
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum@well.ox.ac.uk
On 27 Mar 2019, at 17:09, Simone Tiraboschi
<stirabos@redhat.com> wrote:
On Wed, Mar 27, 2019 at 4:27 PM Callum Smith
<callum@well.ox.ac.uk> wrote:
It's OK; migrating the oVirt node to 4.3.2 (from 4.3.0) fixed it.
It is a bug if you intend to use the ovirtmgmt network to deploy your Ansible from
This is a bit tricky: when the engine brings up the host it also creates the management
bridge, which can briefly take the selected interface down while the bridge is being
created (a couple of seconds?).
I tried it on a LAN and the Ansible SSH connection always survived, but I'm not sure
that's always true.
, and you need it to already have an IP address in that range! But it works as expected
with the ovirtmgmt bridge setup, so nothing to worry about.
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum@well.ox.ac.uk
On 27 Mar 2019, at 14:57, Simone Tiraboschi
<stirabos@redhat.com> wrote:
On Wed, Mar 27, 2019 at 3:24 PM Callum Smith
<callum@well.ox.ac.uk> wrote:
Dear All,
We're trying to deploy our hosted engine remotely using the Ansible hosted-engine
playbook, which has been a rocky road, but we're now at the point where it's
installing, and failing. We've got a pre-defined bond/VLAN setup for our interface,
with the correct bond0, bond0.123, and ovirtmgmt bridge on top, but we're hitting
the classic error:
Failed to find a valid interface for the management network of host
virthyp04.virt.in.bmrc.ox.ac.uk. If the interface ovirtmgmt is a bridge, it should be
torn-down manually.
Does this bug still exist in the latest (4.3) version, and is installing using ansible
with this network configuration impossible?
I don't think it's a bug; please avoid manually creating ovirtmgmt and simply set
he_bridge_if: "bond0.123" in the Ansible variables file. The management bridge will
then be created for you at host-deploy time.
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum@well.ox.ac.uk