Beginning oVirt / Hyperconverged
by email@christian-reiss.de
Hey folks!
Quick question, really: I have 4 servers of identical hardware. The documentation says "you need 3", not "you need 3 or more"; is it possible to run hyperconverged with 4 servers (all in the same rack; let's neglect the possibility of 2 servers failing (split brain))?
Also: I have a nice separate server (low power footprint, big on CPU (8) and 32 GB RAM) -- I would love to use this as an engine-only server. Can I initiate a hyperconverged system with 4 working hosts starting from a stand-alone engine? Or is a hosted (HA) engine on the cluster a must for "hyperconverged"?
One last one: I will be installing oVirt Node, obviously. This will by default create an LVM layout across the entire RAID volume; can I simply create folders (with correct permissions) and use those as Gluster bricks? Or do I need to partition the RAID in a special way?
Thanks for your pointers,
Chris.
oVirt change IP's & add new ISO share
by Jonathan Mathews
Good Day
I have to change the IP addresses of the oVirt Engine, hosts and storage to
a new IP range. Please, can you advise the best way to do this and if there
is anything I would need to change in the database?
I have also run into an issue where someone has removed the ISO share/data
on the storage, so I am unable to remove, activate, detach or even add a
new ISO share.
Please, can you advise the best way to resolve this?
Please see the below engine logs:
2019-10-30 11:39:13,918 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Failed in
'DetachStorageDomainVDS' method
2019-10-30 11:39:13,942 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: VDSM command failed: Storage
domain does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',)
2019-10-30 11:39:13,943 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8]
IrsBroker::Failed::DetachStorageDomainVDS: IRSGenericException:
IRSErrorException: Failed to DetachStorageDomainVDS, error = Storage domain
does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code = 358
2019-10-30 11:39:13,951 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] FINISH,
DetachStorageDomainVDSCommand, log id: 5547e2df
2019-10-30 11:39:13,951 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-38) [58f6cfb8] -- executeIrsBrokerCommand:
Attempting on storage pool '5849b030-626e-47cb-ad90-3ce782d831b3'
2019-10-30 11:39:13,951 ERROR
[org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Command
'org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand'
failed: EngineException:
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
IRSGenericException: IRSErrorException: Failed to DetachStorageDomainVDS,
error = Storage domain does not exist:
(u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code = 358 (Failed with error
StorageDomainDoesNotExist and code 358)
2019-10-30 11:39:13,952 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-38) [58f6cfb8] START,
HSMGetAllTasksInfoVDSCommand(HostName = host01,
VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='291a3a19-7467-4783-a6f7-2b2dd0de9ad3'}), log id: 6cc238fb
2019-10-30 11:39:13,952 INFO
[org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Command
[id=cec030b7-4a62-43a2-9ae8-de56a5d71ef8]: Compensating CHANGED_STATUS_ONLY
of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap;
snapshot:
EntityStatusSnapshot:{id='StoragePoolIsoMapId:{storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3',
storageId='42b7d819-ce3a-4a18-a683-f4817c4bdb06'}', status='Inactive'}.
2019-10-30 11:39:13,975 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Correlation ID: 28ac658, Job
ID: b31e0f44-2d82-47bf-90d9-f69e399d994f, Call Stack: null, Custom Event
ID: -1, Message: Failed to detach Storage Domain iso to Data Center
Default. (User: admin@internal)
2019-10-30 11:42:46,711 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [31e89bba] START,
SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{runAsync='true',
storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3',
ignoreFailoverLimit='false'}), log id: 59192768
2019-10-30 11:42:48,825 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] Failed in
'ActivateStorageDomainVDS' method
2019-10-30 11:42:48,855 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: VDSM command failed: Storage
domain does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',)
2019-10-30 11:42:48,856 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba]
IrsBroker::Failed::ActivateStorageDomainVDS: IRSGenericException:
IRSErrorException: Failed to ActivateStorageDomainVDS, error = Storage
domain does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code =
358
2019-10-30 11:42:48,864 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] FINISH,
ActivateStorageDomainVDSCommand, log id: 518fdcf
2019-10-30 11:42:48,865 ERROR
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] Command
'org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand' failed:
EngineException:
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
IRSGenericException: IRSErrorException: Failed to ActivateStorageDomainVDS,
error = Storage domain does not exist:
(u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code = 358 (Failed with error
StorageDomainDoesNotExist and code 358)
2019-10-30 11:42:48,865 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [31e89bba] -- executeIrsBrokerCommand:
Attempting on storage pool '5849b030-626e-47cb-ad90-3ce782d831b3'
2019-10-30 11:42:48,865 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [31e89bba] START,
HSMGetAllTasksInfoVDSCommand(HostName = host02,
VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='3673a0e1-721d-40ba-a179-b1f13a9aec43'}), log id: 47ef923b
2019-10-30 11:42:48,866 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] Command
[id=68bf0e1e-6a0b-41cb-9cad-9eb2bf87c5ee]: Compensating CHANGED_STATUS_ONLY
of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap;
snapshot:
EntityStatusSnapshot:{id='StoragePoolIsoMapId:{storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3',
storageId='42b7d819-ce3a-4a18-a683-f4817c4bdb06'}', status='Inactive'}.
2019-10-30 11:42:48,888 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] Correlation ID: 5b208434,
Job ID: dff5f615-9dc4-4d79-a37e-5c6e99a2cc6b, Call Stack: null, Custom
Event ID: -1, Message: Failed to activate Storage Domain iso (Data Center
Default) by admin@internal
Thanks
Jonathan
DHCP Client in Guest VM does not work on ovirtmgmt
by ccesario@blueit.com.br
Hello,
Is there any special configuration needed to use a DHCP client in a guest VM with the ovirtmgmt/ovirtmgmt vNIC profile?
Currently I have a VM using the ovirtmgmt/ovirtmgmt NIC profile and its interface is configured as a DHCP client, but DHCP does not work with this profile. However, if I assign a manual IP address from the same range as the DHCP server, communication works.
And if I use another NIC profile in a different VLAN with a different DHCP server, it works as well.
It seems the ovirtmgmt/ovirtmgmt profile filters the DHCP protocol.
Does anyone have an idea how to allow the DHCP protocol to work on the ovirtmgmt/ovirtmgmt NIC profile?
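A minimal sketch (an assumption on my side, using ovirt-engine-sdk-python; the engine URL and credentials are placeholders) for checking which network filter, if any, is attached to each vNIC profile:

import ovirtsdk4 as sdk

# Placeholders: point this at your own engine and credentials.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,  # quick test only, against a self-signed certificate
)
profiles_service = connection.system_service().vnic_profiles_service()
for profile in profiles_service.list():
    # network_filter is returned as a link; follow it to read the filter name.
    nf = connection.follow_link(profile.network_filter) if profile.network_filter else None
    print(profile.name, '->', nf.name if nf else 'no network filter')
connection.close()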
Best regards
Carlos
External networks issue after upgrading to 4.3.6
by ada per
After upgrading to the latest stable version the external networks lost all
functionality. Under providers-->ovn-network-provider the test runs
successfully.
But when I'm creating an external provider network, attaching it to a router
as a LAN and setting up a DHCP lease, it is not reachable from other VMs in the
same network. Hosts and the hosted engine can't seem to ping it either.
I tried disabling the firewalls on the hosted engines, the VMs and the hosts,
and still nothing.
When configuring logical networks or VLANs they work perfectly; the one
problem is the external networks.
I have another environment running on a previous version of oVirt and it
works perfectly there. I think it's a bug.
Thanks for your help
Can ovirt-node support multiple network cards ?
by wangyu13476969128@126.com
Can ovirt-node support multiple network cards ?
I downloaded ovirt-node-ng-installer-4.3.5-2019073010.el7.iso from the oVirt site and installed it on a Dell R730 server.
The version of ovirt-engine is 4.3.5.5.
I found that the logical network ovirtmgmt can only point to one network card interface.
Can ovirt-node support multiple network cards ?
If yes, please tell me the method.
How to pass parameters between VDSM Hooks domxml in single run
by Vrgotic, Marko
Dear oVirt,
A while ago we discussed ways to change/update the content of domxml parameters during certain actions.
As I mentioned before, we have added the VDSM hook 60_nsupdate, which removes the DNS record entries when a VM is destroyed:
…
domxml = hooking.read_domxml()
name = domxml.getElementsByTagName('name')[0]
name = " ".join(name.nodeValue for name in name.childNodes if name.nodeType == name.TEXT_NODE)
nsupdate_commands = """server {server_ip}
update delete {vm_name}.example.com a
update delete {vm_name}.example.com aaaa
update delete {vm_name}.example.com txt
send
""".format(server_ip="172.16.1.10", vm_name=name)
…
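(For illustration only, as an assumption rather than the verbatim hook code: a batch string like the one above would typically be fed to nsupdate via stdin.)

import subprocess

# Illustrative assumption, not the actual hook code:
# hand the batch commands to nsupdate on stdin.
proc = subprocess.Popen(['nsupdate'], stdin=subprocess.PIPE)
proc.communicate(input=nsupdate_commands.encode())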
The goal:
However, we did not want to remove the DNS records when a VM is only migrated. Since a migration is also treated as a "destroy" action, we took the following approach:
* In the "before_vm_migrate_source" state, add a hook which will write an "is_migration" flag to the domxml
* Once a VM is scheduled for migration, this hook should add the "is_migration" flag to the domxml
* Once 60_nsupdate is triggered, it will check for the flag and, if present, skip the DNS record removal and only remove the "is_migration" flag from the VM's domxml (see the sketch after the snippet below)
…
domxml = hooking.read_domxml()
migration = domxml.createElement("is_migration")
domxml.getElementsByTagName("domain")[0].appendChild(migration)
logging.info("domxml_updated {}".format(domxml.toprettyxml()))
hooking.write_domxml(domxml)
…
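For the last bullet, the matching check in 60_nsupdate would look roughly like this (a sketch of the logic described above, not the actual hook code; it assumes the flag survives into that hook's domxml, which is exactly the issue described below):

domxml = hooking.read_domxml()
domain = domxml.getElementsByTagName("domain")[0]
flags = domain.getElementsByTagName("is_migration")
if flags:
    # Migration, not a real destroy: keep the DNS records, just drop the flag.
    for flag in flags:
        domain.removeChild(flag)
    hooking.write_domxml(domxml)
else:
    # Real destroy: run the nsupdate commands shown earlier.
    pass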
When executing this the first time, we observed that the flag
<name>hookiesvm</name>
<uuid>fcfa66cb-b251-43a3-8e2b-f33b3024a749</uuid>
<metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">
<ns0:qos/>
<ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion>
<ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
<ovirt-vm:memGuaranteedSize type="int">1024</ovirt-vm:memGuaranteedSize>
<ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb>
...skipping...
<address bus="0x00" domain="0x0000" function="0x0" slot="0x09" type="pci"/>
</rng>
</devices>
<seclabel model="selinux" relabel="yes" type="dynamic">
<label>system_u:system_r:svirt_t:s0:c169,c575</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c169,c575</imagelabel>
</seclabel>
<seclabel model="dac" relabel="yes" type="dynamic">
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
<is_migration/>
</domain>
is added to the domxml, but was not present once the 60_nsupdate hook was executed.
The question: how do we make sure that, when the domxml is updated, the update is visible/usable by the following hook within a single run? How do we pass these changes between hooks?
Kindly awaiting your reply.
— — —
Met vriendelijke groet / Kind regards,
Marko Vrgotic
Ansible setup host network fails on comparing sorted dictionaries
by Vrgotic, Marko
Dear oVirt,
While deploying ovirt_infra, the role ovirt.networks fails on "Setup host networks" with the following error:
TypeError: '<' not supported between instances of 'dict' and 'dict'
Full output:
TASK [ovirt.infra/roles/ovirt.networks : Setup host networks] ******************************************************************************************************************
The full traceback is:
Traceback (most recent call last):
File "/var/folders/40/w2c8fp151854mddz_4n3czwm0000gn/T/ansible_ovirt_host_network_payload_s0fx52mx/__main__.py", line 396, in main
(nic is None or host_networks_module.has_update(nics_service.service(nic.id)))
File "/var/folders/40/w2c8fp151854mddz_4n3czwm0000gn/T/ansible_ovirt_host_network_payload_s0fx52mx/__main__.py", line 289, in has_update
update = self.__compare_options(get_bond_options(bond.get('mode'), bond.get('options')), getattr(nic.bonding, 'options', []))
File "/var/folders/40/w2c8fp151854mddz_4n3czwm0000gn/T/ansible_ovirt_host_network_payload_s0fx52mx/__main__.py", line 247, in __compare_options
return sorted(get_dict_of_struct(opt) for opt in new_options) != sorted(get_dict_of_struct(opt) for opt in old_options)
TypeError: '<' not supported between instances of 'dict' and 'dict'
failed: [localhost] (item={'name': 'ovirt-staging-hv-02.avinity.tv', 'check': True, 'save': True, 'bond': {'name': 'bond28', 'mode': 4, 'interfaces': ['p2p1', 'p2p2']}, 'networks': [{'name': 'backbone', 'boot_protocol': 'static', 'address': '172.17.28.212', 'netmask': '255.255.255.0', 'version': 'v4'}]}) => {
"ansible_loop_var": "item",
"changed": false,
"invocation": {
"module_args": {
"bond": {
"interfaces": [
"p2p1",
"p2p2"
],
"mode": 4,
"name": "bond28"
},
"check": true,
"fetch_nested": false,
"interface": null,
"labels": null,
"name": "ovirt-staging-hv-02.avinity.tv",
"nested_attributes": [],
"networks": [
{
"address": "172.17.28.212",
"boot_protocol": "static",
"name": "backbone",
"netmask": "255.255.255.0",
"version": "v4"
}
],
"poll_interval": 3,
"save": true,
"state": "present",
"sync_networks": false,
"timeout": 180,
"wait": true
}
},
"item": {
"bond": {
"interfaces": [
"p2p1",
"p2p2"
],
"mode": 4,
"name": "bond28"
},
"check": true,
"name": "ovirt-staging-hv-02.avinity.tv",
"networks": [
{
"address": "172.17.28.212",
"boot_protocol": "static",
"name": "backbone",
"netmask": "255.255.255.0",
"version": "v4"
}
],
"save": true
},
"msg": "'<' not supported between instances of 'dict' and 'dict'"
}
Read vars_file 'vars/engine_vars.yml'
Read vars_file 'vars/secrets.yml'
Read vars_file 'vars/ovirt_infra_vars.yml'
Looking further into ovirt_host_network.py, I found that the issue is reported when the following is executed:
return sorted(get_dict_of_struct(opt) for opt in new_options) != sorted(get_dict_of_struct(opt) for opt in old_options)
It seemed to be failing because sorted() is given no key by which to compare the dicts, so as a test I added sorting based on "name", and it worked in a single test run:
return sorted((get_dict_of_struct(opt) for opt in new_options), key=lambda x: x["name"]) != sorted((get_dict_of_struct(opt) for opt in old_options), key=lambda x: x["name"])
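For reference, the underlying Python 3 behaviour that triggers this: dicts are not orderable, so sorted() over dicts raises exactly this TypeError unless a key function is supplied, e.g.:

# Python 3 cannot compare dicts with '<', which is what sorted() needs;
# a key function sidesteps the dict-vs-dict comparison.
opts = [{"name": "xmit_hash_policy", "value": "2"}, {"name": "mode", "value": "4"}]
try:
    sorted(opts)
except TypeError as err:
    print(err)  # '<' not supported between instances of 'dict' and 'dict'
print(sorted(opts, key=lambda x: x["name"]))  # sorts by the "name" key instead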
After rerunning the play with the change above:
TASK [ovirt.infra/roles/ovirt.networks : Setup host networks] ******************************************************************************************************************
task path: /Users/mvrgotic/Git/ovirt-engineering/roles/ovirt.infra/roles/ovirt.networks/tasks/main.yml:25
Using module file /Users/mvrgotic/.local/share/virtualenvs/ovirt-engineering-JaxzXThh/lib/python3.7/site-packages/ansible/modules/cloud/ovirt/ovirt_host_network.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: mvrgotic
<127.0.0.1> EXEC /bin/sh -c '/Users/mvrgotic/.local/share/virtualenvs/ovirt-engineering-JaxzXThh/bin/python3 && sleep 0'
changed: [localhost] => (item={'name': 'ovirt-staging-hv-02.avinity.tv', 'check': True, 'save': True, 'bond': {'name': 'bond28', 'mode': 4, 'interfaces': ['p2p1', 'p2p2']}, 'networks': [{'name': 'backbone', 'boot_protocol': 'static', 'address': '172.17.28.212', 'netmask': '255.255.255.0', 'version': 'v4'}]}) => {
"ansible_loop_var": "item",
"changed": true,
"host_nic": {
"ad_aggregator_id": 6,
"bonding": {
"ad_partner_mac": {
"address": "44:31:92:7c:b3:11"
},
"options": [
{
"name": "mode",
"type": "Dynamic link aggregation (802.3ad)",
"value": "4"
},
{
"name": "xmit_hash_policy",
"value": "2"
}
],
"slaves": [
{
"href": "/ovirt-engine/api/hosts/aa1f7a65-a867-436a-9a7f-264068f4bdea/nics/01703220-570c-44f5-9729-6717ceead304",
"id": "01703220-570c-44f5-9729-6717ceead304"
},
{
"href": "/ovirt-engine/api/hosts/aa1f7a65-a867-436a-9a7f-264068f4bdea/nics/e879c1c6-065d-4742-b282-fcdb477f95a7",
"id": "e879c1c6-065d-4742-b282-fcdb477f95a7"
}
]
},
"boot_protocol": "static",
"bridged": false,
"custom_configuration": false,
"host": {
"href": "/ovirt-engine/api/hosts/aa1f7a65-a867-436a-9a7f-264068f4bdea",
"id": "aa1f7a65-a867-436a-9a7f-264068f4bdea"
},
"href": "/ovirt-engine/api/hosts/aa1f7a65-a867-436a-9a7f-264068f4bdea/nics/abce07fa-cb7f-46f2-b967-69d1feaa4075",
"id": "abce07fa-cb7f-46f2-b967-69d1feaa4075",
"ip": {
"address": "172.17.28.212",
"netmask": "255.255.255.0",
"version": "v4"
},
"ipv6": {
"gateway": "::",
"version": "v6"
},
"ipv6_boot_protocol": "none",
"mac": {
"address": "b4:96:91:3f:47:1c"
},
"mtu": 9000,
"name": "bond28",
"network": {
"href": "/ovirt-engine/api/networks/f3ef80cf-bf3a-4fa5-aed9-7d9e7455f804",
"id": "f3ef80cf-bf3a-4fa5-aed9-7d9e7455f804"
},
"network_labels": [],
"properties": [],
"speed": 10000000000,
"statistics": [],
"status": "up"
},
"id": "abce07fa-cb7f-46f2-b967-69d1feaa4075",
"invocation": {
"module_args": {
"bond": {
"interfaces": [
"p2p1",
"p2p2"
],
"mode": 4,
"name": "bond28"
},
"check": true,
"fetch_nested": false,
"interface": null,
"labels": null,
"name": "ovirt-staging-hv-02.avinity.tv",
"nested_attributes": [],
"networks": [
{
"address": "172.17.28.212",
"boot_protocol": "static",
"id": "3e40ff7d-5384-45f1-b036-13e6f91aff56",
"name": "backbone",
"netmask": "255.255.255.0",
"version": "v4"
}
],
"poll_interval": 3,
"save": true,
"state": "present",
"sync_networks": false,
"timeout": 180,
"wait": true
}
},
"item": {
"bond": {
"interfaces": [
"p2p1",
"p2p2"
],
"mode": 4,
"name": "bond28"
},
"check": true,
"name": "ovirt-staging-hv-02.avinity.tv",
"networks": [
{
"address": "172.17.28.212",
"boot_protocol": "static",
"name": "backbone",
"netmask": "255.255.255.0",
"version": "v4"
}
],
"save": true
}
}
Read vars_file 'vars/engine_vars.yml'
Read vars_file 'vars/secrets.yml'
Read vars_file 'vars/ovirt_infra_vars.yml'
The change resulted in the configuration being applied exactly as intended.
I'm not sure whether this was the original intention, but please let me know if the change I made is how the sorted comparison was meant to work.
My pipenv setup:
Python 3.7
ansible==2.8.6
asn1crypto==1.1.0
bcrypt==3.1.7
cffi==1.13.1
cryptography==2.8
dnspython==1.16.0
ipaddress==1.0.23
Jinja2==2.10.3
jmespath==0.9.4
lxml==4.4.1
MarkupSafe==1.1.1
netaddr==0.7.19
ovirt-engine-sdk-python==4.3.3
paramiko==2.6.0
passlib==1.7.1
pyasn1==0.4.5
pycparser==2.19
pycurl==7.43.0.3
PyNaCl==1.3.0
PyYAML==5.1.2
six==1.12.0
Ansible vars and play:
=================================
host_networks:
- name: ovirt-staging-hv-02.avinity.tv
check: true
save: true
bond:
name: bond28
mode: 4
interfaces:
- p2p1
- p2p2
networks:
- name: backbone
boot_protocol: static
address: 172.17.28.212
netmask: 255.255.255.0
version: v4
=================================
- name: Setup host networks
ovirt_host_network:
auth: "{{ ovirt_auth }}"
name: "{{ item.name }}"
state: "{{ item.state | default(omit) }}"
check: "{{ item.check | default(omit) }}"
save: "{{ item.save | default(omit) }}"
bond: "{{ item.bond | default(omit) }}"
networks: "{{ item.networks | default(omit) }}"
labels: "{{ item.labels | default(omit) }}"
interface: "{{ item.interface | default(omit) }}"
with_items:
- "{{ host_networks | default([]) }}"
tags:
- host_networks
- networks
====================================