Users and VM permissions matrix
by Colin Coe
Hi all
I've been tasked with creating a matrix of users/groups and VMs so we can
easily see who has access to what (via the SPICE console).
Google has given me a couple of hints, but I can't get it over the line.
---
users_service = connection.system_service().users_service()
users = users_service.list()
for user in users:
    username = user.user_name.split('@')[0]
    # Follow the link to the permissions of the user:
    perms = connection.follow_link(user.permissions)
    for perm in perms:
        if perm.vm:
            print(username)
            # Fetched but not actually used below:
            permissions_service = connection.system_service().permissions_service()
            print(perm.vm.id)
---
The problem is with the permissions; the output from the above is:
---
user1
1b645daf-de26-4f33-9e3b-6a12eadd4618
user2
9c79e763-f78d-4bf9-b8ca-20fe197fd80c
user3
f9d00b30-8003-41c3-95a1-10e0c452fa63
user4
1bbadf96-ef95-4ece-b5f3-1fa112aa3571
user5
e9085627-324e-48d3-bc04-52ff7798ddd0
---
I can't work out how to get the actual permissions rather than just the IDs.
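What I am ultimately after is something like the sketch below (rough and
untested; it assumes the same connection object as above and that perm.role /
perm.vm can be resolved by their ids via roles_service() / vms_service()):
---
system_service = connection.system_service()
users_service = system_service.users_service()
vms_service = system_service.vms_service()
roles_service = system_service.roles_service()

for user in users_service.list():
    username = user.user_name.split('@')[0]
    for perm in connection.follow_link(user.permissions):
        if perm.vm:
            # Resolve the linked objects so we get names instead of ids
            vm = vms_service.vm_service(perm.vm.id).get()
            role = roles_service.role_service(perm.role.id).get()
            print("{0}\t{1}\t{2}".format(username, vm.name, role.name))
---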
Any ideas?
Thanks
Quick generic Questions
by Christian Reiss
Hey folks,
I am looking at setting up a hyperconverged cluster with 3 nodes (and
oVirt 4.3). Before setting up I have some generic questions that I would
love to get hints or even an answer on.
First off, the servers are outfitted with 24 SSD drives each, behind a
HW RAID controller. Due to wear-leveling and speed I am looking at RAID 10,
so I would end up with one giant sda device.
a) Partitioning
Using the oVirt Node installer, which will use the full size of /dev/sda,
is this still the right solution for hyperconverged, given the Gluster issue?
If I understood it correctly, Gluster uses empty drives or partitions,
so a fully utilized drive is of no use here. Does the oVirt Node installer
have a hyperconverged/Gluster mode?
b) Storage Location
In this 3-node cluster, when creating a VM on node01, will the data for that VM
always end up on the local node01 server?
c) Starting VMs
Can a VM be migrated to or launched from node03 if the data resides on
node01 and node02 (2 copies with arbiter)?
d) Efficiency / High IO Load
If node01 is under high IO load, would additional data be read from the
other node which holds the copy, to even out the load? I am aware Virtuozzo
does this.
e) Storage Network dies
What would happen if node01, node02 and node03 are operational but
only the storage network dies (the frontend is still alive, as are the nodes)?
f) External iSCSI / FreeNAS
We have a FreeNAS system with tons of space and fast network
connectivity. Can oVirt import storage such as a remote iSCSI target
and run the VMs on the oVirt nodes while storing the data there?
Thank you for taking the time to clear this up.
I have found many approaches out there that are either old (oVirt 3) or
even contradict each other (regarding RAID levels...).
Cheers!
-Christian.
--
Christian Reiss - email(a)christian-reiss.de
support(a)alpha-labs.net
WEB alpha-labs.net
GPG Retrieval https://gpg.christian-reiss.de
GPG ID ABCD43C5, 0x44E29126ABCD43C5
GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5
(ASCII Ribbon Campaign against HTML in eMails)
"It's better to reign in hell than to serve in heaven."
-- John Milton, Paradise Lost
Re: Quick generic Questions
by Strahil
On Nov 7, 2019 15:29, Christian Reiss <email(a)christian-reiss.de> wrote:
>
> Hey folks,
>
> I am looking at setting up a hyperconverged cluster with 3 nodes (and
> oVirt 4.3). Before setting up I have some generic questions that I would
> love to get hints or even an answer on.
>
> First off, the Servers are outfittet with 24 (SSD) drives each in a
> HW-RAID. Due to wear-leveling and speed I am looking at RAID10. So I
> would end up with one giant sda device.
Go with RAID 0, or RAID 5/6, as you will have the same data on all nodes (3 copies in total).
> a) Partitioning
> Using oVirt node installer which will use the full size of /dev/sda is
> this still the right solution to Hyperconverged given the gluster issue?
> If I understood it correctly gluster is using empty drives or partitions
> so a fully utilized drive is of no use here. Does oVirt node installer
> have a hyperconverged/ gluster mode?
The Cockpit installer can prepare the Gluster infrastructure and then the oVirt cluster.
> b) Storage Location
> In this 3 node cluster, creating a VM on node01 will the data for node01
> always end up in the local node01 server?
Nope, all data is replicated on all 3 nodes (or on 2 nodes when using 'replica 2 arbiter 1' volumes).
> c) Starting VMs
> Can a VM be migrated or launched from node03 if the data resides on
> node01 and node02 (copies 2 with arbiter).
As gluster is a shared storage, the VMs can migrate on any host that has access to the storage (in your case any of the 3 nodes).
> d) Efficiency / High IO Load
> If node01 has high IO Load would additional data be loaded from the
> other node which has the copy to even the load? I am aware Virtuozzo
> does this.
Gluster clients (in this case the oVirt nodes) read from all 3 nodes simultaneously for better I/O. The same is valid for writes.
> e) Storage Network dies
> What would happen with node01, node02 and node03 are operational but
> only the storage network dies (frontend is still alive as are the nodes).
The nodes will become non-operational and all VMs will be paused until the storage is restored.
> f) external isci/ FreeNAS
> We have a FreeNAS system with tons of space and fast network
> connectivity. Can oVirt handle storage import like remote iscsi target
> and run VMs on the ovirt nodes but store data there?
Yep.
> Thank you for your time to clear this up.
> I have found many approaches out there that either are old (oVirt 3) or
> even contradict themselves (talk about RAID level...)
>
> Cheers!
> -Christian
CPU Pinning topology
by Fabrice Bacchella
I'm trying to understand the "CPU Pinning topology" field in the "Resource Allocation" tab, which looks like it is needed for High Performance VMs.
I have this hardware:
# lscpu
Architecture: x86_64
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
And I want to run a 22 vCPU VM on it, so no SMT, and 11 vCPUs mapped to each socket.
The only explanation I found about this field content was here:
https://rhv.bradmin.org/ovirt-engine/docs/Virtual_Machine_Management_Guid...
and I can't wrap my head around it. Do I need to give an explicit mapping for all 22 vCPUs? And dig into lscpu -e or numactl -H to find how each pCPU is identified?
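If the field really does just take vCPU#pCPU pairs joined by underscores (e.g. 0#0_1#2_2#4), which is how I read that page, then something like the sketch below could build the string from lscpu output: one thread per core, 11 cores per socket. Untested, and the exact syntax should be double-checked against the documentation.
import subprocess

# One hardware thread per physical core, taken from `lscpu -e=CPU,CORE,SOCKET`.
out = subprocess.check_output(['lscpu', '-e=CPU,CORE,SOCKET']).decode()
first_thread = {}
for line in out.splitlines()[1:]:                  # skip the header row
    cpu, core, socket = (int(x) for x in line.split())
    # keep only the first thread of each (socket, core) pair -> no SMT in the guest
    first_thread.setdefault((socket, core), cpu)

# 11 physical cores from each socket (sockets are 0 and 1 in the lscpu output above)
per_socket = {}
for (socket, core), cpu in sorted(first_thread.items()):
    pcpu_list = per_socket.setdefault(socket, [])
    if len(pcpu_list) < 11:
        pcpu_list.append(cpu)

pcpus = per_socket[0] + per_socket[1]
# vCPU#pCPU pairs joined by "_"
pinning = '_'.join('{0}#{1}'.format(v, p) for v, p in enumerate(pcpus))
print(pinning)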
Can ovirt-node support multiple network cards ?
by wangyu13476969128@126.com
Can ovirt-node support multiple network cards ?
I downloaded ovirt-node-ng-installer-4.3.5-2019073010.el7.iso from the oVirt site and installed it on a Dell R730 server.
The version of ovirt-engine is 4.3.5.5.
I found that the logical network ovirtmgmt can only point to one network card interface.
Can ovirt-node support multiple network cards ?
If yes, please tell me the method.
Fw: Upgrade 4.2.6 to 4.3.x
by Srivathsa Puliyala
++ users(a)ovirt.org
________________________________
From: Srivathsa Puliyala
Sent: Wednesday, November 6, 2019 11:38 AM
To: users-owner(a)ovirt.org <users-owner(a)ovirt.org>
Subject: Upgrade 4.2.6 to 4.3.x
Hi,
I am trying to follow the documentation for the ovirt-engine upgrade (https://www.ovirt.org/documentation/upgrade-guide/appe-Upgrading_to_oVirt...) but I am unable to find the package. Could anyone please help with this?
#################
No package ovirt-fast-forward-upgrade available.
Error: Nothing to do
#################
FYI: ovirt engine is running on CentOS Linux 7 (Core)
Thank you.
Srivathsa
Re: Cannot enable maintenance mode
by Martin Perina
Hi Bruno,
how have you rebooted the host? Have you used the Stop/Start/Restart functions
from the Power Management menu in the host detail view in webadmin?
Regards,
Martin
On Tue, Nov 5, 2019 at 2:29 AM Bruno Martins <bruno.o.martins(a)gfi.world>
wrote:
> Hi Lukas,
>
> Thanks for your message!
>
> In my cluster, I have all the VM's running under "HOST2".
> The issue I'm having is that I can't enable maintenance mode on "HOST1",
> in order to re-add it to oVirt Manager (SSL certificate replacement stuff).
> I did it successfully on "HOST2", but the other one seems to ignore my
> actions, with that message in the logs...
>
> Cheers,
>
> Bruno
>
> From: Lukas Svaty <lsvaty(a)redhat.com>
> Sent: 16 October 2019 10:49
> To: Bruno Martins <bruno.o.martins(a)gfi.world>
> Cc: users(a)ovirt.org
> Subject: Re: [ovirt-users] Re: Cannot enable maintenance mode
>
> Did you
>
> "Consider manual intervention"
>
> such as
>
> "stopping" or "migrating Vms"
>
> which are running on that host?
>
> If you are trying to put host to maintenance, it will migrate all the VMs
> somewhere else, thus... you might have problem with migrations (try to
> migrate them to other destination host) or power them off.
>
> On Wed, Oct 16, 2019 at 9:15 AM Bruno Martins <bruno.o.martins(a)gfi.world> wrote:
> Hey guys,
>
> There are really no options left here? Is there something else I should
> check?
>
> Thank you!
>
> -----Original Message-----
> From: Bruno Martins <bruno.o.martins(a)gfi.world>
> Sent: 3 October 2019 22:41
> To: Benny Zlotnik <bzlotnik(a)redhat.com>
> Cc: users(a)ovirt.org
> Subject: [ovirt-users] Re: Cannot enable maintenance mode
>
> Hello Benny,
>
> I did. No luck, still...
>
> Cheers!
>
> -----Original Message-----
> From: Benny Zlotnik <bzlotnik(a)redhat.com>
> Sent: 2 October 2019 19:19
> To: Bruno Martins <bruno.o.martins(a)gfi.world>
> Cc: users(a)ovirt.org
> Subject: Re: [ovirt-users] Re: Cannot enable maintenance mode
>
> Did you try the "Confirm Host has been rebooted" button?
>
> On Wed, Oct 2, 2019 at 9:17 PM Bruno Martins <bruno.o.martins(a)gfi.world> wrote:
> >
> > Hello guys,
> >
> > No ideas for this issue?
> >
> > Thanks for your cooperation!
> >
> > Kind regards,
> >
> > -----Original Message-----
> > From: Bruno Martins <bruno.o.martins(a)gfi.world>
> > Sent: 29 September 2019 16:16
> > To: users(a)ovirt.org
> > Subject: [ovirt-users] Cannot enable maintenance mode
> >
> > Hello guys,
> >
> > I am being unable to put a host from a two nodes cluster into
> maintenance mode in order to remove it from the cluster afterwards.
> >
> > This is what I see in engine.log:
> >
> > 2019-09-27 16:20:58,364 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (org.ovirt.thread.pool-6-thread-45) [4cc251c9] Correlation ID: 4cc251c9,
> Job ID: 65731fbb-db34-49a9-ab56-9fba59bc0ee0, Call Stack: null, Custom
> Event ID: -1, Message: Host CentOS-H1 cannot change into maintenance mode -
> not all Vms have been migrated successfully. Consider manual intervention:
> stopping/migrating Vms: Non interactive user (User: admin).
> >
> > Host has been rebooted multiple times. vdsClient shows no VM's running.
> >
> > What else can I do?
> >
> > Kind regards,
> >
> > Bruno Martins
>
>
>
> --
> LUKAS SVATY
> RHV QE
>
--
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
How to pass parameters between VDSM Hooks domxml in single run
by Vrgotic, Marko
Dear oVIrt,
A while ago we discussed ways to change/update the content of the domxml for certain actions.
As I mentioned before, we have added the VDSM hook 60_nsupdate, which removes the DNS record entries when a VM is destroyed:
…
domxml = hooking.read_domxml()
name = domxml.getElementsByTagName('name')[0]
name = " ".join(name.nodeValue for name in name.childNodes if name.nodeType == name.TEXT_NODE)
nsupdate_commands = """server {server_ip}
update delete {vm_name}.example.com a
update delete {vm_name}.example.com aaaa
update delete {vm_name}.example.com txt
send
""".format(server_ip="172.16.1.10", vm_name=name)
…
The goal:
However, we did not want to remove the DNS records when a VM is only migrated. Since migration is also considered a "destroy" action, we took the following approach:
* In the "before_vm_migrate_source" state, add a hook which will write the flag "is_migration" to the domxml
* Once a VM is scheduled for migration, this hook should add the flag "is_migration" to the domxml
* Once 60_nsupdate is triggered, it will check for the flag and, if present, skip the DNS record removal and only remove the flag "is_migration" from the VM's domxml (a simplified sketch of this check follows after the snippet below)
…
domxml = hooking.read_domxml()
migration = domxml.createElement("is_migration")
domxml.getElementsByTagName("domain")[0].appendChild(migration)
logging.info("domxml_updated {}".format(domxml.toprettyxml()))
hooking.write_domxml(domxml)
…
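For completeness, the check on the 60_nsupdate side is meant to look roughly like this (a simplified, untested sketch of our plan):
…
domxml = hooking.read_domxml()
flags = domxml.getElementsByTagName("is_migration")
if flags:
    # VM is only being migrated: drop the flag again and skip the DNS cleanup
    flags[0].parentNode.removeChild(flags[0])
    hooking.write_domxml(domxml)
else:
    # real destroy: build and send the nsupdate commands as shown earlier
    pass
…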
When executing this the first time, we observed that the "is_migration" flag is added to the domxml (dump from the hook's logging below):
<name>hookiesvm</name>
<uuid>fcfa66cb-b251-43a3-8e2b-f33b3024a749</uuid>
<metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">
<ns0:qos/>
<ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion>
<ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
<ovirt-vm:memGuaranteedSize type="int">1024</ovirt-vm:memGuaranteedSize>
<ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb>
...skipping...
<address bus="0x00" domain="0x0000" function="0x0" slot="0x09" type="pci"/>
</rng>
</devices>
<seclabel model="selinux" relabel="yes" type="dynamic">
<label>system_u:system_r:svirt_t:s0:c169,c575</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c169,c575</imagelabel>
</seclabel>
<seclabel model="dac" relabel="yes" type="dynamic">
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
<is_migration/>
</domain>
However, the flag was not present once the 60_nsupdate hook was executed.
The question: how do we make sure that, when the domxml is updated, the update is visible/usable by the following hook within a single run? How do we pass these changes between hooks?
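In case it clarifies what we mean by passing state between hooks: one fallback we can think of is not to go through the domxml at all, but to drop a marker file keyed on the VM UUID in before_vm_migrate_source and consume it in 60_nsupdate. A rough sketch only; the path and file name are placeholders:
…
# before_vm_migrate_source side
import os
import hooking

domxml = hooking.read_domxml()
vm_uuid = domxml.getElementsByTagName('uuid')[0].childNodes[0].nodeValue
open('/var/run/vdsm/is_migration.%s' % vm_uuid, 'w').close()

# 60_nsupdate side
marker = '/var/run/vdsm/is_migration.%s' % vm_uuid
if os.path.exists(marker):
    os.remove(marker)      # migration, not a real destroy: skip the DNS cleanup
else:
    pass                   # real destroy: send the nsupdate commands
…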
Kindly awaiting your reply.
— — —
Met vriendelijke groet / Kind regards,
Marko Vrgotic
Ansible setup host network fails on comparing sorted dictionaries
by Vrgotic, Marko
Dear oVirt,
While deploying ovirt_infra, the role ovirt.networks fails on Setup Host Networks with the following error:
TypeError: '<' not supported between instances of 'dict' and 'dict'
Full output:
TASK [ovirt.infra/roles/ovirt.networks : Setup host networks] ******************************************************************************************************************
The full traceback is:
Traceback (most recent call last):
File "/var/folders/40/w2c8fp151854mddz_4n3czwm0000gn/T/ansible_ovirt_host_network_payload_s0fx52mx/__main__.py", line 396, in main
(nic is None or host_networks_module.has_update(nics_service.service(nic.id)))
File "/var/folders/40/w2c8fp151854mddz_4n3czwm0000gn/T/ansible_ovirt_host_network_payload_s0fx52mx/__main__.py", line 289, in has_update
update = self.__compare_options(get_bond_options(bond.get('mode'), bond.get('options')), getattr(nic.bonding, 'options', []))
File "/var/folders/40/w2c8fp151854mddz_4n3czwm0000gn/T/ansible_ovirt_host_network_payload_s0fx52mx/__main__.py", line 247, in __compare_options
return sorted(get_dict_of_struct(opt) for opt in new_options) != sorted(get_dict_of_struct(opt) for opt in old_options)
TypeError: '<' not supported between instances of 'dict' and 'dict'
failed: [localhost] (item={'name': 'ovirt-staging-hv-02.avinity.tv', 'check': True, 'save': True, 'bond': {'name': 'bond28', 'mode': 4, 'interfaces': ['p2p1', 'p2p2']}, 'networks': [{'name': 'backbone', 'boot_protocol': 'static', 'address': '172.17.28.212', 'netmask': '255.255.255.0', 'version': 'v4'}]}) => {
"ansible_loop_var": "item",
"changed": false,
"invocation": {
"module_args": {
"bond": {
"interfaces": [
"p2p1",
"p2p2"
],
"mode": 4,
"name": "bond28"
},
"check": true,
"fetch_nested": false,
"interface": null,
"labels": null,
"name": "ovirt-staging-hv-02.avinity.tv",
"nested_attributes": [],
"networks": [
{
"address": "172.17.28.212",
"boot_protocol": "static",
"name": "backbone",
"netmask": "255.255.255.0",
"version": "v4"
}
],
"poll_interval": 3,
"save": true,
"state": "present",
"sync_networks": false,
"timeout": 180,
"wait": true
}
},
"item": {
"bond": {
"interfaces": [
"p2p1",
"p2p2"
],
"mode": 4,
"name": "bond28"
},
"check": true,
"name": "ovirt-staging-hv-02.avinity.tv",
"networks": [
{
"address": "172.17.28.212",
"boot_protocol": "static",
"name": "backbone",
"netmask": "255.255.255.0",
"version": "v4"
}
],
"save": true
},
"msg": "'<' not supported between instances of 'dict' and 'dict'"
}
Read vars_file 'vars/engine_vars.yml'
Read vars_file 'vars/secrets.yml'
Read vars_file 'vars/ovirt_infra_vars.yml'
Looking further into ovirt_host_network.py, I found that the issue is reported after the following is executed:
return sorted(get_dict_of_struct(opt) for opt in new_options) != sorted(get_dict_of_struct(opt) for opt in old_options)
It seemed to be failing because there is no key by which to sort the dicts, so as a test I added sorting based on name, and it worked in a single test run:
return sorted((get_dict_of_struct(opt) for opt in new_options), key=lambda x: x["name"]) != sorted((get_dict_of_struct(opt) for opt in old_options), key=lambda x: x["name"])
After rerunning the play with the changes above:
TASK [ovirt.infra/roles/ovirt.networks : Setup host networks] ******************************************************************************************************************
task path: /Users/mvrgotic/Git/ovirt-engineering/roles/ovirt.infra/roles/ovirt.networks/tasks/main.yml:25
Using module file /Users/mvrgotic/.local/share/virtualenvs/ovirt-engineering-JaxzXThh/lib/python3.7/site-packages/ansible/modules/cloud/ovirt/ovirt_host_network.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: mvrgotic
<127.0.0.1> EXEC /bin/sh -c '/Users/mvrgotic/.local/share/virtualenvs/ovirt-engineering-JaxzXThh/bin/python3 && sleep 0'
changed: [localhost] => (item={'name': 'ovirt-staging-hv-02.avinity.tv', 'check': True, 'save': True, 'bond': {'name': 'bond28', 'mode': 4, 'interfaces': ['p2p1', 'p2p2']}, 'networks': [{'name': 'backbone', 'boot_protocol': 'static', 'address': '172.17.28.212', 'netmask': '255.255.255.0', 'version': 'v4'}]}) => {
"ansible_loop_var": "item",
"changed": true,
"host_nic": {
"ad_aggregator_id": 6,
"bonding": {
"ad_partner_mac": {
"address": "44:31:92:7c:b3:11"
},
"options": [
{
"name": "mode",
"type": "Dynamic link aggregation (802.3ad)",
"value": "4"
},
{
"name": "xmit_hash_policy",
"value": "2"
}
],
"slaves": [
{
"href": "/ovirt-engine/api/hosts/aa1f7a65-a867-436a-9a7f-264068f4bdea/nics/01703220-570c-44f5-9729-6717ceead304",
"id": "01703220-570c-44f5-9729-6717ceead304"
},
{
"href": "/ovirt-engine/api/hosts/aa1f7a65-a867-436a-9a7f-264068f4bdea/nics/e879c1c6-065d-4742-b282-fcdb477f95a7",
"id": "e879c1c6-065d-4742-b282-fcdb477f95a7"
}
]
},
"boot_protocol": "static",
"bridged": false,
"custom_configuration": false,
"host": {
"href": "/ovirt-engine/api/hosts/aa1f7a65-a867-436a-9a7f-264068f4bdea",
"id": "aa1f7a65-a867-436a-9a7f-264068f4bdea"
},
"href": "/ovirt-engine/api/hosts/aa1f7a65-a867-436a-9a7f-264068f4bdea/nics/abce07fa-cb7f-46f2-b967-69d1feaa4075",
"id": "abce07fa-cb7f-46f2-b967-69d1feaa4075",
"ip": {
"address": "172.17.28.212",
"netmask": "255.255.255.0",
"version": "v4"
},
"ipv6": {
"gateway": "::",
"version": "v6"
},
"ipv6_boot_protocol": "none",
"mac": {
"address": "b4:96:91:3f:47:1c"
},
"mtu": 9000,
"name": "bond28",
"network": {
"href": "/ovirt-engine/api/networks/f3ef80cf-bf3a-4fa5-aed9-7d9e7455f804",
"id": "f3ef80cf-bf3a-4fa5-aed9-7d9e7455f804"
},
"network_labels": [],
"properties": [],
"speed": 10000000000,
"statistics": [],
"status": "up"
},
"id": "abce07fa-cb7f-46f2-b967-69d1feaa4075",
"invocation": {
"module_args": {
"bond": {
"interfaces": [
"p2p1",
"p2p2"
],
"mode": 4,
"name": "bond28"
},
"check": true,
"fetch_nested": false,
"interface": null,
"labels": null,
"name": "ovirt-staging-hv-02.avinity.tv",
"nested_attributes": [],
"networks": [
{
"address": "172.17.28.212",
"boot_protocol": "static",
"id": "3e40ff7d-5384-45f1-b036-13e6f91aff56",
"name": "backbone",
"netmask": "255.255.255.0",
"version": "v4"
}
],
"poll_interval": 3,
"save": true,
"state": "present",
"sync_networks": false,
"timeout": 180,
"wait": true
}
},
"item": {
"bond": {
"interfaces": [
"p2p1",
"p2p2"
],
"mode": 4,
"name": "bond28"
},
"check": true,
"name": "ovirt-staging-hv-02.avinity.tv",
"networks": [
{
"address": "172.17.28.212",
"boot_protocol": "static",
"name": "backbone",
"netmask": "255.255.255.0",
"version": "v4"
}
],
"save": true
}
}
Read vars_file 'vars/engine_vars.yml'
Read vars_file 'vars/secrets.yml'
Read vars_file 'vars/ovirt_infra_vars.yml'
The changes resulted in the configuration being applied exactly as intended.
I am not sure if this was the actual intention, but please let me know whether the change I made matches what was originally intended for the sorted comparison to work.
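For reference, the underlying problem is simply that Python 3 no longer defines an ordering between dicts, so sorted() needs an explicit key. A standalone illustration (it assumes every option dict carries a 'name' key, as the bond options above do):

opts_new = [{'name': 'xmit_hash_policy', 'value': '2'}, {'name': 'mode', 'value': '4'}]
opts_old = [{'name': 'mode', 'value': '4'}, {'name': 'xmit_hash_policy', 'value': '2'}]

# Python 3 raises: TypeError: '<' not supported between instances of 'dict' and 'dict'
#   sorted(opts_new) != sorted(opts_old)

# With an explicit key the comparison is well defined again:
changed = (sorted(opts_new, key=lambda o: o['name'])
           != sorted(opts_old, key=lambda o: o['name']))
print(changed)   # False -- same options, just in a different order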
My pipenv setup:
Python 3.7
ansible==2.8.6
asn1crypto==1.1.0
bcrypt==3.1.7
cffi==1.13.1
cryptography==2.8
dnspython==1.16.0
ipaddress==1.0.23
Jinja2==2.10.3
jmespath==0.9.4
lxml==4.4.1
MarkupSafe==1.1.1
netaddr==0.7.19
ovirt-engine-sdk-python==4.3.3
paramiko==2.6.0
passlib==1.7.1
pyasn1==0.4.5
pycparser==2.19
pycurl==7.43.0.3
PyNaCl==1.3.0
PyYAML==5.1.2
six==1.12.0
Ansible vars and play:
=================================
host_networks:
- name: ovirt-staging-hv-02.avinity.tv
check: true
save: true
bond:
name: bond28
mode: 4
interfaces:
- p2p1
- p2p2
networks:
- name: backbone
boot_protocol: static
address: 172.17.28.212
netmask: 255.255.255.0
version: v4
=================================
- name: Setup host networks
ovirt_host_network:
auth: "{{ ovirt_auth }}"
name: "{{ item.name }}"
state: "{{ item.state | default(omit) }}"
check: "{{ item.check | default(omit) }}"
save: "{{ item.save | default(omit) }}"
bond: "{{ item.bond | default(omit) }}"
networks: "{{ item.networks | default(omit) }}"
labels: "{{ item.labels | default(omit) }}"
interface: "{{ item.interface | default(omit) }}"
with_items:
- "{{ host_networks | default([]) }}"
tags:
- host_networks
- networks
====================================