oVirt 4.4.1 HCI single server deployment failed nested-kvm
by wodel youchi
Hi,
I am using these versions for my test:
- ovirt-engine-appliance-4.4-20200723102445.1.el8.x86_64.rpm
- ovirt-node-ng-installer-4.4.1-2020072310.el8.iso
This is a single HCI server deployment using nested KVM.
The Gluster part now works without error, but when I click the
hosted-engine deployment button I get:
System data could not be retrieved!
No valid network interface has been found.
If you are using Bonds or VLANs, use the following naming conventions:
- VLAN interfaces: physical_device.VLAN_ID (for example, eth0.23, eth1.128, enp3s0.50)
- Bond interfaces: bond<number> (for example, bond0, bond1)
- VLANs on bond interfaces: bond<number>.VLAN_ID (for example, bond0.50, bond1.128)
- Supported bond modes: active-backup, balance-xor, broadcast, 802.3ad
- Networking teaming is not supported and will cause errors
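For reference, the deployment role filters interfaces by name and by the type NetworkManager reports ("Filter bonds with bad naming", "Get list of Team devices", "Filter unsupported interface types" in the log below), so it can help to list what the host actually exposes. A minimal check, assuming nmcli is available as it is on oVirt Node:

  # Device name, type and state as seen by NetworkManager
  nmcli -t -f DEVICE,TYPE,STATE device
  # Details for one of the listed interfaces
  nmcli device show enp1s0

Devices reported as type "team", or with names outside the conventions above, are excluded from the list the wizard offers.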
From this log file I get:
cat ovirt-hosted-engine-setup-ansible-get_network_interfaces-xxxxxxxx.log
2020-07-29 15:27:09,246+0100 DEBUG var changed: host "localhost" var
"otopi_host_net" type "<class 'list'>" value: "[
"enp1s0",
"enp2s0"
]"
2020-07-29 15:27:09,246+0100 INFO ansible ok {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_host': 'localhost', 'ansible_task': 'Filter unsupported interface types', 'task_duration': 0}
2020-07-29 15:27:09,246+0100 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7fe804bc5940> kwargs
2020-07-29 15:27:09,514+0100 INFO ansible task start {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_task': 'ovirt.hosted_engine_setup : debug'}
2020-07-29 15:27:09,515+0100 DEBUG ansible on_any args TASK:
ovirt.hosted_engine_setup : debug kwargs is_conditional:False
2020-07-29 15:27:09,515+0100 DEBUG ansible on_any args localhostTASK:
ovirt.hosted_engine_setup : debug kwargs
2020-07-29 15:27:09,792+0100 INFO ansible ok {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_host': 'localhost', 'ansible_task': '', 'task_duration': 0}
2020-07-29 15:27:09,793+0100 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7fe804cb45f8> kwargs
2020-07-29 15:27:10,059+0100 INFO ansible task start {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_task': 'ovirt.hosted_engine_setup : Failed if only teaming devices are availible'}
2020-07-29 15:27:10,059+0100 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Failed if only teaming devices are availible kwargs is_conditional:False
2020-07-29 15:27:10,060+0100 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : Failed if only teaming devices are availible kwargs
2020-07-29 15:27:10,376+0100 DEBUG var changed: host "localhost" var
"ansible_play_hosts" type "<class 'list'>" value: "[]"
2020-07-29 15:27:10,376+0100 DEBUG var changed: host "localhost" var
"ansible_play_batch" type "<class 'list'>" value: "[]"
2020-07-29 15:27:10,376+0100 DEBUG var changed: host "localhost" var
"play_hosts" type "<class 'list'>" value: "[]"
2020-07-29 15:27:10,376+0100 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook":
"/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"msg": "The conditional check
*'(otopi_host_net.ansible_facts.otopi_host_net | length == 0)' failed. The
error was: error while evaluating conditional
((otopi_host_net.ansible_facts.otopi_host_net | length == 0)): 'list
object' has no attribute 'ansible_facts'\n\nThe error appears to be in
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/filter_team_devices.yml':
line 29, column 13, *
but may\nbe elsewhere in the file depending on the exact syntax
problem.\n\nThe offending line appears to be:\n\n- debug: var=otopi_ho
st_net\n ^ here\n\nThere appears to be both 'k=v' shorthand
syntax and YAML in this task. Only one syntax may be used.\n"
},
"ansible_task": "Failed if only teaming devices are availible",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 0
}
2020-07-29 15:27:10,377+0100 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe804c175c0> kwargs ignore_errors:None
2020-07-29 15:27:10,378+0100 INFO ansible stats {
"ansible_playbook":
"/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_playbook_duration": "00:15 Minutes",
"ansible_result": "type: <class 'dict'>\nstr: {'localhost': {'ok': 16,
'failures': 1, 'unreachable': 0, 'changed': 1, 'skipped': 2
, 'rescued': 0, 'ignored': 0}}",
"ansible_type": "finish",
"status": "FAILED"
}
2020-07-29 15:27:10,378+0100 INFO SUMMARY:
Duration Task Name
-------- --------
[ < 1 sec ] Execute just a specific set of steps
[ 00:02 ] Force facts gathering
[ 00:01 ] Get all active network interfaces
[ < 1 sec ] Filter bonds with bad naming
[ < 1 sec ] Generate output list
[ 00:01 ] Collect interface types
[ < 1 sec ] Get list of Team devices
[ < 1 sec ] Filter unsupported interface types
[ FAILED ] Failed if only teaming devices are availible
2020-07-29 15:27:10,378+0100 DEBUG ansible on_any args
<ansible.executor.stats.AggregateStats object at 0x7fe8074bce80> kwargs
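Note that the traceback above points at a task inside the ovirt.hosted_engine_setup role itself (a 'list object' being treated as if it had ansible_facts, plus the mixed 'k=v'/YAML syntax complaint), not at the host's NICs, so it is worth looking at the exact task the error names and at the installed role version. A minimal check, using only the path and line numbers Ansible reported; the package names are the ones usually shipped with 4.4, adjust if they differ:

  # Show the task around the line Ansible complains about
  sed -n '20,40p' /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/filter_team_devices.yml
  # Installed package versions
  rpm -q ovirt-hosted-engine-setup ovirt-ansible-hosted-engine-setup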
Regards.
Resize partition cause oVirt installation failed
by hkexdong@yahoo.com.hk
I have 2 x 512 GB SSDs, purely for an oVirt 4.4.1 (CentOS 8.2) installation.
Auto partitioning assigns 800+ GB to the root partition. I think that's too much, so I manually reduced it to 80 GB.
Since there are two disks, I also want to form a RAID, so I manually changed all the partitions from "LVM thin provisioning" to "RAID" (RAID 1).
Eventually this causes the installation to fail with the message "There was an error running the kickstart script at line ....."
If I don't make any changes to the partitions, the installation succeeds.
Is this a known issue? Does oVirt not allow resizing partitions and using Linux software RAID?
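When the installer stops with "There was an error running the kickstart script at line ...", the script output is normally still available inside the installation environment; a rough way to look at it, assuming you can reach a shell on the installer (for example with Ctrl+Alt+F2) and that the usual Anaconda log locations apply:

  # Anaconda keeps its own logs and the kickstart script output under /tmp during the install
  ls /tmp/anaconda.log /tmp/program.log /tmp/ks-script-* 2>/dev/null
  less /tmp/anaconda.log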
Unable to start vms in a specific host
by miguel.garcia@toshibagcs.com
I added a couple of new hosts to my cluster (hyp16, hyp17). Both followed the same procedure, but when starting VMs all of them go to hyp16, and if I try to migrate a VM to hyp17 the migration task fails.
Here is the log from the migration attempt:
2020-07-22 17:19:12,352-04 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-2378) [35ab9442-856d-4540-a463-d72d1211867f] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 24939bc1-5359-4f9d-b742-b32143c02eb1 Type: VMAction group MIGRATE_VM with role type USER
2020-07-22 17:19:12,421-04 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-2378) [35ab9442-856d-4540-a463-d72d1211867f] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='15466e1a-1f44-472c-94fc-84b132ff1c7d', vmId='24939bc1-5359-4f9d-b742-b32143c02eb1', srcHost='172.16.99.12', dstVdsId='3663f46b-61db-4b4c-a6c0-03fedc90edf0', dstHost='172.16.99.19:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='625', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=ab
ort, params=[]}}]]', dstQemu='172.16.99.19'}), log id: 4f624ae2
2020-07-22 17:19:12,423-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-2378) [35ab9442-856d-4540-a463-d72d1211867f] START, MigrateBrokerVDSCommand(HostName = hyp10.infra, MigrateVDSCommandParameters:{hostId='15466e1a-1f44-472c-94fc-84b132ff1c7d', vmId='24939bc1-5359-4f9d-b742-b32143c02eb1', srcHost='172.16.99.12', dstVdsId='3663f46b-61db-4b4c-a6c0-03fedc90edf0', dstHost='172.16.99.19:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='625', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntim
e, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='172.16.99.19'}), log id: 7ead484c
2020-07-22 17:19:12,587-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-2378) [35ab9442-856d-4540-a463-d72d1211867f] FINISH, MigrateBrokerVDSCommand, return: , log id: 7ead484c
2020-07-22 17:19:12,589-04 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-2378) [35ab9442-856d-4540-a463-d72d1211867f] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 4f624ae2
2020-07-22 17:19:12,597-04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-2378) [35ab9442-856d-4540-a463-d72d1211867f] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: Test, Source: hyp10.infra, Destination: hyp17.infra, User: Miguel.Garcia@).
2020-07-22 17:19:14,050-04 INFO [org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default task-2378) [] User antonio.acosta@ successfully logged out
2020-07-22 17:19:14,069-04 INFO [org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] (default task-2380) [27d07a90] Running command: TerminateSessionsForTokenCommand internal: true.
2020-07-22 17:19:14,405-04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-12) [] VM '24939bc1-5359-4f9d-b742-b32143c02eb1' was reported as Down on VDS '3663f46b-61db-4b4c-a6c0-03fedc90edf0'(hyp17.infra)
2020-07-22 17:19:14,406-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-12) [] START, DestroyVDSCommand(HostName = hyp17.infra, DestroyVmVDSCommandParameters:{hostId='3663f46b-61db-4b4c-a6c0-03fedc90edf0', vmId='24939bc1-5359-4f9d-b742-b32143c02eb1', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 7d54e8da
2020-07-22 17:19:14,823-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-12) [] Failed to destroy VM '24939bc1-5359-4f9d-b742-b32143c02eb1' because VM does not exist, ignoring
2020-07-22 17:19:14,823-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-12) [] FINISH, DestroyVDSCommand, return: , log id: 7d54e8da
2020-07-22 17:19:14,824-04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-12) [] VM '24939bc1-5359-4f9d-b742-b32143c02eb1'(Test) was unexpectedly detected as 'Down' on VDS '3663f46b-61db-4b4c-a6c0-03fedc90edf0'(hyp17.infra) (expected on '15466e1a-1f44-472c-94fc-84b132ff1c7d')
2020-07-22 17:19:14,824-04 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-12) [] Migration of VM 'Test' to host 'hyp17.infra' failed: VM destroyed during the startup.
2020-07-22 17:19:14,824-04 WARN [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-3) [] skipping VM '24939bc1-5359-4f9d-b742-b32143c02eb1' from this monitoring cycle - the VM data has changed since fetching the data
2020-07-22 17:19:17,224-04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [] VM '24939bc1-5359-4f9d-b742-b32143c02eb1'(Test) moved from 'MigratingFrom' --> 'Up'
2020-07-22 17:19:17,224-04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [] Adding VM '24939bc1-5359-4f9d-b742-b32143c02eb1'(Test) to re-run list
2020-07-22 17:19:17,229-04 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [] Rerun VM '24939bc1-5359-4f9d-b742-b32143c02eb1'. Called from VDS 'hyp10.infra'
2020-07-22 17:19:17,310-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-12994) [] START, MigrateStatusVDSCommand(HostName = hyp10.infra, MigrateStatusVDSCommandParameters:{hostId='15466e1a-1f44-472c-94fc-84b132ff1c7d', vmId='24939bc1-5359-4f9d-b742-b32143c02eb1'}), log id: 74cf526d
2020-07-22 17:19:17,533-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-12994) [] FINISH, MigrateStatusVDSCommand, return: , log id: 74cf526d
2020-07-22 17:19:17,587-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-12994) [] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed (VM: Test, Source: hyp10.infra, Destination: hyp17.infra).
2020-07-22 17:19:17,591-04 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (EE-ManagedThreadFactory-engine-Thread-12994) [] Lock freed to object 'EngineLock:{exclusiveLocks='[24939bc1-5359-4f9d-b742-b32143c02eb1=VM]', sharedLocks=''}'
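The engine log above only records that the VM was destroyed during startup on hyp17; the underlying reason is usually visible on the destination host itself. A rough starting point, using the default vdsm/libvirt log locations and the VM name from the log:

  # On hyp17, around the migration timestamp
  grep -iE 'error|warn' /var/log/vdsm/vdsm.log | tail -n 100
  # QEMU/libvirt log for this specific VM
  less /var/log/libvirt/qemu/Test.log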
4.4rc issue with attaching logical network to ib0
by Edward Berger
I'm trying to add a logical network to an Omni-Path interface (which appears as an InfiniBand ib0 under the CentOS kernel driver) and the attempt errors out under 4.4 RC. This used to work under 4.3 on the same hardware.
'Unexpected failure of libnm when running the mainloop: run execution'},
code = -32603
I want to use ib0 for network file systems and VM migrations.
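The libnm message suggests NetworkManager itself failed while applying the change, so before anything else it may help to capture what NetworkManager reports for the device and what it logged at the time of the attempt (device name as above):

  nmcli device show ib0
  journalctl -u NetworkManager --since '10 minutes ago'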
Windows TimeZone UTC
by Erez Zarum
The Engine has the TimeZone types compiled in; this can be an issue when one wants to configure a Windows machine to use UTC as its time zone instead of GMT.
Using GMT sets the Windows VM to "London, Dublin" time, which may cause issues with daylight saving for some users.
There should be an option to supply our own time zone or, at the very least, the list should be coordinated with what Windows supports.
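As a possible interim workaround, the engine exposes a default time zone for Windows guests through engine-config; this is only a sketch, and the exact key name and accepted values should be confirmed with engine-config -l on the engine machine:

  # List time-zone related configuration keys
  engine-config -l | grep -i timezone
  # Example: set the default Windows guest time zone (key name and value assumed, not verified here)
  engine-config -s DefaultWindowsTimeZone='GMT Standard Time'
  systemctl restart ovirt-engine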
oVirt 4.4 Self-Hosted Installation failed
by enmanuelmoreira@gmail.com
Hi There!
I'm trying to install oVirt self-hosted engine on Fedora 32 with KVM and the install failed. I got the following message on the oVirt console:
Host localhost installation failed. Failed to execute Ansible host-deploy role: failed. Please check logs for more details: /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20200727183159-localhost-5de7c489.log.
and the log content:
[root@localhost ~]# tail -100 /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20200727183159-localhost-5de7c489.log
"end" : "2020-07-27 18:35:18.688022",
"delta" : "0:00:00.190619",
"changed" : true,
"invocation" : {
"module_args" : {
"_raw_params" : "'/usr/bin/openssl'\n'req'\n'-new'\n'-newkey'\n'rsa:2048'\n'-nodes'\n'-subj'\n'/'\n'-keyout'\n'/tmp/ansible.oqj_hat2qemu'\n",
"warn" : true,
"_uses_shell" : false,
"stdin_add_newline" : true,
"strip_empty_ends" : true,
"argv" : null,
"chdir" : null,
"executable" : null,
"creates" : null,
"removes" : null,
"stdin" : null
}
},
"stdout_lines" : [ "-----BEGIN CERTIFICATE REQUEST-----", "MIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL++", "QCG8gpsb3isdQFw/noaoOIGxd6zXcoCMdPs2vOP8z7ciPSQPE5r6JxmNTM9tMzCb", "b1sC7ON8PJNaMyRTQ1mVGFoQhQIq54L77GwV27qzVlsjmfM3MCUISVqTGZPWJ/RQ", "QVc03RXDbLYC0UG7C5Y+NRCp7G+67/dLjvzyO4IASZH1rEE7K/PPjSsyJJYaq68X", "XeyckgB7kjQXYZCIexihH3lvMvp7j75wc0RZztEw2bGhhByVsTZCvgouciL/43N7", "h2/8pMZaNbIcx5h8ZIoyWWvYGCe2PaALd94jSLjrgwY6v8lHSt/5S96Ace/C2jt6", "WDwbTfFvthyIosxJRdkCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBp0Mj1WO4h", "n27hRFQ5n2hwRpLrbnD/KIkicNR9sFPszAMt6cN84a5jocrSJEcboPnz/Bg1yNlq", "FsN6gVI/RQY7BTmgP5chThe+/vtCJjP71K/+5YKpaBCbtpUIo/dERzHMsVS5O730", "iXjFERxAObnrzllohpahc42+dwxPrd4ZMFrRmA5m55/nN9VgCudHTo8Uzrv+iaN7", "0G3oPW1IJGnycUArYyGPOIXTHMWRKcxF4irPOxXCp3cyWKxrfrO7vrl8LfOvVsJx", "btRN84AUvUsSPzFDVPuAl8xZ9M0P+Ho9uGvSMlQ0xdyfXU/9wFIhf3oFK3Slbo3j", "rXgOHdj+ve1g", "-----END CERTIFICATE REQUEST-----" ],
"stderr_lines" : [ "Generating a RSA private key", "....................................................................................................................................................+++++", ".....................................+++++", "writing new private key to '/tmp/ansible.oqj_hat2qemu'", "-----" ],
"_ansible_no_log" : false,
"failed" : false,
"item" : {
"changed" : true,
"path" : "/tmp/ansible.oqj_hat2qemu",
"uid" : 0,
"gid" : 0,
"owner" : "root",
"group" : "root",
"mode" : "0600",
"state" : "file",
"secontext" : "unconfined_u:object_r:user_tmp_t:s0",
"size" : 0,
"invocation" : {
"module_args" : {
"state" : "file",
"suffix" : "qemu",
"prefix" : "ansible.",
"path" : null
}
},
"failed" : false,
"item" : {
"suffix" : "qemu",
"pending_file" : "libvirt-migrate/server-key.pending.pem",
"req_dir" : "requests-qemu"
},
"ansible_loop_var" : "item"
},
"ansible_loop_var" : "item",
"_ansible_item_label" : {
"changed" : true,
"path" : "/tmp/ansible.oqj_hat2qemu",
"uid" : 0,
"gid" : 0,
"owner" : "root",
"group" : "root",
"mode" : "0600",
"state" : "file",
"secontext" : "unconfined_u:object_r:user_tmp_t:s0",
"size" : 0,
"invocation" : {
"module_args" : {
"state" : "file",
"suffix" : "qemu",
"prefix" : "ansible.",
"path" : null
}
},
"failed" : false,
"item" : {
"suffix" : "qemu",
"pending_file" : "libvirt-migrate/server-key.pending.pem",
"req_dir" : "requests-qemu"
},
"ansible_loop_var" : "item"
}
} ],
"changed" : true,
"msg" : "All items completed"
},
"start" : "2020-07-27T18:35:18.064749",
"end" : "2020-07-27T18:35:18.714265",
"duration" : 0.649516,
"event_loop" : null,
"uuid" : "ce9afcb5-6c75-44a8-a640-5152a8605b52"
}
}
}
2020-07-27 18:35:20 UTC - TASK [ovirt-host-deploy-vdsm-certificates : Copy vdsm and QEMU CSRs] ***********
2020-07-27 18:35:20 UTC -
2020-07-27 18:35:20 UTC - [WARNING]: The loop variable 'item' is already in use. You should set the
`loop_var` value in the `loop_control` option for the task to something else to
avoid variable collisions and unexpected behavior.
2020-07-27 18:35:20 UTC - failed: [localhost] (item={'cmd': ['/usr/bin/openssl', 'req', '-new', '-newkey', 'rsa:2048', '-nodes', '-subj', '/', '-keyout', '/tmp/ansible.sksg5y_jvdsm'], 'stdout': '-----BEGIN CERTIFICATE REQUEST-----\nMIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMo1\nIRs7qJtcFrVWc56TpJlxh54cPlLxASA/Xdu78vkNz4/eLL6Fyud5/mSloHeLuHat\nJcvwxmLWcWDeNcEWlMR9tJTA6WxCS2rvUXXFNkw9QCUtTB9qrmezujdOTbDP4eqO\n/i0/xn0BO+kizB7SPMt3uyLJKAOCefcUwvmMEwZ25nw8oCqf/gICl/0AGKphuP11\nH1HuK5x8CTxboO2LawccpRUcbBXn5vF4LRsxy/VuK4fPMnxTk52F0DKwZMfnM4i6\npBnl7rI0P9JUDBlt2clYhz5fy4/unZmNYiyYpfkfv+YB66/vsFA/Hsa4DdWkYfE0\n0FBpOR5ANVOMselvFC0CAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQAYQRPcLLqq\nKH9RkEZj/i6WPQWGBUbTmRS2e1uFhRLwVUgaX9yK84m/IS1w7X3JrYuykAh3nGjs\n955U7XqA28nt/VZqB9xXYvmmfhGNFVNphUa/YecBb7OilUlReuRjCugcR6s3JiKM\nVppr6iKxCYzooOiSsMJk7VSSWz2V7k4CGj4vD/1GX/BH9j2J9Xa4uvw0bmZf467q\nKqUhpTw2T/DFg4b+dFjmyjbmjZTLrpfA2xBXGJQMrnHv3v9ca8WlX/zDhZDfEycY\nHIgw9y+I6S0ZTKmzpCMzwpPUOny6StZcPWjKZGCV
tLq1slpCHEH9T8++RdX/9Gn8\nyU7Y/+iqZ/Us\n-----END CERTIFICATE REQUEST-----', 'stderr': "Generating a RSA private key\n......+++++\n.............................+++++\nwriting new private key to '/tmp/ansible.sksg5y_jvdsm'\n-----", 'rc': 0, 'start': '2020-07-27 18:35:18.270606', 'end': '2020-07-27 18:35:18.314757', 'delta': '0:00:00.044151', 'changed': True, 'invocation': {'module_args': {'_raw_params': "'/usr/bin/openssl'\n'req'\n'-new'\n'-newkey'\n'rsa:2048'\n'-nodes'\n'-subj'\n'/'\n'-keyout'\n'/tmp/ansible.sksg5y_jvdsm'\n", 'warn': True, '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['-----BEGIN CERTIFICATE REQUEST-----', 'MIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMo1', 'IRs7qJtcFrVWc56TpJlxh54cPlLxASA/Xdu78vkNz4/eLL6Fyud5/mSloHeLuHat', 'JcvwxmLWcWDeNcEWlMR9tJTA6WxCS2rvUXXFNkw9QCUtTB9qrmezujdOTbDP4eqO', '/i0/xn0BO+kizB7S
PMt3uyLJKAOCefcUwvmMEwZ25nw8oCqf/gICl/0AGKphuP11', 'H1HuK5x8CTxboO2LawccpRUcbBXn5vF4LRsxy/VuK4fPMnxTk52F0DKwZMfnM4i6', 'pBnl7rI0P9JUDBlt2clYhz5fy4/unZmNYiyYpfkfv+YB66/vsFA/Hsa4DdWkYfE0', '0FBpOR5ANVOMselvFC0CAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQAYQRPcLLqq', 'KH9RkEZj/i6WPQWGBUbTmRS2e1uFhRLwVUgaX9yK84m/IS1w7X3JrYuykAh3nGjs', '955U7XqA28nt/VZqB9xXYvmmfhGNFVNphUa/YecBb7OilUlReuRjCugcR6s3JiKM', 'Vppr6iKxCYzooOiSsMJk7VSSWz2V7k4CGj4vD/1GX/BH9j2J9Xa4uvw0bmZf467q', 'KqUhpTw2T/DFg4b+dFjmyjbmjZTLrpfA2xBXGJQMrnHv3v9ca8WlX/zDhZDfEycY', 'HIgw9y+I6S0ZTKmzpCMzwpPUOny6StZcPWjKZGCVtLq1slpCHEH9T8++RdX/9Gn8', 'yU7Y/+iqZ/Us', '-----END CERTIFICATE REQUEST-----'], 'stderr_lines': ['Generating a RSA private key', '......+++++', '.............................+++++', "writing new private key to '/tmp/ansible.sksg5y_jvdsm'", '-----'], 'failed': False, 'item': {'changed': True, 'path': '/tmp/ansible.sksg5y_jvdsm', 'uid': 0, 'gid': 0, 'owner': 'root', 'group': 'root', 'mode': '0600', 'state': 'file', 'secontext'
: 'unconfined_u:object_r:user_tmp_t:s0', 'size': 0, 'invocation': {'module_args': {'state': 'file', 'suffix': 'vdsm', 'prefix': 'ansible.', 'path': None}}, 'failed': False, 'item': {'suffix': 'vdsm', 'pending_file': 'keys/vdsmkey.pending.pem', 'req_dir': 'requests'}, 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "item": {"ansible_loop_var": "item", "changed": true, "cmd": ["/usr/bin/openssl", "req", "-new", "-newkey", "rsa:2048", "-nodes", "-subj", "/", "-keyout", "/tmp/ansible.sksg5y_jvdsm"], "delta": "0:00:00.044151", "end": "2020-07-27 18:35:18.314757", "failed": false, "invocation": {"module_args": {"_raw_params": "'/usr/bin/openssl'\n'req'\n'-new'\n'-newkey'\n'rsa:2048'\n'-nodes'\n'-subj'\n'/'\n'-keyout'\n'/tmp/ansible.sksg5y_jvdsm'\n", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"an
sible_loop_var": "item", "changed": true, "failed": false, "gid": 0, "group": "root", "invocation": {"module_args": {"path": null, "prefix": "ansible.", "state": "file", "suffix": "vdsm"}}, "item": {"pending_file": "keys/vdsmkey.pending.pem", "req_dir": "requests", "suffix": "vdsm"}, "mode": "0600", "owner": "root", "path": "/tmp/ansible.sksg5y_jvdsm", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 0, "state": "file", "uid": 0}, "rc": 0, "start": "2020-07-27 18:35:18.270606", "stderr": "Generating a RSA private key\n......+++++\n.............................+++++\nwriting new private key to '/tmp/ansible.sksg5y_jvdsm'\n-----", "stderr_lines": ["Generating a RSA private key", "......+++++", ".............................+++++", "writing new private key to '/tmp/ansible.sksg5y_jvdsm'", "-----"], "stdout": "-----BEGIN CERTIFICATE REQUEST-----\nMIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMo1\nIRs7qJtcFrVWc56TpJlxh54cPlLxASA/Xdu78vkNz4/eLL6Fyud5/mSloHeLuHat\
nJcvwxmLWcWDeNcEWlMR9tJTA6WxCS2rvUXXFNkw9QCUtTB9qrmezujdOTbDP4eqO\n/i0/xn0BO+kizB7SPMt3uyLJKAOCefcUwvmMEwZ25nw8oCqf/gICl/0AGKphuP11\nH1HuK5x8CTxboO2LawccpRUcbBXn5vF4LRsxy/VuK4fPMnxTk52F0DKwZMfnM4i6\npBnl7rI0P9JUDBlt2clYhz5fy4/unZmNYiyYpfkfv+YB66/vsFA/Hsa4DdWkYfE0\n0FBpOR5ANVOMselvFC0CAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQAYQRPcLLqq\nKH9RkEZj/i6WPQWGBUbTmRS2e1uFhRLwVUgaX9yK84m/IS1w7X3JrYuykAh3nGjs\n955U7XqA28nt/VZqB9xXYvmmfhGNFVNphUa/YecBb7OilUlReuRjCugcR6s3JiKM\nVppr6iKxCYzooOiSsMJk7VSSWz2V7k4CGj4vD/1GX/BH9j2J9Xa4uvw0bmZf467q\nKqUhpTw2T/DFg4b+dFjmyjbmjZTLrpfA2xBXGJQMrnHv3v9ca8WlX/zDhZDfEycY\nHIgw9y+I6S0ZTKmzpCMzwpPUOny6StZcPWjKZGCVtLq1slpCHEH9T8++RdX/9Gn8\nyU7Y/+iqZ/Us\n-----END CERTIFICATE REQUEST-----", "stdout_lines": ["-----BEGIN CERTIFICATE REQUEST-----", "MIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMo1", "IRs7qJtcFrVWc56TpJlxh54cPlLxASA/Xdu78vkNz4/eLL6Fyud5/mSloHeLuHat", "JcvwxmLWcWDeNcEWlMR9tJTA6WxCS2rvUXXFNkw9QCUtTB9qrmezujdOTbDP4eqO", "/i0/xn0BO+kizB7SPMt3uyLJK
AOCefcUwvmMEwZ25nw8oCqf/gICl/0AGKphuP11", "H1HuK5x8CTxboO2LawccpRUcbBXn5vF4LRsxy/VuK4fPMnxTk52F0DKwZMfnM4i6", "pBnl7rI0P9JUDBlt2clYhz5fy4/unZmNYiyYpfkfv+YB66/vsFA/Hsa4DdWkYfE0", "0FBpOR5ANVOMselvFC0CAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQAYQRPcLLqq", "KH9RkEZj/i6WPQWGBUbTmRS2e1uFhRLwVUgaX9yK84m/IS1w7X3JrYuykAh3nGjs", "955U7XqA28nt/VZqB9xXYvmmfhGNFVNphUa/YecBb7OilUlReuRjCugcR6s3JiKM", "Vppr6iKxCYzooOiSsMJk7VSSWz2V7k4CGj4vD/1GX/BH9j2J9Xa4uvw0bmZf467q", "KqUhpTw2T/DFg4b+dFjmyjbmjZTLrpfA2xBXGJQMrnHv3v9ca8WlX/zDhZDfEycY", "HIgw9y+I6S0ZTKmzpCMzwpPUOny6StZcPWjKZGCVtLq1slpCHEH9T8++RdX/9Gn8", "yU7Y/+iqZ/Us", "-----END CERTIFICATE REQUEST-----"]}, "msg": "Failed to connect to the host via ssh: ovirt@localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
2020-07-27 18:35:20 UTC - failed: [localhost] (item={'cmd': ['/usr/bin/openssl', 'req', '-new', '-newkey', 'rsa:2048', '-nodes', '-subj', '/', '-keyout', '/tmp/ansible.oqj_hat2qemu'], 'stdout': '-----BEGIN CERTIFICATE REQUEST-----\nMIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL++\nQCG8gpsb3isdQFw/noaoOIGxd6zXcoCMdPs2vOP8z7ciPSQPE5r6JxmNTM9tMzCb\nb1sC7ON8PJNaMyRTQ1mVGFoQhQIq54L77GwV27qzVlsjmfM3MCUISVqTGZPWJ/RQ\nQVc03RXDbLYC0UG7C5Y+NRCp7G+67/dLjvzyO4IASZH1rEE7K/PPjSsyJJYaq68X\nXeyckgB7kjQXYZCIexihH3lvMvp7j75wc0RZztEw2bGhhByVsTZCvgouciL/43N7\nh2/8pMZaNbIcx5h8ZIoyWWvYGCe2PaALd94jSLjrgwY6v8lHSt/5S96Ace/C2jt6\nWDwbTfFvthyIosxJRdkCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBp0Mj1WO4h\nn27hRFQ5n2hwRpLrbnD/KIkicNR9sFPszAMt6cN84a5jocrSJEcboPnz/Bg1yNlq\nFsN6gVI/RQY7BTmgP5chThe+/vtCJjP71K/+5YKpaBCbtpUIo/dERzHMsVS5O730\niXjFERxAObnrzllohpahc42+dwxPrd4ZMFrRmA5m55/nN9VgCudHTo8Uzrv+iaN7\n0G3oPW1IJGnycUArYyGPOIXTHMWRKcxF4irPOxXCp3cyWKxrfrO7vrl8LfOvVsJx\nbtRN84AUvUsSPzFDVPuAl8xZ9M0P+Ho9uGvSMlQ0
xdyfXU/9wFIhf3oFK3Slbo3j\nrXgOHdj+ve1g\n-----END CERTIFICATE REQUEST-----', 'stderr': "Generating a RSA private key\n....................................................................................................................................................+++++\n.....................................+++++\nwriting new private key to '/tmp/ansible.oqj_hat2qemu'\n-----", 'rc': 0, 'start': '2020-07-27 18:35:18.497403', 'end': '2020-07-27 18:35:18.688022', 'delta': '0:00:00.190619', 'changed': True, 'invocation': {'module_args': {'_raw_params': "'/usr/bin/openssl'\n'req'\n'-new'\n'-newkey'\n'rsa:2048'\n'-nodes'\n'-subj'\n'/'\n'-keyout'\n'/tmp/ansible.oqj_hat2qemu'\n", 'warn': True, '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['-----BEGIN CERTIFICATE REQUEST-----', 'MIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL++', 'QC
G8gpsb3isdQFw/noaoOIGxd6zXcoCMdPs2vOP8z7ciPSQPE5r6JxmNTM9tMzCb', 'b1sC7ON8PJNaMyRTQ1mVGFoQhQIq54L77GwV27qzVlsjmfM3MCUISVqTGZPWJ/RQ', 'QVc03RXDbLYC0UG7C5Y+NRCp7G+67/dLjvzyO4IASZH1rEE7K/PPjSsyJJYaq68X', 'XeyckgB7kjQXYZCIexihH3lvMvp7j75wc0RZztEw2bGhhByVsTZCvgouciL/43N7', 'h2/8pMZaNbIcx5h8ZIoyWWvYGCe2PaALd94jSLjrgwY6v8lHSt/5S96Ace/C2jt6', 'WDwbTfFvthyIosxJRdkCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBp0Mj1WO4h', 'n27hRFQ5n2hwRpLrbnD/KIkicNR9sFPszAMt6cN84a5jocrSJEcboPnz/Bg1yNlq', 'FsN6gVI/RQY7BTmgP5chThe+/vtCJjP71K/+5YKpaBCbtpUIo/dERzHMsVS5O730', 'iXjFERxAObnrzllohpahc42+dwxPrd4ZMFrRmA5m55/nN9VgCudHTo8Uzrv+iaN7', '0G3oPW1IJGnycUArYyGPOIXTHMWRKcxF4irPOxXCp3cyWKxrfrO7vrl8LfOvVsJx', 'btRN84AUvUsSPzFDVPuAl8xZ9M0P+Ho9uGvSMlQ0xdyfXU/9wFIhf3oFK3Slbo3j', 'rXgOHdj+ve1g', '-----END CERTIFICATE REQUEST-----'], 'stderr_lines': ['Generating a RSA private key', '....................................................................................................................................................
+++++', '.....................................+++++', "writing new private key to '/tmp/ansible.oqj_hat2qemu'", '-----'], 'failed': False, 'item': {'changed': True, 'path': '/tmp/ansible.oqj_hat2qemu', 'uid': 0, 'gid': 0, 'owner': 'root', 'group': 'root', 'mode': '0600', 'state': 'file', 'secontext': 'unconfined_u:object_r:user_tmp_t:s0', 'size': 0, 'invocation': {'module_args': {'state': 'file', 'suffix': 'qemu', 'prefix': 'ansible.', 'path': None}}, 'failed': False, 'item': {'suffix': 'qemu', 'pending_file': 'libvirt-migrate/server-key.pending.pem', 'req_dir': 'requests-qemu'}, 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "item": {"ansible_loop_var": "item", "changed": true, "cmd": ["/usr/bin/openssl", "req", "-new", "-newkey", "rsa:2048", "-nodes", "-subj", "/", "-keyout", "/tmp/ansible.oqj_hat2qemu"], "delta": "0:00:00.190619", "end": "2020-07-27 18:35:18.688022", "failed": false, "invocation": {"module_args": {"_raw_params": "'/usr/b
in/openssl'\n'req'\n'-new'\n'-newkey'\n'rsa:2048'\n'-nodes'\n'-subj'\n'/'\n'-keyout'\n'/tmp/ansible.oqj_hat2qemu'\n", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"ansible_loop_var": "item", "changed": true, "failed": false, "gid": 0, "group": "root", "invocation": {"module_args": {"path": null, "prefix": "ansible.", "state": "file", "suffix": "qemu"}}, "item": {"pending_file": "libvirt-migrate/server-key.pending.pem", "req_dir": "requests-qemu", "suffix": "qemu"}, "mode": "0600", "owner": "root", "path": "/tmp/ansible.oqj_hat2qemu", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 0, "state": "file", "uid": 0}, "rc": 0, "start": "2020-07-27 18:35:18.497403", "stderr": "Generating a RSA private key\n............................................................................................................................
........................+++++\n.....................................+++++\nwriting new private key to '/tmp/ansible.oqj_hat2qemu'\n-----", "stderr_lines": ["Generating a RSA private key", "....................................................................................................................................................+++++", ".....................................+++++", "writing new private key to '/tmp/ansible.oqj_hat2qemu'", "-----"], "stdout": "-----BEGIN CERTIFICATE REQUEST-----\nMIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL++\nQCG8gpsb3isdQFw/noaoOIGxd6zXcoCMdPs2vOP8z7ciPSQPE5r6JxmNTM9tMzCb\nb1sC7ON8PJNaMyRTQ1mVGFoQhQIq54L77GwV27qzVlsjmfM3MCUISVqTGZPWJ/RQ\nQVc03RXDbLYC0UG7C5Y+NRCp7G+67/dLjvzyO4IASZH1rEE7K/PPjSsyJJYaq68X\nXeyckgB7kjQXYZCIexihH3lvMvp7j75wc0RZztEw2bGhhByVsTZCvgouciL/43N7\nh2/8pMZaNbIcx5h8ZIoyWWvYGCe2PaALd94jSLjrgwY6v8lHSt/5S96Ace/C2jt6\nWDwbTfFvthyIosxJRdkCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBp0Mj1WO4h\nn27hRFQ5n2hwRpLrbnD/KIkicNR9
sFPszAMt6cN84a5jocrSJEcboPnz/Bg1yNlq\nFsN6gVI/RQY7BTmgP5chThe+/vtCJjP71K/+5YKpaBCbtpUIo/dERzHMsVS5O730\niXjFERxAObnrzllohpahc42+dwxPrd4ZMFrRmA5m55/nN9VgCudHTo8Uzrv+iaN7\n0G3oPW1IJGnycUArYyGPOIXTHMWRKcxF4irPOxXCp3cyWKxrfrO7vrl8LfOvVsJx\nbtRN84AUvUsSPzFDVPuAl8xZ9M0P+Ho9uGvSMlQ0xdyfXU/9wFIhf3oFK3Slbo3j\nrXgOHdj+ve1g\n-----END CERTIFICATE REQUEST-----", "stdout_lines": ["-----BEGIN CERTIFICATE REQUEST-----", "MIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL++", "QCG8gpsb3isdQFw/noaoOIGxd6zXcoCMdPs2vOP8z7ciPSQPE5r6JxmNTM9tMzCb", "b1sC7ON8PJNaMyRTQ1mVGFoQhQIq54L77GwV27qzVlsjmfM3MCUISVqTGZPWJ/RQ", "QVc03RXDbLYC0UG7C5Y+NRCp7G+67/dLjvzyO4IASZH1rEE7K/PPjSsyJJYaq68X", "XeyckgB7kjQXYZCIexihH3lvMvp7j75wc0RZztEw2bGhhByVsTZCvgouciL/43N7", "h2/8pMZaNbIcx5h8ZIoyWWvYGCe2PaALd94jSLjrgwY6v8lHSt/5S96Ace/C2jt6", "WDwbTfFvthyIosxJRdkCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBp0Mj1WO4h", "n27hRFQ5n2hwRpLrbnD/KIkicNR9sFPszAMt6cN84a5jocrSJEcboPnz/Bg1yNlq", "FsN6gVI/RQY7BTmgP5chThe+/vtCJjP71K/+5YKpaBCb
tpUIo/dERzHMsVS5O730", "iXjFERxAObnrzllohpahc42+dwxPrd4ZMFrRmA5m55/nN9VgCudHTo8Uzrv+iaN7", "0G3oPW1IJGnycUArYyGPOIXTHMWRKcxF4irPOxXCp3cyWKxrfrO7vrl8LfOvVsJx", "btRN84AUvUsSPzFDVPuAl8xZ9M0P+Ho9uGvSMlQ0xdyfXU/9wFIhf3oFK3Slbo3j", "rXgOHdj+ve1g", "-----END CERTIFICATE REQUEST-----"]}, "msg": "Failed to connect to the host via ssh: ovirt@localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
2020-07-27 18:35:20 UTC - fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"ansible_loop_var": "item", "item": {"ansible_loop_var": "item", "changed": true, "cmd": ["/usr/bin/openssl", "req", "-new", "-newkey", "rsa:2048", "-nodes", "-subj", "/", "-keyout", "/tmp/ansible.sksg5y_jvdsm"], "delta": "0:00:00.044151", "end": "2020-07-27 18:35:18.314757", "failed": false, "invocation": {"module_args": {"_raw_params": "'/usr/bin/openssl'\n'req'\n'-new'\n'-newkey'\n'rsa:2048'\n'-nodes'\n'-subj'\n'/'\n'-keyout'\n'/tmp/ansible.sksg5y_jvdsm'\n", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"ansible_loop_var": "item", "changed": true, "failed": false, "gid": 0, "group": "root", "invocation": {"module_args": {"path": null, "prefix": "ansible.", "state": "file", "suffix": "vdsm"}}, "item": {"pending
_file": "keys/vdsmkey.pending.pem", "req_dir": "requests", "suffix": "vdsm"}, "mode": "0600", "owner": "root", "path": "/tmp/ansible.sksg5y_jvdsm", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 0, "state": "file", "uid": 0}, "rc": 0, "start": "2020-07-27 18:35:18.270606", "stderr": "Generating a RSA private key\n......+++++\n.............................+++++\nwriting new private key to '/tmp/ansible.sksg5y_jvdsm'\n-----", "stderr_lines": ["Generating a RSA private key", "......+++++", ".............................+++++", "writing new private key to '/tmp/ansible.sksg5y_jvdsm'", "-----"], "stdout": "-----BEGIN CERTIFICATE REQUEST-----\nMIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMo1\nIRs7qJtcFrVWc56TpJlxh54cPlLxASA/Xdu78vkNz4/eLL6Fyud5/mSloHeLuHat\nJcvwxmLWcWDeNcEWlMR9tJTA6WxCS2rvUXXFNkw9QCUtTB9qrmezujdOTbDP4eqO\n/i0/xn0BO+kizB7SPMt3uyLJKAOCefcUwvmMEwZ25nw8oCqf/gICl/0AGKphuP11\nH1HuK5x8CTxboO2LawccpRUcbBXn5vF4LRsxy/VuK4fPMnxTk52F0DKwZMfnM4i6\npBnl7rI0
P9JUDBlt2clYhz5fy4/unZmNYiyYpfkfv+YB66/vsFA/Hsa4DdWkYfE0\n0FBpOR5ANVOMselvFC0CAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQAYQRPcLLqq\nKH9RkEZj/i6WPQWGBUbTmRS2e1uFhRLwVUgaX9yK84m/IS1w7X3JrYuykAh3nGjs\n955U7XqA28nt/VZqB9xXYvmmfhGNFVNphUa/YecBb7OilUlReuRjCugcR6s3JiKM\nVppr6iKxCYzooOiSsMJk7VSSWz2V7k4CGj4vD/1GX/BH9j2J9Xa4uvw0bmZf467q\nKqUhpTw2T/DFg4b+dFjmyjbmjZTLrpfA2xBXGJQMrnHv3v9ca8WlX/zDhZDfEycY\nHIgw9y+I6S0ZTKmzpCMzwpPUOny6StZcPWjKZGCVtLq1slpCHEH9T8++RdX/9Gn8\nyU7Y/+iqZ/Us\n-----END CERTIFICATE REQUEST-----", "stdout_lines": ["-----BEGIN CERTIFICATE REQUEST-----", "MIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMo1", "IRs7qJtcFrVWc56TpJlxh54cPlLxASA/Xdu78vkNz4/eLL6Fyud5/mSloHeLuHat", "JcvwxmLWcWDeNcEWlMR9tJTA6WxCS2rvUXXFNkw9QCUtTB9qrmezujdOTbDP4eqO", "/i0/xn0BO+kizB7SPMt3uyLJKAOCefcUwvmMEwZ25nw8oCqf/gICl/0AGKphuP11", "H1HuK5x8CTxboO2LawccpRUcbBXn5vF4LRsxy/VuK4fPMnxTk52F0DKwZMfnM4i6", "pBnl7rI0P9JUDBlt2clYhz5fy4/unZmNYiyYpfkfv+YB66/vsFA/Hsa4DdWkYfE0", "0FBpOR5ANVOMselvFC0CAwEAAaAA
MA0GCSqGSIb3DQEBCwUAA4IBAQAYQRPcLLqq", "KH9RkEZj/i6WPQWGBUbTmRS2e1uFhRLwVUgaX9yK84m/IS1w7X3JrYuykAh3nGjs", "955U7XqA28nt/VZqB9xXYvmmfhGNFVNphUa/YecBb7OilUlReuRjCugcR6s3JiKM", "Vppr6iKxCYzooOiSsMJk7VSSWz2V7k4CGj4vD/1GX/BH9j2J9Xa4uvw0bmZf467q", "KqUhpTw2T/DFg4b+dFjmyjbmjZTLrpfA2xBXGJQMrnHv3v9ca8WlX/zDhZDfEycY", "HIgw9y+I6S0ZTKmzpCMzwpPUOny6StZcPWjKZGCVtLq1slpCHEH9T8++RdX/9Gn8", "yU7Y/+iqZ/Us", "-----END CERTIFICATE REQUEST-----"]}, "msg": "Failed to connect to the host via ssh: ovirt@localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}, {"ansible_loop_var": "item", "item": {"ansible_loop_var": "item", "changed": true, "cmd": ["/usr/bin/openssl", "req", "-new", "-newkey", "rsa:2048", "-nodes", "-subj", "/", "-keyout", "/tmp/ansible.oqj_hat2qemu"], "delta": "0:00:00.190619", "end": "2020-07-27 18:35:18.688022", "failed": false, "invocation": {"module_args": {"_raw_params": "'/usr/bin/openssl'\n'req'\n'-new'\n'-newkey'\n'rsa:2048'\n'-node
s'\n'-subj'\n'/'\n'-keyout'\n'/tmp/ansible.oqj_hat2qemu'\n", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"ansible_loop_var": "item", "changed": true, "failed": false, "gid": 0, "group": "root", "invocation": {"module_args": {"path": null, "prefix": "ansible.", "state": "file", "suffix": "qemu"}}, "item": {"pending_file": "libvirt-migrate/server-key.pending.pem", "req_dir": "requests-qemu", "suffix": "qemu"}, "mode": "0600", "owner": "root", "path": "/tmp/ansible.oqj_hat2qemu", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 0, "state": "file", "uid": 0}, "rc": 0, "start": "2020-07-27 18:35:18.497403", "stderr": "Generating a RSA private key\n....................................................................................................................................................+++++\n..........................
...........+++++\nwriting new private key to '/tmp/ansible.oqj_hat2qemu'\n-----", "stderr_lines": ["Generating a RSA private key", "....................................................................................................................................................+++++", ".....................................+++++", "writing new private key to '/tmp/ansible.oqj_hat2qemu'", "-----"], "stdout": "-----BEGIN CERTIFICATE REQUEST-----\nMIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL++\nQCG8gpsb3isdQFw/noaoOIGxd6zXcoCMdPs2vOP8z7ciPSQPE5r6JxmNTM9tMzCb\nb1sC7ON8PJNaMyRTQ1mVGFoQhQIq54L77GwV27qzVlsjmfM3MCUISVqTGZPWJ/RQ\nQVc03RXDbLYC0UG7C5Y+NRCp7G+67/dLjvzyO4IASZH1rEE7K/PPjSsyJJYaq68X\nXeyckgB7kjQXYZCIexihH3lvMvp7j75wc0RZztEw2bGhhByVsTZCvgouciL/43N7\nh2/8pMZaNbIcx5h8ZIoyWWvYGCe2PaALd94jSLjrgwY6v8lHSt/5S96Ace/C2jt6\nWDwbTfFvthyIosxJRdkCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBp0Mj1WO4h\nn27hRFQ5n2hwRpLrbnD/KIkicNR9sFPszAMt6cN84a5jocrSJEcboPnz/Bg1yNlq\nFsN6gVI/RQY7BTmgP5c
hThe+/vtCJjP71K/+5YKpaBCbtpUIo/dERzHMsVS5O730\niXjFERxAObnrzllohpahc42+dwxPrd4ZMFrRmA5m55/nN9VgCudHTo8Uzrv+iaN7\n0G3oPW1IJGnycUArYyGPOIXTHMWRKcxF4irPOxXCp3cyWKxrfrO7vrl8LfOvVsJx\nbtRN84AUvUsSPzFDVPuAl8xZ9M0P+Ho9uGvSMlQ0xdyfXU/9wFIhf3oFK3Slbo3j\nrXgOHdj+ve1g\n-----END CERTIFICATE REQUEST-----", "stdout_lines": ["-----BEGIN CERTIFICATE REQUEST-----", "MIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL++", "QCG8gpsb3isdQFw/noaoOIGxd6zXcoCMdPs2vOP8z7ciPSQPE5r6JxmNTM9tMzCb", "b1sC7ON8PJNaMyRTQ1mVGFoQhQIq54L77GwV27qzVlsjmfM3MCUISVqTGZPWJ/RQ", "QVc03RXDbLYC0UG7C5Y+NRCp7G+67/dLjvzyO4IASZH1rEE7K/PPjSsyJJYaq68X", "XeyckgB7kjQXYZCIexihH3lvMvp7j75wc0RZztEw2bGhhByVsTZCvgouciL/43N7", "h2/8pMZaNbIcx5h8ZIoyWWvYGCe2PaALd94jSLjrgwY6v8lHSt/5S96Ace/C2jt6", "WDwbTfFvthyIosxJRdkCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBp0Mj1WO4h", "n27hRFQ5n2hwRpLrbnD/KIkicNR9sFPszAMt6cN84a5jocrSJEcboPnz/Bg1yNlq", "FsN6gVI/RQY7BTmgP5chThe+/vtCJjP71K/+5YKpaBCbtpUIo/dERzHMsVS5O730", "iXjFERxAObnrzllohpahc42+dwxPrd4ZM
FrRmA5m55/nN9VgCudHTo8Uzrv+iaN7", "0G3oPW1IJGnycUArYyGPOIXTHMWRKcxF4irPOxXCp3cyWKxrfrO7vrl8LfOvVsJx", "btRN84AUvUsSPzFDVPuAl8xZ9M0P+Ho9uGvSMlQ0xdyfXU/9wFIhf3oFK3Slbo3j", "rXgOHdj+ve1g", "-----END CERTIFICATE REQUEST-----"]}, "msg": "Failed to connect to the host via ssh: ovirt@localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}]}
2020-07-27 18:35:20 UTC - PLAY RECAP *********************************************************************
localhost : ok=28 ch
I want to install the host and the engine in the same VM (I did this before with oVirt 4.3, but it doesn't work anymore).
Could you help me please?
Cheers.
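Every failing item above ends with "Failed to connect to the host via ssh: ovirt@localhost: Permission denied", so it seems worth confirming ssh access for the account host-deploy is using before anything else. A minimal check from the engine VM, using the user and host names from the message:

  # Does key/password authentication work at all for that account?
  ssh -v ovirt@localhost true
  # What does sshd actually allow? (run as root on the host)
  sshd -T | grep -iE 'permitrootlogin|passwordauthentication|pubkeyauthentication'

Also note that oVirt 4.4 hosts are built and tested against EL8, so Fedora 32 as the host OS may itself be the root of the problem.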
where is error log for OVA import
by Philip Brown
I just tried to import an OVA file.
The GUI status mentions that things seem to go along fairly happily:
it mentions that it creates a disk for it,
but then eventually it just says
"failed to import VM xxxxx into datacenter Default"
with zero explanation.
Isn't there a log file or something I can check, somewhere, to find out what the problem is?
--
Philip Brown| Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310| Fax 714.918.1325
pbrown(a)medata.com| www.medata.com
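For the archives: the OVA import is driven from the engine, so the first places to look are usually engine.log and the per-import ansible logs on the engine machine; the exact subdirectory name below is an assumption, so adjust the path if it differs:

  # On the engine machine
  grep -i 'ImportVmFromOva\|ova' /var/log/ovirt-engine/engine.log | tail -n 50
  # Per-import ansible logs, if present
  ls -lt /var/log/ovirt-engine/ova/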
testing ovirt 4.4 as single node VM inside 4.3
by Philip Brown
I'm having problems importing OVA exports from VMware 4 into oVirt 4.3.
I've read that this is basically a known issue and that oVirt 4.4 improves the import situation greatly.
But I'd like to be able to prove it helps us before committing to large undertakings.
Is it doable to run a single-node oVirt cluster as a VM inside another one?
Or do I need to run it on bare metal?
--
Philip Brown| Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310| Fax 714.918.1325
pbrown(a)medata.com| www.medata.com
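Running a 4.4 test host as a VM on top of a 4.3 cluster generally works as long as the physical hosts expose nested virtualization and the test VM gets host CPU passthrough. A quick check and, if needed, persistent enablement on an Intel host would be something like the following (use kvm_amd instead of kvm_intel on AMD):

  # 'Y' or '1' means nested virtualization is enabled
  cat /sys/module/kvm_intel/parameters/nested
  # Enable it persistently, then reload the module (no VMs may be running)
  echo 'options kvm_intel nested=1' > /etc/modprobe.d/kvm_intel.conf
  modprobe -r kvm_intel && modprobe kvm_intel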
Re: Changing FQDN
by Alex K
On Fri, Jul 31, 2020, 21:00 Alex K <rightkicktech(a)gmail.com> wrote:
>
>
> On Fri, Jul 31, 2020, 18:26 Dominik Holler <dholler(a)redhat.com> wrote:
>
>>
>>
>> On Fri, Jul 31, 2020 at 4:00 PM Alex K <rightkicktech(a)gmail.com> wrote:
>>
>>>
>>>
>>> On Fri, Jul 31, 2020 at 4:50 PM Dominik Holler <dholler(a)redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Jul 31, 2020 at 2:44 PM Alex K <rightkicktech(a)gmail.com> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I am running ovirt 4.2.8
>>>>> I did change ovirt engine FQDN using ovirt-engine-rename tool following
>>>>>
>>>>>
>>>>> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/...
>>>>>
>>>>> The hosts' FQDN is also changed and everything seems fine apart from the
>>>>> OVN connection and the ImageIO proxy.
>>>>>
>>>>> About OVN, I just configured OVN and when I test the connection I get:
>>>>>
>>>>> Failed with error Certificate for <new fqdn> doesn't match any of the
>>>>> subject alternative names: [old fqdn] and code 5050)
>>>>>
>>>>
>>>>
>>>> Can you please replace all occurrences of the old fqdn by the new fqdn
>>>> in
>>>> /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
>>>> and
>>>> systemctl restart ovirt-provider-ovn
>>>> and let us know if this solves the problem?
>>>>
>>> Did it and the same error is reported.
>>> Then tried engine-setup and still the same issue.
>>>
>>
>> Can you please check if adjusting both URLs of the ovirt-provider-ovn
>> external network provider in oVirt Engine in the
>> oVirt Administration Portal -> Administration -> Providers ->
>> ovirt-provider-ovn -> Edit
>> solves the issue?
>>
> Forgot to mention that I had already changed that to reflect the new FQDN,
> both the URL and the hostname.
>
I think that the issue lies in the CA subject still referring to the previous
FQDN.
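One way to confirm that is to inspect the certificate ovirt-provider-ovn actually serves and compare its subject and SAN entries with the new FQDN; the certificate path should be taken from the ssl entries in the 10-setup-ovirt-provider-ovn.conf file mentioned above, and the host/port below are only examples (9696 is the usual provider port):

  # Certificate file referenced by the provider configuration
  openssl x509 -noout -text -in /path/to/ovirt-provider-ovn.cer | grep -A1 'Subject Alternative Name'
  # Or check what is actually presented on the provider port
  echo | openssl s_client -connect new.fqdn.example:9696 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'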
>
>>
>>
>>>
>>>> This step is not yet automated by ovirt-engine-rename in oVirt-4.2. ,
>>>> please find more details in
>>>> https://bugzilla.redhat.com/1501798
>>>> <https://bugzilla.redhat.com/show_bug.cgi?id=1501798>
>>>>
>>>>
>>>>
>>>>
>>>>>
>>>>> About the ImageIO proxy, when I test the connection I get nothing. No
>>>>> error in engine.log or in the GUI.
>>>>>
>>>>> Thus it seems that I have to generate/replace new certs.
>>>>> Is there a way I can fix this until I switch to 4.3 and eventually to
>>>>> 4.4, where it seems that this is handled by the rename tool?
>>>>>
>>>>> Thanks for any assistance.
>>>>> Alex.
4.4.x - VLAN-based logical network can't attach to bonds?
by Mark R
Hello all,
New 4.4.1 hosted-engine install. Pre-deploy, the host already had bond0 and its iSCSI interfaces configured. The deploy correctly sets up the ovirtmgmt interface using the bond, and when everything finishes up it's working as expected. However, I then need to create a VLAN-based network and attach it to the same bond0 interface. Editing the host networks, dragging the VLAN 102 network to the bond alongside ovirtmgmt, and then clicking "OK" results in a failure every time. The error returned is:
VDSM ovirt5.domain.com command HostSetupNetworksVDS failed: Internal JSON-RPC error: {'reason': 'Unexpected failure of libnm when running the mainloop: run execution'}
I can break the bond and apply this and any other VLAN-based network at will, but then it's not possible to re-add the interface I removed in order to create the bond again. That may be by design, and it may simply not be possible to attach an interface and create a bond once you've already assigned logical networks to it. The error there is "Error while executing action HostSetupNetworks: Interface already in use".
I'm just putting out feelers to see if this is a known issue that other people are hitting, or whether other folks with hosted-engine 4.4.x deploys are readily creating that initial bond0 interface and assigning any VLAN-based logical networks they want without issue. These same hosts (they still aren't in prod, so I rebuild them at will) run 4.3 with no issues, and setting up the exact same network configurations works flawlessly; it only becomes an issue on 4.4.1. This host was freshly installed today and has all updates.
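One data point that might help narrow it down: the failing call ends up in NetworkManager via libnm, so it can be useful to see whether NM on that host can create the same VLAN on top of the bond at all, outside of oVirt. A minimal test, to be removed again immediately since VDSM should own the final configuration (names taken from the description above):

  # Try to create VLAN 102 on top of bond0 directly through NetworkManager
  nmcli connection add type vlan con-name test-vlan102 ifname bond0.102 dev bond0 id 102
  # Remove the test connection again
  nmcli connection delete test-vlan102
  # NetworkManager's own view of the failure
  journalctl -u NetworkManager --since '10 minutes ago'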
Thanks,
Mark