Upgrade Memory of oVirt Nodes
by souvaliotimaria@mail.com
Hello everyone,
I have an oVirt 4.3.2.5 hyperconverged 3 node production environment and we want to add some RAM to it.
Can I upgrade the RAM while keeping the VMs running, so that my users don't notice any disruption?
The way I thought I should do it is: migrate any running VMs to the other nodes, set one node to maintenance mode, shut it down, install the new memory, bring it back up, remove it from maintenance mode, watch how the installation reacts, and then repeat for the other two nodes. Is this correct, or should I follow another approach?
Will there be a problem during the time when the nodes are not identical in their resources?
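For reference, the per-host maintenance/activation steps can also be scripted against the REST API, and on a hyperconverged setup it is worth waiting for Gluster self-heal to finish before moving on to the next node. A rough sketch only (the engine URL, credentials, host ID and volume name below are placeholders, not values from this environment):
ENGINE=https://engine.example.com/ovirt-engine/api
AUTH='admin@internal:password'
# Put the host into maintenance (running VMs should be migrated off first):
curl -k -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
     -d '<action/>' "$ENGINE/hosts/<host-id>/deactivate"
# ... power off, add the RAM, power the node back on ...
# Reactivate the host:
curl -k -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
     -d '<action/>' "$ENGINE/hosts/<host-id>/activate"
# Before taking down the next node, confirm self-heal has completed:
gluster volume heal <volume> info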
Thank you for your time,
Souvalioti Maria
Support for Shared SAS storage
by Vinícius Ferrão
Hello,
I have two compute nodes with SAS direct-attached storage sharing the same disks.
Looking at the supported types, I can't see this in the documentation: https://www.ovirt.org/documentation/admin-guide/chap-Storage.html
There is local storage in this documentation, but my case is two machines, both using SAS, connected to the same disks. It's the VRTX hardware from Dell.
Is there any support for this? It should be just like Fibre Channel and iSCSI, but with SAS instead.
Thanks,
oVirt 4.4.0 Release is now generally available
by Sandro Bonazzola
oVirt 4.4.0 Release is now generally available
The oVirt Project is excited to announce the general availability of the
oVirt 4.4.0 Release, as of May 20th, 2020
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics, as compared to oVirt 4.3.
Important notes before you install / upgrade
Some of the features included in the oVirt 4.4.0 release require content
that will be available in CentOS Linux 8.2, but they cannot be tested on RHEL 8.2
yet due to an incompatibility in the openvswitch package shipped in the
CentOS Virt SIG, which requires rebuilding openvswitch on top of CentOS
8.2. The OVS cluster switch type is not implemented for CentOS 8 hosts.
Please note that oVirt 4.4 only supports clusters and datacenters with
compatibility version 4.2 and above. If clusters or datacenters are running
with an older compatibility version, you need to upgrade them to at least
4.2 (4.3 is recommended).
Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.
For example, the megaraid_sas driver is removed. If you use Enterprise Linux 8
hosts, you can try to provide the necessary drivers for the deprecated
hardware using the DUD method (see the users mailing list thread on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXE...
).
Installation instructions
For the engine: either use the oVirt appliance or install CentOS Linux 8
minimal by following these steps:
- Install the CentOS Linux 8 image from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps postgresql:12
- dnf install ovirt-engine
- engine-setup
For the nodes:
Either use oVirt Node ISO or:
- Install CentOS Linux 8 from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...,
selecting the minimal installation.
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
- dnf update (reboot if needed)
- Attach the host to the engine and let it be deployed.
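For reference, attaching a host can also be done through the REST API instead of the Admin Portal. A minimal sketch (engine URL, credentials, host name/address and cluster name are placeholders):
curl -k -u 'admin@internal:password' -X POST \
     -H 'Content-Type: application/xml' \
     -d '<host><name>node1</name><address>node1.example.com</address><root_password>secret</root_password><cluster><name>Default</name></cluster></host>' \
     https://engine.example.com/ovirt-engine/api/hosts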
Update instructions
Update from oVirt 4.4 Release Candidate
On the engine side and on CentOS hosts, you’ll need to switch from
ovirt44-pre to ovirt44 repositories.
In order to do so, you need to:
1. dnf remove ovirt-release44-pre
2. rm -f /etc/yum.repos.d/ovirt-4.4-pre-dependencies.repo
3. rm -f /etc/yum.repos.d/ovirt-4.4-pre.repo
4. dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
5. dnf update
On the engine side you’ll need to run engine-setup only if you were not
already on the latest release candidate.
On oVirt Node, you’ll need to upgrade with:
1. Move node to maintenance
2. dnf install https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-im...
3. Reboot
4. Activate the host
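After the reboot, the state of the new image layer can be checked on the node, for example with nodectl (illustrative; run on the upgraded node):
nodectl info
nodectl check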
Update from oVirt 4.3
oVirt 4.4 is available only for CentOS 8. In-place upgrades from previous
installations, based on CentOS 7, are not possible. For the engine, take a
backup and restore it into a new engine. Nodes will need to be
reinstalled.
A 4.4 engine can still manage existing 4.3 hosts, but you can’t add new
ones.
For a standalone engine, please refer to upgrade procedure at
https://ovirt.org/documentation/upgrade_guide/#Upgrading_from_4-3
If needed, run ovirt-engine-rename (see engine rename tool documentation at
https://www.ovirt.org/documentation/admin-guide/chap-Utilities.html )
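The backup-and-restore path mentioned above looks roughly like the following; a sketch only (file names are placeholders, and depending on what was backed up, additional --provision-* options such as the DWH one may be needed on restore):
# On the old 4.3 engine:
engine-backup --mode=backup --scope=all --file=engine-43.backup --log=engine-43-backup.log
# Copy the backup file to the freshly installed 4.4 engine machine, then:
engine-backup --mode=restore --file=engine-43.backup --log=engine-43-restore.log --provision-db --restore-permissions
engine-setup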
When upgrading hosts:
You need to upgrade one host at a time.
1. Turn host to maintenance. Virtual machines on that host should migrate automatically to a different host.
2. Remove it from the engine
3. Re-install it with el8 or oVirt Node as per installation instructions
4. Re-add the host to the engine
Please note that you may see some issues live migrating VMs from el7 to
el8. If you hit such a case, please turn off the VM on the el7 host and
start it on the new el8 host, in order to be able to move the next el7
host to maintenance.
What’s new in oVirt 4.4.0 Release?
- Hypervisors based on CentOS Linux 8 (rebuilt from the award-winning RHEL 8), for both oVirt Node and standalone CentOS Linux hosts.
- Easier network management and configuration flexibility with NetworkManager.
- VMs based on a more modern Q35 chipset with legacy SeaBIOS and UEFI firmware.
- Support for direct passthrough of local host disks to VMs.
- Live migration improvements for High Performance guests.
- New Windows guest tools installer based on the WiX framework, now moved to the VirtioWin project.
- Dropped support for cluster levels prior to 4.2.
- Dropped API/SDK v3 support, deprecated in past versions.
- 4K block disk support, for file-based storage only; iSCSI/FC storage does not support 4K disks yet.
- You can export a VM to a data domain.
- You can edit floating disks.
- Ansible Runner (ansible-runner) is integrated within the engine, enabling more detailed monitoring of playbooks executed from the engine.
- Adding and reinstalling hosts is now completely based on Ansible, replacing ovirt-host-deploy, which is no longer used.
- The OpenStack Neutron Agent can no longer be configured by oVirt; it should be configured by TripleO instead.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
If you manage more than one oVirt instance, OKD, or RDO, we also recommend
trying ManageIQ <http://manageiq.org/>.
In that case, please be sure to take the qc2 image and not the ova image.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
Is it possible to export a VM bigger than 5 TB?
by miguel.garcia@toshibagcs.com
We have a VM with many virtual drives and need to back it up as an OVA file. Since this demands a lot of space, I mounted an NFS directory on the host, but I get the following message after trying to export the OVA:
Error while executing action: Cannot export VM. Invalid target folder: /mnt/shared2 on Host. You may refer to the engine.log file for further details.
Looking at the engine.log file, I got this message:
2020-07-03 11:42:37,268-04 INFO [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (default task-1929) [e357a397-3dc3-4566-900f-6e0e0cb39030] Lock Acquired to object 'EngineLock:{exclusiveLocks='[0b65c67d-98ae-435a-9c0a-2d9f0856a98b=VM]', sharedLocks=''}'
2020-07-03 11:42:37,275-04 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-1929) [e357a397-3dc3-4566-900f-6e0e0cb39030] Executing Ansible command: /usr/bin/ansible-playbook --ssh-common-args=-F /var/lib/ovirt-engine/.ssh/config -v --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory2275650137225503626 --extra-vars=target_directory="/nt/shared2" --extra-vars=validate_only="True" /usr/share/ovirt-engine/playbooks/ovirt-ova-export.yml [Logfile: /var/log/ovirt-engine/ova/ovirt-export-ova-validate-ansible-20200703114237-172.16.99.13-e357a397-3dc3-4566-900f-6e0e0cb39030.log]
2020-07-03 11:42:39,554-04 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-1929) [e357a397-3dc3-4566-900f-6e0e0cb39030] Ansible playbook command has exited with value: 2
2020-07-03 11:42:39,555-04 WARN [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (default task-1929) [e357a397-3dc3-4566-900f-6e0e0cb39030] Validation of action 'ExportVmToOva' failed for user Miguel.Garcia. Reasons: VAR__ACTION__EXPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_INVALID_OVA_DESTINATION_FOLDER,$vdsName hyp11.infra,$directory /nt/shared2
2020-07-03 11:42:39,556-04 INFO [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (default task-1929) [e357a397-3dc3-4566-900f-6e0e0cb39030] Lock freed to object 'EngineLock:{exclusiveLocks='[0b65c67d-98ae-435a-9c0a-2d9f0856a98b=VM]', sharedLocks=''}'
2020-07-03 11:44:53,021-04 INFO [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (default task-1937) [ba070d4b-5f76-4fd7-adc0-53e0f96e6635] Lock Acquired to object 'EngineLock:{exclusiveLocks='[0b65c67d-98ae-435a-9c0a-2d9f0856a98b=VM]', sharedLocks=''}'
2020-07-03 11:44:53,027-04 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-1937) [ba070d4b-5f76-4fd7-adc0-53e0f96e6635] Executing Ansible command: /usr/bin/ansible-playbook --ssh-common-args=-F /var/lib/ovirt-engine/.ssh/config -v --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory3629946116235444875 --extra-vars=target_directory="/mnt/shared2" --extra-vars=validate_only="True" /usr/share/ovirt-engine/playbooks/ovirt-ova-export.yml [Logfile: /var/log/ovirt-engine/ova/ovirt-export-ova-validate-ansible-20200703114453-172.16.99.13-ba070d4b-5f76-4fd7-adc0-53e0f96e6635.log]
2020-07-03 11:44:55,538-04 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-1937) [ba070d4b-5f76-4fd7-adc0-53e0f96e6635] Ansible playbook command has exited with value: 2
2020-07-03 11:44:55,538-04 WARN [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (default task-1937) [ba070d4b-5f76-4fd7-adc0-53e0f96e6635] Validation of action 'ExportVmToOva' failed for user Miguel.Garcia. Reasons: VAR__ACTION__EXPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_INVALID_OVA_DESTINATION_FOLDER,$vdsName hyp11.infra,$directory /mnt/shared2
2020-07-03 11:44:55,539-04 INFO [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (default task-1937) [ba070d4b-5f76-4fd7-adc0-53e0f96e6635] Lock freed to object 'EngineLock:{exclusiveLocks='[0b65c67d-98ae-435a-9c0a-2d9f0856a98b=VM]', sharedLocks=''}'
2020-07-03 11:45:17,391-04 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-11) [6d510f0c] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f762fb4b-57c1-40d0-bf3f-be4c83f16f44=PROVIDER]', sharedLocks=''}'
I had tried to add the NFS partition as a Storage Domain of data type, but that didn't help either.
The mount permissions are as follows:
drwxr-xr-x. 3 vdsm kvm 50 Jul 3 11:47 /mnt/shared2
Any idea how I can export this VM?
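One thing that might be worth checking from the host named in the log (hyp11.infra) is whether the vdsm user can actually write to the target directory, since the validation playbook runs there. A rough sketch (the path is taken from the post):
ls -ldZ /mnt/shared2
sudo -u vdsm touch /mnt/shared2/.ova-write-test && echo 'vdsm can write here'
sudo -u vdsm rm -f /mnt/shared2/.ova-write-test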
OVA export creates empty and unusable images
by thomas@hoberg.net
I've tested this now in three distinct farms with CentOS 7.8 and the latest oVirt 4.3 release: OVA export files only contain an XML header and then lots of zeros where the disk images should be.
Where an 'ls -l <vmname>.ova' shows a file about the size of the disk, 'du -h <vmname>.ova' shows mere kilobytes, 'strings <vmname>.ova' dumps the XML and then nothing but repeating zeros until the end of the file.
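An OVA is normally just a tar archive containing the OVF descriptor and the disk images, so the payload can also be inspected directly; a sketch (file and member names are placeholders):
tar tvf <vmname>.ova
tar xvf <vmname>.ova <disk-image-member>
qemu-img info <disk-image-member>
du -h --apparent-size <disk-image-member>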
Exporting existing VMs from a CentOS/RHEL 7 farm and importing them after a rebuild on CentOS/RHEL 8 would seem like a safe migration strategy, except when OVA export isn't working.
Please treat with priority!
4.4 HCI Install Failure - Missing /etc/pki/CA/cacert.pem
by Stephen Panicho
Hi all! I'm using Cockpit to perform an HCI install, and it fails at the
hosted engine deploy. Libvirtd can't restart because of a missing
/etc/pki/CA/cacert.pem file.
The log (tasks seemingly from
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/initial_clean.yml):
[ INFO ] TASK [ovirt.hosted_engine_setup : Stop libvirt service]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Drop vdsm config statements]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial abrt config
files]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restart abrtd service]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Drop libvirt sasl2 configuration
by vdsm]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Stop and disable services]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial libvirt default
network configuration]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Start libvirt]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unable
to start service libvirtd: Job for libvirtd.service failed because the
control process exited with error code.\nSee \"systemctl status
libvirtd.service\" and \"journalctl -xe\" for details.\n"}
journalctl -u libvirtd:
May 22 04:33:25 node1 libvirtd[26392]: libvirt version: 5.6.0, package:
10.el8 (CBS <cbs(a)centos.org>, 2020-02-27-01:09:46, )
May 22 04:33:25 node1 libvirtd[26392]: hostname: node1
May 22 04:33:25 node1 libvirtd[26392]: Cannot read CA certificate
'/etc/pki/CA/cacert.pem': No such file or directory
May 22 04:33:25 node1 systemd[1]: libvirtd.service: Main process exited,
code=exited, status=6/NOTCONFIGURED
May 22 04:33:25 node1 systemd[1]: libvirtd.service: Failed with result
'exit-code'.
May 22 04:33:25 node1 systemd[1]: Failed to start Virtualization daemon.
From a fresh CentOS 8.1 minimal install, I've installed the following:
- The 4.4 repo
- cockpit
- ovirt-cockpit-dashboard
- vdsm-gluster (providing glusterfs-server and allowing the Gluster Wizard
to complete)
- gluster-ansible-roles (only on the bootstrap host)
I'm not exactly sure what that initial bit of the playbook does. Comparing
the bootstrap node with another that has yet to be touched,
/etc/libvirt/libvirtd.conf and /etc/sysconfig/libvirtd are identical on both
hosts. Yet the bootstrap host can no longer start libvirtd while the other
host can. Neither host has the /etc/pki/CA/cacert.pem file.
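In case it helps to compare the two hosts further, the relevant libvirt settings and PKI paths can be listed side by side; an illustrative check using only standard libvirt/vdsm file locations:
grep -E '^(listen_tls|ca_file|cert_file|key_file|auth_tcp)' /etc/libvirt/libvirtd.conf
ls -l /etc/pki/CA/cacert.pem /etc/pki/vdsm/ 2>&1
journalctl -u libvirtd -b --no-pager | tail -n 20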
Please let me know if I can provide any more information. Thanks!
Hosted engine deploy error: The task includes an option with an undefined variable
by Angel R. Gonzalez
Hi all!
I'm deploying a hosted engine on a host node with an 8x Intel(R) Xeon(R)
CPU E5410 @ 2.33GHz.
The deploy process shows the following message:
> [INFO]TASK [ovirt.hosted_engine_setup : Convert CPU model name]
> [ERROR]fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute ''\n\nThe error appears to be in
> '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml':
> line 105, column 15, but may\nbe elsewhere in the file depending on
> the exact syntax problem.\n\nThe offending line appears to be:\n\n -
> debug: var=server_cpu_dict\n ^ here\n\nThere appears to be both 'k=v'
> shorthand syntax and YAML in this task. Only one syntax may be used.\n"}
The Ansible deploy script, at line 105, shows:
> - name: Parse server CPU list
> set_fact:
> server_cpu_dict: "{{ server_cpu_dict |
> combine({item.split(':')[1]: item.split(':')[3]}) }}"
> with_items: >-
> {{
> server_cpu_list.json['values']['system_option_value'][0]['value'].split(';
> ')|list|difference(['']) }}
> - debug: var=server_cpu_dict
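For what it's worth, that task only turns the engine's ServerCPUList option into a name-to-model map; roughly the following, where the sample string is an assumed illustration of the format, not the actual value from this engine:
# Assumed "level:model name:flags:libvirt model:arch;" entries (illustrative only):
sample='1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64;2:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64;'
# Equivalent of combine({item.split(':')[1]: item.split(':')[3]}):
echo "$sample" | tr ';' '\n' | awk -F: 'NF {print $2 " => " $4}'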
I don't know Ansible and I don't know how to resolve this issue. Any idea?
Thanks in advance,
Ángel González.
oVirt 4.4.1 HCI single server deployment failed nested-kvm
by wodel youchi
Hi,
I am using these versions for my test :
- ovirt-engine-appliance-4.4-20200723102445.1.el8.x86_64.rpm
- ovirt-node-ng-installer-4.4.1-2020072310.el8.iso
A single HCI server using nested KVM.
The gluster part works now without error, but when I click the
hosted-engine deployment button I get :
System data could not be retrieved!
No valid network interface has been found
If you are using Bonds or VLANs, use the following naming conventions:
- VLAN interfaces: physical_device.VLAN_ID (for example, eth0.23, eth1.128,
enp3s0.50)
- Bond interfaces: bond<number> (for example, bond0, bond1)
- VLANs on bond interfaces: bond<number>.VLAN_ID (for example, bond0.50,
bond1.128)
* Supported bond modes: active-backup, balance-xor, broadcast, 802.3ad
* Networking teaming is not supported and will cause errors
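Interface names matching those conventions can be created with NetworkManager; for example (a sketch only; device names, VLAN ID and bond members are illustrative):
# VLAN named physical_device.VLAN_ID:
nmcli con add type vlan ifname enp1s0.50 dev enp1s0 id 50
# Bond named bond<number> in active-backup mode, with two member NICs:
nmcli con add type bond ifname bond0 bond.options "mode=active-backup"
nmcli con add type bond-slave ifname enp1s0 master bond0
nmcli con add type bond-slave ifname enp2s0 master bond0
# VLAN on top of the bond (bond<number>.VLAN_ID):
nmcli con add type vlan ifname bond0.50 dev bond0 id 50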
From this log file I get :
cat ovirt-hosted-engine-setup-ansible-get_network_interfaces-xxxxxxxx.log
2020-07-29 15:27:09,246+0100 DEBUG var changed: host "localhost" var
"otopi_host_net" type "<class 'list'>" value: "[
"enp1s0",
"enp2s0"
]"
2020-07-29 15:27:09,246+0100 INFO ansible ok {'status': 'OK',
'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-eng
ine-setup/ansible/trigger_role.yml', 'ansible_host': 'localhost',
'ansible_task': 'Filter unsupported interface types', 'task_duration
': 0}
2020-07-29 15:27:09,246+0100 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7fe804bc5940> kwargs
2020-07-29 15:27:09,514+0100 INFO ansible task start {'status': 'OK',
'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-ho
sted-engine-setup/ansible/trigger_role.yml', 'ansible_task':
'ovirt.hosted_engine_setup : debug'}
2020-07-29 15:27:09,515+0100 DEBUG ansible on_any args TASK:
ovirt.hosted_engine_setup : debug kwargs is_conditional:False
2020-07-29 15:27:09,515+0100 DEBUG ansible on_any args localhostTASK:
ovirt.hosted_engine_setup : debug kwargs
2020-07-29 15:27:09,792+0100 INFO ansible ok {'status': 'OK',
'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-eng
ine-setup/ansible/trigger_role.yml', 'ansible_host': 'localhost',
'ansible_task': '', 'task_duration': 0}
2020-07-29 15:27:09,793+0100 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7fe804cb45f8> kwargs
2020-07-29 15:27:10,059+0100 INFO ansible task start {'status': 'OK',
'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-ho
sted-engine-setup/ansible/trigger_role.yml', 'ansible_task':
'ovirt.hosted_engine_setup : Failed if only teaming devices are availible
'}
2020-07-29 15:27:10,059+0100 DEBUG ansible on_any args TASK:
ovirt.hosted_engine_setup : Failed if only teaming devices are availible
kwargs is_conditional:False
2020-07-29 15:27:10,060+0100 DEBUG ansible on_any args localhostTASK:
ovirt.hosted_engine_setup : Failed if only teaming devices are a
vailible kwargs
2020-07-29 15:27:10,376+0100 DEBUG var changed: host "localhost" var
"ansible_play_hosts" type "<class 'list'>" value: "[]"
2020-07-29 15:27:10,376+0100 DEBUG var changed: host "localhost" var
"ansible_play_batch" type "<class 'list'>" value: "[]"
2020-07-29 15:27:10,376+0100 DEBUG var changed: host "localhost" var
"play_hosts" type "<class 'list'>" value: "[]"
2020-07-29 15:27:10,376+0100 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook":
"/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"msg": "The conditional check
'(otopi_host_net.ansible_facts.otopi_host_net | length == 0)' failed. The
error was: error while evaluating conditional
((otopi_host_net.ansible_facts.otopi_host_net | length == 0)): 'list
object' has no attribute 'ansible_facts'\n\nThe error appears to be in
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/filter_team_devices.yml':
line 29, column 13,
but may\nbe elsewhere in the file depending on the exact syntax
problem.\n\nThe offending line appears to be:\n\n- debug: var=otopi_ho
st_net\n ^ here\n\nThere appears to be both 'k=v' shorthand
syntax and YAML in this task. Only one syntax may be used.\n"
},
"ansible_task": "Failed if only teaming devices are availible",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 0
}
2020-07-29 15:27:10,377+0100 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7fe804c175c0> kwargs
ignor
e_errors:None
2020-07-29 15:27:10,378+0100 INFO ansible stats {
"ansible_playbook":
"/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_playbook_duration": "00:15 Minutes",
"ansible_result": "type: <class 'dict'>\nstr: {'localhost': {'ok': 16,
'failures': 1, 'unreachable': 0, 'changed': 1, 'skipped': 2
, 'rescued': 0, 'ignored': 0}}",
"ansible_type": "finish",
"status": "FAILED"
}
2020-07-29 15:27:10,378+0100 INFO SUMMARY:
Duration Task Name
-------- --------
[ < 1 sec ] Execute just a specific set of steps
[ 00:02 ] Force facts gathering
[ 00:01 ] Get all active network interfaces
[ < 1 sec ] Filter bonds with bad naming
[ < 1 sec ] Generate output list
[ 00:01 ] Collect interface types
[ < 1 sec ] Get list of Team devices
[ < 1 sec ] Filter unsupported interface types
[ FAILED ] Failed if only teaming devices are availible
2020-07-29 15:27:10,378+0100 DEBUG ansible on_any args
<ansible.executor.stats.AggregateStats object at 0x7fe8074bce80> kwargs
Regards.
Resizing partitions causes oVirt installation to fail
by hkexdong@yahoo.com.hk
I have 2 x 512 GB SSDs, purely for the oVirt 4.4.1 (CentOS 8.2) installation.
Using automatic partitioning assigns 800+ GB to the root partition. I think that's too much, so I manually reduced it to 80 GB.
As there are 2 disks, I also want to form a RAID, so I manually changed all the partitions from "LVM thin provisioning" to "RAID" (RAID 1).
Eventually, this caused the installation to fail with the message "There was an error running the kickstart script at line ....."
If I don't make any changes to the partitions, the installation succeeds.
Is this a known issue? Does oVirt not allow resizing partitions and using Linux software RAID?
Unable to start VMs on a specific host
by miguel.garcia@toshibagcs.com
I added a couple of new hosts to my cluster (hyp16, hyp17), both following the same procedure, but when VMs start, all of them go to hyp16, and if I try to migrate a VM to hyp17, the migration task fails.
Here is the log from the migration attempt:
2020-07-22 17:19:12,352-04 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-2378) [35ab9442-856d-4540-a463-d72d1211867f] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 24939bc1-5359-4f9d-b742-b32143c02eb1 Type: VMAction group MIGRATE_VM with role type USER
2020-07-22 17:19:12,421-04 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-2378) [35ab9442-856d-4540-a463-d72d1211867f] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='15466e1a-1f44-472c-94fc-84b132ff1c7d', vmId='24939bc1-5359-4f9d-b742-b32143c02eb1', srcHost='172.16.99.12', dstVdsId='3663f46b-61db-4b4c-a6c0-03fedc90edf0', dstHost='172.16.99.19:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='625', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=ab
ort, params=[]}}]]', dstQemu='172.16.99.19'}), log id: 4f624ae2
2020-07-22 17:19:12,423-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-2378) [35ab9442-856d-4540-a463-d72d1211867f] START, MigrateBrokerVDSCommand(HostName = hyp10.infra, MigrateVDSCommandParameters:{hostId='15466e1a-1f44-472c-94fc-84b132ff1c7d', vmId='24939bc1-5359-4f9d-b742-b32143c02eb1', srcHost='172.16.99.12', dstVdsId='3663f46b-61db-4b4c-a6c0-03fedc90edf0', dstHost='172.16.99.19:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='625', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntim
e, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='172.16.99.19'}), log id: 7ead484c
2020-07-22 17:19:12,587-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-2378) [35ab9442-856d-4540-a463-d72d1211867f] FINISH, MigrateBrokerVDSCommand, return: , log id: 7ead484c
2020-07-22 17:19:12,589-04 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-2378) [35ab9442-856d-4540-a463-d72d1211867f] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 4f624ae2
2020-07-22 17:19:12,597-04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-2378) [35ab9442-856d-4540-a463-d72d1211867f] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: Test, Source: hyp10.infra, Destination: hyp17.infra, User: Miguel.Garcia@).
2020-07-22 17:19:14,050-04 INFO [org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default task-2378) [] User antonio.acosta@ successfully logged out
2020-07-22 17:19:14,069-04 INFO [org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] (default task-2380) [27d07a90] Running command: TerminateSessionsForTokenCommand internal: true.
2020-07-22 17:19:14,405-04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-12) [] VM '24939bc1-5359-4f9d-b742-b32143c02eb1' was reported as Down on VDS '3663f46b-61db-4b4c-a6c0-03fedc90edf0'(hyp17.infra)
2020-07-22 17:19:14,406-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-12) [] START, DestroyVDSCommand(HostName = hyp17.infra, DestroyVmVDSCommandParameters:{hostId='3663f46b-61db-4b4c-a6c0-03fedc90edf0', vmId='24939bc1-5359-4f9d-b742-b32143c02eb1', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 7d54e8da
2020-07-22 17:19:14,823-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-12) [] Failed to destroy VM '24939bc1-5359-4f9d-b742-b32143c02eb1' because VM does not exist, ignoring
2020-07-22 17:19:14,823-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-12) [] FINISH, DestroyVDSCommand, return: , log id: 7d54e8da
2020-07-22 17:19:14,824-04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-12) [] VM '24939bc1-5359-4f9d-b742-b32143c02eb1'(Test) was unexpectedly detected as 'Down' on VDS '3663f46b-61db-4b4c-a6c0-03fedc90edf0'(hyp17.infra) (expected on '15466e1a-1f44-472c-94fc-84b132ff1c7d')
2020-07-22 17:19:14,824-04 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-12) [] Migration of VM 'Test' to host 'hyp17.infra' failed: VM destroyed during the startup.
2020-07-22 17:19:14,824-04 WARN [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-3) [] skipping VM '24939bc1-5359-4f9d-b742-b32143c02eb1' from this monitoring cycle - the VM data has changed since fetching the data
2020-07-22 17:19:17,224-04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [] VM '24939bc1-5359-4f9d-b742-b32143c02eb1'(Test) moved from 'MigratingFrom' --> 'Up'
2020-07-22 17:19:17,224-04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [] Adding VM '24939bc1-5359-4f9d-b742-b32143c02eb1'(Test) to re-run list
2020-07-22 17:19:17,229-04 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [] Rerun VM '24939bc1-5359-4f9d-b742-b32143c02eb1'. Called from VDS 'hyp10.infra'
2020-07-22 17:19:17,310-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-12994) [] START, MigrateStatusVDSCommand(HostName = hyp10.infra, MigrateStatusVDSCommandParameters:{hostId='15466e1a-1f44-472c-94fc-84b132ff1c7d', vmId='24939bc1-5359-4f9d-b742-b32143c02eb1'}), log id: 74cf526d
2020-07-22 17:19:17,533-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-12994) [] FINISH, MigrateStatusVDSCommand, return: , log id: 74cf526d
2020-07-22 17:19:17,587-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-12994) [] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed (VM: Test, Source: hyp10.infra, Destination: hyp17.infra).
2020-07-22 17:19:17,591-04 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (EE-ManagedThreadFactory-engine-Thread-12994) [] Lock freed to object 'EngineLock:{exclusiveLocks='[24939bc1-5359-4f9d-b742-b32143c02eb1=VM]', sharedLocks=''}'
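The engine log above only shows that the VM was destroyed during startup on hyp17.infra; the actual reason is usually recorded on the destination host. An illustrative check there (the VM ID and name are taken from the log, and the paths are the standard vdsm/libvirt locations):
grep 24939bc1-5359-4f9d-b742-b32143c02eb1 /var/log/vdsm/vdsm.log | tail -n 50
cat /var/log/libvirt/qemu/Test.log
journalctl -u vdsmd --since '2020-07-22 17:18' --until '2020-07-22 17:20'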