Error while deploying Hyperconverged oVirt 4.3.3(el7) + GlusterFS
by techbreak@icloud.com
As the title says, we want to build a three-host hyperconverged oVirt solution. We installed oVirt and Gluster as described in the guide https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying...
When we try to deploy, we get some errors that we cannot figure out.
==============================================================
gdeploy generates the following configuration:
hc_nodes:
  hosts:
    virtnodetest-0-0:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/isostorage
          lvname: gluster_lv_isostorage
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstorage
          lvname: gluster_lv_vmstorage
          vgname: gluster_vg_sdb
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 150G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdb
          thinpoolname: gluster_thinpool_gluster_vg_sdb
          poolmetadatasize: 16G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_isostorage
          lvsize: 250G
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_vmstorage
          lvsize: 3500G
    virtnodetest-0-1:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/isostorage
          lvname: gluster_lv_isostorage
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstorage
          lvname: gluster_lv_vmstorage
          vgname: gluster_vg_sdb
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 150G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdb
          thinpoolname: gluster_thinpool_gluster_vg_sdb
          poolmetadatasize: 16G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_isostorage
          lvsize: 250G
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_vmstorage
          lvsize: 3500G
    virtnodetest-0-2:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/isostorage
          lvname: gluster_lv_isostorage
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstorage
          lvname: gluster_lv_vmstorage
          vgname: gluster_vg_sdb
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 150G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdb
          thinpoolname: gluster_thinpool_gluster_vg_sdb
          poolmetadatasize: 16G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_isostorage
          lvsize: 250G
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_vmstorage
          lvsize: 3500G
  vars:
    gluster_infra_disktype: JBOD
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - virtnodetest-0-0
      - virtnodetest-0-1
      - virtnodetest-0-2
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: isostorage
        brick: /gluster_bricks/isostorage/isostorage
        arbiter: 0
      - volname: vmstorage
        brick: /gluster_bricks/vmstorage/vmstorage
        arbiter: 0
=========================================================================
The system returns this error:
PLAY [Setup backend] ***********************************************************
TASK [Gathering Facts] *********************************************************
ok: [virtnodetest-0-2]
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
TASK [Check if valid hostnames are provided] ***********************************
changed: [virtnodetest-0-1] => (item=virtnodetest-0-1)
changed: [virtnodetest-0-1] => (item=virtnodetest-0-0)
changed: [virtnodetest-0-1] => (item=virtnodetest-0-2)
TASK [Check if provided hostnames are valid] ***********************************
ok: [virtnodetest-0-1] => {
"changed": false,
"msg": "All assertions passed"
}
ok: [virtnodetest-0-0] => {
"changed": false,
"msg": "All assertions passed"
}
ok: [virtnodetest-0-2] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [Check if /var/log has enough disk space] *********************************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [Check if the /var is greater than 15G] ***********************************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [Check if disks have logical block size of 512B] **************************
skipping: [virtnodetest-0-1] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
skipping: [virtnodetest-0-0] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
skipping: [virtnodetest-0-2] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
TASK [Check if logical block size is 512 bytes] ********************************
skipping: [virtnodetest-0-1] => (item=Logical Block Size)
skipping: [virtnodetest-0-0] => (item=Logical Block Size)
skipping: [virtnodetest-0-2] => (item=Logical Block Size)
TASK [Get logical block size of VDO devices] ***********************************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [Check if logical block size is 512 bytes for VDO devices] ****************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/firewall_config : Start firewalld if not already started] ***
ok: [virtnodetest-0-2]
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
TASK [gluster.infra/roles/firewall_config : check if required variables are set] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ********
ok: [virtnodetest-0-2] => (item=2049/tcp)
ok: [virtnodetest-0-1] => (item=2049/tcp)
ok: [virtnodetest-0-0] => (item=2049/tcp)
ok: [virtnodetest-0-2] => (item=54321/tcp)
ok: [virtnodetest-0-0] => (item=54321/tcp)
ok: [virtnodetest-0-1] => (item=54321/tcp)
ok: [virtnodetest-0-2] => (item=5900/tcp)
ok: [virtnodetest-0-1] => (item=5900/tcp)
ok: [virtnodetest-0-0] => (item=5900/tcp)
ok: [virtnodetest-0-2] => (item=5900-6923/tcp)
ok: [virtnodetest-0-0] => (item=5900-6923/tcp)
ok: [virtnodetest-0-1] => (item=5900-6923/tcp)
ok: [virtnodetest-0-2] => (item=5666/tcp)
ok: [virtnodetest-0-1] => (item=5666/tcp)
ok: [virtnodetest-0-0] => (item=5666/tcp)
ok: [virtnodetest-0-2] => (item=16514/tcp)
ok: [virtnodetest-0-1] => (item=16514/tcp)
ok: [virtnodetest-0-0] => (item=16514/tcp)
TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] ***
ok: [virtnodetest-0-1] => (item=glusterfs)
ok: [virtnodetest-0-0] => (item=glusterfs)
ok: [virtnodetest-0-2] => (item=glusterfs)
TASK [gluster.infra/roles/backend_setup : Check if vdsm-python package is installed or not] ***
changed: [virtnodetest-0-2]
changed: [virtnodetest-0-1]
changed: [virtnodetest-0-0]
TASK [gluster.infra/roles/backend_setup : Remove the existing LVM filter] ******
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check that the multipath.conf exists] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is enabled if not] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is running] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Create /etc/multipath/conf.d if doesn't exists] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Get the UUID of the devices] *********
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check that the blacklist.conf exists] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Create blacklist template content] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Add wwid to blacklist in blacklist.conf file] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Reload multipathd] *******************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Gather facts to determine the OS distribution] ***
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for debian systems.] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
ok: [virtnodetest-0-2]
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
TASK [gluster.infra/roles/backend_setup : Install python-yaml package for Debian systems] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Initialize vdo_devs array] ***********
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Record VDO devices (if any)] *********
skipping: [virtnodetest-0-1] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
skipping: [virtnodetest-0-0] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
skipping: [virtnodetest-0-2] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
TASK [gluster.infra/roles/backend_setup : Configure lvm thinpool extend threshold] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Configure lvm thinpool extend percentage] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check if vdo block device exists] ****
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Record for missing devices for phase 2] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : set fact if it will at least install 1 vdo device] ***
TASK [gluster.infra/roles/backend_setup : Install VDO dependencies] ************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : set fact about vdo installed deps] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Enable and start vdo service] ********
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] ******
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Set VDO maxDiscardSize as 16M] *******
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Stop VDO volumes] ********************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Start VDO volumes] *******************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check if valid disktype is provided] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] ******
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] ******
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : include_tasks] ***********************
included: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml for virtnodetest-0-1, virtnodetest-0-0, virtnodetest-0-2
TASK [gluster.infra/roles/backend_setup : Group devices by volume group name, including existing devices] ***
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check if vg block device exists] *****
changed: [virtnodetest-0-0] => (item={u'key': u'gluster_vg_sdb', u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}]})
changed: [virtnodetest-0-1] => (item={u'key': u'gluster_vg_sdb', u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}]})
changed: [virtnodetest-0-2] => (item={u'key': u'gluster_vg_sdb', u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}]})
TASK [gluster.infra/roles/backend_setup : Filter none-existing devices] ********
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Record for missing devices for phase 2] ***
ok: [virtnodetest-0-1] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 12:18:33.575598', u'stderr': u'', u'stdout': u'0', u'changed': True, u'failed': False, u'delta': u'0:00:00.009901', u'cmd': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'item': {u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}], u'key': u'gluster_vg_sdb'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'0'], u'start': u'2021-05-13 12:18:33.565697'})
ok: [virtnodetest-0-0] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 10:52:56.886693', u'stderr': u'', u'stdout': u'0', u'changed': True, u'failed': False, u'delta': u'0:00:00.008123', u'cmd': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'item': {u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}], u'key': u'gluster_vg_sdb'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'0'], u'start': u'2021-05-13 10:52:56.878570'})
ok: [virtnodetest-0-2] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 12:25:24.420710', u'stderr': u'', u'stdout': u'0', u'changed': True, u'failed': False, u'delta': u'0:00:00.007307', u'cmd': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'item': {u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}], u'key': u'gluster_vg_sdb'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'0'], u'start': u'2021-05-13 12:25:24.413403'})
TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
skipping: [virtnodetest-0-1] => (item={u'key': u'gluster_vg_sdb', u'value': []})
skipping: [virtnodetest-0-0] => (item={u'key': u'gluster_vg_sdb', u'value': []})
skipping: [virtnodetest-0-2] => (item={u'key': u'gluster_vg_sdb', u'value': []})
TASK [gluster.infra/roles/backend_setup : update LVM fact's] *******************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check if thick-lv block devices exists] ***
changed: [virtnodetest-0-0] => (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'})
changed: [virtnodetest-0-1] => (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'})
changed: [virtnodetest-0-2] => (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'})
TASK [gluster.infra/roles/backend_setup : Record for missing devices for phase 2] ***
skipping: [virtnodetest-0-1] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 12:18:37.528159', u'stderr': u'', u'stdout': u'1', u'changed': True, u'failed': False, u'delta': u'0:00:00.010032', u'cmd': u' echo "1" \n', u'item': {u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' echo "1" \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'1'], u'start': u'2021-05-13 12:18:37.518127'})
skipping: [virtnodetest-0-0] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 10:53:00.863436', u'stderr': u'', u'stdout': u'1', u'changed': True, u'failed': False, u'delta': u'0:00:00.007459', u'cmd': u' echo "1" \n', u'item': {u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' echo "1" \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'1'], u'start': u'2021-05-13 10:53:00.855977'})
skipping: [virtnodetest-0-2] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 12:25:28.261106', u'stderr': u'', u'stdout': u'1', u'changed': True, u'failed': False, u'delta': u'0:00:00.007818', u'cmd': u' echo "1" \n', u'item': {u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' echo "1" \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'1'], u'start': u'2021-05-13 12:25:28.253288'})
TASK [gluster.infra/roles/backend_setup : include_tasks] ***********************
included: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml for virtnodetest-0-1, virtnodetest-0-0, virtnodetest-0-2
TASK [gluster.infra/roles/backend_setup : Group devices by volume group name, including existing devices] ***
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check if vg block device exists] *****
TASK [gluster.infra/roles/backend_setup : Filter none-existing devices] ********
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Make sure thick pvs exists in volume group] ***
TASK [gluster.infra/roles/backend_setup : update LVM fact's] *******************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Create thick logical volume] *********
failed: [virtnodetest-0-1] (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " WARNING: Device for PV gx6iUE-369Z-3FDP-aRUQ-Wur0-1Xhf-v4g79j not found or rejected by a filter.\n Volume group \"gluster_vg_sdb\" not found\n Cannot process volume group gluster_vg_sdb\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "150G", "vgname": "gluster_vg_sdb"}, "msg": "Volume group gluster_vg_sdb does not exist.", "rc": 5}
failed: [virtnodetest-0-0] (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_sdb\" not found\n Cannot process volume group gluster_vg_sdb\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "150G", "vgname": "gluster_vg_sdb"}, "msg": "Volume group gluster_vg_sdb does not exist.", "rc": 5}
failed: [virtnodetest-0-2] (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_sdb\" not found\n Cannot process volume group gluster_vg_sdb\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "150G", "vgname": "gluster_vg_sdb"}, "msg": "Volume group gluster_vg_sdb does not exist.", "rc": 5}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
virtnodetest-0-0 : ok=19 changed=3 unreachable=0 failed=1 skipped=41 rescued=0 ignored=0
virtnodetest-0-1 : ok=20 changed=4 unreachable=0 failed=1 skipped=41 rescued=0 ignored=0
virtnodetest-0-2 : ok=19 changed=3 unreachable=0 failed=1 skipped=41 rescued=0 ignored=0
Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for more informations.
======================================================================
How can we resolve this issue?
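One clue in the output above: the "Record for missing devices" task captured stdout '0' from its `test -b /dev/sdb` probe on all three hosts, meaning no block device named /dev/sdb is visible there, so the "Create volume groups" task was skipped and the later lvcreate fails with "Volume group gluster_vg_sdb does not exist". A minimal per-host sanity check, mirroring the playbook's own probe (a sketch; `is_block_dev` is a hypothetical helper, and /dev/sdb being the intended data disk is an assumption):

```shell
# Prints 1 if the path is a block device, 0 otherwise --
# the same `test -b` check the deployment ran and got "0" from.
is_block_dev() {
    test -b "$1" && echo 1 || echo 0
}

is_block_dev /dev/sdb
# If this prints 0 on a host, the disk is not visible under that name;
# compare with `lsblk -o NAME,SIZE,TYPE` to find the real device name
# (e.g. nvme0n1, vdb) and adjust pvname in the deployment config.
```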
3 years, 10 months
Creating Snapshots failed
by jb
Hello Community,
since I upgraded our cluster to oVirt 4.4.6.8-1.el8 I'm no longer able
to create snapshots on certain VMs. For example, I have two Debian 10
VMs: I can create a snapshot of one, but not of the other.
Both are up to date and use the same qemu-guest-agent version.
I tried creating snapshots via the API and the web GUI; both give the
same result.
In the attachment you'll find a snippet from the engine.log.
Any help would be wonderful!
Regards,
Jonathan
3 years, 10 months
After upgrade only 1/3 hosts is running Node 4.4.6
by Jayme
I updated my three-server HCI cluster from 4.4.5 to 4.4.6. All hosts
updated successfully, rebooted, and are active. However, only one of the
three hosts is actually running oVirt Node 4.4.6; the other two are still
running 4.4.5. If I check for upgrades in admin it shows no upgrades
available.
Why are two hosts still running 4.4.5 after being successfully
upgraded/rebooted, and how can I get them onto 4.4.6 if no upgrades are
being found?
3 years, 10 months
[ANN] oVirt 4.4.7 First Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.7 First Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.7
First Release Candidate for testing, as of May 27th, 2021.
This update is the seventh in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps, if already performed while upgrading from 4.4.1 to 4.4.2 GA. These
are only required to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enter emergency mode after upgrading to latest build
If you have your root file system on a multipath device on your hosts,
be aware that after upgrading from 4.4.1 to 4.4.7 your hosts may enter
emergency mode.
In order to prevent this be sure to upgrade oVirt Engine first, then on
your hosts:
1. Remove the current LVM filter while still on 4.4.1, or in emergency mode
   (if rebooted).
2. Reboot.
3. Upgrade to 4.4.7 (redeploy in case of already being on 4.4.7).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
   place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to
   rebuild initramfs with the correct filter configuration.
6. Reboot.
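The per-host sequence above can be sketched as a shell runbook (only the commands named in the steps; the filter removal in step 1 is a manual edit, and the exact upgrade command depends on your setup):

```shell
# Step 1 (manual): remove the existing LVM filter from /etc/lvm/lvm.conf
# while still on 4.4.1, or from emergency mode if already rebooted.

reboot                            # step 2

# Step 3: upgrade the host to 4.4.7 through the engine
# (redeploy if the host is already on 4.4.7).

vdsm-tool config-lvm-filter       # step 4: confirm a new filter is in place

# Step 5, only on hosts that are NOT oVirt Node:
dracut --force --add multipath    # rebuild initramfs with the new filter

reboot                            # step 6
```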
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
  on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
  the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
  <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
  <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.4 or similar
* CentOS Stream 8
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.4 or similar
* CentOS Stream 8
* oVirt Node 4.4 based on CentOS Stream 8 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available based on CentOS Stream 8
- oVirt Node NG is already available based on CentOS Stream 8
Additional Resources:
* Read more about the oVirt 4.4.7 release highlights:
http://www.ovirt.org/release/4.4.7/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.7/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
3 years, 10 months
Unable to migrate VMs
by Jayme
I have a three node oVirt 4.4.5 cluster running oVirt node hosts. Storage
is mix of GlusterFS and NFS. Everything has been running smoothly, but the
other day I noticed many VMs had invalid snapshots. I run a script to
export OVA for VMs for backup purposes, exports seemed to have been fine
but snapshots failed to delete at the end. I was able to manually delete
the snapshots through oVirt admin GUI without any errors/warnings and the
VMs have been running fine and can restart them without problems.
I thought this problem may be due to snapshot bug which is supposedly fixed
in oVirt 4.4.6. I decided to start upgrading cluster to 4.4.6 and am now
having a problem with VMs not being able to migrate.
When I migrate any VM (it doesn't seem to matter which hosts it is moving
to and from), the process starts but stalls at 0-1%. Eventually, after
15-30 minutes or more, the tasks are all completed but the VM is not
migrated.
I am unable to migrate any VMs and as such I cannot place any host in
maintenance mode.
I'm attaching some VDSM logs from the source and destination hosts; these
were captured after initiating a migration of a single VM.
I'm seeing some errors in the logs regarding the migration stalling, but
I'm not able to determine why it's stalling.
2021-05-27 17:10:22,167+0000 INFO (jsonrpc/4) [api.host] FINISH
getAllVmIoTunePolicies return={'status': {'code': 0, 'message': 'Done'},
'io_tune_policies_dict': {'f8f4e4a1-b565-4663-8962-c8804dbb86fb':
{'policy': [], 'current_values': [{'name': 'vda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme1n1/bce04425-1d25-4489-bdab-2834a1a57db8/images/38b27cce-c744-4a12-85a3-3af07d386da2/93c1e793-f8cb-42c9-86a6-0e9ce4a6023a',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'2b87204f-f695-474a-9f08-47b85fcac366': {'policy': [], 'current_values':
[{'name': 'sda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/f2e0c9f3-ab0d-441a-85a6-07a42e78b5a8/848f353e-6787-4e20-ab7b-0541ebd852c6',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'26332421-54a3-4afc-90e7-551a7e314c80': {'policy': [], 'current_values':
[{'name': 'vda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/b7a785f9-307b-42af-9bbe-23cac884fe97/ed1d027e-a36a-4e6b-9207-119915044e06',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'60edbd80-dad7-4bf8-8fd1-e138413cf9f6': {'policy': [], 'current_values':
[{'name': 'sda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/535fcb2e-ece9-4d50-86fe-bf6264d11ae1/6c01a036-8a14-46ba-a4b4-fe4f66a586a3',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
'sdb', 'path': '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/1f467fb5-5ea7-42ba-bace-f175c86791b2/cbe8327f-9b7f-442f-a650-6888bb11a674',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
'sdd', 'path': '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/c93956d5-c88d-41f9-8c38-9f5f62cc90dd/3920b46c-5fab-4b63-b47f-2fa5c6714c36',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'beeefe06-78a0-4e14-a932-cc8d734d542d': {'policy': [], 'current_values':
[{'name': 'sda', 'path':
'/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d15d/images/310d8b3e-d578-418d-9802-dc0ebcea06d6/aa758c51-8478-4273-aeef-d4b374b8d6b4',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
'sdb', 'path':
'/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d15d/images/4072fda1-ec82-45c9-b353-91fceb13bf08/891f5982-dead-48b4-8907-caa1e309fa82',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'7e5156de-649d-4904-9092-21a699242a37': {'policy': [], 'current_values':
[{'name': 'vda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/ca0c1208-a7aa-4ef6-a450-4a40bd4455f3/a2335199-ddd4-429b-b55d-f4d527081fd3',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]}}}
from=::1,35012 (api:54)
2021-05-27 17:10:31,118+0000 WARN (migmon/7e5156de) [virt.vm]
(vmId='7e5156de-649d-4904-9092-21a699242a37') Migration stalling: remaining
(32863MiB) > lowmark (32863MiB). (migration:801)
2021-05-27 17:10:31,118+0000 INFO (migmon/7e5156de) [virt.vm]
(vmId='7e5156de-649d-4904-9092-21a699242a37') Migration Progress: 190.035
seconds elapsed, 1% of data processed, total data: 32864MB, processed data:
0MB, remaining data: 32863MB, transfer speed 0Mbps, zero pages: 160MB,
compressed: 0MB, dirty rate: 0, memory iteration: 1 (migration:814)
2021-05-27 17:10:33,827+0000 INFO (jsonrpc/5) [throttled] Current
getAllVmStats: {'f8f4e4a1-b565-4663-8962-c8804dbb86fb': 'Up',
'2b87204f-f695-474a-9f08-47b85fcac366': 'Up',
'26332421-54a3-4afc-90e7-551a7e314c80': 'Up',
'60edbd80-dad7-4bf8-8fd1-e138413cf9f6': 'Up',
'beeefe06-78a0-4e14-a932-cc8d734d542d': 'Up',
'7e5156de-649d-4904-9092-21a699242a37': 'Migration Source'}
(throttledlog:104)
2021-05-27 17:10:37,186+0000 INFO (jsonrpc/5) [api.host] FINISH
getAllVmIoTunePolicies return={'status': {'code': 0, 'message': 'Done'},
'io_tune_policies_dict': {'f8f4e4a1-b565-4663-8962-c8804dbb86fb':
{'policy': [], 'current_values': [{'name': 'vda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme1n1/bce04425-1d25-4489-bdab-2834a1a57db8/images/38b27cce-c744-4a12-85a3-3af07d386da2/93c1e793-f8cb-42c9-86a6-0e9ce4a6023a',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'2b87204f-f695-474a-9f08-47b85fcac366': {'policy': [], 'current_values':
[{'name': 'sda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/f2e0c9f3-ab0d-441a-85a6-07a42e78b5a8/848f353e-6787-4e20-ab7b-0541ebd852c6',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'26332421-54a3-4afc-90e7-551a7e314c80': {'policy': [], 'current_values':
[{'name': 'vda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/b7a785f9-307b-42af-9bbe-23cac884fe97/ed1d027e-a36a-4e6b-9207-119915044e06',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'60edbd80-dad7-4bf8-8fd1-e138413cf9f6': {'policy': [], 'current_values':
[{'name': 'sda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/535fcb2e-ece9-4d50-86fe-bf6264d11ae1/6c01a036-8a14-46ba-a4b4-fe4f66a586a3',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
'sdb', 'path': '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/1f467fb5-5ea7-42ba-bace-f175c86791b2/cbe8327f-9b7f-442f-a650-6888bb11a674',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
'sdd', 'path': '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/c93956d5-c88d-41f9-8c38-9f5f62cc90dd/3920b46c-5fab-4b63-b47f-2fa5c6714c36',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'beeefe06-78a0-4e14-a932-cc8d734d542d': {'policy': [], 'current_values':
[{'name': 'sda', 'path':
'/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d15d/images/310d8b3e-d578-418d-9802-dc0ebcea06d6/aa758c51-8478-4273-aeef-d4b374b8d6b4',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
'sdb', 'path':
'/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d15d/images/4072fda1-ec82-45c9-b353-91fceb13bf08/891f5982-dead-48b4-8907-caa1e309fa82',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'7e5156de-649d-4904-9092-21a699242a37': {'policy': [], 'current_values':
[{'name': 'vda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/ca0c1208-a7aa-4ef6-a450-4a40bd4455f3/a2335199-ddd4-429b-b55d-f4d527081fd3',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]}}}
from=::1,35012 (api:54)
2021-05-27 17:10:41,120+0000 WARN (migmon/7e5156de) [virt.vm]
(vmId='7e5156de-649d-4904-9092-21a699242a37') Migration stalling: remaining
(32863MiB) > lowmark (32863MiB). (migration:801)
2021-05-27 17:10:41,120+0000 INFO (migmon/7e5156de) [virt.vm]
(vmId='7e5156de-649d-4904-9092-21a699242a37') Migration Progress: 200.037
seconds elapsed, 1% of data processed, total data: 32864MB, processed data:
0MB, remaining data: 32863MB, transfer speed 0Mbps, zero pages: 160MB,
compressed: 0MB, dirty rate: 0, memory iteration: 1 (migration:814)
2021-05-27 17:10:51,121+0000 WARN (migmon/7e5156de) [virt.vm]
(vmId='7e5156de-649d-4904-9092-21a699242a37') Migration stalling: remaining
(32863MiB) > lowmark (32863MiB). (migration:801)
2021-05-27 17:10:51,121+0000 INFO (migmon/7e5156de) [virt.vm]
(vmId='7e5156de-649d-4904-9092-21a699242a37') Migration Progress: 210.039
seconds elapsed, 1% of data processed, total data: 32864MB, processed data:
0MB, remaining data: 32863MB, transfer speed 0Mbps, zero pages: 160MB,
compressed: 0MB, dirty rate: 0, memory iteration: 1 (migration:814)
2021-05-27 17:10:52,211+0000 INFO (jsonrpc/1) [api.host] FINISH
getAllVmIoTunePolicies return={'status': {'code': 0, 'message': 'Done'},
'io_tune_policies_dict': { ... identical to the 17:10:37 getAllVmIoTunePolicies dump above ... }}
from=::1,35012 (api:54)
2021-05-27 17:11:01,123+0000 WARN (migmon/7e5156de) [virt.vm]
(vmId='7e5156de-649d-4904-9092-21a699242a37') Migration stalling: remaining
(32863MiB) > lowmark (32863MiB). (migration:801)
2021-05-27 17:11:01,123+0000 INFO (migmon/7e5156de) [virt.vm]
(vmId='7e5156de-649d-4904-9092-21a699242a37') Migration Progress: 220.041
seconds elapsed, 1% of data processed, total data: 32864MB, processed data:
0MB, remaining data: 32863MB, transfer speed 0Mbps, zero pages: 160MB,
compressed: 0MB, dirty rate: 0, memory iteration: 1 (migration:814)
[...] return={'86245648-abd8-46e3-9c10-432e8788a074': {'code': 0, 'lastCheck':
'1.6', 'delay': '0.00353497', 'valid': True, 'version': 5, 'acquired':
True, 'actual': True}} from=::1,35010,
task_id=c4e65f55-1367-41d3-9bf6-f357a382df4a (api:54)
2021-05-27 17:09:33,156+0000 INFO (jsonrpc/2) [api.host] START getStats()
from=::ffff:10.11.0.219,54952 (api:48)
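The repeated "Migration stalling" warnings in the log come from vdsm's migration monitor, which tracks the lowest "remaining" value seen so far (the lowmark) and flags stalling whenever a new sample fails to beat it. A rough illustrative sketch of that heuristic (names and structure are mine, not vdsm's actual code):

```python
class StallMonitor:
    """Toy model of the lowmark-based stall check seen in the log:
    'Migration stalling: remaining (X) > lowmark (Y)'."""

    def __init__(self):
        self.lowmark = None          # best (lowest) remaining-MiB seen so far
        self.stalled_intervals = 0   # consecutive samples with no progress

    def sample(self, remaining_mib):
        """Feed one monitoring-interval sample; return the stall count."""
        if self.lowmark is None or remaining_mib < self.lowmark:
            self.lowmark = remaining_mib   # forward progress: reset counter
            self.stalled_intervals = 0
        else:
            self.stalled_intervals += 1    # remaining >= lowmark: stalling
        return self.stalled_intervals
```

In the log above, the remaining value stays pinned at 32863MiB sample after sample (transfer speed 0Mbps), so such a counter keeps climbing until the migration is aborted or convergence measures kick in.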
Ovirt 4.4.6.8-1 upload ova as template to multiple hosts
by Don Dupuis
I have a single oVirt manager and 2 oVirt hosts, each with a local storage
domain. I am using the upload_ova_as_template.py script, and my template
upload works on a single host but not on both. If I use the GUI method,
there is the option of "clone" along with putting in a new name for the
template. This seems to work most of the time, but it has also failed a
couple of times. I would like to add the same "clone" and "name" options to
upload_ova_as_template.py. What is the best way to do this, since I need
unique UUIDs for the template disks? This is a unique setup in that I can't
use shared storage, and this should be doable, as I was able to do it in
the oVirt GUI.
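For the unique-UUID part of the question, one blunt approach (a sketch only; this is not part of upload_ova_as_template.py, and the helper name is mine) is to rewrite every UUID-looking string in the OVA's OVF descriptor with freshly generated ones before each upload, applying the same old->new mapping everywhere so the descriptor's internal disk/file references stay consistent:

```python
import re
import uuid

def regenerate_disk_ids(ovf_xml):
    """Replace every UUID-shaped string in an OVF descriptor with a fresh
    uuid4, consistently (each old id always maps to the same new id).
    Returns (rewritten_xml, old_to_new_mapping).  Note this is deliberately
    blunt: it rewrites ALL UUIDs, not just ovf:diskId attributes."""
    uuid_re = re.compile(
        r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}')
    mapping = {}

    def swap(match):
        old = match.group(0)
        if old not in mapping:
            mapping[old] = str(uuid.uuid4())
        return mapping[old]

    return uuid_re.sub(swap, ovf_xml), mapping
```

Running this over the extracted OVF (and renaming the disk payloads to match) before each host's upload would give every template copy distinct disk UUIDs while keeping file references intact; the new template name could then be passed separately when the template is added.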
Thanks
Don