oVirt 4.6 OS versions
by d03@bornfree.org
Has it been decided yet which OS and version will be used for the oVirt 4.6 Hosted Engine and for oVirt Node 4.6?
11 months, 1 week
Fresh install of oVirt Node 4.5.5
by nowak.pawel.mail@gmail.com
Hi,
I did a fresh install of oVirt Node and I can't log in to the web panel using the root account (bad user or password). SSH login works correctly. What could be the problem?
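A few checks that may help (only a sketch; it assumes the web panel is the Cockpit console on port 9090, which is what oVirt Node ships, and that your Cockpit build honours /etc/cockpit/disallowed-users):
# systemctl status cockpit.socket
# ss -tlnp | grep 9090
# cat /etc/cockpit/disallowed-users    (newer Cockpit builds refuse direct root logins if root is listed here)
# passwd -S root                       (confirms the root account is unlocked and has a valid password)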
11 months, 1 week
oVirt 4.4.10 - Host deployment failure due to repository timeout
by tflau@polyu.edu.hk
Hi,
I ran into a host deployment issue on oVirt 4.4.10 this morning.
Reviewing the logs, it seems the repository took longer than the default 30 seconds to fetch the GPG key from "https://ftp.yz.yamagata-u.ac.jp/pub/".
Is there a way to use another repository to work around the issue?
OS Version: Rocky Linux 8.9
oVirt Version: 4.4.10
Error message from the host deployment log:
2023-12-19 09:29:10 HKT - TASK [ovirt-host-deploy-vdsm : Install ovirt-hosted-engine-setup package] ******
2023-12-19 09:31:13 HKT - An exception occurred during task execution. To see the full traceback, use -vvv. The error was: OSError: Curl error (28): Timeout was reached for https://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora-projects/epel/RPM-GPG-KEY-EPEL-8 [Operation timed out after 30000 milliseconds with 0 out of 0 bytes received]
fatal: [10.13.5.3]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/dnf/util.py\", line 115, in _urlopen\n repo._repo.downloadUrl(url, fo.fileno())\n File \"/usr/lib64/python3.6/site-packages/libdnf/repo.py\", line 499, in downloadUrl\n return _repo.Repo_downloadUrl(self, url, fd)\nRuntimeError: Curl error (28): Timeout was reached for https://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora-projects/epel/RPM-GPG-KEY-EPEL-8 [Operation timed out after 30000 milliseconds with 0 out of 0 bytes received]\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"<stdin>\", line 102, in <module>\n File \"<stdin>\", line 94, in _ansiballz_main\n File \"<stdin>\", line 40, in invoke_module\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_dnf_payload_nsszl6ze/ansible_dnf_payload.zip/ansible/modules/packaging/os/dnf.py\", line 1370, in <module>\n File \"/tmp/ansible_dnf_payload_nsszl6ze/ansible_dnf_payload.zip/ansible/modules/packaging/os/dnf.py\", line 1359, in main\n File \"/tmp/ansible_dnf_payload_nsszl6ze/ansible_dnf_payload.zip/ansible/modules/packaging/os/dnf.py\", line 1338, in run\n File \"/tmp/ansible_dnf_payload_nsszl6ze/ansible_dnf_payload.zip/ansible/modules/packaging/os/dnf.py\", line 1242, in ensure\n File \"/usr/lib/python3.6/site-packages/dnf/base.py\", line 2494, in _get_key_for_package\n keys = dnf.crypto.retrieve(keyurl, repo)\n File \"/usr/lib/python3.6/site-packages/dnf/crypto.py\", line 185, in retrieve\n with dnf.util._urlopen(keyurl, repo=repo) as handle:\n File \"/usr/lib/python3.6/site-packages/dnf/util.py\", line 119, in _urlopen\n raise IOError(str(e))\nOSError: Curl error (28): Timeout was reached for https://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora-projects/epel/RPM-GPG-KEY-EPEL-8 [Operation timed out after 30000 milliseconds with 0 out of 0 bytes received]\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
Time required, based on the curl result below: over 60 seconds.
yum.repos.d]# curl -vvv --trace-time --connect-timeout 120 https://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora-projects/epel/RPM-GPG-KE...
11:15:13.842041 * Trying 133.24.248.18...
11:15:13.842143 * TCP_NODELAY set
11:15:13.906252 * Connected to ftp.yz.yamagata-u.ac.jp (133.24.248.18) port 443 (#0)
11:15:13.907121 * ALPN, offering h2
11:15:13.907139 * ALPN, offering http/1.1
11:15:13.911221 * successfully set certificate verify locations:
11:15:13.911246 * CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
11:15:13.911353 * TLSv1.3 (OUT), TLS handshake, Client hello (1):
11:16:20.151979 * TLSv1.3 (IN), TLS handshake, Server hello (2):
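A possible workaround, only as a sketch (the repo id below is an assumption, check the real one with 'dnf repolist' on the host; the URLs are the upstream EPEL defaults): point the EPEL repo at the primary EPEL server instead of the slow mirror, or import the GPG key manually so dnf no longer needs to download it during deployment:
# dnf repolist
# dnf config-manager --save --setopt=ovirt-4.4-epel.baseurl=https://dl.fedoraproject.org/pub/epel/8/Everything/x86_64/ --setopt=ovirt-4.4-epel.gpgkey=https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-8
# rpm --import https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-8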
11 months, 1 week
Can't upgrade my oVirt Node 4.5.0.2 to the latest version 4.5.5.x
by Ivan Pashchuk
Dear all,
I have already updated the oVirt Engine to version 4.5.5-1.el8 and am now trying to update the oVirt Node.
Removing and re-adding the yum repositories on the node host did not help.
Rebooting the host did not help either.
Checking for updates on the host from the Engine says 'no updates found'.
Can anyone help?
# rpm -qa ovirt*
ovirt-hosted-engine-setup-2.6.3-1.el8.noarch
ovirt-imageio-daemon-2.4.3-1.el8.x86_64
ovirt-imageio-common-2.4.3-1.el8.x86_64
ovirt-openvswitch-2.15-3.el8.noarch
ovirt-openvswitch-ipsec-2.15-3.el8.noarch
ovirt-openvswitch-ovn-common-2.15-3.el8.noarch
ovirt-openvswitch-ovn-host-2.15-3.el8.noarch
ovirt-provider-ovn-driver-1.2.36-1.el8.noarch
ovirt-host-dependencies-4.5.0-3.el8.x86_64
ovirt-release-host-node-4.5.0.2-1.el8.x86_64
ovirt-vmconsole-1.0.9-1.el8.noarch
ovirt-node-ng-nodectl-4.4.2-1.el8.noarch
ovirt-openvswitch-ovn-2.15-3.el8.noarch
ovirt-hosted-engine-ha-2.5.0-1.el8.noarch
ovirt-host-4.5.0-3.el8.x86_64
ovirt-vmconsole-host-1.0.9-1.el8.noarch
ovirt-python-openvswitch-2.15-3.el8.noarch
ovirt-ansible-collection-2.0.3-1.el8.noarch
ovirt-imageio-client-2.4.3-1.el8.x86_64
# cat /usr/lib/os.release.d/ovirt-release-host-node
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8.7.2205.0"
VARIANT="oVirt Node 4.5.0.2"
VARIANT_ID="ovirt-node"
PRETTY_NAME="oVirt Node 4.5.0"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.ovirt.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
PLATFORM_ID="platform:el8"
# yum repolist
repo id repo name
centos-ovirt45 CentOS Stream 8 - oVirt 4.5
cs8-extras CentOS Stream 8 - Extras
cs8-extras-common CentOS Stream 8 - Extras common packages
onn-appstream oVirt Node Optional packages from CentOS Stream 8 - AppStream
onn-baseos oVirt Node Optional packages from CentOS Stream 8 - BaseOS
ovirt-45-centos-stream-openstack-yoga CentOS Stream 8 - oVirt 4.5 - OpenStack Yoga Repository
ovirt-45-upstream oVirt upstream for CentOS Stream 8 - oVirt 4.5
powertools CentOS Stream 8 - PowerTools
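In case it helps, a minimal check (just a sketch; oVirt Node upgrades are normally delivered as the single ovirt-node-ng-image-update package rather than individual RPM updates, so that is the package worth looking for):
# dnf clean all
# dnf --refresh search ovirt-node-ng-image-update
# dnf install ovirt-node-ng-image-update    (if a 4.5.5 build shows up)
# nodectl info                              (after reboot, to confirm the new image layer)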
11 months, 1 week
Help: hosted-engine lost data and how to recover VM config to deploy a new cluster
by Pandey, Shreyas
Hi ovirt team,
There are a couple of questions I am struggling to get answers for.
We have an oVirt cluster set up on two servers.
1)
The cluster went down, and during troubleshooting we noticed that the hosted engine is not able to restart.
[root@j3sv7sr01ctr01 ~]# hosted-engine --vm-status
The hosted engine configuration has not been retrieved from shared storage yet,
please ensure that ovirt-ha-agent service is running.
[root@j3sv7sr01ctr01 ~]#
Both ovirt-ha-agent and ovirt-ha-broker are failing because of a storage-related issue.
The GlusterFS volume used by the hosted engine no longer contains the hosted-engine configuration.
[root@j3sv7sr01ctr01 ~]# cd /rhev/data-center/mnt/glusterSD/
[root@j3sv7sr01ctr01 glusterSD]# ls
10.52.60.131:_j3sv7sr01datastore3
[root@j3sv7sr01ctr01 glusterSD]# cd 10.52.60.131\:_j3sv7sr01datastore3/
[root@j3sv7sr01ctr01 10.52.60.131:_j3sv7sr01datastore3]#
[root@j3sv7sr01ctr01 10.52.60.131:_j3sv7sr01datastore3]# ls -al
total 1
drwxr-xr-x 4 vdsm kvm 95 Dec 12 20:26 .
drwxr-xr-x 3 vdsm kvm 47 Dec 13 19:16 ..
[root@j3sv7sr01ctr01 10.52.60.131:_j3sv7sr01datastore3]#
Also, we don't have snapshots of the GlusterFS volume, so it looks like we can't recover the hosted-engine data now.
Is there any way to recover the cluster from this state?
We still have the metadata of the VMs, as shown below:
[root@j3sv7sr01stg01 01a2b8d8-e360-41cc-beea-4080d48f436a]# pwd
/mnt/datastore1/vms/6f2c0622-fa3b-48f4-b412-9bd6f20892cb/images/01a2b8d8-e360-41cc-beea-4080d48f436a
[root@j3sv7sr01stg01 01a2b8d8-e360-41cc-beea-4080d48f436a]#
[root@j3sv7sr01stg01 01a2b8d8-e360-41cc-beea-4080d48f436a]# cd ..
[root@j3sv7sr01stg01 images]# cd ..
[root@j3sv7sr01stg01 6f2c0622-fa3b-48f4-b412-9bd6f20892cb]# ls
dom_md images
[root@j3sv7sr01stg01 6f2c0622-fa3b-48f4-b412-9bd6f20892cb]#
[root@j3sv7sr01stg01 6f2c0622-fa3b-48f4-b412-9bd6f20892cb]# ls images/
01a2b8d8-e360-41cc-beea-4080d48f436a 39edc9c3-0f6a-4dc4-b0c9-8279c0d2301f 6841ade8-515b-438e-a464-87ee269b22aa b76b6031-a2ba-482b-a367-620521be9b9b e61b82ae-dae5-4131-b5f1-68050247ac11
05fc7f51-f217-419f-8dcd-781f363c6ec3 3bcf4941-9352-489d-b1cd-e81f6bac08e5 688f08fb-ec0b-4d4a-ba5f-aeb1ebea3c37 b8b48d39-6891-46a2-866c-dcfbd78d02a8 e6b473f9-e18d-49bd-af60-269baa6801ad
085fbfd5-523f-483c-b65f-b50fcdec4883 3c2c19a8-2c01-4487-b8c8-bf0e700f3a52 6ac1b668-6e17-48f9-b334-e271bfcb7788 b8e4bf92-1b68-4452-a2f5-244543a64467 ec9e26ab-fdc4-434c-bbf3-2959f3c1776c
08e0efb2-82de-4727-8527-dcdd134a75ef 3ebd222a-7f5e-4e27-bcb3-8fcdd3a2cfca 6f4ffa95-9855-46d1-b38f-c6fb90e9c92e bc7d3fd2-83a5-4c64-a68e-b50750ea1bee ef1d7d7d-ca6a-43c1-9f7c-3e52e1153842
0cba2187-6d43-4aa4-af8d-1c917aece6bb 401c7293-4f7f-4c47-b6c8-ff0a6f94fb0e 7a44c0a0-f202-423f-b390-13907fcf333e bcc9bc93-e7e3-4f6c-ae4c-f1911cfbebaf efb1b965-5e04-4537-954f-b5a70b95275b
18d527e7-822c-4faf-91d4-0be5940d3663 455dbdc1-2990-4474-ab1f-17767770bcb1 7d89ceb1-13b6-4646-8e1e-70e10a970b5f c12542d9-f9a9-417e-942e-d2b06a44d8d4 efc1d529-5fad-406d-baa5-100d187f8033
18ff181a-f620-49d8-a18d-92fb7c21e2d1 470e1c0e-f8f1-4477-9059-3220c68bad48 846bc7f5-6380-46ce-b31d-470bf2d10054 c27aeb05-e44e-447f-9563-5a8398eb73fc efd940ae-511a-4068-8c6c-97aaafbddcf4
1a2384e4-b850-4dae-a191-bb165c2833d9 48b7f23b-9041-4bd7-a4d9-6894117636e2 8cdf9897-5777-487c-967c-f2505f22755c c4874372-08e0-4ff3-9c5d-0404bdc7d194 f212e13e-3871-4c4a-8d15-450507202518
1a38eb80-6e21-4848-8518-943ac5625caf 4f709682-9159-40fa-b4c2-3080b26b72c3 903f9c51-9788-4a7d-b336-52a6fe4cc3ac c7ac51e0-4575-4d79-9cd9-247752df45a7 f4110154-4926-4019-8412-d76021aeb841
1d753f80-6811-4671-a807-865b7a04e11f 52634d4a-88f7-48e8-99c1-18e574b3cd23 9df26dbf-0370-40d4-badd-c0ad88cf96be c88fa564-2284-46ba-9784-5b66eac420be f6f910a9-0812-4fb1-8b70-bed869dcf580
21a99266-1c78-4271-a2ba-c65ad10cf26a 55356333-996e-4f13-86b3-ed064ec58ff7 a2b6ae87-71f9-41f7-97e8-90bb16e60517 cb354e40-5b5a-494f-a22d-b0a37fec09b3 fa7d2900-812f-4f9d-8f05-09a9321880d2
2330dbe1-abeb-4a0b-a4bd-e7e5fae68be2 57b097ed-9896-4a2c-a7a7-8ff387388752 a4b980d8-ff6a-4e10-a8ee-d4cfc9a4765a cb94ec8c-b29f-41cd-8b74-77687a69e75b fc63ae3e-35fb-4360-9c83-2089ec5d81c5
280fe3d4-dd16-49c7-ae7f-db8f70143f07 5bc471ba-52c9-4bbd-a1da-d2375e42b6bb a8afa44f-cf4f-4cc4-9fbb-5ce1b6be4bd5 d26d5671-6f6c-4b9b-81bc-d5a7e28eab0e ff0447dc-e950-4419-800b-495af75a5c65
31e2b9a1-946e-400b-be62-3c78575b23bc 5bf990c6-8898-41a9-bcc8-61bc81c872f9 a9fa7d64-70b3-46ce-9682-a976d86380fb da0c9b8e-66de-4283-85b4-f639205dd76a
355d27a1-ecc7-4e5d-bfe5-16d0e83d7df6 5c02176c-7ac1-4067-8cef-2be322798519 b171c219-0bb7-4619-aa45-886583f1dc5f dafdfde9-aa69-46f6-9f6d-9bf4ca7d5c94
39061360-c4d7-4852-a946-dee6a279039b 61dc998c-3ad9-404f-8103-82e66612e31d b578ddb5-da3b-4e73-9630-5ae605b92ee0 e1eb62b4-54a9-47a8-b2b2-bf8e3612f74c
39c0f701-574c-4795-91a4-60bfac700bcc 67f13533-2377-401b-8584-b0c2d70bad7c b6500cfa-d578-437c-98bb-602cd843228d e2fe200c-efa8-470c-a5cb-7a2ed6b50afa
[root@j3sv7sr01stg01 6f2c0622-fa3b-48f4-b412-9bd6f20892cb]#
[root@j3sv7sr01stg01 6f2c0622-fa3b-48f4-b412-9bd6f20892cb]# ls images/01a2b8d8-e360-41cc-beea-4080d48f436a/
e45d87e2-6fdb-41ab-9f1f-ac113db71ba5 e45d87e2-6fdb-41ab-9f1f-ac113db71ba5.meta f7529d80-f20b-46a8-bac4-5827afe2a648.lease
e45d87e2-6fdb-41ab-9f1f-ac113db71ba5.lease f7529d80-f20b-46a8-bac4-5827afe2a648 f7529d80-f20b-46a8-bac4-5827afe2a648.meta
[root@j3sv7sr01stg01 6f2c0622-fa3b-48f4-b412-9bd6f20892cb]#
2)
If we consider redeploying a new cluster, can we use the existing datastore, which holds the metadata of the VMs, to bring the VMs back to their pre-outage state?
What other options do we have here?
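One recovery path that may be worth evaluating, strictly as a sketch (the volume name below is a placeholder, and the portal steps are the standard storage-domain import flow): redeploy a new hosted engine on clean storage, then import the surviving data domain and register its VMs.
# gluster volume status <data-volume>    (placeholder name; confirm the volume backing the surviving datastore is healthy)
# hosted-engine --deploy                 (onto new/clean hosted-engine storage)
Then, in the new Administration Portal: Storage > Domains > Import Domain (GlusterFS), attach the domain to the data center, and use its "VM Import" tab to register the existing VMs from their metadata.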
Any help would be greatly appreciated.
Thanks!
11 months, 1 week
Failed to migrate VM to Host ovirt3.XXX.cz due to an Error: Fatal error during migration. Trying to migrate to another Host.
by Jirka Simon
Hello there,
after today's update I have a problem with live migration to this host,
with the message:
2023-12-14 10:00:01,089+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [67218183] VM '77f85710-45e7-43ca-b0f4-69f87766cc43'(ca1.access.prod.hq.sldev.cz) was unexpectedly detected as 'Down' on VDS '044b7175-ca36-49b2-b01b-0253f9af7e4f'(ovirt3.corp.sldev.cz) (expected on '858b8951-9b5a-4b8f-994e-4e11788c34d6')
2023-12-14 10:00:01,090+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [67218183] START, DestroyVDSCommand(HostName = ovirt3.corp.sldev.cz, DestroyVmVDSCommandParameters:{hostId='044b7175-ca36-49b2-b01b-0253f9af7e4f', vmId='77f85710-45e7-43ca-b0f4-69f87766cc43', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 696e7f0e
2023-12-14 10:00:01,336+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [67218183] FINISH, DestroyVDSCommand, return: , log id: 696e7f0e
2023-12-14 10:00:01,337+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [67218183] VM '77f85710-45e7-43ca-b0f4-69f87766cc43'(ca1.access.prod.hq.sldev.cz) was unexpectedly detected as 'Down' on VDS '044b7175-ca36-49b2-b01b-0253f9af7e4f'(ovirt3.corp.sldev.cz) (expected on '858b8951-9b5a-4b8f-994e-4e11788c34d6')
2023-12-14 10:00:01,337+01 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [67218183] Migration of VM 'ca1.access.prod.hq.sldev.cz' to host 'ovirt3.corp.sldev.cz' failed: VM destroyed during the startup.
When I stop a VM and start it again, it starts on the affected host without any problem, but migration doesn't work.
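To help narrow it down, a sketch of where the real error usually lives (the engine message 'VM destroyed during the startup' only mirrors what the destination host reported; the paths below are the oVirt defaults):
# grep -iE 'migrat|error' /var/log/vdsm/vdsm.log           (on ovirt3, around 2023-12-14 10:00)
# less /var/log/libvirt/qemu/ca1.access.prod.hq.sldev.cz.log
# rpm -q vdsm qemu-kvm libvirt-daemon                      (compare source vs. destination after the update)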
Thank you for any help.
Jirka
11 months, 1 week