Guest OS console port
by jihwahn1018@naver.com
Hi,
When I connect to a VM via console.vv, I found that it allocates two ports for SPICE and one port for VNC per VM,
and these ports are allocated starting from 5900 on each host.
In VDSM it looks like ports 5900-6923 are opened for VM consoles.
-----------------------
# firewall-cmd --info-service vdsm
vdsm
ports: 54321/tcp 5900-6923/tcp 49152-49216/tcp
protocols:
source-ports:
modules:
destination:
includes:
helpers:
-------------------
Can I change this range to a specific range like 8000-8080, or reduce it
(for example, to 5900-6000)?
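If I understand it correctly, the firewalld vdsm service only controls which ports are reachable from outside, while the ports themselves are handed out by libvirt (remote_display_port_min/remote_display_port_max in /etc/libvirt/qemu.conf). This is an untested sketch for a 5900-6000 range - noting that VDSM may manage parts of qemu.conf, so a local edit could be overwritten:
-----------------------
# Firewall side: replace the default console range with a narrower one
firewall-cmd --permanent --service=vdsm --remove-port=5900-6923/tcp
firewall-cmd --permanent --service=vdsm --add-port=5900-6000/tcp
firewall-cmd --reload

# Allocation side: limit the ports libvirt assigns for SPICE/VNC displays
# (edit /etc/libvirt/qemu.conf, then restart libvirtd)
#   remote_display_port_min = 5900
#   remote_display_port_max = 6000
grep remote_display_port /etc/libvirt/qemu.conf
systemctl restart libvirtd
-----------------------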
Thanks.
2 years, 9 months
Upgrade oVirt 4.3 -> 4.4 Cluster "Emulated machine" - Error
by fabian.mohren@qbeyond.de
Hi,
++ Setup: one self-hosted oVirt engine with EL hosts (one OL7, the second OL8)
We've updated the ovirt-engine from 4.3 to 4.4 - that worked. After that, we updated the first host (test-host2) to Oracle Linux 8.5.0.
Then we installed/added the new host to the cluster via the oVirt GUI.
oVirt installed everything. We can see the host and configure the host networks ... but the status is "NonOperational".
On the host we can see this error message:
Host test-host2 does not comply with the cluster Test-Cluster emulated machine. The cluster emulated machine is pc-i440fx-2.12 and the host emulated machines are q35,pc-q35-rhel8.3.0,pc,pc-i440fx-2.11,pc-q35-rhel7.3.0,pc-q35-rhel7.6.0,pc-q35-rhel8.1.0,pc-q35-rhel7.4.0,pc-q35-rhel8.0.0,pc-i440fx-rhel7.3.0,pc-q35-rhel7.5.0,pc-i440fx-4.2,pc-i440fx-rhel7.0.0,pc-i440fx-rhel7.6.0,pc-i440fx-rhel7.4.0,pc-q35-rhel8.4.0,pc-q35-rhel8.5.0,pc-q35-rhel8.2.0,pc-i440fx-rhel7.1.0,pc-i440fx-rhel7.2.0,pc-i440fx-rhel7.5.0.
The Host Version:
/usr/libexec/qemu-kvm -M ? |grep pc-i440fx-2
>> pc-i440fx-2.11 Standard PC (i440FX + PIIX, 1996) (deprecated)
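So the OL8 qemu only provides the deprecated pc-i440fx-2.11, and pc-i440fx-2.12 (the 4.3 cluster default) is not available at all. For completeness, this is roughly how to compare what qemu provides with what VDSM reports to the engine (untested sketch; vdsm-client should be available on the host):
-----------------------
# Machine types provided by the installed qemu-kvm
/usr/libexec/qemu-kvm -machine help | grep -E 'i440fx|q35'

# Machine types VDSM reports to the engine for this host
vdsm-client Host getCapabilities | grep -A 30 emulatedMachines
-----------------------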
The cluster has compatibility version "4.3" - if I change it to 4.4, it complains that there is an old (OL7) host in the cluster (which is right ;)).
How can I update the emulated machines for the new host?
"yum update" has already been done :)
Best Regards,
Fabian
2 years, 9 months
Unable to start VMs after upgrade from 4.4.9 to 4.4.10
by Christoph Timm
Hi List,
I receive the following error while starting some of our VMs after the
upgrade to 4.4.10.
VM v4-probe is down with error. Exit message: internal error: process
exited while connecting to monitor: 2022-03-24T11:27:23.098838Z
qemu-kvm: -blockdev
{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":"libvirt-3-format"}:
Failed to get "write" lock
Is another process using the image
[/rhev/data-center/mnt/lxskinner:_exports_skinner__2tb__1/28da0c79-b6f1-4740-bd24-e7bafcb62c75/images/90b28e7e-98a3-4f80-a136-c5a52c4a3e05/c1450651-b1e5-47fd-906b-8ebd62ace8a4]?.
The example disk is located on an NFS share.
Any idea how to tell oVirt that there is no other process using this disk?
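Before I force anything, this is the untested sketch I was going to use to see whether anything on this host really holds the image open, and to read the image header without taking the qemu write lock (the -U / --force-share flag). Since the disk is on NFS, I assume the first part would have to be repeated on every host that mounts the export:
-----------------------
IMG=/rhev/data-center/mnt/lxskinner:_exports_skinner__2tb__1/28da0c79-b6f1-4740-bd24-e7bafcb62c75/images/90b28e7e-98a3-4f80-a136-c5a52c4a3e05/c1450651-b1e5-47fd-906b-8ebd62ace8a4

# Any local process with the image open?
fuser -v "$IMG"
lsof "$IMG"

# Inspect the image without taking the write lock
qemu-img info -U "$IMG"
-----------------------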
Best regards and thx
Christoph
2 years, 9 months
change mount point for hosted_storage
by tpike@psc.edu
Hello:
I have an oVirt 4.4.10 cluster that is working fine. All storage is on NFS. I would like to change the mount point for the hosted_storage domain from localhost:/... to <IP>:/... It will be the same physical volume; all I want is to stop running my NFS mounts through my local hosts and instead mount directly from the NFS server.
I have used the "hosted-engine --set-shared-config storage" command to change the mount point for the storage, and looking at the hosted-engine.conf file confirms that the new path is set correctly. When I look at the storage inside the hosted engine, however, it still shows the old value. How can I get the cluster to use the new path instead of the old one? I changed it using both the he_local and he_shared keys. Thanks!
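For reference, these are the commands I used (the values are placeholders); if there is a different place where the engine keeps its own copy of the storage connection, that is probably what I am missing:
-----------------------
# Shared copy (stored on the hosted-engine storage domain) and local copy
hosted-engine --set-shared-config storage <IP>:/path --type=he_shared
hosted-engine --set-shared-config storage <IP>:/path --type=he_local

# Verify what is actually stored now
hosted-engine --get-shared-config storage --type=he_shared
hosted-engine --get-shared-config storage --type=he_local
-----------------------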
Tod Pike
2 years, 9 months
Correct way to search VM by mac
by Gianluca Cecchi
Hello,
today I had a problem with a conflicting MAC address.
The source of the problem was a VM transferred from an engine environment
2 years, 9 months
Big problem on Ovirt 4.4
by Nenhum de Nos
Hi,
I have been running a small oVirt test node and engine since 4.3. It's in my home, but I kind of messed up, and now I can't use or reinstall it without losing all my VMs' data (at least, that is as far as my knowledge of oVirt goes).
I hit the issue where yum could not update because the CentOS 8 mirrors would no longer answer. I tried a solution I found in the list archives, and things went really bad from there.
I had a power outage last Saturday and my node became non-responsive. No matter what I did, it never left that state. It kept complaining about the CPU type - I run it on a Ryzen 3400G (it used to run on an Intel i5, also a desktop CPU, and I never had an issue; since I changed to Ryzen, every power outage here is a problem) - and the node wouldn't come back. So I tried to upgrade my cluster to the 4.4 level (it was on 4.3), which didn't help. Then I tried to go back to 4.3, and a message said only empty clusters could do that. I removed my Ryzen node from the cluster and it went back to 4.3. But now I could not reinstall the node; I tried changing its name, no good - it failed to get packages from the Gluster 8 repo. That is where my story got sad. I tried to update my repos using the steps in the last post from https://lists.ovirt.org/archives/list/users@ovirt.org/thread/D3FEXQ5.... Bad idea - and another bad one was that I hadn't taken a backup of the engine.
Now my engine won't start and the backup won't run:
engine-backup --mode=backup --file=/root/backup_deuruim --log=/root/backup_deuruim.log
Start of engine-backup with mode 'backup'
scope: all
archive file: /root/backup_deuruim
log file: /root/backup_deuruim.log
Backing up:
Notifying engine
Notifying engine
FATAL: Failed notifying engine
pg_dump: error: Dumping the contents of table "audit_log" failed: PQgetResult() failed.
pg_dump: error: Error message from server: ERROR: MultiXactId 1076887572 has not been created yet -- apparent wraparound
pg_dump: error: The command was: COPY public.audit_log (audit_log_id, user_id, user_name, vm_id, vm_name, vm_template_id, vm_template_name, vds_id, vds_name, log_time, log_type_name, log_type, severity, message, processed, storage_pool_id, storage_pool_name, storage_domain_id, storage_domain_name, cluster_id, cluster_name, correlation_id, job_id, quota_id, quota_name, gluster_volume_id, gluster_volume_name, origin, custom_event_id, event_flood_in_sec, custom_data, deleted, call_stack, brick_id, brick_path, custom_id) TO stdout;
2022-03-24 13:16:10 3461: FATAL: Database engine backup failed
The Admin Portal won't work, and I get a message on the engine login screen saying there is a web console at port 9090. I don't get it.
As a last resort I am here, trying at least to find a way to back it up and install the engine elsewhere. All the VM data is fine, but as the names are cryptic, importing them would leave me guessing, right?
The number of VMs is low, 6 or 7, so if there is a way to do it manually, I would try it. Installing everything again would be my nightmare here.
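One thing I was thinking of trying, since the failure is only in dumping audit_log (which, as far as I understand, only holds event history): a plain pg_dump that skips that table's data, just to get the configuration out before I touch anything else. Untested sketch, assuming the default local "engine" database:
-----------------------
# Skip the corrupted audit_log data and dump the rest of the engine DB
su - postgres -c "pg_dump --exclude-table-data=audit_log -Fc engine" > /root/engine_noaudit.dump
-----------------------
Would that be enough to rebuild the engine elsewhere, or is engine-backup the only supported way?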
thanks,
matheus
ps: if needed, I can provide more info.
2 years, 9 months
dnf update fails with oVirt 4.4 on centos 8 stream due to ansible package conflicts.
by Daniel McCoshen
Hey all,
I'm running oVirt 4.4 in production (4.4.5-11-1.el8), and I'm attempting to update the OS on my hosts. The hosts are all CentOS 8 Stream, and dnf update is failing on all of them with the following output:
[root@ovirthost ~]# dnf update
Last metadata expiration check: 1:36:32 ago on Thu 17 Feb 2022 12:01:25 PM CST.
Error:
Problem: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch requires ansible, but none of the providers can be installed
- package ansible-2.9.27-2.el8.noarch conflicts with ansible-core > 2.11.0 provided by ansible-core-2.12.2-2.el8.x86_64
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.27-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.27-1.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.17-1.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.18-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.20-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.21-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.23-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 provided by ansible-2.9.24-2.el8.noarch
- cannot install the best update candidate for package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
- cannot install the best update candidate for package ansible-2.9.27-2.el8.noarch
- package ansible-2.9.20-1.el8.noarch is filtered out by exclude filtering
- package ansible-2.9.16-1.el8.noarch is filtered out by exclude filtering
- package ansible-2.9.19-1.el8.noarch is filtered out by exclude filtering
- package ansible-2.9.23-1.el8.noarch is filtered out by exclude filtering
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
cockpit-ovirt-dashboard.noarch is at 0.15.1-1.el8, and it looks like the conflicting ansible-core package was added to the 8-stream repo two days ago. That's when I first noticed the issue, but it might be older. When the earlier issues with the CentOS 8 deprecation happened, I swapped out the repos on some of these hosts for the new ones, and I have since added new hosts as well using the updated repos. Both the hosts that were moved from the old repos and the ones created with the new repos are experiencing this issue.
ansible-core is being pulled from the CentOS 8 Stream AppStream repo, and the ansible package that cockpit-ovirt-dashboard.noarch is trying to use as a dependency is coming from ovirt-4.4-centos-ovirt44.
I'm tempted to blacklist ansible-core in my dnf configuration, but that seems like a hacky workaround and not the actual fix here.
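In case it helps anyone else, this is the kind of exclude I meant (temporary only, to be dropped once cockpit-ovirt-dashboard moves to ansible-core; the repo file name may differ on your hosts):
-----------------------
# One-off, per command:
dnf update --exclude=ansible-core

# Or persistently in the AppStream repo definition, e.g.
# /etc/yum.repos.d/CentOS-Stream-AppStream.repo:
#   excludepkgs=ansible-core
-----------------------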
Thanks,
Dan
2 years, 9 months
gluster in ovirt-node in 4.5
by Yedidyah Bar David
Hi all,
In relation to a recent question here (thread "[ovirt-devel] [ANN]
Schedule for oVirt 4.5.0"), we are now blocked with the following
chain of changes/dependencies:
1. ovirt-ansible-collection recently moved from ansible-2.9 to
ansible-core 2.12.
2. ovirt-hosted-engine-setup followed it.
3. ovirt-release-host-node (the package including dependencies for
ovirt-node) requires gluster-ansible-roles.
4. gluster-ansible-roles right now requires 'ansible >= 2.9' (not
core), and I only checked one of its dependencies,
gluster-ansible-infra, and this one requires 'ansible >= 2.5'.
5. ansible-core does not 'Provide: ansible', IIUC intentionally.
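(For anyone who wants to double-check this chain on their own machine, something like the following should do - package names as above:)
-----------------------
# What the gluster-ansible packages require today
rpm -q --requires gluster-ansible-roles | grep -i ansible
rpm -q --requires gluster-ansible-infra | grep -i ansible

# What, if anything, in the enabled repos still provides plain 'ansible'
dnf repoquery --whatprovides ansible
-----------------------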
So we should do one of:
1. Fix gluster-ansible* packages to work with ansible-core 2.12.
2. Only patch gluster-ansible* packages to require ansible-core,
without making sure they actually work with it. This will satisfy all
deps (I guess), make the thing installable, but will likely break when
actually used. Not sure it's such a good option, but nonetheless
relevant. Might make sense if someone is going to work on (1.) soon
but not immediately. This is what would have happened in practice if
ansible-core had 'Provide:'-ed ansible.
3. Patch ovirt-release-host-node to not require gluster-ansible*
anymore. This means it will not be included in ovirt-node. Users that
will want to use it will have to install the dependencies manually,
somehow, presumably after (1.) is done independently.
Our team (RHV integration) does not have capacity for (1.). I intend
to do (3.) very soon, unless we get volunteers for doing (1.) or
strong voices for (2.).
Best regards,
--
Didi
2 years, 9 months
Commvault Backup fails to attach disk snapshots
by martin.kaufmann@snt.at
Hello all,
Since we updated our Commvault to version "Service Pack 26 Hotfix 16", random VM backups have been failing to attach disk snapshots to the backup proxy VM.
We are running oVirt Version 4.4.10.6-1.el8 and the hypervisor is oVirt Node 4.4.9.
These are the logs from the manager VM:
##############
2022-03-24 11:13:59,390+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-1578) [0aeb5794-5c2e-44c6-8fc5-34ec0e81d16b] Disk hot-plug: <?xml version="1.0" encoding="UTF-8"?><hotplug>
<devices>
<disk snapshot="no" type="file" device="disk">
<target dev="sda" bus="scsi"/>
<source file="/rhev/data-center/mnt/blockSD/6da85fb1-ff4b-4665-ade3-0f32f83250bc/images/e3c292e0-dfd2-4cd5-a27f-f87308a18e64/8b524896-902c-445f-a33a-65335cb75eff">
<seclabel model="dac" type="none" relabel="no"/>
</source>
<driver name="qemu" io="threads" type="qcow2" error_policy="stop" cache="writethrough"/>
<alias name="ua-e3c292e0-dfd2-4cd5-a27f-f87308a18e64"/>
<address bus="0" controller="0" unit="0" type="drive" target="0"/>
<serial>e3c292e0-dfd2-4cd5-a27f-f87308a18e64</serial>
</disk>
</devices>
<metadata xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ovirt-vm:vm>
<ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:poolID>98d981da-d010-11ea-9e4d-00163e1be424</ovirt-vm:poolID>
<ovirt-vm:volumeID>8b524896-902c-445f-a33a-65335cb75eff</ovirt-vm:volumeID>
<ovirt-vm:shared>transient</ovirt-vm:shared>
<ovirt-vm:imageID>e3c292e0-dfd2-4cd5-a27f-f87308a18e64</ovirt-vm:imageID>
<ovirt-vm:domainID>6da85fb1-ff4b-4665-ade3-0f32f83250bc</ovirt-vm:domainID>
</ovirt-vm:device>
</ovirt-vm:vm>
</metadata>
</hotplug>
2022-03-24 11:13:59,898+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-1578) [0aeb5794-5c2e-44c6-8fc5-34ec0e81d16b] Failed in 'HotPlugDiskVDS' method
2022-03-24 11:13:59,898+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-1578) [0aeb5794-5c2e-44c6-8fc5-34ec0e81d16b] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return value 'StatusOnlyReturn [status=Status [code=45, message=Requested operation is not valid: Domain already contains a disk with that address]]'
2022-03-24 11:13:59,898+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-1578) [0aeb5794-5c2e-44c6-8fc5-34ec0e81d16b] HostName = hypervisor1.domain
2022-03-24 11:13:59,898+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-1578) [0aeb5794-5c2e-44c6-8fc5-34ec0e81d16b] Command 'HotPlugDiskVDSCommand(HostName = hypervisor1.domain, HotPlugDiskVDSParameters:{hostId='c2fd81cd-8fc8-410e-8333-a0a36b87ab2b', vmId='890c105d-333b-435e-8d8b-0b889f4f9c14', diskId='e3c292e0-dfd2-4cd5-a27f-f87308a18e64'})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = Requested operation is not valid: Domain already contains a disk with that address, code = 45
2022-03-24 11:13:59,898+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-1578) [0aeb5794-5c2e-44c6-8fc5-34ec0e81d16b] FINISH, HotPlugDiskVDSCommand, return: , log id: 536a4414
2022-03-24 11:13:59,898+01 ERROR [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-1578) [0aeb5794-5c2e-44c6-8fc5-34ec0e81d16b] Command 'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = Requested operation is not valid: Domain already contains a disk with that address, code = 45 (Failed with error FailedToPlugDisk and code 45)
2022-03-24 11:13:59,899+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-1578) [0aeb5794-5c2e-44c6-8fc5-34ec0e81d16b] Command [id=87712152-4660-40ca-b292-397796c735d5]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.storage.DiskVmElement; snapshot: VmDeviceId:{deviceId='e3c292e0-dfd2-4cd5-a27f-f87308a18e64', vmId='890c105d-333b-435e-8d8b-0b889f4f9c14'}.
2022-03-24 11:13:59,900+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-1578) [0aeb5794-5c2e-44c6-8fc5-34ec0e81d16b] Command [id=87712152-4660-40ca-b292-397796c735d5]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmDevice; snapshot: VmDeviceId:{deviceId='e3c292e0-dfd2-4cd5-a27f-f87308a18e64', vmId='890c105d-333b-435e-8d8b-0b889f4f9c14'}.
2022-03-24 11:13:59,910+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1578) [0aeb5794-5c2e-44c6-8fc5-34ec0e81d16b] EVENT_ID: USER_FAILED_ATTACH_DISK_TO_VM(2,017), Failed to attach Disk VM1_Disk1 to VM backupproxyVM (User: backupagent@internal-authz).
2022-03-24 11:13:59,910+01 INFO [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-1578) [0aeb5794-5c2e-44c6-8fc5-34ec0e81d16b] Lock freed to object 'EngineLock:{exclusiveLocks='[e3c292e0-dfd2-4cd5-a27f-f87308a18e64=DISK]', sharedLocks=''}'
2022-03-24 11:13:59,910+01 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-1578) [] Operation Failed: [Failed to hot-plug disk]
###################
Here are the logs from the hypervisor:
##################
2022-03-24 11:13:59,712+0100 INFO (jsonrpc/4) [virt.vm] (vmId='890c105d-333b-435e-8d8b-0b889f4f9c14') Hotplug disk xml: <?xml version='1.0' encoding='utf-8'?>
<disk device="disk" snapshot="no" type="file">
<address bus="0" controller="0" target="0" type="drive" unit="0" />
<source file="/var/lib/vdsm/transient/6da85fb1-ff4b-4665-ade3-0f32f83250bc-8b524896-902c-445f-a33a-65335cb75eff.pxhtm0_4">
<seclabel model="dac" relabel="no" type="none" />
</source>
<target bus="scsi" dev="sdbzk" />
<serial>e3c292e0-dfd2-4cd5-a27f-f87308a18e64</serial>
<driver cache="writethrough" error_policy="stop" io="threads" name="qemu" type="qcow2" />
<alias name="ua-e3c292e0-dfd2-4cd5-a27f-f87308a18e64" />
</disk>
(vm:3851)
2022-03-24 11:13:59,721+0100 ERROR (jsonrpc/4) [virt.vm] (vmId='890c105d-333b-435e-8d8b-0b889f4f9c14') Hotplug failed (vm:3859)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 3857, in hotplugDisk
self._dom.attachDevice(driveXml)
File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f
ret = attr(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python3.6/site-packages/libvirt.py", line 682, in attachDevice
raise libvirtError('virDomainAttachDevice() failed')
libvirt.libvirtError: Requested operation is not valid: Domain already contains a disk with that address
#####################
Any idea what could cause these errors?
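My guess so far: the proxy VM already has something attached at SCSI address controller=0, bus=0, target=0, unit=0 - perhaps a transient disk left over from an earlier backup run that was never unplugged. This is the untested sketch I intend to run on the hypervisor to check what currently occupies that address (backupproxyVM is the VM name from the engine log above; adjust as needed):
-----------------------
virsh -r list --all                       # confirm the proxy VM's domain name
virsh -r domblklist backupproxyVM         # disks currently attached to it
virsh -r dumpxml backupproxyVM | grep "address type='drive'"   # SCSI drive addresses in use
-----------------------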
Best regards,
Martin
2 years, 9 months