Help remove snapshot
by Magnus Isaksson
Hi!
I need some help removing a snapshot that looks like this.
How do I remove it manually, or fix the snapshot definition?
-----------------
Error while executing action:
TUN_SALDC01:
* The requested snapshot has an invalid parent, either fix the snapshot definition or remove it manually to complete the process.
-----------------
And when I look at the disk on that snapshot, it shows as this:
[inline screenshot: snapshot disk details]
And general info:
[inline screenshot: snapshot general info]
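For reference, once the broken snapshot's id is known, it can usually be removed through the engine's REST API (a DELETE on the snapshot resource). A minimal stdlib-only sketch; the engine URL, credentials and ids below are placeholders, not values from this thread:

```python
# Sketch: remove a snapshot via the oVirt REST API (DELETE on the snapshot resource).
# Engine URL, credentials and ids are hypothetical placeholders.
import base64
import urllib.request


def snapshot_url(engine: str, vm_id: str, snap_id: str) -> str:
    """Build the REST URL for one snapshot of a VM."""
    return f"{engine}/ovirt-engine/api/vms/{vm_id}/snapshots/{snap_id}"


def delete_snapshot(engine: str, vm_id: str, snap_id: str,
                    user: str, password: str) -> int:
    """Issue the DELETE request with basic auth; return the HTTP status code."""
    req = urllib.request.Request(snapshot_url(engine, vm_id, snap_id),
                                 method="DELETE")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return resp.status


# Usage (hypothetical values -- replace with your engine FQDN and the real ids):
#   delete_snapshot("https://engine.example.com", "<vm-uuid>", "<snapshot-uuid>",
#                   "admin@internal", "password")
```

If the engine still refuses because of the invalid parent, the leftover snapshot rows may need cleaning on the engine database side, which is beyond this sketch.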
3 years, 8 months
How to disable dhcp in ovirt network
by Adam Xu
Hi everyone
Recently, when I installed OKD on oVirt, the installation process failed
because the VM automatically acquired an IP address from DHCP before the
installer assigned it.
Our DHCP is served by a layer 3 switch. I tried to disable DHCP using a
network filter, but there is no entry related to blocking the DHCP
feature. Is there any way for oVirt to disable DHCP?
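There is indeed no DHCP-blocking entry among the filters oVirt exposes on vNIC profiles, but libvirt (which oVirt runs on top of) accepts custom nwfilter definitions. Below is only a sketch of a filter that drops outgoing DHCP client traffic (UDP to server port 67); the filter name is made up, and because the engine UI only offers its own registered filters, a custom one may need to be defined on every host (`virsh nwfilter-define`) and is not guaranteed to survive engine management:

```xml
<!-- Hypothetical custom libvirt nwfilter: drop DHCP requests leaving the VM. -->
<filter name='no-dhcp-client' chain='root'>
  <!-- DHCP DISCOVER/REQUEST go out as UDP to server port 67; drop them. -->
  <rule action='drop' direction='out' priority='100'>
    <udp dstportstart='67' dstportend='67'/>
  </rule>
</filter>
```

Scoping DHCP off the installation VLAN on the layer 3 switch itself may be the simpler route.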
--
Adam Xu
3 years, 8 months
Migration not working
by Juan Pablo Lorier
Hi,
I'm having issues with migration to some hosts. I have a 4-node cluster
and I tried updating to see if that fixes it, but it's still failing. The
reason is not clear in the engine log.
2020-08-14 09:45:32,322-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-429) [364e7777] EVENT_ID: VM_MIGRATION_START_SYSTEM_INITIATED(67), Migration initiated by system (VM: medialist2-videoteca, Source: virt2.tnu.com.uy, Destination: virt3.tnu.com.uy, Reason: ).
2020-08-14 09:45:32,336-03 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-429) [535a8bdb] Lock Acquired to object 'EngineLock:{exclusiveLocks='[aac74685-c969-4761-923f-043400176edf=VM]', sharedLocks=''}'
2020-08-14 09:45:32,355-03 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-429) [535a8bdb] Running command: MigrateVmToServerCommand internal: true. Entities affected : ID: aac74685-c969-4761-923f-043400176edf Type: VMAction group MIGRATE_VM with role type USER
2020-08-14 09:45:32,383-03 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-429) [535a8bdb] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='634f3f64-8945-470c-b31c-b8d4c73109e6', vmId='aac74685-c969-4761-923f-043400176edf', srcHost='virt2.tnu.com.uy', dstVdsId='f9e441a0-a4e6-4548-8c6f-83a958120f02', dstHost='virt4.tnu.com.uy:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='125', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='172.16.100.45'}), log id: 1db80717
2020-08-14 09:45:32,385-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-429) [535a8bdb] START, MigrateBrokerVDSCommand(HostName = virt2.tnu.com.uy, MigrateVDSCommandParameters:{hostId='634f3f64-8945-470c-b31c-b8d4c73109e6', vmId='aac74685-c969-4761-923f-043400176edf', srcHost='virt2.tnu.com.uy', dstVdsId='f9e441a0-a4e6-4548-8c6f-83a958120f02', dstHost='virt4.tnu.com.uy:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='125', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='172.16.100.45'}), log id: 4085b503
2020-08-14 09:45:32,433-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-429) [535a8bdb] FINISH, MigrateBrokerVDSCommand, return: , log id: 4085b503
2020-08-14 09:45:32,437-03 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-429) [535a8bdb] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 1db80717
2020-08-14 09:45:32,446-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-429) [535a8bdb] EVENT_ID: VM_MIGRATION_START_SYSTEM_INITIATED(67), Migration initiated by system (VM: reverse_proxy, Source: virt2.tnu.com.uy, Destination: virt4.tnu.com.uy, Reason: ).
2020-08-14 09:45:32,462-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-429) [535a8bdb] EVENT_ID: USER_VDS_MAINTENANCE_WITHOUT_REASON(620), Host virt2.tnu.com.uy was switched to Maintenance mode by jplorier(a)tnu.com.uy.
2020-08-14 09:45:32,681-03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-14) [7667d149-886e-406b-9cd6-9b7472fa9944] Command 'MaintenanceNumberOfVdss' id: 'db97a747-115b-4f01-aa3d-6277080e6a44' child commands '[]' executions were completed, status 'SUCCEEDED'
2020-08-14 09:45:33,024-03 INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engineScheduled-Thread-62) [] Received first domain report for host virt2.tnu.com.uy
2020-08-14 09:45:33,686-03 INFO [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-88) [7667d149-886e-406b-9cd6-9b7472fa9944] Ending command 'org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand' successfully.
2020-08-14 09:45:35,137-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [] VM 'fd854ebe-144f-455f-bf0c-48167d7aae9f'(medialist2-videoteca) moved from 'MigratingFrom' --> 'Up'
2020-08-14 09:45:35,137-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [] Adding VM 'fd854ebe-144f-455f-bf0c-48167d7aae9f'(medialist2-videoteca) to re-run list
2020-08-14 09:45:35,144-03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [] Rerun VM 'fd854ebe-144f-455f-bf0c-48167d7aae9f'. Called from VDS 'virt2.tnu.com.uy'
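As an aside, when chasing reruns like these it can help to pull the affected VM ids and source hosts out of engine.log mechanically. A small sketch; the log format is taken from the lines quoted above, anything else is an assumption:

```python
# Sketch: extract VM ids and source hosts from 'Rerun VM' lines in engine.log.
import re

RERUN = re.compile(r"Rerun VM '([0-9a-f-]{36})'\. Called from VDS '([^']+)'")


def find_reruns(log_text: str) -> list[tuple[str, str]]:
    """Return (vm_id, source_host) pairs for every rerun event in the log."""
    return RERUN.findall(log_text)


sample = ("2020-08-14 09:45:35,144-03 ERROR [org.ovirt.engine.core.vdsbroker."
          "monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-"
          "Thread-96) [] Rerun VM 'fd854ebe-144f-455f-bf0c-48167d7aae9f'. "
          "Called from VDS 'virt2.tnu.com.uy'")
print(find_reruns(sample))
# [('fd854ebe-144f-455f-bf0c-48167d7aae9f', 'virt2.tnu.com.uy')]
```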
*On the host running the VM, I see no error related to the migration,
only one about the Engine VM, which otherwise seems to be working fine:*
2020-08-14 09:06:23,337-0400 ERROR (jsonrpc/3) [root] failed to retrieve Hosted Engine HA score '[Errno 2] No such file or directory' Is the Hosted Engine setup finished? (api:19
*VDSM version in node:*
vdsm.x86_64 4.30.46-1.el7 @ovirt-4.3
All nodes are up to date.
Thanks
3 years, 8 months
oVirt nested on oVirt works when you enable MAC spoofing on host VMs
by thomas@hoberg.net
Howto: Create a new network profile that doesn't have the MAC spoofing filter included. In my case I used one without any filters; you may want to be more careful.
Background:
I had tried the nested approach with the default settings and found that the hosted-engine setup worked just fine up to the point where it was running as a nested VM on the install node, but that it failed to tie in the other gluster nodes or to communicate with the outside: something was wrong with the network, but there were no obvious failures.
I tried using other, simpler "client hypervisors" (it's all KVM anyway) VirtualBox and 'native' KVM and the symptoms were the same: The VMs ran just fine, but the network didn't connect.
And then I remembered that a hypervisor obviously needs to override the MAC for each client VM if you want full network access. I had come across "no-mac-spoofing" often enough that it stayed on my mind... and finally it clicked.
Anyhow, I'm validating full functionality now. My main interest in this is the ability to do full simulation/testing of oVirt 4.3->4.4 upgrades on 3-node HCI setups, which I'd rather not do on the live physical system.
Of course, it's also quite cool!
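For anyone wanting to script the profile described above: a vNIC profile can be created by POSTing to /ovirt-engine/api/vnicprofiles. This is only a sketch of building the request body; the profile name and network id are placeholders, and the assumption that an empty `<network_filter/>` element means "no filter" is worth verifying against your engine version:

```python
# Sketch: build the XML body for creating a vNIC profile without a network
# filter (POST to /ovirt-engine/api/vnicprofiles). Name and network id are
# placeholders; the empty <network_filter/> as "no filter" is an assumption.
from xml.sax.saxutils import escape


def vnic_profile_body(name: str, network_id: str) -> str:
    return (
        "<vnic_profile>"
        f"<name>{escape(name)}</name>"
        f'<network id="{escape(network_id)}"/>'
        "<network_filter/>"
        "</vnic_profile>"
    )


print(vnic_profile_body("nested-no-filter", "net-uuid"))
```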
3 years, 8 months
Support for Shared SAS storage
by Vinícius Ferrão
Hello,
I’ve two compute nodes with SAS direct-attached storage sharing the same disks.
Looking at the supported types I can’t see this on the documentation: https://www.ovirt.org/documentation/admin-guide/chap-Storage.html
There is local storage in this documentation, but my case is two machines, both using SAS, connected to the same disks. It’s the VRTX hardware from Dell.
Is there any support for this? It should be just like Fibre Channel and iSCSI, but with SAS instead.
Thanks,
3 years, 8 months
[ANN] oVirt 4.4.2 Third Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.2 Third Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.2
Third Release Candidate for testing, as of August 13th, 2020.
This update is the second in a series of stabilization updates to the 4.4
series.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.2 release highlights:
http://www.ovirt.org/release/4.4.2/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.2/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
3 years, 8 months
Is the udev settling issue more widespread? Getting failed 'qemu-img convert' also while copying disks between data and vmstore domains
by thomas@hoberg.net
While trying to diagnose an issue with a set of VMs that get stopped for I/O problems at startup, I try to deal with the fact that their boot disks cause this issue, no matter where I connect them. They might have been the first disks I ever tried to sparsify and I was afraid that might have messed them up. The images are for a nested oVirt deployment and they worked just fine, before I shut down those VMs...
So I first tried to hook them up as secondary disks to another VM to have a look, but that just caused the other VM to stop at boot.
I also tried downloading, exporting, and plain copying the disks, to no avail; OVA exports of the entire VM fail again (fix is in!).
So to make sure copying disks between volumes *generally* works, I tried copying a disk from a working (but stopped) VM from 'vmstore' to 'data' on my 3nHCI farm, but that failed, too!
Plenty of space all around, but all disks are using thin/sparse/VDO on SSD underneath.
Before I open a bug, I'd like some feedback: is this a standard QA test, is this happening to you, etc.
Still on oVirt 4.3.11, with pack_ova.py patched to wait for the udev settle.
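As a side note, the failing copy can be reproduced by hand to separate the Gluster transport problem from vdsm. A sketch of a helper that rebuilds the exact qemu-img invocation from the error output below (the paths in the usage comment are placeholders):

```python
# Sketch: rebuild the qemu-img convert command vdsm ran, so it can be retried
# manually while watching for 'Transport endpoint is not connected' errors.
import subprocess


def qemu_convert_cmd(src: str, dst: str, fmt: str = "raw") -> list[str]:
    """Same flags vdsm used: no host cache on either side, collapse to raw."""
    return ["/usr/bin/qemu-img", "convert", "-p",
            "-t", "none", "-T", "none",
            "-f", fmt, src, "-O", fmt, dst]


# Usage (placeholder paths -- substitute the image paths from your own vdsm.log):
#   subprocess.run(qemu_convert_cmd("/rhev/data-center/mnt/.../source-image",
#                                   "/rhev/data-center/mnt/.../dest-image"),
#                  check=True)
```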
This is from the engine.log on the hosted-engine:
2020-08-12 00:04:15,870+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM gem2 command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_vmstore/9d1b8774-c5dc-46a8-bfa2-6a6db5851195/images/aca27b96-7215-476f-b793-fb0396543a2e/311f853c-e9cc-4b9e-8a00-5885ec7adf14', '-O', 'raw', u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_data/32129b5f-d47c-495b-a282-7eae1079257e/images/f6a08d2a-4ddb-42da-88e6-4f92a38b9c95/e0d00d46-61a1-4d8c-8cb4-2e5f1683d7f5'] failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading sector 131072: Transport endpoint is not connected\\nqemu-img: error while reading sector 135168: Transport endpoint is not connected\\nqemu-img: error while reading sector 139264: Transport endpoint is not connected\\nqemu-img: error while reading sector 143360: Transport endpoint is not connected\\nqemu-img: error while reading sector 147456: Transport endpoint is not connected\\nqemu-img: error while reading sector 151552: Transport endpoint is not connected\\n')",)
and this is from the vdsm.log on the gem2 node:
Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_vmstore/9d1b8774-c5dc-46a8-bfa2-6a6db5851195/images/aca27b96-7215-476f-b793-fb0396543a2e/311f853c-e9cc-4b9e-8a00-5885ec7adf14', '-O', 'raw', u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_data/32129b5f-d47c-495b-a282-7eae1079257e/images/f6a08d2a-4ddb-42da-88e6-4f92a38b9c95/e0d00d46-61a1-4d8c-8cb4-2e5f1683d7f5'] failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading sector 131072: Transport endpoint is not connected\nqemu-img: error while reading sector 135168: Transport endpoint is not connected\nqemu-img: error while reading sector 139264: Transport endpoint is not connected\nqemu-img: error while reading sector 143360: Transport endpoint is not connected\nqemu-img: error while reading sector 147456: Transport endpoint is not connected\nqemu-img: error while reading sector 151552: Transport endpoint is not connected\n')
2020-08-12 00:03:15,428+0200 ERROR (tasks/7) [storage.Image] Unexpected error (image:849)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 837, in copyCollapsed
raise se.CopyImageError(str(e))
CopyImageError: low level Image copy failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_vmstore/9d1b8774-c5dc-46a8-bfa2-6a6db5851195/images/aca27b96-7215-476f-b793-fb0396543a2e/311f853c-e9cc-4b9e-8a00-5885ec7adf14', '-O', 'raw', u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_data/32129b5f-d47c-495b-a282-7eae1079257e/images/f6a08d2a-4ddb-42da-88e6-4f92a38b9c95/e0d00d46-61a1-4d8c-8cb4-2e5f1683d7f5'] failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading sector 131072: Transport endpoint is not connected\\nqemu-img: error while reading sector 135168: Transport endpoint is not connected\\nqemu-img: error while reading sector 139264: Transport endpoint is not connected\\nqemu-img: error while reading sector 143360: Transport endpoint is not connected\\nqemu-img: error while reading sector 147456: Transport endpoint is not connected\\nqemu-img: error while reading sector 151552: Transport endpoint is not connected\\n')",)
2020-08-12 00:03:15,429+0200 ERROR (tasks/7) [storage.TaskManager.Task] (Task='6399d533-e96a-412d-b0c3-0548e24d658d') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, in run
return self.cmd(*self.argslist, **self.argsdict)
File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
return method(self, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1633, in copyImage
postZero, force, discard)
File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 837, in copyCollapsed
raise se.CopyImageError(str(e))
CopyImageError: low level Image copy failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_vmstore/9d1b8774-c5dc-46a8-bfa2-6a6db5851195/images/aca27b96-7215-476f-b793-fb0396543a2e/311f853c-e9cc-4b9e-8a00-5885ec7adf14', '-O', 'raw', u'/rhev/data-center/mnt/glusterSD/192.168.0.91:_data/32129b5f-d47c-495b-a282-7eae1079257e/images/f6a08d2a-4ddb-42da-88e6-4f92a38b9c95/e0d00d46-61a1-4d8c-8cb4-2e5f1683d7f5'] failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading sector 131072: Transport endpoint is not connected\\nqemu-img: error while reading sector 135168: Transport endpoint is not connected\\nqemu-img: error while reading sector 139264: Transport endpoint is not connected\\nqemu-img: error while reading sector 143360: Transport endpoint is not connected\\nqemu-img: error while reading sector 147456: Transport endpoint is not connected\\nqemu-img: error while reading sector 151552: Transport endpoint is not connected\\n')",)
3 years, 8 months