Moving raw/sparse disk from NFS to iSCSI fails on oVirt 4.5.1
by Guillaume Pavese
On a 4.5.1 DC, I have imported a VM and its disk from an old 4.3 DC
(through an export domain, if that's relevant).
The DC/cluster compatibility level is 4.7 and the VM was upgraded to it:
"Original custom compatibility version 4.3 of imported VM xxx is not
supported. Changing it to the lowest supported version: 4.7."
The disk is raw and sparse:
<format>raw</format>
<sparse>true</sparse>
I initially put the VM's disks on an NFS storage domain, but I want to move
the disks to an iSCSI one.
However, after copying data for a while the task fails with "User has failed to
move disk VM-TEMPLATE-COS7_Disk1 to domain iSCSI-STO-FR-301".
In engine.log:
qemu-img: error while writing at byte xxx: No space left on device
2022-07-21 08:58:23,240+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHostJobsVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48)
[65fed1dc-e33b-471e-bc49-8b9662400e5f] FINISH, GetHostJobsVDSCommand,
return:
{0aa2d519-8130-4e2f-bc4f-892e5f7b5206=HostJobInfo:{id='0aa2d519-8130-4e2f-bc4f-892e5f7b5206',
type='storage', description='copy_data', status='failed', progress='79',
error='VDSError:{code='GeneralException', message='General Exception:
("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T',
'none', '-f', 'raw', '-O', 'qcow2', '-o', 'compat=1.1',
'/rhev/data-center/mnt/svc-int-prd-sto-fr-301.hostics.fr:_volume1_ovirt-int-2_data/1ce95c4a-2ec5-47b7-bd24-e540165c6718/images/d3c33cc7-f2c3-4613-84d0-d3c9fa3d5ebd/2c4a0041-b18b-408f-9c0d-971c19a552ea',
'/rhev/data-center/mnt/blockSD/b5dc9c01-3749-4326-99c5-f84f683190bd/images/d3c33cc7-f2c3-4613-84d0-d3c9fa3d5ebd/2c4a0041-b18b-408f-9c0d-971c19a552ea']
failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while writing at
byte 13639873536: No space left on device\\n')",)'}'}}, log id: 73f77495
2022-07-21 08:58:23,241+02 INFO
[org.ovirt.engine.core.bll.StorageJobCallback]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48)
[65fed1dc-e33b-471e-bc49-8b9662400e5f] Command CopyData id:
'521bdf57-8379-40ce-a682-af859fb0cad7': job
'0aa2d519-8130-4e2f-bc4f-892e5f7b5206' execution was completed with VDSM
job status 'failed'
I do want the conversion from raw/sparse to qcow2/sparse to happen, as I
want to enable incremental backups.
I think it may fail because the virtual size is bigger than the initial
size, as I think someone has explained on this list earlier. Can anybody
confirm?
It seems like a pretty common use case to support, though.
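For reference, one way to see how much space the qcow2 copy would actually need
is to ask qemu-img directly on the SPM host. This is only a sketch: the source
path is copied from the engine.log excerpt above, and qemu-img measure requires
a reasonably recent qemu-img.

```
# Sketch only: compare the image's virtual size with what a qcow2 copy would need.
# SRC is the NFS volume path taken from the qemu-img convert command in the log.
SRC='/rhev/data-center/mnt/svc-int-prd-sto-fr-301.hostics.fr:_volume1_ovirt-int-2_data/1ce95c4a-2ec5-47b7-bd24-e540165c6718/images/d3c33cc7-f2c3-4613-84d0-d3c9fa3d5ebd/2c4a0041-b18b-408f-9c0d-971c19a552ea'

qemu-img info "$SRC"                     # virtual size vs. disk size (actual allocation)
qemu-img measure -f raw -O qcow2 "$SRC"  # estimated space required for a qcow2 conversion
```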
Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
--
This message and all attachments (hereinafter "the message") are intended
exclusively for their addressees and are confidential. If you receive this
message in error, please destroy it and notify the sender immediately. Any use
of this message not in accordance with its intended purpose, and any
distribution or publication, in whole or in part, is prohibited without express
authorization. As the internet cannot guarantee the integrity of this message,
Interactiv-group (and its subsidiaries) decline(s) all liability for this
message in the event that it has been altered. IT, ES, UK.
<https://interactiv-group.com/disclaimer.html>
ovirt-engine manager, certificate issue
by david
Hello,
I have a problem logging in to the ovirt-engine manager in my browser.
The warning message in the browser shows me this text:
PKIX path validation failed: java.security.cert.CertPathValidatorException:
validity check failed
To solve this problem I am advised to run engine-setup, and here is the
question: will running engine-setup have any impact on the hosts (hypervisors)
that are running?
ovirt version 4.4.4.7-1.el8
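If it helps, the expiry of the engine's certificates can be checked before
running anything. A sketch only; the paths below are the usual defaults on an
engine host and may differ on yours:

```
# Check the expiry dates of the engine's web (Apache) certificate and the engine CA.
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/certs/apache.cer
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/ca.pem
```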
thanks
Issue with oVirt 4.5 and Data Warehouse installed on a Separate Machine
by Igor Davidoff
Hello,
I have an issue with the engine-setup step on the DWH (separate server) after upgrading from 4.4.10 to 4.5.
It looks like ovirt-engine-setup is looking for the rpm package 'ovirt-engine' instead of 'ovirt-engine-dwh'.
The reported error is:
"
--== END OF CONFIGURATION ==--
[ INFO ] Stage: Setup validation
[ ERROR ] Failed to execute stage 'Setup validation': Command '/usr/bin/rpm' failed to execute
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20220502100751-fqwb07.log
[WARNING] Remote engine was not configured to be able to access DWH, please check the logs.
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20220502101130-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
"
In the setup log I found:
"
2022-05-02 10:11:30,000+0000 DEBUG otopi.context context._executeMethod:127 Stage validation METHOD otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages.Plugin._validation
2022-05-02 10:11:30,001+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:813 execute: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine'), executable='None', cwd='None', env=None
2022-05-02 10:11:30,013+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:863 execute-result: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine'), rc=1
2022-05-02 10:11:30,013+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:921 execute-output: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine') stdout:
package ovirt-engine is not installed
2022-05-02 10:11:30,013+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:926 execute-output: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine') stderr:
2022-05-02 10:11:30,013+0000 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine-common/distro-rpm/packages.py", line 463, in _validation
oenginecons.Const.ENGINE_PACKAGE_NAME,
File "/usr/lib/python3.6/site-packages/otopi/plugin.py", line 931, in execute
command=args[0],
RuntimeError: Command '/usr/bin/rpm' failed to execute
2022-05-02 10:11:30,015+0000 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Setup validation': Command '/usr/bin/rpm' failed to execute
"
Usually the upgrade of minor versions in 4.4 was just:
# yum update ovirt\*setup\*
# engine-setup
# yum update
As that did not work, I tried a fresh installation of CentOS 8 Stream and a recovery of the DWH database and configuration:
# engine-backup --mode=restore --file=backup.bck --provision-all-databases
-> no luck
The last idea was a fresh installation of CentOS 8 Stream + a fresh installation of ovirt-engine-dwh 4.5 (without recovery)
-> the same error.
The engine side works fine.
I compared the current setup logs with those from the installation and all the minor upgrades of ovirt-engine-dwh before 4.5,
and only found the rpm validation for the package 'ovirt-engine-dwh':
"
2022-02-08 16:11:29,846+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:813 execute: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh'), executable='None', cwd='None', env=None
2022-02-08 16:11:29,877+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:863 execute-result: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh'), rc=0
2022-02-08 16:11:29,878+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:921 execute-output: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh') stdout:
ovirt-engine-dwh-4.4.10-1.el8.noarch
2022-02-08 16:11:29,878+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:926 execute-output: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh') stderr:
2022-02-08 16:11:29,878+0000 DEBUG otopi.transaction transaction.commit:152 committing 'DWH Engine database Transaction'
"
It looks like engine-setup knows it is the DWH server, but is trying to validate the wrong rpm package.
Any ideas how to work around this?
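As a sanity check, the two rpm queries from the setup logs can be reproduced by
hand on the DWH host. A sketch only, mirroring the commands shown in the log
excerpts above:

```
# Reproduce the rpm queries from the setup log on the DWH host.
rpm -q --queryformat='%{version}-%{release}\n' ovirt-engine      # fails here, as in the 4.5 log
rpm -q --queryformat='%{version}-%{release}\n' ovirt-engine-dwh  # should report the installed DWH version
rpm -qa 'ovirt-engine*setup*'                                    # lists which setup packages are present
```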
Thank you!
Gluster volume "deleted" by accident --- Is it possible to recover?
by itforums51@gmail.com
hi everyone,
I have a 3-node oVirt 4.4.6 cluster in an HC setup.
Today I was intending to extend the data and vmstore volumes by adding another brick to each; then by accident I pressed the "cleanup" button. Basically it looks like the volumes were deleted.
I am wondering whether there is a process for trying to recover these volumes and therefore all VMs (including the Hosted Engine).
```
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
gluster_lv_data gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
gluster_lv_data-brick1 gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4 0.45
gluster_lv_engine gluster_vg_sda4 -wi-a----- 100.00g
gluster_lv_vmstore gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
gluster_lv_vmstore-brick1 gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4 0.33
gluster_thinpool_gluster_vg_sda4 gluster_vg_sda4 twi-aot--- <7.07t 11.46 0.89
```
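Not an answer, but before trying anything it may be worth checking whether the
Gluster volume definitions and the brick data still exist. A sketch only; brick
paths and volume names depend on your deployment:

```
# Check what Gluster still knows about, and whether the brick data survives.
gluster volume list              # are the volumes still defined?
gluster volume info              # if defined, shows brick paths and their status
ls /var/lib/glusterd/vols/       # volume definitions kept by glusterd
# The thin LVs above are still active, so the brick filesystems may still hold data:
mount | grep gluster             # are the brick mounts still present?
ls /gluster_bricks/              # common HC brick mount point; adjust to your layout
```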
I would appreciate any advice.
TIA
Engine update failing 4.5.1.3-1 -> 4.5.2-1
by xavierl@rogers.com
Hi there,
I'm trying to update my engine version so I can mitigate a bug that was in the previous update, but the update is failing. The log file is not making it easy to pinpoint the cause of the error, however I can provide it. Wondering if anyone else has had this issue and if there's a way to fix it while preserving the VMs I have on this single-host setup.
[ INFO ] Stage: Setup validation
During execution engine service will be stopped (OK, Cancel) [OK]:
[ INFO ] Hosted Engine HA is in Global Maintenance mode.
Setup version: 4.5.2-1.el8
Engine version: 4.5.1.3-1.el8
[ ERROR ] Failed to execute stage 'Setup validation': Setup and (updated) Engine versions must match
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20220728194403-9ev4xe.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20220728194429-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
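For comparison, the usual minor-upgrade sequence on the engine (here, the
hosted-engine VM, with global maintenance enabled) is roughly the one below;
whether it applies in this case depends on what actually caused the version
mismatch, so treat it as a sketch rather than a fix:

```
# Usual minor-upgrade sequence, run on the engine with global maintenance enabled:
engine-upgrade-check          # reports whether an engine upgrade is available
dnf update ovirt\*setup\*     # update the setup packages first
engine-setup                  # upgrades the engine itself
dnf update                    # then update the remaining packages
```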
ovirt vms NotResponding related to firewalls
by Bill James
We recently added a bunch of firewalls around the VLANs where our VMs live, and
it may be causing a lot of confusion for the oVirt management node? Though I
don't know why a firewall would cause intermittent problems.
We see a lot of errors like this (in engine.log):
2022-07-28 19:39:47,923-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-1) [] VM '6e177451-d43f-4e91-85ad-6b8f3bcfc4ad'(epayfraud2.j2noc.com) moved from 'NotResponding' --> 'Up'
...
2022-07-28 19:39:53,676-07 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [] EVENT_ID: VM_NOT_RESPONDING(126), VM epayfraud2.j2noc.com is not responding.
...
2022-07-28 19:40:36,189-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-10) [] VM '6e177451-d43f-4e91-85ad-6b8f3bcfc4ad'(epayfraud2.j2noc.com) moved from 'NotResponding' --> 'Up'
I guess my first question is: how does oVirt determine whether a VM is up or
not? Is it the management node or the host the VM is running on that decides,
and what ports does it use to determine health?
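Regarding the firewall angle: the engine talks to each host's VDSM daemon over
TCP, port 54321 by default, so a basic reachability check from the engine
machine may show whether the new firewalls are dropping or resetting those
connections. A sketch only; the hostname is a placeholder:

```
# From the engine machine, check reachability of a host's VDSM port (54321/tcp by default).
nc -zv host1.example.com 54321
# And look for recent connection/timeout errors on the engine side:
grep -iE 'connect|timeout' /var/log/ovirt-engine/engine.log | tail -n 20
```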
Six node HCI
by Gaurang Patel
Hi,
I am very new to oVirt 6-node Gluster.
We can create a three-node cluster successfully with the 4.5.1 version, but after expanding the Gluster cluster with 3 additional nodes, ovirt-engine goes into a not-responding state.
If you can help me with a step-by-step document, we would be very grateful.
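It would probably also help to gather some basic state from one of the original
hosts when the engine becomes unresponsive. A sketch only; the volume name
follows the usual HC layout and may differ in your deployment:

```
# On one of the original hosts, check Gluster and the hosted-engine state:
gluster peer status            # are all six peers connected?
gluster volume status engine   # is the engine volume healthy (bricks up, quorum met)?
hosted-engine --vm-status      # what do the HA agents report for the engine VM?
```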
Thanks in advance.
Gaurang Patel
DISCLAIMER : The content of this email is confidential and intended for the recipient specified in message only. It is strictly forbidden to share any part of this message with any third party, without a written consent of the sender. If you received this message by mistake, please reply to this message and follow with its deletion, so that we can ensure such a mistake does not occur in the future.
Re: Veeam Backup for RHV (oVirt)
by markeczzz@gmail.com
Ah, I see...
Then, is there any good guide or documentation on how to revert from Keycloak to AAA?
All I could find is how to move from AAA to Keycloak, but not the reverse.
Import machines
by Demeter Tibor
Dear Listmembers,
Is it possible to import oVirt 4.3's VMs into a new oVirt environment via our old storage domains?
I have an old 4.3 self-hosted system with 3 storage domains, but it seems very difficult to upgrade it all the way to 4.5.
I am just wondering: what if I install a completely new system and just attach the old storage domains to it?
Will it work? If yes, what will happen to our VMs and networks?
Thanks in advance,
Regards.
Tibor
Veeam Backup for RHV (oVirt)
by markeczzz@gmail.com
Hi!
Not really sure if this is the right place to ask, but...
I am trying to use Veeam Backup for Red Hat Virtualization on oVirt 4.5.1.
I have been using it on version 4.4.10.7 and it works OK there.
On the Veeam release page it says that the supported OS is RHV 4.4 SP1 (oVirt 4.5).
When I try to do a backup, this is what I get from Veeam Backup.
There are no errors in vdsm.log or engine.log.
2022-07-27 08:08:44.153 00039 [19545] INFO | [LoggingEventsManager_39]: Add to storage LoggingEvent [id: 34248685-a193-4df5-8ff2-838f738e211c, Type: BackupStartedNew]
2022-07-27 08:08:44.168 00039 [19545] INFO | [TaskManager_39]: Create and run new async task. [call method:'RunBackupChain']
2022-07-27 08:08:44.168 00039 [19545] INFO | [AsyncTask_39]: New AsynTask created [id:'aafa22ac-ff2e-4647-becb-dca88e3eb67f', description:'', type:'BACKUP_VM_POLICY']
2022-07-27 08:08:44.168 00039 [19545] INFO | [AsyncTask_39]: Prepare AsynTask to run [id:'aafa22ac-ff2e-4647-becb-dca88e3eb67f']
2022-07-27 08:08:44.176 00031 [19545] INFO | [BackupPolicy_31]: Refresh VMs for policy '6e090c98-d44b-4785-acb4-82a627da5d9b'
2022-07-27 08:08:44.176 00031 [19545] INFO | [BackupPolicy_31]: Begin updating list of active VMs for policy '6e090c98-d44b-4785-acb4-82a627da5d9b' [forceRefresh = True]
2022-07-27 08:08:44.176 00031 [19545] INFO | [RhevCluster_31]: Test connection to cluster [IP: engine.example.org, Port: 443, User: admin@ovirt@internalsso]
2022-07-27 08:08:44.189 00039 [19545] INFO | [TaskManager_39]: AsyncTask registered. [id:'aafa22ac-ff2e-4647-becb-dca88e3eb67f']
2022-07-27 08:08:44.371 00031 [19545] INFO | [RhevCluster_31]: Test connection to cluster success. Status: Success. Message:
2022-07-27 08:08:44.556 00031 [19545] INFO | [BackupPolicyManager_31]: Refreshing the policies data...
2022-07-27 08:08:44.556 00031 [19545] INFO | [BackupPolicy_31]: Begin updating list of active VMs for policy '6e090c98-d44b-4785-acb4-82a627da5d9b' [forceRefresh = False]
2022-07-27 08:08:44.556 00031 [19545] INFO | [BackupPolicy_31]: List of active VMs updated for policy '6e090c98-d44b-4785-acb4-82a627da5d9b' [forceRefresh = False]. Number of active VMs '1'
2022-07-27 08:08:44.556 00031 [19545] INFO | [BackupPolicyManager_31]: Policies data has been refreshed.
2022-07-27 08:08:44.556 00031 [19545] INFO | [BackupPolicy_31]: List of active VMs updated for policy '6e090c98-d44b-4785-acb4-82a627da5d9b' [forceRefresh = True]. Number of active VMs '1'
2022-07-27 08:08:44.556 00031 [19545] INFO | [BackupPolicy_31]: Found the '1' VMs to backup in policy '6e090c98-d44b-4785-acb4-82a627da5d9b'
2022-07-27 08:08:44.564 00031 [19545] INFO | [BackupPolicy_31]: * Parallel policy runner has started * for policy [Name:'test5', ID: '6e090c98-d44b-4785-acb4-82a627da5d9b'
2022-07-27 08:08:44.564 00031 [19545] INFO | [VeeamBackupServer_31]: Test connection to backup server [IP: 'veeambr.example.org', Port: '10006', User: 'rhvproxy']
2022-07-27 08:08:44.931 00031 [19545] INFO | [VeeamBackupServer_31]: Test connection to backup server [IP: 'veeambr.example.org', Port: '10006']. Connection status: ConnectionSuccess. Version: 11.0.1.1261
2022-07-27 08:08:45.423 00031 [19545] INFO | [BackupPolicy_31]: Successfully called CreateVeeamPolicySession for job [UID: '6e090c98-d44b-4785-acb4-82a627da5d9b'], session [UID: 'aafa22ac-ff2e-4647-becb-dca88e3eb67f']
2022-07-27 08:08:45.820 00031 [19545] INFO | [BackupPolicy_31]: Successfully called RetainPolicyVms for job [UID: '6e090c98-d44b-4785-acb4-82a627da5d9b'] with VMs: 50513a65-6ccc-479b-9b61-032e0961b016
2022-07-27 08:08:45.820 00031 [19545] INFO | [BackupPolicy_31]: Start calculating maxPointsCount
2022-07-27 08:08:45.820 00031 [19545] INFO | [BackupPolicy_31]: End calculating maxPointsCount. Result = 7
2022-07-27 08:08:45.820 00031 [19545] INFO | [BackupPolicy_31]: Starting validate repository schedule. Repository [UID: '237e41d6-7c67-4a1f-80bf-d7c73c481209', MaxPointsCount: '7', IsPeriodicFullRequired: 'False']
2022-07-27 08:08:46.595 00031 [19545] INFO | [BackupPolicy_31]: End validate repository schedule. Result: [IsScheduleValid: 'True', ErrorMessage: '']
2022-07-27 08:08:46.597 00031 [19545] INFO | [SessionManager_31]: Start registering a new session[Id: 'b6f3f0e1-7aab-41cb-b0e7-10f5b2ed6708']
2022-07-27 08:08:46.639 00031 [19545] INFO | [SessionManager_31]: Session registered. [Id:'b6f3f0e1-7aab-41cb-b0e7-10f5b2ed6708']
2022-07-27 08:08:46.639 00031 [19545] INFO | [BackupPolicy_31]: Backup VM [id:'50513a65-6ccc-479b-9b61-032e0961b016'] starting...
2022-07-27 08:08:46.639 00031 [19545] INFO | [BackupPolicy_31]: RetentionMergeDisabled: false
2022-07-27 08:08:46.640 00031 [19545] INFO | [TaskManager_31]: Create new async task. [call method:'DoBackup']
2022-07-27 08:08:46.640 00031 [19545] INFO | [AsyncTask_31]: New AsynTask created [id:'15a63192-f3d2-4e3a-af56-240f3733bded', description:'', type:'BACKUP_VM']
2022-07-27 08:08:46.640 00031 [19545] INFO | [AsyncTask_31]: Prepare AsynTask to run [id:'15a63192-f3d2-4e3a-af56-240f3733bded']
2022-07-27 08:08:46.661 00031 [19545] INFO | [TaskManager_31]: AsyncTask registered. [id:'15a63192-f3d2-4e3a-af56-240f3733bded']
2022-07-27 08:08:47.295 00031 [19545] ERROR | [BackupPolicy_31]: Backup VM [id:'50513a65-6ccc-479b-9b61-032e0961b016'] failed. Error ('test: VM UUID=50513a65-6ccc-479b-9b61-032e0961b016 was not found: * Line 1, Column 1
Syntax error: value, object or array expected.
* Line 1, Column 1
A valid JSON document must be either an array or an object value.
')
Are there any changes between versions 4.5 and 4.5.1 that prevent using this?
Any solutions?
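One thing that might narrow it down: the error suggests the proxy received
something that is not valid JSON when it looked up the VM, so it could be worth
checking what the engine API actually returns for that VM. A sketch only; it
reuses the admin@ovirt@internalsso user and VM UUID from the log above, the
password is a placeholder, and basic auth behaviour can differ once Keycloak is
in the picture:

```
# Query the engine REST API directly for the VM the proxy failed to find,
# asking for a JSON response.
curl -k -u 'admin@ovirt@internalsso:PASSWORD' \
     -H 'Accept: application/json' \
     'https://engine.example.org/ovirt-engine/api/vms/50513a65-6ccc-479b-9b61-032e0961b016'
```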
Regards,