Importing VMs from a 4.1.6.2-1 storage pool into 4.4.6 fails due to lingering snapshot configuration in the source
by Charles Kozler
I know the subject is a bit wordy, but let me try to explain.
I have a 4.1 installation that I am migrating to 4.4. I placed the storage
domain into maintenance and then rsync'ed it to the storage I am using for
4.4. In 4.4 I import the domain and it comes in fine; however, when trying
to import some VMs it fails saying it cannot find a disk. I cannot find the
disk IDs anywhere in 4.1 or 4.4, so I opened OVF_STORE after upgrading the
pool in 4.4 and looked at the ovf files for some of the VMs that are failing
to import.
I see this in the ovf configuration file:
<Section xsi:type="ovf:SnapshotsSection_Type">
<Snapshot ovf:id="e300fa3b-eee4-4b96-9103-45115081db57">
<Type>ACTIVE</Type>
<Description>Active VM</Description>
<CreationDate>2019/12/19 16:47:48</CreationDate>
<Memory>07d3447c-c640-4163-9cf1-74ce126d702a,59b5cfdf-015b-0333-004b-000000000397,8c48ea33-dc34-403a-90d2-a3e78776404c,1757809b-9685-4315-a92e-3ea82dc6d8a8,94a2c2b3-e02e-4b05-80eb-6f4b7a9701e7,248a9fd1-61f0-4c5a-b6ab-4630d433c6fa</Memory>
</Snapshot>
</Section>
</Content>
</ovf:Envelope>
These are the disk IDs I can see failing to import in engine.log.
I am aware that importing a VM with snapshots is not supported and have
found multiple mailing list posts about this; however, I do not have any
snapshots visible in the 4.1 UI.
I also tried modifying OVF_STORE by untarring it and editing the ovf files
directly; however, this does not work, as it ends up being overwritten by
oVirt with the previous configuration.
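For reference, this is roughly how I got at the ovf files (the export path
and UUIDs below are placeholders for the actual OVF_STORE disk on my
storage; as far as I can tell the OVF_STORE volume on a file-based domain is
simply a tar archive):
# locate the OVF_STORE disk under the imported storage domain (placeholders)
cd /rhev/data-center/mnt/<server:_export>/<domain_uuid>/images/<ovf_store_image_uuid>/
# list the archive; it contains one <vm_id>.ovf entry per VM
tar -tvf <ovf_store_volume_uuid>
# extract the ovf for a single VM to edit it
tar -xvf <ovf_store_volume_uuid> <vm_id>.ovf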
Any ideas? As noted, I cannot see any snapshots listed in the 4.1 source
pool, so I cannot remove this stale configuration from the source, and upon
importing into 4.4 the VMs fail to register because the disks cannot be
found.
I tried a partial import and that still fails, which seems like a bug to me,
because the whole idea of a partial import is to still register the VM even
if it can't find the disks.
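For completeness, the 'Partial Import' option maps to the register action's
allow_partial_import flag in the REST API; the call looks roughly like this
(engine FQDN, credentials, and the storage domain and VM UUIDs are
placeholders):
# register an unregistered VM with partial import allowed (placeholders throughout)
curl -s -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' \
  -d '<action><allow_partial_import>true</allow_partial_import><cluster><name>Default</name></cluster></action>' \
  'https://engine.example.com/ovirt-engine/api/storagedomains/<sd_uuid>/vms/<vm_uuid>/register'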
I also tried scanning the domain for disks and updating OVFs in the UI, and
neither helps.
3 years, 5 months
Grafana oVirt 4.4.4 Node install - Grafana Monitoring Portal not available
by simon@justconnect.ie
Having installed an oVirt 4.4.4 HCI three-node cluster, the Grafana 'Monitoring Portal' link was not visible on the default welcome page.
It appears that Grafana is installed but not configured.
I have checked a 4.4.3 environment where Grafana is installed and running, so is this a bug in 4.4.4.7-1.el8?
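If it is just a setup step that was skipped, I assume it could be enabled afterwards by re-running engine-setup on the engine VM and answering Yes to the Grafana question, something like the command below, but I'd like to confirm it isn't a bug first:
# hedged guess: re-run setup and enable the optional monitoring/Grafana component when prompted
engine-setup --reconfigure-optional-components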
Any help would be appreciated.
3 years, 5 months
Replacing hosted engine on Gluster
by Chris Adams
I've run oVirt on iSCSI storage for years, and I've had to replace the
hosted engine a couple of times (upgrade from CentOS 6 to 7, then moving
to new storage).
I'm looking at oVirt on hyperconverged Gluster storage. I see pages
about how to replace a host, but how do you replace the engine when
needed? Is it possible to connect a new hosted engine to the existing
Gluster storage?
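For context, on iSCSI the procedure I've used has essentially been a backup of the old engine followed by a fresh deployment that restores it; I'm assuming something similar would apply here (file names are placeholders):
# on the old engine VM: take a full backup and copy it to the host
engine-backup --mode=backup --scope=all --file=engine.backup --log=engine-backup.log
# on a host: redeploy the hosted engine and restore that backup into the new engine VM
hosted-engine --deploy --restore-from-file=engine.backup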
I'm just trying to understand all the differences between iSCSI and Gluster
before deploying it for real; if there's info online that I missed, feel
free to point me to it.
Thanks!
--
Chris Adams <cma(a)cmadams.net>
3 years, 5 months
[ovirt-devel] Failed to execute stage 'Setup validation': Trying to upgrade from unsupported versions: 3.5
by Miguel Angel Costas
Hi All,
I am trying to upgrade my oVirt from 3.6.7.5 to 4.0, but after restoring the backup and executing engine-setup, an error message appears related to the cluster version. I have checked in the portal and the cluster compatibility version is set to 3.6. Do you have any idea how to solve this?
[root@na5lovm01 ~]# engine-backup --mode=restore --no-restore-permissions --provision-db --file=/ovirt_bkp/miga-090621.tar --log=engine-backup-restore.log
Preparing to restore:
- Unpacking file '/ovirt_bkp/miga-090621.tar'
Restoring:
- Files
------------------------------------------------------------------------------
Please note:
Operating system is different from the one used during backup.
Current operating system: centos7
Operating system at backup: centos6
Apache httpd configuration will not be restored.
You will be asked about it on the next engine-setup run.
------------------------------------------------------------------------------
Provisioning PostgreSQL users/databases:
- user 'engine', database 'engine'
Restoring:
- Engine database 'engine'
- Cleaning up temporary tables in engine database 'engine'
------------------------------------------------------------------------------
Please note:
The engine database was backed up at 2021-06-09 13:13:29.000000000 -0300 .
Objects that were added, removed or changed after this date, such as virtual
machines, disks, etc., are missing in the engine, and will probably require
recovery or recreation.
------------------------------------------------------------------------------
You should now run engine-setup.
Done.
[root@na5lovm01 ~]# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20210609140045-iiue97.log
Version: otopi-1.5.2 (otopi-1.5.2-1.el7.centos)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== PRODUCT OPTIONS ==--
Configure Image I/O Proxy on this host? (Yes, No) [Yes]:
Please note: Data Warehouse is required for the engine. If you choose to not configure it on this host, you have to configure it on a remote host, and then configure the engine on this host so that it can access the database of the remote Data Warehouse host.
Configure Data Warehouse on this host (Yes, No) [Yes]:
--== PACKAGES ==--
[ INFO ] Checking for product updates...
[ INFO ] No product updates found
--== NETWORK CONFIGURATION ==--
Setup can automatically configure the firewall on this system.
Note: automatic configuration of the firewall may overwrite current settings.
Do you want Setup to configure the firewall? (Yes, No) [Yes]: No
--== DATABASE CONFIGURATION ==--
Where is the DWH database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the DWH to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql and create DWH database, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
--== OVIRT ENGINE CONFIGURATION ==--
--== STORAGE CONFIGURATION ==--
--== PKI CONFIGURATION ==--
--== APACHE CONFIGURATION ==--
Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
--== SYSTEM CONFIGURATION ==--
--== MISC CONFIGURATION ==--
Please choose Data Warehouse sampling scale:
(1) Basic
(2) Full
(1, 2)[1]:
--== END OF CONFIGURATION ==--
[ INFO ] Stage: Setup validation
[ ERROR ] The following Clusters have a too old compatibility level, please upgrade them:
Default
[ ERROR ] Failed to execute stage 'Setup validation': Trying to upgrade from unsupported versions: 3.5
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20210609140045-iiue97.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20210609140247-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
I checked the engine database and I couldn't find data related to the cluster version. I only found this:
-bash-4.1$ psql -U postgres -d engine
psql (8.4.20)
Type "help" for help.
engine=# select * from cluster_features;
feature_id | feature_name | version | category | description
--------------------------------------+--------------------------+---------+----------+------------------------------------
00000017-0017-0017-0017-000000000066 | GLUSTER_GEO_REPLICATION | 3.5 | 2 | Gluster Geo-Replication
00000018-0018-0018-0018-000000000093 | GLUSTER_SNAPSHOT | 3.5 | 2 | Gluster Volume Snapshot Management
00000019-0019-0019-0019-000000000300 | GLUSTER_BRICK_MANAGEMENT | 3.5 | 2 | Gluster Brick Provisioning
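Is the per-cluster compatibility level supposed to be visible with a query along these lines? I'm not sure of the exact table name for this schema version (vds_groups in 3.6-era schemas, cluster from 4.0 on):
# check which clusters are still at an old compatibility level
psql -U postgres -d engine -c "SELECT name, compatibility_version FROM vds_groups;"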
Best regards
3 years, 5 months
Operation canceled: CA not found.
by jesado74@gmail.com
Hello; when I try to open the graphical console of a VM I get the following message:
Operation canceled: CA not found.
I changed the certificates as described in the procedure at https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/...
All services (ovirt-engine.service, ovirt-websocket-proxy, ovirt-imageio-proxy.service, ovirt-provider-ovn.service) are working properly.
But there is something wrong.
Has this happened to anyone else when changing the oVirt engine certificates to a third-party CA certificate?
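I can't tell whether it is related, but one check would be whether the engine still serves its CA certificate at the standard pki-resource URL that console clients download it from (engine FQDN is a placeholder):
# should return a PEM CA certificate
curl -k 'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'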
Thanks.
3 years, 5 months
Why can't oVirt connect to KVM libvirtd?
by tommy
The error log is:
VDSM olvms3 command GetVmsNamesFromExternalProviderVDS failed: Cannot recv
data: Host key verification failed.: Connection reset by peer
But if I use virsh, it can connect:
[root@olvms1 ~]# virsh -c qemu+ssh://root@192.168.10.175/system
root@192.168.10.175's password:
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh #
virsh #
virsh # list
Id Name State
--------------------
virsh #
virsh #
virsh # list --all
Id Name State
------------------------
- 1.vm1 shut off
virsh #
Why ??
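The only difference I can think of is that virsh above runs as root, whereas I would guess the engine-side check runs as the vdsm user on the proxy host, so its known_hosts may be missing the target's host key; something like this should show whether that is the case (IP as in my setup above):
# hedged guess: try the connection as the vdsm user rather than root
sudo -u vdsm ssh root@192.168.10.175 'true'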
3 years, 5 months
Hosted-Engine import
by Harry O
Hi,
Is it possible to import the hosted engine VM using only the VM files on Gluster?
If yes, how?
3 years, 5 months
Recovering from corrupted hosted-engine
by timothy.dilbert@bmt.ky
Hi Guys,
We had a 2-node self-supported RHEV cluster that we used for our development environment. We're in the middle of a migration from RHEV to VMware: one host has already been converted to VMware, and we have been actively migrating VMs across. During the migration we had an extended power outage and had to improperly shut down the RHEV host. Since bringing it back up we've not been able to start the hosted engine. Each time we try to start it we get the following:
## START
[root@bmrhev01 ~]# hosted-engine --vm-start
VM exists and is down, cleaning up and restarting
VM in WaitForLaunch
[root@bmrhev01 ~]# hosted-engine --vm-status
--== Host bmrhev01 (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : bmrhev01
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 325ff4b3
local_conf_timestamp : 7920
Host timestamp : 7920
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=7920 (Tue Jun 15 14:59:32 2021)
host-id=1
score=3400
vm_conf_refresh_time=7920 (Tue Jun 15 14:59:32 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
END ##
We've tried rebooting a number of times and restarting various oVirt services, but nothing appears to resolve the issue. At this point, the only thing I truly care about is migrating the guest VMs over to VMware; there's nothing else in the RHEV environment I care about. I'm happy to settle for either of the following, provided the guest VMs can be salvaged:
a) Deleting and redeploying the hosted-engine.
b) Abandoning the hosted-engine and somehow converting the guest VMs over to VMware.
I don't have enough experience to know whether either of the above is possible, or what the repercussions would be. Any help from anyone would be very welcome. I'm sorry to be a leech rather than a contributor, but I'm seriously in trouble here and a kind heart would be much appreciated.
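If it helps with option (b), I'm assuming the guest disk images could be pulled straight off the storage domain and converted with qemu-img, roughly like this (all paths and UUIDs are placeholders, and the path shown is for a file-based domain; a block domain would go through the LVs instead):
# convert one guest disk image into a VMware-compatible format (placeholders throughout)
qemu-img convert -p -O vmdk \
    /rhev/data-center/mnt/<storage_server:_export>/<sd_uuid>/images/<image_uuid>/<volume_uuid> \
    /export/<vm_name>-disk1.vmdk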
Thanks, Tim.
3 years, 5 months
oVirt + Gluster issues
by suporte@logicworks.pt
Hello,
I am running oVirt 4.4.4.7-1.el8 and Gluster 8.3.
When I perform a restore of Zimbra Collaboration Email with features.shard on, the VM pauses with an unknown storage error.
When I perform the same restore with features.shard off, it fills up all the disks of the Gluster storage domain.
The same happens with older versions of Gluster and oVirt. If I use an NFS storage domain it runs fine.
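For reference, the shard-related settings on the volume can be listed with something like this (volume name is a placeholder):
# list the shard options in effect on the volume
gluster volume get <volume_name> all | grep -i shard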
--
Jose Ferradeira
http://www.logicworks.pt
3 years, 5 months