4.3.3 restore fails
by magnus@boden.one
Hello,
I have an ovirt-engine 4.3.0 installation that I am migrating to a new machine. I made a backup on the old machine, but the restore fails on the new one.
The new machine is a fresh install of CentOS with ovirt-engine 4.3.3.
[root@oengine ~]# engine-backup --mode=restore --file=ovirt-engine-backup-20190426214711.backup --log=02.17.log --provision-db --provision-dwh-db --restore-permissions
Start of engine-backup with mode 'restore'
scope: all
archive file: ovirt-engine-backup-20190426214711.backup
log file: 02.17.log
Preparing to restore:
- Unpacking file 'ovirt-engine-backup-20190426214711.backup'
Restoring:
- Files
Provisioning PostgreSQL users/databases:
- user 'engine', database 'engine'
- user 'ovirt_engine_history', database 'ovirt_engine_history'
Restoring:
- Engine database 'engine'
FATAL: Errors while restoring database engine
[root@oengine ~]# cat 02.17.log
2019-04-27 02:17:55 32343: Start of engine-backup mode restore scope all file ovirt-engine-backup-20190426214711.backup
2019-04-27 02:17:55 32343: OUTPUT: Start of engine-backup with mode 'restore'
2019-04-27 02:17:55 32343: OUTPUT: scope: all
2019-04-27 02:17:55 32343: OUTPUT: archive file: ovirt-engine-backup-20190426214711.backup
2019-04-27 02:17:55 32343: OUTPUT: log file: 02.17.log
2019-04-27 02:17:55 32343: Setting scl env for rh-postgresql10
2019-04-27 02:17:55 32343: OUTPUT: Preparing to restore:
2019-04-27 02:17:55 32343: OUTPUT: - Unpacking file 'ovirt-engine-backup-20190426214711.backup'
2019-04-27 02:17:55 32343: Opening tarball ovirt-engine-backup-20190426214711.backup to /tmp/engine-backup.NVv4Bxjb2k
2019-04-27 02:17:55 32343: Verifying hash
2019-04-27 02:17:55 32343: Verifying version
2019-04-27 02:17:55 32343: Reading config
2019-04-27 02:17:55 32343: OUTPUT: Restoring:
2019-04-27 02:17:55 32343: OUTPUT: - Files
2019-04-27 02:17:55 32343: Restoring files
2019-04-27 02:17:56 32343: Reloading configuration
2019-04-27 02:17:56 32343: OUTPUT: Provisioning PostgreSQL users/databases:
2019-04-27 02:17:56 32343: provisionDB: user engine host localhost port 5432 database engine secured False secured_host_validation False
2019-04-27 02:17:56 32343: OUTPUT: - user 'engine', database 'engine'
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf', '/tmp/engine-backup.NVv4Bxjb2k/pg-provision-answer-file']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-provisiondb-20190427021756-hxbrea.log
Version: otopi-1.8.1 (otopi-1.8.1-1.el7)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment customization
[ INFO ] Stage: Setup validation
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration (early)
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Creating PostgreSQL 'engine' database
[ INFO ] Configuring PostgreSQL
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-provisiondb-20190427021756-hxbrea.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of provisiondb completed successfully
2019-04-27 02:17:58 32343: provisionDB: user ovirt_engine_history host localhost port 5432 database ovirt_engine_history secured False secured_host_validation False
2019-04-27 02:17:58 32343: OUTPUT: - user 'ovirt_engine_history', database 'ovirt_engine_history'
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf', '/tmp/engine-backup.NVv4Bxjb2k/pg-provision-answer-file']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-provisiondb-20190427021758-s2av1b.log
Version: otopi-1.8.1 (otopi-1.8.1-1.el7)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment customization
[ INFO ] Stage: Setup validation
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration (early)
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Creating PostgreSQL 'ovirt_engine_history' database
[ INFO ] Configuring PostgreSQL
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-provisiondb-20190427021758-s2av1b.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of provisiondb completed successfully
2019-04-27 02:18:00 32343: OUTPUT: Restoring:
2019-04-27 02:18:00 32343: Generating pgpass
2019-04-27 02:18:00 32343: Verifying connection
2019-04-27 02:18:00 32343: pg_cmd running: psql -w -U engine -h localhost -p 5432 engine -c select 1
?column?
----------
1
(1 row)
2019-04-27 02:18:00 32343: pg_cmd running: psql -w -U engine -h localhost -p 5432 engine -t -c show lc_messages
2019-04-27 02:18:00 32343: pg_cmd running: pg_dump -w -U engine -h localhost -p 5432 engine -s
2019-04-27 02:18:01 32343: pg_cmd running: psql -w -U ovirt_engine_history -h localhost -p 5432 ovirt_engine_history -c select 1
?column?
----------
1
(1 row)
2019-04-27 02:18:01 32343: pg_cmd running: psql -w -U ovirt_engine_history -h localhost -p 5432 ovirt_engine_history -t -c show lc_messages
2019-04-27 02:18:01 32343: pg_cmd running: pg_dump -w -U ovirt_engine_history -h localhost -p 5432 ovirt_engine_history -s
2019-04-27 02:18:01 32343: OUTPUT: - Engine database 'engine'
2019-04-27 02:18:01 32343: Restoring engine database backup at /tmp/engine-backup.NVv4Bxjb2k/db/engine_backup.db
2019-04-27 02:18:01 32343: restoreDB: backupfile /tmp/engine-backup.NVv4Bxjb2k/db/engine_backup.db user engine host localhost port 5432 database engine orig_user compressor format custom jobsnum 2
2019-04-27 02:18:01 32343: pg_cmd running: pg_restore -w -U engine -h localhost -p 5432 -d engine -j 2 /tmp/engine-backup.NVv4Bxjb2k/db/engine_backup.db
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 7624; 0 0 COMMENT EXTENSION plpgsql
pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of extension plpgsql
Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
pg_restore: [archiver (db)] Error from TOC entry 7625; 0 0 COMMENT EXTENSION "uuid-ossp"
pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of extension uuid-ossp
Command was: COMMENT ON EXTENSION "uuid-ossp" IS 'generate universally unique identifiers (UUIDs)';
pg_restore: [archiver (db)] Error from TOC entry 6400; 2606 18298 CONSTRAINT external_variable pk_external_variable engine
pg_restore: [archiver (db)] could not execute query: ERROR: could not create unique index "pk_external_variable"
DETAIL: Key (var_name)=(fence-kdump-listener-heartbeat) is duplicated.
Command was: ALTER TABLE ONLY public.external_variable
ADD CONSTRAINT pk_external_variable PRIMARY KEY (var_name);
WARNING: errors ignored on restore: 4
2019-04-27 02:18:43 32343: Non-ignored-errors in pg_restore log:
pg_restore: [archiver (db)] could not execute query: ERROR: could not create unique index "pk_external_variable"
2019-04-27 02:18:43 32343: FATAL: Errors while restoring database engine
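The duplicate-key failure above can be reproduced in miniature with Python's sqlite3 (standing in for PostgreSQL). The table and column names come straight from the pg_restore error; the dedup DELETE is one possible cleanup you might run on the *source* engine database before taking the backup — an assumption on my part, not a documented oVirt procedure, so take a fresh backup before trying anything like it.

```python
import sqlite3

# Reproduce the failing constraint: two rows share the same var_name, so the
# unique index cannot be built on restore. Names taken from the pg_restore
# error; the cleanup below is an illustrative assumption, not oVirt procedure.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE external_variable (var_name TEXT, var_value TEXT)")
db.executemany(
    "INSERT INTO external_variable VALUES (?, ?)",
    [("fence-kdump-listener-heartbeat", "old"),
     ("fence-kdump-listener-heartbeat", "new"),
     ("some-other-var", "x")],
)

# Find the duplicated keys, as the DETAIL line in the log reports.
dups = db.execute(
    "SELECT var_name FROM external_variable"
    " GROUP BY var_name HAVING COUNT(*) > 1"
).fetchall()
print(dups)  # [('fence-kdump-listener-heartbeat',)]

# Keep one row per var_name (here: the highest rowid), drop the rest,
# after which building the primary-key index succeeds.
db.execute(
    "DELETE FROM external_variable WHERE rowid NOT IN"
    " (SELECT MAX(rowid) FROM external_variable GROUP BY var_name)"
)
db.execute("CREATE UNIQUE INDEX pk_external_variable"
           " ON external_variable (var_name)")
```

The same GROUP BY / HAVING query, adapted to psql, would at least confirm whether the duplicate rows exist in the source database before you attempt another backup.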
If this is a problem with restoring a 4.3.0 backup onto a 4.3.3 engine, is it possible to install the 4.3.0 version instead?
Following the installation section of the 4.3.0 release notes results in 4.3.3 being installed.
Best Regards
Magnus
[Users] importing from kvm into ovirt
by Jonathan Horne
I need to import a KVM virtual machine from a standalone KVM host into my oVirt cluster. The standalone host is using local storage, and my oVirt cluster is using iSCSI. Can I please have some advice on what's the best way to get this system into oVirt?
Right now I see it as copying the .img file to somewhere… but I have no idea where to start. I found this directory on one of my oVirt nodes:
/rhev/data-center/mnt/blockSD/fe633237-14b2-4f8b-aedd-bbf753bcafaf/master/vms
But inside are just directories that appear to have UUID-type names, and I can't tell what belongs to which VM.
Any advice would be greatly appreciated.
Thanks,
jonathan
________________________________
This is a PRIVATE message. If you are not the intended recipient, please delete without copying and kindly advise us by e-mail of the mistake in delivery. NOTE: Regardless of content, this e-mail shall not operate to bind SKOPOS to any order or other contract unless pursuant to explicit written agreement or government initiative expressly permitting the use of e-mail for such purpose.
Upgraded ovirt-4.2 to ovirt-4.3 and VM run fails
by Gobinda Das
Hi,
I have upgraded oVirt 4.2 to oVirt 4.3, and after the upgrade starting VMs gives me this error:
Cannot run VM. There is no host that satisfies current scheduling
constraints. See below for details: The host node1.test.com did not satisfy
internal filter ClusterInMaintenance because the cluster is in maintenance
and only highly available VMs are allowed to start. The host node2.test.com
did not satisfy internal filter ClusterInMaintenance because the cluster is
in maintenance and only highly available VMs are allowed to start. The host
node3.test.com did not satisfy internal filter ClusterInMaintenance because
the cluster is in maintenance and only highly available VMs are allowed to
start.
Even though the cluster is not in maintenance mode.
This setup uses Gluster 6 for storage.
--
Thanks,
Gobinda
HA VMs fail to start on host failure
by tezmobile@googlemail.com
In a host failure situation, we see that oVirt tries to restart the VMs on other hosts in the cluster, but this (more often than not) fails because KVM is unable to acquire a write lock on the qcow2 image. We see oVirt attempt to restart the VMs several times, each time on a different host but with the same outcome, after which it gives up trying.
After this we must log into the oVirt web interface and start the VM manually, which works fine (by this time, we assume, enough time has passed for the lock to clear itself).
This behaviour is experienced with CentOS 7.6, libvirt 4.5.0-10, and vdsm 4.30.13-1.
Log excerpt from hosted engine:
2019-04-24 17:05:26,653+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-82) [] VM 'ef7e04f0-764a-4cfe-96bf-c0862f1f5b83'(vm-21.example.local) moved from 'WaitForLaunch' --> 'Down'
2019-04-24 17:05:26,710+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-82) [] EVENT_ID: VM_DOWN_ERROR(119), VM vm-21.example.local is down with error. Exit message: internal error: process exited while connecting to monitor: 2019-04-24T16:04:48.049352Z qemu-kvm: -drive file=/rhev/data-center/mnt/192.168.111.111:_/21a1390b-b73b-46b1-85b9-2bbf9bba5308/images/c9d96ab6-cb0b-4fba-9b07-096ff750c7f7/16da3660-1afe-40a3-b868-3a74e74bab2f,format=qcow2,if=none,id=drive-ua-c9d96ab6-cb0b-4fba-9b07-096ff750c7f7,serial=c9d96ab6-cb0b-4fba-9b07-096ff750c7f7,werror=stop,rerror=stop,cache=none,aio=threads: 'serial' is deprecated, please use the corresponding option of '-device' instead
2019-04-24T16:04:48.079989Z qemu-kvm: -drive file=/rhev/data-center/mnt/192.168.111.111:_/21a1390b-b73b-46b1-85b9-2bbf9bba5308/images/c9d96ab6-cb0b-4fba-9b07-096ff750c7f7/16da3660-1afe-40a3-b868-3a74e74bab2f,format=qcow2,if=none,id=drive-ua-c9d96ab6-cb0b-4fba-9b07-096ff750c7f7,serial=c9d96ab6-cb0b-4fba-9b07-096ff750c7f7,werror=stop,rerror=stop,cache=none,aio=threads: Failed to get "write" lock
So my question is: how can I either force oVirt to keep trying to restart the VM, or delay the initial restart attempt long enough for the locks to clear?
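The race being described can be sketched in a few lines of Python. qemu's image locking actually uses OFD byte-range locks rather than flock, and the `try_start` helper below is purely illustrative (not an oVirt API) — but the retry-until-the-lock-clears pattern is the same one being asked about:

```python
import fcntl
import os
import tempfile
import time

fd, path = tempfile.mkstemp()
os.close(fd)

# First opener takes an exclusive lock, as qemu does on a running VM's image.
holder = open(path, "wb")
fcntl.flock(holder, fcntl.LOCK_EX | fcntl.LOCK_NB)

def try_start(path, attempts=5, delay=0.1):
    """Retry taking the lock, as a delayed/repeating restart policy might."""
    for _ in range(attempts):
        f = open(path, "wb")
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return f  # lock acquired, the "VM" can start
        except BlockingIOError:
            f.close()
            time.sleep(delay)
    return None  # gave up, like oVirt does after several hosts fail

assert try_start(path, attempts=2) is None  # old holder still alive
holder.close()                              # lock released (host truly gone)
assert try_start(path) is not None          # restart now succeeds
os.unlink(path)
```

The point of the sketch is that a restart attempt made *before* the old holder's lock is released will always fail, no matter which host it lands on — which matches the behaviour above where every immediate restart fails but a later manual start works.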
Template Disk Corruption
by Alex McWhirter
1. Create a server template from a server VM (so it's a full copy of the
disk).
2. From the template, create a VM and override "server" to "desktop", so
that it becomes a qcow2 overlay on top of the template's raw disk.
3. Boot VM
4. Shutdown VM
5. Delete VM
The template disk is now corrupt; any new machines made from it will not
boot.
I can't see why this happens, as the desktop-optimized VM should have
just been an overlay qcow2 file...
oVirt 4.3.2 Error: genev_sys_6081 is not present in the system
by Dee Slaw
Hello, I've installed oVirt 4.3.2 and the problem is that it logs this
message in Open Virtualization Manager:
VDSM ovirt-04 command Get Host Statistics failed: Internal JSON-RPC error:
{'reason': '[Errno 19] genev_sys_6081 is not present in the system'}
It also keeps on logging in /var/log/messages:
Mar 22 11:51:03 ovirt-04 NetworkManager[8725]: <info> [1553244663.9861] device
(genev_sys_6081): carrier: link connected
Mar 22 11:51:03 ovirt-04 NetworkManager[8725]: <info> [1553244663.9864] manager:
(genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/34160)
Mar 22 11:51:03 ovirt-04 NetworkManager[8725]: <info> [1553244663.9866] device
(genev_sys_6081): enslaved to non-master-type device ovs-system; ignoring
Mar 22 11:51:03 ovirt-04 NetworkManager[8725]: <info> [1553244663.9906] device
(genev_sys_6081): enslaved to non-master-type device ovs-system; ignoring
Mar 22 11:51:03 ovirt-04 kernel: device genev_sys_6081 left promiscuous mode
Mar 22 11:51:04 ovirt-04 kernel: device genev_sys_6081 entered promiscuous mode
Mar 22 11:51:04 ovirt-04 NetworkManager[8725]: <info> [1553244664.0038] device
(genev_sys_6081): carrier: link connected
Mar 22 11:51:04 ovirt-04 NetworkManager[8725]: <info> [1553244664.0042] manager:
(genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/34161)
Mar 22 11:51:04 ovirt-04 NetworkManager[8725]: <info> [1553244664.0044] device
(genev_sys_6081): enslaved to non-master-type device ovs-system; ignoring
Mar 22 11:51:04 ovirt-04 NetworkManager[8725]: <info> [1553244664.0082] device
(genev_sys_6081): enslaved to non-master-type device ovs-system; ignoring
Mar 22 11:51:04 ovirt-04 kernel: device genev_sys_6081 left promiscuous mode
Mar 22 11:51:04 ovirt-04 kernel: device genev_sys_6081 entered promiscuous mode
Also I can see the following in /var/log/openvswitch:
2019-03-22T08:53:12.413Z|131047|bridge|WARN|could not add network device ovn-034a1c-0 to
ofproto (File exists)
2019-03-22T08:53:12.431Z|131048|tunnel|WARN|ovn-a21088-0: attempting to add tunnel port
with same config as port 'ovn-0836d7-0' (::->192.168.10.24, key=flow,
legacy_l2, dp
port=2)
2019-03-22T08:53:12.466Z|131055|connmgr|WARN|accept failed (Too many open files)
2019-03-22T08:53:12.466Z|131056|unixctl|WARN|punix:/var/run/openvswitch/ovs-vswitchd.9103.ctl:
accept failed: Too many open files
In ovsdb-server.log:
2019-03-22T03:24:12.583Z|05792|jsonrpc|WARN|unix#28684: receive error: Connection reset by
peer
2019-03-22T03:24:12.583Z|05793|reconnect|WARN|unix#28684: connection dropped (Connection
reset by peer)
How can I fix this issue with the Geneve tunnels?
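As an aside on the "accept failed (Too many open files)" lines above: ovs-vswitchd holds a descriptor per port and connection, so an exhausted soft fd limit can itself cause churn. For the real daemon the limit would be raised in its service configuration (e.g. `LimitNOFILE=` in a systemd unit), but the check-and-raise logic looks like this minimal sketch, which inspects its own process purely as an illustration:

```python
import resource

# Inspect the file-descriptor limit this process inherited. A daemon that
# logs "Too many open files" has exhausted its soft limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# The soft limit can be raised up to the hard limit without privileges;
# raising the hard limit itself requires root (or a unit-file change).
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
new_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
assert new_soft == hard
```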
unable to upload disk images after upgrade to 4.3.3 -- ticket failures
by Edward Berger
Previously I had issues with the upgrade to 4.3.3 failing because of
"stale" image-transfer data, so I removed it from the database using the
info given here on the mailing list and was able to complete the oVirt node
and engine upgrades.
Now I have a new problem: I can't upload a disk image anymore, which used
to work.
"test connection" returns success.
In the dashboard Storage/Disks view, it starts and then stops with "paused
by system"
I tried pointing it at another node; same problem. I tried restarting the
ovirt-imageio-proxy service on the engine and ovirt-imageio-daemon on the
node; same failure.
In the dashboard events, when trying to upload a disk, I get this error:
Transfer was stopped by system. Reason: failed to add image ticket to
ovirt-imageio-proxy.
or, when trying to resume a transfer "paused by system" by giving it the
local path again:
Transfer was stopped by system. Reason: failure in transfer image
ticket renewal.
I'm not sure which logs I should be looking deeper into.
Here's what the database says
engine=# select * from image_transfers;
-[ RECORD 1 ]-------------+-------------------------------------
command_id | 59dce2b1-f8ba-44dd-9df9-c6e39773a3f9
command_type | 1024
phase | 4
last_updated | 2019-04-23 12:22:25.098-04
message | Uploading from byte 0
vds_id |
disk_id | 6693d5ac-d3eb-43d7-9abe-97c5197efc23
imaged_ticket_id |
proxy_uri |
signed_ticket |
bytes_sent | 0
bytes_total | 1996488704
type | 2
active | f
daemon_uri |
client_inactivity_timeout | 60
-[ RECORD 2 ]-------------+-------------------------------------
command_id | 16898219-be5b-4826-8f20-3355fa47272a
command_type | 1024
phase | 4
last_updated | 2019-04-23 16:15:34.591-04
message | Uploading from byte 0
vds_id |
disk_id | 3b8e3053-bfe4-49d3-abd2-5452b1674400
imaged_ticket_id |
proxy_uri |
signed_ticket |
bytes_sent | 0
bytes_total | 1998585856
type | 2
active | f
daemon_uri |
client_inactivity_timeout | 60
I really need the image uploading to work. Any suggestions on what to do
next?