Re: Move Self Hosted Engine to Standalone
by Jeremy Tourville
I did get a little further.
First I ran engine-cleanup on my new engine, where I will be running the restore operation. (I had previously run the engine-setup script on this machine.)
Then I ran this-
[root@engine ~]# engine-backup --mode=restore --file=ovirt-engine-backup-20200217125040.backup --log=ovirt-engine-backup-20200217125040.log --provision-db --provision-dwh-db --restore-permissions
Start of engine-backup with mode 'restore'
scope: all
archive file: ovirt-engine-backup-20200217125040.backup
log file: ovirt-engine-backup-20200217125040.log
Preparing to restore:
- Unpacking file 'ovirt-engine-backup-20200217125040.backup'
Restoring:
- Files
Provisioning PostgreSQL users/databases:
- user 'engine', database 'engine'
FATAL: Existing database 'engine' or user 'engine' found and temporary ones created - Please clean up everything and try again
Time to research that FATAL error message further. Isn't cleaning this up exactly what engine-cleanup is for, though? Why the conflict?
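One possible cleanup before retrying, assuming the leftover databases live in the engine machine's local PostgreSQL (on oVirt 4.3 that is typically the rh-postgresql10 software collection) and use the default names, is to drop the stale databases and roles by hand:
# su - postgres -c "scl enable rh-postgresql10 -- psql"
postgres=# DROP DATABASE engine;
postgres=# DROP DATABASE ovirt_engine_history;
postgres=# DROP ROLE engine;
postgres=# DROP ROLE ovirt_engine_history;
The restore log named in the command above should also show what the temporary databases created during the failed run were called, in case those need dropping too.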
________________________________
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Monday, February 17, 2020 1:42 PM
To: Jeremy Tourville <jeremy_tourville(a)hotmail.com>; users(a)ovirt.org <users(a)ovirt.org>
Subject: Re: [ovirt-users] Move Self Hosted Engine to Standalone
Take a look at the RHEV documentation here, go to section 6.2.2, and see if that helps.
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/...
________________________________________
From: Jeremy Tourville <jeremy_tourville(a)hotmail.com>
Sent: Monday, February 17, 2020 2:26 PM
To: Robert Webb; users(a)ovirt.org
Subject: Re: [ovirt-users] Move Self Hosted Engine to Standalone
OK, I was able to get the backup completed. I am a little confused about how to do the restore, though. https://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Resto...
Is this link even applicable? My environment is a single node, not EL-based. Anyhow, here is what I have so far:
[root@engine glusterfs]# engine-backup
Start of engine-backup with mode 'backup'
scope: all
archive file: /var/lib/ovirt-engine-backup/ovirt-engine-backup-20200217125040.backup
log file: /var/log/ovirt-engine-backup/ovirt-engine-backup-20200217125040.log
Backing up:
Notifying engine
- Files
- Engine database 'engine'
- DWH database 'ovirt_engine_history'
Packing into file '/var/lib/ovirt-engine-backup/ovirt-engine-backup-20200217125040.backup'
Notifying engine
Done.
[root@engine glusterfs]#
I moved the backup file to my new engine.
How do I perform the restore?
The directions say:
# engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --provision-dwh-db --restore-permissions
What are the file_name and log_file_name values supposed to be? Do I need to do something to unpack my backup file?
________________________________
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Monday, February 17, 2020 9:13 AM
To: jeremy_tourville(a)hotmail.com <jeremy_tourville(a)hotmail.com>; users(a)ovirt.org <users(a)ovirt.org>
Subject: RE: [ovirt-users] Move Self Hosted Engine to Standalone
Can you take a backup of the original, build the new one, then do a restore?
> -----Original Message-----
> From: jeremy_tourville(a)hotmail.com <jeremy_tourville(a)hotmail.com>
> Sent: Monday, February 17, 2020 10:11 AM
> To: users(a)ovirt.org
> Subject: [ovirt-users] Move Self Hosted Engine to Standalone
>
> I have a single oVirt host running a self-hosted engine. I'd like to move the
> engine off the host and run it on a standalone server. I am running Software
> Version: 4.3.6.6-1.el7. Can anyone tell me what the procedure is for that?
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org Privacy Statement:
> https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7IRADM7HZW
> RAD6Y6F76T5CS4ABQ3Y3R/
What is this error message from?
by jeremy_tourville@hotmail.com
I have seen this error message repeatedly when reviewing events.
VDSM vmh.cyber-range.lan command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', u'/rhev/data-center/mnt/glusterSD/storage.cyber-range.lan:_vmstore/dd69364b-2c02-4165-bc4b-2f2a3b7fc10d/images/c651575f-75a0-492e-959e-8cfee6b6a7b5/9b5601fe-9627-4a8a-8a98-4959f68fb137', '-O', 'qcow2', '-o', 'compat=1.1', u'/rhev/data-center/mnt/glusterSD/storage.cyber-range.lan:_vmstore/dd69364b-2c02-4165-bc4b-2f2a3b7fc10d/images/6a2ce11a-deec-41e0-a726-9de6ba6d4ddd/6d738c08-0f8c-4a10-95cd-eeaa2d638db5'] failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading sector 24117243: No such file or directory\\n')",)
Can someone explain what this is? How do I get this cleared up/resolved?
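The failing step is a qemu-img convert of a raw source image on the vmstore Gluster volume into a qcow2 copy, and an ENOENT while reading a sector usually points at the underlying Gluster file rather than at qemu itself. A couple of hedged first checks, assuming the volume is really named vmstore as the mount path suggests:
# ls -l /rhev/data-center/mnt/glusterSD/storage.cyber-range.lan:_vmstore/dd69364b-2c02-4165-bc4b-2f2a3b7fc10d/images/c651575f-75a0-492e-959e-8cfee6b6a7b5/
# gluster volume heal vmstore info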
oVirt & Zabbix agent
by Diggy Mc
Are there any known issues with running the Zabbix agent on either the Hosted Engine (4.3.8) or oVirt Nodes (4.3.8)? I'd like to install the agent without breaking my hosting environment.
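For what it's worth, the engine VM is a regular EL7 guest, so the agent install there is the standard Zabbix one; a minimal sketch, assuming the Zabbix EL7 repository is already configured and a server at zabbix.example.com (hostname assumed):
# yum install zabbix-agent
# sed -i 's/^Server=.*/Server=zabbix.example.com/' /etc/zabbix/zabbix_agentd.conf
# systemctl enable --now zabbix-agent
oVirt Node, on the other hand, is an image-based install with a curated package set, so it is worth verifying how extra RPMs survive node image upgrades before relying on the agent there.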
Re: Move Self Hosted Engine to Standalone
by Robert Webb
Take a look at the RHEV documentation here, go to section 6.2.2, and see if that helps.
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/...
________________________________________
From: Jeremy Tourville <jeremy_tourville(a)hotmail.com>
Sent: Monday, February 17, 2020 2:26 PM
To: Robert Webb; users(a)ovirt.org
Subject: Re: [ovirt-users] Move Self Hosted Engine to Standalone
OK, I was able to get the backup completed. I am a little confused about how to do the restore, though. https://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Resto...
Is this link even applicable? My environment is a single node, not EL-based. Anyhow, here is what I have so far:
[root@engine glusterfs]# engine-backup
Start of engine-backup with mode 'backup'
scope: all
archive file: /var/lib/ovirt-engine-backup/ovirt-engine-backup-20200217125040.backup
log file: /var/log/ovirt-engine-backup/ovirt-engine-backup-20200217125040.log
Backing up:
Notifying engine
- Files
- Engine database 'engine'
- DWH database 'ovirt_engine_history'
Packing into file '/var/lib/ovirt-engine-backup/ovirt-engine-backup-20200217125040.backup'
Notifying engine
Done.
[root@engine glusterfs]#
I moved the backup file to my new engine.
How do I perform the restore?
The directions say:
# engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --provision-dwh-db --restore-permissions
What are the file_name and log_file_name values supposed to be? Do I need to do something to unpack my backup file?
________________________________
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Monday, February 17, 2020 9:13 AM
To: jeremy_tourville(a)hotmail.com <jeremy_tourville(a)hotmail.com>; users(a)ovirt.org <users(a)ovirt.org>
Subject: RE: [ovirt-users] Move Self Hosted Engine to Standalone
Can you take a backup of the original, build the new one, then do a restore?
> -----Original Message-----
> From: jeremy_tourville(a)hotmail.com <jeremy_tourville(a)hotmail.com>
> Sent: Monday, February 17, 2020 10:11 AM
> To: users(a)ovirt.org
> Subject: [ovirt-users] Move Self Hosted Engine to Standalone
>
> I have a single oVirt host running a self-hosted engine. I'd like to move the
> engine off the host and run it on a standalone server. I am running Software
> Version: 4.3.6.6-1.el7. Can anyone tell me what the procedure is for that?
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org Privacy Statement:
> https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7IRADM7HZW
> RAD6Y6F76T5CS4ABQ3Y3R/
Re: Move Self Hosted Engine to Standalone
by Jeremy Tourville
OK, I was able to get the backup completed. I am a little confused about how to do the restore, though. https://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Resto...
Is this link even applicable? My environment is a single node, not EL-based. Anyhow, here is what I have so far:
[root@engine glusterfs]# engine-backup
Start of engine-backup with mode 'backup'
scope: all
archive file: /var/lib/ovirt-engine-backup/ovirt-engine-backup-20200217125040.backup
log file: /var/log/ovirt-engine-backup/ovirt-engine-backup-20200217125040.log
Backing up:
Notifying engine
- Files
- Engine database 'engine'
- DWH database 'ovirt_engine_history'
Packing into file '/var/lib/ovirt-engine-backup/ovirt-engine-backup-20200217125040.backup'
Notifying engine
Done.
[root@engine glusterfs]#
I moved the backup file to my new engine.
How do I perform the restore?
The directions say:
# engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --provision-dwh-db --restore-permissions
What are the file_name and log_file_name values supposed to be? Do I need to do something to unpack my backup file?
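For reference, file_name is simply the path of the .backup file that was copied over (engine-backup reads the archive directly, no unpacking needed), and log_file_name is any path where the restore should write its log. A sketch, assuming the backup file sits in the current directory on the new engine:
# engine-backup --mode=restore --file=ovirt-engine-backup-20200217125040.backup --log=ovirt-engine-backup-restore.log --provision-db --provision-dwh-db --restore-permissions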
________________________________
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Monday, February 17, 2020 9:13 AM
To: jeremy_tourville(a)hotmail.com <jeremy_tourville(a)hotmail.com>; users(a)ovirt.org <users(a)ovirt.org>
Subject: RE: [ovirt-users] Move Self Hosted Engine to Standalone
Can you take a backup of the original, build the new one, then do a restore?
> -----Original Message-----
> From: jeremy_tourville(a)hotmail.com <jeremy_tourville(a)hotmail.com>
> Sent: Monday, February 17, 2020 10:11 AM
> To: users(a)ovirt.org
> Subject: [ovirt-users] Move Self Hosted Engine to Standalone
>
> I have a single oVirt host running a self-hosted engine. I'd like to move the
> engine off the host and run it on a standalone server. I am running Software
> Version: 4.3.6.6-1.el7. Can anyone tell me what the procedure is for that?
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org Privacy Statement:
> https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7IRADM7HZW
> RAD6Y6F76T5CS4ABQ3Y3R/
Move Self Hosted Engine to Standalone
by jeremy_tourville@hotmail.com
I have a single oVirt host running a self-hosted engine. I'd like to move the engine off the host and run it on a standalone server. I am running Software Version: 4.3.6.6-1.el7. Can anyone tell me what the procedure is for that?
hosted-engine --deploy --restore-from-file fails with error "Domain format is different from master storage domain format"
by djagoo
Hi there,
For two weeks we have been trying to move our hosted engine to GlusterFS storage, and it keeps failing with the error "Domain format is different from master storage domain format".
The newly created storage domain has version 5 (default since compatibility level 4.3 according to documentation).
The hosted engine version is 4.3.8.2-1.el7.
All hosts are updated to the latest versions and are rebooted.
Cluster and DataCenter compatibility version is 4.3.
The master data domain and all other domains are format V4, and there is no V5 option available in the dropdown menus. Even when I try to create a new storage domain from the Manager, only V4 is offered.
The system and all hosts were installed in March 2019, so it was oVirt release 4.3.2 or 4.3.1 that created the existing domains.
Is there a way to update the master storage domain to V5? It seems I cannot downgrade the datacenter to compat 4.2 and then raise it again.
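For reference, the format each domain actually reports can be double-checked through the REST API (the storage_format element of the storage domain); a sketch, assuming admin@internal credentials and the engine FQDN:
# curl -s -k -u admin@internal:PASSWORD https://engine.example.com/ovirt-engine/api/storagedomains | grep -E '<name>|<storage_format>'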
After two weeks I'm out of ideas.
Can anyone help please?
Regards,
Marcel
where can I find the repositories specific to 4.3.7
by adrianquintero@gmail.com
Hi, I am trying to upgrade the oVirt hosts and engine in our hyperconverged 4.3.6 environment to 4.3.7; we do not want to go to 4.3.8.
Where can I find the correct 4.3.7-specific repositories for the engine, and the ones for the hosts?
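As far as I know there is no separate repository per z-stream release; all 4.3.x builds are published in the same ovirt-4.3 repository that the ovirt-release43 package configures, so staying on 4.3.7 means asking yum for that build explicitly rather than pointing at a different repo. A sketch for seeing which builds the repo offers (exact version strings will vary):
# yum clean metadata
# yum --showduplicates list ovirt-engine-setup ovirt-release43
and then requesting the wanted 4.3.7 package versions explicitly with yum before running engine-setup.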
Thank you
Adrian
oVirt 4.3.7 and Gluster 6.6 multiple issues
by adrianquintero@gmail.com
Hi,
I am having a couple of issues with a fresh oVirt 4.3.7 HCI setup with 3 nodes.
------------------------------------------------------------------------------------------------------------------------------------------------------------
1.-vdsm is showing the following errors for HOST1 and HOST2 (HOST3 seems to be ok):
------------------------------------------------------------------------------------------------------------------------------------------------------------
service vdsmd status
Redirecting to /bin/systemctl status vdsmd.service
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-02-11 18:50:28 PST; 28min ago
Process: 25457 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 25549 (vdsmd)
Tasks: 76
CGroup: /system.slice/vdsmd.service
├─25549 /usr/bin/python2 /usr/share/vdsm/vdsmd
├─25707 /usr/libexec/ioprocess --read-pipe-fd 52 --write-pipe-fd 51 --max-threads 10 --max-queued-requests 10
├─26314 /usr/libexec/ioprocess --read-pipe-fd 92 --write-pipe-fd 86 --max-threads 10 --max-queued-requests 10
├─26325 /usr/libexec/ioprocess --read-pipe-fd 96 --write-pipe-fd 93 --max-threads 10 --max-queued-requests 10
└─26333 /usr/libexec/ioprocess --read-pipe-fd 102 --write-pipe-fd 101 --max-threads 10 --max-queued-requests 10
Feb 11 18:50:28 tij-059-ovirt1.grupolucerna.local vdsmd_init_common.sh[25457]: vdsm: Running test_space
Feb 11 18:50:28 tij-059-ovirt1.grupolucerna.local vdsmd_init_common.sh[25457]: vdsm: Running test_lo
Feb 11 18:50:28 tij-059-ovirt1.grupolucerna.local systemd[1]: Started Virtual Desktop Server Manager.
Feb 11 18:50:29 tij-059-ovirt1.grupolucerna.local vdsm[25549]: WARN MOM not available.
Feb 11 18:50:29 tij-059-ovirt1.grupolucerna.local vdsm[25549]: WARN MOM not available, KSM stats will be missing.
Feb 11 18:51:25 tij-059-ovirt1.grupolucerna.local vdsm[25549]: ERROR failed to retrieve Hosted Engine HA score
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 182, in _getHaInfo...
Feb 11 18:51:34 tij-059-ovirt1.grupolucerna.local vdsm[25549]: ERROR failed to retrieve Hosted Engine HA score
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 182, in _getHaInfo...
Feb 11 18:51:35 tij-059-ovirt1.grupolucerna.local vdsm[25549]: ERROR failed to retrieve Hosted Engine HA score
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 182, in _getHaInfo...
Feb 11 18:51:43 tij-059-ovirt1.grupolucerna.local vdsm[25549]: ERROR failed to retrieve Hosted Engine HA score
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 182, in _getHaInfo...
Feb 11 18:56:32 tij-059-ovirt1.grupolucerna.local vdsm[25549]: WARN ping was deprecated in favor of ping2 and confirmConnectivity
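The repeated "failed to retrieve Hosted Engine HA score" traceback generally means vdsm cannot reach the HA broker on that host, so the HA broker/agent pair (and MOM, given the "MOM not available" warnings) is the first thing to check on HOST1 and HOST2. A hedged starting point:
# systemctl status ovirt-ha-broker ovirt-ha-agent mom-vdsm
# journalctl -u ovirt-ha-broker -u ovirt-ha-agent --since=-1h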
------------------------------------------------------------------------------------------------------------------------------------------------------------
2.-"gluster vol engine heal info" is showing the following and it never finishes healing
------------------------------------------------------------------------------------------------------------------------------------------------------------
[root@host2~]# gluster vol heal engine info
Brick host1:/gluster_bricks/engine/engine
/7a68956e-3736-46d1-8932-8576f8ee8882/images/86196e10-8103-4b00-bd3e-0f577a8bb5b2/98d64fb4-df01-4981-9e5e-62be6ca7e07c.meta
/7a68956e-3736-46d1-8932-8576f8ee8882/images/b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd/ed569aed-005e-40fd-9297-dd54a1e4946c.meta
Status: Connected
Number of entries: 2
Brick host2:/gluster_bricks/engine/engine
/7a68956e-3736-46d1-8932-8576f8ee8882/images/86196e10-8103-4b00-bd3e-0f577a8bb5b2/98d64fb4-df01-4981-9e5e-62be6ca7e07c.meta
/7a68956e-3736-46d1-8932-8576f8ee8882/images/b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd/ed569aed-005e-40fd-9297-dd54a1e4946c.meta
Status: Connected
Number of entries: 2
Brick host3:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
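When only a couple of .meta files stay pending like this, it is worth checking whether they are actually split-brained and, if not, kicking off a full heal and watching the summary; a sketch against the engine volume:
# gluster volume heal engine info split-brain
# gluster volume heal engine full
# gluster volume heal engine info summary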
------------------------------------------------------------------------------------------------------------------------------------------------------------
3.-Every hour I see the following entries/errors
------------------------------------------------------------------------------------------------------------------------------------------------------------
VDSM command SetVolumeDescriptionVDS failed: Could not acquire resource. Probably resource factory threw an exception.: ()
------------------------------------------------------------------------------------------------------------------------------------------------------------
4.- I am also seeing the following pertaining to the engine volume (note that disk 86196e10-8103-4b00-bd3e-0f577a8bb5b2 is the same image that appears in the pending heal entries in item 2):
------------------------------------------------------------------------------------------------------------------------------------------------------------
Failed to update OVF disks 86196e10-8103-4b00-bd3e-0f577a8bb5b2, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hosted_storage).
------------------------------------------------------------------------------------------------------------------------------------------------------------
5.-hosted-engine --vm-status
------------------------------------------------------------------------------------------------------------------------------------------------------------
--== Host host1 (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : host1
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : be592659
local_conf_timestamp : 480218
Host timestamp : 480217
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=480217 (Tue Feb 11 19:22:20 2020)
host-id=1
score=3400
vm_conf_refresh_time=480218 (Tue Feb 11 19:22:21 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host host3 (id: 2) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : host3
Host ID : 2
Engine status : {"health": "good", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 1f4a8597
local_conf_timestamp : 436681
Host timestamp : 436681
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=436681 (Tue Feb 11 19:22:18 2020)
host-id=2
score=3400
vm_conf_refresh_time=436681 (Tue Feb 11 19:22:18 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
--== Host host2 (id: 3) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : host2
Host ID : 3
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down_missing", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : ca5c1918
local_conf_timestamp : 479644
Host timestamp : 479644
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=479644 (Tue Feb 11 19:22:21 2020)
host-id=3
score=3400
vm_conf_refresh_time=479644 (Tue Feb 11 19:22:22 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
------------------------------------------------------------------------------------------------------------------------------------------------------------
Any ideas on what might be going on?