Veeam Backup for RHV (oVirt)
by markeczzz@gmail.com
Hi!
Not really sure if this is the right place to ask, but...
I am trying to use Veeam Backup for Red Hat Virtualization on oVirt 4.5.1.
I have been using it on version 4.4.10.7, where it works fine.
The Veeam release page says the supported OS is RHV 4.4 SP1 (oVirt 4.5).
When I try to run a backup, this is what I get from Veeam Backup.
There are no errors in vdsm.log or engine.log:
2022-07-27 08:08:44.153 00039 [19545] INFO | [LoggingEventsManager_39]: Add to storage LoggingEvent [id: 34248685-a193-4df5-8ff2-838f738e211c, Type: BackupStartedNew]
2022-07-27 08:08:44.168 00039 [19545] INFO | [TaskManager_39]: Create and run new async task. [call method:'RunBackupChain']
2022-07-27 08:08:44.168 00039 [19545] INFO | [AsyncTask_39]: New AsynTask created [id:'aafa22ac-ff2e-4647-becb-dca88e3eb67f', description:'', type:'BACKUP_VM_POLICY']
2022-07-27 08:08:44.168 00039 [19545] INFO | [AsyncTask_39]: Prepare AsynTask to run [id:'aafa22ac-ff2e-4647-becb-dca88e3eb67f']
2022-07-27 08:08:44.176 00031 [19545] INFO | [BackupPolicy_31]: Refresh VMs for policy '6e090c98-d44b-4785-acb4-82a627da5d9b'
2022-07-27 08:08:44.176 00031 [19545] INFO | [BackupPolicy_31]: Begin updating list of active VMs for policy '6e090c98-d44b-4785-acb4-82a627da5d9b' [forceRefresh = True]
2022-07-27 08:08:44.176 00031 [19545] INFO | [RhevCluster_31]: Test connection to cluster [IP: engine.example.org, Port: 443, User: admin@ovirt@internalsso]
2022-07-27 08:08:44.189 00039 [19545] INFO | [TaskManager_39]: AsyncTask registered. [id:'aafa22ac-ff2e-4647-becb-dca88e3eb67f']
2022-07-27 08:08:44.371 00031 [19545] INFO | [RhevCluster_31]: Test connection to cluster success. Status: Success. Message:
2022-07-27 08:08:44.556 00031 [19545] INFO | [BackupPolicyManager_31]: Refreshing the policies data...
2022-07-27 08:08:44.556 00031 [19545] INFO | [BackupPolicy_31]: Begin updating list of active VMs for policy '6e090c98-d44b-4785-acb4-82a627da5d9b' [forceRefresh = False]
2022-07-27 08:08:44.556 00031 [19545] INFO | [BackupPolicy_31]: List of active VMs updated for policy '6e090c98-d44b-4785-acb4-82a627da5d9b' [forceRefresh = False]. Number of active VMs '1'
2022-07-27 08:08:44.556 00031 [19545] INFO | [BackupPolicyManager_31]: Policies data has been refreshed.
2022-07-27 08:08:44.556 00031 [19545] INFO | [BackupPolicy_31]: List of active VMs updated for policy '6e090c98-d44b-4785-acb4-82a627da5d9b' [forceRefresh = True]. Number of active VMs '1'
2022-07-27 08:08:44.556 00031 [19545] INFO | [BackupPolicy_31]: Found the '1' VMs to backup in policy '6e090c98-d44b-4785-acb4-82a627da5d9b'
2022-07-27 08:08:44.564 00031 [19545] INFO | [BackupPolicy_31]: * Parallel policy runner has started * for policy [Name:'test5', ID: '6e090c98-d44b-4785-acb4-82a627da5d9b'
2022-07-27 08:08:44.564 00031 [19545] INFO | [VeeamBackupServer_31]: Test connection to backup server [IP: 'veeambr.example.org', Port: '10006', User: 'rhvproxy']
2022-07-27 08:08:44.931 00031 [19545] INFO | [VeeamBackupServer_31]: Test connection to backup server [IP: 'veeambr.example.org', Port: '10006']. Connection status: ConnectionSuccess. Version: 11.0.1.1261
2022-07-27 08:08:45.423 00031 [19545] INFO | [BackupPolicy_31]: Successfully called CreateVeeamPolicySession for job [UID: '6e090c98-d44b-4785-acb4-82a627da5d9b'], session [UID: 'aafa22ac-ff2e-4647-becb-dca88e3eb67f']
2022-07-27 08:08:45.820 00031 [19545] INFO | [BackupPolicy_31]: Successfully called RetainPolicyVms for job [UID: '6e090c98-d44b-4785-acb4-82a627da5d9b'] with VMs: 50513a65-6ccc-479b-9b61-032e0961b016
2022-07-27 08:08:45.820 00031 [19545] INFO | [BackupPolicy_31]: Start calculating maxPointsCount
2022-07-27 08:08:45.820 00031 [19545] INFO | [BackupPolicy_31]: End calculating maxPointsCount. Result = 7
2022-07-27 08:08:45.820 00031 [19545] INFO | [BackupPolicy_31]: Starting validate repository schedule. Repository [UID: '237e41d6-7c67-4a1f-80bf-d7c73c481209', MaxPointsCount: '7', IsPeriodicFullRequired: 'False']
2022-07-27 08:08:46.595 00031 [19545] INFO | [BackupPolicy_31]: End validate repository schedule. Result: [IsScheduleValid: 'True', ErrorMessage: '']
2022-07-27 08:08:46.597 00031 [19545] INFO | [SessionManager_31]: Start registering a new session[Id: 'b6f3f0e1-7aab-41cb-b0e7-10f5b2ed6708']
2022-07-27 08:08:46.639 00031 [19545] INFO | [SessionManager_31]: Session registered. [Id:'b6f3f0e1-7aab-41cb-b0e7-10f5b2ed6708']
2022-07-27 08:08:46.639 00031 [19545] INFO | [BackupPolicy_31]: Backup VM [id:'50513a65-6ccc-479b-9b61-032e0961b016'] starting...
2022-07-27 08:08:46.639 00031 [19545] INFO | [BackupPolicy_31]: RetentionMergeDisabled: false
2022-07-27 08:08:46.640 00031 [19545] INFO | [TaskManager_31]: Create new async task. [call method:'DoBackup']
2022-07-27 08:08:46.640 00031 [19545] INFO | [AsyncTask_31]: New AsynTask created [id:'15a63192-f3d2-4e3a-af56-240f3733bded', description:'', type:'BACKUP_VM']
2022-07-27 08:08:46.640 00031 [19545] INFO | [AsyncTask_31]: Prepare AsynTask to run [id:'15a63192-f3d2-4e3a-af56-240f3733bded']
2022-07-27 08:08:46.661 00031 [19545] INFO | [TaskManager_31]: AsyncTask registered. [id:'15a63192-f3d2-4e3a-af56-240f3733bded']
2022-07-27 08:08:47.295 00031 [19545] ERROR | [BackupPolicy_31]: Backup VM [id:'50513a65-6ccc-479b-9b61-032e0961b016'] failed. Error ('test: VM UUID=50513a65-6ccc-479b-9b61-032e0961b016 was not found: * Line 1, Column 1
Syntax error: value, object or array expected.
* Line 1, Column 1
A valid JSON document must be either an array or an object value.
')
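The trailing "Line 1, Column 1 ... value, object or array expected" part looks like a JSON parser being handed an empty (or non-JSON) response body during the VM lookup, rather than a problem with the VM itself. A quick illustration of the same failure mode (using python3's bundled json.tool purely as a convenient JSON validator):

```shell
# A valid JSON object parses fine:
echo '{"vm_id": "50513a65"}' | python3 -m json.tool

# An empty body fails immediately at line 1, column 1 -- the same class of
# error the Veeam log shows ("value, object or array expected"):
echo -n '' | python3 -m json.tool || echo "parse failed as expected"
```

If that diagnosis is right, the interesting question is what the proxy queried and why the reply came back empty, which would point at an API change between oVirt 4.5 and 4.5.1 rather than at the VM.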
Are there any changes between oVirt 4.5 and 4.5.1 that would prevent this from working?
Any solution?
Regards,
2 years, 4 months
oVirt 4.2 host cert expired
by Don Dupuis
Hello
I have an environment with quite a lot of hosts using local storage
domains. The engine and host certs expired. I ran engine-setup on the
ovirt-engine so that the engine cert would get updated, and then followed
https://access.redhat.com/solutions/3532921 to manually update the
host certs so that, hopefully, the engine could talk to vdsm and carry
out the cert enrollment process, but no luck. I am getting this error in
vdsm.log:
2022-07-26 18:32:12,743-0500 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::ffff:192.168.50.26:58194 (protocoldetector:61)
2022-07-26 18:32:12,760-0500 ERROR (Reactor thread) [ProtocolDetector.SSLHandshakeDispatcher] ssl handshake: SSLError, address: ::ffff:192.168.50.26 (sslutils:263)
and the engine.log:
2022-07-26 03:30:13,242-05 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to host01/192.168.50.72
2022-07-26 03:30:13,257-05 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages General SSLEngine problem
2022-07-26 03:30:13,260-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM host01 command Get Host Capabilities failed: General SSLEngine problem
I substituted host01 for the real FQDN for this post.
I can't get the hosts into a state where I can put them in maintenance mode,
and I also want to be careful about reinstalling because the VMs are
stored on each host's local storage domain. The fingerprints on the certs match,
and when I sign the vdsmcert on the engine and then copy it back to the proper
locations, libvirtd and vdsmd restart fine; I just get the SSL error.
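For what it's worth, openssl's -checkend makes it easy to confirm which certs are actually expired, and `openssl verify` checks that a host cert still chains to the engine CA (the RHV paths in the comments are from the typical layout in the linked KB article and may differ on your hosts):

```shell
# Typical RHV/oVirt paths (adjust to your install):
#   openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -enddate -checkend 0
#   openssl verify -CAfile /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/certs/vdsmcert.pem

# Generic demonstration on a throwaway self-signed cert valid for 1 day:
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
        -days 1 -nodes -subj "/CN=demo" 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -enddate       # prints notAfter=...
openssl x509 -in /tmp/demo.crt -noout -checkend 0    # "Certificate will not expire"
```

A handshake that still fails even though the fingerprints match is often a chain problem (vdsm cert signed by a different/old CA than the one the engine trusts), which the `openssl verify` form above would expose.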
Anyone have any ideas on how to solve this cert issue?
Thanks
Don
Intel xeon gold 6346 supported
by atatur.yildirim@gmail.com
Hello all,
Is the Intel Xeon Gold 6346 CPU supported in oVirt? I couldn't tell exactly from the docs.
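For reference, one way to check is to compare what the engine and the host report; this is only a sketch using the standard engine/vdsm tooling, and the exact output format varies by version (the 6346 is an Ice Lake part, so the relevant entries would be the Icelake Server family):

```shell
# On the engine host: list the CPU models the engine supports
# per cluster compatibility level:
engine-config -g ServerCPUList

# On a host: see which CPU model and flags vdsm reports to the engine:
vdsm-client Host getCapabilities | grep -i -e cpuModel -e cpuFlags
```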
Thank you..
Failed to connect to the host via ssh: root@192.168.4.17: Permission denied (publickey, gssapi-keyex, gssapi-with-mic, password).
by konstantinluschaev@ya.ru
An attempt to deploy a new host fails. I see the following error in the deploy log:
192.168.4.17: Failed to connect to the host via ssh: root@192.168.4.17:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
This is despite the fact that the root user can log into the host via ssh from the ovirt-engine without any problems.
So as I understand it, the Ansible scripts cannot get through via ssh.
Do you have any ideas about this?
ovirt-engine version 4.4.10.6-1.el8 on CentOS Stream 8
host - CentOS Stream 8
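One thing worth checking: host-deploy connects with the engine's own ssh key pair (the path below is an assumption based on a typical oVirt install), not the key you use interactively, so the two can behave differently:

```shell
# Try the same connection host-deploy would make (path is an assumption):
#   ssh -i /etc/pki/ovirt-engine/keys/engine_id_rsa root@192.168.4.17 true
# If that fails, check the host's sshd_config (PermitRootLogin,
# AuthorizedKeysFile) and compare key fingerprints on both ends.

# Fingerprint comparison, demonstrated on a throwaway key; the same
# fingerprint must show up for the engine key and in the host's authorized_keys:
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/demo_id -q
ssh-keygen -l -f /tmp/demo_id.pub
```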
oVirt over gluster: Replacing a dead host
by Gilboa Davara
Hello all,
I'm attempting to replace a dead host in a replica 2 + arbiter Gluster
setup with a new host.
I've already set up the new host (same hostname..localdomain) and it has
joined the cluster.
$ gluster peer status
Number of Peers: 2
Hostname: office-wx-hv3-lab-gfs
Uuid: 4e13f796-b818-4e07-8523-d84eb0faa4f9
State: Peer in Cluster (Connected)
Hostname: office-wx-hv1-lab-gfs.localdomain <------ This is a new host.
Uuid: eee17c74-0d93-4f92-b81d-87f6b9c2204d
State: Peer in Cluster (Connected)
$ gluster volume info GV2Data
Volume Name: GV2Data
Type: Replicate
Volume ID: c1946fc2-ed94-4b9f-9da3-f0f1ee90f303
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick <------ This is the
dead host.
Brick2: office-wx-hv2-lab-gfs:/mnt/LogGFSData/brick
Brick3: office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick (arbiter)
...
Looking at the docs, it seems that I need to remove the dead brick.
$ gluster volume remove-brick GV2Data
office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick start
Running remove-brick with cluster.force-migration enabled can result in
data corruption. It is safer to disable this option so that files that
receive writes during migration are not migrated.
Files that are not migrated can then be manually copied after the
remove-brick commit operation.
Do you want to continue with your current cluster.force-migration settings?
(y/n) y
volume remove-brick start: failed: Removing bricks from replicate
configuration is not allowed without reducing replica count explicitly
So I guess I need to drop from replica 2 + arbiter to replica 1 + arbiter
(?).
$ gluster volume remove-brick GV2Data replica 1
office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick start
Running remove-brick with cluster.force-migration enabled can result in
data corruption. It is safer to disable this option so that files that
receive writes during migration are not migrated.
Files that are not migrated can then be manually copied after the
remove-brick commit operation.
Do you want to continue with your current cluster.force-migration settings?
(y/n) y
volume remove-brick start: failed: need 2(xN) bricks for reducing replica
count of the volume from 3 to 1
... What am I missing?
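For what it's worth, since the replacement host is already peered and can host the same brick path, my understanding of the Gluster docs is that the intended route here is replace-brick rather than remove-brick, which swaps the dead brick without touching the replica count (hostnames and paths below are taken from the output above):

```shell
# Swap the dead brick for the same path on the rebuilt host
# (the new peer shows up as office-wx-hv1-lab-gfs.localdomain above):
gluster volume replace-brick GV2Data \
    office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick \
    office-wx-hv1-lab-gfs.localdomain:/mnt/LogGFSData/brick \
    commit force

# Then trigger self-heal so the new brick is repopulated from the
# surviving replica, and watch until the pending-heal count drops to 0:
gluster volume heal GV2Data full
gluster volume heal GV2Data info
```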
- Gilboa
Task Run PKI enroll request for vdsm and QEMU failed to execute. Ovirt 4.5.1
by xavierl@rogers.com
Hi there,
I am currently at a loss as to why I am unable to install additional nodes; I am receiving the error: "Task Run PKI enroll request for vdsm and QEMU failed to execute"
Running oVirt Node 4.5.1 and would appreciate any assistance, as there are no other forums discussing this issue. I have logs ready on request.
Cheers
ovirt-aaa-jdbc-tool and Keycloak
by Gilboa Davara
Hello all,
I have a number of oVirt clusters that were recently upgraded to v4.5.1.
As far as I can see, once Keycloak is enabled, local users can no longer
log in (only admin@ovirt works).
Assuming I need to continue using local AAA users (created using
ovirt-aaa-jdbc-tool), should I disable Keycloak support? How do I do that?
(engine-setup?)
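In case it helps others hitting the same thing: my understanding (unverified, so treat the option name below as an assumption) is that the internal Keycloak component can be switched off by re-running engine-setup and answering No to its Keycloak question, roughly:

```shell
# Interactively: re-run setup and decline the internal Keycloak component
# when prompted:
engine-setup

# Non-interactively: the otopi environment key name here is an assumption
# and should be checked against your engine-setup answer files:
engine-setup --otopi-environment="OVESETUP_CONFIG/keycloakEnable=bool:False"
```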
- Gilboa