Latest 4.5.5 CentOS node install: username/password not working
by antonio.riggio@mail.com
I have been trying to install the 4.5.5 testing release using the node installer ISO. I can't log in: what are the username and password? I have tried the combinations others have posted in the forums (admin@ovirt, root/admin) with no luck. Also, can the engine still be installed from the node via Cockpit? How can I get logged in to do this? I used 4.3 and didn't have any of these problems; it was really simple to do.
Any suggestions?
Thanks
5 months, 1 week
Error attaching snapshot via Python SDK after update to oVirt 4.5
by luis.figueiredo10@gmail.com
Hi,
I have a problem using the Python SDK to back up virtual machine images after updating to oVirt 4.5.
Before the update, with oVirt 4.4 and CentOS 8, everything worked.
Scenario:
oVirt engine - installed standalone, CentOS 9 Stream
Host -> CentOS 9 Stream - ISO used was ovirt-node-ng-installer-latest-el9.iso
Storage -> iSCSI
ovirt-backup -> VM running on a host, CentOS 8 Stream, with python3-ovirt-engine-sdk4 installed
Versions:
Version 4.5.6-1.el9
To reproduce the error, do the following:
1 -> Create a snapshot via the GUI
2 -> Run this script to find the snapshot ID
```
#!/bin/python
import sys
import printf          # local helper used for the OK/ERROR console messages
import ovirtsdk4 as sdk
import time
import configparser
import re

cfg = configparser.ConfigParser()
cfg.read_file(open("/opt/VirtBKP/default.conf"))
url = cfg.get('ovirt-engine', 'api_url')
user = cfg.get('ovirt-engine', 'api_user')
password = cfg.get('ovirt-engine', 'api_password')
ca_file = cfg.get('ovirt-engine', 'api_ca_file')

connection = None
try:
    # Keyword arguments make sure ca_file is bound to the right parameter:
    connection = sdk.Connection(url=url, username=user, password=password, ca_file=ca_file)
    # printf.OK("Connection to oVIrt API success %s" % url)
except Exception as ex:
    print(ex)
    printf.ERROR("Connection to oVirt API has failed")

# Get the root service and the service that manages virtual machines:
system_service = connection.system_service()
vms_service = system_service.vms_service()
vms = vms_service.list()

# For every VM, print its snapshots (description and id):
for vm in vms:
    vm_service = vms_service.vm_service(vm.id)
    snaps_service = vm_service.snapshots_service()
    snaps_map = {
        snap.id: snap.description
        for snap in snaps_service.list()
    }
    for snap_id, snap_description in snaps_map.items():
        snap_service = snaps_service.snapshot_service(snap_id)
        print("VM: " + vm.name + ": " + snap_description + " " + snap_id)

# Close the connection to the server:
connection.close()
```
When I run the script I get the VM names and snapshot IDs, so the connection to the API is OK.
```
[root@ovirt-backup-lab VirtBKP]# ./list_machines_with_snapshots_all
VM: ovirt-backup: Active VM d9631ff9-3a67-49af-b6ae-ed4d164c38ee
VM: ovirt-backup: clean 07478a39-a1e8-4d42-bdf9-aa3464ee85a2
VM: testes: Active VM 46f6ea97-b182-496a-89db-278ca8bcc952
VM: testes: testes123 729d64cb-c939-400c-b7f7-8e675c37a882
```
3 -> The problem occurs when I try to attach the snapshot to the VM.
I run the script below, which ran perfectly on the previous oVirt version (4.4 on CentOS 8):
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2017 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import subprocess
import os
import time
DATE = str((time.strftime("%Y%m%d-%H%M")))
# In order to send events we need to also send unique integer ids. These
# should usually come from an external database, but in this example we
# will just generate them from the current time in seconds since Jan 1st
# 1970.
event_id = int(time.time())
from dateutil.relativedelta import relativedelta
import uuid
import ovirtsdk4 as sdk
import ovirtsdk4.types as types
# Init the logging
import logging
logging.basicConfig(level=logging.DEBUG, filename='/var/log/virtbkp/dumpdebug.log')
# This function tries to find the device by its SCSI serial,
# which is the disk id.
#
def get_logical_name(diskid):
    logicalname = "None"
    loop = True
    timeout = 60
    i = int(1)
    while loop:
        if i <= timeout:
            logging.debug('[%s] Looking for disk with id \'%s\' (%s/%s).', str((time.strftime("%Y%m%d-%H%M"))), diskid, str(i), str(timeout))
            # import udev rules
            import pyudev
            devices = pyudev.Context().list_devices()
            for d in devices.match_property('SCSI_IDENT_SERIAL', diskid):
                if d.properties.get('DEVTYPE') == "disk":
                    logging.debug('[%s] found disk with logical name \'%s\'', str((time.strftime("%Y%m%d-%H%M"))), d.device_node)
                    logicalname = d.device_node
                    loop = False
                    continue
            if i == int(timeout/3) or i == int((timeout/3)*2):
                os.system("udevadm control --reload-rules && udevadm trigger")
                logging.debug('[%s] Reloading udev.', str((time.strftime("%Y%m%d-%H%M"))))
            i += 1
            time.sleep(1)
        else:
            logging.error('[%s] Timeout reached, something wrong because we did not find the disk!', str((time.strftime("%Y%m%d-%H%M"))))
            loop = False
    return logicalname
# cmd="for d in `echo /sys/block/[sv]d*`; do disk=\"`echo $d | cut -d '/' -f4`\"; udevadm info --query=property --name /dev/${disk} | grep '"+diskid+"' 1>/dev/null && echo ${disk}; done"
#
# logging.debug('[%s] using cmd \'[%s]\'.',str((time.strftime("%Y%m%d-%H%M"))),cmd)
# while loop:
# try:
# logging.debug('[%s] running command %s/%s: \'%s\'.',str((time.strftime("%Y%m%d-%H%M"))),str(i),str(timeout),cmd)
# path = subprocess.check_output(cmd, shell=True, universal_newlines=True).replace("\n","")
# logging.debug('[%s] path is \'[%s]\'.',str((time.strftime("%Y%m%d-%H%M"))),str(path))
# if path.startswith("vd") or path.startswith("sd") :
# logicalname = "/dev/" + path
# except:
# if i <= timeout:
# logging.debug('[%s] Looking for disk with id \'%s\'. %s/%s.',str((time.strftime("%Y%m%d-%H%M"))),diskid,str(i),str(timeout))
# time.sleep(1)
# else:
# logging.debug('[%s] something wrong because we did not find this, will dump the disks attached now!',str((time.strftime("%Y%m%d-%H%M"))))
# cmd="for disk in `echo /dev/sd*`; do echo -n \"${disk}: \"; udevadm info --query=property --name $disk|grep SCSI_SERIAL; done"
# debug = subprocess.check_output(cmd, shell=True, universal_newlines=True)
# logging.debug('%s',str(debug))
# loop=False
# i+=1
# continue
# if str(logicalname) != "None":
# logging.debug('[%s] Found disk with id \'%s\' have logical name \'%s\'.',str((time.strftime("%Y%m%d-%H%M"))),diskid,logicalname)
# loop=False
# return logicalname
# This function is intended to create the backup image from the
# attached disk. We assume it runs on the agent (guest) machine with
# the disks attached.
def create_qemu_backup(backupdir, logicalname, diskid, diskalias, event_id):
    # Advanced options for qemu-img convert, check "man qemu-img"
    qemu_options = "-o cluster_size=2M"
    # Timeout defined for the qemu execution time
    # 3600 = 1h, 7200 = 2h, ...
    qemu_exec_timeout = 7200
    # Define output file name and path
    ofile = backupdir + "/" + diskalias + ".qcow2"
    # Exec command for making the backup
    cmd = "qemu-img convert -O qcow2 " + qemu_options + " " + logicalname + " " + ofile
    logging.debug('[%s] Will backup with command \'%s\' with defined timeout \'%s\' seconds.', str((time.strftime("%Y%m%d-%H%M"))), cmd, str(qemu_exec_timeout))
    try:
        disktimeStarted = time.time()
        logging.info('[%s] QEMU backup starting, please hang on while we finish...', str((time.strftime("%Y%m%d-%H%M"))))
        events_service.add(
            event=types.Event(
                vm=types.Vm(
                    id=data_vm.id,
                ),
                origin=APPLICATION_NAME,
                severity=types.LogSeverity.NORMAL,
                custom_id=event_id,
                description=(
                    'QEMU backup starting for disk \'%s\'.' % diskalias
                ),
            ),
        )
        event_id += 1
        run = subprocess.check_output(cmd, shell=True, timeout=qemu_exec_timeout, universal_newlines=True, stderr=subprocess.STDOUT)
        disktimeDelta = time.time() - disktimeStarted
        diskrt = relativedelta(seconds=disktimeDelta)
        diskexectime = ('{:02d}:{:02d}:{:02d}'.format(int(diskrt.hours), int(diskrt.minutes), int(diskrt.seconds)))
        logging.info('[%s] Backup finished successfully for disk \'%s\' in \'%s\'.', str((time.strftime("%Y%m%d-%H%M"))), diskalias, str(diskexectime))
        events_service.add(
            event=types.Event(
                vm=types.Vm(
                    id=data_vm.id,
                ),
                origin=APPLICATION_NAME,
                severity=types.LogSeverity.NORMAL,
                custom_id=event_id,
                description=(
                    'QEMU backup finished for disk \'%s\' in \'%s\'.' % (diskalias, str(diskexectime))
                ),
            ),
        )
        event_id += 1
        return event_id
    except subprocess.TimeoutExpired as t:
        logging.error('[%s] Timeout of \'%s\' seconds expired, process \'%s\' killed.', str((time.strftime("%Y%m%d-%H%M"))), str(t.timeout), cmd)
        events_service.add(
            event=types.Event(
                vm=types.Vm(
                    id=data_vm.id,
                ),
                origin=APPLICATION_NAME,
                severity=types.LogSeverity.ERROR,
                custom_id=event_id,
                description=(
                    'Timeout of \'%s\' seconds expired, process \'%s\' killed.' % (str(t.timeout), cmd)
                ),
            ),
        )
        event_id += 1
        return event_id
    except subprocess.CalledProcessError as e:
        logging.error('[%s] Execution error, command output was:', str((time.strftime("%Y%m%d-%H%M"))))
        events_service.add(
            event=types.Event(
                vm=types.Vm(
                    id=data_vm.id,
                ),
                origin=APPLICATION_NAME,
                severity=types.LogSeverity.ERROR,
                custom_id=event_id,
                description=('Execution error.'),
            ),
        )
        event_id += 1
        logging.error('[%s] %s', str((time.strftime("%Y%m%d-%H%M"))), str(e.output))
        events_service.add(
            event=types.Event(
                vm=types.Vm(
                    id=data_vm.id,
                ),
                origin=APPLICATION_NAME,
                severity=types.LogSeverity.ERROR,
                custom_id=event_id,
                description=(
                    '\'%s\'' % (str(e.output))
                ),
            ),
        )
        event_id += 1
        return event_id
# Arguments
import sys
DATA_VM_BYPASSDISKS="None"
if len(sys.argv) < 3:
    print("You must specify the right arguments!")
    exit(1)
elif len(sys.argv) < 4:
    DATA_VM_NAME = sys.argv[1]
    SNAP_ID = sys.argv[2]
else:
    exit(2)
logging.debug(
'[%s] Launched with arguments on vm \'%s\' and bypass disks \'%s\'.',
str((time.strftime("%Y%m%d-%H%M"))),
DATA_VM_NAME,
DATA_VM_BYPASSDISKS
)
# Parse the ini file with the configurations
import configparser
cfg = configparser.ConfigParser()
cfg.readfp(open("/opt/VirtBKP/default.conf"))
#BCKDIR = '/mnt/backups'
BCKDIR = cfg.get('ovirt-engine','backupdir')
# The connection details:
API_URL = cfg.get('ovirt-engine','api_url')
API_USER = cfg.get('ovirt-engine','api_user')
API_PASSWORD = cfg.get('ovirt-engine','api_password')
# The file containing the certificate of the CA used by the server. In
# a usual installation it will be in the file '/etc/pki/ovirt-engine/ca.pem'.
#API_CA_FILE = '/opt/VirtBKP/ca.crt'
API_CA_FILE = cfg.get('ovirt-engine','api_ca_file')
# The name of the application, to be used as the 'origin' of events
# sent to the audit log:
APPLICATION_NAME = 'Image Backup Service'
# The name of the virtual machine where we will attach the disks in
# order to actually back them up. This virtual machine will usually have
# some kind of back-up software installed.
#AGENT_VM_NAME = 'ovirt-backup'
AGENT_VM_NAME = cfg.get('ovirt-engine','agent_vm_name')
## Connect to the server:
#connection = sdk.Connection(
# url=API_URL,
# username=API_USER,
# password=API_PASSWORD,
# ca_file=API_CA_FILE,
# debug=True,
# log=logging.getLogger(),
#)
#logging.info('[%s] Connected to the server.',str((time.strftime("%Y%m%d-%H%M"))))
cfg = configparser.ConfigParser()
cfg.readfp(open("/opt/VirtBKP/default.conf"))
url=cfg.get('ovirt-engine', 'api_url')
user=cfg.get('ovirt-engine', 'api_user')
password=cfg.get('ovirt-engine', 'api_password')
ca_file=cfg.get('ovirt-engine', 'api_ca_file')
connection = None
try:
    # Keyword arguments make sure ca_file is bound to the right parameter:
    connection = sdk.Connection(url=url, username=user, password=password, ca_file=ca_file)
    logging.info('[%s] Connected to the server.', str((time.strftime("%Y%m%d-%H%M"))))
except Exception as ex:
    print(ex)
    # printf is not imported in this script, so log the failure instead:
    logging.error("Connection to oVirt API has failed")
# Get the reference to the root of the services tree:
system_service = connection.system_service()
# Get the reference to the service that we will use to send events to
# the audit log:
events_service = system_service.events_service()
# Timer count for global process
totaltimeStarted = time.time()
# Get the reference to the service that manages the virtual machines:
vms_service = system_service.vms_service()
# Find the virtual machine that we want to back up. Note that we need to
# use the 'all_content' parameter to retrieve the OVF, as
# it isn't retrieved by default:
data_vm = vms_service.list(
search='name=%s' % DATA_VM_NAME,
all_content=True,
)[0]
logging.info(
'[%s] Found data virtual machine \'%s\', the id is \'%s\'.',
str((time.strftime("%Y%m%d-%H%M"))), data_vm.name, data_vm.id,
)
# Find the virtual machine where we will attach the disks in order to do
# the backup:
agent_vm = vms_service.list(
search='name=%s' % AGENT_VM_NAME,
)[0]
logging.info(
'[%s] Found agent virtual machine \'%s\', the id is \'%s\'.',
str((time.strftime("%Y%m%d-%H%M"))), agent_vm.name, agent_vm.id,
)
# Find the services that manage the data and agent virtual machines:
data_vm_service = vms_service.vm_service(data_vm.id)
agent_vm_service = vms_service.vm_service(agent_vm.id)
# Create a unique description for the snapshot, so that it is easier
# for the administrator to identify this snapshot as a temporary one
# created just for backup purposes:
#snap_description = '%s-backup-%s' % (data_vm.name, uuid.uuid4())
snap_description = 'BACKUP_%s_%s' % (data_vm.name, DATE)
# Send an external event to indicate to the administrator that the
# backup of the virtual machine is starting. Note that the description
# of the event contains the name of the virtual machine and the name of
# the temporary snapshot, this way, if something fails, the administrator
# will know what snapshot was used and remove it manually.
#events_service.add(
# event=types.Event(
# vm=types.Vm(
# id=data_vm.id,
# ),
# origin=APPLICATION_NAME,
# severity=types.LogSeverity.NORMAL,
# custom_id=event_id,
# description=(
# 'Backup of virtual machine \'%s\' using snapshot \'%s\' is '
# 'starting.' % (data_vm.name, snap_description)
# ),
# ),
#)
#event_id += 1
# Create the structure we will use to deploy the backup data
#bckfullpath = BCKDIR + "/" + data_vm.name + "/" + str((time.strftime("%Y%m%d-%H%M")))
#mkdir = "mkdir -p " + bckfullpath
#subprocess.call(mkdir, shell=True)
#logging.debug(
# '[%s] Created directory \'%s\' as backup destination.',
# str((time.strftime("%Y%m%d-%H%M"))),
# bckfullpath
#)
# Send the request to create the snapshot. Note that this will return
# before the snapshot is completely created, so we will later need to
# wait till the snapshot is completely created.
# The snapshot will not include memory. Change to True the parameter
# persist_memorystate to get it (in that case the VM will be paused for a while).
snaps_service = data_vm_service.snapshots_service()
#snap = snaps_service.add(
# snapshot=types.Snapshot(
# description=snap_description,
# persist_memorystate=False,
# ),
#)
#logging.info(
# '[%s] Sent request to create snapshot \'%s\', the id is \'%s\'.',
# str((time.strftime("%Y%m%d-%H%M"))), snap.description, snap.id,
#)
# Poll and wait till the status of the snapshot is 'ok', which means
# that it is completely created:
snap_id = SNAP_ID
snap_service = snaps_service.snapshot_service(snap_id)
#while snap.snapshot_status != types.SnapshotStatus.OK:
# logging.info(
# '[%s] Waiting till the snapshot is created, the status is now \'%s\'.',
# str((time.strftime("%Y%m%d-%H%M"))),
# snap.snapshot_status
# )
# time.sleep(1)
# snap = snap_service.get()
#logging.info('[%s] The snapshot is now complete.',str((time.strftime("%Y%m%d-%H%M"))))
# Retrieve the descriptions of the disks of the snapshot:
snap_disks_service = snap_service.disks_service()
snap_disks = snap_disks_service.list()
# Attach all the disks of the snapshot to the agent virtual machine, and
# save the resulting disk attachments in a list so that we can later
# detach them easily:
attachments_service = agent_vm_service.disk_attachments_service()
attachments = []
for snap_disk in snap_disks:
    attachment = attachments_service.add(
        attachment=types.DiskAttachment(
            disk=types.Disk(
                id=snap_disk.id,
                snapshot=types.Snapshot(
                    id=snap_id,
                ),
            ),
            active=True,
            bootable=False,
            interface=types.DiskInterface.VIRTIO_SCSI,
        ),
    )
    attachments.append(attachment)
    logging.info(
        '[%s] Attached disk \'%s\' to the agent virtual machine \'%s\'.',
        str((time.strftime("%Y%m%d-%H%M"))), attachment.disk.id, agent_vm.name
    )
    print(f"Attach disk:{attachment.disk.id} to the agent vm:{agent_vm.name}")
# Now that the disks are attached to the agent virtual machine, we
# can then ask that virtual machine to perform the backup. Doing that
# requires a mechanism to talk to the backup software that runs inside the
# agent virtual machine. That is outside of the scope of the SDK. But if
# the guest agent is installed in the virtual machine then we can
# provide useful information, like the identifiers of the disks that have
# just been attached.
#for attachment in attachments:
# if attachment.logical_name is not None:
# logging.info(
# '[%s] Logical name for disk \'%s\' is \'%s\'.',
# str((time.strftime("%Y%m%d-%H%M"))), attachment.disk.id, attachment.logical_name,
# )
# else:
# logging.info(
# '[%s] The logical name for disk \'%s\' isn\'t available. Is the '
# 'guest agent installed?',
# str((time.strftime("%Y%m%d-%H%M"))),
# attachment.disk.id,
# )
# Close the connection to the server:
connection.close()
```
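Not directly related to the failure, but worth noting: this version of the script never detaches the snapshot disks from the agent VM again. The upstream oVirt SDK backup example that this script appears to be based on removes each attachment once the backup is done; a minimal sketch of that cleanup step, reusing the `attachments` list, `attachments_service` and the logging already set up above, would look roughly like this:
```
# Rough sketch of the detach/cleanup step (assumes the attachments list,
# attachments_service and logging configured earlier in this script):
for attachment in attachments:
    attachment_service = attachments_service.attachment_service(attachment.id)
    attachment_service.remove()
    logging.info(
        '[%s] Detached disk \'%s\' from the agent virtual machine.',
        str((time.strftime("%Y%m%d-%H%M"))), attachment.disk.id,
    )
```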
The result of the command is:
[root@ovirt-backup-lab VirtBKP]# ./teste.py ovirt-backup "729d64cb-c939-400c-b7f7-8e675c37a882"
Traceback (most recent call last):
File "./teste.py", line 412, in <module>
interface=types.DiskInterface.VIRTIO_SCSI,
File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 7147, in add
return self._internal_add(attachment, headers, query, wait)
File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in _internal_add
return future.wait() if wait else future
File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait
return self._code(response)
File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in callback
self._check_fault(response)
File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
self._raise_error(response, body)
File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Failed to hot-plug disk]". HTTP response code is 400.
On the engine log I have this:
2024-09-25 18:30:00,132+01 INFO [org.ovirt.engine.core.sso.service.AuthenticationService] (default task-27) [] User admin@internal-authz with profile [internal] successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2024-09-25 18:30:00,146+01 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-27) [3f712b2] Running command: CreateUserSessionCommand internal: false.
2024-09-25 18:30:00,149+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-27) [3f712b2] EVENT_ID: USER_VDC_LOGIN(30), User admin@internal-authz connecting from '172.24.0.13' using session 'zVRF1cTF22ex9oRy2wcK/NIjfr2FwY2+AqhcRFv02J6tDPEoAC7YB329VVcrGrPoQcxJLRIokEDvt8j/PySxcg==' logged in.
2024-09-25 18:30:00,404+01 INFO [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Lock Acquired to object 'EngineLock:{exclusiveLocks='[6dc760fb-13bd-4285-b474-2ff5e39af74e=DISK]', sharedLocks=''}'
2024-09-25 18:30:00,415+01 INFO [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Running command: AttachDiskToVmCommand internal: false. Entities affected : ID: e302e3ac-6101-4317-b19d-46d951237122 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER, ID: 6dc760fb-13bd-4285-b474-2ff5e39af74e Type: DiskAction group ATTACH_DISK with role type USER
2024-09-25 18:30:00,421+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] START, HotPlugDiskVDSCommand(HostName = hv1, HotPlugDiskVDSParameters:{hostId='35bd2f95-464f-4080-98e9-729a05f1a39b', vmId='e302e3ac-6101-4317-b19d-46d951237122', diskId='6dc760fb-13bd-4285-b474-2ff5e39af74e'}), log id: 7c9d787c
2024-09-25 18:30:00,422+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Disk hot-plug: <?xml version="1.0" encoding="UTF-8"?><hotplug>
<devices>
<disk snapshot="no" type="file" device="disk">
<target dev="sda" bus="scsi"/>
<source file="/rhev/data-center/mnt/blockSD/0b52e59e-fe08-4e18-8273-228955bba3b7/images/6dc760fb-13bd-4285-b474-2ff5e39af74e/46bcd0f2-33ff-40a3-a32b-f39b34e99f77">
<seclabel model="dac" type="none" relabel="no"/>
</source>
<driver name="qemu" io="threads" type="qcow2" error_policy="stop" cache="writethrough"/>
<alias name="ua-6dc760fb-13bd-4285-b474-2ff5e39af74e"/>
<address bus="0" controller="0" unit="1" type="drive" target="0"/>
<serial>6dc760fb-13bd-4285-b474-2ff5e39af74e</serial>
</disk>
</devices>
<metadata xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ovirt-vm:vm>
<ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:poolID>042813a0-7a69-11ef-a340-ac1f6b165d0d</ovirt-vm:poolID>
<ovirt-vm:volumeID>46bcd0f2-33ff-40a3-a32b-f39b34e99f77</ovirt-vm:volumeID>
<ovirt-vm:shared>transient</ovirt-vm:shared>
<ovirt-vm:imageID>6dc760fb-13bd-4285-b474-2ff5e39af74e</ovirt-vm:imageID>
<ovirt-vm:domainID>0b52e59e-fe08-4e18-8273-228955bba3b7</ovirt-vm:domainID>
</ovirt-vm:device>
</ovirt-vm:vm>
</metadata>
</hotplug>
2024-09-25 18:30:11,069+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Failed in 'HotPlugDiskVDS' method
2024-09-25 18:30:11,075+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM hv1 command HotPlugDiskVDS failed: internal error: unable to execute QEMU command 'blockdev-add': Could not open '/rhev/data-center/mnt/blockSD/0b52e59e-fe08-4e18-8273-228955bba3b7/images/6dc760fb-13bd-4285-b474-2ff5e39af74e/46bcd0f2-33ff-40a3-a32b-f39b34e99f77': No such file or directory
2024-09-25 18:30:11,075+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return value 'StatusOnlyReturn [status=Status [code=45, message=internal error: unable to execute QEMU command 'blockdev-add': Could not open '/rhev/data-center/mnt/blockSD/0b52e59e-fe08-4e18-8273-228955bba3b7/images/6dc760fb-13bd-4285-b474-2ff5e39af74e/46bcd0f2-33ff-40a3-a32b-f39b34e99f77': No such file or directory]]'
2024-09-25 18:30:11,075+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] HostName = hv1
2024-09-25 18:30:11,075+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Command 'HotPlugDiskVDSCommand(HostName = hv1, HotPlugDiskVDSParameters:{hostId='35bd2f95-464f-4080-98e9-729a05f1a39b', vmId='e302e3ac-6101-4317-b19d-46d951237122', diskId='6dc760fb-13bd-4285-b474-2ff5e39af74e'})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command 'blockdev-add': Could not open '/rhev/data-center/mnt/blockSD/0b52e59e-fe08-4e18-8273-228955bba3b7/images/6dc760fb-13bd-4285-b474-2ff5e39af74e/46bcd0f2-33ff-40a3-a32b-f39b34e99f77': No such file or directory, code = 45
2024-09-25 18:30:11,075+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] FINISH, HotPlugDiskVDSCommand, return: , log id: 7c9d787c
2024-09-25 18:30:11,075+01 ERROR [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Command 'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command 'blockdev-add': Could not open '/rhev/data-center/mnt/blockSD/0b52e59e-fe08-4e18-8273-228955bba3b7/images/6dc760fb-13bd-4285-b474-2ff5e39af74e/46bcd0f2-33ff-40a3-a32b-f39b34e99f77': No such file or directory, code = 45 (Failed with error FailedToPlugDisk and code 45)
2024-09-25 18:30:11,076+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Command [id=4181b4c0-f988-4e96-8dbc-6917328bfdc5]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.storage.DiskVmElement; snapshot: VmDeviceId:{deviceId='6dc760fb-13bd-4285-b474-2ff5e39af74e', vmId='e302e3ac-6101-4317-b19d-46d951237122'}.
2024-09-25 18:30:11,076+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Command [id=4181b4c0-f988-4e96-8dbc-6917328bfdc5]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmDevice; snapshot: VmDeviceId:{deviceId='6dc760fb-13bd-4285-b474-2ff5e39af74e', vmId='e302e3ac-6101-4317-b19d-46d951237122'}.
2024-09-25 18:30:11,083+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] EVENT_ID: USER_FAILED_ATTACH_DISK_TO_VM(2,017), Failed to attach Disk testes_Disk1 to VM ovirt-backup (User: admin@internal-authz).
2024-09-25 18:30:11,083+01 INFO [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Lock freed to object 'EngineLock:{exclusiveLocks='[6dc760fb-13bd-4285-b474-2ff5e39af74e=DISK]', sharedLocks=''}'
2024-09-25 18:30:11,083+01 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-27) [] Operation Failed: [Failed to hot-plug disk]
This is the scenario in my lab test. In production I have 4 hosts, 3 on CentOS 8 and 1 on the CentOS 9 oVirt Node image. If my ovirt-backup machine runs on the CentOS 9 oVirt Node host, I get the same error as above, but if I migrate ovirt-backup to another host with CentOS 8 the problem does not occur.
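For what it's worth, here is a small sketch I can use to log which host the backup VM is running on before each run, so the el8 and el9 cases can be told apart. It reuses the same default.conf as the scripts above; `follow_link` and the host fields are standard SDK calls, but treat this as an illustration rather than part of the backup tooling:
```
#!/usr/bin/env python3
# Sketch: report which hypervisor the backup (agent) VM is currently on,
# so runs on the CentOS 8 and CentOS 9 hosts can be compared.
# Reuses /opt/VirtBKP/default.conf from the scripts above.
import configparser
import ovirtsdk4 as sdk

cfg = configparser.ConfigParser()
cfg.read_file(open("/opt/VirtBKP/default.conf"))
connection = sdk.Connection(
    url=cfg.get('ovirt-engine', 'api_url'),
    username=cfg.get('ovirt-engine', 'api_user'),
    password=cfg.get('ovirt-engine', 'api_password'),
    ca_file=cfg.get('ovirt-engine', 'api_ca_file'),
)

vms_service = connection.system_service().vms_service()
agent_vm = vms_service.list(
    search='name=%s' % cfg.get('ovirt-engine', 'agent_vm_name'),
)[0]
vm = vms_service.vm_service(agent_vm.id).get()
if vm.host is not None:
    # vm.host is only a link; follow_link resolves it to the full Host object.
    host = connection.follow_link(vm.host)
    os_version = host.os.version.full_version if host.os and host.os.version else 'unknown'
    print("Agent VM '%s' runs on host '%s' (OS version: %s)" % (vm.name, host.name, os_version))
else:
    print("Agent VM '%s' is not running" % vm.name)
connection.close()
```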
Any idea what could cause this problem?
Sorry if my English isn't understandable
Thanks
Luís Figueiredo
5 months, 2 weeks
Migrate oVirt Node from el8 to el9
by devis@gmx.com
Hello,
I tried to find a guide or procedure to upgrade an oVirt node from el8 to el9, but I didn't find anything.
Is it possible to migrate a node without reinstalling it?
Thanks,
Devis
6 months
Installing on iSCSI boot node
by rtartar@gmail.com
I've installed oVirt Node on an iSCSI boot server (Cisco UCS chassis), but when I go to add it to the manager it fails and the node loses contact with its iBFT interface. Is there any way I can have the manager leave that interface alone when adding the host? I have 5 physical interfaces attached to the node. Any help would be greatly appreciated. I'm using the node 4.5 installer. I add ip=ibft to the installation startup and it seems to work, mostly...
Thanks in advance
6 months
Error during hosted-engine deploy for Cluster Compatibility Version
by devis@gmx.com
Hi,
I followed the guide to install the oVirt Engine. I also used the suggestion to use the oVirt master snapshot repositories.
But during the installation I get the following error:
"The host has been set in non_operational status, deployment errors: code 154: Host ********* is compatible with versions (4.2,4.3,4.4,4.5,4.6,4.7) and cannot join Cluster Default which is set to version 4.8., code 9000: Failed to verify Power Management configuration for Host *******., fix accordingly and re-deploy."}
How can I solve this problem? I cannot find any suggestions on the internet.
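For reference, the compatibility level the error complains about lives on the cluster and can be read or changed through the REST API or the Python SDK. The sketch below is only an illustration of the API involved, not a confirmed fix for this deployment failure; the engine URL, credentials and the 4.7 target level are assumptions:
```
#!/usr/bin/env python3
# Illustrative sketch: inspect and lower the Default cluster's compatibility
# version with the oVirt Python SDK. URL, credentials and CA path are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='password',                                # placeholder
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

clusters_service = connection.system_service().clusters_service()
cluster = clusters_service.list(search='name=Default')[0]
print("Cluster 'Default' is at compatibility version %s.%s"
      % (cluster.version.major, cluster.version.minor))

# Lower it to a level the host actually supports (4.7 in the error above):
clusters_service.cluster_service(cluster.id).update(
    cluster=types.Cluster(
        version=types.Version(major=4, minor=7),
    ),
)
connection.close()
```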
6 months
oVirt Node 4.5.5 no virtualization tab
by arkansascontrols@icloud.com
I have installed 4.5.5 and can get to the web interface at :9090 on all 3 hosts. I have been able to add the SSH keys, and from any of the 3 hosts' web interfaces I can reach the others from the drop-down. But there is no Virtualization tab on any of the 3, so there is no way to create VMs. I have followed a few instructions from some threads I've found on here:
https://www.ovirt.org/dropped/gluster-hyperconverged/chap-Deploying_Hyper...
I subscribed all 3 hosts to the repo and installed the engine, but the cockpit-ovirt-dashboard package is not available to install, and I'm guessing that's where I will find what I need to make this usable.
Any help would be appreciated.
6 months, 1 week
Installation issue
by dave nirav
Hi,
Probably this is my first mail to this list.
I am facing a lot of dependency issues while installing ovirt-engine, mostly
around the OpenStack, Ceph and Ansible packages. I have specified the repo paths in
/etc/yum.repos.d/<>, and it seems the OpenStack path is EOL, so I am unable to
install the dependent packages.
I am working in a limited environment with customization on top of RHEL 9, and I
do not have any RHSM support.
I am installing things one by one and would like help with the
installation steps and the list of RPMs needed to run engine-setup.
Any help will be appreciated.
dnf install ovirt-engine
Last metadata expiration check: 1:41:11 ago on Wed Sep 25 11:35:58 2024.
Error:
Problem: cannot install the best candidate for the job
- nothing provides openstack-java-resteasy-connector >= 3.2.9 needed by
ovirt-engine-4.4.10.7-1.el8.noarch from ovirt-4.4
- nothing provides ansible-runner-service >= 1.0.6 needed by
ovirt-engine-4.4.10.7-1.el8.noarch from ovirt-4.4
- nothing provides collectd-write_syslog needed by
ovirt-engine-4.4.10.7-1.el8.noarch from ovirt-4.4
- nothing provides xmlrpc-client needed by
ovirt-engine-4.4.10.7-1.el8.noarch from ovirt-4.4
- nothing provides ceph-common needed by
ovirt-engine-4.4.10.7-1.el8.noarch from ovirt-4.4
- nothing provides python3-cinderlib needed by
ovirt-engine-4.4.10.7-1.el8.noarch from ovirt-4.4
- nothing provides ansible < 2.10.0 needed by
ovirt-engine-4.4.10.7-1.el8.noarch from ovirt-4.4
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to
use not only best candidate packages)
Thanks,
Nirav Dave
6 months, 1 week
Migrating Self Hosted Engine from el8s to Standalone Hosted Engine using el9
by Nur Imam Febrianto
Hi.
Currently we're using oVirt 4.5, which is still based on el8s (both the hosts and the engine). Because CentOS Stream 8 has been EOL'ed, we want to migrate the engine to a standalone engine (not self-hosted) on any el9-capable OS. What is the proper way to do this? And after the migration, what will happen to the storage domain used by the self-hosted engine on el8? Maybe somebody here can give me some good advice. 🙂
Thanks before.
Regards,
Nur Imam Febrianto
6 months, 1 week
API to Query OVA files from the hosts/ mounted locations
by Siddu Hadpad
We are storing images in an NFS share that is mounted on the KVM hosts. When I provision a VM using the mounted path, it successfully provisions the VM from the OVA files. However, if an incorrect path is provided, it results in an 'Operation Failed' error, and the error message is not very descriptive.
To prevent this, I'd like to validate whether the OVA file exists at the specified path. Unfortunately, I haven't been able to find any APIs to query images directly from mounted locations on the host.
While the web UI does display an error if an incorrect path is provided, I'm wondering if there are any APIs available to validate the presence of the OVA file at a specific path.
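As far as I can tell there is no engine API that lists files under an arbitrary host-mounted path, so one pragmatic option is an out-of-band check before calling the provisioning API. A minimal sketch, assuming the same NFS export can also be mounted on the machine driving the provisioning (the mount point and file name below are placeholders):
```
#!/usr/bin/env python3
# Out-of-band pre-check: verify the OVA exists on the NFS share before
# asking the engine to import it. Paths below are placeholders.
import os
import sys

NFS_MOUNT = "/mnt/ova-share"      # local mount of the same NFS export the hosts use
OVA_NAME = "my-appliance.ova"     # hypothetical OVA file name

ova_path = os.path.join(NFS_MOUNT, OVA_NAME)
if not os.path.isfile(ova_path):
    sys.exit("OVA not found at %s; refusing to start the import" % ova_path)
print("OVA found at %s (%d bytes), proceeding with the import"
      % (ova_path, os.path.getsize(ova_path)))
```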
6 months, 1 week