SSL: vdsmd failed SSL handshake
by Ingeniero especialista Redhat / Suse
Hello, good evening. I would like to ask about the following case: we have two
ovirt 3.6 servers with hosted-engine. Yesterday some multipath problems
appeared on the servers and two machines became blocked and could not be
started. On investigation we found that the SSL certificates of the nodes had
expired, and we renewed them on the nodes only. We were able to start the
hosted-engine, but the two nodes are not responsive.
The vdsm daemon reports:
vdsm [43067]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error
during handshake: unexpected eof
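Since only the node-side certificates were renewed, the engine and the nodes may now disagree about certificates, which can produce exactly this kind of handshake failure. As a first check it can help to compare the notAfter dates on both sides (e.g. with `openssl x509 -noout -enddate` on the files under /etc/pki/vdsm/certs/ and the engine's PKI directory; exact paths may vary by version). A minimal sketch of the same expiry check in Python, fed the string openssl prints:

```python
import ssl
import time

def cert_expired(not_after, now=None):
    """Return True if an X.509 notAfter timestamp lies in the past.

    `not_after` uses the format `openssl x509 -noout -enddate` prints,
    e.g. 'Jun  1 12:00:00 2021 GMT'.
    """
    expiry = ssl.cert_time_to_seconds(not_after)
    return (now if now is not None else time.time()) > expiry
```

If the engine-side and node-side certificates disagree (or the engine still holds the old host certificates), re-enrolling the host certificates from the engine is usually needed rather than replacing them on the nodes alone.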
I appreciate any ideas or suggestions to be able to recover normal
operation.
Thanks
2 years, 6 months
Intermittent failure to upload ISOs
by aclysma@gmail.com
This may be the same issue as described here:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/CJISJIDQKSIN...
https://bugzilla.redhat.com/show_bug.cgi?id=1977276
I am on 4.4.8.6-1.el8, installed a couple days ago from the ovirt node ISO. In particular, I noticed if I SSH into the hosted engine and tail -f /var/log/ovirt-imageio/daemon.log, in the failure case I get something like:
2021-09-30 08:15:52,330 INFO (Thread-8) [http] OPEN connection=8 client=::ffff:192.168.1.53
2021-09-30 08:16:23,315 INFO (Thread-8) [http] CLOSE connection=8 client=::ffff:192.168.1.53 [connection 1 ops, 30.984947 s] [dispatch 1 ops, 0.000097 s]
In the failure case there is no activity in tail -f /var/log/ovirt-imageio/daemon.log on the host (I only have one host), only on the engine. In the success case, there is activity in both logs.
It is very intermittent. Sometimes uploads work most of the time (maybe 4 out of 5), and I've had other times that uploads do not work at all (0 out of 5).
I think when it's behaving particularly badly, restarting the engine (hosted-engine --vm-shutdown, then hosted-engine --vm-start) helps, but I haven't figured out a reliable pattern. (I am logged in as admin.) I've tried several browsers, closing/reopening the browser, etc.
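One thing that may help narrow it down: checking from the browser machine, in the failure case, whether the imageio ports are even reachable (by default 54323 on the engine and 54322 on the host; treating those defaults as an assumption here). A minimal reachability probe:

```python
import socket

def port_open(host, port, timeout=5.0):
    """Best-effort TCP reachability check (no TLS handshake performed)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If the port answers but uploads still stall, the problem is more likely at the TLS or ticket layer than basic connectivity.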
Hoping this info will help in tracking it down.
2 years, 6 months
oVirt - No supported package manager found in your system
by German Sandoval
Probably this isn't the place to ask, but I'm doing a test with an AlmaLinux physical host, trying to install a standalone instance, and I get this error when I run engine-setup. I'm following a CentOS Stream guide.
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: /etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, /etc/ovirt-engine-setup.conf.d/10-packaging.conf
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20210915140413-hsjs2f.log
Version: otopi-1.9.5 (otopi-1.9.5-1.el8)
[ ERROR ] Failed to execute stage 'Environment setup': No supported package manager found in your system
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20210915140413-hsjs2f.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20210915140414-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
2021-09-15 14:12:16,421-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/groupKvm=str:'kvm'
2021-09-15 14:12:16,421-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/groupVmConsole=str:'ovirt-vmconsole'
2021-09-15 14:12:16,421-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/hostileServices=str:'ovirt-engine-dwhd,ovirt-engine-notifier'
2021-09-15 14:12:16,422-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/memCheckEnabled=bool:'True'
2021-09-15 14:12:16,422-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/memCheckMinimumMB=int:'4096'
2021-09-15 14:12:16,422-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/memCheckRecommendedMB=int:'16384'
2021-09-15 14:12:16,422-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/memCheckThreshold=int:'90'
2021-09-15 14:12:16,422-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/nfsConfigEnabled=NoneType:'None'
2021-09-15 14:12:16,423-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/nfsConfigEnabled_legacyInPostInstall=bool:'False'
2021-09-15 14:12:16,423-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/nfsServiceName=NoneType:'None'
2021-09-15 14:12:16,423-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/reservedPorts=set:'set()'
2021-09-15 14:12:16,423-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/selinuxBooleans=list:'[]'
2021-09-15 14:12:16,423-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/selinuxContexts=list:'[]'
2021-09-15 14:12:16,424-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/selinuxPorts=list:'[]'
2021-09-15 14:12:16,424-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/selinuxRestorePaths=list:'[]'
2021-09-15 14:12:16,424-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/shmmax=int:'68719476736'
2021-09-15 14:12:16,424-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/userApache=str:'apache'
2021-09-15 14:12:16,424-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/userEngine=str:'ovirt'
2021-09-15 14:12:16,425-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/userPostgres=str:'postgres'
2021-09-15 14:12:16,425-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/userRoot=str:'root'
2021-09-15 14:12:16,425-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/userVdsm=str:'vdsm'
2021-09-15 14:12:16,425-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_SYSTEM/userVmConsole=str:'ovirt-vmconsole'
2021-09-15 14:12:16,426-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_VMCONSOLE_PROXY_CONFIG/vmconsoleProxyConfig=NoneType:'None'
2021-09-15 14:12:16,426-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_VMCONSOLE_PROXY_CONFIG/vmconsoleProxyPort=int:'2222'
2021-09-15 14:12:16,426-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_WSP_RPMDISTRO_PACKAGES=str:'ovirt-engine-websocket-proxy'
2021-09-15 14:12:16,426-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_WSP_RPMDISTRO_PACKAGES_SETUP=str:'ovirt-engine-setup-plugin-websocket-proxy'
2021-09-15 14:12:16,426-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV PACKAGER/dnfDisabledPlugins=list:'[]'
2021-09-15 14:12:16,427-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV PACKAGER/dnfExpireCache=bool:'True'
2021-09-15 14:12:16,427-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV PACKAGER/dnfRollback=bool:'True'
2021-09-15 14:12:16,427-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV PACKAGER/dnfpackagerEnabled=bool:'False'
2021-09-15 14:12:16,427-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV PACKAGER/keepAliveInterval=int:'30'
2021-09-15 14:12:16,427-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV PACKAGER/yumDisabledPlugins=list:'[]'
2021-09-15 14:12:16,428-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV PACKAGER/yumEnabledPlugins=list:'[]'
2021-09-15 14:12:16,428-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV PACKAGER/yumExpireCache=bool:'True'
2021-09-15 14:12:16,428-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV PACKAGER/yumRollback=bool:'True'
2021-09-15 14:12:16,428-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV PACKAGER/yumpackagerEnabled=bool:'True'
2021-09-15 14:12:16,428-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV SYSTEM/clockMaxGap=int:'5'
2021-09-15 14:12:16,429-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV SYSTEM/clockSet=bool:'False'
2021-09-15 14:12:16,429-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/root/bin'
2021-09-15 14:12:16,429-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV SYSTEM/reboot=bool:'False'
2021-09-15 14:12:16,429-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV SYSTEM/rebootAllow=bool:'True'
2021-09-15 14:12:16,430-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV SYSTEM/rebootDeferTime=int:'10'
2021-09-15 14:12:16,430-0400 DEBUG otopi.context context.dumpEnvironment:779 ENVIRONMENT DUMP - END
2021-09-15 14:12:16,433-0400 DEBUG otopi.context context._executeMethod:127 Stage pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
2021-09-15 14:12:16,433-0400 DEBUG otopi.context context._executeMethod:136 otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate condition False
2021-09-15 14:12:16,435-0400 INFO otopi.context context.runSequence:616 Stage: Termination
2021-09-15 14:12:16,435-0400 DEBUG otopi.context context.runSequence:620 STAGE terminate
2021-09-15 14:12:16,437-0400 DEBUG otopi.context context._executeMethod:127 Stage terminate METHOD otopi.plugins.ovirt_engine_common.base.core.misc.Plugin._terminate
2021-09-15 14:12:16,437-0400 ERROR otopi.plugins.ovirt_engine_common.base.core.misc misc._terminate:153 Execution of setup failed
2021-09-15 14:12:16,441-0400 DEBUG otopi.context context._executeMethod:127 Stage terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
2021-09-15 14:12:16,460-0400 DEBUG otopi.context context._executeMethod:127 Stage terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
2021-09-15 14:12:16,460-0400 DEBUG otopi.context context._executeMethod:136 otopi.plugins.otopi.dialog.machine.Plugin._terminate condition False
2021-09-15 14:12:16,463-0400 DEBUG otopi.context context._executeMethod:127 Stage terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
I haven't found a guide for AlmaLinux, so I assume oVirt may not be supported on this OS yet. I couldn't find much information regarding this error.
https://bugzilla.redhat.com/show_bug.cgi?id=1908602
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1909965
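For what it's worth, the "No supported package manager found" error comes from otopi's packager detection, which, as far as I can tell, looks for the Python dnf/yum bindings rather than the CLI binaries, so a working `dnf` command is not enough if the python3-dnf bindings are missing. A rough sketch of that kind of probe (illustrative, not otopi's actual code):

```python
import importlib

def probe_package_managers(candidates=("dnf", "yum")):
    """Report which Python package-manager bindings are importable."""
    result = {}
    for mod in candidates:
        try:
            importlib.import_module(mod)
            result[mod] = True
        except ImportError:
            result[mod] = False
    return result
```

Running a probe like this with the same Python interpreter that engine-setup uses would show whether the bindings are visible to it.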
Thanks for your help.
2 years, 6 months
oVirt/Hyperconverged issue
by topoigerm@gmail.com
I have 4 servers of identical hardware. The documentation says "you need 3", not "you need 3 or more"; is it possible to run hyperconverged with 4 servers? Currently all 4 nodes have crashed after the 4th node tried to join the 3-node hyperconverged cluster. Kindly advise.
FYI, I am currently reinstalling the OS on all nodes due to the incident mentioned above.
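One general Gluster constraint that may be relevant here (not oVirt-specific): a replica-3 volume can only be expanded in whole replica sets, so a single fourth host cannot contribute one extra brick to the existing replica-3 volume; it would normally join the cluster as a compute-only host instead. A tiny sketch of the rule:

```python
def valid_brick_addition(new_bricks, replica=3):
    """Gluster only accepts additions to a replicated volume in whole
    replica sets, i.e. in multiples of the replica count."""
    return new_bricks > 0 and new_bricks % replica == 0
```

So with 4 identical servers, the usual layouts are 3 storage+compute nodes plus 1 compute-only node, or growing storage later in groups of 3.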
BR
Faizal
2 years, 6 months
Host reboots when network switch goes down
by cen
Hi,
we are experiencing a weird issue with our Ovirt setup. We have two
physical hosts (DC1 and DC2) and mounted Lenovo NAS storage for all VM data.
They are connected via a managed network switch.
What happens is that if the switch goes down for whatever reason (firmware
update etc.), the physical host reboots. I am not sure this is an action
performed by oVirt, but I suspect it is, because the connection to the
mounted storage is lost and some kind of emergency action is performed. I
would need some direction pointers to find out:
a) who triggers the reboot and why
b) a way to prevent reboots by increasing storage timeouts
A switch reboot takes 2-3 minutes.
These are the host /var/log/messages just before reboot occurs:
Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984
[10993]: s11 check_our_lease warning 72 last_success 7690912
Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984
[10993]: s3 check_our_lease warning 76 last_success 7690908
Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984
[10993]: s1 check_our_lease warning 68 last_success 7690916
Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984
[27983]: s11 delta_renew read timeout 10 sec offset 0
/var/run/vdsm/storage/15514c65-5d45-4ba7-bcd4-cc772351c940/fce598a8-11c3-44f9-8aaf-8712c96e00ce/65413499-6970-4a4c-af04-609ef78891a2
Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984
[27983]: s11 renewal error -202 delta_length 20 last_success 7690912
Sep 28 16:20:00 ovirtnode02 wdmd[11102]: test warning now 7690984 ping
7690970 close 7690980 renewal 7690912 expire 7690992 client 10993
sanlock_hosted-engine:2
Sep 28 16:20:00 ovirtnode02 wdmd[11102]: test warning now 7690984 ping
7690970 close 7690980 renewal 7690908 expire 7690988 client 10993
sanlock_3cb12f04-5d68-4d79-8663-f33c0655baa6:2
Sep 28 16:20:01 ovirtnode02 systemd: Created slice User Slice of root.
Sep 28 16:20:01 ovirtnode02 systemd: Started Session 15148 of user root.
Sep 28 16:20:01 ovirtnode02 systemd: Removed slice User Slice of root.
Sep 28 16:20:01 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:01 7690985
[10993]: s11 check_our_lease warning 73 last_success 7690912
Sep 28 16:20:01 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:01 7690985
[10993]: s3 check_our_lease warning 77 last_success 7690908
Sep 28 16:20:01 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:01 7690985
[10993]: s1 check_our_lease warning 69 last_success 7690916
Sep 28 16:20:01 ovirtnode02 wdmd[11102]: test warning now 7690985 ping
7690970 close 7690980 renewal 7690912 expire 7690992 client 10993
sanlock_hosted-engine:2
Sep 28 16:20:01 ovirtnode02 wdmd[11102]: test warning now 7690985 ping
7690970 close 7690980 renewal 7690908 expire 7690988 client 10993
sanlock_3cb12f04-5d68-4d79-8663-f33c0655baa6:2
Sep 28 16:20:02 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:02 7690986
[10993]: s11 check_our_lease warning 74 last_success 7690912
Sep 28 16:20:02 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:02 7690986
[10993]: s3 check_our_lease warning 78 last_success 7690908
Sep 28 16:20:02 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:02 7690986
[10993]: s1 check_our_lease warning 70 last_success 7690916
Sep 28 16:20:02 ovirtnode02 wdmd[11102]: test warning now 7690986 ping
7690970 close 7690980 renewal 7690916 expire 7690996 client 10993
sanlock_15514c65-5d45-4ba7-bcd4-cc772351c940:2
Sep 28 16:20:02 ovirtnode02 wdmd[11102]: test warning now 7690986 ping
7690970 close 7690980 renewal 7690912 expire 7690992 client 10993
sanlock_hosted-engine:2
Sep 28 16:20:02 ovirtnode02 wdmd[11102]: test warning now 7690986 ping
7690970 close 7690980 renewal 7690908 expire 7690988 client 10993
sanlock_3cb12f04-5d68-4d79-8663-f33c0655baa6:2
Sep 28 16:20:03 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:03 7690987
[10993]: s11 check_our_lease warning 75 last_success 7690912
Sep 28 16:20:03 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:03 7690987
[10993]: s3 check_our_lease warning 79 last_success 7690908
Sep 28 16:20:03 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:03 7690987
[10993]: s1 check_our_lease warning 71 last_success 7690916
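The wdmd lines above already answer a): sanlock could not renew its storage leases while the switch was down, and once a lease goes unrenewed for 80 seconds (8 x the default 10 s io timeout) wdmd lets the hardware watchdog reboot the host. Note how each "expire" value in the log is exactly 80 s after the matching "renewal" value. A small sketch of that relationship (the function name is illustrative):

```python
def watchdog_expiry(last_success, io_timeout=10):
    """Time (in sanlock's own 'now' counter units) at which wdmd allows
    the hardware watchdog to fire, given the last successful lease
    renewal. With sanlock defaults the budget is 8 * io_timeout = 80 s.
    """
    return last_success + 8 * io_timeout
```

This means a 2-3 minute switch outage will always exceed the default budget; raising sanlock's io timeout (and accepting the correspondingly slower fencing) or giving storage a redundant path are the usual mitigations.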
2 years, 6 months
Ovirt 4.3 Upload of Image fails
by Mark Morgan
Hi, I am trying to upload an image to an oVirt 4.3 instance but it keeps
failing.
After a few seconds it says "paused by system".
The test connection is successful in the upload image window, so we have
installed the certificate properly.
Due to an older thread
(https://www.mail-archive.com/users@ovirt.org/msg50954.html) I also checked
whether it has something to do with wifi, but I am not even using a wifi
connection.
Here is a small part of the log, where you can see the transfer failing.
2021-09-29 11:44:43,011+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
(default task-96804) [d370a18b-bb12-4992-9fc8-7ce6607358f8] Running
command: TransferImageStatusCommand internal: false. Entities affected
: ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_DISK with role type USER
2021-09-29 11:44:43,055+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
(default task-96804) [1cbc3b4f-b1d4-428a-965a-b9745fd0e108] Running
command: TransferImageStatusCommand internal: false. Entities affected
: ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_DISK with role type USER
2021-09-29 11:44:43,056+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater]
(default task-96804) [1cbc3b4f-b1d4-428a-965a-b9745fd0e108] Updating
image transfer 0681f799-f44f-4b1e-8369-4d1033bd81e6 (image
ce221b1f-46aa-4eb4-b159-0e0adb762102) phase to Resuming (message: 'Sent
0MB')
2021-09-29 11:44:47,096+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
(default task-96801) [50849f1b-ef18-41ab-9380-e2c7980a1f73] Running
command: TransferImageStatusCommand internal: false. Entities affected
: ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_DISK with role type USER
2021-09-29 11:44:48,878+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] Resuming transfer for Upload disk
'CentOS-8.4.2105-x86_64-boot.iso' (disk id:
'ce221b1f-46aa-4eb4-b159-0e0adb762102', image id:
'45896ce1-a602-49f5-9774-4dc17d960589')
2021-09-29 11:44:48,896+02 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] EVENT_ID:
TRANSFER_IMAGE_RESUMED_BY_USER(1,074), Image transfer was resumed by
user (admin@internal-authz).
2021-09-29 11:44:48,902+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] Renewing transfer ticket for
Upload disk 'CentOS-8.4.2105-x86_64-boot.iso' (disk id:
'ce221b1f-46aa-4eb4-b159-0e0adb762102', image id:
'45896ce1-a602-49f5-9774-4dc17d960589')
2021-09-29 11:44:48,903+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ExtendImageTicketVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] START,
ExtendImageTicketVDSCommand(HostName = virthost01,
ExtendImageTicketVDSCommandParameters:{hostId='15d10fdf-4dc1-4a4c-a12f-cab50c492974',
ticketId='8d09cf8c-baf9-4497-8b52-ea53a97b4a19', timeout='300'}), log
id: 197aba7
2021-09-29 11:44:48,908+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ExtendImageTicketVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] FINISH,
ExtendImageTicketVDSCommand, return: StatusOnlyReturn [status=Status
[code=0, message=Done]], log id: 197aba7
2021-09-29 11:44:48,908+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] Transfer session with ticket id
8d09cf8c-baf9-4497-8b52-ea53a97b4a19 extended, timeout 300 seconds
2021-09-29 11:44:48,920+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] Updating image transfer
0681f799-f44f-4b1e-8369-4d1033bd81e6 (image
ce221b1f-46aa-4eb4-b159-0e0adb762102) phase to Transferring (message:
'Sent 0MB')
2021-09-29 11:44:51,379+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
(default task-96801) [e2247750-524d-40e4-bffb-1176ff13f1f5] Running
command: TransferImageStatusCommand internal: false. Entities affected
: ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_DISK with role type USER
2021-09-29 11:44:55,376+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
(default task-96801) [f9b3dec1-9aac-4695-ba39-43e5e66bdccd] Running
command: TransferImageStatusCommand internal: false. Entities affected
: ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_DISK with role type USER
Am I doing something wrong?
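"Paused by system" shortly after the transfer starts is often a TLS problem between the browser and ovirt-imageio-proxy, even when the test connection succeeds. It may be worth re-importing the engine CA certificate into the browser; the engine serves it at a well-known PKI resource URL, which this illustrative helper builds:

```python
def engine_ca_url(engine_fqdn):
    """Well-known path under which the oVirt engine serves its CA cert."""
    return ("https://{}/ovirt-engine/services/pki-resource"
            "?resource=ca-certificate&format=X509-PEM-CA").format(engine_fqdn)
```

Downloading that file and importing it into the browser's trust store (or uploading via the SDK/CLI instead of the browser) usually distinguishes a certificate problem from a daemon-side one.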
2 years, 6 months
Failed to update OVF disks / Failed to update VMs/Templates OVF data for Storage Domain
by nicolas@devels.es
Hi,
We upgraded from oVirt 4.3.8 to 4.4.8 and sometimes we're finding events
like these in the event log (3-4 times/day):
Failed to update OVF disks 77818843-f72e-4d40-9354-4e1231da341f, OVF
data isn't updated on those OVF stores (Data Center KVMRojo, Storage
Domain pv04-003).
Failed to update VMs/Templates OVF data for Storage Domain pv02-002
in Data Center KVMRojo.
I found [1]; however, it does not seem to solve the issue. I restarted all
the hosts and we are still getting the messages.
We couldn't upgrade the hosts to 4.4 yet, FWIW. Maybe that's the cause?
If someone could shed some light about this, I'd be grateful.
Thanks.
[1]: https://access.redhat.com/solutions/3353011
2 years, 6 months
Managed Block Storage and Templates
by Shantur Rathore
Hi all,
Has anyone tried using templates with Managed Block Storage?
I created a VM on MBS and then took a snapshot.
This worked, but as soon as I created a template from the snapshot, the
template was created with no disk attached to it.
Is anyone seeing something similar?
Thanks
2 years, 6 months