Single Node 4.3 to 4.4 help
by Wesley Stewart
I have read through many posts and I think this process seems fairly simple.
https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3
But I just wanted to see if anyone had any gotchas. I am thinking of
using either:
- RHEL 8 (via the developer program; probably the best bet at the moment)
- oVirt Node (is oVirt Node being deprecated, since it is based on
CentOS?)
- Rocky Linux/AlmaLinux/Clear Linux
From my understanding, I should:
- Enter global maintenance
- Make a full engine backup
- Reinstall the host with a supported OS
- Redeploy the oVirt engine from the backup
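The steps above map roughly onto the following commands (a sketch only; the backup filename is a placeholder, and the exact hosted-engine / engine-backup options should be checked against the upgrade guide):

```shell
# Dry-run sketch of the single-node 4.3 -> 4.4 procedure above.
# DRY_RUN=1 (the default here) only prints each command instead of running it.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

upgrade_plan() {
    # 1. Global maintenance, so the HA agents leave the engine VM alone
    run hosted-engine --set-maintenance --mode=global
    # 2. Full engine backup (copy the file off the host before reinstalling!)
    run engine-backup --scope=all --mode=backup \
        --file=engine-backup.tar.gz --log=engine-backup.log
    # 3. Reinstall the host with the new OS (manual step)
    # 4. Redeploy the engine on the fresh 4.4 host from the backup
    run hosted-engine --deploy --restore-from-file=engine-backup.tar.gz
}

DRY_RUN=1
PLAN=$(upgrade_plan)
echo "$PLAN"
```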
Looking forward to trying out oVirt 4.4!
3 years, 1 month
Ovirt VLAN Primer
by David Johnson
Good morning all,
On my oVirt 4.4.4 cluster, I am trying to use VLANs to separate VMs for
security purposes.
Is there a usable how-to document that describes how to configure the
VLANs so they actually function, without putting the host into
non-operational mode?
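One common cause of hosts going non-operational is a network that is marked "required" in the cluster but not attached to every host, so either attaching the VLAN everywhere or marking it non-required usually avoids that. As a sketch, a tagged network can be defined through the REST API roughly like this (engine URL, credentials, and the data-center id are placeholders, and the exact XML shape should be verified against the API reference):

```shell
# Hypothetical XML body for creating a VLAN-tagged VM network via the
# oVirt REST API; the data-center id is a placeholder.
BODY=$(cat <<'EOF'
<network>
  <name>vm_vlan100</name>
  <data_center id="DATACENTER-UUID"/>
  <vlan id="100"/>
</network>
EOF
)
echo "$BODY"
# It would be POSTed roughly like this (left commented out):
# curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
#      -d "$BODY" https://engine.example.com/ovirt-engine/api/networks
# "required" is then toggled per cluster (under clusters/{id}/networks),
# and the VLAN must also be trunked on the physical switch ports.
```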
Thank you in advance.
Regards,
David Johnson
NodeNG persistence for custom vdsm hooks and firmware
by Shantur Rathore
Hi all,
I have oVirt Node (NG) 4.4.4 installed and would like to know the best
way of persisting custom VDSM hooks and some firmware binaries across
updates. I tried updating to 4.4.5-pre and lost my hooks and firmware.
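Not an official mechanism, but on image-based Node the /usr tree is replaced on upgrade while /var persists, so one workaround is to keep master copies under a persisted path and copy them back after each update — a sketch (all paths are examples):

```shell
# Hypothetical re-sync of persisted files after a Node update.
# Keep masters under a persisted path (e.g. /var/local/persist) mirroring
# the target layout, then copy them back into place.
persist_sync() {
    src="$1"   # e.g. /var/local/persist
    dst="$2"   # normally /
    ( cd "$src" && find . -type f ) | while read -r f; do
        mkdir -p "$dst/$(dirname "$f")"
        cp -p "$src/$f" "$dst/$f"
    done
}
# Example layout of the persisted tree:
#   /var/local/persist/usr/libexec/vdsm/hooks/before_vm_start/50_myhook
#   /var/local/persist/usr/lib/firmware/mydev.bin
# Then after each update (or from a systemd oneshot unit at boot):
#   persist_sync /var/local/persist /
```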
Thanks,
Shantur
Storage domain problems. oVirt 4.4.4.
by Yury Valitsky
Hi, all.
I deployed a new oVirt 4.4.4 installation and imported FC storage domains.
However, two storage domains turned out to be empty: there were no
virtual machines in the "VM Import" tab, and the virtual disks appeared
in the "Disk Import" tab with empty aliases.
I downloaded and inspected the OVF_STORE; the old information had been
partially overwritten by new data reporting zero VMs.
Trying to restore the VM configuration information, I deployed a backup
of the old hosted engine (also version 4.4.4).
This way I was able to restore the OVF_STORE of the first problematic
storage domain.
But the second storage domain gave an error when updating OVF_STORE.
The same error occurs when trying to export a VM or copy a virtual disk to
another known good storage domain.
At the same time, I can successfully download disks from the second
storage domain to my PC by clicking the Download button and saving the
*.raw file, but snapshots cannot be saved this way.
1. How can I overwrite the OVF_STORE on the second storage domain? If
that requires restoring the oVirt metadata on the storage domain, how
can I do it correctly?
2. How can I download the VM configuration that was saved in the hosted
engine?
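On question 2: as far as I know the OVF_STORE disks are plain tar archives holding one OVF file per VM, so a downloaded copy can be inspected with tar — a sketch (the filename is a placeholder):

```shell
# List the per-VM OVF configuration files inside a downloaded OVF_STORE
# disk image (which is expected to be a tar archive).
list_ovfs() {
    tar -tf "$1" | grep '\.ovf$'
}
# Example: list_ovfs ovf_store.raw
# Each entry is named <vm-id>.ovf; a single one can be extracted with:
#   tar -xf ovf_store.raw <vm-id>.ovf
```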
Thanks in advance for your help.
WBR, Yury Valitsky
ovirt 4.4 and CentOS 8 and multipath with Equallogic
by Gianluca Cecchi
Hello,
I'm upgrading some environments from 4.3 to 4.4.
Storage domains are iSCSI based, connected to an Equallogic storage array
(PS-6510ES), which is recognized with the following vendor/product with
respect to multipath configuration:
# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 02 Id: 00 Lun: 00
Vendor: DELL Model: PERC H730 Mini Rev: 4.30
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi15 Channel: 00 Id: 00 Lun: 00
Vendor: EQLOGIC Model: 100E-00 Rev: 8.1
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi16 Channel: 00 Id: 00 Lun: 00
Vendor: EQLOGIC Model: 100E-00 Rev: 8.1
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi17 Channel: 00 Id: 00 Lun: 00
Vendor: EQLOGIC Model: 100E-00 Rev: 8.1
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi18 Channel: 00 Id: 00 Lun: 00
Vendor: EQLOGIC Model: 100E-00 Rev: 8.1
Type: Direct-Access ANSI SCSI revision: 05
#
Passing from 4.3 to 4.4 implies passing from CentOS 7 to CentOS 8.
The vendor is not in the multipath built-in device database (it wasn't in
6 and 7 either).
In 7 I used this snippet to get no_path_retry set:
# VDSM REVISION 1.5
# VDSM PRIVATE
defaults {
. . .
no_path_retry 4
. . .
}
. . .
devices {
device {
# These settings overrides built-in devices settings. It does
# not apply to devices without built-in settings (these use the
# settings in the "defaults" section), or to devices defined in
# the "devices" section.
all_devs yes
no_path_retry 4
}
device {
vendor "EQLOGIC"
product "100E-00"
path_selector "round-robin 0"
path_grouping_policy multibus
path_checker tur
rr_min_io_rq 10
rr_weight priorities
failback immediate
features "0"
}
}
and confirmation of applied config with
# multipath -l
36090a0c8d04f21111fc4251c7c08d0a3 dm-14 EQLOGIC ,100E-00
size=2.4T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 16:0:0:0 sdc 8:32 active undef running
`- 18:0:0:0 sde 8:64 active undef running
36090a0d88034667163b315f8c906b0ac dm-13 EQLOGIC ,100E-00
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 15:0:0:0 sdb 8:16 active undef running
`- 17:0:0:0 sdd 8:48 active undef running
and
# multipath -r -v3 | grep no_path_retry
Jan 29 11:51:17 | 36090a0d88034667163b315f8c906b0ac: no_path_retry = 4
(config file default)
Jan 29 11:51:17 | 36090a0c8d04f21111fc4251c7c08d0a3: no_path_retry = 4
(config file default)
In 8 I get this; note also the strange line about a missing vendor or
product parameter, which is not actually true:
# multipath -l
Jan 29 11:52:02 | device config in /etc/multipath.conf missing vendor or
product parameter
36090a0c8d04f21111fc4251c7c08d0a3 dm-13 EQLOGIC,100E-00
size=2.4T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 16:0:0:0 sdc 8:32 active undef running
`- 18:0:0:0 sde 8:64 active undef running
36090a0d88034667163b315f8c906b0ac dm-12 EQLOGIC,100E-00
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 15:0:0:0 sdb 8:16 active undef running
`- 17:0:0:0 sdd 8:48 active undef running
# multipath -r -v3 | grep no_path_retry
Jan 29 11:53:54 | device config in /etc/multipath.conf missing vendor or
product parameter
Jan 29 11:53:54 | set open fds limit to 8192/262144
Jan 29 11:53:54 | loading /lib64/multipath/libchecktur.so checker
Jan 29 11:53:54 | checker tur: message table size = 3
Jan 29 11:53:54 | loading /lib64/multipath/libprioconst.so prioritizer
Jan 29 11:53:54 | foreign library "nvme" loaded successfully
Jan 29 11:53:54 | delegating command to multipathd
So in my opinion it is currently using queue_if_no_path in 8.
Can anyone give me any insight into the message, and into how to force
no_path_retry?
Was there perhaps any major change in multipath from 7 to 8?
The 8 config is this one:
# VDSM REVISION 2.0
# VDSM PRIVATE
defaults {
. . .
polling_interval 5
. . .
# We use a small number of retries to protect from short outage.
# Assuming the default polling_interval (5 seconds), this gives
# extra 20 seconds grace time before failing the I/O.
no_path_retry 16
. . .
}
# Blacklist internal disk
blacklist {
wwid "36d09466029914f0021e89c5710e256be"
}
# Remove devices entries when overrides section is available.
devices {
device {
# These settings overrides built-in devices settings. It does
# not apply to devices without built-in settings (these use the
# settings in the "defaults" section), or to devices defined in
# the "devices" section.
#### Not valid any more in RH EL 8.3
#### all_devs yes
#### no_path_retry 16
}
device {
vendor "EQLOGIC"
product "100E-00"
path_selector "round-robin 0"
path_grouping_policy multibus
path_checker tur
rr_min_io_rq 10
rr_weight priorities
failback immediate
features "0"
}
}
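For what it's worth, all_devs was dropped from device-mapper-multipath in RHEL 8 (hence the "Not valid any more in RH EL 8.3" comment above), and the now-empty device {} entry is probably what triggers the "missing vendor or product parameter" warning. The documented replacement, hinted at by the "Remove devices entries when overrides section is available" comment, is an overrides section, which applies its settings to all devices — a sketch, to be verified against the multipath.conf man page shipped with 8:

```
# Replacement for the removed "all_devs yes" device entry on RHEL 8:
# settings here override built-in and devices-section settings for
# all devices.
overrides {
    no_path_retry 16
}
```

Dropping the empty device {} entry at the same time should also silence the warning.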
Thanks in advance,
Gianluca
Mellanox SR-IOV
by Jorge Visentini
Hi.
I have a dual-port Mellanox card and enabled SR-IOV; however, the engine
shows SR-IOV enabled on only one of the ports.
I created the virtual functions on the host through the CLI and that
worked, but the VMs pass no traffic, and the engine shows virtual
functions on only one of the ports.
When I try to create the virtual functions from the engine, it shows the
error in the attached screenshots.
Is this information correct, or am I doing something wrong?
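On the host side, the number of virtual functions is normally set through sysfs, with one entry per physical function, so each port of a dual-port card has to be enabled separately — a sketch (interface names are placeholders):

```shell
# Set the number of virtual functions on each port of a dual-port NIC
# via sysfs. SYSFS is overridable so the logic can be exercised in a
# scratch directory instead of on a real host.
SYSFS="${SYSFS:-/sys}"
set_numvfs() {
    ifname="$1"; count="$2"
    # On real hardware the value must be reset to 0 before changing it.
    echo "$count" > "$SYSFS/class/net/$ifname/device/sriov_numvfs"
}
# Each physical port is a separate PCI function with its own counter,
# so both ports have to be enabled explicitly, e.g.:
#   set_numvfs enp65s0f0 4
#   set_numvfs enp65s0f1 4
```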
[image: photo_2021-02-02_13-07-49.jpg]
[image: photo_2021-02-02_12-37-56.jpg]
[image: Screenshot_10.jpg]
[image: Screenshot_11.jpg]
--
Att,
Jorge Visentini
+55 55 98432-9868
Users cannot create disks in portal
by jwmccullen@pima.edu
Help, banging my head against a wall.
Ovirt Engine Software Version:4.4.4.7-1.el8
RHEL 8.3
I have set up my cluster with TrueNAS storage. Everything works great from Admin or any user with SuperUser. I would like to set things up so that students can create their own machines.
Plowing through the documentation, it looks like this works by creating a user and granting them, on the Data Center object, the user roles:
PowerUserRole
VmCreator
At that level, according to my reading of the documentation, this is also supposed to grant DiskCreator automatically.
Well, when I do that it creates the VM, but the creation of the disk does not work. Looking in the log I find:
Validation of action 'AddImageFromScratch' failed for user will@internal-authz. Reasons: VAR__TYPE__STORAGE__DOMAIN,NON_ADMIN_USER_NOT_AUTHORIZED_TO_PERFORM_ACTION_ON_HE
Could anyone please let me know what this is referring to? I have tried many role combinations, and have even made my own user role with full privileges, to no avail. The closest issue I could find is:
Bug 1511697 - [RFE] Unable to set permission on all but Hosted-Engine VM and Storage Domain
Which showed as resolved in 4.3
Could someone be kind enough to maybe tell me where I am missing it? I'll be the first to admit I can be a little slow at times.
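For what it's worth, the ON_HE suffix in the validation message suggests the disk creation is targeting the hosted_storage domain, where non-admin actions are blocked; pointing the disk at a regular data domain, with the user granted DiskCreator there, might avoid it. As a sketch, granting the role on one storage domain through the REST API could look like this (ids and credentials are placeholders, and the payload shape should be checked against the API reference):

```shell
# Hypothetical XML body for granting DiskCreator to one user on one
# storage domain; the user id is a placeholder.
BODY=$(cat <<'EOF'
<permission>
  <role>
    <name>DiskCreator</name>
  </role>
  <user id="USER-UUID"/>
</permission>
EOF
)
echo "$BODY"
# Would be POSTed roughly like this (left commented out):
# curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
#   -d "$BODY" \
#   https://engine.example.com/ovirt-engine/api/storagedomains/SD-UUID/permissions
```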
Thanks.
For added detail:
2021-01-30 13:53:55,592-05 INFO [org.ovirt.engine.core.bll.AddVmFromScratchCommand] (default task-26) [c0df3518-23b0-4311-9418-d9e192d9874f] Lock Acquired to object 'EngineLock:{exclusiveLocks='[willtest4=VM_NAME]', sharedLocks=''}'
2021-01-30 13:53:55,632-05 INFO [org.ovirt.engine.core.bll.AddVmFromScratchCommand] (default task-26) [] Running command: AddVmFromScratchCommand internal: false. Entities affected : ID: 9e5b3a76-4000-11eb-82a1-00163e3be3c4 Type: ClusterAction group CREATE_VM with role type USER
2021-01-30 13:53:55,691-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [] EVENT_ID: USER_ADD_VM(34), VM willtest4 was created by will@internal-authz.
2021-01-30 13:53:55,693-05 INFO [org.ovirt.engine.core.bll.AddVmFromScratchCommand] (default task-26) [] Lock freed to object 'EngineLock:{exclusiveLocks='[willtest4=VM_NAME]', sharedLocks=''}'
2021-01-30 13:53:55,845-05 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-26) [13dfcdfc-10af-4840-9433-68d84fd05daf] Lock Acquired to object 'EngineLock:{exclusiveLocks='[willtest4=VM_NAME]', sharedLocks='[361a430e-ef3d-4dee-bef6-256651bee6c0=VM]'}'
2021-01-30 13:53:55,866-05 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-26) [13dfcdfc-10af-4840-9433-68d84fd05daf] Running command: UpdateVmCommand internal: false. Entities affected : ID: 361a430e-ef3d-4dee-bef6-256651bee6c0 Type: VMAction group EDIT_VM_PROPERTIES with role type USER
2021-01-30 13:53:55,881-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [13dfcdfc-10af-4840-9433-68d84fd05daf] EVENT_ID: USER_UPDATE_VM(35), VM willtest4 configuration was updated by will@internal-authz.
2021-01-30 13:53:55,883-05 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-26) [13dfcdfc-10af-4840-9433-68d84fd05daf] Lock freed to object 'EngineLock:{exclusiveLocks='[willtest4=VM_NAME]', sharedLocks='[361a430e-ef3d-4dee-bef6-256651bee6c0=VM]'}'
2021-01-30 13:53:56,210-05 INFO [org.ovirt.engine.core.bll.network.vm.AddVmInterfaceCommand] (default task-26) [55ce945c-425e-400a-876b-b65d4a4f2d7d] Running command: AddVmInterfaceCommand internal: false. Entities affected : ID: 361a430e-ef3d-4dee-bef6-256651bee6c0 Type: VMAction group CONFIGURE_VM_NETWORK with role type USER, ID: 8501221e-bff1-487c-8db5-685422f95022 Type: VnicProfileAction group CONFIGURE_VM_NETWORK with role type USER
2021-01-30 13:53:56,232-05 INFO [org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand] (default task-26) [7c8579df] Running command: ActivateDeactivateVmNicCommand internal: true. Entities affected : ID: 361a430e-ef3d-4dee-bef6-256651bee6c0 Type: VM
2021-01-30 13:53:56,237-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [7c8579df] EVENT_ID: NETWORK_ACTIVATE_VM_INTERFACE_SUCCESS(1,012), Network Interface nic1 (VirtIO) was plugged to VM willtest4. (User: will@internal-authz)
2021-01-30 13:53:56,241-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [7c8579df] EVENT_ID: NETWORK_ADD_VM_INTERFACE(932), Interface nic1 (VirtIO) was added to VM willtest4. (User: will@internal-authz)
2021-01-30 13:53:56,430-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-26) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Lock Acquired to object 'EngineLock:{exclusiveLocks='', sharedLocks='[361a430e-ef3d-4dee-bef6-256651bee6c0=VM]'}'
2021-01-30 13:53:56,455-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-26) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Running command: AddDiskCommand internal: false. Entities affected : ID: 361a430e-ef3d-4dee-bef6-256651bee6c0 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER, ID: 768bbdab-3a53-4341-8144-3ceb29db23c9 Type: StorageAction group CREATE_DISK with role type USER
2021-01-30 13:53:56,460-05 WARN [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (default task-26) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Validation of action 'AddImageFromScratch' failed for user will@internal-authz. Reasons: VAR__TYPE__STORAGE__DOMAIN,NON_ADMIN_USER_NOT_AUTHORIZED_TO_PERFORM_ACTION_ON_HE
2021-01-30 13:53:56,462-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (default task-26) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Lock freed to object 'EngineLock:{exclusiveLocks='', sharedLocks='[361a430e-ef3d-4dee-bef6-256651bee6c0=VM]'}'
2021-01-30 13:53:56,473-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] EVENT_ID: USER_FAILED_ADD_DISK_TO_VM(79), Add-Disk operation failed on VM willtest4 (User: will@internal-authz).
2021-01-30 13:53:56,475-05 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-26) [] Operation Failed: []
2021-01-30 13:53:56,694-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-28) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Getting volume info for image '68480589-8585-41e5-a37c-9213b58fd5f6/00000000-0000-0000-0000-000000000000'
2021-01-30 13:53:56,695-05 ERROR [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-28) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Failed to get volume info: org.ovirt.engine.core.common.errors.EngineException: EngineException: No host was found to perform the operation (Failed with error RESOURCE_MANAGER_VDS_NOT_FOUND and code 5004)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.utils.VdsCommandsHelper.runVdsCommand(VdsCommandsHelper.java:86)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.utils.VdsCommandsHelper.runVdsCommandWithFailover(VdsCommandsHelper.java:70)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler.getVolumeInfoFromVdsm(ImagesHandler.java:857)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback.childCommandsExecutionEnded(AddDiskCommandCallback.java:44)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:181)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:360)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:511)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:227)
2021-01-30 13:53:56,695-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-28) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Command 'AddDisk' id: '82b5a690-8b7f-4568-9b41-0f777a461adb' child commands '[acc2d284-5dc3-49f9-b81a-d2612eb2c999]' executions were completed, status 'FAILED'
2021-01-30 13:53:56,698-05 INFO [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-28) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Exception in invoking callback of command AddDisk (82b5a690-8b7f-4568-9b41-0f777a461adb): EngineException: EngineException: No host was found to perform the operation (Failed with error RESOURCE_MANAGER_VDS_NOT_FOUND and code 5004)
2021-01-30 13:53:56,698-05 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-28) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Error invoking callback method 'onFailed' for 'EXECUTION_FAILED' command '82b5a690-8b7f-4568-9b41-0f777a461adb'
2021-01-30 13:53:56,698-05 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-28) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Exception: org.ovirt.engine.core.common.errors.EngineException: EngineException: No host was found to perform the operation (Failed with error RESOURCE_MANAGER_VDS_NOT_FOUND and code 5004)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.utils.VdsCommandsHelper.runVdsCommand(VdsCommandsHelper.java:86)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.utils.VdsCommandsHelper.runVdsCommandWithFailover(VdsCommandsHelper.java:70)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler.getVolumeInfoFromVdsm(ImagesHandler.java:857)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback.childCommandsExecutionEnded(AddDiskCommandCallback.java:44)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:181)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:360)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:511)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:227)
2021-01-30 13:53:57,704-05 ERROR [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' with failure.
2021-01-30 13:53:57,707-05 ERROR [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' with failure.
2021-01-30 13:53:57,732-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [] EVENT_ID: USER_ADD_DISK_FINISHED_FAILURE(2,022), Add-Disk operation failed to complete.
Achieving high availability
by Andrea Chierici
Hi,
I am trying to configure high availability on a critical VM. My oVirt
version is 4.3.10.4-1.el7.
My question is about the best settings to use, since it's not
completely clear to me how they influence availability.
What I want is simply to have a specific machine ALWAYS running, or at
least available as much as possible. I have no special requirements for
the VM; if it goes down, simply restarting it is fine.
Here is my current setting. Is it good, or should I change anything?
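For a simple restart-on-failure policy it is usually enough to mark the VM highly available and give it a priority (on 4.3 a VM lease on a storage domain can additionally protect against split-brain). As a sketch, the equivalent REST API update might look like this (ids and credentials are placeholders):

```shell
# Hypothetical XML body for marking a VM highly available with a
# high restart priority.
BODY=$(cat <<'EOF'
<vm>
  <high_availability>
    <enabled>true</enabled>
    <priority>100</priority>
  </high_availability>
</vm>
EOF
)
echo "$BODY"
# PUT to the VM, roughly like this (left commented out):
# curl -k -u 'admin@internal:PASSWORD' -X PUT -H 'Content-Type: application/xml' \
#   -d "$BODY" https://engine.example.com/ovirt-engine/api/vms/VM-UUID
```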
Thanks,
Andrea
--
Andrea Chierici - INFN-CNAF
Viale Berti Pichat 6/2, 40127 BOLOGNA
Office Tel: +39 051 2095463
SkypeID ataruz
--
oVirt Storage DRS feature
by divan@santanas.co.za
Greetings all :)
I'm wondering if oVirt supports the feature that, in VMware land, is
called SDRS [1].
The idea is simple, and I'm sure you are all aware of it.
You have a "cluster" of storage domains; the engine monitors the backend
storage domains and automatically balances the VMs across them based on
I/O latency and disk usage.
If not, how are others out there managing this?
One could balance manually, but that's clearly not ideal.
[1]: https://www.youtube.com/watch?v=z77xmaxoNec
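Lacking such a feature, manual balancing usually comes down to moving disks to the domain with the most free space; a trivial helper to pick the target from figures taken out of the Admin Portal or the API:

```shell
# Pick the storage domain with the most free space from
# "name free_gb" lines on stdin.
pick_target() {
    sort -k2,2 -rn | head -n 1 | awk '{print $1}'
}
printf 'sd_fast 120\nsd_bulk 900\nsd_old 45\n' | pick_target   # prints sd_bulk
```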
--
Divan Santana
https://divansantana.com
Hello. Which environment type should I choose?
by 欧文
Dear oVirt team,
Hello, my name is Owen. First, thank you for taking the time to read my
email. I would like to use oVirt to build a project, but after reading
the official documentation I have some questions. The project is
intended to set up a physical server for virtualization, serving a
variable number of thin clients for users, but I don't know which
environment type I should choose for the deployment. My second question
is why the supported system environment differs between oVirt versions:
CentOS 7 is the major version, yet many oVirt releases target neither
CentOS 7 nor CentOS 8. Can you tell me the reason? Finally, I joined the
IRC channel, but nobody answered my question there. Could you tell me
where I can take part in the community with the developers who work on
oVirt? Thanks