The he_fqdn proposed for the engine VM resolves on this host - error?
by lejeczek
hi chaps,
a newcomer here. I'm using Cockpit to deploy the hosted engine, and I
get this error/warning message:
"The he_fqdn proposed for the engine VM resolves on this host"
I should mention that if I remove the IP to which the FQDN
resolves from that iface (plain eth, no VLANs), then I get this:
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false,
"msg": "The selected network interface is not valid"}
All these errors seem a bit too cryptic to me.
Could you shed a bit of light on what exactly oVirt is saying, and
why it's not happy with that setup?
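If I understand the check, deploy is comparing the result of the FQDN
lookup against the addresses configured locally, i.e. something like this
(hostname and iface here are just examples):
# getent hosts ovirt-engine.example.com
# ip -4 addr show eth0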
many thanks, L.
Upgrade from 4.4.3 to 4.4.4 (oVirt Node) - vdsmd.service/start failed with result 'dependency'
by Marco Fais
Hi all,
I have just upgraded one of my oVirt nodes from 4.4.3 to 4.4.4.
After the reboot, the 4.4.4 image is correctly loaded but vdsmd is not
starting due to this error:
vdsmd.service: Job vdsmd.service/start failed with result 'dependency'.
It looks like it has a dependency on mom-vdsm, and that unit in turn has a
dependency issue:
mom-vdsm.service: Job mom-vdsm.service/start failed with result
'dependency'.
After some investigation, it looks like mom-vdsm has a dependency
on ovsdb-server, and this is the unit causing the problem:
ovs-delete-transient-ports.service: Starting requested but asserts failed.
Assertion failed for Open vSwitch Delete Transient Ports
Failed to start Open vSwitch Database Unit.
Details below:
-- Unit ovsdb-server.service has begun starting up.
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net chown[13658]: /usr/bin/chown:
cannot access '/var/run/openvswitch': No such file or directory
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net ovs-ctl[13667]:
/etc/openvswitch/conf.db does not exist ... (warning).
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net ovsdb-tool[13714]:
ovs|00001|lockfile|WARN|/etc/openvswitch/.conf.db.~lock~: failed to open
lock file: Permission denied
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net ovs-ctl[13667]: Creating
empty database /etc/openvswitch/conf.db ovsdb-tool: I/O error:
/etc/openvswitch/conf.db: failed to lock lockfile (Resource temporarily
unavailable)
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net ovsdb-tool[13714]:
ovs|00002|lockfile|WARN|/etc/openvswitch/.conf.db.~lock~: failed to lock
file: Resource temporarily unavailable
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net ovs-ctl[13667]: [FAILED]
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net systemd[1]:
ovsdb-server.service: Control process exited, code=exited status=1
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net systemd[1]:
ovsdb-server.service: Failed with result 'exit-code'.
-- Subject: Unit failed
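In case anyone wants to retrace it, I walked the dependency chain with the
usual systemd tooling:
# systemctl list-dependencies vdsmd.service
# systemctl status ovsdb-server.service
# journalctl -b -u ovsdb-server.service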
Any suggestions?
Thanks,
Marco
Storage domain problems. oVirt 4.4.4.
by Yury Valitsky
Hi, all.
I deployed a new oVirt 4.4.4 installation and imported FC storage domains.
However, two storage domains turned out to be empty: they showed no
virtual machines in the "VM Import" tab, and their virtual disks appeared
in the "Disk Import" tab with empty aliases.
I downloaded and viewed the OVF_STORE; the old information had been
partially overwritten by new data listing zero VMs.
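For reference, the OVF_STORE volume appears to be a plain tar archive of
per-VM .ovf files, so after downloading it I could inspect it like this
(the file name is just what I saved it as):
# tar -tvf ovf_store.img
# tar -xOf ovf_store.img <vm-id>.ovf | less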
Trying to restore the VM configuration information, I deployed a backup of
the old hosted engine (also version 4.4.4).
That way, I was able to restore the OVF_STORE of the first problematic
storage domain.
But the second storage domain gave an error when updating OVF_STORE.
The same error occurs when trying to export a VM or copy a virtual disk to
another known-good storage domain.
At the same time, I can successfully download disks from the second storage
domain to my PC by clicking the Download button and saving them as *.raw.
But snapshots cannot be saved this way.
1. How can I overwrite OVF_STORE on the second storage domain? If for this
I need to restore the oVirt metadata on the storage domain, how can I do it
correctly?
2. How can I download the VM configuration that has been saved in the
hosted engine?
Thanks in advance for your help.
WBR, Yury Valitsky
ovirt 4.4 and CentOS 8 and multipath with Equallogic
by Gianluca Cecchi
Hello,
I'm upgrading some environments from 4.3 to 4.4.
Storage domains are iSCSI-based, connected to an Equallogic storage array
(PS-6510ES), which is recognized with the following vendor/product, as it
relates to the multipath configuration:
# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 02 Id: 00 Lun: 00
Vendor: DELL Model: PERC H730 Mini Rev: 4.30
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi15 Channel: 00 Id: 00 Lun: 00
Vendor: EQLOGIC Model: 100E-00 Rev: 8.1
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi16 Channel: 00 Id: 00 Lun: 00
Vendor: EQLOGIC Model: 100E-00 Rev: 8.1
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi17 Channel: 00 Id: 00 Lun: 00
Vendor: EQLOGIC Model: 100E-00 Rev: 8.1
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi18 Channel: 00 Id: 00 Lun: 00
Vendor: EQLOGIC Model: 100E-00 Rev: 8.1
Type: Direct-Access ANSI SCSI revision: 05
#
Going from 4.3 to 4.4 implies going from CentOS 7 to 8.
The vendor is not in the multipath "database" (it wasn't in 6 or 7 either).
In 7 I used this snippet to get no_path_retry set:
# VDSM REVISION 1.5
# VDSM PRIVATE
defaults {
. . .
no_path_retry 4
. . .
}
. . .
devices {
device {
# These settings overrides built-in devices settings. It does
# not apply to devices without built-in settings (these use the
# settings in the "defaults" section), or to devices defined in
# the "devices" section.
all_devs yes
no_path_retry 4
}
device {
vendor "EQLOGIC"
product "100E-00"
path_selector "round-robin 0"
path_grouping_policy multibus
path_checker tur
rr_min_io_rq 10
rr_weight priorities
failback immediate
features "0"
}
}
and confirmed the applied config with:
# multipath -l
36090a0c8d04f21111fc4251c7c08d0a3 dm-14 EQLOGIC ,100E-00
size=2.4T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 16:0:0:0 sdc 8:32 active undef running
`- 18:0:0:0 sde 8:64 active undef running
36090a0d88034667163b315f8c906b0ac dm-13 EQLOGIC ,100E-00
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 15:0:0:0 sdb 8:16 active undef running
`- 17:0:0:0 sdd 8:48 active undef running
and
# multipath -r -v3 | grep no_path_retry
Jan 29 11:51:17 | 36090a0d88034667163b315f8c906b0ac: no_path_retry = 4
(config file default)
Jan 29 11:51:17 | 36090a0c8d04f21111fc4251c7c08d0a3: no_path_retry = 4
(config file default)
In 8 I get the following instead; note also the strange line about a
missing vendor or product, which does not seem right...
# multipath -l
Jan 29 11:52:02 | device config in /etc/multipath.conf missing vendor or
product parameter
36090a0c8d04f21111fc4251c7c08d0a3 dm-13 EQLOGIC,100E-00
size=2.4T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 16:0:0:0 sdc 8:32 active undef running
`- 18:0:0:0 sde 8:64 active undef running
36090a0d88034667163b315f8c906b0ac dm-12 EQLOGIC,100E-00
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 15:0:0:0 sdb 8:16 active undef running
`- 17:0:0:0 sdd 8:48 active undef running
# multipath -r -v3 | grep no_path_retry
Jan 29 11:53:54 | device config in /etc/multipath.conf missing vendor or
product parameter
Jan 29 11:53:54 | set open fds limit to 8192/262144
Jan 29 11:53:54 | loading /lib64/multipath/libchecktur.so checker
Jan 29 11:53:54 | checker tur: message table size = 3
Jan 29 11:53:54 | loading /lib64/multipath/libprioconst.so prioritizer
Jan 29 11:53:54 | foreign library "nvme" loaded successfully
Jan 29 11:53:54 | delegating command to multipathd
So, in my opinion, it is currently using queue_if_no_path in 8.
Can anyone give me some insight into the message, and into how to force
no_path_retry?
Was there perhaps some major change in multipath from 7 to 8?
The 8 config is this one:
# VDSM REVISION 2.0
# VDSM PRIVATE
defaults {
. . .
polling_interval 5
. . .
# We use a small number of retries to protect from short outage.
# Assuming the default polling_interval (5 seconds), this gives
# extra 20 seconds grace time before failing the I/O.
no_path_retry 16
. . .
}
# Blacklist internal disk
blacklist {
wwid "36d09466029914f0021e89c5710e256be"
}
# Remove devices entries when overrides section is available.
devices {
device {
# These settings overrides built-in devices settings. It does
# not apply to devices without built-in settings (these use the
# settings in the "defaults" section), or to devices defined in
# the "devices" section.
#### Not valid any more in RH EL 8.3
#### all_devs yes
#### no_path_retry 16
}
device {
vendor "EQLOGIC"
product "100E-00"
path_selector "round-robin 0"
path_grouping_policy multibus
path_checker tur
rr_min_io_rq 10
rr_weight priorities
failback immediate
features "0"
}
}
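If I read the RHEL 8.3 changes right, all_devs is gone and its job is now
done by an "overrides" section, and the leftover empty device { } block is
presumably what triggers the "missing vendor or product parameter"
warning. So I am tempted to drop that empty block and add something like
this instead (a sketch, reusing my value of 16):
overrides {
    no_path_retry 16
}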
Thanks in advance,
Gianluca
Users cannot create disks in portal
by jwmccullen@pima.edu
Help, banging my head against a wall.
oVirt Engine Software Version: 4.4.4.7-1.el8
RHEL 8.3
I have set up my cluster backed by TrueNAS storage. Everything is going great and works well as admin or as any user with the SuperUser role. I would like to set things up so that students can create their own machines.
Great; plowing through the documentation, it looks like the way to do this is to create a user and then grant them, on the Data Center object, the user roles:
PowerUserRole
VmCreator
At that level, according to my reading of the documentation, it is also supposed to grant DiskCreator automatically.
Well, when I do that, the VM is created but the disk creation fails. Looking in the log, I find:
Validation of action 'AddImageFromScratch' failed for user will@internal-authz. Reasons: VAR__TYPE__STORAGE__DOMAIN,NON_ADMIN_USER_NOT_AUTHORIZED_TO_PERFORM_ACTION_ON_HE
Could anyone please let me know what this is referring to? I have tried many role combinations, and have even made my own user role with full privileges, to no avail. I looked for existing issues, and the closest I could find is:
Bug 1511697 - [RFE] Unable to set permission on all but Hosted-Engine VM and Storage Domain
which showed as resolved in 4.3.
Could someone be kind enough to tell me what I am missing? I'll be the first to admit I can be a little slow at times.
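For reference, the kind of grant I have been testing can also be expressed
through the Python SDK; this sketch scopes DiskCreator to a single storage
domain (URL, credentials and names below are placeholders, not my real
setup):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.edu/ovirt-engine/api',
    username='admin@internal',
    password='***',
    ca_file='ca.pem',
)
try:
    system = connection.system_service()
    # Look up the target data domain and the user by name.
    sd = system.storage_domains_service().list(search='name=data_sd')[0]
    user = system.users_service().list(search='usrname=will@internal-authz')[0]
    # Grant DiskCreator scoped to just this storage domain.
    perms = system.storage_domains_service() \
                  .storage_domain_service(sd.id) \
                  .permissions_service()
    perms.add(types.Permission(
        role=types.Role(name='DiskCreator'),
        user=types.User(id=user.id),
    ))
finally:
    connection.close()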
Thanks.
For added detail:
2021-01-30 13:53:55,592-05 INFO [org.ovirt.engine.core.bll.AddVmFromScratchCommand] (default task-26) [c0df3518-23b0-4311-9418-d9e192d9874f] Lock Acquired to object 'EngineLock:{exclusiveLocks='[willtest4=VM_NAME]', sharedLocks=''}'
2021-01-30 13:53:55,632-05 INFO [org.ovirt.engine.core.bll.AddVmFromScratchCommand] (default task-26) [] Running command: AddVmFromScratchCommand internal: false. Entities affected : ID: 9e5b3a76-4000-11eb-82a1-00163e3be3c4 Type: ClusterAction group CREATE_VM with role type USER
2021-01-30 13:53:55,691-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [] EVENT_ID: USER_ADD_VM(34), VM willtest4 was created by will@internal-authz.
2021-01-30 13:53:55,693-05 INFO [org.ovirt.engine.core.bll.AddVmFromScratchCommand] (default task-26) [] Lock freed to object 'EngineLock:{exclusiveLocks='[willtest4=VM_NAME]', sharedLocks=''}'
2021-01-30 13:53:55,845-05 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-26) [13dfcdfc-10af-4840-9433-68d84fd05daf] Lock Acquired to object 'EngineLock:{exclusiveLocks='[willtest4=VM_NAME]', sharedLocks='[361a430e-ef3d-4dee-bef6-256651bee6c0=VM]'}'
2021-01-30 13:53:55,866-05 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-26) [13dfcdfc-10af-4840-9433-68d84fd05daf] Running command: UpdateVmCommand internal: false. Entities affected : ID: 361a430e-ef3d-4dee-bef6-256651bee6c0 Type: VMAction group EDIT_VM_PROPERTIES with role type USER
2021-01-30 13:53:55,881-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [13dfcdfc-10af-4840-9433-68d84fd05daf] EVENT_ID: USER_UPDATE_VM(35), VM willtest4 configuration was updated by will@internal-authz.
2021-01-30 13:53:55,883-05 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-26) [13dfcdfc-10af-4840-9433-68d84fd05daf] Lock freed to object 'EngineLock:{exclusiveLocks='[willtest4=VM_NAME]', sharedLocks='[361a430e-ef3d-4dee-bef6-256651bee6c0=VM]'}'
2021-01-30 13:53:56,210-05 INFO [org.ovirt.engine.core.bll.network.vm.AddVmInterfaceCommand] (default task-26) [55ce945c-425e-400a-876b-b65d4a4f2d7d] Running command: AddVmInterfaceCommand internal: false. Entities affected : ID: 361a430e-ef3d-4dee-bef6-256651bee6c0 Type: VMAction group CONFIGURE_VM_NETWORK with role type USER, ID: 8501221e-bff1-487c-8db5-685422f95022 Type: VnicProfileAction group CONFIGURE_VM_NETWORK with role type USER
2021-01-30 13:53:56,232-05 INFO [org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand] (default task-26) [7c8579df] Running command: ActivateDeactivateVmNicCommand internal: true. Entities affected : ID: 361a430e-ef3d-4dee-bef6-256651bee6c0 Type: VM
2021-01-30 13:53:56,237-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [7c8579df] EVENT_ID: NETWORK_ACTIVATE_VM_INTERFACE_SUCCESS(1,012), Network Interface nic1 (VirtIO) was plugged to VM willtest4. (User: will@internal-authz)
2021-01-30 13:53:56,241-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [7c8579df] EVENT_ID: NETWORK_ADD_VM_INTERFACE(932), Interface nic1 (VirtIO) was added to VM willtest4. (User: will@internal-authz)
2021-01-30 13:53:56,430-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-26) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Lock Acquired to object 'EngineLock:{exclusiveLocks='', sharedLocks='[361a430e-ef3d-4dee-bef6-256651bee6c0=VM]'}'
2021-01-30 13:53:56,455-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-26) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Running command: AddDiskCommand internal: false. Entities affected : ID: 361a430e-ef3d-4dee-bef6-256651bee6c0 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER, ID: 768bbdab-3a53-4341-8144-3ceb29db23c9 Type: StorageAction group CREATE_DISK with role type USER
2021-01-30 13:53:56,460-05 WARN [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (default task-26) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Validation of action 'AddImageFromScratch' failed for user will@internal-authz. Reasons: VAR__TYPE__STORAGE__DOMAIN,NON_ADMIN_USER_NOT_AUTHORIZED_TO_PERFORM_ACTION_ON_HE
2021-01-30 13:53:56,462-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (default task-26) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Lock freed to object 'EngineLock:{exclusiveLocks='', sharedLocks='[361a430e-ef3d-4dee-bef6-256651bee6c0=VM]'}'
2021-01-30 13:53:56,473-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] EVENT_ID: USER_FAILED_ADD_DISK_TO_VM(79), Add-Disk operation failed on VM willtest4 (User: will@internal-authz).
2021-01-30 13:53:56,475-05 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-26) [] Operation Failed: []
2021-01-30 13:53:56,694-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-28) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Getting volume info for image '68480589-8585-41e5-a37c-9213b58fd5f6/00000000-0000-0000-0000-000000000000'
2021-01-30 13:53:56,695-05 ERROR [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-28) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Failed to get volume info: org.ovirt.engine.core.common.errors.EngineException: EngineException: No host was found to perform the operation (Failed with error RESOURCE_MANAGER_VDS_NOT_FOUND and code 5004)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.utils.VdsCommandsHelper.runVdsCommand(VdsCommandsHelper.java:86)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.utils.VdsCommandsHelper.runVdsCommandWithFailover(VdsCommandsHelper.java:70)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler.getVolumeInfoFromVdsm(ImagesHandler.java:857)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback.childCommandsExecutionEnded(AddDiskCommandCallback.java:44)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:181)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:360)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:511)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:227)
2021-01-30 13:53:56,695-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-28) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Command 'AddDisk' id: '82b5a690-8b7f-4568-9b41-0f777a461adb' child commands '[acc2d284-5dc3-49f9-b81a-d2612eb2c999]' executions were completed, status 'FAILED'
2021-01-30 13:53:56,698-05 INFO [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-28) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Exception in invoking callback of command AddDisk (82b5a690-8b7f-4568-9b41-0f777a461adb): EngineException: EngineException: No host was found to perform the operation (Failed with error RESOURCE_MANAGER_VDS_NOT_FOUND and code 5004)
2021-01-30 13:53:56,698-05 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-28) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Error invoking callback method 'onFailed' for 'EXECUTION_FAILED' command '82b5a690-8b7f-4568-9b41-0f777a461adb'
2021-01-30 13:53:56,698-05 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-28) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Exception: org.ovirt.engine.core.common.errors.EngineException: EngineException: No host was found to perform the operation (Failed with error RESOURCE_MANAGER_VDS_NOT_FOUND and code 5004)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.utils.VdsCommandsHelper.runVdsCommand(VdsCommandsHelper.java:86)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.utils.VdsCommandsHelper.runVdsCommandWithFailover(VdsCommandsHelper.java:70)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler.getVolumeInfoFromVdsm(ImagesHandler.java:857)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback.childCommandsExecutionEnded(AddDiskCommandCallback.java:44)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:181)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:360)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:511)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:227)
2021-01-30 13:53:57,704-05 ERROR [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' with failure.
2021-01-30 13:53:57,707-05 ERROR [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [b9d4d5c6-9f6a-4fbd-b1af-29b60d5a0e4c] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' with failure.
2021-01-30 13:53:57,732-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [] EVENT_ID: USER_ADD_DISK_FINISHED_FAILURE(2,022), Add-Disk operation failed to complete.
Hello. Which environment type should I choose?
by 欧文
Dear all,
Hello. My name is Owen. First, thank you for taking the time to read my
email. I recently decided to use oVirt to build a project, but after
reading the official documentation I have some questions. The project is
intended to set up a physical server that implements virtualization,
serving a variable number of thin clients for users. But I don't know
which environment type I should choose for the deployment. My second
question is why the supported system environment differs between oVirt
versions: CentOS 7 is the mainstream release, but so many oVirt versions
target neither CentOS 7 nor CentOS 8. Can you tell me the reason? Finally,
I once joined the IRC channel, but got no answer to my question. Could you
tell me where I can take part in the community with the developers who
work on oVirt? Thanks.
oVirt + Proxmox Backup Server
by Diggy Mc
Is anyone using Proxmox Backup Server to back up VMs in an oVirt environment? Is it even possible?
I'm looking for a free, open-source solution for my backup needs (VM backups as well as traditional file backups). Any guidance is appreciated.
move ovirt 4.3.9 to RHEL 8
by Paul Dyer
Hi,
I am laying the groundwork to upgrade oVirt to 4.4. Since the standalone
oVirt manager is currently running on RHEL 7, I have installed RHEL 8 on a
new VM. Unfortunately, after installing ovirt-release43, I am not able to
find documentation for the required dnf modules. All the current
documentation points to ovirt-release44, and the dnf modules are not
available for oVirt 4.3 on RHEL 8.
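For comparison, the 4.4 documentation has me enable modules like the
following, but I cannot find an equivalent list for 4.3 on RHEL 8:
# dnf module -y enable javapackages-tools
# dnf module -y enable pki-deps
# dnf module -y enable postgresql:12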
I have these repos enabled...
Repo ID: ansible-2-for-rhel-8-x86_64-rpms
Repo ID: rhel-8-for-x86_64-appstream-rpms
Repo ID: rhel-8-for-x86_64-baseos-rpms
I am looking for setup docs for a standalone engine of oVirt 4.3 on RHEL 8.
Thanks!
--
Paul Dyer,
Mercury Consulting Group, RHCE
504-338-8750
Custom Fence Agent
by Robert Tongue
Greetings everyone. I am having another problem that I was hoping to get
some assistance with.
I have created my own custom fence agent for some Tasmota-flashed wifi
smart plugs, which can control the power input to oVirt nodes. This works
great; however, I am running into a problem getting it added to oVirt as a
power management agent. I added the custom fence agent with engine-config
-s, and it shows up in the web UI as a selectable power management agent.
I then put in the details for the plug (IP address, login, password) and
press the "Test" button, which passes and shows the status as power=on.
Once I save the settings, however, engine.log records that fencing will
fail because there is no node available to proxy the operation. When I go
back into the power management settings and press "Test" again, I get the
error: "Test failed: Failed to run fence status-check on host 'ovirt1'. No
other host was available to serve as proxy for the operation."
I have the agent script in /usr/sbin/ on all nodes with execute
permissions set, and I can run it manually at the command line just fine,
so I am really at a loss as to what to check. What am I missing? Please
help.
Thank you for your time.
Script:
#!/usr/libexec/platform-python -tt

from urllib.parse import quote
import atexit
import sys

import requests

# Pull in the shared fence-agent framework shipped with fence-agents.
sys.path.append("/usr/share/fence")
from fencing import *

def set_power_status(conn, options):
    # Called by the framework for the "on"/"off" actions.
    if "on" in options["--action"]:
        requests.get(buildUrl(options, "on"))
    elif "off" in options["--action"]:
        requests.get(buildUrl(options, "off"))

def get_power_status(conn, options):
    # Tasmota reports e.g. {"Power":1} in its status reply.
    response = requests.get(buildUrl(options, "status"))
    if "\"Power\":0" in response.text:
        return "off"
    elif "\"Power\":1" in response.text:
        return "on"

def buildUrl(options, action):
    cmnd = {
        'on': 'Power On',
        'off': 'Power Off',
        'status': 'Status'
    }
    return ("http://" + options["--ip"] + "/cm?user=" + quote(options["--username"])
            + "&password=" + quote(options["--password"])
            + "&cmnd=" + quote(cmnd.get(action, "Error")))

def main():
    device_opt = ["ipaddr", "login", "passwd", "web"]
    atexit.register(atexit_handler)
    all_opt["power_wait"]["default"] = 5
    options = check_input(device_opt, process_input(device_opt))

    docs = {}
    docs["shortdesc"] = "Fence agent for Tasmota-flashed Smarthome Plugs"
    docs["longdesc"] = ""
    docs["vendorurl"] = ""
    show_docs(options, docs)

    ##
    ## Fence operations
    ####
    result = fence_action(None, options, set_power_status, get_power_status)
    sys.exit(result)

if __name__ == "__main__":
    main()
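At the command line it behaves like any other fence agent; for example, to
query status (I call the script fence_tasmota here, and the plug details
are made up):
# fence_tasmota --ip=192.168.1.50 --username=admin --password=secret --action=status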
AFFINITY GROUPS
by LS CHENG
Hi
I would like to know how affinity groups work.
I have 2 VMs on 2 hosts, each VM running on its own host. Let's call them:
HOST_X, where vm01 runs
HOST_Y, where vm02 runs
I have set up an affinity group in which vm01 relates to HOST_X and vm02
relates to HOST_Y. The VM affinity rule is set to negative and the host
affinity rule is set to positive. I need vm01 and vm02 each to run on its
respective physical host.
I have a problem: when vm01 is stopped and HOST_Y is rebooted, vm02 starts
on HOST_X. How can I avoid that?
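For reference, expressed with the Python SDK the group would look roughly
like the sketch below (the cluster and group names are placeholders, and
whether each rule should be enforcing, i.e. hard vs. soft, is exactly the
part I am unsure about):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='***',
    ca_file='ca.pem',
)
try:
    clusters = connection.system_service().clusters_service()
    cluster = clusters.list(search='name=Default')[0]  # placeholder name
    groups = clusters.cluster_service(cluster.id).affinity_groups_service()
    groups.add(types.AffinityGroup(
        name='vm_host_pinning',
        # Keep the two VMs apart (negative VM affinity).
        vms_rule=types.AffinityRule(enabled=True, positive=False, enforcing=True),
        # Tie the VMs to their listed hosts (positive host affinity).
        hosts_rule=types.AffinityRule(enabled=True, positive=True, enforcing=False),
    ))
    # (vm01/vm02 and HOST_X/HOST_Y would then be added as group members.)
finally:
    connection.close()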
Thank you
Luis Sanchez