Changing the system resources of the pool VM
by Hyung-Keun Kim
Hi.
My question is simple.
oVirt does not seem to support changing the system resources of the VMs in a pool.
So let me ask two questions:
1. What is the reason for this? Is it related to the OVF file?
2. Is there some other way to do it, e.g. with sub-templates? (Manual work is also fine.)
Is it possible to modify the DB values only? If not, what else would I need to touch?
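To make question 2 concrete, this is the kind of direct DB edit I have in mind -- only a sketch, assuming the resources live in the engine's vm_static table (the column names may differ between versions, and I don't know whether this is safe):
# Hypothetical example: raise memory and CPU for a pool's VMs
# directly in the engine database (table and columns assumed).
sudo -u postgres psql engine -c "
  UPDATE vm_static
     SET mem_size_mb = 4096, num_of_sockets = 2, cpu_per_socket = 2
   WHERE vm_name LIKE 'mypool-%';"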
I'm looking forward to answers from the experts.
Thank you.
GlusterFS distributed replicated with arbiter on 4 nodes - settings
by Jarosław Prokopowski
Hi Guys,
I would like to use GlusterFS distributed-replicated with arbiter volume on
4 nodes for oVirt.
Can you tell me what tuning parameters I should set on such a volume for the
best VM performance?
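So far the only baseline I know of is the stock virt profile -- a sketch, with "vmstore" as a placeholder volume name; what I'm unsure about is what to tune beyond it:
# Apply the stock virt tuning group that glusterfs ships
# (defined in /var/lib/glusterd/groups/virt).
gluster volume set vmstore group virt
# Options the virt group typically sets (may vary by version):
#   performance.quick-read=off, performance.read-ahead=off,
#   performance.io-cache=off, performance.stat-prefetch=off,
#   cluster.eager-lock=enable, network.remote-dio=enable,
#   cluster.quorum-type=auto, cluster.server-quorum-type=server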
Does it make any sense to upgrade from GlusterFS 3.12 to 4.1 or even 5.0?
Thanks!
Jarek
non-ovirt node as gluster node
by Tony Brian Albers
Hi guys,
I have 3 machines that I'd like to test oVirt/gluster on.
The idea is that I want two of them to be oVirt nodes and the third to
run the oVirt engine.
They all have 8 HDDs, and I'd like to use these (on all 3 machines) in
gluster for storing the VM images.
So,
machine1 is a gluster server with oVirt engine installed
machine2 is a gluster server and oVirt node
machine3 is a gluster server and oVirt node
Is this doable?
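To be concrete, I imagine the gluster side looking roughly like this (brick paths are placeholders):
# Sketch: build the trusted pool and a replica-3 volume across
# all three machines (run from machine1).
gluster peer probe machine2
gluster peer probe machine3
gluster volume create vmstore replica 3 \
    machine1:/bricks/vmstore/brick \
    machine2:/bricks/vmstore/brick \
    machine3:/bricks/vmstore/brick
gluster volume start vmstore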
TIA,
/tony
--
Tony Albers
Systems Architect
Systems Director, National Cultural Heritage Cluster
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark.
Tel: +45 2566 2383 / +45 8946 2316
Q: Guest shutdown from UPS power off signal with "virsh"
by Andrei Verovski
Hi !
I have a shell script running on all servers which intercepts the UPS power-off signal (in case of power failure).
Is it safe to use direct KVM “virsh” commands to force guest shutdown on oVirt nodes, bypassing oVirt engine ?
“shutdown now” directly on the oVirt nodes (v4.2.6) takes an enormous amount of time because, for whatever reason, the guests can’t be suspended. My nodes are installed on CentOS 7.5; I’m not using the pre-packaged oVirt Node because each consecutive upgrade simply wipes out all my modifications.
Thanks in advance.
Andrei
PS. I found this script to shut down KVM guests:
***************************************
#!/bin/bash
# Gracefully shut down all running KVM guests, forcing them off
# after a timeout.
# Based on: https://ask.fedoraproject.org/en/question/8796/make-libvirt-to-shutdown-m...

LIST_VM=`virsh list | grep running | awk '{print $2}'`
TIMEOUT=90
LOGFILE="/var/log/shutdownkvm.log"

# Nothing to do if no guests are running. (The original tested
# $activevm here, which is never set at this point, so the script
# always exited immediately -- test $LIST_VM instead.)
if [ "x$LIST_VM" = "x" ]
then
    exit 0
fi

for activevm in $LIST_VM
do
    PIDNO=`ps ax | grep $activevm | grep kvm | cut -c 1-6 | head -n1`
    DATE=`date -R`
    echo "$DATE : Shutdown : $activevm : $PIDNO" >> $LOGFILE
    virsh shutdown $activevm > /dev/null

    # Poll until the qemu-kvm process exits or the timeout expires.
    COUNT=0
    while [ "$COUNT" -lt "$TIMEOUT" ]
    do
        ps --pid $PIDNO > /dev/null
        if [ "$?" -eq "1" ]
        then
            COUNT=110   # sentinel: guest is gone, skip the destroy below
        else
            sleep 5
            COUNT=$(($COUNT+5))
        fi
    done

    # Timeout expired without a clean shutdown: force the guest off.
    if [ $COUNT -lt 110 ]
    then
        DATE=`date -R`
        echo "$DATE : $activevm did not shut down cleanly, forcing destroy" >> $LOGFILE
        virsh destroy $activevm > /dev/null
    fi
done
wrong vm disk size after uploading and attaching to VM
by Jarosław Prokopowski
Hi,
After uploading a virtual machine disk to a storage domain and attaching it
to a virtual machine, the disk size is shown in the properties as 0, and the
VM boots into dracut complaining about the disk size.
The disk is "thin provision".
Device /dev/sda2 has size of 22528 sectors which is smaller than
corresponding PV size of 166746122 sectors
Is there a way to fix that?
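In case it matters, this is how I check the image itself on the storage domain (the path is a placeholder for the actual domain/image UUIDs):
# Compare the volume's virtual size with what the engine shows;
# the path below is hypothetical, adjust to the storage domain layout.
qemu-img info /rhev/data-center/mnt/<server>/<domain-uuid>/images/<image-uuid>/<volume-uuid>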
Thanks
Jarek
set up ovirt 4.2 hyperconverged with glusterfs "storage network" over infiniband
by Mike Lykov
Hi All!
I'm trying to set up the latest oVirt version from the
ovirt-release42-pre.rpm repository (because of bugs such as
https://bugzilla.redhat.com/show_bug.cgi?id=1637468, for example,
which is not fixed in the stable release42 repository).
Then I installed these RPMs (and many dependencies):
ovirt-hosted-engine-setup-2.2.30-1.el7.noarch
ovirt-engine-appliance-4.2-20181026.1.el7.noarch
vdsm-4.20.43-1.el7.x86_64
vdsm-gluster-4.20.43-1.el7.x86_64
vdsm-network-4.20.43-1.el7.x86_64
All of them are from that repository. I used the webui installer to create
the glusterfs volumes (the suggested defaults: engine, data, vmstore) and
then installed the hosted engine on the "engine" volume.
In my case, though, I am trying to set up an additional "storage network"
(for example, as described here:
https://ovirt.org/develop/release-management/features/gluster/select-netw...
)
Those screenshots are old and the UI has changed in 4.2, but the idea is
the same.
I have two interfaces on each host: one ethernet (enp59s0f0, with an address
from 172.16.10.0/24 and the default gateway) and one InfiniBand (no default
gateway, only between cluster nodes, no routing, no external access). It is
actually an Intel Omni-Path fabric:
-----------
[root@ovirtnode1 log]# hfi1_control -i
Driver Version: 10.8-0
Opa Version: 10.8.0.0.204
0: BoardId: Intel Corporation Omni-Path HFI Silicon 100 Series [integrated]
0,1: Status: 5: LinkUp 4: ACTIVE
-------------
It shows up as an IP-over-IB interface:
6: ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc pfifo_fast state UP group default qlen 256
    link/infiniband 80:00:00:02:fe:80:00:00:00:00:00:00:00:11:75:09:01:1a:ee:ea brd 00:ff:ff:ff:ff:12:40:1b:80:01:00:00:00:00:00:00:ff:ff:ff:ff
    inet 172.16.100.1/24 brd 172.16.100.255 scope global noprefixroute ib0
       valid_lft forever preferred_lft forever
It has these properties in the ifcfg-ib0 file:
CONNECTED_MODE=yes
MTU=65520
All IPs on this interface pair have records in the external DNS: the
ethernet (management network) addresses have ovirtnode{N} names and the
InfiniBand (storage network) addresses have ovirtstor{N} names.
During the webui glusterfs setup I used the ovirtstor host names, and the
trusted pool was created:
5a9a0a5f-12f4-48b1-bfbe-24c172adc65c ovirtstor5.miac Connected
41350da9-c944-41c5-afdc-46ff51ab93f6 ovirtstor6.miac Connected
0f50175e-7e47-4839-99c7-c7ced21f090c localhost Connected
Then I logged in to the web administration console and added the two other
hosts by their names:
Name Hostname/IP Cluster Data Center Status SPM
ovirtnode1 ovirtnode1 Default Default Up SPM
ovirtnode5 ovirtnode5 Default Default Up Normal
For this setup I have some questions:
1. Where in the webui can I configure that I want to use the
"storage network"?
I tried to create a second network (Network -> Networks -> New), but vdsm
overwrites the ifcfg-ib0 file without those properties, treating ib0 like
an ethernet interface:
# Generated by VDSM version 4.20.43-1.el7
DEVICE=ib0
ONBOOT=yes
IPADDR=172.16.100.5
NETMASK=255.255.255.0
BOOTPROTO=none
MTU=65520
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no
I entered the MTU by hand in the General -> MTU -> Custom field, but it
cannot work without the CONNECTED_MODE=yes property, and in
Networks -> "storage" -> Hosts the interface now always shows as
"out-of-sync". "Custom properties" are greyed out and not available.
2. If I tick the "VM network" checkbox when creating the network and then
run "Setup Host Networks" with this network on the ib0 interface, the whole
engine hangs. I think this is because it tries to put the InfiniBand
interface into a bridge, which cannot be done (I see only "1 task running"
that never ends, and no other interface can show any details).
3. oVirt also tries to send LLDP TLVs on the ib0 interface, but that
cannot work:
Nov 6 17:30:01 ovirtnode5 systemd: Starting Link Layer Discovery Protocol Agent Daemon....
Nov 6 17:30:01 ovirtnode5 kernel: bnx2x: [bnx2x_dcbnl_set_dcbx:2383(enp59s0f0)]Requested DCBX mode 5 is beyond advertised capabilities
Nov 6 17:30:02 ovirtnode5 systemd: Started /sbin/ifup ib0.
Nov 6 17:30:02 ovirtnode5 systemd: Starting /sbin/ifup ib0.
Nov 6 17:30:02 ovirtnode5 kernel: IPv6: ADDRCONF(NETDEV_UP): ib0: link is not ready
Nov 6 17:30:02 ovirtnode5 NetworkManager[1650]: <info> [1541511002.9642] device (ib0): carrier: link connected
Nov 6 17:30:02 ovirtnode5 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ib0: link becomes ready
Nov 6 17:30:02 ovirtnode5 lldpad: setsockopt nearest_bridge: Invalid argument
Nov 6 17:30:41 ovirtnode5 vdsm[127585]: ERROR Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 193, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1561, in getLldp
    info=supervdsm.getProxy().get_lldp_info(filter))
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in <lambda>
    **kwargs)
  File "<string>", line 2, in get_lldp_info
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
TlvReportLldpError: (1, 'Agent instance for device not found \n', '', 'ib0')
4. The Gluster volumes are in a strange state:
I imported the domain (Storage -> Domains) that the webui installer created
in the first step, but the "host to use" drop-down list contains only the
host names as they were added to the cluster, i.e. ovirtnode1, ovirtnode{N}.
There is also a red message: "For data integrity make sure that the server
is configured with Quorum (both client and server Quorum)" (as far as I can
see, this was configured by the cockpit webui installer).
But the imported volumes show their bricks as "1 Up 2 Down", and on each
host only the "localhost" bricks are shown as "Online". In the logs there
are messages like:
Nov 6 10:24:24 ovirtnode5 systemd: Started GlusterFS, a clustered file-system server.
Nov 6 10:24:24 ovirtnode5 glusterd[229325]: [2018-11-06 06:24:24.404149] C [MSGID: 106003] [glusterd-server-quorum.c:354:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume data. Starting local bricks.
Nov 6 10:24:24 ovirtnode5 glusterd[229325]: [2018-11-06 06:24:24.450356] C [MSGID: 106003] [glusterd-server-quorum.c:354:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume engine. Starting local bricks.
Nov 6 10:24:24 ovirtnode5 glusterd[229325]: [2018-11-06 06:24:24.503677] C [MSGID: 106003] [glusterd-server-quorum.c:354:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume vmstore. Starting local bricks.
What does this mean? If quorum is "regained", shouldn't all the bricks be
started?
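For what it's worth, this is how I check the bricks from the CLI on a node, bypassing the engine UI (volume names are the ones the installer created; repeat for data and vmstore):
# Show brick status and the quorum-related options actually set.
gluster volume status engine
gluster volume info engine | grep -i quorum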
When I try to create a volume (Storage -> Volumes -> New) and press
"Add bricks", there is a similar "Bricks Host" drop-down box that contains
only the "ovirtnode" names, not the "ovirtstor" IB interfaces.
If I try to use it, it fails with an error like "This host is not in the
trusted pool" -- which is true, since the trusted pool contains the other
interface's names.
What is the right way to configure this?
Is there some kind of starting guide with configuration steps for this case?
I found https://www.ovirt.org/documentation/quickstart/quickstart-guide/
but it is also out of date; for example, it does not describe GlusterFS
storage domains and has old screenshots (from the previous interface
version), etc.
I cannot find any relevant documentation in
https://www.ovirt.org/documentation/admin-guide/
For example, the "Explanation of Settings in the Manage Networks Window" in
https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks/
does not mention a "Gluster network" role at all...
Many links point to non-existent pages; for example, "For more information
on these parameters, see Explanation of bridge_opts Parameters." links to
https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks/Exp...
which returns
"404 Not found :(
Sorry, but the page you were trying to view does not exist."
(That is just one example; there are MANY 404 links/pages.)
--
Mike
Import VM from VMware fails
by Markus Schaufler
Hi,
I configured a VMware provider (the connection test succeeded) and tried to import a VM, which fails shortly after starting:
Failed to import Vm SK-Test-VSAN to Data Center Default, Cluster Default
from engine.log:
"HSMGetAllTasksStatusesVDS failed: Error creating a new volume: (u"Volume creation 9909f715-cd8a-452a-bb2a-73341b77f8af failed: Invalid parameter: 'initial size=107175936'"
Any ideas on that?
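(One thing I noticed while re-reading the log below, in case it is relevant: 107175936 * 512 = 54874079232 bytes, which is exactly the imageInitialSizeInBytes in the CreateImageVDSCommand line -- and that is larger than imageSizeInBytes=26843545600, i.e. the requested initial allocation is bigger than the disk's virtual size.)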
# engine.log
2018-11-07 06:26:06,071+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-15) [cf1eb385-cbe4-4e73-a5f2-a307bd39fa0a] Command 'ImportVmFromExternalProvider' (id: 'c8d4cf1b-8465-4501-9930-0b06262477a4') waiting on child command id: '02112e67-8dd5-4ef8-95ac-eeb9a4b39d0a' type:'ConvertVm' to complete
2018-11-07 06:26:09,979+01 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (default task-292) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Lock Acquired to object 'EngineLock:{exclusiveLocks='[42205902-f5d8-fd29-fcb3-926dbf8935e9=VM, SK-Test-VSAN=VM_NAME]', sharedLocks=''}'
2018-11-07 06:26:10,183+01 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Running command: ImportVmFromExternalProviderCommand internal: false. Entities affected : ID: 5e6850cd-5e72-47b3-a19d-8c748c884c42 Type: StorageAction group IMPORT_EXPORT_VM with role type ADMIN
2018-11-07 06:26:10,196+01 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] EVENT_ID: MAC_ADDRESS_IS_EXTERNAL(925), VM SK-Test-VSAN has MAC address(es) 00:50:56:a0:1b:18, which is/are out of its MAC pool definitions.
2018-11-07 06:26:10,369+01 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Running command: AddDiskCommand internal: true. Entities affected : ID: 42205902-f5d8-fd29-fcb3-926dbf8935e9 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER, ID: 5e6850cd-5e72-47b3-a19d-8c748c884c42 Type: StorageAction group CREATE_DISK with role type USER
2018-11-07 06:26:10,383+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Running command: AddImageFromScratchCommand internal: true. Entities affected : ID: 5e6850cd-5e72-47b3-a19d-8c748c884c42 Type: Storage
2018-11-07 06:26:10,402+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] START, CreateImageVDSCommand( CreateImageVDSCommandParameters:{storagePoolId='6d4eea22-c7cb-11e8-a006-00163e699560', ignoreFailoverLimit='false', storageDomainId='5e6850cd-5e72-47b3-a19d-8c748c884c42', imageGroupId='9703780a-ff5d-48e1-929c-6526d63fddb0', imageSizeInBytes='26843545600', volumeFormat='COW', newImageId='9909f715-cd8a-452a-bb2a-73341b77f8af', imageType='Sparse', newImageDescription='{"DiskAlias":"SK-Test_VSAN","DiskDescription":""}', imageInitialSizeInBytes='54874079232'}), log id: 44235172
2018-11-07 06:26:10,402+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] -- executeIrsBrokerCommand: calling 'createVolume' with two new parameters: description and UUID
2018-11-07 06:26:10,457+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] FINISH, CreateImageVDSCommand, return: 9909f715-cd8a-452a-bb2a-73341b77f8af, log id: 44235172
2018-11-07 06:26:10,460+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'e05cf461-111a-47d3-8bb1-1455dde5be3c'
2018-11-07 06:26:10,460+01 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] CommandMultiAsyncTasks::attachTask: Attaching task '6dbfe14e-1ce5-4a77-aff9-58da486a458c' to command 'e05cf461-111a-47d3-8bb1-1455dde5be3c'.
2018-11-07 06:26:10,471+01 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Adding task '6dbfe14e-1ce5-4a77-aff9-58da486a458c' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2018-11-07 06:26:10,478+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] BaseAsyncTask::startPollingTask: Starting to poll task '6dbfe14e-1ce5-4a77-aff9-58da486a458c'.
2018-11-07 06:26:10,498+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] EVENT_ID: ADD_DISK_INTERNAL(2,036), Add-Disk operation of 'SK-Test_VSAN' was initiated by the system.
2018-11-07 06:26:10,544+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-39664) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] EVENT_ID: IMPORTEXPORT_STARTING_IMPORT_VM(1,165), Starting to import Vm SK-Test-VSAN to Data Center Default, Cluster Default
2018-11-07 06:26:11,079+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-65) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Command 'ImportVmFromExternalProvider' (id: '7a8529e5-d463-45fd-9f1c-2a261c84e458') waiting on child command id: '578ed27c-f891-4279-b0f4-84b63b53db34' type:'AddDisk' to complete
2018-11-07 06:26:11,081+01 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-65) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Command 'AddDisk' (id: '578ed27c-f891-4279-b0f4-84b63b53db34') waiting on child command id: 'e05cf461-111a-47d3-8bb1-1455dde5be3c' type:'AddImageFromScratch' to complete
2018-11-07 06:26:13,085+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-48) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Command 'ImportVmFromExternalProvider' (id: '7a8529e5-d463-45fd-9f1c-2a261c84e458') waiting on child command id: '578ed27c-f891-4279-b0f4-84b63b53db34' type:'AddDisk' to complete
2018-11-07 06:26:13,086+01 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-48) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Command 'AddDisk' (id: '578ed27c-f891-4279-b0f4-84b63b53db34') waiting on child command id: 'e05cf461-111a-47d3-8bb1-1455dde5be3c' type:'AddImageFromScratch' to complete
2018-11-07 06:26:15,894+01 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-83) [] Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2018-11-07 06:26:15,898+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-83) [] Failed in 'HSMGetAllTasksStatusesVDS' method
2018-11-07 06:26:15,901+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-83) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM VIRZ01-101 command HSMGetAllTasksStatusesVDS failed: Error creating a new volume: (u"Volume creation 9909f715-cd8a-452a-bb2a-73341b77f8af failed: Invalid parameter: 'initial size=107175936'",)
2018-11-07 06:26:15,901+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-83) [] SPMAsyncTask::PollTask: Polling task '6dbfe14e-1ce5-4a77-aff9-58da486a458c' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'cleanSuccess'.
2018-11-07 06:26:15,903+01 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-83) [] BaseAsyncTask::logEndTaskFailure: Task '6dbfe14e-1ce5-4a77-aff9-58da486a458c' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with failure:
-- Result: 'cleanSuccess'
-- Message: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Error creating a new volume: (u"Volume creation 9909f715-cd8a-452a-bb2a-73341b77f8af failed: Invalid parameter: 'initial size=107175936'",), code = 205',
-- Exception: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Error creating a new volume: (u"Volume creation 9909f715-cd8a-452a-bb2a-73341b77f8af failed: Invalid parameter: 'initial size=107175936'",), code = 205'
2018-11-07 06:26:15,905+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-83) [] CommandAsyncTask::endActionIfNecessary: All tasks of command 'e05cf461-111a-47d3-8bb1-1455dde5be3c' has ended -> executing 'endAction'
2018-11-07 06:26:15,905+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-83) [] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: 'e05cf461-111a-47d3-8bb1-1455dde5be3c'): calling endAction '.
2018-11-07 06:26:15,905+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-39669) [] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'AddImageFromScratch',
2018-11-07 06:26:15,911+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-39669) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Command [id=e05cf461-111a-47d3-8bb1-1455dde5be3c]: Updating status to 'FAILED', The command end method logic will be executed by one of its parent commands.
2018-11-07 06:26:15,911+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-39669) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' completed, handling the result.
2018-11-07 06:26:15,911+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-39669) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' succeeded, clearing tasks.
2018-11-07 06:26:15,911+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-39669) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '6dbfe14e-1ce5-4a77-aff9-58da486a458c'
2018-11-07 06:26:15,911+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-39669) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='6d4eea22-c7cb-11e8-a006-00163e699560', ignoreFailoverLimit='false', taskId='6dbfe14e-1ce5-4a77-aff9-58da486a458c'}), log id: adf17a1
2018-11-07 06:26:15,912+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-39669) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] START, HSMClearTaskVDSCommand(HostName = VIRZ01-101, HSMTaskGuidBaseVDSCommandParameters:{hostId='9edfd1cd-261b-4b4f-832c-baa96e229ea7', taskId='6dbfe14e-1ce5-4a77-aff9-58da486a458c'}), log id: 60cae22f
2018-11-07 06:26:15,916+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-39669) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] FINISH, HSMClearTaskVDSCommand, log id: 60cae22f
2018-11-07 06:26:15,916+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-39669) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] FINISH, SPMClearTaskVDSCommand, log id: adf17a1
2018-11-07 06:26:15,919+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-39669) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] BaseAsyncTask::removeTaskFromDB: Removed task '6dbfe14e-1ce5-4a77-aff9-58da486a458c' from DataBase
2018-11-07 06:26:15,919+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-39669) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 'e05cf461-111a-47d3-8bb1-1455dde5be3c'
2018-11-07 06:26:16,092+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-37) [cf1eb385-cbe4-4e73-a5f2-a307bd39fa0a] Command 'ImportVmFromExternalProvider' (id: 'c8d4cf1b-8465-4501-9930-0b06262477a4') waiting on child command id: '02112e67-8dd5-4ef8-95ac-eeb9a4b39d0a' type:'ConvertVm' to complete
2018-11-07 06:26:17,097+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Command 'ImportVmFromExternalProvider' (id: '7a8529e5-d463-45fd-9f1c-2a261c84e458') waiting on child command id: '578ed27c-f891-4279-b0f4-84b63b53db34' type:'AddDisk' to complete
2018-11-07 06:26:17,099+01 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Command 'AddDisk' id: '578ed27c-f891-4279-b0f4-84b63b53db34' child commands '[e05cf461-111a-47d3-8bb1-1455dde5be3c]' executions were completed, status 'FAILED'
2018-11-07 06:26:18,104+01 ERROR [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-81) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' with failure.
2018-11-07 06:26:18,109+01 ERROR [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-81) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' with failure.
2018-11-07 06:26:18,359+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-81) [] EVENT_ID: USER_ADD_DISK_FINISHED_FAILURE(2,022), Add-Disk operation failed to complete.
2018-11-07 06:26:19,368+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-34) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Command 'ImportVmFromExternalProvider' id: '7a8529e5-d463-45fd-9f1c-2a261c84e458' child commands '[578ed27c-f891-4279-b0f4-84b63b53db34]' executions were completed, status 'FAILED'
2018-11-07 06:26:20,394+01 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [bd9e622c-574f-4cc6-a6b3-284a7a6dbdbc] Ending command 'org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand' with failure.
2018-11-07 06:26:20,534+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveAllVmImagesCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Running command: RemoveAllVmImagesCommand internal: true. Entities affected : ID: 42205902-f5d8-fd29-fcb3-926dbf8935e9 Type: VM
2018-11-07 06:26:20,545+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Running command: RemoveImageCommand internal: true. Entities affected : ID: 00000000-0000-0000-0000-000000000000 Type: Storage
2018-11-07 06:26:20,570+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='6d4eea22-c7cb-11e8-a006-00163e699560', ignoreFailoverLimit='false', storageDomainId='5e6850cd-5e72-47b3-a19d-8c748c884c42', imageGroupId='9703780a-ff5d-48e1-929c-6526d63fddb0', postZeros='false', discard='false', forceDelete='false'}), log id: 7de025db
2018-11-07 06:26:20,694+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS failed: Image does not exist in domain: u'image=9703780a-ff5d-48e1-929c-6526d63fddb0, domain=5e6850cd-5e72-47b3-a19d-8c748c884c42'
2018-11-07 06:26:20,694+01 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Command 'DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='6d4eea22-c7cb-11e8-a006-00163e699560', ignoreFailoverLimit='false', storageDomainId='5e6850cd-5e72-47b3-a19d-8c748c884c42', imageGroupId='9703780a-ff5d-48e1-929c-6526d63fddb0', postZeros='false', discard='false', forceDelete='false'})' execution failed: IRSGenericException: IRSErrorException: Image does not exist in domain: u'image=9703780a-ff5d-48e1-929c-6526d63fddb0, domain=5e6850cd-5e72-47b3-a19d-8c748c884c42'
2018-11-07 06:26:20,694+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] FINISH, DeleteImageGroupVDSCommand, log id: 7de025db
2018-11-07 06:26:20,694+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Disk '9703780a-ff5d-48e1-929c-6526d63fddb0' doesn't exist on storage domain '5e6850cd-5e72-47b3-a19d-8c748c884c42', rolling forward
2018-11-07 06:26:20,858+01 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Removed task '90cd47dc-1b40-45c2-8991-9eca1ff1d63a' from DataBase
2018-11-07 06:26:20,861+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Command [id=7a8529e5-d463-45fd-9f1c-2a261c84e458]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.network.VmNetworkStatistics; snapshot: 8b933069-ccb8-4752-b936-63d6a37f49a8.
2018-11-07 06:26:20,862+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Command [id=7a8529e5-d463-45fd-9f1c-2a261c84e458]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.network.VmNetworkInterface; snapshot: 8b933069-ccb8-4752-b936-63d6a37f49a8.
2018-11-07 06:26:20,863+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Command [id=7a8529e5-d463-45fd-9f1c-2a261c84e458]: Compensating TRANSIENT_ENTITY of org.ovirt.engine.core.common.businessentities.ReleaseMacsTransientCompensation; snapshot: org.ovirt.engine.core.common.businessentities.ReleaseMacsTransientCompensation@23fd0e49.
2018-11-07 06:26:20,863+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Command [id=7a8529e5-d463-45fd-9f1c-2a261c84e458]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmStatistics; snapshot: 42205902-f5d8-fd29-fcb3-926dbf8935e9.
2018-11-07 06:26:20,864+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Command [id=7a8529e5-d463-45fd-9f1c-2a261c84e458]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmDynamic; snapshot: 42205902-f5d8-fd29-fcb3-926dbf8935e9.
2018-11-07 06:26:20,865+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Command [id=7a8529e5-d463-45fd-9f1c-2a261c84e458]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.Snapshot; snapshot: b1551a3a-ecb7-4031-ad7b-0c47a97d341a.
2018-11-07 06:26:20,866+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Command [id=7a8529e5-d463-45fd-9f1c-2a261c84e458]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmStatic; snapshot: 42205902-f5d8-fd29-fcb3-926dbf8935e9.
2018-11-07 06:26:20,872+01 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] Lock freed to object 'EngineLock:{exclusiveLocks='[42205902-f5d8-fd29-fcb3-926dbf8935e9=VM, SK-Test-VSAN=VM_NAME]', sharedLocks=''}'
2018-11-07 06:26:20,882+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7a1c5fb3] EVENT_ID: IMPORTEXPORT_IMPORT_VM_FAILED(1,153), Failed to import Vm SK-Test-VSAN to Data Center Default, Cluster Default
Hardware for oVirt
by p.staniforth@leedsbeckett.ac.uk
Hello,
we are looking to expand our oVirt system. At present we use HPE ProLiant DL360 Gen9 servers, but we are considering Broadberry TN200B7108-X4L multi-node servers.
I was wondering if anyone has experience with them; they are quad-node systems, and I assume we can use them as 4 oVirt nodes.
Thanks,
Paul S.
Configuring Ovirt + ACPI
by David Johnson
Good day all,
Is there a simple step-by-step instruction to give my cluster controller
the ability to turn the power for my compute nodes on and off?
I am running oVirt 4.7.2 on CentOS 7. My compute nodes are Supermicro
X7QC3 hosts.
I know the capability is supposed to be there, since every time I look at
the dashboard it complains that "Host has disabled power management". I
imagine it is supposed to be trivial to configure.
What I keep running into with the documents is the assumption that you
already know what to do before you read them. This makes for a dizzying
cycle of digging through various feature documents, manuals, etc., only to
end up back where I started.
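In case it helps to frame an answer: I assume the first step is to verify that a node's BMC answers IPMI from another host before filling anything into the UI -- a sketch only (address and credentials are placeholders; needs the fence-agents-ipmilan package):
# Hypothetical sanity check of a node's BMC before configuring
# Compute -> Hosts -> Edit -> Power Management in the engine.
fence_ipmilan -a 192.168.1.50 -l admin -p secret -o status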
Any assistance would be appreciated.
Regards,
David Johnson
Director of Development, Maxis Technology
844.696.2947 ext 702 (o) | 479.531.3590 (c)
djohnson(a)maxistechnology.com
www.maxistechnology.com
stay connected: http://www.linkedin.com/in/pojoguy