How to enable vmdisk property ?
by TranceWorldLogic .
Hi,
I want to use local storage for VM and hence I want to enable vmdisk custom
property in ovirt.
How to enable vmdisk in ovirt-engine ?
Thanks,
~Rohit
7 years, 8 months
oVirt, Cinder, Ceph in 2017
by Konstantin Shalygin
This is a multi-part message in MIME format.
--------------7C76FFBC5E752359EE00266D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Hello.
Try to use this guide Cinder and Glance integration
<https://www.ovirt.org/develop/release-management/features/cinderglance-do...>
- this is actually on 2017? I try to use it and get this error on oVirt
4.1 (CentOS 7.3):
2017-03-22 11:34:19 INFO otopi.plugins.ovirt_engine_setup.dockerc.config
config._misc_deploy:357 Creating rabbitmq
2017-03-22 11:34:19 INFO otopi.plugins.ovirt_engine_setup.dockerc.config
config._misc_deploy:397 Starting rabbitmq
2017-03-22 11:34:19 DEBUG
otopi.plugins.ovirt_engine_setup.dockerc.config config._misc_deploy:402
Container rabbitmq:
da8d020b19010f0a7f1f6ce19977791c20b0f2eabd562578be9e06fd4e116172
2017-03-22 11:34:19 DEBUG otopi.context context._executeMethod:142
method exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132,
in _executeMethod
method['method']()
File
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/dockerc/config.py",
line 434, in _misc_deploy
raise ex
APIError: 400 Client Error: Bad Request ("{"message":"starting container
with HostConfig was deprecated since v1.10 and removed in v1.12"}")
2017-03-22 11:34:19 ERROR otopi.context context._executeMethod:151
Failed to execute stage 'Misc configuration': 400 Client Error: Bad
Request ("{"message":"starting container with HostConfig was deprecated
since v1.10 and removed in v1.12"}")
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'Yum Transaction'
2017-03-22 11:34:19 INFO otopi.plugins.otopi.packagers.yumpackager
yumpackager.info:80 Yum Performing yum transaction rollback
Loaded plugins: fastestmirror, versionlock
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'DWH Engine database Transaction'
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'Database Transaction'
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'Version Lock Transaction'
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'DWH database Transaction'
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for '/etc/ovirt-engine/firewalld/ovirt-http.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-https.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-vmconsole-proxy.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-local-cinder.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-local-glance.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-imageio-proxy.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-websocket-proxy.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-fence-kdump-listener.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-postgres.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-postgres.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for '/etc/ovirt-engine/iptables.example''
2017-03-22 11:34:19 DEBUG otopi.context context.dumpEnvironment:760
ENVIRONMENT DUMP - BEGIN
2017-03-22 11:34:19 DEBUG otopi.context context.dumpEnvironment:770 ENV
BASE/error=bool:'True'
2017-03-22 11:34:19 DEBUG otopi.context context.dumpEnvironment:770 ENV
BASE/exceptionInfo=list:'[(<class 'docker.errors.APIError'>,
APIError(HTTPError('400 Client Error: Bad Request',),), <traceback
object at 0x2923fc8>)]'
2017-03-22 11:34:19 DEBUG otopi.context context.dumpEnvironment:774
ENVIRONMENT DUMP - END
Okay, back to 2016, make rpm's (and deps):
python-docker-py-1.9.0-1
docker-1.10.3-59
I Installed it, run engine-setup. Now "Deploy Cinder container on this
host" & "Deploy Glance container on this host" are success. On oVirt
Engine Web admin I see external provider
"local-glance-image-repository". But actually I can't use them. Then I
figure out this is unsupported method?
May be I should going another way for "oVirt with Ceph"? Like VM with
OpenStack stuff deployed with Ansible or another
ready_to_go_docker_containers because I need it faster, therefore I want
to go this way "makes it more easier to be accomplished without the need
for a manual complex configuration".
Thanks.
--------------7C76FFBC5E752359EE00266D
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Hello.<br>
<br>
Try to use this guide <a
href="https://www.ovirt.org/develop/release-management/features/cinderglance-do...">Cinder
and Glance integration</a> - this is actually on 2017? I try to
use it and get this error on oVirt 4.1 (CentOS 7.3):<br>
<br>
2017-03-22 11:34:19 INFO
otopi.plugins.ovirt_engine_setup.dockerc.config
config._misc_deploy:357 Creating rabbitmq<br>
2017-03-22 11:34:19 INFO
otopi.plugins.ovirt_engine_setup.dockerc.config
config._misc_deploy:397 Starting rabbitmq<br>
2017-03-22 11:34:19 DEBUG
otopi.plugins.ovirt_engine_setup.dockerc.config
config._misc_deploy:402 Container rabbitmq:
da8d020b19010f0a7f1f6ce19977791c20b0f2eabd562578be9e06fd4e116172<br>
2017-03-22 11:34:19 DEBUG otopi.context context._executeMethod:142
method exception<br>
Traceback (most recent call last):<br>
File "/usr/lib/python2.7/site-packages/otopi/context.py", line
132, in _executeMethod<br>
method['method']()<br>
File
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/dockerc/config.py",
line 434, in _misc_deploy<br>
raise ex<br>
APIError: 400 Client Error: Bad Request ("{"message":"starting
container with HostConfig was deprecated since v1.10 and removed in
v1.12"}")<br>
2017-03-22 11:34:19 ERROR otopi.context context._executeMethod:151
Failed to execute stage 'Misc configuration': 400 Client Error: Bad
Request ("{"message":"starting container with HostConfig was
deprecated since v1.10 and removed in v1.12"}")<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'Yum Transaction'<br>
2017-03-22 11:34:19 INFO otopi.plugins.otopi.packagers.yumpackager
yumpackager.info:80 Yum Performing yum transaction rollback<br>
Loaded plugins: fastestmirror, versionlock<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'DWH Engine database Transaction'<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'Database Transaction'<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'Version Lock Transaction'<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'DWH database Transaction'<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-http.xml''<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-https.xml''<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-vmconsole-proxy.xml''<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-local-cinder.xml''<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-local-glance.xml''<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-imageio-proxy.xml''<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-websocket-proxy.xml''<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-fence-kdump-listener.xml''<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-postgres.xml''<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for
'/etc/ovirt-engine/firewalld/ovirt-postgres.xml''<br>
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119
aborting 'File transaction for '/etc/ovirt-engine/iptables.example''<br>
2017-03-22 11:34:19 DEBUG otopi.context context.dumpEnvironment:760
ENVIRONMENT DUMP - BEGIN<br>
2017-03-22 11:34:19 DEBUG otopi.context context.dumpEnvironment:770
ENV BASE/error=bool:'True'<br>
2017-03-22 11:34:19 DEBUG otopi.context context.dumpEnvironment:770
ENV BASE/exceptionInfo=list:'[(<class
'docker.errors.APIError'>, APIError(HTTPError('400 Client Error:
Bad Request',),), <traceback object at 0x2923fc8>)]'<br>
2017-03-22 11:34:19 DEBUG otopi.context context.dumpEnvironment:774
ENVIRONMENT DUMP - END<br>
<br>
Okay, back to 2016, make rpm's (and deps):<br>
<br>
python-docker-py-1.9.0-1<br>
docker-1.10.3-59<br>
<br>
I Installed it, run engine-setup. Now "Deploy Cinder container on
this host" & "Deploy Glance container on this host" are success.
On oVirt Engine Web admin I see external provider "<span
id="SubTabProviderGeneralView_formPanel_col0_row0_value"
class="GOJECEMBACD">local-glance-image-repository". But actually I
can't use them. Then I figure out this is unsupported method?<br>
<br>
May be I should going another way for "oVirt with Ceph"? Like VM
with OpenStack stuff deployed with Ansible or another
ready_to_go_docker_containers because I need it faster, therefore
I want to go this way "makes it more easier to be accomplished
without the need for a manual complex configuration".<br>
<br>
Thanks.<br>
</span>
</body>
</html>
--------------7C76FFBC5E752359EE00266D--
7 years, 8 months
Ovirt-Engine Notification via Python SDK
by TranceWorldLogic .
Hi,
I want to register my python script with ovirt to get notification.(e.g
.Host went down or came up.)
But not got any information on same it look to me that polling is the only
one solution.
I am sending this mail to confirm my understanding (ovirt-engine not send
notification to python script or it not have any hook to extend
functionality)
*Why I Want Extend Ovirt-engine functionanlity?*
I want to auto start VM after host come up.
And for this If ound a bug on Ovirt .. and it is very old and still no
solution.
https://bugzilla.redhat.com/show_bug.cgi?id=1325468
Bug 1325468
I am just searching for workaround.
Note: I am using ovirt 4.0.
Please help me to know whether ovirt-engine have any extension mechanism ?
Thanks,
~Rohit
7 years, 8 months
ovirt ENGINE stuck in paused state following new partition
by Ian Neilsen
hi guys
Bit of a pickle. I expanded my engine managers disk with a new partition
and formatted to ext4, however I seem to have been hit by a superblock
issue on the new ext4 partition. The engine is now stuck in paused state.
I rebooted, powered-off the engine vm but it wont come out of PAUSED state.
I cannot obviously access the console, with it giving me paused messages.
VNC console is down also.
Any suggestions on restoring the engine vm?
Logs are not showing anything. Storage is up, vdsm seems happy, broker
seems happy, nothing really jumping out. Obviously the VM is havign start
issues.
--
Ian Neilsen
Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
7 years, 8 months
About Template Sub Version
by 张 余歌
--_000_CY4PR11MB16719F04F82B003AA68D0798903C0CY4PR11MB1671namp_
Content-Type: text/plain; charset="gb2312"
Content-Transfer-Encoding: base64
SGVsbG8sRnJpZW5kcy4NCg0KSSB3YW50byB1c2UgIlRlbXBsYXRlIFN1YiBWZXJzaW9uID0gbGF0
ZXN0IiArIlN0YXRlbGVzcyBzdGF0dXMiIHRvIHJlYWxpemUgc29tZSBmdW5jdGlvbixCdXQgSSBm
aW5kIEkgd2lsbCBsb3NzIG15IGRhdGEgaWYgSSB1c2UgdGhpcyB3YXksaXQgc2VlbXMgSSBuZWVk
IHRvIGNyZWF0IGEgRlRQIFNlcnZlciB0byBzb2x2ZSB0aGlzIHByb2JsZW0uSXMgdGhlcmUgYW5v
dGhlciBnb29kIG1ldGhvZCB0byBhdm9pZCBkYXRhIGxvc3M/dGhhbmtzIQ0K
--_000_CY4PR11MB16719F04F82B003AA68D0798903C0CY4PR11MB1671namp_
Content-Type: text/html; charset="gb2312"
Content-Transfer-Encoding: quoted-printable
<html>
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Dgb2312">
<style type=3D"text/css" style=3D"display:none;"><!-- P {margin-top:0;margi=
n-bottom:0;} --></style>
</head>
<body dir=3D"ltr">
<div id=3D"divtagdefaultwrapper" style=3D"font-size:12pt;color:#000000;font=
-family:Calibri,Arial,Helvetica,sans-serif;" dir=3D"ltr">
<p>Hello,Friends.</p>
<p>I wanto use "Template Sub Version =3D latest" +"State=
less status" to realize some function,But I find I will loss my data i=
f I use this way,it seems I need to creat a FTP Server to solve this proble=
m.Is there another good method to avoid data loss?thanks!<br>
</p>
</div>
</body>
</html>
--_000_CY4PR11MB16719F04F82B003AA68D0798903C0CY4PR11MB1671namp_--
7 years, 8 months
Strange network performance on VirtIIO VM NIC
by FERNANDO FREDIANI
Hello all.
I have a peculiar problem here which perhaps others may have had or know
about and can advise.
I have Virtual Machine with 2 VirtIO NICs. This VM serves around 1Gbps of
traffic with thousands of clients connecting to it. When I do a packet loss
test to the IP pinned to NIC1 it varies from 3% to 10% of packet loss. When
I run the same test on NIC2 the packet loss is consistently 0%.
>From what I gather I may have something to do with possible lack of Multi
Queu VirtIO where NIC1 is managed by a single CPU which might be hitting
100% and causing this packet loss.
Looking at this reference (
https://fedoraproject.org/wiki/Features/MQ_virtio_net) I see one way to
test it is start the VM with 4 queues (for example), but checking on the
qemu-kvm process I don't see option present. Any way I can force it from
the Engine ?
This other reference (
https://www.linux-kvm.org/page/Multiqueue#Enable_MQ_feature) points to the
same direction about starting the VM with queues=N
Also trying to increase the TX ring buffer within the guest with ethtool -g
eth0 is not possible.
Oh, by the way, the Load on the VM is significantly high despite the CPU
usage isn't above 50% - 60% in average.
Thanks
Fernando
7 years, 8 months
How do you oVirt?
by Sandro Bonazzola
As we continue to develop oVirt 4.2 and future releases, the Development
and Integration teams at Red Hat would value
insights on how you are deploying the oVirt environment.
Please help us to hit the mark by completing this short survey. Survey will
close on April 15th
Here's the link to the survey:
https://docs.google.com/forms/d/e/1FAIpQLSdloxiIP2HrW2HguU0UVbNtKgpSBaJXj...
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
7 years, 8 months
Changing gateway ping address
by Matteo
Hi all,
I need to change the gateway ping address, the one used by hosted engine setup.
Is ok to edit /etc/ovirt-hosted-engine/hosted-engine.conf on each node,
update the gateway param with the new ip address and restart
the agent&broker on each node?
With a blind test seems ok, but need to understand if is the right procedure.
Thanks,
Matteo
7 years, 8 months
error on live migraion: ovn?
by Gianluca Cecchi
Hello,
environment on 4.1.
I have a Vm with 2 nics: one of them is on ovn.
Trying to migrate I get error. Ho wto decode it?
Is live migration supported with OVN or mixed nics?
Thanks,
Gianluca
in engine.log
2017-03-21 10:37:26,209+01 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-26)
[f0cea507-608b-4f36-ac4e-606faec35cd9] Lock Acquired to object
'EngineLock:{exclusiveLocks='[2e571c77-bae1-4c1c-bf98-effaf9fed741=<VM,
ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName c7service>]',
sharedLocks='null'}'
2017-03-21 10:37:26,499+01 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(org.ovirt.thread.pool-6-thread-43) [f0cea507-608b-4f36-ac4e-606faec35cd9]
Running command: MigrateVmToServerCommand internal: false. Entities
affected : ID: 2e571c77-bae1-4c1c-bf98-effaf9fed741 Type: VMAction group
MIGRATE_VM with role type USER
2017-03-21 10:37:26,786+01 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-6-thread-43) [f0cea507-608b-4f36-ac4e-606faec35cd9]
START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
hostId='30677d2c-4eb8-4ed9-ba54-0b89945a45fd',
vmId='2e571c77-bae1-4c1c-bf98-effaf9fed741', srcHost='ovmsrv07.mydomain',
dstVdsId='02bb501a-b641-4ee1-bab1-5e640804e65f',
dstHost='ovmsrv05.mydomain:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', consoleAddress='null', maxBandwidth='null',
enableGuestEvents='true', maxIncomingMigrations='2',
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime,
params=[100]}], stalling=[{limit=1, action={name=setDowntime,
params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}},
{limit=3, action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]'}), log id: 14390155
2017-03-21 10:37:26,787+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-6-thread-43) [f0cea507-608b-4f36-ac4e-606faec35cd9]
START, MigrateBrokerVDSCommand(HostName = ovmsrv07,
MigrateVDSCommandParameters:{runAsync='true',
hostId='30677d2c-4eb8-4ed9-ba54-0b89945a45fd',
vmId='2e571c77-bae1-4c1c-bf98-effaf9fed741', srcHost='ovmsrv07.mydomain',
dstVdsId='02bb501a-b641-4ee1-bab1-5e640804e65f',
dstHost='ovmsrv05.mydomain:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', consoleAddress='null', maxBandwidth='null',
enableGuestEvents='true', maxIncomingMigrations='2',
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime,
params=[100]}], stalling=[{limit=1, action={name=setDowntime,
params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}},
{limit=3, action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]'}), log id: 5fd9f196
2017-03-21 10:37:27,386+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-6-thread-43) [f0cea507-608b-4f36-ac4e-606faec35cd9]
FINISH, MigrateBrokerVDSCommand, log id: 5fd9f196
2017-03-21 10:37:27,445+01 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-6-thread-43) [f0cea507-608b-4f36-ac4e-606faec35cd9]
FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14390155
2017-03-21 10:37:27,475+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-43) [f0cea507-608b-4f36-ac4e-606faec35cd9]
EVENT_ID: VM_MIGRATION_START(62), Correlation ID:
f0cea507-608b-4f36-ac4e-606faec35cd9, Job ID:
514c0562-de81-44a1-bde3-58027662b536, Call Stack: null, Custom Event ID:
-1, Message: Migration started (VM: c7service, Source: ovmsrv07,
Destination: ovmsrv05, User: g.cecchi@internal-authz).
2017-03-21 10:37:30,341+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler4) [53ea57f2] START, FullListVDSCommand(HostName =
ovmsrv07, FullListVDSCommandParameters:{runAsync='true',
hostId='30677d2c-4eb8-4ed9-ba54-0b89945a45fd',
vmIds='[2e571c77-bae1-4c1c-bf98-effaf9fed741]'}), log id: 46e85505
2017-03-21 10:37:30,526+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler4) [53ea57f2] FINISH, FullListVDSCommand, return:
[{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0,
vmId=2e571c77-bae1-4c1c-bf98-effaf9fed741,
guestDiskMapping={0QEMU_QEMU_HARDDISK_6af3dfe5-6da7-48e3-9={name=/dev/sda},
QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true,
timeOffset=0, cpuType=Opteron_G2, smp=1, pauseCode=NOERR,
guestNumaNodes=[Ljava.lang.Object;@31e4f47d, smartcardEnable=false,
custom={device_39acb1f5-c31e-4810-a4c3-26d460a6e374=VmDevice:{id='VmDeviceId:{deviceId='39acb1f5-c31e-4810-a4c3-26d460a6e374',
vmId='2e571c77-bae1-4c1c-bf98-effaf9fed741'}', device='ide',
type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]',
snapshotId='null', logicalName='null', hostDevice='null'},
device_39acb1f5-c31e-4810-a4c3-26d460a6e374device_c45a9213-8f12-46c7-b749-27a4fcb02c67=VmDevice:{id='VmDeviceId:{deviceId='c45a9213-8f12-46c7-b749-27a4fcb02c67',
vmId='2e571c77-bae1-4c1c-bf98-effaf9fed741'}', device='unix',
type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
controller=0, type=virtio-serial, port=1}', managed='false',
plugged='true', readOnly='false', deviceAlias='channel0',
customProperties='[]', snapshotId='null', logicalName='null',
hostDevice='null'},
device_39acb1f5-c31e-4810-a4c3-26d460a6e374device_c45a9213-8f12-46c7-b749-27a4fcb02c67device_22513430-5212-4d6a-a413-4c42a216dc9ddevice_f22e3921-5426-403e-9dab-eaf23b0c628e=VmDevice:{id='VmDeviceId:{deviceId='f22e3921-5426-403e-9dab-eaf23b0c628e',
vmId='2e571c77-bae1-4c1c-bf98-effaf9fed741'}', device='spicevmc',
type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
controller=0, type=virtio-serial, port=3}', managed='false',
plugged='true', readOnly='false', deviceAlias='channel2',
customProperties='[]', snapshotId='null', logicalName='null',
hostDevice='null'},
device_39acb1f5-c31e-4810-a4c3-26d460a6e374device_c45a9213-8f12-46c7-b749-27a4fcb02c67device_22513430-5212-4d6a-a413-4c42a216dc9d=VmDevice:{id='VmDeviceId:{deviceId='22513430-5212-4d6a-a413-4c42a216dc9d',
vmId='2e571c77-bae1-4c1c-bf98-effaf9fed741'}', device='unix',
type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
controller=0, type=virtio-serial, port=2}', managed='false',
plugged='true', readOnly='false', deviceAlias='channel1',
customProperties='[]', snapshotId='null', logicalName='null',
hostDevice='null'}}, vmType=kvm, memSize=4096, smpCoresPerSocket=1,
vmName=c7service, nice=0, status=Migration Source, maxMemSize=16384,
bootMenuEnable=false, pid=4504, smpThreadsPerCore=1,
memGuaranteedSize=2048, kvmEnable=true, pitReinjection=false,
displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@3f3610df,
display=qxl, maxVCpus=16, clientIp=, statusTime=4533149590,
maxMemSlots=16}], log id: 46e85505
2017-03-21 10:37:30,536+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
(DefaultQuartzScheduler4) [53ea57f2] Received a console Device without an
address when processing VM 2e571c77-bae1-4c1c-bf98-effaf9fed741 devices,
skipping device: {device=console, specParams={consoleType=serial,
enableSocket=true}, type=console,
deviceId=dcfbd20d-65f4-47d3-ac7f-5c2cc60e561f, alias=serial0}
2017-03-21 10:37:30,536+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
(DefaultQuartzScheduler4) [53ea57f2] Received a spice Device without an
address when processing VM 2e571c77-bae1-4c1c-bf98-effaf9fed741 devices,
skipping device: {device=spice, specParams={fileTransferEnable=true,
displayNetwork=ovirtmgmt, copyPasteEnable=true, displayIp=10.4.168.76,
spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,ssmartcard,susbredir},
type=graphics, deviceId=5cf3ed21-1471-44d0-81d8-653044fdf51f, tlsPort=5900}
2017-03-21 10:37:30,537+01 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
(DefaultQuartzScheduler4) [53ea57f2] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741' managed non pluggable device was
removed unexpectedly from libvirt:
'VmDevice:{id='VmDeviceId:{deviceId='dcfbd20d-65f4-47d3-ac7f-5c2cc60e561f',
vmId='2e571c77-bae1-4c1c-bf98-effaf9fed741'}', device='console',
type='CONSOLE', bootOrder='0', specParams='[enableSocket=true,
consoleType=serial]', address='', managed='true', plugged='false',
readOnly='false', deviceAlias='', customProperties='[]', snapshotId='null',
logicalName='null', hostDevice='null'}'
2017-03-21 10:37:40,966+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
(DefaultQuartzScheduler10) [1949f0c7] Fetched 2 VMs from VDS
'02bb501a-b641-4ee1-bab1-5e640804e65f'
2017-03-21 10:37:40,968+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler10) [1949f0c7] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741'(c7service) was unexpectedly detected
as 'MigratingTo' on VDS '02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05)
(expected on '30677d2c-4eb8-4ed9-ba54-0b89945a45fd')
2017-03-21 10:37:40,968+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler10) [1949f0c7] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741' is migrating to VDS
'02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05) ignoring it in the refresh
until migration is done
2017-03-21 10:37:55,992+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler2) [560703e7] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741'(c7service) was unexpectedly detected
as 'MigratingTo' on VDS '02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05)
(expected on '30677d2c-4eb8-4ed9-ba54-0b89945a45fd')
2017-03-21 10:37:55,992+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler2) [560703e7] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741' is migrating to VDS
'02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05) ignoring it in the refresh
until migration is done
2017-03-21 10:38:11,018+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler2) [560703e7] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741'(c7service) was unexpectedly detected
as 'MigratingTo' on VDS '02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05)
(expected on '30677d2c-4eb8-4ed9-ba54-0b89945a45fd')
2017-03-21 10:38:11,018+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler2) [560703e7] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741' is migrating to VDS
'02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05) ignoring it in the refresh
until migration is done
2017-03-21 10:38:26,345+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler7) [707b57d] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741'(c7service) was unexpectedly detected
as 'MigratingTo' on VDS '02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05)
(expected on '30677d2c-4eb8-4ed9-ba54-0b89945a45fd')
2017-03-21 10:38:26,345+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler7) [707b57d] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741' is migrating to VDS
'02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05) ignoring it in the refresh
until migration is done
2017-03-21 10:38:42,371+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler9) [44770476] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741'(c7service) was unexpectedly detected
as 'MigratingTo' on VDS '02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05)
(expected on '30677d2c-4eb8-4ed9-ba54-0b89945a45fd')
2017-03-21 10:38:42,371+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler9) [44770476] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741' is migrating to VDS
'02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05) ignoring it in the refresh
until migration is done
2017-03-21 10:38:57,397+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler6) [46677852] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741'(c7service) was unexpectedly detected
as 'MigratingTo' on VDS '02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05)
(expected on '30677d2c-4eb8-4ed9-ba54-0b89945a45fd')
2017-03-21 10:38:57,397+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler6) [46677852] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741' is migrating to VDS
'02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05) ignoring it in the refresh
until migration is done
2017-03-21 10:39:12,632+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler7) [707b57d] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741'(c7service) was unexpectedly detected
as 'MigratingTo' on VDS '02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05)
(expected on '30677d2c-4eb8-4ed9-ba54-0b89945a45fd')
2017-03-21 10:39:12,632+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler7) [707b57d] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741' is migrating to VDS
'02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05) ignoring it in the refresh
until migration is done
2017-03-21 10:39:12,849+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-1) [] VM '2e571c77-bae1-4c1c-bf98-effaf9fed741' was
reported as Down on VDS '02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05)
2017-03-21 10:39:12,849+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-1) [] START, DestroyVDSCommand(HostName = ovmsrv05,
DestroyVmVDSCommandParameters:{runAsync='true',
hostId='02bb501a-b641-4ee1-bab1-5e640804e65f',
vmId='2e571c77-bae1-4c1c-bf98-effaf9fed741', force='false',
secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log
id: 6fa6dc7f
2017-03-21 10:39:13,856+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-1) [] Failed to destroy VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741' because VM does not exist, ignoring
2017-03-21 10:39:13,856+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-1) [] FINISH, DestroyVDSCommand, log id: 6fa6dc7f
2017-03-21 10:39:13,856+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-1) [] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741'(c7service) was unexpectedly detected
as 'Down' on VDS '02bb501a-b641-4ee1-bab1-5e640804e65f'(ovmsrv05) (expected
on '30677d2c-4eb8-4ed9-ba54-0b89945a45fd')
2017-03-21 10:39:22,138+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler7) [707b57d] VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741'(c7service) moved from
'MigratingFrom' --> 'Up'
2017-03-21 10:39:22,139+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler7) [707b57d] Adding VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741'(c7service) to re-run list
2017-03-21 10:39:22,161+01 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
(DefaultQuartzScheduler7) [707b57d] Rerun VM
'2e571c77-bae1-4c1c-bf98-effaf9fed741'. Called from VDS 'ovmsrv07'
2017-03-21 10:39:22,273+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(org.ovirt.thread.pool-6-thread-36) [707b57d] START,
MigrateStatusVDSCommand(HostName = ovmsrv07,
MigrateStatusVDSCommandParameters:{runAsync='true',
hostId='30677d2c-4eb8-4ed9-ba54-0b89945a45fd',
vmId='2e571c77-bae1-4c1c-bf98-effaf9fed741'}), log id: 53f46b57
2017-03-21 10:39:22,577+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(org.ovirt.thread.pool-6-thread-36) [707b57d] FINISH,
MigrateStatusVDSCommand, log id: 53f46b57
2017-03-21 10:39:22,589+01 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-36) [707b57d] EVENT_ID:
VM_MIGRATION_TO_SERVER_FAILED(120), Correlation ID:
f0cea507-608b-4f36-ac4e-606faec35cd9, Job ID:
514c0562-de81-44a1-bde3-58027662b536, Call Stack: null, Custom Event ID:
-1, Message: Migration failed (VM: c7service, Source: ovmsrv07,
Destination: ovmsrv05).
2017-03-21 10:39:22,595+01 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(org.ovirt.thread.pool-6-thread-36) [707b57d] Lock freed to object
'EngineLock:{exclusiveLocks='[2e571c77-bae1-4c1c-bf98-effaf9fed741=<VM,
ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName c7service>]',
sharedLocks='null'}'
2017-03-21 10:39:27,658+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
(DefaultQuartzScheduler2) [560703e7] Fetched 1 VMs from VDS
'02bb501a-b641-4ee1-bab1-5e640804e65f'
[root@ovmgr1 ovirt-engine-notifier]#
on target host vdsm.log some of these (perhaps related to retries):
017-03-21 10:37:39,826 INFO (jsonrpc/4) [dispatcher] Run and protect:
repoStats, Return response: {u'5e11aa05-9a40-43d8-b173
-c2294cb0c1be': {'code': 0, 'actual': True, 'version': 0, 'acquired': True,
'delay': '0.00054575', 'lastCheck': '5.0', 'valid'
: True}, u'8846db3f-6e07-4468-bbfc-bf5fbc3f8eef': {'code': 0, 'actual':
True, 'version': 0, 'acquired': True, 'delay': '0.0006
64548', 'lastCheck': '5.0', 'valid': True},
u'5ed04196-87f1-480e-9fee-9dd450a3b53b': {'code': 0, 'actual': True,
'version': 4,
'acquired': True, 'delay': '0.000413433', 'lastCheck': '7.0', 'valid':
True}, u'922b5269-ab56-4c4d-838f-49d33427e2ab': {'code
': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay':
'0.00136826', 'lastCheck': '3.3', 'valid': True}} (logUtils:52)
2017-03-21 10:37:39,839 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call
Host.getStats succeeded in 0.02 seconds (__init__:5
15)
2017-03-21 10:37:40,953 ERROR (jsonrpc/2) [jsonrpc.JsonRpcServer] Internal
server error (__init__:552)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 547,
in _handle_request
res = method(**params)
File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 202, in
_dynamicMethod
result = fn(*methodArgs)
File "/usr/share/vdsm/API.py", line 1410, in getAllVmIoTunePolicies
io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
File "/usr/share/vdsm/clientIF.py", line 447, in getAllVmIoTunePolicies
vm_io_tune_policies[v.id] = {'policy': v.getIoTunePolicy(),
File "/usr/share/vdsm/virt/vm.py", line 2730, in getIoTunePolicy
qos = self._getVmPolicy()
File "/usr/share/vdsm/virt/vm.py", line 2704, in _getVmPolicy
metadata_xml = self._dom.metadata(
File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47,
in __getattr__
% self.vmid)
NotConnectedError: VM u'2e571c77-bae1-4c1c-bf98-effaf9fed741' was not
started yet or was shut down
2017-03-21 10:37:40,954 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call
Host.getAllVmIoTunePolicies failed (error -32603) in 0.00 seconds
(__init__:515)
2017-03-21 10:37:40,959 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call
Host.getAllVmStats succeeded in 0.01 seconds (__init__:515)
2017-03-21 10:38:10,999 ERROR (jsonrpc/4) [jsonrpc.JsonRpcServer] Internal
server error (__init__:552)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 547,
in _handle_request
res = method(**params)
File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 202, in
_dynamicMethod
result = fn(*methodArgs)
File "/usr/share/vdsm/API.py", line 1410, in getAllVmIoTunePolicies
io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
File "/usr/share/vdsm/clientIF.py", line 447, in getAllVmIoTunePolicies
vm_io_tune_policies[v.id] = {'policy': v.getIoTunePolicy(),
File "/usr/share/vdsm/virt/vm.py", line 2730, in getIoTunePolicy
qos = self._getVmPolicy()
File "/usr/share/vdsm/virt/vm.py", line 2704, in _getVmPolicy
metadata_xml = self._dom.metadata(
File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47,
in __getattr__
% self.vmid)
NotConnectedError: VM u'2e571c77-bae1-4c1c-bf98-effaf9fed741' was not
started yet or was shut down
7 years, 8 months
Event History for a VM
by Sven Achtelik
--_000_BFAB40933B3367488CE6299BAF8592D1014E52AE6F20SOCRATESasl_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
Hi All,
I would need to have an Event-History of our VMs for auditing purposes that=
is able to go back until the moment the VM was created/imported. I found t=
he Events Tab in the VM view and found that this is not showing everything =
to the moment of creation. Things that are important for me would be any ch=
ange in CPUs or Host that the VM is pinned to. Are the Events stored in the=
Engine DB and can I read them in any way ? Is there a value that needs to =
be changed in order to keep all Events for a VM ?
Thank you for helping,
Sven
--_000_BFAB40933B3367488CE6299BAF8592D1014E52AE6F20SOCRATESasl_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr=
osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D"http:=
//www.w3.org/TR/REC-html40"><head><META HTTP-EQUIV=3D"Content-Type" CONTENT=
=3D"text/html; charset=3Dus-ascii"><meta name=3DGenerator content=3D"Micros=
oft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif;
mso-fareast-language:EN-US;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:#0563C1;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:#954F72;
text-decoration:underline;}
span.E-MailFormatvorlage17
{mso-style-type:personal-compose;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri",sans-serif;
mso-fareast-language:EN-US;}
@page WordSection1
{size:612.0pt 792.0pt;
margin:70.85pt 70.85pt 2.0cm 70.85pt;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext=3D"edit">
<o:idmap v:ext=3D"edit" data=3D"1" />
</o:shapelayout></xml><![endif]--></head><body lang=3DDE link=3D"#0563C1" v=
link=3D"#954F72"><div class=3DWordSection1><p class=3DMsoNormal><span lang=
=3DEN-US>Hi All, <o:p></o:p></span></p><p class=3DMsoNormal><span lang=3DEN=
-US><o:p> </o:p></span></p><p class=3DMsoNormal><span lang=3DEN-US>I w=
ould need to have an Event-History of our VMs for auditing purposes that is=
able to go back until the moment the VM was created/imported. I found the =
Events Tab in the VM view and found that this is not showing everything to =
the moment of creation. Things that are important for me would be any chang=
e in CPUs or Host that the VM is pinned to. Are the Events stored in the En=
gine DB and can I read them in any way ? Is there a value that needs to be =
changed in order to keep all Events for a VM ? <o:p></o:p></span></p><p cla=
ss=3DMsoNormal><span lang=3DEN-US><o:p> </o:p></span></p><p class=3DMs=
oNormal><span lang=3DEN-US>Thank you for helping, <o:p></o:p></span></p><p =
class=3DMsoNormal><span lang=3DEN-US><o:p> </o:p></span></p><p class=
=3DMsoNormal><span lang=3DEN-US>Sven<o:p></o:p></span></p><p class=3DMsoNor=
mal><o:p> </o:p></p></div></body></html>=
--_000_BFAB40933B3367488CE6299BAF8592D1014E52AE6F20SOCRATESasl_--
7 years, 8 months