Changing gateway ping address
by Matteo
Hi all,
I need to change the gateway ping address, the one used by hosted engine setup.
Is it OK to edit /etc/ovirt-hosted-engine/hosted-engine.conf on each node,
update the gateway parameter with the new IP address, and restart
the agent and broker on each node?
A blind test seems OK, but I need to understand whether this is the right procedure.
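In other words, something like this on each node (just a sketch; assuming the
HA services are named ovirt-ha-agent and ovirt-ha-broker as on my EL7 nodes,
and using a made-up gateway address):

NEW_GW=192.168.1.254   # hypothetical new gateway address, adjust to yours

# update the gateway parameter in the hosted-engine configuration
sed -i "s/^gateway=.*/gateway=${NEW_GW}/" /etc/ovirt-hosted-engine/hosted-engine.conf

# restart the HA broker and agent so they pick up the new address
systemctl restart ovirt-ha-broker ovirt-ha-agent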
Thanks,
Matteo
7 years, 8 months
[ovirt-shell] update hostnic/nic ???
by Bloemen, Jurriën
Hi,

First I created a bonding interface:

# add nic --parent-host-name server01 --name bond0 --network-name VLAN602 --bonding-slaves-host_nic host_nic.name=eno1 --bonding-slaves-host_nic host_nic.name=eno2

This works great, but no IP is set on VLAN602.

Then I'm trying to add an IP address to a network with the following command:

# update hostnic --parent-host-name server01 --network-name VLAN602 --boot_protocol static --ip-address 10.10.10.10 --ip-netmask 255.255.255.0

=================================== ERROR ===================================
    wrong number of arguments, try 'help update' for help.
==============================================================================

Looking at this document
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6-Beta/html/RHEVM_Shell_Guide/nic.html
I need to use "nic" instead of "hostnic", but then I don't have the options to
say this is a --parent-host-name; only VM-related command options are available.

So I think the documentation is behind.

Can somebody help me with the command to add an IP to a VLAN/network for a host?
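Would something along these lines via the REST API be a reasonable fallback in
the meantime? (Just a sketch I pieced together; I have not verified the
<host_nic> element names against the 3.6 API, and HOST_ID / NIC_ID are
placeholders.)

# sketch only -- element names and placeholders unverified
curl -k -u 'admin@internal:PASSWORD' -X PUT \
     -H 'Content-Type: application/xml' \
     -d '<host_nic>
           <boot_protocol>static</boot_protocol>
           <ip address="10.10.10.10" netmask="255.255.255.0"/>
         </host_nic>' \
     'https://engine.example.com/api/hosts/HOST_ID/nics/NIC_ID'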
--
Kind regards,

Jurriën Bloemen
7 years, 9 months
Dedicated NICs for gluster network
by Nicolas Ecarnot
Hello,
[Here : oVirt 3.5.3, 3 x CentOS 7.0 hosts with replica-3 gluster SD on
the hosts].
On the switches, I have created a dedicated VLAN to isolate the GlusterFS
traffic, but I'm not using it yet.
I was thinking of creating a dedicated IP for each node's gluster NIC,
along with a DNS record ("my_nodes_name_GL"), but I fear that using this
hostname or this IP in the oVirt GUI host network interface tab would lead
oVirt to think this is a different host.
In case this fear isn't clearly described, let's say:
- On each node, I create a second IP (plus a DNS record in the zone) used by
gluster, plugged into the correct VLAN.
- In the oVirt GUI, in the host network settings tab, the interface will be
seen with its IP, but its reverse DNS will point to a different hostname.
Here, I fear oVirt might check this reverse DNS and declare that this NIC
belongs to another host.
I would also prefer not to use a reverse record pointing to the name of the
host's management IP, as this is evil and I'm a good guy.
On your side, how do you cope with a dedicated storage network in the case
of mixed storage+compute hosts?
--
Nicolas ECARNOT
8 years, 3 months
oVirt-shell command to move a disk
by Nicolas Ecarnot
Hello,
I'm confused: although I use ovirt-shell to script many actions
every day, and even after a lot of reading and testing, I cannot
find the correct syntax to move (offline/available) disks between
storage domains.
Can you help me, please?
(oVirt 3.4.4)
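Ideally I'm after something equivalent to this REST call (a sketch only; I
have not verified the exact element names against the 3.4 API, and DISK_ID /
TARGET_SD_ID are placeholders):

# sketch: POST the disk's "move" action with the target storage domain
curl -k -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' \
     -d '<action><storage_domain id="TARGET_SD_ID"/></action>' \
     'https://engine.example.com/api/disks/DISK_ID/move'

But I would prefer the ovirt-shell equivalent if it exists.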
--
Nicolas Ecarnot
8 years, 3 months
One RHEV Virtual Machine does not Automatically Resume following Compellent SAN Controller Failover
by Duckworth, Douglas C
Hello --
Not sure if y'all can help with this issue we've been seeing with RHEV...
On 11/13/2015, during Code Upgrade of Compellent SAN at our Disaster
Recovery Site, we Failed Over to Secondary SAN Controller. Most Virtual
Machines in our DR Cluster Resumed automatically after Pausing except VM
"BADVM" on Host "BADHOST."
In Engine.log you can see that BADVM was sent into "VM_PAUSED_EIO" state
at 10:47:57:
"VM BADVM has paused due to storage I/O problem."
On this Red Hat Enterprise Virtualization Hypervisor 6.6
(20150512.0.el6ev) Host, two other VMs paused but then automatically
resumed without System Administrator intervention...
In our DR Cluster, 22 VMs also resumed automatically...
None of these Guest VMs are engaged in high I/O as these are DR site VMs
not currently doing anything.
We sent this information to Dell. Their response:
"The root cause may reside within your virtualization solution, not the
parent OS (RHEV-Hypervisor disc) or Storage (Dell Compellent.)"
We are doing this Failover again on Sunday November 29th so we would
like to know how to mitigate this issue, given we have to manually
resume paused VMs that don't resume automatically.
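For now, the manual resume we do is roughly this, on the affected Host (a
sketch; on RHEV-H virsh is read-only by default, so this assumes the
vdsm/libvirt SASL credentials are set up):

# list the VMs libvirt still sees as paused and resume them
for vm in $(virsh list --state-paused --name); do
    virsh resume "$vm"
done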
Before we initiated SAN Controller Failover, all iSCSI paths to Targets
were present on Host tulhv2p03.
The VM logs on the Host, in /var/log/libvirt/qemu/badhost.log, show that a
storage error was reported:
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
All disks used by this Guest VM are provided by a single Storage Domain,
COM_3TB4_DR, with serial "270." In syslog we do see that all paths for
that Storage Domain failed:
Nov 13 16:47:40 multipathd: 36000d310005caf000000000000000270: remaining
active paths: 0
Though these recovered later:
Nov 13 16:59:17 multipathd: 36000d310005caf000000000000000270: sdbg -
tur checker reports path is up
Nov 13 16:59:17 multipathd: 36000d310005caf000000000000000270: remaining
active paths: 8
Does anyone have an idea of why the VM would fail to automatically
resume if the iSCSI paths used by its Storage Domain recovered?
Thanks
Doug
--
Thanks
Douglas Charles Duckworth
Unix Administrator
Tulane University
Technology Services
1555 Poydras Ave
NOLA -- 70112
E: duckd(a)tulane.edu
O: 504-988-9341
F: 504-988-8505
8 years, 5 months
Can't remove snapshot
by Rik Theys
Hi,
I created a snapshot of a running VM prior to an OS upgrade. The OS
upgrade has now been successful and I would like to remove the snapshot.
I've selected the snapshot in the UI and clicked Delete to start the task.
After a few minutes, the task failed. When I click Delete again on
the same snapshot, the failure message is returned after a few seconds.
From browsing through the engine log (attached), it seems the snapshot
was correctly merged on the first try, but something went wrong in the
finalizing phase. On retries, the log indicates the snapshot/disk image
no longer exists, and the removal of the snapshot fails for this reason.
Is there any way to clean up this snapshot?
I can see the snapshot in the "Disk snapshot" tab of the storage. It has
a status of "illegal". Is it OK to (try to) remove this snapshot? Will
this impact the running VM and/or disk image?
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>
8 years, 6 months
3.6 upgrade issue
by Jon Archer
Hi all,
Wonder if anyone can shed any light on an error I'm seeing while running
engine-setup.
I've just upgraded the packages to the latest 3.6 ones today (from 3.5),
run engine-setup, answered the questions, and confirmed the install, then got
presented with:
[ INFO ] Cleaning async tasks and compensations
[ INFO ] Unlocking existing entities
[ INFO ] Checking the Engine database consistency
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping ovirt-fence-kdump-listener service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ ERROR ] Failed to execute stage 'Misc configuration': function
getdwhhistorytimekeepingbyvarname(unknown) does not exist LINE 2:
select * from GetDwhHistoryTimekeepingByVarName(
^ HINT: No function matches the given name and argument
types. You might need to add explicit type casts.
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20150929144137-7u5rhg.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20150929144215-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
Any ideas where to look to fix things?
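In case it helps narrow it down, is this the right way to check whether that
DWH function exists in the engine database? (A guess on my part; assuming the
default database name "engine".)

su - postgres -c "psql engine -c '\df getdwhhistorytimekeepingbyvarname'"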
Thanks
Jon
8 years, 6 months
heavy webadmin
by Nathanaël Blanchet
Hi all,
Since I upgraded the engine to 3.6, I have noticed that the webadmin takes a
lot of resources whatever the browser. It can become very slow even for
small actions, like changing tabs or editing a VM. The browser activity
becomes intensive (100% of a CPU) and the processor gets very hot, with
increased fan activity. I suspect JavaScript is responsible for this
behaviour. Is there a way to reduce the resources allocated to the webadmin?
(This is not a weakness of my laptop, which has an i7 CPU and 16 GB of RAM.)
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
8 years, 7 months
ovirt glusterfs performance
by Bill James
I'm setting up an oVirt cluster using GlusterFS and noticing less than
stellar performance.
Maybe my setup could use some adjustments?
3 hardware nodes running CentOS 7.2, GlusterFS 3.7.6.1, oVirt 3.6.2.6-1.
Each node has 8 spindles configured in one array, which is split using LVM
with one logical volume for the system and one for gluster.
They each have 4 NICs,
NIC1 = ovirtmgmt
NIC2 = gluster
NIC3 = VM traffic
I tried with default glusterfs settings and also with:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB
[root@ovirt3 test scripts]# gluster volume info gv1
Volume Name: gv1
Type: Replicate
Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
Options Reconfigured:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB
Using a simple dd test on a VM in oVirt:
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s
Another VM, not in oVirt, using NFS:
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s
Is that expected or is there a better way to set it up to get better
performance?
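One thing I have not tried yet is the gluster "virt" option group -- would
that be expected to help here? (A sketch; as far as I know the group file
ships with gluster 3.7 as /var/lib/glusterd/groups/virt.)

# apply the option group gluster ships for VM image workloads, then re-check
gluster volume set gv1 group virt
gluster volume info gv1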
Thanks.
8 years, 7 months
Can't start VMs (Unable to get volume size for domain)
by Justin Foreman
I’m running 3.6.2 rc1 with hosted engine on an FCP storage domain.
As of yesterday, I can't run some VMs. I've experienced corruption on others (I now have a Windows VM that blue-screens on boot).
Here’s the log from my engine.
2016-01-04 16:55:39,446 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-16) [1f1deb62] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:39,479 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-16) [1f1deb62] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}), log id: 299a5052
2016-01-04 16:55:39,479 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-16) [1f1deb62] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 299a5052
2016-01-04 16:55:39,517 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] Running command: RunVmCommand internal: false. Entities affected : ID: 3a17534b-e86d-4563-8ca2-2a27c34b4a87 Type: VMAction group RUN_VM with role type USER
2016-01-04 16:55:39,579 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{runAsync='true', hostId='null', vmId='00000000-0000-0000-0000-000000000000', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@dadddaa9'}), log id: 6574710a
2016-01-04 16:55:39,582 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] FINISH, UpdateVmDynamicDataVDSCommand, log id: 6574710a
2016-01-04 16:55:39,585 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] START, CreateVmVDSCommand( CreateVmVDSCommandParameters:{runAsync='true', hostId='2fe6c27b-9346-4678-8cd3-c9d367ec447f', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log id: 55e0849d
2016-01-04 16:55:39,586 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] START, CreateVDSCommand(HostName = ov-101, CreateVmVDSCommandParameters:{runAsync='true', hostId='2fe6c27b-9346-4678-8cd3-c9d367ec447f', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log id: 1d5c1c04
2016-01-04 16:55:39,589 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmInfoBuilderBase] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] Bootable disk '9e43c66a-5bf1-44d6-94f4-52178d15c1e6' set to index '0'
2016-01-04 16:55:39,600 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand pitReinjection=false,memGuaranteedSize=4054,smpThreadsPerCore=1,cpuType=SandyBridge,vmId=3a17534b-e86d-4563-8ca2-2a27c34b4a87,acpiEnable=true,numaTune={nodeset=0,1, mode=interleave},tabletEnable=true,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,vmType=kvm,keyboardLayout=en-us,smp=1,smpCoresPerSocket=1,emulatedMachine=pc-i440fx-rhel7.2.0,smartcardEnable=false,guestNumaNodes=[{memory=4054, cpus=0, nodeIndex=0}],transparentHugePages=true,vmName=adm1,maxVCpus=16,kvmEnable=true,devices=[{address={bus=0x00, domain=0x0000, function=0x0, slot=0x02, type=pci}, type=video, specParams={heads=1, vram=32768}, device=cirrus, deviceId=645e99e3-a9fa-4894-baf5-97b539236782}, {type=graphics, specParams={}, device=vnc, deviceId=12845c03-16a3-4bf0-a015-a15201a77673}, {iface=ide, shared=false, path=, address={bus=1, controller=0, unit=0, type=drive, target=0}, readonly=true, index=2, type=disk, specParams={path=}, device=cdrom, deviceId=ab048396-5dd8-4594-aa8a-9fe835a04cd1}, {shared=false, address={bus=0, controller=0, unit=0, type=drive, target=0}, imageID=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, format=raw, index=0, optional=false, type=disk, deviceId=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, iface=ide, readonly=false, bootOrder=1, poolID=00000001-0001-0001-0001-000000000154, volumeID=c736baca-de76-4593-b3dc-28bb8807e7a3, specParams={}, device=disk}, {shared=false, address={bus=0, controller=0, unit=1, type=drive, target=0}, imageID=a016b350-87ef-4c3b-b150-024907fed9c0, format=raw, optional=false, type=disk, deviceId=a016b350-87ef-4c3b-b150-024907fed9c0, domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, iface=ide, readonly=false, poolID=00000001-0001-0001-0001-000000000154, volumeID=20fc4399-0b02-4da1-8aee-68df1629ca94, specParams={}, device=disk}, {filter=vdsm-no-mac-spoofing, nicModel=rtl8139, address={bus=0x00, domain=0x0000, function=0x0, slot=0x03, type=pci}, type=interface, specParams={inbound={}, outbound={}}, device=bridge, linkActive=true, deviceId=8e00d4cc-6a60-4598-82ee-645d742708de, macAddr=FA:0D:49:9E:A2:E6, network=server-vlan10}, {address={bus=0x00, domain=0x0000, function=0x0, slot=0x04, type=pci}, type=controller, specParams={}, device=virtio-serial, deviceId=8ac5777e-375f-4ec6-a6fd-856c7cd7363b}],custom={device_8617fb20-b870-45ea-8232-a70dd8b4551c=VmDevice:{id='VmDeviceId:{deviceId='8617fb20-b870-45ea-8232-a70dd8b4551c', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}, device_8617fb20-b870-45ea-8232-a70dd8b4551cdevice_f691fc09-31c8-43bf-bd82-c5acac8a1a76device_30bd748e-6ea8-434f-8587-d8ff8db5555e=VmDevice:{id='VmDeviceId:{deviceId='30bd748e-6ea8-434f-8587-d8ff8db5555e', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', 
usingScsiReservation='false'}, device_8617fb20-b870-45ea-8232-a70dd8b4551cdevice_f691fc09-31c8-43bf-bd82-c5acac8a1a76=VmDevice:{id='VmDeviceId:{deviceId='f691fc09-31c8-43bf-bd82-c5acac8a1a76', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}},display=vnc,timeOffset=0,spiceSslCipherSuite=DEFAULT,nice=0,maxMemSize=4194304,maxMemSlots=16,bootMenuEnable=false,memSize=4054
2016-01-04 16:55:39,627 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] FINISH, CreateVDSCommand, log id: 1d5c1c04
2016-01-04 16:55:39,631 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 55e0849d
2016-01-04 16:55:39,631 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] Lock freed to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:39,634 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: VM adm1 was started by jforeman@us.dignitastech.com(a)Dignitas AD (Host: ov-101).
2016-01-04 16:55:40,724 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-10) [] START, DestroyVDSCommand(HostName = ov-101, DestroyVmVDSCommandParameters:{runAsync='true', hostId='2fe6c27b-9346-4678-8cd3-c9d367ec447f', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', force='false', secondsToWait='0', gracefully='false', reason=''}), log id: 7935781d
2016-01-04 16:55:41,730 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-10) [] FINISH, DestroyVDSCommand, log id: 7935781d
2016-01-04 16:55:41,747 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-10) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM adm1 is down with error. Exit message: Unable to get volume size for domain 1fb79d91-b245-4447-91e0-e57671152a8c volume c736baca-de76-4593-b3dc-28bb8807e7a3.
2016-01-04 16:55:41,747 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-10) [] Running on vds during rerun failed vm: '2fe6c27b-9346-4678-8cd3-c9d367ec447f'
2016-01-04 16:55:41,747 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-10) [] VM '3a17534b-e86d-4563-8ca2-2a27c34b4a87(adm1) is running in db and not running in VDS 'ov-101'
2016-01-04 16:55:41,747 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-10) [] add VM 'adm1' to HA rerun treatment
2016-01-04 16:55:41,752 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (ForkJoinPool-1-worker-10) [] Rerun VM '3a17534b-e86d-4563-8ca2-2a27c34b4a87'. Called from VDS 'ov-101'
2016-01-04 16:55:41,756 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-30) [] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM adm1 on Host ov-101.
2016-01-04 16:55:41,760 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-30) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:41,770 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}), log id: 2577cd3a
2016-01-04 16:55:41,770 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 2577cd3a
2016-01-04 16:55:41,798 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-30) [] Running command: RunVmCommand internal: false. Entities affected : ID: 3a17534b-e86d-4563-8ca2-2a27c34b4a87 Type: VMAction group RUN_VM with role type USER
2016-01-04 16:55:41,850 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{runAsync='true', hostId='null', vmId='00000000-0000-0000-0000-000000000000', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@dbe0ef0a'}), log id: 351fb749
2016-01-04 16:55:41,852 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] FINISH, UpdateVmDynamicDataVDSCommand, log id: 351fb749
2016-01-04 16:55:41,854 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] START, CreateVmVDSCommand( CreateVmVDSCommandParameters:{runAsync='true', hostId='65555052-9601-4e4f-88f5-a0f14dcc29eb', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log id: 3163c7c3
2016-01-04 16:55:41,857 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] START, CreateVDSCommand(HostName = ov-102, CreateVmVDSCommandParameters:{runAsync='true', hostId='65555052-9601-4e4f-88f5-a0f14dcc29eb', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log id: 569ec368
2016-01-04 16:55:41,860 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmInfoBuilderBase] (org.ovirt.thread.pool-8-thread-30) [] Bootable disk '9e43c66a-5bf1-44d6-94f4-52178d15c1e6' set to index '0'
2016-01-04 16:55:41,869 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand pitReinjection=false,memGuaranteedSize=4054,smpThreadsPerCore=1,cpuType=SandyBridge,vmId=3a17534b-e86d-4563-8ca2-2a27c34b4a87,acpiEnable=true,numaTune={nodeset=0,1, mode=interleave},tabletEnable=true,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,vmType=kvm,keyboardLayout=en-us,smp=1,smpCoresPerSocket=1,emulatedMachine=pc-i440fx-rhel7.2.0,smartcardEnable=false,guestNumaNodes=[{memory=4054, cpus=0, nodeIndex=0}],transparentHugePages=true,vmName=adm1,maxVCpus=16,kvmEnable=true,devices=[{address={bus=0x00, domain=0x0000, function=0x0, slot=0x02, type=pci}, type=video, specParams={heads=1, vram=32768}, device=cirrus, deviceId=645e99e3-a9fa-4894-baf5-97b539236782}, {type=graphics, specParams={}, device=vnc, deviceId=12845c03-16a3-4bf0-a015-a15201a77673}, {iface=ide, shared=false, path=, address={bus=1, controller=0, unit=0, type=drive, target=0}, readonly=true, index=2, type=disk, specParams={path=}, device=cdrom, deviceId=ab048396-5dd8-4594-aa8a-9fe835a04cd1}, {shared=false, address={bus=0, controller=0, unit=0, type=drive, target=0}, imageID=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, format=raw, index=0, optional=false, type=disk, deviceId=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, iface=ide, readonly=false, bootOrder=1, poolID=00000001-0001-0001-0001-000000000154, volumeID=c736baca-de76-4593-b3dc-28bb8807e7a3, specParams={}, device=disk}, {shared=false, address={bus=0, controller=0, unit=1, type=drive, target=0}, imageID=a016b350-87ef-4c3b-b150-024907fed9c0, format=raw, optional=false, type=disk, deviceId=a016b350-87ef-4c3b-b150-024907fed9c0, domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, iface=ide, readonly=false, poolID=00000001-0001-0001-0001-000000000154, volumeID=20fc4399-0b02-4da1-8aee-68df1629ca94, specParams={}, device=disk}, {filter=vdsm-no-mac-spoofing, nicModel=rtl8139, address={bus=0x00, domain=0x0000, function=0x0, slot=0x03, type=pci}, type=interface, specParams={inbound={}, outbound={}}, device=bridge, linkActive=true, deviceId=8e00d4cc-6a60-4598-82ee-645d742708de, macAddr=FA:0D:49:9E:A2:E6, network=server-vlan10}, {address={bus=0x00, domain=0x0000, function=0x0, slot=0x04, type=pci}, type=controller, specParams={}, device=virtio-serial, deviceId=8ac5777e-375f-4ec6-a6fd-856c7cd7363b}],custom={device_8617fb20-b870-45ea-8232-a70dd8b4551c=VmDevice:{id='VmDeviceId:{deviceId='8617fb20-b870-45ea-8232-a70dd8b4551c', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}, device_8617fb20-b870-45ea-8232-a70dd8b4551cdevice_f691fc09-31c8-43bf-bd82-c5acac8a1a76device_30bd748e-6ea8-434f-8587-d8ff8db5555e=VmDevice:{id='VmDeviceId:{deviceId='30bd748e-6ea8-434f-8587-d8ff8db5555e', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', 
usingScsiReservation='false'}, device_8617fb20-b870-45ea-8232-a70dd8b4551cdevice_f691fc09-31c8-43bf-bd82-c5acac8a1a76=VmDevice:{id='VmDeviceId:{deviceId='f691fc09-31c8-43bf-bd82-c5acac8a1a76', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}},display=vnc,timeOffset=0,spiceSslCipherSuite=DEFAULT,nice=0,maxMemSize=4194304,maxMemSlots=16,bootMenuEnable=false,memSize=4054
2016-01-04 16:55:41,987 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] FINISH, CreateVDSCommand, log id: 569ec368
2016-01-04 16:55:41,991 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 3163c7c3
2016-01-04 16:55:41,992 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-30) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:41,994 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-30) [] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: VM adm1 was started by jforeman@us.dignitastech.com(a)Dignitas AD (Host: ov-102).
2016-01-04 16:55:43,069 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-3) [] START, DestroyVDSCommand(HostName = ov-102, DestroyVmVDSCommandParameters:{runAsync='true', hostId='65555052-9601-4e4f-88f5-a0f14dcc29eb', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', force='false', secondsToWait='0', gracefully='false', reason=''}), log id: 43dd93c5
2016-01-04 16:55:44,075 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-3) [] FINISH, DestroyVDSCommand, log id: 43dd93c5
2016-01-04 16:55:44,091 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-3) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM adm1 is down with error. Exit message: Unable to get volume size for domain 1fb79d91-b245-4447-91e0-e57671152a8c volume c736baca-de76-4593-b3dc-28bb8807e7a3.
2016-01-04 16:55:44,091 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-3) [] Running on vds during rerun failed vm: '65555052-9601-4e4f-88f5-a0f14dcc29eb'
2016-01-04 16:55:44,092 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-3) [] VM '3a17534b-e86d-4563-8ca2-2a27c34b4a87(adm1) is running in db and not running in VDS 'ov-102'
2016-01-04 16:55:44,092 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-3) [] add VM 'adm1' to HA rerun treatment
2016-01-04 16:55:44,096 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (ForkJoinPool-1-worker-3) [] Rerun VM '3a17534b-e86d-4563-8ca2-2a27c34b4a87'. Called from VDS 'ov-102'
2016-01-04 16:55:44,128 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-35) [] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM adm1 on Host ov-102.
2016-01-04 16:55:44,132 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-35) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:44,141 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-35) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}), log id: 545236ca
2016-01-04 16:55:44,141 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-35) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 545236ca
2016-01-04 16:55:44,162 WARN [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-35) [] CanDoAction of action 'RunVm' failed for user jforeman@us.dignitastech.com(a)Dignitas AD. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
2016-01-04 16:55:44,162 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-35) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:44,170 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-35) [] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM adm1 (User: jforeman@us.dignitastech.com(a)Dignitas AD).
2016-01-04 16:55:44,173 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (org.ovirt.thread.pool-8-thread-46) [48c1f0bd] Running command: ProcessDownVmCommand internal: true.
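On the host side, is this the right way to check what VDSM sees for that
volume? (A sketch; the UUIDs are taken from the error above, and I am not
certain of the vdsClient argument order, so please correct me.)

# storage domain / pool / image / volume UUIDs from the error message above
vdsClient -s 0 getVolumeSize \
    1fb79d91-b245-4447-91e0-e57671152a8c \
    00000001-0001-0001-0001-000000000154 \
    9e43c66a-5bf1-44d6-94f4-52178d15c1e6 \
    c736baca-de76-4593-b3dc-28bb8807e7a3

# FCP domains back volumes with LVs -- check the LV is visible and active
lvs 1fb79d91-b245-4447-91e0-e57671152a8c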
8 years, 7 months