oVirt Engine Default Web Page - Broken Link "Console Client Resources"
by Beckman, Daniel
I just noticed that the default web page for oVirt engine (http://<hostname>/ovirt-engine/), at least for version 4.0.5, now has a broken link:

Console Client Resources <http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources/>

Is this true of 4.0.6 or was it fixed? What about 4.1? Can someone who's upgraded to those versions check?

Thanks,
Daniel
safe to reboot ovirt-engine?
by Wout Peeters
Hi,
A simple answer to this I'm sure, but is it safe to reboot the ovirt-engine while vms on the vm-hosts connected to it are running? Anything in particular to take into account while doing so?
Thanks.
Kind regards,
Wout
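For reference, a minimal sketch of a controlled engine restart, assuming a standalone (non-hosted-engine) setup where the engine runs as a regular systemd service; running VMs are managed by vdsm on the hosts and keep running while the engine is down:

# stop the engine service cleanly before rebooting the engine machine
systemctl stop ovirt-engine
reboot
# after the machine is back up, confirm the engine service returned
systemctl status ovirt-engine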
[ANN] oVirt 4.1.0 Second Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 4.1.0 for testing, as of January 26th, 2017.
This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.
This update is the second release candidate of the 4.1 release series.
4.1.0 brings more than 260 enhancements and 790 bugfixes, including 340 high
or urgent severity fixes, on top of the oVirt 4.0 series.
See the release notes [3] for installation / upgrade instructions and a
list of new features and bugs fixed.
This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* Fedora 24 (tech preview)
* oVirt Node 4.1
Notes:
- oVirt Live iso is already available[5]
- oVirt Node NG iso is already available[5]
- Hosted Engine appliance is already available.
- oVirt Windows Guest Tools iso is already available[5]
A release management page including planned schedule is also available[4]
Additional Resources:
* Read more about the oVirt 4.1.0 release highlights:
http://www.ovirt.org/release/4.1.0/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.0/
[4]
http://www.ovirt.org/develop/release-management/releases/4.1/release-mana...
[5] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/
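For reference, a minimal installation sketch on CentOS 7.3; the pre-release repository RPM name below is an assumption based on the usual ovirt-release naming, so check the release notes [3] for the exact commands:

# enable the oVirt 4.1 pre-release repository (RPM name assumed) and install the engine
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41-pre.rpm
yum install ovirt-engine
engine-setup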
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
posix compliant fs with ceph rbd
by Yura Poltoratskiy
Hi,
I want to use Ceph with oVirt in a somewhat non-standard way. The main idea is
to map an rbd volume to all compute nodes so that they all see the same block
device, say /dev/foo/bar, and then use the "POSIX compliant file systems"
option to add a Storage Domain.
Am I crazy? If not, what should I do next: create a file system on top of
/dev/foo/bar, say XFS, and add a DATA domain as POSIX compliant? Does that
work; I mean, is oVirt compatible with a non-clustered file system in this
scenario?
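To make the question concrete, a hedged sketch of the steps described above (the pool and image names are made up, and plain XFS is exactly the non-clustered case being asked about):

# on every hypervisor: map the same RBD image so it appears as a local block device
rbd map rbdpool/ovirt-data      # shows up as /dev/rbd0 (and /dev/rbd/rbdpool/ovirt-data)
# on ONE host only: create the filesystem
mkfs.xfs /dev/rbd0
# then add the domain in the Administration Portal:
#   Domain Function: Data, Storage Type: POSIX compliant FS,
#   Path: /dev/rbd/rbdpool/ovirt-data, VFS Type: xfs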
Mostly, I want to use rbd the way oVirt uses iSCSI storage, just to get
scalability and high availability (for example, when one storage node fails).
Thanks for advice.
PS. Yes, I know about Gluster but want to use Ceph :)
spice inside spice and Shift F12
by Gianluca Cecchi
Hello,
Sometimes I open a SPICE console from a browser that is itself running inside
a VM to which I'm connected through a SPICE console session.
In that case Shift+F12 doesn't work at all, and to exit the inner console I
have to shut down the VM, so I lose the focus and regain it inside the first
SPICE session.
It seems that in the past I used some key combination to work around this
(probably in plain libvirt+QEMU/KVM and not in oVirt), but I don't
remember it...
Could I set, for example, a different combination to release the console in
each of the two environments, so that they don't conflict with one another?
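One hedged possibility, if both consoles are opened with remote-viewer (console.vv standing for the console descriptor downloaded from the portal): its --hotkeys option can give each environment a different release-cursor combination, for example:

# outer console: keep the default Shift+F12
remote-viewer console.vv
# inner console (opened from inside the VM): release the cursor with Shift+F9 instead
remote-viewer --hotkeys=release-cursor=shift+f9 console.vv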
Gianluca
guest often loses connectivity, I have to ping gateway
by Gianluca Cecchi
Hello,
I'm on 4.0.6 with CentOS 7.3.
The hypervisor is an old BL685c G1 blade, and the network adapters used to
provide networking to the VMs are:
07:04.0 Ethernet controller: Broadcom Limited NetXtreme BCM5715S Gigabit Ethernet (rev a3)
07:04.1 Ethernet controller: Broadcom Limited NetXtreme BCM5715S Gigabit Ethernet (rev a3)
managed by tg3 kernel module, as I see in messages:
Jan 21 18:53:33 ovmsrv05 kernel: tg3 0000:07:04.0 eth0: Tigon3
[partno(011276-001) rev 9003] (PCIX:133MHz:64-bit) MAC address
00:1c:c4:46:ef:73
Jan 21 18:53:33 ovmsrv05 kernel: tg3 0000:07:04.0 eth0: attached PHY is
5714 (1000Base-SX Ethernet) (WireSpeed[0], EEE[0])
Jan 21 18:53:33 ovmsrv05 kernel: tg3 0000:07:04.0 eth0: RXcsums[1]
LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1]
Jan 21 18:53:33 ovmsrv05 kernel: tg3 0000:07:04.0 eth0:
dma_rwctrl[76148000] dma_mask[40-bit]
Jan 21 18:53:33 ovmsrv05 kernel: tg3 0000:07:04.1 eth1: Tigon3
[partno(011276-001) rev 9003] (PCIX:133MHz:64-bit) MAC address
00:1c:c4:46:ef:74
Jan 21 18:53:33 ovmsrv05 kernel: tg3 0000:07:04.1 eth1: attached PHY is
5714 (1000Base-SX Ethernet) (WireSpeed[0], EEE[0])
Jan 21 18:53:33 ovmsrv05 kernel: tg3 0000:07:04.1 eth1: RXcsums[1]
LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1]
Jan 21 18:53:33 ovmsrv05 kernel: tg3 0000:07:04.1 eth1:
dma_rwctrl[76148000] dma_mask[40-bit]
The two adapters are bonded in active-backup mode.
They are on a VLAN, so on the hypervisor I have a bond1.65 device, and in the
VM the virtual interface is untagged:
[root@ovmsrv05 ~]# ifconfig bond1.65
bond1.65: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:1c:c4:46:ef:73 txqueuelen 1000 (Ethernet)
RX packets 4368 bytes 257675 (251.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 238 bytes 28146 (27.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@ovmsrv05 ~]#
Currently Active Slave: enp7s4f0
After a few minutes I lose connectivity with the guest. In that case, if I
go into the guest console and ping the gateway, the connection is restored. I
can maintain it if I leave the ping running; otherwise, after a little while,
I lose connectivity again.
I suspect it is not important, but the guest is Oracle Linux 6.5 with the
3.8.13-16.2.1.el6uek.x86_64 kernel. The adapter type for the vNIC is the
default; the generated qemu-kvm command line contains this:
-netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3
I seem to remember that some years ago, when I used the same blades with plain
qemu-kvm/libvirt, I had to apply an ethtool setting for similar problems,
but I don't remember what it was... and possibly I was using the bnx2 kernel
module with the other embedded network interfaces, I'm not sure...
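In case it helps jog the memory, a hedged example of the kind of ethtool tweak often tried on tg3/bnx2 NICs with this symptom; the second slave name is a guess based on the PCI function numbering, and this is only a guess at what the old setting may have been:

# disable segmentation/receive offloads on both bond slaves
for nic in enp7s4f0 enp7s4f1; do
    ethtool -K "$nic" tso off gso off gro off
done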
Currently, the configuration for the adapter on the blade is:
[root@ovmsrv05 ~]# ethtool -k enp7s4f0
Features for enp7s4f0:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: on
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off [requested on]
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp6-segmentation: off [fixed]
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on [fixed]
tx-vlan-offload: on [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
tx-mpls-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
busy-poll: off [fixed]
tx-sctp-segmentation: off [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off [fixed]
[root@ovmsrv05 ~]#
systool shows no particular module parameters available for tg3:
[root@ovmsrv05 ~]# systool -v -m tg3
Module = "tg3"
Attributes:
coresize = "170653"
initsize = "0"
initstate = "live"
refcnt = "0"
rhelversion = "7.3"
srcversion = "D276F97F491ADECC61C8284"
taint = ""
uevent = <store method only>
version = "3.137"
Sections:
...
Thanks,
Gianluca
Configuring an ISO domain in RHEV 4.0
by paul.greene.va
I'm trying to get an ISO domain configured in a new RHEVM manager. I did
not configure it during the initial ovirt-engine-setup, and am manually
configuring it after the fact.
I created an NFS share in /var/lib/exports/iso and dropped a couple of
files in there from elsewhere, just for testing purposes.
On a client system, I can mount the NFS share, but it appears empty.
On the RHEV manager when I try to add an ISO domain, I get this error
message:
"Error while executing action Add Storage Connection: Permission
settings on the specified path do not allow access to the storage.
Verify permission settings on the specified storage path."
I suspect the SELinux permissions aren't set right. This is what they
are currently set to:
On the folder itself
[root@hostname iso]# ll -dZ .
drwxr-xr-x. root root system_u:object_r:nfs_t:s0
On the two files in the folder:
[root@hostname iso]# ll -Z
-rwxrw-rw-. root root unconfined_u:object_r:nfs_t:s0
sosreport-LogCollector-20170117092344.tar.xz
-rwxrw-rw-. root root unconfined_u:object_r:nfs_t:s0 test.txt
Is this correct? If not what should they be set to?
Thanks
Paul
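For comparison, a hedged sketch of the ownership and export settings oVirt normally expects for an NFS domain (vdsm runs as uid/gid 36 on the hosts, so the export has to be accessible to 36:36):

# on the NFS server
chown -R 36:36 /var/lib/exports/iso
chmod 0755 /var/lib/exports/iso
# typical /etc/exports entry for an oVirt domain:
#   /var/lib/exports/iso  *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)
exportfs -ra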
Size of vm memory still in MB
by Gianluca Cecchi
Hello,
When you create/modify a VM, the memory size still has to be specified in
megabytes...
Can we switch to GB, or change the input workflow so that there is a field
for the number itself and a drop-down list to choose between GB and MB
(and perhaps TB, in the future or even now)?
Thanks,
Gianluca
Shared disks
by Gianluca Cecchi
Is the documentation described here up to date for 4.0.6 and/or 4.1?
https://www.ovirt.org/develop/release-management/features/storage/sharedr...
Is live migration possible for a vm with shared disks?
The disks seem to support only raw... does this mean that if I edit a qcow2
disk and make it shared, it will be converted to raw?
Any particular hints on using the shared disks functionality to test RHCS
and Oracle RAC?
Thanks in advance
Gianluca
oVIRT 4.0.6 / Pools creation issue
by Devin Acosta
I'm running an oVirt 4.0.6 cluster and we are using purely NFS storage for
the ISO/EXPORT/DATA domains, backed by a Tintri NFS appliance. What I can't
figure out is why creating a Pool from a template fails. I have tried
exporting the template to the export domain, then deleting and re-importing
it, but it still fails with the engine.log entries below. It appears to give
an error while trying to create a snapshot from the template. We used to be
on gluster volumes but we migrated everything off gluster, and I don't see
any mention of gluster errors below, so I'm hoping someone might have an idea.
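(As a hedged aside before the log excerpt: since the failure below is "Cannot get parent volume", one way to sanity-check the re-imported template is to inspect its volume metadata on the NFS data domain; the paths below follow the usual vdsm file-domain layout and the UUIDs are taken from the CreateSnapshotVDSCommand lines further down.)

# on the SPM host: the template image directory on the NFS data domain
cd /rhev/data-center/mnt/*/f8d5f826-bd95-4229-9e87-9005b79f2448/images/1cd672e8-9f78-403e-9d8c-c61c2047b672
ls -l
# the volume metadata should point at an existing parent (PUUID) or the all-zero UUID
grep -E 'PUUID|VOLTYPE|FORMAT' 150f3171-e992-473e-a96d-aac599c9e556.meta
qemu-img info 150f3171-e992-473e-a96d-aac599c9e556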
2017-01-25 16:55:30,200 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(DefaultQuartzScheduler3) [6e36f82d] Setting new tasks map. The map
contains now 0 tasks
2017-01-25 16:55:30,200 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(DefaultQuartzScheduler3) [6e36f82d] Cleared all tasks of pool
'5824bb37-0258-026d-0107-00000000004d'.
2017-01-25 16:55:30,470 INFO
[org.ovirt.engine.core.bll.AddVmPoolWithVmsCommand] (default task-40)
[12f23e68] Lock Acquired to object
'EngineLock:{exclusiveLocks='[devins-pool=<VM_POOL_NAME,
ACTION_TYPE_FAILED_VM_POOL_IS_BEING_CREATED$VmPoolName devins-pool>]',
sharedLocks='null'}'
2017-01-25 16:55:30,551 INFO
[org.ovirt.engine.core.bll.AddVmPoolWithVmsCommand]
(org.ovirt.thread.pool-6-thread-18) [12f23e68] Running command:
AddVmPoolWithVmsCommand internal: false. Entities affected : ID:
5824bb37-00b7-031b-0035-0000000002b7 Type: ClusterAction group
CREATE_VM_POOL with role type USER, ID:
bca9dcd9-f00f-494d-8d5a-f2903eb8632a Type: VmTemplateAction group CREATE_VM
with role type USER
2017-01-25 16:55:30,552 INFO
[org.ovirt.engine.core.bll.AddVmPoolWithVmsCommand]
(org.ovirt.thread.pool-6-thread-18) [12f23e68] Lock freed to object
'EngineLock:{exclusiveLocks='[devins-pool=<VM_POOL_NAME,
ACTION_TYPE_FAILED_VM_POOL_IS_BEING_CREATED$VmPoolName devins-pool>]',
sharedLocks='null'}'
2017-01-25 16:55:30,679 INFO [org.ovirt.engine.core.bll.AddVmCommand]
(org.ovirt.thread.pool-6-thread-18) [424815a6] Lock Acquired to object
'EngineLock:{exclusiveLocks='[devins-pool-1=<VM_NAME,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='[bca9dcd9-f00f-494d-8d5a-f2903eb8632a=<TEMPLATE,
ACTION_TYPE_FAILED_TEMPLATE_IS_USED_FOR_CREATE_VM$VmName devins-pool-1>,
612217c0-87e0-41f6-b06d-edb99f177da8=<VM_POOL,
ACTION_TYPE_FAILED_VM_POOL_IS_USED_FOR_CREATE_VM$VmName devins-pool-1>,
1cd672e8-9f78-403e-9d8c-c61c2047b672=<DISK,
ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName devins-pool-1>]'}'
2017-01-25 16:55:30,726 INFO [org.ovirt.engine.core.bll.AddVmCommand]
(org.ovirt.thread.pool-6-thread-18) [] Running command: AddVmCommand
internal: true. Entities affected : ID:
5824bb37-00b7-031b-0035-0000000002b7 Type: ClusterAction group CREATE_VM
with role type USER, ID: bca9dcd9-f00f-494d-8d5a-f2903eb8632a Type:
VmTemplateAction group CREATE_VM with role type USER, ID:
f8d5f826-bd95-4229-9e87-9005b79f2448 Type: StorageAction group CREATE_DISK
with role type USER
2017-01-25 16:55:30,808 INFO
[org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
(org.ovirt.thread.pool-6-thread-18) [] START, SetVmStatusVDSCommand(
SetVmStatusVDSCommandParameters:{runAsync='true',
vmId='0d203249-b120-4d04-9b8e-21d874957592', status='ImageLocked',
exitStatus='Normal'}), log id: 7a39ac9f
2017-01-25 16:55:30,813 INFO
[org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
(org.ovirt.thread.pool-6-thread-18) [] FINISH, SetVmStatusVDSCommand, log
id: 7a39ac9f
2017-01-25 16:55:30,819 INFO
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
(org.ovirt.thread.pool-6-thread-18) [6a0aa969] Running command:
CreateSnapshotFromTemplateCommand internal: true. Entities affected : ID:
f8d5f826-bd95-4229-9e87-9005b79f2448 Type: Storage
2017-01-25 16:55:30,834 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-18) [6a0aa969] START,
CreateSnapshotVDSCommand(
CreateSnapshotVDSCommandParameters:{runAsync='true',
storagePoolId='5824bb37-0258-026d-0107-00000000004d',
ignoreFailoverLimit='false',
storageDomainId='f8d5f826-bd95-4229-9e87-9005b79f2448',
imageGroupId='3e53272f-180e-4517-9dac-75813951a56b',
imageSizeInBytes='75161927680', volumeFormat='COW',
newImageId='a9f169f8-6005-42f7-8437-4c6271db016d', newImageDescription='',
imageInitialSizeInBytes='0',
imageId='150f3171-e992-473e-a96d-aac599c9e556',
sourceImageGroupId='1cd672e8-9f78-403e-9d8c-c61c2047b672'}), log id:
2f0c9075
2017-01-25 16:55:30,835 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-18) [6a0aa969] -- executeIrsBrokerCommand:
calling 'createVolume' with two new parameters: description and UUID
2017-01-25 16:55:31,885 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-18) [6a0aa969] FINISH,
CreateSnapshotVDSCommand, return: a9f169f8-6005-42f7-8437-4c6271db016d, log
id: 2f0c9075
2017-01-25 16:55:31,889 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-18) [6a0aa969] CommandAsyncTask::Adding
CommandMultiAsyncTasks object for command
'f4a67fb3-e3ca-4fe1-ab8e-8240bd85f260'
2017-01-25 16:55:31,889 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(org.ovirt.thread.pool-6-thread-18) [6a0aa969]
CommandMultiAsyncTasks::attachTask: Attaching task
'a22c4e91-dac8-40e4-ba47-42807173ea91' to command
'f4a67fb3-e3ca-4fe1-ab8e-8240bd85f260'.
2017-01-25 16:55:31,901 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(org.ovirt.thread.pool-6-thread-18) [6a0aa969] Adding task
'a22c4e91-dac8-40e4-ba47-42807173ea91' (Parent Command
'CreateSnapshotFromTemplate', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling
hasn't started yet..
2017-01-25 16:55:32,178 INFO
[org.ovirt.engine.core.bll.AddGraphicsDeviceCommand]
(org.ovirt.thread.pool-6-thread-18) [25fb544d] Running command:
AddGraphicsDeviceCommand internal: true. Entities affected : ID:
0d203249-b120-4d04-9b8e-21d874957592 Type: VMAction group
EDIT_VM_PROPERTIES with role type USER
2017-01-25 16:55:32,237 INFO
[org.ovirt.engine.core.bll.AddVmToPoolCommand]
(org.ovirt.thread.pool-6-thread-18) [f2d0c64] Running command:
AddVmToPoolCommand internal: true. Entities affected : ID:
612217c0-87e0-41f6-b06d-edb99f177da8 Type: VmPool
2017-01-25 16:55:32,249 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-18) [f2d0c64] Correlation ID: 424815a6, Job
ID: c6b9ef03-c542-45be-b659-8946450b79c3, Call Stack: null, Custom Event
ID: -1, Message: VM devins-pool-1 creation was initiated by
devin.acosta(a)lxi.domain.com-authz.
2017-01-25 16:55:32,249 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-6-thread-18) [f2d0c64]
BaseAsyncTask::startPollingTask: Starting to poll task
'a22c4e91-dac8-40e4-ba47-42807173ea91'.
2017-01-25 16:55:32,368 INFO [org.ovirt.engine.core.bll.AddVmCommand]
(org.ovirt.thread.pool-6-thread-18) [6504b822] Lock Acquired to object
'EngineLock:{exclusiveLocks='[devins-pool-2=<VM_NAME,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='[bca9dcd9-f00f-494d-8d5a-f2903eb8632a=<TEMPLATE,
ACTION_TYPE_FAILED_TEMPLATE_IS_USED_FOR_CREATE_VM$VmName devins-pool-2>,
612217c0-87e0-41f6-b06d-edb99f177da8=<VM_POOL,
ACTION_TYPE_FAILED_VM_POOL_IS_USED_FOR_CREATE_VM$VmName devins-pool-2>,
1cd672e8-9f78-403e-9d8c-c61c2047b672=<DISK,
ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName devins-pool-2>]'}'
2017-01-25 16:55:32,415 INFO [org.ovirt.engine.core.bll.AddVmCommand]
(org.ovirt.thread.pool-6-thread-18) [] Running command: AddVmCommand
internal: true. Entities affected : ID:
5824bb37-00b7-031b-0035-0000000002b7 Type: ClusterAction group CREATE_VM
with role type USER, ID: bca9dcd9-f00f-494d-8d5a-f2903eb8632a Type:
VmTemplateAction group CREATE_VM with role type USER, ID:
f8d5f826-bd95-4229-9e87-9005b79f2448 Type: StorageAction group CREATE_DISK
with role type USER
2017-01-25 16:55:32,459 INFO
[org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
(org.ovirt.thread.pool-6-thread-18) [] START, SetVmStatusVDSCommand(
SetVmStatusVDSCommandParameters:{runAsync='true',
vmId='4631557a-4aa7-48da-bd0b-e3ba8191cf61', status='ImageLocked',
exitStatus='Normal'}), log id: 68fb2cd8
2017-01-25 16:55:32,463 INFO
[org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
(org.ovirt.thread.pool-6-thread-18) [] FINISH, SetVmStatusVDSCommand, log
id: 68fb2cd8
2017-01-25 16:55:32,470 INFO
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
(org.ovirt.thread.pool-6-thread-18) [57bd66ee] Running command:
CreateSnapshotFromTemplateCommand internal: true. Entities affected : ID:
f8d5f826-bd95-4229-9e87-9005b79f2448 Type: Storage
2017-01-25 16:55:32,482 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-18) [57bd66ee] START,
CreateSnapshotVDSCommand(
CreateSnapshotVDSCommandParameters:{runAsync='true',
storagePoolId='5824bb37-0258-026d-0107-00000000004d',
ignoreFailoverLimit='false',
storageDomainId='f8d5f826-bd95-4229-9e87-9005b79f2448',
imageGroupId='b604b900-83f9-47f8-bc7f-d2267095586f',
imageSizeInBytes='75161927680', volumeFormat='COW',
newImageId='c1f0485d-b2b9-4931-8eb1-6fc5b5f0a106', newImageDescription='',
imageInitialSizeInBytes='0',
imageId='150f3171-e992-473e-a96d-aac599c9e556',
sourceImageGroupId='1cd672e8-9f78-403e-9d8c-c61c2047b672'}), log id:
46e84547
2017-01-25 16:55:32,482 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-18) [57bd66ee] -- executeIrsBrokerCommand:
calling 'createVolume' with two new parameters: description and UUID
2017-01-25 16:55:32,591 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-18) [57bd66ee] FINISH,
CreateSnapshotVDSCommand, return: c1f0485d-b2b9-4931-8eb1-6fc5b5f0a106, log
id: 46e84547
2017-01-25 16:55:32,595 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-18) [57bd66ee] CommandAsyncTask::Adding
CommandMultiAsyncTasks object for command
'74f8e242-92fe-4f3a-bf4a-a63b482f89c4'
2017-01-25 16:55:32,595 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(org.ovirt.thread.pool-6-thread-18) [57bd66ee]
CommandMultiAsyncTasks::attachTask: Attaching task
'dbad59d1-711b-4807-af30-250ff7cfd1cd' to command
'74f8e242-92fe-4f3a-bf4a-a63b482f89c4'.
2017-01-25 16:55:32,605 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(org.ovirt.thread.pool-6-thread-18) [57bd66ee] Adding task
'dbad59d1-711b-4807-af30-250ff7cfd1cd' (Parent Command
'CreateSnapshotFromTemplate', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling
hasn't started yet..
2017-01-25 16:55:32,694 INFO
[org.ovirt.engine.core.bll.AddGraphicsDeviceCommand]
(org.ovirt.thread.pool-6-thread-18) [5266def0] Running command:
AddGraphicsDeviceCommand internal: true. Entities affected : ID:
4631557a-4aa7-48da-bd0b-e3ba8191cf61 Type: VMAction group
EDIT_VM_PROPERTIES with role type USER
2017-01-25 16:55:32,703 INFO
[org.ovirt.engine.core.bll.AddVmToPoolCommand]
(org.ovirt.thread.pool-6-thread-18) [15776f98] Running command:
AddVmToPoolCommand internal: true. Entities affected : ID:
612217c0-87e0-41f6-b06d-edb99f177da8 Type: VmPool
2017-01-25 16:55:32,716 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-18) [15776f98] Correlation ID: 6504b822,
Job ID: c6b9ef03-c542-45be-b659-8946450b79c3, Call Stack: null, Custom
Event ID: -1, Message: VM devins-pool-2 creation was initiated by
devin.acosta(a)lxi.domain.com-authz.
2017-01-25 16:55:32,716 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-6-thread-18) [15776f98]
BaseAsyncTask::startPollingTask: Starting to poll task
'dbad59d1-711b-4807-af30-250ff7cfd1cd'.
2017-01-25 16:55:32,731 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-18) [15776f98] Correlation ID: 12f23e68,
Job ID: c6b9ef03-c542-45be-b659-8946450b79c3, Call Stack: null, Custom
Event ID: -1, Message: VM Pool devins-pool (containing 2 VMs) was created
by devin.acosta(a)lxi.domain.com-authz.
2017-01-25 16:55:32,896 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler7) [424815a6] Command 'AddVmPoolWithVms' (id:
'36264463-029e-4457-85c4-1c73f6cda00d') waiting on child command id:
'fbe6e9e5-aabf-48d8-a9c0-266b2f420eb3' type:'AddVm' to complete
2017-01-25 16:55:32,921 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler7) [57bd66ee] Command 'AddVm' (id:
'10a5e5ba-6024-4e5b-a2be-34f107829e80') waiting on child command id:
'74f8e242-92fe-4f3a-bf4a-a63b482f89c4' type:'CreateSnapshotFromTemplate' to
complete
2017-01-25 16:55:32,941 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler7) [6a0aa969] Command 'AddVm' (id:
'fbe6e9e5-aabf-48d8-a9c0-266b2f420eb3') waiting on child command id:
'f4a67fb3-e3ca-4fe1-ab8e-8240bd85f260' type:'CreateSnapshotFromTemplate' to
complete
2017-01-25 16:55:34,982 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler6) [57bd66ee] Command 'AddVm' (id:
'10a5e5ba-6024-4e5b-a2be-34f107829e80') waiting on child command id:
'74f8e242-92fe-4f3a-bf4a-a63b482f89c4' type:'CreateSnapshotFromTemplate' to
complete
2017-01-25 16:55:37,058 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler10) [424815a6] Command 'AddVmPoolWithVms' (id:
'36264463-029e-4457-85c4-1c73f6cda00d') waiting on child command id:
'fbe6e9e5-aabf-48d8-a9c0-266b2f420eb3' type:'AddVm' to complete
2017-01-25 16:55:37,086 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler10) [6a0aa969] Command 'AddVm' (id:
'fbe6e9e5-aabf-48d8-a9c0-266b2f420eb3') waiting on child command id:
'f4a67fb3-e3ca-4fe1-ab8e-8240bd85f260' type:'CreateSnapshotFromTemplate' to
complete
2017-01-25 16:55:37,644 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(DefaultQuartzScheduler5) [457a14b4] Polling and updating Async Tasks: 2
tasks, 2 tasks to poll now
2017-01-25 16:55:38,584 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler5) [457a14b4] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VDSM dev01-002-001 command failed:
Cannot get parent volume
2017-01-25 16:55:38,588 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler5) [457a14b4] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VDSM dev01-002-001 command failed:
Cannot get parent volume
2017-01-25 16:55:38,588 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler5)
[457a14b4] SPMAsyncTask::PollTask: Polling task
'a22c4e91-dac8-40e4-ba47-42807173ea91' (Parent Command
'CreateSnapshotFromTemplate', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned
status 'finished', result 'cleanSuccess'.
2017-01-25 16:55:38,590 ERROR
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler5)
[457a14b4] BaseAsyncTask::logEndTaskFailure: Task
'a22c4e91-dac8-40e4-ba47-42807173ea91' (Parent Command
'CreateSnapshotFromTemplate', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with
failure:
-- Result: 'cleanSuccess'
-- Message: 'VDSGenericException: VDSErrorException: Failed in vdscommand
to HSMGetAllTasksStatusesVDS, error = Cannot get parent volume',
-- Exception: 'VDSGenericException: VDSErrorException: Failed in vdscommand
to HSMGetAllTasksStatusesVDS, error = Cannot get parent volume'
2017-01-25 16:55:38,592 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(DefaultQuartzScheduler5) [457a14b4]
CommandAsyncTask::endActionIfNecessary: All tasks of command
'f4a67fb3-e3ca-4fe1-ab8e-8240bd85f260' has ended -> executing 'endAction'
2017-01-25 16:55:38,592 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(DefaultQuartzScheduler5) [457a14b4] CommandAsyncTask::endAction: Ending
action for '1' tasks (command ID: 'f4a67fb3-e3ca-4fe1-ab8e-8240bd85f260'):
calling endAction '.
2017-01-25 16:55:38,592 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler5)
[457a14b4] SPMAsyncTask::PollTask: Polling task
'dbad59d1-711b-4807-af30-250ff7cfd1cd' (Parent Command
'CreateSnapshotFromTemplate', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned
status 'finished', result 'cleanSuccess'.
2017-01-25 16:55:38,592 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-3) [457a14b4]
CommandAsyncTask::endCommandAction [within thread] context: Attempting to
endAction 'CreateSnapshotFromTemplate', executionIndex: '0'
2017-01-25 16:55:38,594 ERROR
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler5)
[457a14b4] BaseAsyncTask::logEndTaskFailure: Task
'dbad59d1-711b-4807-af30-250ff7cfd1cd' (Parent Command
'CreateSnapshotFromTemplate', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with
failure:
-- Result: 'cleanSuccess'
-- Message: 'VDSGenericException: VDSErrorException: Failed in vdscommand
to HSMGetAllTasksStatusesVDS, error = Cannot get parent volume',
-- Exception: 'VDSGenericException: VDSErrorException: Failed in vdscommand
to HSMGetAllTasksStatusesVDS, error = Cannot get parent volume'
2017-01-25 16:55:38,597 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(DefaultQuartzScheduler5) [457a14b4]
CommandAsyncTask::endActionIfNecessary: All tasks of command
'74f8e242-92fe-4f3a-bf4a-a63b482f89c4' has ended -> executing 'endAction'
2017-01-25 16:55:38,597 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(DefaultQuartzScheduler5) [457a14b4] CommandAsyncTask::endAction: Ending
action for '1' tasks (command ID: '74f8e242-92fe-4f3a-bf4a-a63b482f89c4'):
calling endAction '.
2017-01-25 16:55:38,597 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-39) [457a14b4]
CommandAsyncTask::endCommandAction [within thread] context: Attempting to
endAction 'CreateSnapshotFromTemplate', executionIndex: '0'
2017-01-25 16:55:38,600 INFO
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
(org.ovirt.thread.pool-6-thread-3) [6a0aa969] Command
[id=f4a67fb3-e3ca-4fe1-ab8e-8240bd85f260]: Updating status to 'FAILED', The
command end method logic will be executed by one of its parent commands.
2017-01-25 16:55:38,600 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-3) [6a0aa969]
CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
action type 'CreateSnapshotFromTemplate' completed, handling the result.
2017-01-25 16:55:38,601 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-3) [6a0aa969]
CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
action type 'CreateSnapshotFromTemplate' succeeded, clearing tasks.
2017-01-25 16:55:38,601 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-6-thread-3) [6a0aa969] SPMAsyncTask::ClearAsyncTask:
Attempting to clear task 'a22c4e91-dac8-40e4-ba47-42807173ea91'
2017-01-25 16:55:38,601 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(org.ovirt.thread.pool-6-thread-3) [6a0aa969] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{runAsync='true',
storagePoolId='5824bb37-0258-026d-0107-00000000004d',
ignoreFailoverLimit='false',
taskId='a22c4e91-dac8-40e4-ba47-42807173ea91'}), log id: 4cd9d13e
2017-01-25 16:55:38,602 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(org.ovirt.thread.pool-6-thread-3) [6a0aa969] START,
HSMClearTaskVDSCommand(HostName = dev01-002-001,
HSMTaskGuidBaseVDSCommandParameters:{runAsync='true',
hostId='fb1dbd7b-c059-44f6-8da8-e1db4540d91c',
taskId='a22c4e91-dac8-40e4-ba47-42807173ea91'}), log id: 61f4ae2a
2017-01-25 16:55:38,604 INFO
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
(org.ovirt.thread.pool-6-thread-39) [57bd66ee] Command
[id=74f8e242-92fe-4f3a-bf4a-a63b482f89c4]: Updating status to 'FAILED', The
command end method logic will be executed by one of its parent commands.
2017-01-25 16:55:38,604 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-39) [57bd66ee]
CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
action type 'CreateSnapshotFromTemplate' completed, handling the result.
2017-01-25 16:55:38,604 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-39) [57bd66ee]
CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
action type 'CreateSnapshotFromTemplate' succeeded, clearing tasks.
2017-01-25 16:55:38,604 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-6-thread-39) [57bd66ee]
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'dbad59d1-711b-4807-af30-250ff7cfd1cd'
2017-01-25 16:55:38,605 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(org.ovirt.thread.pool-6-thread-39) [57bd66ee] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{runAsync='true',
storagePoolId='5824bb37-0258-026d-0107-00000000004d',
ignoreFailoverLimit='false',
taskId='dbad59d1-711b-4807-af30-250ff7cfd1cd'}), log id: 2d04146e
2017-01-25 16:55:38,619 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(org.ovirt.thread.pool-6-thread-3) [6a0aa969] FINISH,
HSMClearTaskVDSCommand, log id: 61f4ae2a
2017-01-25 16:55:38,619 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(org.ovirt.thread.pool-6-thread-3) [6a0aa969] FINISH,
SPMClearTaskVDSCommand, log id: 4cd9d13e
2017-01-25 16:55:38,620 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(org.ovirt.thread.pool-6-thread-39) [57bd66ee] START,
HSMClearTaskVDSCommand(HostName = dev01-002-001,
HSMTaskGuidBaseVDSCommandParameters:{runAsync='true',
hostId='fb1dbd7b-c059-44f6-8da8-e1db4540d91c',
taskId='dbad59d1-711b-4807-af30-250ff7cfd1cd'}), log id: 71c703c8
2017-01-25 16:55:38,622 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-6-thread-3) [6a0aa969]
BaseAsyncTask::removeTaskFromDB: Removed task
'a22c4e91-dac8-40e4-ba47-42807173ea91' from DataBase
2017-01-25 16:55:38,622 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-3) [6a0aa969]
CommandAsyncTask::HandleEndActionResult [within thread]: Removing
CommandMultiAsyncTasks object for entity
'f4a67fb3-e3ca-4fe1-ab8e-8240bd85f260'
2017-01-25 16:55:39,130 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler4) [57bd66ee] Command 'AddVm' id:
'10a5e5ba-6024-4e5b-a2be-34f107829e80' child commands
'[74f8e242-92fe-4f3a-bf4a-a63b482f89c4]' executions were completed, status
'FAILED'
2017-01-25 16:55:39,438 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(org.ovirt.thread.pool-6-thread-39) [57bd66ee] FINISH,
HSMClearTaskVDSCommand, log id: 71c703c8
2017-01-25 16:55:39,438 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(org.ovirt.thread.pool-6-thread-39) [57bd66ee] FINISH,
SPMClearTaskVDSCommand, log id: 2d04146e
2017-01-25 16:55:39,441 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-6-thread-39) [57bd66ee]
BaseAsyncTask::removeTaskFromDB: Removed task
'dbad59d1-711b-4807-af30-250ff7cfd1cd' from DataBase
2017-01-25 16:55:39,442 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-39) [57bd66ee]
CommandAsyncTask::HandleEndActionResult [within thread]: Removing
CommandMultiAsyncTasks object for entity
'74f8e242-92fe-4f3a-bf4a-a63b482f89c4'
2017-01-25 16:55:40,179 ERROR [org.ovirt.engine.core.bll.AddVmCommand]
(DefaultQuartzScheduler9) [57bd66ee] Ending command
'org.ovirt.engine.core.bll.AddVmCommand' with failure.
2017-01-25 16:55:40,185 ERROR
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
(DefaultQuartzScheduler9) [57bd66ee] Ending command
'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand'
with failure.
2017-01-25 16:55:40,308 INFO [org.ovirt.engine.core.bll.AddVmCommand]
(DefaultQuartzScheduler9) [] Lock freed to object
'EngineLock:{exclusiveLocks='[devins-pool-2=<VM_NAME,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='[bca9dcd9-f00f-494d-8d5a-f2903eb8632a=<TEMPLATE,
ACTION_TYPE_FAILED_TEMPLATE_IS_USED_FOR_CREATE_VM$VmName devins-pool-2>,
612217c0-87e0-41f6-b06d-edb99f177da8=<VM_POOL,
ACTION_TYPE_FAILED_VM_POOL_IS_USED_FOR_CREATE_VM$VmName devins-pool-2>,
1cd672e8-9f78-403e-9d8c-c61c2047b672=<DISK,
ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName devins-pool-2>]'}'
2017-01-25 16:55:40,319 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler9) [] Correlation ID: 6504b822, Job ID:
c6b9ef03-c542-45be-b659-8946450b79c3, Call Stack: null, Custom Event ID:
-1, Message: Failed to complete VM devins-pool-2 creation.
2017-01-25 16:55:41,390 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler10) [424815a6] Command 'AddVmPoolWithVms' (id:
'36264463-029e-4457-85c4-1c73f6cda00d') waiting on child command id:
'fbe6e9e5-aabf-48d8-a9c0-266b2f420eb3' type:'AddVm' to complete
2017-01-25 16:55:43,461 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler6) [424815a6] Command 'AddVmPoolWithVms' (id:
'36264463-029e-4457-85c4-1c73f6cda00d') waiting on child command id:
'fbe6e9e5-aabf-48d8-a9c0-266b2f420eb3' type:'AddVm' to complete
2017-01-25 16:55:45,503 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler2) [6a0aa969] Command 'AddVm' id:
'fbe6e9e5-aabf-48d8-a9c0-266b2f420eb3' child commands
'[f4a67fb3-e3ca-4fe1-ab8e-8240bd85f260]' executions were completed, status
'FAILED'
2017-01-25 16:55:46,551 ERROR [org.ovirt.engine.core.bll.AddVmCommand]
(DefaultQuartzScheduler9) [6a0aa969] Ending command
'org.ovirt.engine.core.bll.AddVmCommand' with failure.
2017-01-25 16:55:46,554 ERROR
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
(DefaultQuartzScheduler9) [6a0aa969] Ending command
'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand'
with failure.
2017-01-25 16:55:46,643 INFO [org.ovirt.engine.core.bll.AddVmCommand]
(DefaultQuartzScheduler9) [] Lock freed to object
'EngineLock:{exclusiveLocks='[devins-pool-1=<VM_NAME,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='[bca9dcd9-f00f-494d-8d5a-f2903eb8632a=<TEMPLATE,
ACTION_TYPE_FAILED_TEMPLATE_IS_USED_FOR_CREATE_VM$VmName devins-pool-1>,
612217c0-87e0-41f6-b06d-edb99f177da8=<VM_POOL,
ACTION_TYPE_FAILED_VM_POOL_IS_USED_FOR_CREATE_VM$VmName devins-pool-1>,
1cd672e8-9f78-403e-9d8c-c61c2047b672=<DISK,
ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName devins-pool-1>]'}'
2017-01-25 16:55:46,654 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler9) [] Correlation ID: 424815a6, Job ID:
c6b9ef03-c542-45be-b659-8946450b79c3, Call Stack: null, Custom Event ID:
-1, Message: Failed to complete VM devins-pool-1 creation.
2017-01-25 16:55:47,781 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler2) [6504b822] Command 'AddVmPoolWithVms' id:
'36264463-029e-4457-85c4-1c73f6cda00d' child commands
'[fbe6e9e5-aabf-48d8-a9c0-266b2f420eb3,
10a5e5ba-6024-4e5b-a2be-34f107829e80]' executions were completed, status
'FAILED'
2017-01-25 16:55:48,881 ERROR
[org.ovirt.engine.core.bll.AddVmPoolWithVmsCommand]
(DefaultQuartzScheduler4) [6504b822] Ending command
'org.ovirt.engine.core.bll.AddVmPoolWithVmsCommand' with failure.
2017-01-25 16:55:48,888 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler4) [6504b822] Correlation ID: 12f23e68, Job ID:
c6b9ef03-c542-45be-b659-8946450b79c3, Call Stack: null, Custom Event ID:
-1, Message: VM Pool devins-pool (containing 2 VMs) was created by
devin.acosta(a)lxi.domain.com-authz.
2017-01-25 16:56:02,640 INFO
[org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default task-58)
[] User devin.acosta(a)lxi.domain.com successfully logged out
2017-01-25 16:56:02,653 INFO
[org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] (default
task-61) [343a5bd4] Running command: TerminateSessionsForTokenCommand
internal: true.
2017-01-25 16:56:42,271 INFO
[org.ovirt.engine.core.bll.RemoveVmPoolCommand] (default task-25)
[2d22ba57] Lock Acquired to object
'EngineLock:{exclusiveLocks='[612217c0-87e0-41f6-b06d-edb99f177da8=<VM_POOL,
ACTION_TYPE_FAILED_VM_POOL_IS_BEING_REMOVED$VmPoolName devins-pool>]',
sharedLocks='null'}'
2017-01-25 16:56:42,296 INFO
[org.ovirt.engine.core.bll.RemoveVmPoolCommand]
(org.ovirt.thread.pool-6-thread-5) [2d22ba57] Running command:
RemoveVmPoolCommand internal: false. Entities affected : ID:
612217c0-87e0-41f6-b06d-edb99f177da8 Type: VmPoolAction group
DELETE_VM_POOL with role type USER
2017-01-25 16:56:42,301 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-5) [2d22ba57] Correlation ID: 2d22ba57, Job
ID: d1b36387-a69d-425a-88ab-fc302ac20208, Call Stack: null, Custom Event
ID: -1, Message: VM Pool devins-pool removal was initiated by
devin.acosta(a)lxi.domain.com-authz.
2017-01-25 16:56:42,307 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-5) [2d22ba57] Correlation ID: 2d22ba57, Job
ID: d1b36387-a69d-425a-88ab-fc302ac20208, Call Stack: null, Custom Event
ID: -1, Message: VM Pool devins-pool removal was initiated by
devin.acosta(a)lxi.domain.com-authz.
2017-01-25 16:56:42,313 INFO
[org.ovirt.engine.core.bll.RemoveVmPoolCommand]
(org.ovirt.thread.pool-6-thread-5) [2d22ba57] Lock freed to object
'EngineLock:{exclusiveLocks='[612217c0-87e0-41f6-b06d-edb99f177da8=<VM_POOL,
ACTION_TYPE_FAILED_VM_POOL_IS_BEING_REMOVED$VmPoolName devins-pool>]',
sharedLocks='null'}'
2017-01-25 16:56:42,918 INFO
[org.ovirt.engine.core.bll.RemoveVmPoolCommandCallback]
(DefaultQuartzScheduler9) [] Command 'RemoveVmPool' id:
'00ba7c5d-a30d-4ab9-977e-14c139305461' child commands '[]' executions were
completed, status 'SUCCEEDED'
--
Devin Acosta
Red Hat Certified Architect, LinuxStack
devin(a)linuxguru.co