Why not bond0
by 肖力
Hi, for NIC bonding, why not choose bond0?
Multiple NICs on hosted engine?
by Chris Adams
I have installed the first node of a new oVirt 3.5 setup with a hosted
engine VM. I have multiple networks: one publicly accessible and one
private (with storage, iDRAC/IPMI, etc.). I set the engine VM up on the
public LAN, but now realize that it can't access the power control. I
tried to add a second NIC to the engine VM through the web interface,
but of course that doesn't work (because it isn't really managed there).
How can I add a second NIC to the hosted engine VM?
--
Chris Adams <cma(a)cmadams.net>
win 2008 kvm import
by Grant Pasley
Hi all,
I am going to migrate a production Windows 2008 image that is currently
running on KVM over to oVirt 3.5.2 this weekend. Are there any tips or
dos and don'ts I should be aware of before doing this?
Should I install the Windows guest tools before moving over, or once the
move is done? And the virtio drivers that normally have to be installed
when creating a new Windows VM so it can see its disks: how is this
handled with an already created KVM image?
--
Thanks and regards,
Grant Pasley
ovirt 3.5 vm startup
by Grant Pasley
Hi all,
I have oVirt 3.5.2 all-in-one running on CentOS 6.6 and have noticed that
when I reboot the physical host server the VMs do not start up again
automatically; I have to start them manually.
I have worked with KVM and VM Manager to date and know where to set
autostart there, but I cannot seem to find this option in oVirt.
--
Thanks and regards,
Grant Pasley
Live Merge vanished after upgrade to 3.5.2.1
by Soeren Malchow
Dear all,
In our environment the "Live Merge" capability is gone after the upgrade
to oVirt 3.5.2.1.
It was working before, and we had our backup relying on this.
Any idea what happened?
Environment:
Hosted Engine on CentOS 6.6 with oVirt 3.5.2.1
Compute hosts on Fedora 20 with vdsm 4.16.14 and libvirt 1.2.9.1 from the
libvirt-preview repo (for live merge)
Storage -> CentOS 7.1 with gluster 3.6.3
Cheers
Soeren
Expanding the OS disk on Windows 2003 crashes the VM
by Punit Dambiwal
Hi,
I have created a VM with Windows 2003 Standard R2 with a 3 GB disk. After
installation I installed all the drivers, shut down the VM, and expanded
the disk. After expanding the disk, the VM cannot come up.
Steps to reproduce the problem:
1. Create the Windows 2003 VM.
2. The disk is IDE.
3. Install it via CD-ROM.
4. Once it is installed successfully, install the virtio drivers.
5. Shut down the VM.
6. Edit the VM disk and increase the disk space.
7. Run the VM.
8. The VM cannot boot up.
[image: Inline image 1]
Thanks,
Punit
oVirt Instability with Dell Compellent via iSCSI/Multipath
by Chris Jones - BookIt.com Systems Administrator
Engine: oVirt Engine Version: 3.5.2-1.el7.centos
Nodes: oVirt Node - 3.5 - 0.999.201504280931.el7.centos
Remote storage: Dell Compellent SC8000
Storage setup: 2 NICs connected to the Compellent. Several domains
backed by LUNs. Several VM disks using direct LUN.
Networking: Dell 10 Gb/s switches
I've been struggling with oVirt completely falling apart due to storage
related issues. By falling apart I mean most or all of the nodes
suddenly losing contact with the storage domains. This results in an
endless loop of the VMs on the failed nodes trying to be migrated and
remigrated as the nodes flap between responsive and unresponsive. During
these times, engine.log looks like this.
2015-05-19 03:09:42,443 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-50) domain
c46adffc-614a-4fa2-9d2d-954f174f6a39:db_binlog_1 in problem. vds:
blade6c1.ism.ld
2015-05-19 03:09:42,560 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-38) domain
0b1d36e4-7992-43c7-8ac0-740f7c2cadb7:ovirttest1 in problem. vds:
blade2c1.ism.ld
2015-05-19 03:09:45,497 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-24) domain
05c8fa9c-fcbf-4a17-a3c6-011696a1b9a2:ovirttest2 in problem. vds:
blade3c2.ism.ld
2015-05-19 03:09:51,713 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-46) domain
b050c455-5ab1-4107-b055-bfcc811195fc:os_data_1 in problem. vds:
blade4c2.ism.ld
2015-05-19 03:09:57,647 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-13) Domain
c46adffc-614a-4fa2-9d2d-954f174f6a39:db_binlog_1 recovered from problem.
vds: blade6c1.ism.ld
2015-05-19 03:09:57,782 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-6) domain
26929b89-d1ca-4718-90d6-b3a6da585451:generic_data_1 in problem. vds:
blade2c1.ism.ld
2015-05-19 03:09:57,783 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-6) Domain
0b1d36e4-7992-43c7-8ac0-740f7c2cadb7:ovirttest1 recovered from problem.
vds: blade2c1.ism.ld
2015-05-19 03:10:00,639 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-31) Domain
c46adffc-614a-4fa2-9d2d-954f174f6a39:db_binlog_1 recovered from problem.
vds: blade4c1.ism.ld
2015-05-19 03:10:00,703 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-17) domain
64101f40-0f10-471d-9f5f-44591f9e087d:logging_1 in problem. vds:
blade1c1.ism.ld
2015-05-19 03:10:00,712 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-4) Domain
05c8fa9c-fcbf-4a17-a3c6-011696a1b9a2:ovirttest2 recovered from problem.
vds: blade3c2.ism.ld
2015-05-19 03:10:06,931 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-48) Domain
05c8fa9c-fcbf-4a17-a3c6-011696a1b9a2:ovirttest2 recovered from problem.
vds: blade4c2.ism.ld
2015-05-19 03:10:06,931 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-48) Domain
05c8fa9c-fcbf-4a17-a3c6-011696a1b9a2:ovirttest2 has recovered from
problem. No active host in the DC is reporting it as problematic, so
clearing the domain recovery timer.
2015-05-19 03:10:06,932 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-48) Domain
b050c455-5ab1-4107-b055-bfcc811195fc:os_data_1 recovered from problem.
vds: blade4c2.ism.ld
2015-05-19 03:10:06,933 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-48) Domain
b050c455-5ab1-4107-b055-bfcc811195fc:os_data_1 has recovered from
problem. No active host in the DC is reporting it as problematic, so
clearing the domain recovery timer.
2015-05-19 03:10:09,929 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-16) domain
b050c455-5ab1-4107-b055-bfcc811195fc:os_data_1 in problem. vds:
blade3c1.ism.ld
My troubleshooting steps so far:
1. Tailing engine.log for "in problem" and "recovered from problem" (see the snippet after this list).
2. Shutting down all the VMs.
3. Shutting down all but one node.
4. Bringing up one node at a time to see what the log reports.
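For step 1, here is a minimal sketch of how this can be watched on the engine host. It assumes the default engine log location (/var/log/ovirt-engine/engine.log); adjust the path if your install logs elsewhere.
    # Follow engine.log and print only the storage-domain state-change messages
    tail -F /var/log/ovirt-engine/engine.log | grep -E 'in problem|recovered from problem'
    # Rough count of how many "in problem" events have been logged so far
    grep -c 'in problem' /var/log/ovirt-engine/engine.log
One possible next step is correlating the timestamps of these messages with the iSCSI and multipath logs on the affected hosts; the commands above are only illustrative.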
When only one node is active everything is fine. When a second node
comes up, I begin to see the log output as shown above. I've been
struggling with this for over a month. I'm sure others have used oVirt
with a Compellent and encountered (and worked around) similar problems.
I'm looking for some help in figuring out if it's oVirt or something
that I'm doing wrong.
We're close to giving up on oVirt completely because of this.
P.S.
I've tested via bare metal and Proxmox with the Compellent. Not at the
same scale but it seems to work fine there.
snapshot deleting
by paf1@email.cz
Hello,
My question is about deleting the default snapshots generated when moving
a disk between storage domains.
The snapshot called "Auto-generated for Live Storage Migration" is NOT
automatically deleted after the move is done.
Is this the default behaviour, or is it a bug?
Regards,
Pavel
3.5.2 live merge
by Patrick Russell
Hi everyone,
We're not seeing live merge working as of the 3.5.2 update. We've tested
using fibre channel and NFS attached storage. Both are throwing the same
error code. Are other people seeing success with live merge after the
update?
Here's the environment:
Engine running on CentOS 6 x64, updated to 3.5.2 via yum update
(standalone physical box, dual socket hex core + hyperthreading, 16 GB
memory)
# rpm -qa | grep ovirt
ovirt-engine-cli-3.5.0.5-1.el6.noarch
ovirt-engine-3.5.1.1-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.5.2-1.el6.noarch
ovirt-engine-setup-plugin-allinone-3.5.2-1.el6.noarch
ovirt-engine-setup-3.5.2-1.el6.noarch
ovirt-guest-tools-3.5.0-0.5.master.noarch
ovirt-host-deploy-1.3.1-1.el6.noarch
ovirt-engine-sdk-python-3.5.2.1-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.5.2-1.el6.noarch
ovirt-engine-backend-3.5.1.1-1.el6.noarch
ovirt-engine-userportal-3.5.1.1-1.el6.noarch
ovirt-engine-dbscripts-3.5.1.1-1.el6.noarch
ovirt-engine-tools-3.5.1.1-1.el6.noarch
ovirt-host-deploy-offline-1.3.1-1.el6.x86_64
ovirt-engine-setup-plugin-websocket-proxy-3.5.2-1.el6.noarch
ovirt-engine-websocket-proxy-3.5.2-1.el6.noarch
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-engine-extensions-api-impl-3.5.2-1.el6.noarch
ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
ovirt-engine-webadmin-portal-3.5.1.1-1.el6.noarch
ovirt-engine-restapi-3.5.1.1-1.el6.noarch
ovirt-guest-tools-iso-3.5-7.noarch
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-engine-lib-3.5.2-1.el6.noarch
ovirt-engine-setup-base-3.5.2-1.el6.noarch
ovirt-release35-003-1.noarch
ovirt-host-deploy-java-1.3.1-1.el6.noarch
Hypervisors are running ovirt-node, upgraded from ISO:
http://resources.ovirt.org/pub/ovirt-3.5/iso/ovirt-node/el7-3.5.2/ovirt-node-iso-3.5-0.999.201504280931.el7.centos.iso
Here's a snippet from the logs:
2015-04-29 18:47:16,947 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-2) [48eb0b1d] FINISH, MergeVDSCommand, log id: 5121ecc9
2015-04-29 18:47:16,947 ERROR [org.ovirt.engine.core.bll.MergeCommand] (pool-7-thread-2) [48eb0b1d] Command org.ovirt.engine.core.bll.MergeCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge failed, code = 52 (Failed with error mergeErr and code 52)
2015-04-29 18:47:16,954 ERROR [org.ovirt.engine.core.bll.MergeCommand] (pool-7-thread-2) [48eb0b1d] Transaction rolled-back for command: org.ovirt.engine.core.bll.MergeCommand.
2015-04-29 18:47:16,981 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) [5495bde7] Failed in MergeVDS method
2015-04-29 18:47:16,982 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) [5495bde7] Command org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand return value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=52, mMessage=Merge failed]]
Bug in Snapshot Removing
by Soeren Malchow
Dear all,
I am not sure if the mail just did not get any attention between all the
mails, and this time it is also going to the libvirt mailing list.
I am experiencing a problem with VMs becoming unresponsive when removing
snapshots (Live Merge), and I think there is a serious problem.
Here are the previous mails:
http://lists.ovirt.org/pipermail/users/2015-May/033083.html
The problem is on a system with everything on the latest version, CentOS
7.1 and oVirt 3.5.2.1 with all upgrades applied.
This problem did NOT exist before upgrading to CentOS 7.1, in an
environment running oVirt 3.5.0 and 3.5.1 on Fedora 20 with the
libvirt-preview repo activated.
I think this is a bug in libvirt, not oVirt itself, but I am not sure. The
actual file throwing the exception is in VDSM (/usr/share/vdsm/virt/vm.py,
line 697).
We are very willing to help, test, and supply log files in any way we can.
Regards
Soeren