<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"><head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<style type="text/css">
</style></head><body style=""><div><br>> Jan Siml <jsiml@plusline.net> wrote on 28 August 2015 at 15:15:<br>> <br>> <br>> Hello Juergen,<br>> <br>> > got exactly the same issue, with all the nice side effects like performance<br>> > degradation. Until now I was not able to fix this, or to fool the engine<br>> > somehow into showing the image as OK again and giving me a second<br>> > chance to drop the snapshot.<br>> > in some cases this procedure helped (needs a second storage domain):<br>> > -> live migrate the image to a different storage domain (check which<br>> > combinations are supported; iscsi -> nfs seems unsupported, iscsi<br>> > -> iscsi works)<br>> > -> the snapshot went into OK state, and in ~50% of cases I was able to drop the<br>> > snapshot then. space had been reclaimed, so it seems this worked<br>> <br>> Okay, that sounds interesting. But I'm afraid I don't know which image files <br>> Engine uses when a live migration is requested. If Engine uses the ones <br>> that are actually in use and updates the database afterwards -- fine. But <br>> if the images referenced in the Engine database are used instead, we will <br>> take a journey into the past.</div>
<div> </div>
<div>Knocking on wood -- so far no problems, and I have used this approach easily 50+ times.</div>
<div> </div>
<div>In cases where the live merge failed, offline merging worked in another 50% of cases. Those that failed offline, too, went back to the illegal snapshot state.</div>
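<div> </div>
<div>As an aside, the modification-time check mentioned further down in this thread (figuring out which copy of a disk image the VM is actually writing to) can be sketched in shell. This is a minimal sketch with placeholder temp files, not the real oVirt image paths:</div>

```shell
# Minimal sketch: of two copies of the same disk image, the one with the
# newer modification time is the one the VM is actually writing to.
newer_of() {
  # print whichever of the two files was modified more recently
  if [ "$1" -nt "$2" ]; then echo "$1"; else echo "$2"; fi
}

src_copy=$(mktemp)   # stands in for the volume on the source domain
sleep 1
dst_copy=$(mktemp)   # stands in for the volume on the target domain

newer_of "$src_copy" "$dst_copy"   # prints the path of the newer file
rm -f "$src_copy" "$dst_copy"
```

<div>In the real case the two arguments would be the volume files for the same image UUID under the source and target storage domain mount points; for a running VM, virsh -r dumpxml remains the authoritative answer.</div>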
<div><br>> <br>> > Another workaround is exporting the image to an NFS export<br>> > domain; there you can tell the engine not to export snapshots. After<br>> > re-importing, everything is fine.<br>> > The snapshot feature (live, at least) should be avoided entirely<br>> > for now... simply not reliable enough.<br>> > Your way works, too. I already did that, even though it was a pain to figure out<br>> > where to find what. This symlinking mess between /rhev, /dev and<br>> > /var/lib/libvirt is really awesome. Not.<br>> > > Jan Siml <jsiml@plusline.net> wrote on 28 August 2015 at 12:56:<br>> > ><br>> > ><br>> > > Hello,<br>> > ><br>> > > if no one has an idea how to correct the Disk/Snapshot paths in the Engine<br>> > > database, I see only one possible way to solve the issue:<br>> > ><br>> > > Stop the VM and copy the image/meta files from target storage to source storage<br>> > > (the one where Engine thinks the files are located). Start the VM.<br>> > ><br>> > > Any concerns regarding this procedure? I still hope that someone<br>> > > from the oVirt team can give advice on how to correct the database entries.<br>> > > If necessary I would open a bug in Bugzilla.<br>> > ><br>> > > Kind regards<br>> > ><br>> > > Jan Siml<br>> > ><br>> > > >> after a failed live storage migration (cause unknown) we have a<br>> > > >> snapshot which is undeletable due to its status 'illegal' (as seen<br>> > > >> in the storage/snapshot tab). I have already found some bugs [1],[2],[3]<br>> > > >> regarding this issue, but no way to solve it within oVirt<br>> > > >> 3.5.3.<br>> > > >><br>> > > >> I have attached the relevant engine.log snippet. 
Is there any way to<br>> > > >> do a live merge (and thereby delete the snapshot)?<br>> > > >><br>> > > >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1213157<br>> > > >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1247377 links to [3]<br>> > > >> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1247379 (no access)<br>> > > ><br>> > > > Some additional information: I have checked the images on both<br>> > > > storages and verified the disk paths with virsh's dumpxml.<br>> > > ><br>> > > > a) The images and snapshots are on both storages.<br>> > > > b) The images on source storage aren't used. (modification time)<br>> > > > c) The images on target storage are used. (modification time)<br>> > > > d) virsh -r dumpxml tells me the disk images are located on _target_<br>> > > > storage.<br>> > > > e) The admin interface tells me that images and snapshot are located on<br>> > > > _source_ storage, which isn't true; see b), c) and d).<br>> > > ><br>> > > > What can we do to solve this issue? Is this to be corrected in the<br>> > > > database only?<br>> <br>> Kind regards<br>> <br>> Jan Siml</div></body></html>