I have a few VMs whose root LVM file systems are corrupted because of improper shutdowns. I previously hit the issue described here, which caused a node to restart:
http://lists.ovirt.org/pipermail/users/2013-July/015247.html
But now I'm experiencing the same problem with other VMs. A database VM was in single-user mode when it was powered off. I created a snapshot to clone it, then powered the CentOS 6 VM back on; it could not execute the mount command on boot and dropped me to a root maintenance prompt.
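In case it matters, this is roughly how I get at the root LV from that prompt (the volume group and LV names below are placeholders for my actual layout):

    # activate the LVM volumes and list them to find the root LV
    lvm vgchange -ay
    lvm lvs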
Running fsck comes back with far too many errors, many of them about files that shouldn't even have been open: countless inode issues and endless prompts to clone multiply-claimed blocks. Running fsck -y appears to fix the file system and it comes back as clean, but on restart the VM mounts the file system read-only, and trying to remount it read-write makes the mount command throw a segmentation fault.
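For reference, the exact sequence is roughly this (the device path is a placeholder for my root LV; the file system is ext4 on CentOS 6):

    # first pass, read-only, just to list the damage
    e2fsck -n /dev/mapper/vg_main-lv_root
    # second pass, auto-answer yes to every fix; reports the FS clean when done
    e2fsck -y /dev/mapper/vg_main-lv_root
    # after rebooting, root comes up read-only and this is what segfaults
    mount -o remount,rw /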
I don't know if it's the new version of oVirt Engine. I'm afraid to shut down any VM through the oVirt UI because they don't always come back up. I have tried repairing the file system from a live CD and so on, to no avail. Given that a lot of the corrupted files are plain static HTML or archived tar files, I assume it has something to do with oVirt; the only data that should be at risk of corruption on an improper shutdown is live application data (open files).
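If it helps as a data point, I can confirm the corruption reaches supposedly static files by verifying them against the RPM database (this only covers package-owned files, but it's a quick check; the '5' in the third column flags a checksum mismatch):

    # list packaged files whose on-disk checksum no longer matches the RPM database
    rpm -Va | grep '^..5'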
Please advise on how to proceed. I can provide whatever logs you may require.

Thanks,
-- 
Usman Aslam
401.6.99.66.55
usman@linkshift.com