I have a few VMs whose root LVM file systems are corrupted because of improper shutdowns. I previously ran into the issue described here, which caused a node to restart.
Now I'm experiencing the same problem with other VMs. A DB VM was in single-user mode when it was powered off. I created a snapshot to clone it, then powered the CentOS 6 VM back on; it could not execute the mount command during boot and dropped me to a root maintenance prompt.
Running fsck reports far too many errors, many of them on files that should not even have been open: countless inode issues and cloned multiply-claimed blocks. Running fsck -y appears to repair the file system and it comes back clean, but after a restart the VM mounts the file system read-only, and when I try to remount it read-write the mount command throws a segmentation fault.
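For reference, the sequence I ran from the maintenance prompt was roughly the following (the device path is a placeholder for my actual root LV):

    # from the root maintenance shell
    fsck -f /dev/mapper/vg_root-lv_root    # reports many inode errors and multiply-claimed blocks
    fsck -y /dev/mapper/vg_root-lv_root    # answer yes to all fixes; filesystem then reports clean
    reboot
    # after reboot the root filesystem comes up read-only; this remount segfaults:
    mount -o remount,rw /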
I don't know whether this is related to the new version of oVirt Engine. I'm now afraid to shut down any VM through the oVirt UI, as they don't always come back up. I have also tried repairing the file system from a live CD, to no avail (roughly the steps sketched below). Given that a lot of the corrupted files are plain static HTML or archived tar files, I assume this has something to do with oVirt; the only data that should be at risk of corruption is live application data (open files).
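The live-CD attempt was approximately this (booted the CentOS 6 media in rescue mode; the LV name is again a placeholder for my layout):

    vgchange -ay                                 # activate the LVM volume groups
    e2fsck -f -y /dev/mapper/vg_root-lv_root     # repair with the filesystem unmounted
    mount /dev/mapper/vg_root-lv_root /mnt       # check whether it mounts read-write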
Please advise on how to proceed. I can provide whatever logs you require.
Thanks,