Yes.

Apparently I had corrupted the template file in question by copying over it while restoring a vdisk file, because the template file is hard linked into the vdisk directory.
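For anyone hitting the same thing: a hard link is a second name for the same inode, so copying over either name writes through the shared inode and clobbers the other. A minimal sketch (hypothetical filenames, not the actual oVirt layout):

```shell
#!/bin/sh
# Demonstration: template.img and vdisk.img are hard links to one inode.
set -e
dir=$(mktemp -d)
cd "$dir"
echo "original template" > template.img
ln template.img vdisk.img            # same inode, two directory entries

# "Restoring" a backup by copying over vdisk.img truncates and rewrites
# the shared inode, silently corrupting the template too:
echo "restored backup" > backup.img
cp backup.img vdisk.img

cat template.img                     # prints "restored backup"
```

Deleting the destination first (`rm vdisk.img` before the copy) breaks the link and leaves the template untouched.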

Thanks!
Usman


On Tue, Nov 5, 2013 at 4:30 AM, Itamar Heim <iheim@redhat.com> wrote:
On 10/08/2013 06:50 AM, Usman Aslam wrote:
I have a few VMs whose root LVM file systems are corrupted because of
improper shutdown. I experienced an issue, described here, that caused
a node to restart:
http://lists.ovirt.org/pipermail/users/2013-July/015247.html

But now I'm experiencing the same problem with other VMs. A DB VM was
in single-user mode when it was powered off. I created a snapshot to
clone it. When I powered the CentOS 6 VM back on, it could not execute
the mount command on boot and dropped me to a root maintenance prompt.

Running fsck comes back with far too many errors, many of them about
files that shouldn't even have been open: countless inode issues and
cloned multiply-claimed blocks. Running fsck -y seems to fix the file
system and it comes back clean, but upon restart the VM loads the
file system as read-only, and trying to remount it read-write makes the
mount command throw a segmentation fault.

I don't know if it's the new version of ovirt-engine. I'm afraid to
shut down any VM using the oVirt UI, as they don't always come back up.
I have tried repairing the file system with a live CD and such, to no
avail. Given that a lot of the corrupted files are plain static HTML or
archived tar files, I assume it has something to do with oVirt. The
only corruption should be to live application data (open files).

Please advise on how to proceed. I can provide whatever logs you may
require.

Thanks,
--
Usman Aslam
401.6.99.66.55
usman@linkshift.com


_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


was this resolved?

