which means that the restored volumes are either inaccessible in some way
(permissions?) or their metadata is corrupted (though it doesn't look that way).
There is probably another traceback in the logs that would give us more
information.
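If I recall correctly, 'truesize' is just the allocated size that vdsm gets by
stat()ing the volume, so anything that stops vdsm from reading or stat()ing the
restored files would explain the missing key. A quick way to rule out the
permission theory would be something like this (only a rough sketch, using the
paths from your mail below; vdsm runs as vdsm:kvm, uid/gid 36:36):

# cd /rhev/data-center/mnt/xyz-02.tufts.edu:_vol_tusk__vm_tusk__vm/fa3279ec-2912-45ac-b7bc-9fe89151ed99/images/79ccd989-3033-4e6a-80da-ba210c94225a
# ls -ln       # restored files should be owned by 36:36 like the untouched ones
# sudo -u vdsm head -c 1M 8d48505d-846d-49a7-8b50-d972ee051145 > /dev/null   # read test as the vdsm user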
Could you post the entire vdsm log somewhere?
Thanks.
--
Federico
----- Original Message -----
From: "Usman Aslam" <usman(a)linkshift.com>
To: "Federico Simoncelli" <fsimonce(a)redhat.com>
Cc: users(a)ovirt.org
Sent: Friday, October 4, 2013 4:49:08 PM
Subject: Re: [Users] VM wont restart after some NFS snapshot restore.
Federico,
The files reside on this mount on the hypervisor:
/rhev/data-center/mnt/xyz-02.tufts.edu:_vol_tusk__vm_tusk__vm/fa3279ec-2912-45ac-b7bc-9fe89151ed99/images/79ccd989-3033-4e6a-80da-ba210c94225a
and are symlinked as described below
[root@xyz-02 430cd986-6488-403b-8d46-29abbc3eba38]# pwd
/rhev/data-center/430cd986-6488-403b-8d46-29abbc3eba38
[root@xyz-02 430cd986-6488-403b-8d46-29abbc3eba38]# ll
total 12
lrwxrwxrwx 1 vdsm kvm 120 Oct 3 12:35 ee2ae498-6e45-448d-8f91-0efca377dcf6 -> /rhev/data-center/mnt/xyz-02.tufts.edu:_vol_tusk__iso_tusk__iso/ee2ae498-6e45-448d-8f91-0efca377dcf6
lrwxrwxrwx 1 vdsm kvm 118 Oct 3 12:35 fa3279ec-2912-45ac-b7bc-9fe89151ed99 -> /rhev/data-center/mnt/xyz-02.tufts.edu:_vol_tusk__vm_tusk__vm/fa3279ec-2912-45ac-b7bc-9fe89151ed99
lrwxrwxrwx 1 vdsm kvm 118 Oct 3 12:35 mastersd -> /rhev/data-center/mnt/xyz-02.tufts.edu:_vol_tusk__vm_tusk__vm/fa3279ec-2912-45ac-b7bc-9fe89151ed99
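If it would help, I can also sanity-check the restored qcow2 leaf with qemu-img
from the images directory, to confirm it still opens and still points at its
parent volume; roughly this (read-only, with the VM down):

[root@xyz-02 images]# qemu-img info 79ccd989-3033-4e6a-80da-ba210c94225a/8d48505d-846d-49a7-8b50-d972ee051145
[root@xyz-02 images]# qemu-img check 79ccd989-3033-4e6a-80da-ba210c94225a/8d48505d-846d-49a7-8b50-d972ee051145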
I did a diff, and the contents of the *Original* meta file (the one that works
and lets the VM start, but has a bad file system) and the *Backup* meta file
(the one being restored from the NFS snapshot) *are exactly the same*.
The contents are listed below. Also, the file sizes for all six related files
are exactly the same.
[root@xyz-02 images]# cat 79ccd989-3033-4e6a-80da-ba210c94225a/039a8482-c267-4051-b1e6-1c1dee49b3d7.meta
DOMAIN=fa3279ec-2912-45ac-b7bc-9fe89151ed99
VOLTYPE=SHARED
CTIME=1368457020
FORMAT=RAW
IMAGE=59b6a429-bd11-40c6-a218-78df840725c6
DISKTYPE=2
PUUID=00000000-0000-0000-0000-000000000000
LEGALITY=LEGAL
MTIME=1368457020
POOL_UUID=
DESCRIPTION=Active VM
TYPE=SPARSE
SIZE=104857600
EOF
[root@tss-tusk-ovirt-02 images]# cat 79ccd989-3033-4e6a-80da-ba210c94225a/8d48505d-846d-49a7-8b50-d972ee051145.meta
DOMAIN=fa3279ec-2912-45ac-b7bc-9fe89151ed99
CTIME=1370303194
FORMAT=COW
DISKTYPE=2
LEGALITY=LEGAL
SIZE=104857600
VOLTYPE=LEAF
DESCRIPTION=
IMAGE=79ccd989-3033-4e6a-80da-ba210c94225a
PUUID=039a8482-c267-4051-b1e6-1c1dee49b3d7
MTIME=1370303194
POOL_UUID=
TYPE=SPARSE
EOF
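I can also compare the data files themselves, not just the .meta files, if that
helps. A sketch of what I had in mind, assuming I still have the pre-restore
copies somewhere to compare against (/path/to/pre-restore below is just a
placeholder for wherever those copies live):

[root@xyz-02 images]# md5sum 79ccd989-3033-4e6a-80da-ba210c94225a/8d48505d-846d-49a7-8b50-d972ee051145 /path/to/pre-restore/8d48505d-846d-49a7-8b50-d972ee051145
[root@xyz-02 images]# du -h --apparent-size 79ccd989-3033-4e6a-80da-ba210c94225a/*   # logical sizes
[root@xyz-02 images]# du -h 79ccd989-3033-4e6a-80da-ba210c94225a/*                   # allocated sizes (sparse files can differ here even when ls sizes match)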
Any help would be greatly appreciated!
Thanks,
Usman
On Fri, Oct 4, 2013 at 9:50 AM, Federico Simoncelli
<fsimonce(a)redhat.com> wrote:
> Hi Usman,
> can you paste somewhere the content of the meta files?
>
> $ cat 039a8482-c267-4051-b1e6-1c1dee49b3d7.meta 8d48505d-846d-49a7-8b50-d972ee051145.meta
>
> could you also provide the absolute path to those files? (in the vdsm host)
>
> Thanks,
> --
> Federico
>
> ----- Original Message -----
> > From: "Usman Aslam" <usman(a)linkshift.com>
> > To: users(a)ovirt.org
> > Sent: Thursday, October 3, 2013 4:29:43 AM
> > Subject: [Users] VM wont restart after some NFS snapshot restore.
> >
> > I have some VMs that live on an NFS share. Basically, I had to revert the
> > VM disk to a backup from a few days ago. So I powered the VM down and
> > copied over the following files:
> >
> > 039a8482-c267-4051-b1e6-1c1dee49b3d7
> > 039a8482-c267-4051-b1e6-1c1dee49b3d7.lease
> > 039a8482-c267-4051-b1e6-1c1dee49b3d7.meta
> > 8d48505d-846d-49a7-8b50-d972ee051145
> > 8d48505d-846d-49a7-8b50-d972ee051145.lease
> > 8d48505d-846d-49a7-8b50-d972ee051145.meta
> >
> > and now when I try to power on the VM, it complains:
> >
> > 2013-Oct-02, 22:02:38
> > Failed to run VM zabbix-prod-01 (User: admin@internal).
> > 2013-Oct-02, 22:02:38
> > Failed to run VM zabbix-prod-01 on Host tss-tusk-ovirt-01-ovirtmgmt.tusk.tufts.edu.
> > 2013-Oct-02, 22:02:38
> > VM zabbix-prod-01 is down. Exit message: 'truesize'.
> >
> > Any ideas on how I could resolve this? Perhaps a better way of approaching
> > the restore on a filesystem level?
> >
> > I see the following in the vdsm.log:
> >
> > Thread-7843::ERROR::2013-10-02 22:02:37,548::vm::716::vm.Vm::(_startUnderlyingVm)
> > vmId=`8e8764ad-6b4c-48d8-9a19-fa5cf77208ef`::The vm start process failed
> > Traceback (most recent call last):
> > File "/usr/share/vdsm/vm.py", line 678, in _startUnderlyingVm
> > self._run()
> > File "/usr/share/vdsm/libvirtvm.py", line 1467, in _run
> > devices = self.buildConfDevices()
> > File "/usr/share/vdsm/vm.py", line 515, in buildConfDevices
> > self._normalizeVdsmImg(drv)
> > File "/usr/share/vdsm/vm.py", line 408, in _normalizeVdsmImg
> > drv['truesize'] = res['truesize']
> > KeyError: 'truesize'
> > Thread-7843::DEBUG::2013-10-02 22:02:37,553::vm::1065::vm.Vm::(setDownStatus)
> > vmId=`8e8764ad-6b4c-48d8-9a19-fa5cf77208ef`::Changed state to Down: 'truesize'
> >
> >
> > Any help would be really nice, thanks!
> > --
> > Usman
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
--
Usman Aslam
401.6.99.66.55
usman(a)linkshift.com