On Tue, Feb 19, 2019, 14:19 Moritz Baumann <moritz.baumann(a)inf.ethz.ch> wrote:
Thank you Simone,
that worked.
On 19.02.19 12:40, Simone Tiraboschi wrote:
>
>
> On Tue, Feb 19, 2019 at 12:18 PM Moritz Baumann
> <moritz.baumann(a)inf.ethz.ch> wrote:
>
>     After upgrading from 4.2 to 4.3 I cannot start a VM anymore.
>
>     I tried to start the VM with Run Once on a specific node (ovirt-node04);
>     this is the output from /var/log/vdsm/vdsm.log:
>
>
>     VolumeDoesNotExist: Volume does not exist:
>     (u'482698c2-b1bd-4715-9bc5-e222405260df',)
>     2019-02-19 12:08:34,322+0100 INFO (vm/abee17b9) [storage.TaskManager.Task]
>     (Task='d04f3abb-f3d3-4e2f-902f-d3c5e4fabc36') aborting: Task is aborted:
>     "Volume does not exist: (u'482698c2-b1bd-4715-9bc5-e222405260df',)"
>     - code 201 (task:1181)
>     2019-02-19 12:08:34,322+0100 ERROR (vm/abee17b9) [storage.Dispatcher]
>     FINISH prepareImage error=Volume does not exist:
>     (u'482698c2-b1bd-4715-9bc5-e222405260df',) (dispatcher:81)
>     2019-02-19 12:08:34,322+0100 ERROR (vm/abee17b9) [virt.vm]
>     (vmId='abee17b9-079e-452c-a97d-99eff951dc39') The vm start process
>     failed (vm:937)
>     Traceback (most recent call last):
>       File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866, in _startUnderlyingVm
>         self._run()
>       File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2749, in _run
>         self._devices = self._make_devices()
>       File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2589, in _make_devices
>         disk_objs = self._perform_host_local_adjustment()
>       File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2662, in _perform_host_local_adjustment
>         self._preparePathsForDrives(disk_params)
>       File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1011, in _preparePathsForDrives
>         drive['path'] = self.cif.prepareVolumePath(drive, self.id)
>       File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 415, in prepareVolumePath
>         raise vm.VolumeError(drive)
>     VolumeError: Bad volume specification {'index': 1, 'domainID':
>
>
> Hi,
> I think you hit this bug: https://bugzilla.redhat.com/1666795
>
> Manually setting all the disk image files in the storage domain back to
> vdsm:kvm (36:36) ownership with 660 permissions is a temporary workaround.
Based on the discussion in the bug, this is the only way to migrate VMs
from a version earlier than 4.3 to 4.3.
Another way is to shut down the VM and start it again on a 4.3 host.
A future vdsm version will fix this issue using a libvirt hook.
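In case it helps anyone, here is a minimal, untested sketch of that
chown/chmod workaround in Python; the mount path below is only an example
and must be adjusted to wherever your NFS storage domain is actually
mounted on the host. Run it as root:

    import os

    # Example mount point of the NFS storage domain on the host; the
    # <server>, <export> and <sd-uuid> placeholders must be filled in.
    DOMAIN_IMAGES = '/rhev/data-center/mnt/<server>:<export>/<sd-uuid>/images'

    # Reset every image file back to vdsm:kvm (uid 36, gid 36) with
    # mode 0660, as described in the workaround above.
    for root, dirs, files in os.walk(DOMAIN_IMAGES):
        for name in files:
            path = os.path.join(root, name)
            os.chown(path, 36, 36)   # vdsm:kvm
            os.chmod(path, 0o660)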
> Adding all_squash,anonuid=36,anongid=36 to the configuration of your NFS
> share should avoid that until a proper fix is released.
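For reference, on the NFS server side that would be something like the
following line in /etc/exports (the export path here is just an example),
followed by re-exporting with 'exportfs -ra':

    /exports/ovirt-data    *(rw,sync,all_squash,anonuid=36,anongid=36)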