Hi,
I think the VM or the image may already exist in the domain, which is
why you cannot create a new one:
validateImagePath
os.mkdir(imageDir, 0o755)
OSError: [Errno 17] File exists:
'/rhev/data-center/cfc84aa8-8ec4-4e13-8104-370ea5b9d432/8f5f59f9-b3d5-4e13-9c56-b4d33475b277/images/244747b4-1b3d-4a9c-8fd9-3a914e6f2bc3'
0a3c909d-0737-492e-a47c-bc0ab5e1a603::WARNING::2015-06-02
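For what it's worth, the failure boils down to a strict os.mkdir() on an
image directory that is already there. A minimal illustration (Python 3,
not VDSM's code, and with a throwaway path instead of /rhev/data-center/...):
-- snip --
# Minimal illustration only: os.mkdir() is strict, so a leftover directory
# with the same image UUID raises EEXIST, which validateImagePath then
# turns into ImagePathError.  The parent directory here is a temp dir.
import errno
import os
import tempfile

parent = tempfile.mkdtemp()   # stand-in for the .../images/ directory
image_dir = os.path.join(parent, "244747b4-1b3d-4a9c-8fd9-3a914e6f2bc3")

os.mkdir(image_dir, 0o755)        # first attempt: succeeds
try:
    os.mkdir(image_dir, 0o755)    # retry: the directory already exists
except OSError as exc:
    if exc.errno == errno.EEXIST:
        print("EEXIST -- same failure mode as in the traceback:", image_dir)
    else:
        raise
-- snip --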
One of the options you have when exporting a VM is to replace existing
files (it might be called "force").
Can you please try to select that option?
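If it is easier to script, the same override can be passed through the
Python SDK. A rough sketch, assuming the oVirt Python SDK 3.x (ovirtsdk)
and that the export action's "exclusive" flag is the force/override
option; the engine URL, credentials, VM name, and domain name below are
placeholders, so please verify the details against your SDK version:
-- snip --
# Rough sketch, oVirt Python SDK 3.x assumed; exclusive=True is what I
# believe maps to the "force override" checkbox -- verify before using.
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url="https://engine.example.com/api",        # placeholder engine URL
          username="admin@internal", password="secret",
          insecure=True)

vm = api.vms.get(name="myvm")                           # placeholder VM name
vm.export(params.Action(
    storage_domain=params.StorageDomain(name="export"), # your export domain
    exclusive=True))                                    # overwrite existing files
api.disconnect()
-- snip --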
Thanks,
Dafna
On 06/02/2015 07:42 PM, InterNetX - Juergen Gotteswinter wrote:
Hi,
when trying to export a VM to the NFS export domain, the process fails
almost immediately. VDSM logs the following:
-- snip --
::Request was made in '/usr/share/vdsm/storage/sp.py' line '1549' at
'moveImage'
0a3c909d-0737-492e-a47c-bc0ab5e1a603::DEBUG::2015-06-02
20:36:38,190::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'8f5f59f9-b3d5-4e13-9c56-b4d33475b277_imageNS.244747b4-1b3d-4a9c-8fd9-3a914e6f2bc3'
for lock type 'shared'
0a3c909d-0737-492e-a47c-bc0ab5e1a603::DEBUG::2015-06-02
20:36:38,191::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation
'lvm reload operation' got the operation mutex
0a3c909d-0737-492e-a47c-bc0ab5e1a603::DEBUG::2015-06-02
20:36:38,192::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/3600144f0db35bc650000534be67e0001|'\'',
'\''r|.*|'\''
] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50
retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,vg_name,attr,size,seg_start_pe,devices,tags
8f5f59f9-b3d5-4e13-9c56-b4d33475b277 (cwd None)
0a3c909d-0737-492e-a47c-bc0ab5e1a603::DEBUG::2015-06-02
20:36:38,480::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '
WARNING: lvmetad is running but disabled. Restart lvmetad before
enabling it!\n'; <rc> = 0
0a3c909d-0737-492e-a47c-bc0ab5e1a603::DEBUG::2015-06-02
20:36:38,519::lvm::463::Storage.LVM::(_reloadlvs) lvs reloaded
0a3c909d-0737-492e-a47c-bc0ab5e1a603::DEBUG::2015-06-02
20:36:38,519::lvm::463::Storage.OperationMutex::(_reloadlvs) Operation
'lvm reload operation' released the operation mutex
0a3c909d-0737-492e-a47c-bc0ab5e1a603::ERROR::2015-06-02
20:36:38,520::blockVolume::429::Storage.Volume::(validateImagePath)
Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/blockVolume.py", line 427, in
validateImagePath
os.mkdir(imageDir, 0o755)
OSError: [Errno 17] File exists:
'/rhev/data-center/cfc84aa8-8ec4-4e13-8104-370ea5b9d432/8f5f59f9-b3d5-4e13-9c56-b4d33475b277/images/244747b4-1b3d-4a9c-8fd9-3a914e6f2bc3'
0a3c909d-0737-492e-a47c-bc0ab5e1a603::WARNING::2015-06-02
20:36:38,521::resourceManager::591::Storage.ResourceManager::(registerResource)
Resource factory failed to create resource
'8f5f59f9-b3d5-4e13-9c56-b4d33475b277_imageNS.244747b4-1b3d-4a9c-8fd9-3a914e6f2bc3'.
Canceling request.
Traceback (most recent call last):
File "/usr/share/vdsm/storage/resourceManager.py", line 587, in
registerResource
obj = namespaceObj.factory.createResource(name, lockType)
File "/usr/share/vdsm/storage/resourceFactories.py", line 193, in
createResource
lockType)
File "/usr/share/vdsm/storage/resourceFactories.py", line 122, in
__getResourceCandidatesList
imgUUID=resourceName)
File "/usr/share/vdsm/storage/image.py", line 185, in getChain
srcVol = volclass(self.repoPath, sdUUID, imgUUID, uuidlist[0])
File "/usr/share/vdsm/storage/blockVolume.py", line 80, in __init__
volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
File "/usr/share/vdsm/storage/volume.py", line 144, in __init__
self.validate()
File "/usr/share/vdsm/storage/blockVolume.py", line 89, in validate
volume.Volume.validate(self)
File "/usr/share/vdsm/storage/volume.py", line 156, in validate
self.validateImagePath()
File "/usr/share/vdsm/storage/blockVolume.py", line 430, in
validateImagePath
raise se.ImagePathError(imageDir)
ImagePathError: Image path does not exist or cannot be accessed/created:
('/rhev/data-center/cfc84aa8-8ec4-4e13-8104-370ea5b9d432/8f5f59f9-b3d5-4e13-9c56-b4d33475b277/images/244747b4-1b3d-4a9c-8fd9-3a914e6f2bc3',)
0a3c909d-0737-492e-a47c-bc0ab5e1a603::DEBUG::2015-06-02
20:36:38,522::resourceManager::210::Storage.ResourceManager.Request::(cancel)
ResName=`8f5f59f9-b3d5-4e13-9c56-b4d33475b277_imageNS.244747b4-1b3d-4a9c-8fd9-3a914e6f2bc3`ReqID=`6f861566-b1c9-45c7-9181-452b0bc014d0`::Canceled
request
0a3c909d-0737-492e-a47c-bc0ab5e1a603::WARNING::2015-06-02
20:36:38,523::resourceManager::203::Storage.ResourceManager.Request::(cancel)
ResName=`8f5f59f9-b3d5-4e13-9c56-b4d33475b277_imageNS.244747b4-1b3d-4a9c-8fd9-3a914e6f2bc3`ReqID=`6f861566-b1c9-45c7-9181-452b0bc014d0`::Tried
to cancel a processed request
0a3c909d-0737-492e-a47c-bc0ab5e1a603::ERROR::2015-06-02
20:36:38,523::task::866::Storage.TaskManager.Task::(_setError)
Task=`0a3c909d-0737-492e-a47c-bc0ab5e1a603`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/storage/task.py", line 334, in run
return self.cmd(*self.argslist, **self.argsdict)
File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
return method(self, *args, **kwargs)
File "/usr/share/vdsm/storage/sp.py", line 1549, in moveImage
imgUUID, srcLock),
File "/usr/share/vdsm/storage/resourceManager.py", line 523, in
acquireResource
raise se.ResourceAcqusitionFailed()
ResourceAcqusitionFailed: Could not acquire resource. Probably resource
factory threw an exception.: ()
-- snip --
I thought it might be caused by SELinux running in enforcing mode, but
switching SELinux to permissive didn't change anything.
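For anyone who wants to rule SELinux out completely, a quick stdlib-only
sketch that scans the audit log for AVC denials (the log path and the
keywords are assumptions, adjust to your setup):
-- snip --
# Look for SELinux AVC denials touching storage paths; the audit log
# location and the keywords are assumptions -- adjust to your environment.
# (Reading /var/log/audit/audit.log normally requires root.)
KEYWORDS = ("rhev", "vdsm", "nfs")

with open("/var/log/audit/audit.log") as audit:
    for line in audit:
        if "avc" in line and "denied" in line and any(k in line for k in KEYWORDS):
            print(line.rstrip())
-- snip --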
Node versions are:
vdsm-cli-4.16.14-0.el7.noarch
vdsm-python-4.16.14-0.el7.noarch
vdsm-python-zombiereaper-4.16.14-0.el7.noarch
vdsm-jsonrpc-4.16.14-0.el7.noarch
vdsm-yajsonrpc-4.16.14-0.el7.noarch
vdsm-xmlrpc-4.16.14-0.el7.noarch
vdsm-4.16.14-0.el7.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-client-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64
sanlock-3.2.2-2.el7.x86_64
sanlock-lib-3.2.2-2.el7.x86_64
sanlock-python-3.2.2-2.el7.x86_64
CentOS Linux release 7.1.1503 (Core)
3.10.0-229.4.2.el7.x86_64
This happens on every node in the cluster. I tried the VDSM RPM from
3.5.3-pre, which didn't change anything; the problem still exists.
The export domain itself can be accessed fine, and already existing
templates / exported VMs on the NFS share can be deleted. Permissions are
set correctly (uid/gid 36), nfsvers=3.
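For reference, a quick way to double-check that ownership and spot anything
not owned by vdsm:kvm on the export domain (stdlib only; the mount point
below is a placeholder):
-- snip --
# Sketch: report anything under the export domain mount that is not owned
# by vdsm:kvm (uid/gid 36).  EXPORT_MOUNT is a placeholder -- substitute
# the real NFS mount point.
import os

EXPORT_MOUNT = "/rhev/data-center/mnt/<server>:_export"   # placeholder path
VDSM_UID = KVM_GID = 36

for root, dirs, files in os.walk(EXPORT_MOUNT):
    for name in dirs + files:
        path = os.path.join(root, name)
        st = os.lstat(path)
        if (st.st_uid, st.st_gid) != (VDSM_UID, KVM_GID):
            print("unexpected owner %d:%d on %s" % (st.st_uid, st.st_gid, path))
-- snip --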
Does anyone have a hint for me?
Cheers
Juergen