
A slight update to this. I did find that the gluster mount points reappear under /rhev/data-center on the SPM if I deactivate and then reactivate the gluster storage domains under the data storages. As this means shutting down any VMs that are running from that storage, is there any known way of getting the gluster mount points to reattach without a deactivate/reactivate of the storage domain in the GUI?

Cheers
Peter

On 20 May 2014 09:37, Peter Harris <doilooksensible@gmail.com> wrote:
I am trying to migrate my VMs from my old host running ovirt 3.3.4 to a new setup running 3.4.1. My basic setup is:
OLD: vmhost1 - ovirt 3.3.4 - NFS storage domains
NEW: ovirtmgr - ovirt 3.4.1 (virt-only setup) - gluster storage domains
  vmhost2 - cluster1
  vmhost3 - cluster2
  vmhost4 - cluster2
  vmhost5 - cluster3
  vmhost6 - cluster3
My gluster volumes were created via the gluster command line; the two volume definitions are shown below.
All hosts are running Scientific Linux 6.5, and the intention is to migrate vmhost1 into cluster1 of the new environment.
I have an NFS export storage domain which I am using to migrate VMs from vmhost1.

Volume Name: vol-vminf
Type: Distributed-Replicate
Volume ID: b0b456bb-76e9-42e7-bb95-3415db79d631
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: vmhost3:/storage/inf/br-inf
Brick2: vmhost4:/storage/inf/br-inf
Brick3: vmhost5:/storage/inf/br-inf
Brick4: vmhost6:/storage/inf/br-inf
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
server.allow-insecure: on
Volume Name: vol-vmimages
Type: Distribute
Volume ID: 91e2cf8b-2662-4c26-b937-84b8f5b62e2b
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: vmhost3:/storage/vmimages/br-vmimages
Brick2: vmhost3:/storage/vmimages/br-vmimages
Brick3: vmhost3:/storage/vmimages/br-vmimages
Brick4: vmhost3:/storage/vmimages/br-vmimages
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
server.allow-insecure: on
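For reference, the layout above would come from gluster CLI commands along these lines (a minimal sketch; the exact create invocations are illustrative, but the owner-uid/gid 36 settings correspond to the vdsm:kvm user and group and match the reconfigured options shown):

  # 2 x 2 distributed-replicate: adjacent bricks form replica pairs
  gluster volume create vol-vminf replica 2 \
      vmhost3:/storage/inf/br-inf vmhost4:/storage/inf/br-inf \
      vmhost5:/storage/inf/br-inf vmhost6:/storage/inf/br-inf
  # Let vdsm (uid/gid 36) own the brick contents, as oVirt expects
  gluster volume set vol-vminf storage.owner-uid 36
  gluster volume set vol-vminf storage.owner-gid 36
  gluster volume set vol-vminf server.allow-insecure on
  gluster volume start vol-vminf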
I have had many and varied failures, which I have tried to match up with threads here; I am now a bit stuck and would appreciate any help.
I quite regularly cannot import VMs, and now I cannot create a new disk for a new VM (no import involved). The error always seems to boil down to the following in engine.log (this instance is from creating a new disk image on a gluster storage domain):
2014-05-20 08:51:21,136 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) [4637af09] Correlation ID: 2b0b55ab, Job ID: 1a583643-e28a-4f09-a39d-46e4fc6d20b8, Call Stack: null, Custom Event ID: -1, Message: Add-Disk operation of rhel-7_Disk1 was initiated on VM rhel-7 by peter.harris.
2014-05-20 08:51:21,137 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-9) [4637af09] BaseAsyncTask::startPollingTask: Starting to poll task 720b4d92-1425-478c-8351-4ff827b8f728.
2014-05-20 08:51:28,077 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-19) Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2014-05-20 08:51:28,084 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-19) Failed in HSMGetAllTasksStatusesVDS method
2014-05-20 08:51:28,085 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-19) SPMAsyncTask::PollTask: Polling task 720b4d92-1425-478c-8351-4ff827b8f728 (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'.
2014-05-20 08:51:28,104 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-19) BaseAsyncTask::LogEndTaskFailure: Task 720b4d92-1425-478c-8351-4ff827b8f728 (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory: '/rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/967aec77-46d5-418b-8979-d0a86389a77b/images/7726b997-7e58-45f8-a5a6-9cb9a689a45a', code = 100,
-- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory: '/rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/967aec77-46d5-418b-8979-d0a86389a77b/images/7726b997-7e58-45f8-a5a6-9cb9a689a45a', code = 100
Certainly, if I check /rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/ on the SPM server, there is no 967aec77-46d5-418b-8979-d0a86389a77b subdirectory; the only entries present are NFS mounts.
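In case it helps, this is roughly how I am checking what the SPM actually has mounted (a sketch; /rhev/data-center/mnt/glusterSD/ is where oVirt normally mounts gluster domains in this release, and the manual mount at the end is only a reachability test with an illustrative mount point, not a fix):

  # List all gluster FUSE mounts currently on the SPM host
  mount -t fuse.glusterfs
  # Where oVirt normally mounts gluster storage domains
  ls -l /rhev/data-center/mnt/glusterSD/
  # Reachability test only (illustrative mount point)
  mkdir -p /mnt/gtest
  mount -t glusterfs vmhost3:/vol-vminf /mnt/gtest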
There appear to be no errors in the SPM vdsm.log for this disk operation.
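(I searched the SPM log by the task UUID from the engine.log excerpt above, e.g.:

  grep 720b4d92-1425-478c-8351-4ff827b8f728 /var/log/vdsm/vdsm.log

and nothing at ERROR level shows up.)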
=============
When I tried to import the VM (the one that I then tried to create from scratch above), I had the following errors in the SPM vdsm.log:

Thread-2220::DEBUG::2014-05-20 08:35:05,255::task::595::TaskManager.Task::(_updateState) Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::moving from state init -> state preparing
Thread-2220::INFO::2014-05-20 08:35:05,255::logUtils::44::dispatcher::(wrapper) Run and protect: deleteImage(sdUUID='615647e2-1f60-47e1-8e55-be9f7ead6f15', spUUID='06930787-a091-49a3-8217-1418c5a9881e', imgUUID='80ed133c-fd72-4d35-aae5-e1313be3cf23', postZero='false', force='false')
Thread-2220::DEBUG::2014-05-20 08:35:05,255::resourceManager::198::ResourceManager.Request::(__init__) ResName=`Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23`ReqID=`499de454-c563-4156-a3ed-13b7eb9defa6`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '1496' at 'deleteImage'
Thread-2220::DEBUG::2014-05-20 08:35:05,255::resourceManager::542::ResourceManager::(registerResource) Trying to register resource 'Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23' for lock type 'exclusive'
Thread-2220::DEBUG::2014-05-20 08:35:05,255::resourceManager::601::ResourceManager::(registerResource) Resource 'Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23' is free. Now locking as 'exclusive' (1 active user)
Thread-2220::DEBUG::2014-05-20 08:35:05,256::resourceManager::238::ResourceManager.Request::(grant) ResName=`Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23`ReqID=`499de454-c563-4156-a3ed-13b7eb9defa6`::Granted request
Thread-2220::DEBUG::2014-05-20 08:35:05,256::task::827::TaskManager.Task::(resourceAcquired) Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::_resourcesAcquired: Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23 (exclusive)
Thread-2220::DEBUG::2014-05-20 08:35:05,256::task::990::TaskManager.Task::(_decref) Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::ref 1 aborting False
Thread-2220::DEBUG::2014-05-20 08:35:05,256::resourceManager::198::ResourceManager.Request::(__init__) ResName=`Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15`ReqID=`73f79517-f13a-4e5b-999a-6f1994d2818a`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '1497' at 'deleteImage'
Thread-2220::DEBUG::2014-05-20 08:35:05,256::resourceManager::542::ResourceManager::(registerResource) Trying to register resource 'Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15' for lock type 'shared'
Thread-2220::DEBUG::2014-05-20 08:35:05,257::resourceManager::601::ResourceManager::(registerResource) Resource 'Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15' is free. Now locking as 'shared' (1 active user)
Thread-2220::DEBUG::2014-05-20 08:35:05,257::resourceManager::238::ResourceManager.Request::(grant) ResName=`Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15`ReqID=`73f79517-f13a-4e5b-999a-6f1994d2818a`::Granted request
Thread-2220::DEBUG::2014-05-20 08:35:05,257::task::827::TaskManager.Task::(resourceAcquired) Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::_resourcesAcquired: Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15 (shared)
Thread-2220::DEBUG::2014-05-20 08:35:05,257::task::990::TaskManager.Task::(_decref) Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::ref 1 aborting False
Thread-2220::ERROR::2014-05-20 08:35:05,266::hsm::1502::Storage.HSM::(deleteImage) Empty or not found image 80ed133c-fd72-4d35-aae5-e1313be3cf23 in SD 615647e2-1f60-47e1-8e55-be9f7ead6f15.
{'1f41529a-e02e-4cd8-987c-b1ea4fcba2be': ImgsPar(imgs=('290f5cdf-b5d7-462b-958d-d41458a26bf6',), parent=None), '1748a8f0-8668-4f21-9b26-d2e3b180e35b': ImgsPar(imgs=('67fd552b-8b3d-4117-82d2-e801bb600992',), parent=None)}
Thread-2220::ERROR::2014-05-20 08:35:05,266::task::866::TaskManager.Task::(_setError) Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::Unexpected error
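As a further check (a sketch; /rhev/data-center/mnt is the standard vdsm mount root, and the UUID is the image UUID from the deleteImage call above), one can look for that image directory under all mounted domains:

  find /rhev/data-center/mnt -maxdepth 6 -type d \
      -name 80ed133c-fd72-4d35-aae5-e1313be3cf23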
When I installed/set up ovirt-engine, I did choose NFS as the default storage type.
I am clearly doing something weird.