I am trying to migrate my VMs from my old host running oVirt 3.3.4 to a new setup running 3.4.1. My basic setup is:
OLD
vmhost1 - oVirt 3.3.4 - NFS storage domains
NEW
ovirtmgr - oVirt 3.4.1 (virt-only setup) - gluster storage domains
vmhost2 - cluster1
vmhost3 - cluster2
vmhost4 - cluster2
vmhost5 - cluster3
vmhost6 - cluster3
My gluster volumes were created via the gluster command line, and I have the two volumes listed below (example creation commands follow the listings).
All hosts are running Scientific Linux 6.5, and the intention is to migrate vmhost1 into cluster1 of the new environment.
I have an NFS export storage domain which I am using to migrate VMs from vmhost1.
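For reference, the export domain is a plain NFS export, set up roughly as follows (the server-side path is a placeholder, but the 36:36 ownership is what vdsm expects):

# /etc/exports on the NFS server (illustrative path)
/exports/ovirt-export  *(rw,sync,no_subtree_check)

# the exported directory must be owned by vdsm:kvm (uid/gid 36)
chown 36:36 /exports/ovirt-export
exportfs -ra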
Volume Name: vol-vminf
Type: Distributed-Replicate
Volume ID: b0b456bb-76e9-42e7-bb95-3415db79d631
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: vmhost3:/storage/inf/br-inf
Brick2: vmhost4:/storage/inf/br-inf
Brick3: vmhost5:/storage/inf/br-inf
Brick4: vmhost6:/storage/inf/br-inf
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
server.allow-insecure: on

Volume Name: vol-vmimages
Type: Distribute
Volume ID: 91e2cf8b-2662-4c26-b937-84b8f5b62e2b
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: vmhost3:/storage/vmimages/br-vmimages
Brick2: vmhost4:/storage/vmimages/br-vmimages
Brick3: vmhost5:/storage/vmimages/br-vmimages
Brick4: vmhost6:/storage/vmimages/br-vmimages
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
server.allow-insecure: on
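For completeness, the volumes were created with commands roughly like these (shown for vol-vminf; vol-vmimages was done the same way minus "replica 2", which is why it is a plain distribute volume):

gluster volume create vol-vminf replica 2 transport tcp \
    vmhost3:/storage/inf/br-inf vmhost4:/storage/inf/br-inf \
    vmhost5:/storage/inf/br-inf vmhost6:/storage/inf/br-inf
gluster volume set vol-vminf storage.owner-uid 36
gluster volume set vol-vminf storage.owner-gid 36
gluster volume set vol-vminf server.allow-insecure on
gluster volume start vol-vminf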
I have had many varying failures, which I have tried to match up with threads here, and I am now a bit stuck and would appreciate any help.
I quite regularly cannot import VMs, and now I cannot even create a new disk for a new VM (no import involved). The error always seems to boil down to the following in engine.log (this particular instance is from creating a new disk image on a gluster storage domain):
2014-05-20 08:51:21,136 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) [4637af09] Correlation ID: 2b0b55ab, Job ID: 1a583643-e28a-4f09-a39d-46e4fc6d20b8, Call Stack: null, Custom Event ID: -1, Message: Add-Disk operation of rhel-7_Disk1 was initiated on VM rhel-7 by peter.harris.
2014-05-20 08:51:21,137 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-9) [4637af09] BaseAsyncTask::startPollingTask: Starting to poll task 720b4d92-1425-478c-8351-4ff827b8f728.
2014-05-20 08:51:28,077 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-19) Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2014-05-20 08:51:28,084 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-19) Failed in HSMGetAllTasksStatusesVDS method
2014-05-20 08:51:28,085 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-19) SPMAsyncTask::PollTask: Polling task 720b4d92-1425-478c-8351-4ff827b8f728 (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'.
2014-05-20 08:51:28,104 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-19) BaseAsyncTask::LogEndTaskFailure: Task 720b4d92-1425-478c-8351-4ff827b8f728 (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory: '/rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/967aec77-46d5-418b-8979-d0a86389a77b/images/7726b997-7e58-45f8-a5a6-9cb9a689a45a', code = 100,
-- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory: '/rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/967aec77-46d5-418b-8979-d0a86389a77b/images/7726b997-7e58-45f8-a5a6-9cb9a689a45a', code = 100
Certainly, if I check /rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/ on the SPM server, there is no 967aec77-46d5-418b-8979-d0a86389a77b subdirectory; the only entries there are the NFS mounts.
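To show what I mean, this is what I check on the SPM host (the missing UUID should correspond to the gluster storage domain):

# list what is actually linked under the data center directory
ls -l /rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/

# confirm whether the gluster volume is mounted on this host at all
mount | grep glusterfs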