This sounds a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1069772, which we solved for 3.4.1.
I wonder if we should backport it to 3.3.z.
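A quick way to check whether this is the same thing, run on the SPM host (the pool UUID below is a placeholder, not one taken from this thread):

    # list any glusterfs FUSE mounts vdsm currently holds
    mount -t fuse.glusterfs

    # gluster storage domains are normally mounted under this directory
    ls /rhev/data-center/mnt/glusterSD/

    # the pool directory should contain one link per attached storage domain
    ls -l /rhev/data-center/<pool-uuid>/

If the volume is not mounted at all, or the per-domain link under the pool directory is missing, that matches the symptom described below.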
----- Original Message -----

A slight update to this.

I did find that the gluster mount points reappear under /rhev/data-center on the SPM if I deactivate and then reactivate the gluster storage domains under the data storage domains.

As this means shutting down any VMs that are running from that storage, is there any known way of getting the gluster mount points to reattach without deactivating/reactivating a storage domain in the GUI?

Cheers
Peter

On 20 May 2014 09:37, Peter Harris <doilooksensible@gmail.com> wrote:
I am trying to migrate my VMs from my old host running oVirt 3.3.4 to a new setup running 3.4.1. My basic setup is:
OLD
vmhost1 - ovirt 3.3.4 - NFS storage domains

NEW
ovirtmgr - ovirt 3.4.1 (virt-only setup) - gluster storage domains
vmhost2 - cluster1
vmhost3 - cluster2
vmhost4 - cluster2
vmhost5 - cluster3
vmhost6 - cluster3

My gluster volumes are created via the gluster command line, and I have the two volumes listed below.

All hosts are running Scientific Linux 6.5, and the intention is to migrate vmhost1 into cluster1 of the new environment.
I have an NFS export storage domain which I am using to migrate VMs from vmhost1.

Volume Name: vol-vminf
Type: Distributed-Replicate
Volume ID: b0b456bb-76e9-42e7-bb95-3415db79d631
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: vmhost3:/storage/inf/br-inf
Brick2: vmhost4:/storage/inf/br-inf
Brick3: vmhost5:/storage/inf/br-inf
Brick4: vmhost6:/storage/inf/br-inf
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
server.allow-insecure: on

Volume Name: vol-vmimages
Type: Distribute
Volume ID: 91e2cf8b-2662-4c26-b937-84b8f5b62e2b
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: vmhost3:/storage/vmimages/br-vmimages
Brick2: vmhost3:/storage/vmimages/br-vmimages
Brick3: vmhost3:/storage/vmimages/br-vmimages
Brick4: vmhost3:/storage/vmimages/br-vmimages
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
server.allow-insecure: on
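(For reference, volumes with these options are typically created along the following lines from the gluster CLI. This is an illustrative sketch only, not the exact commands used for these volumes; the replica layout is inferred from the "2 x 2 = 4" brick count of vol-vminf.)

    # distributed-replicated volume, two replica pairs across four hosts
    gluster volume create vol-vminf replica 2 \
        vmhost3:/storage/inf/br-inf vmhost4:/storage/inf/br-inf \
        vmhost5:/storage/inf/br-inf vmhost6:/storage/inf/br-inf

    # options oVirt/vdsm expect on a storage-domain volume (vdsm runs as uid/gid 36)
    gluster volume set vol-vminf storage.owner-uid 36
    gluster volume set vol-vminf storage.owner-gid 36
    gluster volume set vol-vminf server.allow-insecure on

    gluster volume start vol-vminf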
I have had many varying failures which I have tried to match up with threads here, and I am now a bit stuck and would appreciate any help.

I quite regularly cannot import VMs, and I now cannot create a new disk for a new VM (no import). The error always seems to boil down to the following error in engine.log (the excerpt below is specifically for the creation of a new disk image on a gluster storage domain):
2014-05-20 08:51:21,136 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) [4637af09] Correlation ID: 2b0b55ab, Job ID: 1a583643-e28a-4f09-a39d-46e4fc6d20b8, Call Stack: null, Custom Event ID: -1, Message: Add-Disk operation of rhel-7_Disk1 was initiated on VM rhel-7 by peter.harris.
2014-05-20 08:51:21,137 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-9) [4637af09] BaseAsyncTask::startPollingTask: Starting to poll task 720b4d92-1425-478c-8351-4ff827b8f728.
2014-05-20 08:51:28,077 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-19) Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2014-05-20 08:51:28,084 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-19) Failed in HSMGetAllTasksStatusesVDS method
2014-05-20 08:51:28,085 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-19) SPMAsyncTask::PollTask: Polling task 720b4d92-1425-478c-8351-4ff827b8f728 (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'.
2014-05-20 08:51:28,104 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-19) BaseAsyncTask::LogEndTaskFailure: Task 720b4d92-1425-478c-8351-4ff827b8f728 (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory: '/rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/967aec77-46d5-418b-8979-d0a86389a77b/images/7726b997-7e58-45f8-a5a6-9cb9a689a45a', code = 100,
-- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory: '/rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/967aec77-46d5-418b-8979-d0a86389a77b/images/7726b997-7e58-45f8-a5a6-9cb9a689a45a', code = 100
Certainly, if I check /rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/ on the SPM server, there is no 967aec77-46d5-418b-8979-d0a86389a77b subdirectory. The only entries I have there are NFS mounts.
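(That path breaks down as /rhev/data-center/<storage-pool UUID>/<storage-domain UUID>/images/<image UUID>, so the missing 967aec77-... entry is the per-domain link vdsm normally keeps under the pool directory, presumably for the gluster data domain the new disk was destined for. A rough check on the SPM, using the UUID from the log above:)

    # one link per attached storage domain is expected in the pool directory
    ls -l /rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/

    # recent vdsm activity around gluster mounts, if any
    grep glusterSD /var/log/vdsm/vdsm.log | tail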
There appear to be no errors in the SPM vdsm.log for this disk.

=============

When I tried to import the VM (the one that I then tried to create from scratch above), I had the following errors in the SPM vdsm.log:
Thread-2220::DEBUG::2014-05-20 08:35:05,255::task::595::TaskManager.Task::(_updateState) Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::moving from state init -> state preparing
Thread-2220::INFO::2014-05-20 08:35:05,255::logUtils::44::dispatcher::(wrapper) Run and protect: deleteImage(sdUUID='615647e2-1f60-47e1-8e55-be9f7ead6f15', spUUID='06930787-a091-49a3-8217-1418c5a9881e', imgUUID='80ed133c-fd72-4d35-aae5-e1313be3cf23', postZero='false', force='false')
Thread-2220::DEBUG::2014-05-20 08:35:05,255::resourceManager::198::ResourceManager.Request::(__init__) ResName=`Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23`ReqID=`499de454-c563-4156-a3ed-13b7eb9defa6`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '1496' at 'deleteImage'
Thread-2220::DEBUG::2014-05-20 08:35:05,255::resourceManager::542::ResourceManager::(registerResource) Trying to register resource 'Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23' for lock type 'exclusive'
Thread-2220::DEBUG::2014-05-20 08:35:05,255::resourceManager::601::ResourceManager::(registerResource) Resource 'Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23' is free. Now locking as 'exclusive' (1 active user)
Thread-2220::DEBUG::2014-05-20 08:35:05,256::resourceManager::238::ResourceManager.Request::(grant) ResName=`Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23`ReqID=`499de454-c563-4156-a3ed-13b7eb9defa6`::Granted request
Thread-2220::DEBUG::2014-05-20 08:35:05,256::task::827::TaskManager.Task::(resourceAcquired) Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::_resourcesAcquired: Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23 (exclusive)
Thread-2220::DEBUG::2014-05-20 08:35:05,256::task::990::TaskManager.Task::(_decref) Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::ref 1 aborting False
Thread-2220::DEBUG::2014-05-20 08:35:05,256::resourceManager::198::ResourceManager.Request::(__init__) ResName=`Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15`ReqID=`73f79517-f13a-4e5b-999a-6f1994d2818a`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '1497' at 'deleteImage'
Thread-2220::DEBUG::2014-05-20 08:35:05,256::resourceManager::542::ResourceManager::(registerResource) Trying to register resource 'Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15' for lock type 'shared'
Thread-2220::DEBUG::2014-05-20 08:35:05,257::resourceManager::601::ResourceManager::(registerResource) Resource 'Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15' is free. Now locking as 'shared' (1 active user)
Thread-2220::DEBUG::2014-05-20 08:35:05,257::resourceManager::238::ResourceManager.Request::(grant) ResName=`Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15`ReqID=`73f79517-f13a-4e5b-999a-6f1994d2818a`::Granted request
Thread-2220::DEBUG::2014-05-20 08:35:05,257::task::827::TaskManager.Task::(resourceAcquired) Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::_resourcesAcquired: Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15 (shared)
Thread-2220::DEBUG::2014-05-20 08:35:05,257::task::990::TaskManager.Task::(_decref) Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::ref 1 aborting False
Thread-2220::ERROR::2014-05-20 08:35:05,266::hsm::1502::Storage.HSM::(deleteImage) Empty or not found image 80ed133c-fd72-4d35-aae5-e1313be3cf23 in SD 615647e2-1f60-47e1-8e55-be9f7ead6f15. {'1f41529a-e02e-4cd8-987c-b1ea4fcba2be': ImgsPar(imgs=('290f5cdf-b5d7-462b-958d-d41458a26bf6',), parent=None), '1748a8f0-8668-4f21-9b26-d2e3b180e35b': ImgsPar(imgs=('67fd552b-8b3d-4117-82d2-e801bb600992',), parent=None)}
Thread-2220::ERROR::2014-05-20 08:35:05,266::task::866::TaskManager.Task::(_setError) Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::Unexpected error
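(One way to cross-check that deleteImage error might be to list which image UUIDs storage domain 615647e2-... actually contains on disk and compare them with the 80ed133c-... image vdsm was asked to delete; the wildcarded mount paths below are guesses, since the logs do not show where that domain is mounted:)

    # file-based domains live under <mountpoint>/<sdUUID>/images/<imgUUID>
    ls -d /rhev/data-center/mnt/*/615647e2-1f60-47e1-8e55-be9f7ead6f15 \
          /rhev/data-center/mnt/glusterSD/*/615647e2-1f60-47e1-8e55-be9f7ead6f15 2>/dev/null

    # image UUIDs actually present in that domain, to compare with the log message
    ls /rhev/data-center/mnt/*/615647e2-1f60-47e1-8e55-be9f7ead6f15/images/ 2>/dev/null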
<br></div>I am clearly doing something weird<br><div><div><div><br><div><div><br><div><br><div><div><div><div><div><div><div><br><div><br></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div>
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users