[Users] Ovirt 3.3 nightly, Gluster 3.4 stable, cannot launch VM with gluster storage domain backed disk

I'm getting an error when attempting to start a VM with a disk in a gluster storage domain. Note that gluster is running on the same host as the oVirt virt node, but is not managed by the oVirt manager.

*oVirt host RPMs:*
vdsm-xmlrpc-4.11.0-143.git5fe89d4.fc18.noarch
vdsm-python-cpopen-4.11.0-142.git24ad94d.fc18.x86_64
vdsm-python-4.11.0-143.git5fe89d4.fc18.x86_64
vdsm-cli-4.11.0-143.git5fe89d4.fc18.noarch
vdsm-4.11.0-143.git5fe89d4.fc18.x86_64
glusterfs-3.4.0-1.fc18.x86_64
glusterfs-fuse-3.4.0-1.fc18.x86_64
glusterfs-server-3.4.0-1.fc18.x86_64
glusterfs-rdma-3.4.0-1.fc18.x86_64

*oVirt manager RPMs:*
ovirt-engine-webadmin-portal-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-log-collector-3.3.0-0.2.master.20130715.git8affa81.fc18.noarch
ovirt-host-deploy-java-1.1.0-0.2.master.20130716.git26f4110.fc18.noarch
ovirt-engine-backend-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-iso-uploader-3.3.0-0.2.master.20130715.gitdf42ec9.fc18.noarch
ovirt-engine-userportal-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-engine-restapi-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-engine-tools-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-engine-dbscripts-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-host-deploy-1.1.0-0.2.master.20130716.git26f4110.fc18.noarch
ovirt-engine-sdk-3.3.0.3-1.20130621.git2bbf0b8.fc18.noarch
ovirt-engine-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-image-uploader-3.3.0-0.2.master.20130715.git7674462.fc18.noarch
ovirt-engine-setup-3.3.0-0.2.master.20130716053857.git3dd1ea3.fc18.noarch

*Web-UI displays:*
VM VM1 is down. Exit message: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a,if=none,id=drive-virtio-disk0,format=raw,serial=238cc6cf-070c-4483-b686-c0de7ddf0dfa,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a: No such file or directory.
VM VM1 was started by admin@internal (Host: ovirt001).
The disk VM1_Disk1 was successfully added to VM VM1.

*I can see the image on the gluster machine, and it looks to have the correct permissions:*
[root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]# pwd
/mnt/storage1/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa
[root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]# ll
total 1028
-rw-rw----. 2 vdsm kvm 32212254720 Jul 17 11:11 ff2bca2d-4ed1-46c6-93c8-22a39bb1626a
-rw-rw----. 2 vdsm kvm     1048576 Jul 17 11:11 ff2bca2d-4ed1-46c6-93c8-22a39bb1626a.lease
-rw-r--r--. 2 vdsm kvm         268 Jul 17 11:11 ff2bca2d-4ed1-46c6-93c8-22a39bb1626a.meta
[root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]#

*engine.log:*
2013-07-17 11:12:17,474 INFO [org.ovirt.engine.core.bll.AddDiskCommand] (ajp--127.0.0.1-8702-6) Running command: AddDiskCommand internal: false. Entities affected : ID: 8e2c9057-deee-48a6-8314-a34530fc53cb Type: VM, ID: a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf Type: Storage
2013-07-17 11:12:17,691 INFO [org.ovirt.engine.core.bll.AddImageFromScratchCommand] (ajp--127.0.0.1-8702-6) Running command: AddImageFromScratchCommand internal: true. Entities affected : ID: a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf Type: Storage
2013-07-17 11:12:17,746 INFO [org.ovirt.engine.core.bll.AddImageFromScratchCommand] (ajp--127.0.0.1-8702-6) Lock freed to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM_DISK_BOOT , sharedLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM ]
2013-07-17 11:12:17,752 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (ajp--127.0.0.1-8702-6) START, CreateImageVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, compatabilityVersion = 3.3, storageDomainId = a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf, imageGroupId = 238cc6cf-070c-4483-b686-c0de7ddf0dfa, imageSizeInBytes = 32212254720, volumeFormat = RAW, newImageId = ff2bca2d-4ed1-46c6-93c8-22a39bb1626a, newImageDescription = ), log id: 4a1dbc41
2013-07-17 11:12:17,754 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (ajp--127.0.0.1-8702-6) -- CreateImageVDSCommand::ExecuteIrsBrokerCommand: calling 'createVolume' with two new parameters: description and UUID
2013-07-17 11:12:17,755 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (ajp--127.0.0.1-8702-6) -- createVolume parameters: sdUUID=a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 imgGUID=238cc6cf-070c-4483-b686-c0de7ddf0dfa size=32,212,254,720 bytes volFormat=RAW volType=Sparse volUUID=ff2bca2d-4ed1-46c6-93c8-22a39bb1626a descr= srcImgGUID=00000000-0000-0000-0000-000000000000 srcVolUUID=00000000-0000-0000-0000-000000000000
2013-07-17 11:12:17,995 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (ajp--127.0.0.1-8702-6) FINISH, CreateImageVDSCommand, return: ff2bca2d-4ed1-46c6-93c8-22a39bb1626a, log id: 4a1dbc41
2013-07-17 11:12:18,129 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-6) CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 1329503b-d488-4fec-a5b0-10849679f025
2013-07-17 11:12:18,130 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-6) CommandMultiAsyncTasks::AttachTask: Attaching task f222c17e-2402-4928-a0db-4f9fcaeb08b6 to command 1329503b-d488-4fec-a5b0-10849679f025.
2013-07-17 11:12:18,156 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-6) Adding task f222c17e-2402-4928-a0db-4f9fcaeb08b6 (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..
2013-07-17 11:12:18,279 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-6) BaseAsyncTask::StartPollingTask: Starting to poll task f222c17e-2402-4928-a0db-4f9fcaeb08b6.
2013-07-17 11:12:19,122 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-92) Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2013-07-17 11:12:19,147 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-92) SPMAsyncTask::PollTask: Polling task f222c17e-2402-4928-a0db-4f9fcaeb08b6 (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status running.
2013-07-17 11:12:19,150 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-92) Finished polling Tasks, will poll again in 10 seconds.
2013-07-17 11:12:29,170 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-97) SPMAsyncTask::PollTask: Polling task f222c17e-2402-4928-a0db-4f9fcaeb08b6 (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'success'.
2013-07-17 11:12:29,209 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-97) BaseAsyncTask::OnTaskEndSuccess: Task f222c17e-2402-4928-a0db-4f9fcaeb08b6 (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended successfully.
2013-07-17 11:12:29,211 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-97) CommandAsyncTask::EndActionIfNecessary: All tasks of entity 1329503b-d488-4fec-a5b0-10849679f025 has ended -> executing EndAction
2013-07-17 11:12:29,214 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-97) CommandAsyncTask::EndAction: Ending action for 1 tasks (command ID: 1329503b-d488-4fec-a5b0-10849679f025): calling EndAction .
2013-07-17 11:12:29,219 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-49) CommandAsyncTask::EndCommandAction [within thread] context: Attempting to EndAction AddDisk, executionIndex: 0
2013-07-17 11:12:29,265 INFO [org.ovirt.engine.core.bll.AddDiskCommand] (pool-6-thread-49) [1cf36ce5] Ending command successfully: org.ovirt.engine.core.bll.AddDiskCommand
2013-07-17 11:12:29,315 INFO [org.ovirt.engine.core.bll.AddImageFromScratchCommand] (pool-6-thread-49) [78361e89] Ending command successfully: org.ovirt.engine.core.bll.AddImageFromScratchCommand
2013-07-17 11:12:29,343 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (pool-6-thread-49) [78361e89] START, GetImageInfoVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, compatabilityVersion = null, storageDomainId = a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf, imageGroupId = 238cc6cf-070c-4483-b686-c0de7ddf0dfa, imageId = ff2bca2d-4ed1-46c6-93c8-22a39bb1626a), log id: 61787855
2013-07-17 11:12:29,462 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (pool-6-thread-49) [78361e89] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.DiskImage@380026d0, log id: 61787855
2013-07-17 11:12:29,547 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-49) CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for action type AddDisk completed, handling the result.
2013-07-17 11:12:29,549 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-49) CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for action type AddDisk succeeded, clearing tasks.
2013-07-17 11:12:29,562 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (pool-6-thread-49) SPMAsyncTask::ClearAsyncTask: Attempting to clear task f222c17e-2402-4928-a0db-4f9fcaeb08b6
2013-07-17 11:12:29,566 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (pool-6-thread-49) START, SPMClearTaskVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, compatabilityVersion = null, taskId = f222c17e-2402-4928-a0db-4f9fcaeb08b6), log id: fb2eabf
2013-07-17 11:12:29,572 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (pool-6-thread-49) START, HSMClearTaskVDSCommand(HostName = ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b, taskId=f222c17e-2402-4928-a0db-4f9fcaeb08b6), log id: 2b51a9a6
2013-07-17 11:12:29,600 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (pool-6-thread-49) FINISH, HSMClearTaskVDSCommand, log id: 2b51a9a6
2013-07-17 11:12:29,601 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (pool-6-thread-49) FINISH, SPMClearTaskVDSCommand, log id: fb2eabf
2013-07-17 11:12:29,615 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (pool-6-thread-49) BaseAsyncTask::RemoveTaskFromDB: Removed task f222c17e-2402-4928-a0db-4f9fcaeb08b6 from DataBase
2013-07-17 11:12:29,616 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-49) CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 1329503b-d488-4fec-a5b0-10849679f025
2013-07-17 11:12:36,749 INFO [org.ovirt.engine.core.bll.RunVmCommand] (ajp--127.0.0.1-8702-5) [f287998] Lock Acquired to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM , sharedLocks= ]
2013-07-17 11:12:36,773 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--127.0.0.1-8702-5) [f287998] START, IsVmDuringInitiatingVDSCommand( vmId = 8e2c9057-deee-48a6-8314-a34530fc53cb), log id: 482ec087
2013-07-17 11:12:36,773 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--127.0.0.1-8702-5) [f287998] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 482ec087
2013-07-17 11:12:36,932 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-49) [f287998] Running command: RunVmCommand internal: false. Entities affected : ID: 8e2c9057-deee-48a6-8314-a34530fc53cb Type: VM
2013-07-17 11:12:37,038 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-49) [f287998] START, CreateVmVDSCommand(HostName = ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b, vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, vm=VM [VM1]), log id: 3ae8c11e
2013-07-17 11:12:37,057 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-49) [f287998] START, CreateVDSCommand(HostName = ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b, vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, vm=VM [VM1]), log id: 367ff496
2013-07-17 11:12:37,168 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-49) [f287998] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand spiceSslCipherSuite=DEFAULT,memSize=1024,kvmEnable=true,smp=1,vmType=kvm,emulatedMachine=pc-1.0,keyboardLayout=en-us,pitReinjection=false,nice=0,display=vnc,smartcardEnable=false,tabletEnable=true,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,timeOffset=0,transparentHugePages=true,vmId=8e2c9057-deee-48a6-8314-a34530fc53cb,devices=[Ljava.util.HashMap;@f1e0215 ,acpiEnable=true,vmName=VM1,cpuType=SandyBridge,custom={}
2013-07-17 11:12:37,172 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-49) [f287998] FINISH, CreateVDSCommand, log id: 367ff496
2013-07-17 11:12:37,253 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-49) [f287998] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 3ae8c11e
2013-07-17 11:12:37,255 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-49) [f287998] Lock freed to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM , sharedLocks= ]
2013-07-17 11:12:39,267 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-2) START, DestroyVDSCommand(HostName = ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b, vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, force=false, secondsToWait=0, gracefully=false), log id: 20fae62
2013-07-17 11:12:39,354 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-2) FINISH, DestroyVDSCommand, log id: 20fae62
2013-07-17 11:12:39,433 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-2) Running on vds during rerun failed vm: null
2013-07-17 11:12:39,437 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-2) vm VM1 running in db and not running in vds - add to rerun treatment. vds ovirt001
2013-07-17 11:12:39,441 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-2) START, FullListVdsCommand(HostName = ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b, vds=Host[ovirt001], vmIds=[8e2c9057-deee-48a6-8314-a34530fc53cb]), log id: 119758a
2013-07-17 11:12:39,453 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-2) FINISH, FullListVdsCommand, return: [Ljava.util.HashMap;@2e73b796, log id: 119758a
2013-07-17 11:12:39,478 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-2) Rerun vm 8e2c9057-deee-48a6-8314-a34530fc53cb. Called from vds ovirt001
2013-07-17 11:12:39,574 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-49) Lock Acquired to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM , sharedLocks= ]
2013-07-17 11:12:39,603 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-6-thread-49) START, IsVmDuringInitiatingVDSCommand( vmId = 8e2c9057-deee-48a6-8314-a34530fc53cb), log id: 497e83ec
2013-07-17 11:12:39,606 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-6-thread-49) FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 497e83ec
2013-07-17 11:12:39,661 INFO [org.ovirt.engine.core.bll.scheduling.VdsSelector] (pool-6-thread-49) VDS ovirt001 d07967ab-3764-47ff-8755-bc539a7feb3b have failed running this VM in the current selection cycle
2013-07-17 11:12:39,663 WARN [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-49) CanDoAction of action RunVm failed. Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VDS_VM_CLUSTER
2013-07-17 11:12:39,664 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-49) Lock freed to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM , sharedLocks= ]
2013-07-17 11:13:49,097 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-42) Setting new tasks map. The map contains now 0 tasks
2013-07-17 11:13:49,099 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-42) Cleared all tasks of pool 5849b030-626e-47cb-ad90-3ce782d831b3.

*Steve Dainard*
Infrastructure Manager
Miovision <http://miovision.com/> | *Rethink Traffic*
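A quick way to separate an oVirt-side problem from a qemu/gluster problem in a case like this is to point qemu-img at the same gluster:// URL by hand. This is only a sketch: it assumes qemu-img is installed on the virt host and reuses the exact URL from the exit message above; if the local qemu build has no GlusterFS block driver at all, the failure tends to be an "unknown protocol" style error rather than "No such file or directory".

# Run on ovirt001. First, does this qemu link against libgfapi at all?
ldd /usr/bin/qemu-img | grep -i gfapi

# Try to open the exact image qemu complained about, over libgfapi:
qemu-img info gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a

# For comparison, open the same file through the local brick path shown above:
qemu-img info /mnt/storage1/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a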

On 07/17/2013 09:04 PM, Steve Dainard wrote:
*Web-UI displays:*
VM VM1 is down. Exit message: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a,if=none,id=drive-virtio-disk0,format=raw,serial=238cc6cf-070c-4483-b686-c0de7ddf0dfa,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a: No such file or directory.
VM VM1 was started by admin@internal (Host: ovirt001).
The disk VM1_Disk1 was successfully added to VM VM1.

*I can see the image on the gluster machine, and it looks to have the correct permissions:*
[root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]# pwd
/mnt/storage1/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa
[root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]# ll
total 1028
-rw-rw----. 2 vdsm kvm 32212254720 Jul 17 11:11 ff2bca2d-4ed1-46c6-93c8-22a39bb1626a
-rw-rw----. 2 vdsm kvm     1048576 Jul 17 11:11 ff2bca2d-4ed1-46c6-93c8-22a39bb1626a.lease
-rw-r--r--. 2 vdsm kvm         268 Jul 17 11:11 ff2bca2d-4ed1-46c6-93c8-22a39bb1626a.meta
Can you please try after doing these changes:

1) gluster volume set <volname> server.allow-insecure on

2) Edit /etc/glusterfs/glusterd.vol to contain this line:
   option rpc-auth-allow-insecure on

Post 2), restarting glusterd would be necessary.

Thanks,
Vijay
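Spelled out, the sequence would look roughly like this. It is only a sketch: vol1 is the volume name from this thread, the glusterd.vol layout shown is the usual stock file, and the restart command assumes a systemd host such as Fedora 18.

gluster volume set vol1 server.allow-insecure on

# After editing, /etc/glusterfs/glusterd.vol should carry the new option
# inside the management volume block, e.g.:
#
#   volume management
#       type mgmt/glusterd
#       option working-directory /var/lib/glusterd
#       ...
#       option rpc-auth-allow-insecure on
#   end-volume

systemctl restart glusterd    # or: service glusterd restart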

Completed changes:

*gluster> volume info vol1*
Volume Name: vol1
Type: Replicate
Volume ID: 97c3b2a7-0391-4fae-b541-cf04ce6bde0f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt001.miovision.corp:/mnt/storage1/vol1
Brick2: ovirt002.miovision.corp:/mnt/storage1/vol1
Options Reconfigured:
network.remote-dio: on
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
auth.allow: *
user.cifs: on
nfs.disable: off
server.allow-insecure: on

*gluster> volume status vol1*
Status of volume: vol1
Gluster process                                     Port    Online  Pid
------------------------------------------------------------------------------
Brick ovirt001.miovision.corp:/mnt/storage1/vol1    49152   Y       25148
Brick ovirt002.miovision.corp:/mnt/storage1/vol1    49152   Y       16692
NFS Server on localhost                             2049    Y       25163
Self-heal Daemon on localhost                       N/A     Y       25167
NFS Server on ovirt002.miovision.corp               2049    Y       16702
Self-heal Daemon on ovirt002.miovision.corp         N/A     Y       16706

There are no active volume tasks

*Same error on VM run:*
VM VM1 is down. Exit message: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a,if=none,id=drive-virtio-disk0,format=raw,serial=238cc6cf-070c-4483-b686-c0de7ddf0dfa,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a: No such file or directory.
VM VM1 was started by admin@internal (Host: ovirt001).

*engine.log:*
2013-07-17 12:39:27,714 INFO [org.ovirt.engine.core.bll.LoginAdminUserCommand] (ajp--127.0.0.1-8702-3) Running command: LoginAdminUserCommand internal: false.
2013-07-17 12:39:27,886 INFO [org.ovirt.engine.core.bll.LoginUserCommand] (ajp--127.0.0.1-8702-7) Running command: LoginUserCommand internal: false.
2013-07-17 12:39:31,817 ERROR [org.ovirt.engine.core.utils.servlet.ServletUtils] (ajp--127.0.0.1-8702-1) Can't read file "/usr/share/doc/ovirt-engine/manual/DocumentationPath.csv" for request "/docs/DocumentationPath.csv", will send a 404 error response.
2013-07-17 12:39:49,285 INFO [org.ovirt.engine.core.bll.RunVmCommand] (ajp--127.0.0.1-8702-4) [8208368] Lock Acquired to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM , sharedLocks= ]
2013-07-17 12:39:49,336 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--127.0.0.1-8702-4) [8208368] START, IsVmDuringInitiatingVDSCommand( vmId = 8e2c9057-deee-48a6-8314-a34530fc53cb), log id: 20ba16b5
2013-07-17 12:39:49,337 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--127.0.0.1-8702-4) [8208368] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 20ba16b5
2013-07-17 12:39:49,485 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-50) [8208368] Running command: RunVmCommand internal: false. Entities affected : ID: 8e2c9057-deee-48a6-8314-a34530fc53cb Type: VM
2013-07-17 12:39:49,569 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-50) [8208368] START, CreateVmVDSCommand(HostName = ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b, vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, vm=VM [VM1]), log id: 3f04954e
2013-07-17 12:39:49,583 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-50) [8208368] START, CreateVDSCommand(HostName = ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b, vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, vm=VM [VM1]), log id: 7e3dd761
2013-07-17 12:39:49,629 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-50) [8208368] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand spiceSslCipherSuite=DEFAULT,memSize=1024,kvmEnable=true,smp=1,vmType=kvm,emulatedMachine=pc-1.0,keyboardLayout=en-us,pitReinjection=false,nice=0,display=vnc,smartcardEnable=false,tabletEnable=true,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,timeOffset=0,transparentHugePages=true,vmId=8e2c9057-deee-48a6-8314-a34530fc53cb,devices=[Ljava.util.HashMap;@422d1a47 ,acpiEnable=true,vmName=VM1,cpuType=SandyBridge,custom={}
2013-07-17 12:39:49,632 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-50) [8208368] FINISH, CreateVDSCommand, log id: 7e3dd761
2013-07-17 12:39:49,660 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-50) [8208368] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 3f04954e
2013-07-17 12:39:49,662 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-50) [8208368] Lock freed to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM , sharedLocks= ]
2013-07-17 12:39:51,459 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-85) START, DestroyVDSCommand(HostName = ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b, vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, force=false, secondsToWait=0, gracefully=false), log id: 60626686
2013-07-17 12:39:51,548 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-85) FINISH, DestroyVDSCommand, log id: 60626686
2013-07-17 12:39:51,635 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-85) Running on vds during rerun failed vm: null
2013-07-17 12:39:51,641 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-85) vm VM1 running in db and not running in vds - add to rerun treatment. vds ovirt001
2013-07-17 12:39:51,660 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-85) Rerun vm 8e2c9057-deee-48a6-8314-a34530fc53cb. Called from vds ovirt001
2013-07-17 12:39:51,729 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-50) Lock Acquired to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM , sharedLocks= ]
2013-07-17 12:39:51,753 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-6-thread-50) START, IsVmDuringInitiatingVDSCommand( vmId = 8e2c9057-deee-48a6-8314-a34530fc53cb), log id: 7647c7d4
2013-07-17 12:39:51,753 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-6-thread-50) FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 7647c7d4
2013-07-17 12:39:51,794 INFO [org.ovirt.engine.core.bll.scheduling.VdsSelector] (pool-6-thread-50) VDS ovirt001 d07967ab-3764-47ff-8755-bc539a7feb3b have failed running this VM in the current selection cycle
2013-07-17 12:39:51,794 WARN [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-50) CanDoAction of action RunVm failed. Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VDS_VM_CLUSTER
2013-07-17 12:39:51,795 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-50) Lock freed to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM , sharedLocks= ]

*Steve Dainard*
Infrastructure Manager
Miovision <http://miovision.com/> | *Rethink Traffic*
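For what it's worth, the volume info above only shows that server.allow-insecure took effect on the volume; it does not show whether the glusterd.vol edit was made or whether glusterd was restarted afterwards. A rough way to confirm both on ovirt001 (illustrative commands, not taken from the original thread):

gluster volume info vol1 | grep allow-insecure
grep -n rpc-auth-allow-insecure /etc/glusterfs/glusterd.vol

# confirm glusterd was (re)started after the edit
ps -o lstart= -p $(pidof glusterd)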

On 07/17/2013 10:20 PM, Steve Dainard wrote:
Completed changes:
*gluster> volume info vol1*
Volume Name: vol1
Type: Replicate
Volume ID: 97c3b2a7-0391-4fae-b541-cf04ce6bde0f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt001.miovision.corp:/mnt/storage1/vol1
Brick2: ovirt002.miovision.corp:/mnt/storage1/vol1
Options Reconfigured:
network.remote-dio: on
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
auth.allow: *
user.cifs: on
nfs.disable: off
server.allow-insecure: on

*Same error on VM run:*
VM VM1 is down. Exit message: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a,if=none,id=drive-virtio-disk0,format=raw,serial=238cc6cf-070c-4483-b686-c0de7ddf0dfa,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a: No such file or directory.
VM VM1 was started by admin@internal (Host: ovirt001).
Do you see any errors in the glusterd log while trying to run the VM? The log files can be found under /var/log/glusterfs/... on ovirt001.

Thanks,
Vijay
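Concretely, the logs worth tailing on ovirt001 while reproducing the failure would be the glusterd log and the brick log for /mnt/storage1/vol1. The glusterd log name matches the output quoted later in this thread; the brick log name is derived from the brick path, so the exact filename below is an assumption:

tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
tail -n 100 /var/log/glusterfs/bricks/mnt-storage1-vol1.log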

On VM start:

ovirt001# tail -f /var/log/glusterfs/*

==> cli.log <==
[2013-07-17 20:23:09.187585] W [rpc-transport.c:175:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
[2013-07-17 20:23:09.189638] I [socket.c:3480:socket_init] 0-glusterfs: SSL support is NOT enabled
[2013-07-17 20:23:09.189660] I [socket.c:3495:socket_init] 0-glusterfs: using system polling thread

==> etc-glusterfs-glusterd.vol.log <==
[2013-07-17 20:23:09.252219] I [glusterd-handler.c:1007:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req

==> cli.log <==
[2013-07-17 20:23:09.252506] I [cli-rpc-ops.c:538:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
[2013-07-17 20:23:09.252961] E [cli-xml-output.c:2572:cli_xml_output_vol_info_end] 0-cli: Returning 0
[2013-07-17 20:23:09.252998] I [cli-rpc-ops.c:771:gf_cli_get_volume_cbk] 0-cli: Returning: 0
[2013-07-17 20:23:09.253107] I [input.c:36:cli_batch] 0-: Exiting with: 0

ovirt001# tail -f /var/log/vdsm/* (see attachment)

*Steve Dainard*
Infrastructure Manager
Miovision <http://miovision.com/> | *Rethink Traffic*
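The gluster entries above are just the CLI chatter from running `gluster volume info`, not from the VM start itself. When qemu exits this early, the per-VM libvirt log on the host usually carries the same "could not open disk image" line plus anything libgfapi printed on the client side; the path below is the stock libvirt location and is an assumption here, not something shown in the thread:

tail -n 50 /var/log/libvirt/qemu/VM1.log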
participants (2):
- Steve Dainard
- Vijay Bellur