[Users] Ovirt 3.3 nightly, Gluster 3.4 stable, cannot launch VM with gluster storage domain backed disk

Vijay Bellur vbellur@redhat.com
Wed Jul 17 16:21:43 UTC 2013


On 07/17/2013 09:04 PM, Steve Dainard wrote:

>
>
>
> *Web-UI displays:*
> VM VM1 is down. Exit message: internal error process exited while
> connecting to monitor: qemu-system-x86_64: -drive
> file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a,if=none,id=drive-virtio-disk0,format=raw,serial=238cc6cf-070c-4483-b686-c0de7ddf0dfa,cache=none,werror=stop,rerror=stop,aio=threads:
> could not open disk image
> gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a:
> No such file or directory .
> VM VM1 was started by admin@internal (Host: ovirt001).
> The disk VM1_Disk1 was successfully added to VM VM1.
>
> *I can see the image on the gluster machine, and it looks to have the
> correct permissions:*
> [root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]# pwd
> /mnt/storage1/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa
> [root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]# ll
> total 1028
> -rw-rw----. 2 vdsm kvm 32212254720 Jul 17 11:11
> ff2bca2d-4ed1-46c6-93c8-22a39bb1626a
> -rw-rw----. 2 vdsm kvm     1048576 Jul 17 11:11
> ff2bca2d-4ed1-46c6-93c8-22a39bb1626a.lease
> -rw-r--r--. 2 vdsm kvm         268 Jul 17 11:11
> ff2bca2d-4ed1-46c6-93c8-22a39bb1626a.meta

Can you please try again after making these changes:

1) gluster volume set <volname> server.allow-insecure on

2) Edit /etc/glusterfs/glusterd.vol to contain this line:
             option rpc-auth-allow-insecure on

After step 2), glusterd needs to be restarted for the change to take effect
(a combined sketch of both steps follows).
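
Roughly, the whole sequence on the gluster node would look like this (a
sketch only; substitute your actual volume name for <volname>, and the
restart command may differ by distribution):

    # 1) allow client connections from unprivileged ports for this volume
    gluster volume set <volname> server.allow-insecure on

    # 2) allow insecure connections to glusterd itself: add the following
    #    line inside the "volume management" block of /etc/glusterfs/glusterd.vol
    #       option rpc-auth-allow-insecure on

    # restart glusterd so the glusterd.vol change takes effect
    service glusterd restart

Depending on the version, already-running brick processes may also need the
volume to be stopped and started again before server.allow-insecure takes
effect.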

Thanks,
Vijay


