[Users] Ovirt 3.3 nightly, Gluster 3.4 stable, cannot launch VM with gluster storage domain backed disk
Steve Dainard
sdainard at miovision.com
Wed Jul 17 16:50:15 UTC 2013
Completed the suggested changes:
*gluster> volume info vol1*
Volume Name: vol1
Type: Replicate
Volume ID: 97c3b2a7-0391-4fae-b541-cf04ce6bde0f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt001.miovision.corp:/mnt/storage1/vol1
Brick2: ovirt002.miovision.corp:/mnt/storage1/vol1
Options Reconfigured:
network.remote-dio: on
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
auth.allow: *
user.cifs: on
nfs.disable: off
server.allow-insecure: on
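(For reference, a sketch of how options like these are normally applied, using the volume name above; the exact values naturally depend on your setup:)
gluster volume set vol1 storage.owner-uid 36
gluster volume set vol1 storage.owner-gid 36
gluster volume set vol1 server.allow-insecure on
gluster volume info vol1    # confirm they show up under "Options Reconfigured"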
*gluster> volume status vol1*
Status of volume: vol1
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick ovirt001.miovision.corp:/mnt/storage1/vol1 49152 Y 25148
Brick ovirt002.miovision.corp:/mnt/storage1/vol1 49152 Y 16692
NFS Server on localhost 2049 Y 25163
Self-heal Daemon on localhost N/A Y 25167
NFS Server on ovirt002.miovision.corp 2049 Y 16702
Self-heal Daemon on ovirt002.miovision.corp N/A Y 16706
There are no active volume tasks
*Same error when running the VM:*
VM VM1 is down. Exit message: internal error process exited while
connecting to monitor: qemu-system-x86_64: -drive
file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a,if=none,id=drive-virtio-disk0,format=raw,serial=238cc6cf-070c-4483-b686-c0de7ddf0dfa,cache=none,werror=stop,rerror=stop,aio=threads:
could not open disk image
gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a:
No such file or directory.
VM VM1 was started by admin at internal (Host: ovirt001).
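(A quick way to check whether qemu can reach that image over libgfapi at all, outside of oVirt, is something like the following; this assumes the host's qemu-img build includes the gluster block driver, and running it as the vdsm user mimics the unprivileged-port case that server.allow-insecure is meant to cover:)
sudo -u vdsm qemu-img info gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a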
*engine.log:*
2013-07-17 12:39:27,714 INFO
[org.ovirt.engine.core.bll.LoginAdminUserCommand] (ajp--127.0.0.1-8702-3)
Running command: LoginAdminUserCommand internal: false.
2013-07-17 12:39:27,886 INFO [org.ovirt.engine.core.bll.LoginUserCommand]
(ajp--127.0.0.1-8702-7) Running command: LoginUserCommand internal: false.
2013-07-17 12:39:31,817 ERROR
[org.ovirt.engine.core.utils.servlet.ServletUtils] (ajp--127.0.0.1-8702-1)
Can't read file "/usr/share/doc/ovirt-engine/manual/DocumentationPath.csv" for request "/docs/DocumentationPath.csv", will send a 404 error response.
2013-07-17 12:39:49,285 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(ajp--127.0.0.1-8702-4) [8208368] Lock Acquired to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM, sharedLocks= ]
2013-07-17 12:39:49,336 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(ajp--127.0.0.1-8702-4) [8208368] START, IsVmDuringInitiatingVDSCommand( vmId = 8e2c9057-deee-48a6-8314-a34530fc53cb), log id: 20ba16b5
2013-07-17 12:39:49,337 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(ajp--127.0.0.1-8702-4) [8208368] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 20ba16b5
2013-07-17 12:39:49,485 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-50) [8208368] Running command: RunVmCommand internal: false. Entities affected : ID: 8e2c9057-deee-48a6-8314-a34530fc53cb Type: VM
2013-07-17 12:39:49,569 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-50)
[8208368] START, CreateVmVDSCommand(HostName = ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b, vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, vm=VM [VM1]), log id: 3f04954e
2013-07-17 12:39:49,583 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-6-thread-50) [8208368] START, CreateVDSCommand(HostName = ovirt001,
HostId =
d07967ab-3764-47ff-8755-bc539a7feb3b,
vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, vm=VM [VM1]), log id: 7e3dd761
2013-07-17 12:39:49,629 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-6-thread-50) [8208368] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand
spiceSslCipherSuite=DEFAULT,memSize=1024,kvmEnable=true,smp=1,vmType=kvm,emulatedMachine=pc-1.0,keyboardLayout=en-us,pitReinjection=false,nice=0,display=vnc,smartcardEnable=false,tabletEnable=true,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,timeOffset=0,transparentHugePages=true,vmId=8e2c9057-deee-48a6-8314-a34530fc53cb,devices=[Ljava.util.HashMap;@422d1a47,acpiEnable=true,vmName=VM1,cpuType=SandyBridge,custom={}
2013-07-17 12:39:49,632 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-6-thread-50) [8208368] FINISH, CreateVDSCommand, log id: 7e3dd761
2013-07-17 12:39:49,660 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-50)
[8208368] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 3f04954e
2013-07-17 12:39:49,662 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-50) [8208368] Lock freed to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM, sharedLocks= ]
2013-07-17 12:39:51,459 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-85) START, DestroyVDSCommand(HostName =
ovirt001,
HostId = d07967ab-3764-47ff-8755-bc539a7feb3b,
vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, force=false, secondsToWait=0,
gracefully=false), log id: 60626686
2013-07-17 12:39:51,548 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-85) FINISH, DestroyVDSCommand, log id:
60626686
2013-07-17 12:39:51,635 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-85) Running on vds during rerun failed vm:
null
2013-07-17 12:39:51,641 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-85) vm VM1 running in db and not running in
vds - add to
rerun treatment. vds ovirt001
2013-07-17 12:39:51,660 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-85) Rerun vm
8e2c9057-deee-48a6-8314-a34530fc53cb. Called
from vds ovirt001
2013-07-17 12:39:51,729 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-50) Lock Acquired to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM, sharedLocks= ]
2013-07-17 12:39:51,753 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-6-thread-50) START, IsVmDuringInitiatingVDSCommand( vmId = 8e2c9057-deee-48a6-8314-a34530fc53cb), log id: 7647c7d4
2013-07-17 12:39:51,753 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-6-thread-50) FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 7647c7d4
2013-07-17 12:39:51,794 INFO
[org.ovirt.engine.core.bll.scheduling.VdsSelector] (pool-6-thread-50) VDS ovirt001 d07967ab-3764-47ff-8755-bc539a7feb3b have failed running this VM in the current selection cycle
2013-07-17 12:39:51,794 WARN [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-50) CanDoAction of action RunVm failed.
Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VDS_VM_CLUSTER
2013-07-17 12:39:51,795 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-50) Lock freed to object EngineLock [exclusiveLocks= key:
8e2c9057-deee-48a6-8314-a34530fc53cb value: VM
, sharedLocks= ]
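(The engine log only shows the rerun bookkeeping; the underlying qemu error is usually easier to see on the host side. Assuming the default Fedora/EL log locations, something like this should show it:)
grep -i gluster /var/log/vdsm/vdsm.log | tail
cat /var/log/libvirt/qemu/VM1.log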
*Steve Dainard *
Infrastructure Manager
Miovision <http://miovision.com/> | *Rethink Traffic*
519-513-2407 ex.250
877-646-8476 (toll-free)
On Wed, Jul 17, 2013 at 12:21 PM, Vijay Bellur <vbellur at redhat.com> wrote:
> On 07/17/2013 09:04 PM, Steve Dainard wrote:
>
>
>>
>>
>> *Web-UI displays:*
>>
>> VM VM1 is down. Exit message: internal error process exited while
>> connecting to monitor: qemu-system-x86_64: -drive
>> file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a,if=none,id=drive-virtio-disk0,format=raw,serial=238cc6cf-070c-4483-b686-c0de7ddf0dfa,cache=none,werror=stop,rerror=stop,aio=threads:
>> could not open disk image
>> gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a:
>> No such file or directory.
>> VM VM1 was started by admin at internal (Host: ovirt001).
>> The disk VM1_Disk1 was successfully added to VM VM1.
>>
>> *I can see the image on the gluster machine, and it looks to have the
>> correct permissions:*
>>
>> [root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]# pwd
>> /mnt/storage1/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa
>> [root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]# ll
>> total 1028
>> -rw-rw----. 2 vdsm kvm 32212254720 Jul 17 11:11 ff2bca2d-4ed1-46c6-93c8-22a39bb1626a
>> -rw-rw----. 2 vdsm kvm 1048576 Jul 17 11:11 ff2bca2d-4ed1-46c6-93c8-22a39bb1626a.lease
>> -rw-r--r--. 2 vdsm kvm 268 Jul 17 11:11 ff2bca2d-4ed1-46c6-93c8-22a39bb1626a.meta
>>
>
> Can you please try after doing these changes:
>
> 1) gluster volume set <volname> server.allow-insecure on
>
> 2) Edit /etc/glusterfs/glusterd.vol to contain this line:
> option rpc-auth-allow-insecure on
>
> Post 2), restarting glusterd would be necessary.
>
> Thanks,
> Vijay
>
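(For anyone hitting the same thing: the glusterd.vol edit in step 2 typically ends up looking like the sketch below. The other lines in the management volume block vary by Gluster version, so only the rpc-auth-allow-insecure option is the point here; glusterd then needs a restart on each node.)
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option rpc-auth-allow-insecure on
end-volume

service glusterd restart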