<div dir="ltr">Completed changes:<div><br></div><div><div><b>gluster&gt; volume info vol1</b></div><div> </div><div>Volume Name: vol1</div><div>Type: Replicate</div><div>Volume ID: 97c3b2a7-0391-4fae-b541-cf04ce6bde0f</div>
<div>Status: Started</div><div>Number of Bricks: 1 x 2 = 2</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: ovirt001.miovision.corp:/mnt/storage1/vol1</div><div>Brick2: ovirt002.miovision.corp:/mnt/storage1/vol1</div>
<div>Options Reconfigured:</div><div>network.remote-dio: on</div><div>cluster.eager-lock: enable</div><div>performance.stat-prefetch: off</div><div>performance.io-cache: off</div><div>performance.read-ahead: off</div><div>
performance.quick-read: off</div><div>storage.owner-gid: 36</div><div>storage.owner-uid: 36</div><div>auth.allow: *</div><div>user.cifs: on</div><div>nfs.disable: off</div><div>server.allow-insecure: on</div></div><div><br>
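
For the archive, the changes were applied roughly as follows (a sketch, not a transcript; the restart command assumes a SysV-style init, adjust for systemd hosts):

  # 1) allow connections from non-privileged (insecure) client ports on the volume
  gluster volume set vol1 server.allow-insecure on

  # 2) inside the 'volume management' block of /etc/glusterfs/glusterd.vol, add:
  #      option rpc-auth-allow-insecure on
  # then restart glusterd:
  service glusterd restart    # or: systemctl restart glusterd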
*gluster> volume status vol1*

Status of volume: vol1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick ovirt001.miovision.corp:/mnt/storage1/vol1        49152   Y       25148
Brick ovirt002.miovision.corp:/mnt/storage1/vol1        49152   Y       16692
NFS Server on localhost                                 2049    Y       25163
Self-heal Daemon on localhost                           N/A     Y       25167
NFS Server on ovirt002.miovision.corp                   2049    Y       16702
Self-heal Daemon on ovirt002.miovision.corp             N/A     Y       16706
There are no active volume tasks

*Same error on VM run:*

VM VM1 is down. Exit message: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a,if=none,id=drive-virtio-disk0,format=raw,serial=238cc6cf-070c-4483-b686-c0de7ddf0dfa,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a: No such file or directory.
VM VM1 was started by admin@internal (Host: ovirt001).
*engine.log:*

2013-07-17 12:39:27,714 INFO  [org.ovirt.engine.core.bll.LoginAdminUserCommand] (ajp--127.0.0.1-8702-3) Running command: LoginAdminUserCommand internal: false.
2013-07-17 12:39:27,886 INFO  [org.ovirt.engine.core.bll.LoginUserCommand] (ajp--127.0.0.1-8702-7) Running command: LoginUserCommand internal: false.
2013-07-17 12:39:31,817 ERROR [org.ovirt.engine.core.utils.servlet.ServletUtils] (ajp--127.0.0.1-8702-1) Can't read file "/usr/share/doc/ovirt-engine/manual/DocumentationPath.csv" for request "/docs/DocumentationPath.csv", will send a 404 error response.
2013-07-17 12:39:49,285 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (ajp--127.0.0.1-8702-4) [8208368] Lock Acquired to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM, sharedLocks= ]
2013-07-17 12:39:49,336 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--127.0.0.1-8702-4) [8208368] START, IsVmDuringInitiatingVDSCommand( vmId = 8e2c9057-deee-48a6-8314-a34530fc53cb), log id: 20ba16b5
2013-07-17 12:39:49,337 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--127.0.0.1-8702-4) [8208368] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 20ba16b5
2013-07-17 12:39:49,485 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-50) [8208368] Running command: RunVmCommand internal: false. Entities affected :  ID: 8e2c9057-deee-48a6-8314-a34530fc53cb Type: VM
2013-07-17 12:39:49,569 INFO  [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-50) [8208368] START, CreateVmVDSCommand(HostName = ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b, vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, vm=VM [VM1]), log id: 3f04954e
2013-07-17 12:39:49,583 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-50) [8208368] START, CreateVDSCommand(HostName = ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b, vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, vm=VM [VM1]), log id: 7e3dd761
2013-07-17 12:39:49,629 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-50) [8208368] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand spiceSslCipherSuite=DEFAULT,memSize=1024,kvmEnable=true,smp=1,vmType=kvm,emulatedMachine=pc-1.0,keyboardLayout=en-us,pitReinjection=false,nice=0,display=vnc,smartcardEnable=false,tabletEnable=true,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,timeOffset=0,transparentHugePages=true,vmId=8e2c9057-deee-48a6-8314-a34530fc53cb,devices=[Ljava.util.HashMap;@422d1a47,acpiEnable=true,vmName=VM1,cpuType=SandyBridge,custom={}
2013-07-17 12:39:49,632 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-50) [8208368] FINISH, CreateVDSCommand, log id: 7e3dd761
2013-07-17 12:39:49,660 INFO  [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-50) [8208368] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 3f04954e
2013-07-17 12:39:49,662 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-50) [8208368] Lock freed to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM, sharedLocks= ]
2013-07-17 12:39:51,459 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-85) START, DestroyVDSCommand(HostName = ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b, vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, force=false, secondsToWait=0, gracefully=false), log id: 60626686
2013-07-17 12:39:51,548 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-85) FINISH, DestroyVDSCommand, log id: 60626686
2013-07-17 12:39:51,635 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-85) Running on vds during rerun failed vm: null
2013-07-17 12:39:51,641 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-85) vm VM1 running in db and not running in vds - add to rerun treatment. vds ovirt001
2013-07-17 12:39:51,660 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-85) Rerun vm 8e2c9057-deee-48a6-8314-a34530fc53cb. Called from vds ovirt001
2013-07-17 12:39:51,729 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-50) Lock Acquired to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM, sharedLocks= ]
2013-07-17 12:39:51,753 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-6-thread-50) START, IsVmDuringInitiatingVDSCommand( vmId = 8e2c9057-deee-48a6-8314-a34530fc53cb), log id: 7647c7d4
2013-07-17 12:39:51,753 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-6-thread-50) FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 7647c7d4
2013-07-17 12:39:51,794 INFO  [org.ovirt.engine.core.bll.scheduling.VdsSelector] (pool-6-thread-50) VDS ovirt001 d07967ab-3764-47ff-8755-bc539a7feb3b have failed running this VM in the current selection cycle
2013-07-17 12:39:51,794 WARN  [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-50) CanDoAction of action RunVm failed. Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VDS_VM_CLUSTER
2013-07-17 12:39:51,795 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-50) Lock freed to object EngineLock [exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM, sharedLocks= ]
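
The engine log only shows the rerun handling; the qemu-side detail should be in the host logs, e.g. (standard default log locations assumed):

  grep 'could not open disk image' /var/log/vdsm/vdsm.log
  tail -n 50 /var/log/libvirt/qemu/VM1.log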

Steve Dainard
Infrastructure Manager
Miovision <http://miovision.com/> | Rethink Traffic
519-513-2407 ex. 250
877-646-8476 (toll-free)

Blog <http://miovision.com/blog> | LinkedIn <https://www.linkedin.com/company/miovision-technologies> | Twitter <https://twitter.com/miovision> | Facebook <https://www.facebook.com/miovision>
Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON, Canada | N2C 1L3
<br><br><div class="gmail_quote">On Wed, Jul 17, 2013 at 12:21 PM, Vijay Bellur <span dir="ltr">&lt;<a href="mailto:vbellur@redhat.com" target="_blank">vbellur@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
> On 07/17/2013 09:04 PM, Steve Dainard wrote:
>
>> *Web-UI displays:*
>> VM VM1 is down. Exit message: internal error process exited while
>> connecting to monitor: qemu-system-x86_64: -drive
>> file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a,if=none,id=drive-virtio-disk0,format=raw,serial=238cc6cf-070c-4483-b686-c0de7ddf0dfa,cache=none,werror=stop,rerror=stop,aio=threads:
>> could not open disk image
>> gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a:
>> No such file or directory.
>> VM VM1 was started by admin@internal (Host: ovirt001).
>> The disk VM1_Disk1 was successfully added to VM VM1.
>>
>> *I can see the image on the gluster machine, and it looks to have the
>> correct permissions:*
>> [root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]# pwd
>> /mnt/storage1/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa
>> [root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]# ll
>> total 1028
>> -rw-rw----. 2 vdsm kvm 32212254720 Jul 17 11:11 ff2bca2d-4ed1-46c6-93c8-22a39bb1626a
>> -rw-rw----. 2 vdsm kvm     1048576 Jul 17 11:11 ff2bca2d-4ed1-46c6-93c8-22a39bb1626a.lease
>> -rw-r--r--. 2 vdsm kvm         268 Jul 17 11:11 ff2bca2d-4ed1-46c6-93c8-22a39bb1626a.meta
>
> Can you please try after doing these changes:
>
> 1) gluster volume set <volname> server.allow-insecure on
>
> 2) Edit /etc/glusterfs/glusterd.vol to contain this line:
>            option rpc-auth-allow-insecure on
>
> After 2), restarting glusterd is necessary.
>
> Thanks,
> Vijay