<div dir="ltr"><div><div>[Adding gluster-users ML]<br></div>The brick logs are filled with errors :<br>[2016-10-05 19:30:28.659061] E [MSGID: 113077] [posix-handle.c:309:posix_handle_pump] 0-engine-posix: malformed internal link /var/run/vdsm/storage/0a021563-91b5-4f49-9c6b-fff45e85a025/d84f0551-0f2b-457c-808c-6369c6708d43/1b5a5e34-818c-4914-8192-2f05733b5583 for /xpool/engine/brick/.glusterfs/b9/8e/b98ed8d2-3bf9-4b11-92fd-ca5324e131a8
<br>[2016-10-05 19:30:28.659069] E [MSGID: 113091] [posix.c:180:posix_lookup] 0-engine-posix: Failed to create inode handle for path <gfid:b98ed8d2-3bf9-4b11-92fd-ca5324e131a8>
<br>The message "E [MSGID: 113018] [posix.c:198:posix_lookup] 0-engine-posix: lstat on null failed" repeated 3 times between [2016-10-05 19:30:28.656529] and [2016-10-05 19:30:28.659076]
<br>[2016-10-05 19:30:28.659087] W [MSGID: 115005] [server-resolve.c:126:resolve_gfid_cbk] 0-engine-server: b98ed8d2-3bf9-4b11-92fd-ca5324e131a8: failed to resolve (Success)
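<br><br>For context on the paths in these errors: each GFID maps to a backing entry under the brick at &lt;brick&gt;/.glusterfs/&lt;first two hex chars&gt;/&lt;next two&gt;/&lt;gfid&gt;, which is why the log pairs the GFID with that .glusterfs path. A small bash sketch (gfid_path is a hypothetical helper, not a gluster command):

```shell
# Sketch: compute the .glusterfs backing path for a GFID inside a brick,
# following the standard <brick>/.glusterfs/<aa>/<bb>/<gfid> layout.
# gfid_path is a hypothetical helper for illustration only.
gfid_path() {
  local brick="$1" gfid="$2"
  printf '%s/.glusterfs/%s/%s/%s\n' "$brick" "${gfid:0:2}" "${gfid:2:2}" "$gfid"
}

# The GFID from the malformed-link error resolves to the path the log names:
gfid_path /xpool/engine/brick b98ed8d2-3bf9-4b11-92fd-ca5324e131a8
# → /xpool/engine/brick/.glusterfs/b9/8e/b98ed8d2-3bf9-4b11-92fd-ca5324e131a8
```

For regular files that entry is a hard link to the file on the brick; the "malformed internal link" error above means the symlink-style entry resolved to an unexpected target.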
<br><br></div>- Ravi, the above are from the data brick of the arbiter volume. Can you take a look?<br><div><br></div><div>Jason,<br></div><div><div><div>Could you also provide the mount logs from the first host (/var/log/glusterfs/rhev-data-center-mnt-glusterSD*engine.log) and the glusterd log (/var/log/glusterfs/etc-glusterfs-glusterd.vol.log) around the same time frame?<br><br></div><div><br></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Oct 5, 2016 at 3:28 AM, Jason Jeffrey <span dir="ltr"><<a href="mailto:jason@sudo.co.uk" target="_blank">jason@sudo.co.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div link="blue" vlink="purple" lang="EN-GB"><div><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Hi,<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Servers are powered off when I’m not looking at the problem.<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">There may have been instances where all three were not powered on during the same period.<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Glusterhd log attached. The xpool-engine-brick log is over 1 GB in size, so I’ve taken a sample of the last couple of days; it looks to be highly repetitive.<u></u><u></u></span></p><p class="MsoNormal"><span 
style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Cheers<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Jason<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif" lang="EN-US">From:</span></b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif" lang="EN-US"> Simone Tiraboschi [mailto:<a href="mailto:stirabos@redhat.com" target="_blank">stirabos@redhat.com</a>] <br><b>Sent:</b> 04 October 2016 16:50</span></p><div><div class="h5"><br><b>To:</b> Jason Jeffrey <<a href="mailto:jason@sudo.co.uk" target="_blank">jason@sudo.co.uk</a>><br><b>Cc:</b> users <<a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a>><br><b>Subject:</b> Re: [ovirt-users] 4.0 - 2nd node fails on deploy<u></u><u></u></div></div><p></p><div><div class="h5"><p class="MsoNormal"><u></u> <u></u></p><div><p class="MsoNormal"><u></u> <u></u></p><div><p class="MsoNormal"><u></u> <u></u></p><div><p class="MsoNormal">On Tue, Oct 4, 2016 at 5:22 PM, Jason Jeffrey <<a href="mailto:jason@sudo.co.uk" target="_blank">jason@sudo.co.uk</a>> wrote:<u></u><u></u></p><blockquote 
style="border:none;border-left:solid #cccccc 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-right:0cm"><div><div><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Hi,</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">DCASTORXX is a hosts entry for dedicated direct 10GB links (each private /28) between the x3 servers i.e 1=> 2&3, 2=> 1&3, etc) planned to be used solely for storage.</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">I,e </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">10.100.50.81 dcasrv01</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">10.100.101.1 dcastor01</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">10.100.50.82 dcasrv02</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">10.100.101.2 dcastor02</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">10.100.50.83 dcasrv03</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">10.100.103.3 dcastor03 </span><u></u><u></u></p><p class="MsoNormal"><span 
style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">These were setup with the gluster commands</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p><span style="font-size:11.0pt;font-family:Symbol;color:#1f497d">·</span><span style="font-size:7.0pt;color:#1f497d"> </span><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">gluster volume create iso replica 3 arbiter 1 dcastor01:/xpool/iso/brick dcastor02:/xpool/iso/brick dcastor03:/xpool/iso/brick</span><u></u><u></u></p><p><span style="font-size:11.0pt;font-family:Symbol;color:#1f497d">·</span><span style="font-size:7.0pt;color:#1f497d"> </span><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">gluster volume create export replica 3 arbiter 1 dcastor02:/xpool/export/brick dcastor03:/xpool/export/brick dcastor01:/xpool/export/brick </span><u></u><u></u></p><p><span style="font-size:11.0pt;font-family:Symbol;color:#1f497d">·</span><span style="font-size:7.0pt;color:#1f497d"> </span><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">gluster volume create engine replica 3 arbiter 1 dcastor01:/xpool/engine/brick dcastor02:/xpool/engine/brick dcastor03:/xpool/engine/brick</span><u></u><u></u></p><p><span style="font-size:11.0pt;font-family:Symbol;color:#1f497d">·</span><span style="font-size:7.0pt;color:#1f497d"> </span><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">gluster volume create data replica 3 arbiter 1 dcastor01:/xpool/data/brick dcastor03:/xpool/data/brick dcastor02:/xpool/data/bricky</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span 
style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">So yes, DCASRV01 is the (primary) server, and its local bricks are accessed through the DCASTOR01 interface </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Is the issue here not the incorrect soft link?</span><u></u><u></u></p></div></div></blockquote><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">No, this should be fine.<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">The issue is that your gluster volume periodically loses server quorum and becomes unavailable.<u></u><u></u></p></div><div><p class="MsoNormal">From your logs, it has happened more than once.<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Can you please also attach the gluster logs for that volume?<u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><blockquote style="border:none;border-left:solid #cccccc 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-right:0cm"><div><div><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">lrwxrwxrwx. 
1 vdsm kvm 132 Oct 3 17:27 hosted-engine.metadata -> /var/run/vdsm/storage/<wbr>bbb70623-194a-46d2-a164-<wbr>76a4876ecaaf/fd44dbf9-473a-<wbr>496a-9996-c8abe3278390/<wbr>cee9440c-4eb8-453b-bc04-<wbr>c47e6f9cbc93 </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">[root@dcasrv01 /]# ls -al /var/run/vdsm/storage/<wbr>bbb70623-194a-46d2-a164-<wbr>76a4876ecaaf/</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">ls: cannot access /var/run/vdsm/storage/<wbr>bbb70623-194a-46d2-a164-<wbr>76a4876ecaaf/: No such file or directory </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">But the data does exist </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">[root@dcasrv01 fd44dbf9-473a-496a-9996-<wbr>c8abe3278390]# ls -al</span><u></u><u></u></p><p><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">drwxr-xr-x. 2 vdsm kvm 4096 Oct 3 17:17 .</span><u></u><u></u></p><p><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">drwxr-xr-x. 6 vdsm kvm 4096 Oct 3 17:17 ..</span><u></u><u></u></p><p><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">-rw-rw----. 2 vdsm kvm 1028096 Oct 3 20:48 cee9440c-4eb8-453b-bc04-<wbr>c47e6f9cbc93</span><u></u><u></u></p><p><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">-rw-rw----. 2 vdsm kvm 1048576 Oct 3 17:17 cee9440c-4eb8-453b-bc04-<wbr>c47e6f9cbc93.lease</span><u></u><u></u></p><p><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">-rw-r--r--. 
2 vdsm kvm 283 Oct 3 17:17 cee9440c-4eb8-453b-bc04-<wbr>c47e6f9cbc93.meta </span><u></u><u></u></p><p class="MsoNormal"> <u></u><u></u></p><p class="MsoNormal">Thanks <u></u><u></u></p><p class="MsoNormal"> <u></u><u></u></p><p class="MsoNormal">Jason <u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif" lang="EN-US">From:</span></b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif" lang="EN-US"> Simone Tiraboschi [mailto:<a href="mailto:stirabos@redhat.com" target="_blank">stirabos@redhat.com</a>] <br><b>Sent:</b> 04 October 2016 14:40</span><u></u><u></u></p><div><div><p class="MsoNormal"><br><b>To:</b> Jason Jeffrey <<a href="mailto:jason@sudo.co.uk" target="_blank">jason@sudo.co.uk</a>><br><b>Cc:</b> users <<a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a>><br><b>Subject:</b> Re: [ovirt-users] 4.0 - 2nd node fails on deploy<u></u><u></u></p></div></div><div><div><p class="MsoNormal"> <u></u><u></u></p><div><p class="MsoNormal"> <u></u><u></u></p><div><p class="MsoNormal"> <u></u><u></u></p><div><p class="MsoNormal">On Tue, Oct 4, 2016 at 10:51 AM, Simone Tiraboschi <<a href="mailto:stirabos@redhat.com" target="_blank">stirabos@redhat.com</a>> wrote:<u></u><u></u></p><blockquote style="border:none;border-left:solid #cccccc 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt"><div><p class="MsoNormal"> <u></u><u></u></p><div><p class="MsoNormal"> <u></u><u></u></p><div><p class="MsoNormal">On Mon, Oct 3, 2016 at 11:56 PM, Jason Jeffrey <<a 
href="mailto:jason@sudo.co.uk" target="_blank">jason@sudo.co.uk</a>> wrote:<u></u><u></u></p><blockquote style="border:none;border-left:solid #cccccc 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt"><div><div><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Hi,</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Another problem has appeared: after rebooting the primary, the VM will not start.</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Appears the symlink is broken between the gluster mount ref and vdsm</span><u></u><u></u></p></div></div></blockquote><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">The first host was correctly deployed, but it seems that you are facing some issue connecting to the storage.<u></u><u></u></p></div><div><p class="MsoNormal">Can you please attach the vdsm logs and /var/log/messages from the first host?<u></u><u></u></p></div></div></div></div></blockquote><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">Thanks Jason,<u></u><u></u></p></div><div><p class="MsoNormal">I suspect that your issue is related to this:<u></u><u></u></p></div><div><div><p class="MsoNormal">Oct 4 18:24:39 dcasrv01 etc-glusterfs-glusterd.vol[<wbr>2252]: [2016-10-04 17:24:39.522620] C [MSGID: 106002] [glusterd-server-quorum.c:351:<wbr>glusterd_do_volume_quorum_<wbr>action] 0-management: Server quorum lost for volume data. 
Stopping local bricks.<u></u><u></u></p></div><div><p class="MsoNormal">Oct 4 18:24:39 dcasrv01 etc-glusterfs-glusterd.vol[<wbr>2252]: [2016-10-04 17:24:39.523272] C [MSGID: 106002] [glusterd-server-quorum.c:351:<wbr>glusterd_do_volume_quorum_<wbr>action] 0-management: Server quorum lost for volume engine. Stopping local bricks.<u></u><u></u></p></div></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">and for some time your gluster volume has been working.<u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">But then:<u></u><u></u></p></div><div><div><p class="MsoNormal">Oct 4 19:02:09 dcasrv01 systemd: Started /usr/bin/mount -t glusterfs -o backup-volfile-servers=<wbr>dcastor02:dcastor03 dcastor01:engine /rhev/data-center/mnt/<wbr>glusterSD/dcastor01:engine.<u></u><u></u></p></div><div><p class="MsoNormal">Oct 4 19:02:09 dcasrv01 systemd: Starting /usr/bin/mount -t glusterfs -o backup-volfile-servers=<wbr>dcastor02:dcastor03 dcastor01:engine /rhev/data-center/mnt/<wbr>glusterSD/dcastor01:engine.<u></u><u></u></p></div><div><p class="MsoNormal">Oct 4 19:02:11 dcasrv01 ovirt-ha-agent: /usr/lib/python2.7/site-<wbr>packages/yajsonrpc/stomp.py:<wbr>352: DeprecationWarning: Dispatcher.pending is deprecated. Use Dispatcher.socket.pending instead.<u></u><u></u></p></div><div><p class="MsoNormal">Oct 4 19:02:11 dcasrv01 ovirt-ha-agent: pending = getattr(dispatcher, 'pending', lambda: 0)<u></u><u></u></p></div><div><p class="MsoNormal">Oct 4 19:02:11 dcasrv01 ovirt-ha-agent: /usr/lib/python2.7/site-<wbr>packages/yajsonrpc/stomp.py:<wbr>352: DeprecationWarning: Dispatcher.pending is deprecated. 
Use Dispatcher.socket.pending instead.<u></u><u></u></p></div><div><p class="MsoNormal">Oct 4 19:02:11 dcasrv01 ovirt-ha-agent: pending = getattr(dispatcher, 'pending', lambda: 0)<u></u><u></u></p></div><div><p class="MsoNormal">Oct 4 19:02:11 dcasrv01 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof<u></u><u></u></p></div><div><p class="MsoNormal">Oct 4 19:02:11 dcasrv01 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.<wbr>agent.Agent ERROR Error: 'Connection to storage server failed' - trying to restart agent<u></u><u></u></p></div><div><p class="MsoNormal">Oct 4 19:02:11 dcasrv01 ovirt-ha-agent: ERROR:ovirt_hosted_engine_ha.<wbr>agent.agent.Agent:Error: 'Connection to storage server failed' - trying to restart agent<u></u><u></u></p></div><div><p class="MsoNormal">Oct 4 19:02:12 dcasrv01 etc-glusterfs-glusterd.vol[<wbr>2252]: [2016-10-04 18:02:12.384611] C [MSGID: 106003] [glusterd-server-quorum.c:346:<wbr>glusterd_do_volume_quorum_<wbr>action] 0-management: Server quorum regained for volume data. Starting local bricks.<u></u><u></u></p></div><div><p class="MsoNormal">Oct 4 19:02:12 dcasrv01 etc-glusterfs-glusterd.vol[<wbr>2252]: [2016-10-04 18:02:12.388981] C [MSGID: 106003] [glusterd-server-quorum.c:346:<wbr>glusterd_do_volume_quorum_<wbr>action] 0-management: Server quorum regained for volume engine. 
Starting local bricks.<u></u><u></u></p></div></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">And at that point VDSM started complaining that the hosted-engine storage domain doesn't exist anymore:<u></u><u></u></p></div><div><div><p class="MsoNormal">Oct 4 19:02:30 dcasrv01 journal: ovirt-ha-agent ovirt_hosted_engine_ha.lib.<wbr>image.Image ERROR Error fetching volumes list: Storage domain does not exist: (u'bbb70623-194a-46d2-a164-<wbr>76a4876ecaaf',)<u></u><u></u></p></div><div><p class="MsoNormal">Oct 4 19:02:30 dcasrv01 ovirt-ha-agent: ERROR:ovirt_hosted_engine_ha.<wbr>lib.image.Image:Error fetching volumes list: Storage domain does not exist: (u'bbb70623-194a-46d2-a164-<wbr>76a4876ecaaf',)<u></u><u></u></p></div></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">I see from the logs that the ovirt-ha-agent is trying to mount the hosted-engine storage domain as:<u></u><u></u></p></div><div><p class="MsoNormal">/usr/bin/mount -t glusterfs -o backup-volfile-servers=<wbr>dcastor02:dcastor03 dcastor01:engine /rhev/data-center/mnt/<wbr>glusterSD/dcastor01:engine.<u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">Pointing to dcastor01, dcastor02 and dcastor03 while your server is dcasrv01.<u></u><u></u></p></div><div><p class="MsoNormal">But at the same time it seems that dcasrv01 also has local bricks for the same engine volume.<u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">So, is dcasrv01 just an alias for dcastor01? 
if not you probably have some issue with the configuration of your gluster volume.<u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><blockquote style="border:none;border-left:solid #cccccc 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt"><div><div><div><div><div><blockquote style="border:none;border-left:solid #cccccc 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt"><div><div><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">From broker.log</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Thread-169::ERROR::2016-10-04 22:44:16,189::storage_broker::<wbr>138::<a href="http://ovirt_hosted_engine_ha.br" target="_blank">ovirt_hosted_engine_ha.br</a><wbr>oker.storage_broker.<wbr>StorageBroker::(get_raw_stats_<wbr>for_service_type) Failed to read metadata from /rhev/data-center/mnt/<wbr>glusterSD/dcastor01:engine/<wbr>bbb70623-194a-46d2-a164-<wbr>76a4876ecaaf/ha_agent/hosted-<wbr>engine.metadata</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">[root@dcasrv01 ovirt-hosted-engine-ha]# ls -al /rhev/data-center/mnt/<wbr>glusterSD/dcastor01\:engine/<wbr>bbb70623-194a-46d2-a164-<wbr>76a4876ecaaf/ha_agent/</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">total 
9</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">drwxrwx---. 2 vdsm kvm 4096 Oct 3 17:27 .</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">drwxr-xr-x. 5 vdsm kvm 4096 Oct 3 17:17 ..</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">lrwxrwxrwx. 1 vdsm kvm 132 Oct 3 17:27 hosted-engine.lockspace -> /var/run/vdsm/storage/<wbr>bbb70623-194a-46d2-a164-<wbr>76a4876ecaaf/23d81b73-bcb7-<wbr>4742-abde-128522f43d78/<wbr>11d6a3e1-1817-429d-b2e0-<wbr>9051a3cf41a4</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">lrwxrwxrwx. 1 vdsm kvm 132 Oct 3 17:27 hosted-engine.metadata -> /var/run/vdsm/storage/<wbr>bbb70623-194a-46d2-a164-<wbr>76a4876ecaaf/fd44dbf9-473a-<wbr>496a-9996-c8abe3278390/<wbr>cee9440c-4eb8-453b-bc04-<wbr>c47e6f9cbc93 </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">[root@dcasrv01 /]# ls -al /var/run/vdsm/storage/<wbr>bbb70623-194a-46d2-a164-<wbr>76a4876ecaaf/</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">ls: cannot access /var/run/vdsm/storage/<wbr>bbb70623-194a-46d2-a164-<wbr>76a4876ecaaf/: No such file or directory </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Though file appears to be there </span><u></u><u></u></p><p class="MsoNormal"><span 
style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Gluster is setup as xpool/engine </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">[root@dcasrv01 fd44dbf9-473a-496a-9996-<wbr>c8abe3278390]# pwd</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">/xpool/engine/brick/bbb70623-<wbr>194a-46d2-a164-76a4876ecaaf/<wbr>images/fd44dbf9-473a-496a-<wbr>9996-c8abe3278390</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">[root@dcasrv01 fd44dbf9-473a-496a-9996-<wbr>c8abe3278390]# ls -al</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">total 2060</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">drwxr-xr-x. 2 vdsm kvm 4096 Oct 3 17:17 .</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">drwxr-xr-x. 6 vdsm kvm 4096 Oct 3 17:17 ..</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">-rw-rw----. 2 vdsm kvm 1028096 Oct 3 20:48 cee9440c-4eb8-453b-bc04-<wbr>c47e6f9cbc93</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">-rw-rw----. 2 vdsm kvm 1048576 Oct 3 17:17 cee9440c-4eb8-453b-bc04-<wbr>c47e6f9cbc93.lease</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">-rw-r--r--. 
2 vdsm kvm 283 Oct 3 17:17 cee9440c-4eb8-453b-bc04-<wbr>c47e6f9cbc93.meta </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">[root@dcasrv01 fd44dbf9-473a-496a-9996-<wbr>c8abe3278390]# gluster volume info</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Volume Name: data</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Type: Replicate</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Volume ID: 54fbcafc-fed9-4bce-92ec-<wbr>fa36cdcacbd4</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Status: Started</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Number of Bricks: 1 x (2 + 1) = 3</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Transport-type: tcp</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Bricks:</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick1: dcastor01:/xpool/data/brick</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick2: 
dcastor03:/xpool/data/brick</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick3: dcastor02:/xpool/data/bricky (arbiter)</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Options Reconfigured:</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">performance.readdir-ahead: on</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">performance.quick-read: off</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">performance.read-ahead: off</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">performance.io-cache: off</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">performance.stat-prefetch: off</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">cluster.eager-lock: enable</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">network.remote-dio: enable</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">cluster.quorum-type: auto</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">cluster.server-quorum-type: server</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">storage.owner-uid: 36</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">storage.owner-gid: 
36</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Volume Name: engine</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Type: Replicate</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Volume ID: dd4c692d-03aa-4fc6-9011-<wbr>a8dad48dad96</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Status: Started</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Number of Bricks: 1 x (2 + 1) = 3</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Transport-type: tcp</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Bricks:</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick1: dcastor01:/xpool/engine/brick</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick2: dcastor02:/xpool/engine/brick</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick3: dcastor03:/xpool/engine/brick (arbiter)</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Options Reconfigured:</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">performance.readdir-ahead: on</span><u></u><u></u></p><p class="MsoNormal"><span 
style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">performance.quick-read: off</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">performance.read-ahead: off</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">performance.io-cache: off</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">performance.stat-prefetch: off</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">cluster.eager-lock: enable</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">network.remote-dio: enable</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">cluster.quorum-type: auto</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">cluster.server-quorum-type: server</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">storage.owner-uid: 36</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">storage.owner-gid: 36</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Volume Name: export</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Type: Replicate</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Volume ID: 
23f14730-d264-4cc2-af60-<wbr>196b943ecaf3</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Status: Started</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Number of Bricks: 1 x (2 + 1) = 3</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Transport-type: tcp</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Bricks:</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick1: dcastor02:/xpool/export/brick</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick2: dcastor03:/xpool/export/brick</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick3: dcastor01:/xpool/export/brick (arbiter)</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Options Reconfigured:</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">performance.readdir-ahead: on</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">storage.owner-uid: 36</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">storage.owner-gid: 36</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Volume Name: iso</span><u></u><u></u></p><p 
class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Type: Replicate</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Volume ID: b2d3d7e2-9919-400b-8368-<wbr>a0443d48e82a</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Status: Started</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Number of Bricks: 1 x (2 + 1) = 3</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Transport-type: tcp</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Bricks:</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick1: dcastor01:/xpool/iso/brick</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick2: dcastor02:/xpool/iso/brick</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick3: dcastor03:/xpool/iso/brick (arbiter)</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Options Reconfigured:</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">performance.readdir-ahead: on</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">storage.owner-uid: 36</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">storage.owner-gid: 36 <wbr> </span><u></u><u></u></p><p class="MsoNormal"><span 
style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">[root@dcasrv01 fd44dbf9-473a-496a-9996-<wbr>c8abe3278390]# gluster volume status</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Status of volume: data</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Gluster process <wbr> TCP Port RDMA Port Online Pid</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">------------------------------<wbr>------------------------------<wbr>------------------</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick dcastor01:/xpool/data/brick <wbr> 49153 0 Y 3076</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick dcastor03:/xpool/data/brick <wbr> 49153 0 Y 3019</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick dcastor02:/xpool/data/bricky <wbr> 49153 0 Y 3857</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">NFS Server on localhost 2049 0 Y 3097</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Self-heal Daemon on localhost N/A N/A Y 3088</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">NFS Server on dcastor03 2049 0 Y 3039</span><u></u><u></u></p><p 
class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Self-heal Daemon on dcastor03 N/A N/A Y 3114</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">NFS Server on dcasrv02 2049 0 Y 3871</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Self-heal Daemon on dcasrv02 N/A N/A Y 3864</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Task Status of Volume data</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">------------------------------<wbr>------------------------------<wbr>------------------</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">There are no active volume tasks</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Status of volume: engine</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Gluster process <wbr> TCP Port RDMA Port Online Pid</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">------------------------------<wbr>------------------------------<wbr>------------------</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick dcastor01:/xpool/engine/brick <wbr> 49152 0 Y 3131</span><u></u><u></u></p><p 
class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick dcastor02:/xpool/engine/brick <wbr> 49152 0 Y 3852</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick dcastor03:/xpool/engine/brick <wbr> 49152 0 Y 2992</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">NFS Server on localhost 2049 0 Y 3097</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Self-heal Daemon on localhost N/A N/A Y 3088</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">NFS Server on dcastor03 2049 0 Y 3039</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Self-heal Daemon on dcastor03 N/A N/A Y 3114</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">NFS Server on dcasrv02 2049 0 Y 3871</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Self-heal Daemon on dcasrv02 N/A N/A Y 3864</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Task Status of Volume engine</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">------------------------------<wbr>------------------------------<wbr>------------------</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">There are no active volume tasks</span><u></u><u></u></p><p 
class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Status of volume: export</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Gluster process <wbr> TCP Port RDMA Port Online Pid</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">------------------------------<wbr>------------------------------<wbr>------------------</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick dcastor02:/xpool/export/brick <wbr> 49155 0 Y 3872</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick dcastor03:/xpool/export/brick <wbr> 49155 0 Y 3147</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick dcastor01:/xpool/export/brick <wbr> 49155 0 Y 3150</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">NFS Server on localhost 2049 0 Y 3097</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Self-heal Daemon on localhost N/A N/A Y 3088</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">NFS Server on dcastor03 2049 0 Y 3039</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Self-heal Daemon on dcastor03 N/A N/A Y 3114</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">NFS Server on dcasrv02 2049 0 Y 
3871</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Self-heal Daemon on dcasrv02 N/A N/A Y 3864</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Task Status of Volume export</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">------------------------------<wbr>------------------------------<wbr>------------------</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">There are no active volume tasks</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Status of volume: iso</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Gluster process <wbr> TCP Port RDMA Port Online Pid</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">------------------------------<wbr>------------------------------<wbr>------------------</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick dcastor01:/xpool/iso/brick <wbr> 49154 0 Y 3152</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick dcastor02:/xpool/iso/brick <wbr> 49154 0 Y 3881</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Brick dcastor03:/xpool/iso/brick <wbr> 49154 0 Y 
3146</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">NFS Server on localhost 2049 0 Y 3097</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Self-heal Daemon on localhost N/A N/A Y 3088</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">NFS Server on dcastor03 2049 0 Y 3039</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Self-heal Daemon on dcastor03 N/A N/A Y 3114</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">NFS Server on dcasrv02 2049 0 Y 3871</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Self-heal Daemon on dcasrv02 N/A N/A Y 3864</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Task Status of Volume iso</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">------------------------------<wbr>------------------------------<wbr>------------------</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">There are no active volume tasks</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> <wbr> <wbr> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Thanks</span><u></u><u></u></p><p class="MsoNormal"><span 
style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Jason</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><div><div style="border:none;border-top:solid #e1e1e1 1.0pt;padding:3.0pt 0cm 0cm 0cm"><p class="MsoNormal"><b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif" lang="EN-US">From:</span></b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif" lang="EN-US"> <a href="mailto:users-bounces@ovirt.org" target="_blank">users-bounces@ovirt.org</a> [mailto:<a href="mailto:users-bounces@ovirt.org" target="_blank">users-bounces@ovirt.<wbr>org</a>] <b>On Behalf Of </b>Jason Jeffrey<br><b>Sent:</b> 03 October 2016 18:40<br><b>To:</b> <a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a></span><u></u><u></u></p><div><div><p class="MsoNormal"><br><b>Subject:</b> Re: [ovirt-users] 4.0 - 2nd node fails on deploy<u></u><u></u></p></div></div></div></div><div><div><p class="MsoNormal"> <u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Hi,</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Setup log attached for primary</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span 
style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Regards</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Jason </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p><p class="MsoNormal"><b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif" lang="EN-US">From:</span></b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif" lang="EN-US"> Simone Tiraboschi [<a href="mailto:stirabos@redhat.com" target="_blank">mailto:stirabos@redhat.com</a>] <br><b>Sent:</b> 03 October 2016 09:27<br><b>To:</b> Jason Jeffrey <<a href="mailto:jason@sudo.co.uk" target="_blank">jason@sudo.co.uk</a>><br><b>Cc:</b> users <<a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a>><br><b>Subject:</b> Re: [ovirt-users] 4.0 - 2nd node fails on deploy</span><u></u><u></u></p><p class="MsoNormal"> <u></u><u></u></p><div><p class="MsoNormal"> <u></u><u></u></p><div><p class="MsoNormal"> <u></u><u></u></p><div><p class="MsoNormal">On Mon, Oct 3, 2016 at 12:45 AM, Jason Jeffrey <<a href="mailto:jason@sudo.co.uk" target="_blank">jason@sudo.co.uk</a>> wrote:<u></u><u></u></p><blockquote style="border:none;border-left:solid #cccccc 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt"><div><div><p class="MsoNormal"><span lang="EN-US">Hi,</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US"> </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">I am trying to build a 3-node HC cluster with a self-hosted engine using gluster.</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US"> </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">I have successfully built the 
1<sup>st</sup> node; however, when I attempt to run hosted-engine --deploy on node 2, I get the following error:</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US"> </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">[WARNING] A configuration file must be supplied to deploy Hosted Engine on an additional host.</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">[ ERROR ] 'version' is not stored in the HE configuration image</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">[ ERROR ] Unable to get the answer file from the shared storage</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">[ ERROR ] Failed to execute stage 'Environment customization': Unable to get the answer file from the shared storage</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">[ INFO ] Stage: Clean up</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-<wbr>setup/answers/answers-<wbr>20161002232505.conf'</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">[ INFO ] Stage: Pre-termination</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">[ INFO ] Stage: Termination</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">[ ERROR ] Hosted Engine deployment failed </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US"> </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">Looking at the failure in the log file...</span><u></u><u></u></p></div></div></blockquote><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">Can you please attach hosted-engine-setup logs from the first host?<u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><blockquote style="border:none;border-left:solid #cccccc 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt"><div><div><p class="MsoNormal"><span lang="EN-US"> 
</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">2016-10-02 23:25:05 WARNING otopi.plugins.gr_he_common.<wbr>core.remote_answerfile remote_answerfile._<wbr>customization:151 A configuration</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">file must be supplied to deploy Hosted Engine on an additional host.</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.<wbr>core.remote_answerfile remote_answerfile._fetch_<wbr>answer_file:61 _fetch_answer_f</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">ile</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.<wbr>core.remote_answerfile remote_answerfile._fetch_<wbr>answer_file:69 fetching from:</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">/rhev/data-center/mnt/<wbr>glusterSD/dcastor02:engine/<wbr>0a021563-91b5-4f49-9c6b-<wbr>fff45e85a025/images/f055216c-<wbr>02f9-4cd1-a22c-d6b56a0a8e9b/7</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">8cb2527-a2e2-489a-9fad-<wbr>465a72221b37</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.<wbr>core.remote_answerfile heconflib._dd_pipe_tar:69 executing: 'sudo -u vdsm dd i</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">f=/rhev/data-center/mnt/<wbr>glusterSD/dcastor02:engine/<wbr>0a021563-91b5-4f49-9c6b-<wbr>fff45e85a025/images/f055216c-<wbr>02f9-4cd1-a22c-d6b56a0a8e9b</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">/78cb2527-a2e2-489a-9fad-<wbr>465a72221b37 bs=4k'</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.<wbr>core.remote_answerfile heconflib._dd_pipe_tar:70 executing: 'tar -tvf -'</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">2016-10-02 23:25:05 DEBUG 
otopi.plugins.gr_he_common.<wbr>core.remote_answerfile heconflib._dd_pipe_tar:88 stdout:</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.<wbr>core.remote_answerfile heconflib._dd_pipe_tar:89 stderr:</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">2016-10-02 23:25:05 ERROR otopi.plugins.gr_he_common.<wbr>core.remote_answerfile heconflib.validateConfImage:<wbr>111 'version' is not stored</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">in the HE configuration image</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">2016-10-02 23:25:05 ERROR otopi.plugins.gr_he_common.<wbr>core.remote_answerfile remote_answerfile._fetch_<wbr>answer_file:73 Unable to get t</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">he answer file from the shared storage</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US"> </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">Looking at the detected gluster path - /rhev/data-center/mnt/<wbr>glusterSD/dcastor02:engine/<wbr>0a021563-91b5-4f49-9c6b-<wbr>fff45e85a025/images/f055216c-<wbr>02f9-4cd1-a22c-d6b56a0a8e9b/</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US"> </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">[root@dcasrv02 ~]# ls -al /rhev/data-center/mnt/<wbr>glusterSD/dcastor02:engine/<wbr>0a021563-91b5-4f49-9c6b-<wbr>fff45e85a025/images/f055216c-<wbr>02f9-4cd1-a22c-d6b56a0a8e9b/</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">total 1049609</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">drwxr-xr-x. 2 vdsm kvm 4096 Oct 2 04:46 .</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">drwxr-xr-x. 6 vdsm kvm 4096 Oct 2 04:46 ..</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">-rw-rw----. 
1 vdsm kvm 1073741824 Oct 2 04:46 78cb2527-a2e2-489a-9fad-<wbr>465a72221b37</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">-rw-rw----. 1 vdsm kvm 1048576 Oct 2 04:46 78cb2527-a2e2-489a-9fad-<wbr>465a72221b37.lease</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">-rw-r--r--. 1 vdsm kvm 294 Oct 2 04:46 78cb2527-a2e2-489a-9fad-<wbr>465a72221b37.meta </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US"> </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">78cb2527-a2e2-489a-9fad-<wbr>465a72221b37 is a 1 GB file; is this the engine VM?</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US"> </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">Copying the answers file from the primary (/etc/ovirt-hosted-engine/<wbr>answers.conf ) to node 2 and rerunning produces the same error :(</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">(hosted-engine --deploy --config-append=/root/answers.<wbr>conf )</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US"> </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">Also tried on node 3, same issues </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US"> </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">Happy to provide logs and other debug output</span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US"> </span><u></u><u></u></p><p class="MsoNormal"><span lang="EN-US">Thanks </span><u></u><u></u></p><p class="MsoNormal"><span style="color:#888888" lang="EN-US"> </span><u></u><u></u></p><p class="MsoNormal"><span style="color:#888888" lang="EN-US">Jason </span><u></u><u></u></p><p class="MsoNormal"><span style="color:#888888" lang="EN-US"> 
</span><u></u><u></u></p></div></div><p class="MsoNormal" style="margin-bottom:12.0pt"><br>______________________________<wbr>_________________<br>Users mailing list<br><a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br><a href="http://lists.ovirt.org/mailman/listinfo/users" 
target="_blank">http://lists.ovirt.org/<wbr>mailman/listinfo/users</a><u></u><u></u></p></blockquote></div><p class="MsoNormal"> <u></u><u></u></p></div></div></div></div></div></div></blockquote></div></div></div><p class="MsoNormal"> <u></u><u></u></p></div></div></blockquote></div><p class="MsoNormal"> <u></u><u></u></p></div></div></div></div></div></div></blockquote></div><p class="MsoNormal"><u></u> <u></u></p></div></div></div></div></div></div><br>______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/<wbr>mailman/listinfo/users</a><br>
<br></blockquote></div><br></div>
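[Editor's note] The "'version' is not stored in the HE configuration image" failure can be understood from the DEBUG lines in the setup log: hosted-engine-setup reads the HE configuration image with `sudo -u vdsm dd if=<image path> bs=4k` and pipes it into `tar -tvf -`; in the log above both stdout and stderr of that pipeline were empty, i.e. the archive had no `version` entry. Below is a minimal sketch of that same dd | tar pattern on a locally built stand-in archive, so it can be run without a gluster mount or the vdsm user; the temp directory, file names, and contents here are hypothetical, only the pipeline shape and the `version` entry come from the log.

```shell
# Sketch of the dd | tar pipeline hosted-engine-setup uses to read the
# HE configuration image. The real invocation (from the log) is:
#   sudo -u vdsm dd if=/rhev/data-center/mnt/glusterSD/... bs=4k | tar -tvf -
# Here we build a stand-in archive locally so the pattern runs anywhere.
workdir=$(mktemp -d)

# A healthy config image is a tar archive containing (among other things)
# a 'version' entry; its absence is exactly the error reported on node 2.
echo '1' > "$workdir/version"
echo '[environment:default]' > "$workdir/answers.conf"
tar -cf "$workdir/heconf.img" -C "$workdir" version answers.conf

# List the archive contents through dd, as the setup code does:
dd if="$workdir/heconf.img" bs=4k 2>/dev/null | tar -tf -
# prints:
# version
# answers.conf
```

Running the same `dd ... bs=4k | tar -tf -` against the real image path on node 2 is a quick way to tell whether the image on shared storage is empty or corrupt (no output, matching the log) or merely missing the `version` entry.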