<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Feb 1, 2018 at 11:31 AM, Gianluca Cecchi <span dir="ltr"><<a href="mailto:gianluca.cecchi@gmail.com" target="_blank">gianluca.cecchi@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="gmail-">On Wed, Jan 31, 2018 at 11:48 AM, Simone Tiraboschi <span dir="ltr"><<a href="mailto:stirabos@redhat.com" target="_blank">stirabos@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><div class="gmail-m_-3643124572540813672gmail-h5"><br><div><br></div></div></div><div>Ciao Gianluca,</div><div>we have an issue logging messages with special unicode chars from ansible, it's tracked here:</div><div><a href="https://bugzilla.redhat.com/show_bug.cgi?id=1533500" target="_blank">https://bugzilla.redhat.com/sh<wbr>ow_bug.cgi?id=1533500</a><br></div><div>but this is just hiding your real issue.<br></div><div><br></div><div>I'm almost sure that you are facing an issue writing on NFS and thwn dd returns us an error message with <span style="color:rgb(0,0,0);white-space:pre-wrap">\u2018 and </span><span style="color:rgb(0,0,0);white-space:pre-wrap">\u2019.</span></div><div><span style="color:rgb(0,0,0);white-space:pre-wrap">Can you please check your NFS permissions?</span></div><div> </div></div></div></div></blockquote><div><br></div></span><div>Ciao Simone, thanks for answering.</div><div>I think you were right.</div><div>Previously I had this:</div><div><br></div><div><div>/nfs/SHE_DOMAIN *(rw)</div></div><div><br></div><div>Now I have changed to:</div><div><br></div><div>/nfs/SHE_DOMAIN *(rw,anonuid=36,anongid=36,<wbr>all_squash)<br></div><div><br></div><div>I restarted the deploy with the answer file</div><div><br></div><div><div># hosted-engine --deploy --config-append=/var/lib/<wbr>ovirt-hosted-engine-setup/<wbr>answers/answers-<wbr>20180129164431.conf</div></div><div><br></div><div>and it went ahead... and I have contents inside the directory:</div><div><br></div><div><div># ll /nfs/SHE_DOMAIN/a0351a82-734d-<wbr>4d9a-a75e-3313d2ffe23a/</div><div>total 12</div><div>drwxr-xr-x. 2 vdsm kvm 4096 Jan 29 16:40 dom_md</div><div>drwxr-xr-x. 6 vdsm kvm 4096 Jan 29 16:43 images</div><div>drwxr-xr-x. 
> I restarted the deploy with the answer file
>
> # hosted-engine --deploy --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-20180129164431.conf
>
> and it went ahead... and I have contents inside the directory:
>
> # ll /nfs/SHE_DOMAIN/a0351a82-734d-4d9a-a75e-3313d2ffe23a/
> total 12
> drwxr-xr-x. 2 vdsm kvm 4096 Jan 29 16:40 dom_md
> drwxr-xr-x. 6 vdsm kvm 4096 Jan 29 16:43 images
> drwxr-xr-x. 4 vdsm kvm 4096 Jan 29 16:40 master
>
> But it ended with a problem regarding engine vm:
>
> [ INFO ] TASK [Wait for engine to start]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Set engine pub key as authorized key without validating the TLS/SSL certificates]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Force host-deploy in offline mode]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [include_tasks]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Obtain SSO token using username/password credentials]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Add host]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Wait for the host to become non operational]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get virbr0 routing configuration]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get ovirtmgmt route table id]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Check network configuration]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Clean network configuration]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Restore network configuration]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Wait for the host to be up]
> [ ERROR ] Error: Failed to read response.
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": false, "msg": "Failed to read response."}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
> [ INFO ] Stage: Clean up
> [ INFO ] Cleaning temporary resources
> [ INFO ] TASK [Gathering Facts]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Remove local vm dir]
> [ INFO ] ok: [localhost]
> [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180201104600.conf'
> [ INFO ] Stage: Pre-termination
> [ INFO ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy
> Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180201102603-1of5a1.log
>
> Under /var/log/libvirt/qemu of the host from where I'm running the hosted-engine deploy I see this:
>
> 2018-02-01 09:29:05.515+0000: starting up libvirt version: 3.2.0, package: 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>, 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version: 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: ov42.mydomain
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu Westmere,+kvmclock -m 6184 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 8c8f8163-5b69-4ff5-b67c-07b1a9b8f100 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,strict=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/tmp/localvm1ClXud/images/918bbfc1-d599-4170-9a92-1ac417bf7658/bb8b3078-fddb-4ce3-8da0-0a191768a357,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/tmp/localvm1ClXud/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:15:7b:27,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-1-HostedEngineLocal/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -msg timestamp=on
> 2018-02-01T09:29:05.771459Z qemu-kvm: -chardev pty,id=charserial0: char device redirected to /dev/pts/3 (label charserial0)
> 2018-02-01T09:34:19.445774Z qemu-kvm: terminating on signal 15 from pid 6052 (/usr/sbin/libvirtd)
> 2018-02-01 09:34:19.668+0000: shutting down, reason=shutdown
>
> In /var/log/messages:
>
> Feb 1 10:29:05 ov42 systemd-machined: New machine qemu-1-HostedEngineLocal.
> Feb 1 10:29:05 ov42 systemd: Started Virtual Machine qemu-1-HostedEngineLocal.
> Feb 1 10:29:05 ov42 systemd: Starting Virtual Machine qemu-1-HostedEngineLocal.
> Feb 1 10:29:05 ov42 kvm: 1 guest now active
> Feb 1 10:29:06 ov42 python: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:15:7b:27 | awk '{ print $5 }' | cut -f1 -d'/' removes=None creates=None chdir=None stdin=None
> Feb 1 10:29:07 ov42 kernel: virbr0: port 2(vnet0) entered learning state
> Feb 1 10:29:09 ov42 kernel: virbr0: port 2(vnet0) entered forwarding state
> Feb 1 10:29:09 ov42 kernel: virbr0: topology change detected, propagating
> Feb 1 10:29:09 ov42 NetworkManager[749]: <info> [1517477349.5180] device (virbr0): link connected
> Feb 1 10:29:16 ov42 python: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:15:7b:27 | awk '{ print $5 }' | cut -f1 -d'/' removes=None creates=None chdir=None stdin=None
> Feb 1 10:29:27 ov42 python: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:15:7b:27 | awk '{ print $5 }' | cut -f1 -d'/' removes=None creates=None chdir=None stdin=None
> Feb 1 10:29:30 ov42 dnsmasq-dhcp[6322]: DHCPDISCOVER(virbr0) 00:16:3e:15:7b:27
> Feb 1 10:29:30 ov42 dnsmasq-dhcp[6322]: DHCPOFFER(virbr0) 192.168.122.200 00:16:3e:15:7b:27
> Feb 1 10:29:30 ov42 dnsmasq-dhcp[6322]: DHCPREQUEST(virbr0) 192.168.122.200 00:16:3e:15:7b:27
> Feb 1 10:29:30 ov42 dnsmasq-dhcp[6322]: DHCPACK(virbr0) 192.168.122.200 00:16:3e:15:7b:27
> . . .
> Feb 1 10:34:00 ov42 systemd: Starting Virtualization daemon...
> Feb 1 10:34:00 ov42 python: ansible-ovirt_hosts_facts Invoked with pattern=name=ov42.mydomain status=up fetch_nested=False nested_attributes=[] auth={'ca_file': None, 'url': 'https://ov42she.mydomain/ovirt-engine/api', 'insecure': True, 'kerberos': False, 'compress': True, 'headers': None, 'token': 'GOK2wLFZ0PIs1GbXVQjNW-yBlUtZoGRa2I92NkCkm6lwdlQV-dUdP5EjInyGGN_zEVEHFKgR6nuZ-eIlfaM_lw', 'timeout': 0}
> Feb 1 10:34:03 ov42 systemd: Started Virtualization daemon.
> Feb 1 10:34:03 ov42 systemd: Reloading.
> Feb 1 10:34:03 ov42 systemd: [/usr/lib/systemd/system/ip6tables.service:3] Failed to add dependency on syslog.target,iptables.service, ignoring: Invalid argument
> Feb 1 10:34:03 ov42 systemd: Cannot add dependency job for unit lvm2-lvmetad.socket, ignoring: Unit is masked.
> Feb 1 10:34:03 ov42 systemd: Starting Cockpit Web Service...
> Feb 1 10:34:03 ov42 dnsmasq[6322]: read /etc/hosts - 4 addresses
> Feb 1 10:34:03 ov42 dnsmasq[6322]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
> Feb 1 10:34:03 ov42 dnsmasq-dhcp[6322]: read /var/lib/libvirt/dnsmasq/default.hostsfile
> Feb 1 10:34:03 ov42 systemd: Started Cockpit Web Service.
> Feb 1 10:34:03 ov42 cockpit-ws: Using certificate: /etc/cockpit/ws-certs.d/0-self-signed.cert
> Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.840+0000: 6076: info : libvirt version: 3.2.0, package: 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>, 2018-01-04-19:31:34, c1bm.rdu2.centos.org)
> Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.840+0000: 6076: info : hostname: ov42.mydomain
> Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.840+0000: 6076: error : virDirOpenInternal:2829 : cannot open directory '/var/tmp/localvm7I0SSJ/images/918bbfc1-d599-4170-9a92-1ac417bf7658': No such file or directory
> Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.841+0000: 6076: error : storageDriverAutostart:204 : internal error: Failed to autostart storage pool '918bbfc1-d599-4170-9a92-1ac417bf7658': cannot open directory '/var/tmp/localvm7I0SSJ/images/918bbfc1-d599-4170-9a92-1ac417bf7658': No such file or directory
> Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.841+0000: 6076: error : virDirOpenInternal:2829 : cannot open directory '/var/tmp/localvm7I0SSJ': No such file or directory
> Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.841+0000: 6076: error : storageDriverAutostart:204 : internal error: Failed to autostart storage pool 'localvm7I0SSJ': cannot open directory '/var/tmp/localvm7I0SSJ': No such file or directory
> Feb 1 10:34:03 ov42 systemd: Stopping Suspend/Resume Running libvirt Guests...
> Feb 1 10:34:04 ov42 libvirt-guests.sh: Running guests on qemu+tls://ov42.mydomain/system URI: HostedEngineLocal
> Feb 1 10:34:04 ov42 libvirt-guests.sh: Shutting down guests on qemu+tls://ov42.mydomain/system URI...
> Feb 1 10:34:04 ov42 libvirt-guests.sh: Starting shutdown on guest: HostedEngineLocal

You definitely hit this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1539040
host-deploy stops libvirt-guests, triggering a shutdown of all the running VMs (including the HE one).

We rebuilt host-deploy with a fix for that today.
It affects only hosts where libvirt-guests has already been configured by a 4.2 host-deploy in the past.
As a workaround, you have to manually stop libvirt-guests and deconfigure it in /etc/sysconfig/libvirt-guests.conf before running hosted-engine-setup again.
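If it helps, a rough sketch of that workaround could look like the following (assuming the sysconfig file is the one mentioned above; on a stock EL7 host it may simply be /etc/sysconfig/libvirt-guests without the .conf suffix, so adjust the path to what host-deploy actually wrote):

# stop libvirt-guests so it cannot shut the bootstrap VM down during the next attempt
systemctl stop libvirt-guests

# keep a backup, then comment out whatever host-deploy configured there
cp /etc/sysconfig/libvirt-guests.conf /etc/sysconfig/libvirt-guests.conf.bak
sed -i 's/^\([A-Z_]\+=\)/#\1/' /etc/sysconfig/libvirt-guests.conf

# then start the deployment again, re-using the answer file as before
hosted-engine --deploy --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-20180129164431.conf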
> If I understood correctly, it seems that libvirtd took charge of the IP assignment, using the default 192.168.122.x network, while my host and my engine should be on 10.4.4.x...??

This is absolutely fine.
Let me explain: with the new ansible-based flow we completely reversed the hosted-engine deployment flow.
In the past, hosted-engine-setup directly prepared the host, the storage, the network and a VM in advance via vdsm, and the user then had to wait for the engine to auto-import everything at the end, with a lot of possible issues in the middle.

Now hosted-engine-setup, doing everything via ansible, bootstraps a local VM on local storage over the default NATted libvirt network (that's why you temporarily see that address) and deploys ovirt-engine there.
Then hosted-engine-setup uses the engine running on the bootstrap local VM to set up everything else (storage, network, VM...) using the well-known and tested engine APIs.
Only at the end does it migrate the disk of the local VM onto the disk created by the engine on the shared storage, and ovirt-ha-agent will boot the engine VM from there as usual.
More than that, at this point we don't need auto-import code on the engine side, since all the involved entities are already known by the engine because it created them.
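For what it's worth, while the bootstrap local VM is running you can watch it getting that temporary address exactly the way the setup itself does (the MAC address below is the one from the log above; substitute your own):

# the bootstrap VM shows up as a plain libvirt guest on the host
virsh -r list --all

# its DHCP lease on the default (virbr0, 192.168.122.0/24) NATted network
virsh -r net-dhcp-leases default | grep -i 00:16:3e:15:7b:27 | awk '{ print $5 }' | cut -f1 -d'/'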
> Currently on host, after the failed deploy, I have:
>
> # brctl show
> bridge name     bridge id               STP enabled     interfaces
> ;vdsmdummy;     8000.000000000000       no
> ovirtmgmt       8000.001a4a17015d       no              eth0
> virbr0          8000.52540084b832       yes             virbr0-nic
>
> BTW: on the host I have the network managed by NetworkManager. It is supported now in upcoming 4.2.1, isn't it?

Yes, it is.

> Gianluca