Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

It fails too: I made sure PermitTunnel=yes is set in the sshd config, but when I try to connect to the forwarded port I get the following error in the ssh session opened on the host:

[gpavese@sheepora-X230 ~]$ ssh -v -L 5900:vs-inf-int-kvm-fr-301-210.hostics.fr:5900 root@vs-inf-int-kvm-fr-301-210.hostics.fr
...
[root@vs-inf-int-kvm-fr-301-210 ~]# debug1: channel 3: free: direct-tcpip: listening port 5900 for vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from ::1 port 42144 to ::1 port 5900, nchannels 4
debug1: Connection to port 5900 forwarding to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900 requested.
debug1: channel 3: new [direct-tcpip]
channel 3: open failed: connect failed: Connection refused
debug1: channel 3: free: direct-tcpip: listening port 5900 for vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from 127.0.0.1 port 32778 to 127.0.0.1 port 5900, nchannels 4

and in journalctl:

févr. 25 14:55:38 vs-inf-int-kvm-fr-301-210.hostics.fr sshd[19595]: error: connect_to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900: failed.

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group

On Mon, Feb 25, 2019 at 10:44 PM Simone Tiraboschi <stirabos@redhat.com> wrote:
On Mon, Feb 25, 2019 at 2:35 PM Guillaume Pavese <guillaume.pavese@interactiv-group.com> wrote:
I made sure of everything and even stopped firewalld but still can't connect:

[root@vs-inf-int-kvm-fr-301-210 ~]# cat /var/run/libvirt/qemu/HostedEngineLocal.xml
<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
  <listen type='address' address='127.0.0.1' fromConfig='1' autoGenerated='no'/>

[root@vs-inf-int-kvm-fr-301-210 ~]# netstat -pan | grep 59
tcp        0      0 127.0.0.1:5900          0.0.0.0:*               LISTEN      13376/qemu-kvm
I suggest trying ssh tunneling; run:
ssh -L 5900:vs-inf-int-kvm-fr-301-210.hostics.fr:5900 root@vs-inf-int-kvm-fr-301-210.hostics.fr
and then remote-viewer vnc://localhost:5900
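Since the VM's VNC server is listening on 127.0.0.1 only, it may be safer to point the tunnel at the host's loopback address rather than at the public hostname. A minimal sketch, assuming the bootstrap VM is still exposing display :0 (port 5900) on the host:

# on the laptop: forward local port 5900 to the host's loopback, where qemu listens
ssh -N -L 5900:127.0.0.1:5900 root@vs-inf-int-kvm-fr-301-210.hostics.fr
# in a second terminal on the laptop
remote-viewer vnc://127.0.0.1:5900
# on the host, to confirm which display the bootstrap VM is actually using
virsh -r vncdisplay HostedEngineLocal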
[root@vs-inf-int-kvm-fr-301-210 ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
févr. 25 14:24:03 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Stopped firewalld - dynamic firewall daemon.

From my laptop:
[gpavese@sheepora-X230 ~]$ telnet vs-inf-int-kvm-fr-301-210.hostics.fr 5900
Trying 10.199.210.11...
[nothing gets through...]
^C

To double-check:
[gpavese@sheepora-X230 ~]$ telnet vs-inf-int-kvm-fr-301-210.hostics.fr 9090
Trying 10.199.210.11...
Connected to vs-inf-int-kvm-fr-301-210.hostics.fr.
Escape character is '^]'.

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
On Mon, Feb 25, 2019 at 10:24 PM Parth Dhanjal <dparth@redhat.com> wrote:
Hey!
You can check under /var/run/libvirt/qemu/HostedEngine.xml and search for 'vnc'. From there you can look up the port on which the HE VM is available and connect to it.
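A quick way to pull that information out on the host is with read-only virsh calls. A sketch; note the bootstrap domain may show up as HostedEngineLocal rather than HostedEngine at this stage:

virsh -r list --all                                          # find the exact domain name
virsh -r dumpxml HostedEngineLocal | grep -A1 "type='vnc'"   # shows the <graphics> port and listen address
virsh -r vncdisplay HostedEngineLocal                        # prints the display, e.g. 127.0.0.1:0 -> port 5900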
On Mon, Feb 25, 2019 at 6:47 PM Guillaume Pavese <guillaume.pavese@interactiv-group.com> wrote:
1) I am running in a nested env, but under libvirt/KVM on remote CentOS 7.4 hosts.

Please advise how to connect with VNC to the local HE VM. I see it's running, but this is on a remote host, not my local machine:

qemu 13376 100 3.7 17679424 845216 ? Sl 12:46 85:08 /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 6fe7c1c3-ea93-4343-a385-0d9e14bb563a -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=27,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,strict=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/tmp/localvmgmyYik/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:3e:fe:28,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=31,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
2) [root@vs-inf-int-kvm-fr-301-210 ~]# cat /etc/libvirt/qemu/networks/default.xml
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit default
or other application using the libvirt API.
-->
<network>
  <name>default</name>
  <uuid>ba7bbfc8-28b8-459e-a42d-c2d6218e2cb6</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:e5:fe:3b'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
On Mon, Feb 25, 2019 at 9:57 PM Simone Tiraboschi <stirabos@redhat.com> wrote:
On Mon, Feb 25, 2019 at 1:14 PM Guillaume Pavese <guillaume.pavese@interactiv-group.com> wrote:
HE deployment with "hosted-engine --deploy" fails at TASK [ovirt.hosted_engine_setup : Get local VM IP]

See the following error:

2019-02-25 12:46:50,154+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Get local VM IP]
2019-02-25 12:55:26,823+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:103 {u'_ansible_parsed': True, u'stderr_lines': [], u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", u'end': u'2019-02-25 12:55:26.666925', u'_ansible_no_log': False, u'stdout': u'', u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"virsh -r net-dhcp-leases default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'start': u'2019-02-25 12:55:26.584686', u'attempts': 50, u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.082239', u'stdout_lines': []}
2019-02-25 12:55:26,924+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:107 fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.082239", "end": "2019-02-25 12:55:26.666925", "rc": 0, "start": "2019-02-25 12:55:26.584686", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
Here we are just waiting for the bootstrap engine VM to fetch an IP address from the default libvirt network over DHCP, but in your case it never happened. Possible issues: something went wrong in the bootstrap process for the engine VM, or the default libvirt network is not correctly configured.

1. Can you try to reach the engine VM via VNC and check what's happening there? (Another question: are you running it nested? AFAIK it will not work if nested over ESXi.)
2. Can you please share the output of cat /etc/libvirt/qemu/networks/default.xml
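For reference, both points can be checked quickly from the host with read-only virsh calls. A sketch, assuming the libvirt network is still named "default" and bridged on virbr0:

virsh -r net-list --all        # "default" should be listed as active (and normally autostarted)
virsh -r net-dumpxml default   # should show the 192.168.122.x range with a <dhcp> section
ip addr show virbr0            # the bridge should carry 192.168.122.1/24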
Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group

Something was definitely wrong; as indicated, the qemu process for guest=HostedEngineLocal was running but the disk file did not exist anymore... No surprise I could not connect.

I am retrying.

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group

I retried after killing the remaining qemu process and doing ovirt-hosted-engine-cleanup. The new attempt failed again at the same step. Then, after it fails, it cleans up the temporary files (and the VM disk) but qemu still runs!:

[ INFO ] TASK [ovirt.hosted_engine_setup : Get local VM IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:6c:e8:f9 | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.092436", "end": "2019-02-25 16:09:38.863263", "rc": 0, "start": "2019-02-25 16:09:38.770827", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources ...
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] ok: [localhost]
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20190225161011.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.

[root@vs-inf-int-kvm-fr-301-210 ~]# ps aux | grep qemu
root 4021 0.0 0.0 24844 1788 ? Ss févr.22 0:00 /usr/bin/qemu-ga --method=virtio-serial --path=/dev/virtio-ports/org.qemu.guest_agent.0 --blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status -F/etc/qemu-ga/fsfreeze-hook
qemu 26463 22.9 4.8 17684512 1088844 ? Sl 16:01 3:09 /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 316eca5f-81de-4a0b-af1f-58f910402a8e -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=27,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,strict=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/tmp/localvmdRIozH/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:6c:e8:f9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=31,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
root 28416 0.0 0.0 112712 980 pts/3 S+ 16:14 0:00 grep --color=auto qemu

Before the first error, while the VM was running for sure and the disk was there, I also unsuccessfully tried to connect to it with VNC and got the same error I got before:

[root@vs-inf-int-kvm-fr-301-210 ~]# debug1: Connection to port 5900 forwarding to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900 requested.
debug1: channel 3: new [direct-tcpip]
channel 3: open failed: connect failed: Connection refused
debug1: channel 3: free: direct-tcpip: listening port 5900 for vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from 127.0.0.1 port 37002 to 127.0.0.1 port 5900, nchannels 4

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
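For reference, the cleanup I do between attempts looks roughly like this. A sketch: the domain name comes from the ps output above, and if libvirt has already forgotten the transient domain, the leftover qemu-kvm PID has to be killed by hand:

virsh list --all                  # check whether HostedEngineLocal is still known to libvirt
virsh destroy HostedEngineLocal   # stop it if it is
# otherwise: kill <qemu-kvm PID taken from ps aux | grep qemu-kvm>
ovirt-hosted-engine-cleanup       # then clean up before retrying hosted-engine --deploy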

OK, try this: temporarily edit /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml around line 120 and change the "Get local VM IP" task from "retries: 50" to "retries: 500", so that you have more time to debug it.
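If it is easier, the change can be scripted. A minimal sketch, assuming the task file sits at the path above and the retries line has no trailing spaces (keep a backup and revert it after debugging):

F=/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml
cp "$F" "$F.bak"                            # keep the original
sed -i 's/retries: 50$/retries: 500/' "$F"  # bump the retry count on the "Get local VM IP" task
grep -n 'retries:' "$F"                     # confirm the new value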

I did that but no success yet. I see that the "Get local VM IP" task tries the following:

virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{ print $5 }' | cut -f1 -d'/'

However, while the task is running and the VM is running in qemu, "virsh -r net-dhcp-leases default" never returns anything:

[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
 Expiry Time   MAC address   Protocol   IP address   Hostname   Client ID or DUID
-------------------------------------------------------------------------------------------------------------------

[root@vs-inf-int-kvm-fr-301-210 ~]#

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
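A few things worth checking on the host while the task keeps retrying, to see whether the VM ever sends a DHCP request. A sketch: the lease-file path varies between libvirt versions, and the domain name is taken from the earlier ps output:

ps aux | grep '[d]nsmasq'                     # the dnsmasq instance serving the default network should be running
cat /var/lib/libvirt/dnsmasq/virbr0.status    # leases handed out on virbr0 (older libvirt uses default.leases instead)
tcpdump -ni virbr0 port 67 or port 68         # watch whether DHCP traffic from the VM's MAC ever reaches the bridge
virsh -r domiflist HostedEngineLocal          # confirm the VM NIC is really attached to the default network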
OK, try this: temporary edit /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml around line 120 and edit tasks "Get local VM IP" changing from "retries: 50" to "retries: 500" so that you have more time to debug it
On Mon, Feb 25, 2019 at 4:20 PM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
I retried after killing the remaining qemu process and doing ovirt-hosted-engine-cleanup The new attempt failed again at the same step. Then after it fails, it cleans the temporary files (and vm disk) but *qemu still runs!* :
[ INFO ] TASK [ovirt.hosted_engine_setup : Get local VM IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:6c:e8:f9 | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.092436", "end": "2019-02-25 16:09:38.863263", "rc": 0, "start": "2019-02-25 16:09:38.770827", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} [ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.hosted_engine_setup : Notify the user about a failure] [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"} [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook [ INFO ] Stage: Clean up [ INFO ] Cleaning temporary resources ...
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM] [ INFO ] ok: [localhost] [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20190225161011.conf' [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination [ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
[root@vs-inf-int-kvm-fr-301-210 ~]# ps aux | grep qemu root 4021 0.0 0.0 24844 1788 ? Ss févr.22 0:00 /usr/bin/qemu-ga --method=virtio-serial --path=/dev/virtio-ports/org.qemu.guest_agent.0 --blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status -F/etc/qemu-ga/fsfreeze-hook qemu 26463 22.9 4.8 17684512 1088844 ? Sl 16:01 3:09 /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 316eca5f-81de-4a0b-af1f-58f910402a8e -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=27,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,strict=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/tmp/localvmdRIozH/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:6c:e8:f9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=31,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on root 28416 0.0 0.0 112712 980 pts/3 S+ 16:14 0:00 grep --color=auto qemu
Before the first Error, while the vm was running for sure and the disk was there, I also unsuccessfuly tried to connect to it with VNC and got the same error I got before :
[root@vs-inf-int-kvm-fr-301-210 ~]# debug1: Connection to port 5900 forwarding to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900 requested. debug1: channel 3: new [direct-tcpip] channel 3: open failed: connect failed: Connection refused debug1: channel 3: free: direct-tcpip: listening port 5900 for vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from 127.0.0.1 port 37002 to 127.0.0.1 port 5900, nchannels 4
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group
On Mon, Feb 25, 2019 at 11:57 PM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
Something was definitely wrong ; as indicated, qemu process for guest=HostedEngineLocal was running but the disk file did not exist anymore... No surprise I could not connect
I am retrying
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group

On Mon, Feb 25, 2019 at 5:50 PM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
I did that but no success yet.
I see that "Get local VM IP" task tries the following :
virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{ print $5 }' | cut -f1 -d'/'
However while the task is running, and vm running in qemu, "virsh -r net-dhcp-leases default" never returns anything :
Yes, I think libvirt will never provide a DHCP lease because the appliance OS never correctly completes the boot process. I'd suggest connecting to the running VM via VNC DURING the boot process to check what's wrong.
[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
 Expiry Time          MAC address        Protocol   IP address          Hostname   Client ID or DUID
-------------------------------------------------------------------------------------------------------------------
[root@vs-inf-int-kvm-fr-301-210 ~]#
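As a cross-check, the lease records written by libvirt's own dnsmasq instance can be inspected directly; if they show a lease that "virsh -r net-dhcp-leases default" does not, the problem is in lease reporting rather than in DHCP itself. The path below is the usual one for the default network (bridge virbr0) on CentOS 7 and the MAC is a placeholder, so treat this as a sketch:

    # Placeholder: use the MAC reported by the failing "Get local VM IP" task.
    HE_MAC='00:16:3e:6c:e8:f9'
    # Lease records kept by libvirt's dnsmasq for the default network:
    cat /var/lib/libvirt/dnsmasq/virbr0.status
    # Entry for the HE VM only, with a little surrounding context:
    grep -B1 -A4 -i "$HE_MAC" /var/lib/libvirt/dnsmasq/virbr0.status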
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group
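Since VNC to the bootstrap VM has been troublesome here, the serial console may be a simpler way to watch the appliance boot: the HostedEngineLocal guest is started with an isa-serial device backed by a pty (visible in the qemu command lines in this thread). Whether the appliance actually presents a console on ttyS0 is an assumption, so this may show nothing:

    # Attach to the bootstrap VM's serial console while it boots (press Ctrl+] to detach).
    virsh -c qemu:///system console HostedEngineLocal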
On Tue, Feb 26, 2019 at 12:33 AM Simone Tiraboschi <stirabos@redhat.com> wrote:
OK, try this: temporarily edit /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml around line 120, changing the "Get local VM IP" task from "retries: 50" to "retries: 500" so that you have more time to debug it.
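For reference, one way to apply that change from the shell; back up the file first, and note that the exact line and indentation may differ between ovirt-hosted-engine-setup versions, so verify with the grep afterwards:

    f=/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml
    cp -p "$f" "$f.bak"
    # Bump the retry count of the "Get local VM IP" task from 50 to 500.
    sed -i 's/retries: 50$/retries: 500/' "$f"
    grep -n 'retries:' "$f"    # confirm only the intended task was changed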
On Mon, Feb 25, 2019 at 4:20 PM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
I retried after killing the remaining qemu process and running ovirt-hosted-engine-cleanup. The new attempt failed again at the same step. Then, after it fails, it cleans up the temporary files (and the VM disk) but *qemu still runs!*:
[ INFO ] TASK [ovirt.hosted_engine_setup : Get local VM IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:6c:e8:f9 | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.092436", "end": "2019-02-25 16:09:38.863263", "rc": 0, "start": "2019-02-25 16:09:38.770827", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources ...
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] ok: [localhost]
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20190225161011.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
[root@vs-inf-int-kvm-fr-301-210 ~]# ps aux | grep qemu root 4021 0.0 0.0 24844 1788 ? Ss févr.22 0:00 /usr/bin/qemu-ga --method=virtio-serial --path=/dev/virtio-ports/org.qemu.guest_agent.0 --blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status -F/etc/qemu-ga/fsfreeze-hook qemu 26463 22.9 4.8 17684512 1088844 ? Sl 16:01 3:09 /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 316eca5f-81de-4a0b-af1f-58f910402a8e -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=27,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,strict=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/tmp/localvmdRIozH/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:6c:e8:f9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=31,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on root 28416 0.0 0.0 112712 980 pts/3 S+ 16:14 0:00 grep --color=auto qemu
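A leftover HostedEngineLocal process like the one above can usually be cleared through libvirt before retrying; a possible sequence is sketched below (the domain name is taken from the ps output, and "virsh destroy" is a hard power-off, so check the list first):

    # See whether libvirt still tracks the stale bootstrap VM.
    virsh -c qemu:///system list --all
    # Force it off if it is still running under libvirt.
    virsh -c qemu:///system destroy HostedEngineLocal 2>/dev/null || true
    # Fall back to killing the qemu process directly if libvirt no longer knows about it.
    pkill -f 'qemu-kvm.*HostedEngineLocal' || true
    ovirt-hosted-engine-cleanup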
Before the first error, while the VM was definitely running and the disk was there, I also unsuccessfully tried to connect to it with VNC and got the same error as before:
[root@vs-inf-int-kvm-fr-301-210 ~]#
debug1: Connection to port 5900 forwarding to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900 requested.
debug1: channel 3: new [direct-tcpip]
channel 3: open failed: connect failed: Connection refused
debug1: channel 3: free: direct-tcpip: listening port 5900 for vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from 127.0.0.1 port 37002 to 127.0.0.1 port 5900, nchannels 4
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group
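One possible explanation for the repeated "connect failed: Connection refused" above: with -L 5900:vs-inf-int-kvm-fr-301-210.hostics.fr:5900, sshd on the host tries to reach port 5900 on the host's public address, while qemu's VNC server is bound to 127.0.0.1 only, so the forwarded connection is refused. Pointing the forward at the loopback address instead should reach the listener; untested here, offered as a sketch:

    # Forward local port 5900 to 127.0.0.1:5900 as seen from the remote host.
    ssh -L 5900:127.0.0.1:5900 root@vs-inf-int-kvm-fr-301-210.hostics.fr
    # Then, from the laptop:
    remote-viewer vnc://localhost:5900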


On Mon, Feb 25, 2019 at 7:04 PM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
I still can't connect with VNC remotely, but locally with X forwarding it works. However, my connection has too much latency for that to be usable (I'm in Japan, my hosts are in France, ~250 ms ping).
But I could see that the VM is booted!
and in the host's logs there is:
févr. 25 18:51:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14719]: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 | awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None chdir=None stdin=None
févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: DHCPDISCOVER(virbr0) 00:16:3e:1d:4b:b6
févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: DHCPOFFER(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: DHCPREQUEST(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: DHCPACK(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6 vs-inf-int-ovt-fr-301-210
févr. 25 18:51:42 vs-inf-int-kvm-fr-301-210.hostics.fr python[14757]: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 | awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None chdir=None stdin=None
févr. 25 18:52:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14789]: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 | awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None chdir=None stdin=None
févr. 25 18:52:43 vs-inf-int-kvm-fr-301-210.hostics.fr python[14818]: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 | awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None chdir=None stdin=None
....
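Since dnsmasq clearly logs a DHCPACK for the VM's MAC, the leased address can also be pulled out of the journal as a stopgap while net-dhcp-leases stays empty (the MAC below is the one from the log above; adjust it for the current run):

    # Grab the most recent DHCPACK for the HE VM's MAC and extract the IPv4 address.
    journalctl --no-pager | grep -i 'DHCPACK(virbr0).*00:16:3e:1d:4b:b6' | tail -n1 \
      | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}'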
ssh to the VM works too:
[root@vs-inf-int-kvm-fr-301-210 ~]# ssh root@192.168.122.14
The authenticity of host '192.168.122.14 (192.168.122.14)' can't be established.
ECDSA key fingerprint is SHA256:+/pUzTGVA4kCyICb7XgqrxWYYkqzmDjVmdAahiBFgOQ.
ECDSA key fingerprint is MD5:4b:ef:ff:4a:7c:1a:af:c2:af:4a:0f:14:a3:c5:31:fb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.122.14' (ECDSA) to the list of known hosts.
root@192.168.122.14's password:
[root@vs-inf-int-ovt-fr-301-210 ~]#
But the check that the playbook runs still comes back empty:
[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
 Expiry Time          MAC address        Protocol   IP address          Hostname   Client ID or DUID
-------------------------------------------------------------------------------------------------------------------
[root@vs-inf-int-kvm-fr-301-210 ~]#
This smells like a bug to me: nothing at all in the output of "virsh -r net-dhcp-leases default"?
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group
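One more thing worth ruling out before calling it a lease-reporting bug: that the MAC address the task greps for really is the MAC libvirt assigned to the bootstrap VM. A quick comparison, with the domain name and guest IP taken from this thread (the ssh step only works because the login to 192.168.122.14 already succeeded above):

    # MAC address libvirt assigned to the bootstrap VM's NIC:
    virsh -c qemu:///system domiflist HostedEngineLocal
    # MAC and address as seen from inside the guest:
    ssh root@192.168.122.14 'ip -o link show; ip -o -4 addr show'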

No, as indicated previously, still :

[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
 Expiry Time          MAC address        Protocol   IP address        Hostname        Client ID or DUID
-------------------------------------------------------------------------------------------------------------------

[root@vs-inf-int-kvm-fr-301-210 ~]#

I did not see any relevant log on the HE vm. Is there something I should look for there?

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group

On Tue, Feb 26, 2019 at 3:12 AM Simone Tiraboschi <stirabos@redhat.com> wrote:
On Mon, Feb 25, 2019 at 7:04 PM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
I still can't connect with VNC remotely, but it works locally with X forwarding. However, my connection's latency is too high for that to be usable (I'm in Japan, my hosts are in France, ~250 ms ping)
But I could see that the VM is booted!
and in Hosts logs there is :
févr. 25 18:51:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14719]: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 | awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None chdir=None stdin=None
févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: DHCPDISCOVER(virbr0) 00:16:3e:1d:4b:b6
févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: DHCPOFFER(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: DHCPREQUEST(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: DHCPACK(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6 vs-inf-int-ovt-fr-301-210
févr. 25 18:51:42 vs-inf-int-kvm-fr-301-210.hostics.fr python[14757]: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 | awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None chdir=None stdin=None
févr. 25 18:52:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14789]: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 | awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None chdir=None stdin=None
févr. 25 18:52:43 vs-inf-int-kvm-fr-301-210.hostics.fr python[14818]: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 | awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None chdir=None stdin=None
....
ssh to the vm works too :
[root@vs-inf-int-kvm-fr-301-210 ~]# ssh root@192.168.122.14
The authenticity of host '192.168.122.14 (192.168.122.14)' can't be established.
ECDSA key fingerprint is SHA256:+/pUzTGVA4kCyICb7XgqrxWYYkqzmDjVmdAahiBFgOQ.
ECDSA key fingerprint is MD5:4b:ef:ff:4a:7c:1a:af:c2:af:4a:0f:14:a3:c5:31:fb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.122.14' (ECDSA) to the list of known hosts.
root@192.168.122.14's password:
[root@vs-inf-int-ovt-fr-301-210 ~]#
But the test that the playbook tries still fails with empty result :
[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default Expiry Time MAC address Protocol IP address Hostname Client ID or DUID
-------------------------------------------------------------------------------------------------------------------
[root@vs-inf-int-kvm-fr-301-210 ~]#
This smells like a bug to me: nothing at all in the output of virsh -r net-dhcp-leases default ?
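(Side note: a possible cross-check here, assuming the default network is backed by virbr0 as the dnsmasq lines above suggest, is to look directly at the lease database that libvirt's dnsmasq writes and that "virsh net-dhcp-leases" reads; if the DHCPACK shown above never lands in that file, the problem is on the libvirt/dnsmasq side rather than inside the appliance:)

# state files libvirt keeps for its networks
ls -l /var/lib/libvirt/dnsmasq/
# custom lease database for the default network; after the DHCPACK above it
# should contain a JSON entry with the 00:16:3e:... MAC and 192.168.122.14
cat /var/lib/libvirt/dnsmasq/virbr0.status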
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group
On Tue, Feb 26, 2019 at 1:54 AM Simone Tiraboschi <stirabos@redhat.com> wrote:
On Mon, Feb 25, 2019 at 5:50 PM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
I did that but no success yet.
I see that "Get local VM IP" task tries the following :
virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{ print $5 }' | cut -f1 -d'/'
However, while the task is running and the VM is running in qemu, "virsh -r net-dhcp-leases default" never returns anything :
Yes, I think that libvirt will never provide a DHCP lease because the appliance OS never correctly completes the boot process.
I'd suggest connecting to the running VM via VNC DURING the boot process and checking what's wrong.
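(If VNC stays unreachable, the serial console may be a workable alternative for watching the boot, since the domain is started with an isa-serial device; this is only a sketch and assumes the bootstrap VM is still registered under the name HostedEngineLocal and that the appliance prints something on the serial line:)

# confirm the transient domain name
virsh -r list --all
# attach to its serial console while it boots (detach with Ctrl+])
virsh console HostedEngineLocal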
[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default Expiry Time MAC address Protocol IP address Hostname Client ID or DUID
-------------------------------------------------------------------------------------------------------------------
[root@vs-inf-int-kvm-fr-301-210 ~]#
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group
On Tue, Feb 26, 2019 at 12:33 AM Simone Tiraboschi <stirabos@redhat.com> wrote:
OK, try this: temporarily edit /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml around line 120 and change the "Get local VM IP" task from "retries: 50" to "retries: 500" so that you have more time to debug it.
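(For convenience, something like the following should make that change non-interactively; please double-check the file afterwards to be sure only the intended "retries" line was touched:)

# back up the playbook and bump the retry count of the "Get local VM IP" task
sed -i.bak 's/retries: 50$/retries: 500/' \
    /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml
# verify the edit
grep -n 'retries:' /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml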
On Mon, Feb 25, 2019 at 4:20 PM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
I retried after killing the remaining qemu process and doing ovirt-hosted-engine-cleanup. The new attempt failed again at the same step. Then, after it fails, it cleans the temporary files (and the VM disk) but *qemu still runs!* :
[ INFO ] TASK [ovirt.hosted_engine_setup : Get local VM IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:6c:e8:f9 | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.092436", "end": "2019-02-25 16:09:38.863263", "rc": 0, "start": "2019-02-25 16:09:38.770827", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
...
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] ok: [localhost]
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20190225161011.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
[root@vs-inf-int-kvm-fr-301-210 ~]# ps aux | grep qemu root 4021 0.0 0.0 24844 1788 ? Ss févr.22 0:00 /usr/bin/qemu-ga --method=virtio-serial --path=/dev/virtio-ports/org.qemu.guest_agent.0 --blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status -F/etc/qemu-ga/fsfreeze-hook qemu 26463 22.9 4.8 17684512 1088844 ? Sl 16:01 3:09 /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 316eca5f-81de-4a0b-af1f-58f910402a8e -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=27,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,strict=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/tmp/localvmdRIozH/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:6c:e8:f9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=31,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on root 28416 0.0 0.0 112712 980 pts/3 S+ 16:14 0:00 grep --color=auto qemu
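(When a run leaves the bootstrap qemu process behind like this, it may be cleaner to let libvirt stop it than to kill the PID by hand, assuming the transient HostedEngineLocal domain is still registered, before re-running ovirt-hosted-engine-cleanup and the deployment:)

# see whether the local bootstrap VM is still known to libvirt
virsh list --all
# force it off; being transient, the domain disappears together with the process
virsh destroy HostedEngineLocal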
Before the first Error, while the vm was running for sure and the disk was there, I also unsuccessfully tried to connect to it with VNC and got the same error as before :

[root@vs-inf-int-kvm-fr-301-210 ~]# debug1: Connection to port 5900 forwarding to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900 requested.
debug1: channel 3: new [direct-tcpip]
channel 3: open failed: connect failed: Connection refused
debug1: channel 3: free: direct-tcpip: listening port 5900 for vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from 127.0.0.1 port 37002 to 127.0.0.1 port 5900, nchannels 4
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group
On Mon, Feb 25, 2019 at 11:57 PM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
> Something was definitely wrong ; as indicated, qemu process for guest=HostedEngineLocal was running but the disk file did not exist anymore...
> No surprise I could not connect
>
> I am retrying

On Mon, Feb 25, 2019 at 7:15 PM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
No, as indicated previously, still :
[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default Expiry Time MAC address Protocol IP address Hostname Client ID or DUID
-------------------------------------------------------------------------------------------------------------------
[root@vs-inf-int-kvm-fr-301-210 ~]#
I did not see any relevant log on the HE vm. Is there something I should look for there?
This smells really bad: I'd suggest checking /var/log/messages and /var/log/libvirt/qemu/HostedEngineLocal.log for libvirt errors; if nothing is there, can I ask you to try re-executing with libvirt debug logs enabled (edit /etc/libvirt/libvirtd.conf)? Honestly, I'm not able to reproduce it on my side.
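(For the libvirt debug logs, one commonly used set of options in /etc/libvirt/libvirtd.conf is sketched below, followed by a restart of the daemon; the filter list is just an example and can be adjusted:)

# append example debug logging settings to /etc/libvirt/libvirtd.conf
echo 'log_filters="1:qemu 1:libvirt 1:conf 1:security 3:event 3:json 3:file 3:object"' >> /etc/libvirt/libvirtd.conf
echo 'log_outputs="1:file:/var/log/libvirt/libvirtd.log"' >> /etc/libvirt/libvirtd.conf
# make the new settings effective
systemctl restart libvirtd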

journalctl -u libvirtd.service :

févr. 25 18:47:24 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Stopping Virtualization daemon...
févr. 25 18:47:24 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Stopped Virtualization daemon.
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Starting Virtualization daemon...
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Started Virtualization daemon.
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq[6310]: read /etc/hosts - 4 addresses
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq[6310]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: read /var/lib/libvirt/dnsmasq/default.hostsfile
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.739+0000: 13551: info : libvirt version: 4.5.0, package: 10.el7_6.4 (CentOS BuildSystem <http://bugs.centos.org>, 2019-01-29-17:31:22, x86-01.bsys.centos.org)
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.739+0000: 13551: info : hostname: vs-inf-int-kvm-fr-301-210.hostics.fr
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.739+0000: 13551: error : virDirOpenInternal:2936 : cannot open directory '/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : storageDriverAutostartCallback:209 : internal error: Failed to autostart storage pool '15023c8a-e3a7-4851-a97d-3b90996b423b': cannot open directory '/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : virDirOpenInternal:2936 : cannot open directory '/var/tmp/localvmdRIozH': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : storageDriverAutostartCallback:209 : internal error: Failed to autostart storage pool 'localvmdRIozH': cannot open directory '/var/tmp/localvmdRIozH': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : virDirOpenInternal:2936 : cannot open directory '/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : storageDriverAutostartCallback:209 : internal error: Failed to autostart storage pool '15023c8a-e3a7-4851-a97d-3b90996b423b-1': cannot open directory '/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : virDirOpenInternal:2936 : cannot open directory '/var/tmp/localvmgmyYik': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : storageDriverAutostartCallback:209 : internal error: Failed to autostart storage pool 'localvmgmyYik': cannot open directory '/var/tmp/localvmgmyYik': No such file or directory

/var/log/libvirt/qemu/HostedEngineLocal.log :

2019-02-25 17:50:08.694+0000: starting up libvirt version: 4.5.0, package: 10.el7_6.4 (CentOS BuildSystem <http://bugs.centos.org>, 2019-01-29-17:31:22, x86-01.bsys.centos.org), qemu version: 2.12.0qemu-kvm-ev-2.12.0-18.el7_6.3.1, kernel: 3.10.0-957.5.1.el7.x86_64, hostname: vs-inf-int-kvm-fr-301-210.hostics.fr
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 8ba608c8-b721-4b5b-b839-b62f5e919814 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=27,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,strict=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/tmp/localvmlF5yTM/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/tmp/localvmlF5yTM/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:1d:4b:b6,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=31,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
2019-02-25T17:50:08.904663Z qemu-kvm: -chardev pty,id=charserial0: char device redirected to /dev/pts/4 (label charserial0)
2019-02-25T17:50:08.911239Z qemu-kvm: warning: host doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10]
2019-02-25T17:50:08.917723Z qemu-kvm: warning: host doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10]
2019-02-25T17:50:08.918494Z qemu-kvm: warning: host doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10]
2019-02-25T17:50:08.919217Z qemu-kvm: warning: host doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10]

I guess there is something about those last warnings?
It should be noted that I previously successfully deployed oVirt 4.2 in the same nested environment.
Running libvirt in debug mode will need to wait until tomorrow; my night is already cut to nothing much anymore XD

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group

On Tue, Feb 26, 2019 at 3:33 AM Simone Tiraboschi <stirabos@redhat.com> wrote:
This smells really bad: I'd suggest checking /var/log/messages and /var/log/libvirt/qemu/HostedEngineLocal.log for libvirt errors; if nothing is there, can I ask you to try re-executing with libvirt debug logs enabled (edit /etc/libvirt/libvirtd.conf)?
Honestly, I'm not able to reproduce it on my side.
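(On the CPUID.07H:EBX.invpcid warnings above: they generally just mean that the Haswell-noTSX CPU model asks for a flag the nested host's vCPU does not expose, and qemu masks it out, so they are more likely a side effect of the nested setup than the root cause here. A quick way to see what the L1 host actually exposes, assuming Intel hardware as the Haswell model suggests:)

# is nested virtualization enabled at this level?
cat /sys/module/kvm_intel/parameters/nested
# which of the relevant CPU flags are visible on this (nested) host?
grep -oE 'vmx|invpcid' /proc/cpuinfo | sort | uniq -c
# what the virtualization layer reports
lscpu | grep -i -E 'virtuali|hypervisor'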
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group
On Tue, Feb 26, 2019 at 3:12 AM Simone Tiraboschi <stirabos@redhat.com> wrote:
On Mon, Feb 25, 2019 at 7:04 PM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
I still can't connect with VNC remotely but locally with X forwarding it works. However my connection has too high latency for that to be usable (I'm in Japan, my hosts in France, ~250 ms ping)
But I could see that the VM is booted!
and in Hosts logs there is :
févr. 25 18:51:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14719]: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 | awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None chdir=None stdin=None févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: DHCPDISCOVER(virbr0) 00:16:3e:1d:4b:b6 févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: DHCPOFFER(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6 févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: DHCPREQUEST(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6 févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: DHCPACK(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6 vs-inf-int-ovt-fr-301-210 févr. 25 18:51:42 vs-inf-int-kvm-fr-301-210.hostics.fr python[14757]: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 | awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None chdir=None stdin=None févr. 25 18:52:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14789]: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 | awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None chdir=None stdin=None févr. 25 18:52:43 vs-inf-int-kvm-fr-301-210.hostics.fr python[14818]: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 | awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None chdir=None stdin=None ....
ssh to the vm works too :
[root@vs-inf-int-kvm-fr-301-210 ~]# ssh root@192.168.122.14
The authenticity of host '192.168.122.14 (192.168.122.14)' can't be established.
ECDSA key fingerprint is SHA256:+/pUzTGVA4kCyICb7XgqrxWYYkqzmDjVmdAahiBFgOQ.
ECDSA key fingerprint is MD5:4b:ef:ff:4a:7c:1a:af:c2:af:4a:0f:14:a3:c5:31:fb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.122.14' (ECDSA) to the list of known hosts.
root@192.168.122.14's password:
[root@vs-inf-int-ovt-fr-301-210 ~]#
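Since ssh into the engine VM works, a couple of in-guest checks could confirm the address it obtained and whether cloud-init finished (hypothetical commands, not from the original thread):

# run inside the engine VM (192.168.122.14 in this attempt)
ip -4 addr show        # confirm the DHCP address on the guest NIC
cloud-init status      # check whether cloud-init completed, if the command is available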
But the test that the playbook tries still fails with empty result :
[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
 Expiry Time    MAC address    Protocol    IP address    Hostname    Client ID or DUID
-------------------------------------------------------------------------------------------------------------------
[root@vs-inf-int-kvm-fr-301-210 ~]#
This smells like a bug to me: and nothing at all in the output of "virsh -r net-dhcp-leases default"?
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group
On Tue, Feb 26, 2019 at 1:54 AM Simone Tiraboschi <stirabos@redhat.com> wrote:
On Mon, Feb 25, 2019 at 5:50 PM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
I did that but no success yet.
I see that "Get local VM IP" task tries the following :
virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{ print $5 }' | cut -f1 -d'/'
However, while the task is running and the VM is running in qemu, "virsh -r net-dhcp-leases default" never returns anything:
Yes, I think that libvirt will never provide a DHCP lease since the appliance OS never correctly completes the boot process. I'd suggest connecting to the running VM via VNC DURING the boot process and checking what's wrong.
[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
 Expiry Time    MAC address    Protocol    IP address    Hostname    Client ID or DUID
-------------------------------------------------------------------------------------------------------------------
[root@vs-inf-int-kvm-fr-301-210 ~]#
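For what it's worth, the same check the task performs can be watched by hand while the deployment sits in its retry loop; this is just a hand-rolled wrapper around the command quoted above (the MAC value and the 10-second sleep are placeholders to adjust for the current attempt):

MAC=00:16:3e:1d:4b:b6   # replace with the he_vm_mac_addr of the current local VM
for i in $(seq 1 50); do
    IP=$(virsh -r net-dhcp-leases default | grep -i "$MAC" | awk '{ print $5 }' | cut -f1 -d'/')
    if [ -n "$IP" ]; then echo "lease found: $IP"; break; fi
    sleep 10
done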
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group
On Tue, Feb 26, 2019 at 12:33 AM Simone Tiraboschi < stirabos@redhat.com> wrote:
> OK, try this: > temporary > edit /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml > around line 120 > and edit tasks "Get local VM IP" > changing from "retries: 50" to "retries: 500" so that you have more > time to debug it > > > > On Mon, Feb 25, 2019 at 4:20 PM Guillaume Pavese < > guillaume.pavese@interactiv-group.com> wrote: > >> I retried after killing the remaining qemu process and >> doing ovirt-hosted-engine-cleanup >> The new attempt failed again at the same step. Then after it fails, >> it cleans the temporary files (and vm disk) but *qemu still runs!* >> : >> >> [ INFO ] TASK [ovirt.hosted_engine_setup : Get local VM IP] >> >> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, >> "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i >> 00:16:3e:6c:e8:f9 | awk '{ print $5 }' | cut -f1 -d'/'", "delta": >> "0:00:00.092436", "end": "2019-02-25 16:09:38.863263", "rc": 0, "start": >> "2019-02-25 16:09:38.770827", "stderr": "", "stderr_lines": [], "stdout": >> "", "stdout_lines": []} >> [ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks] >> [ INFO ] ok: [localhost] >> [ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir] >> [ INFO ] changed: [localhost] >> [ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry >> in /etc/hosts for the local VM] >> [ INFO ] ok: [localhost] >> [ INFO ] TASK [ovirt.hosted_engine_setup : Notify the user about a >> failure] >> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": >> "The system may not be provisioned according to the playbook results: >> please check the logs for the issue, fix accordingly or re-deploy from >> scratch.\n"} >> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >> ansible-playbook >> [ INFO ] Stage: Clean up >> [ INFO ] Cleaning temporary resources >> ... >> >> [ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir] >> [ INFO ] ok: [localhost] >> [ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry >> in /etc/hosts for the local VM] >> [ INFO ] ok: [localhost] >> [ INFO ] Generating answer file >> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20190225161011.conf' >> [ INFO ] Stage: Pre-termination >> [ INFO ] Stage: Termination >> [ ERROR ] Hosted Engine deployment failed: please check the logs >> for the issue, fix accordingly or re-deploy from scratch. >> >> >> >> [root@vs-inf-int-kvm-fr-301-210 ~]# ps aux | grep qemu >> root 4021 0.0 0.0 24844 1788 ? Ss févr.22 0:00 >> /usr/bin/qemu-ga --method=virtio-serial >> --path=/dev/virtio-ports/org.qemu.guest_agent.0 >> --blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status >> -F/etc/qemu-ga/fsfreeze-hook >> qemu 26463 22.9 4.8 17684512 1088844 ? 
Sl 16:01 3:09 >> /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S >> -object >> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes >> -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu >> Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp >> 4,sockets=4,cores=1,threads=1 -uuid 316eca5f-81de-4a0b-af1f-58f910402a8e >> -no-user-config -nodefaults -chardev >> socket,id=charmonitor,fd=27,server,nowait -mon >> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown >> -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot >> menu=off,strict=on -device >> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive >> file=/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0 >> -device >> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 >> -drive >> file=/var/tmp/localvmdRIozH/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on >> -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev >> tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device >> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:6c:e8:f9,bus=pci.0,addr=0x3 >> -chardev pty,id=charserial0 -device >> isa-serial,chardev=charserial0,id=serial0 -chardev >> socket,id=charchannel0,fd=31,server,nowait -device >> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 >> -vnc 127.0.0.1:0 -device >> VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -object >> rng-random,id=objrng0,filename=/dev/random -device >> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox >> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny >> -msg timestamp=on >> root 28416 0.0 0.0 112712 980 pts/3 S+ 16:14 0:00 >> grep --color=auto qemu >> >> >> Before the first Error, while the vm was running for sure and the >> disk was there, I also unsuccessfuly tried to connect to it with VNC and >> got the same error I got before : >> >> [root@vs-inf-int-kvm-fr-301-210 ~]# debug1: Connection to port >> 5900 forwarding to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900 >> requested. >> debug1: channel 3: new [direct-tcpip] >> channel 3: open failed: connect failed: Connection refused >> debug1: channel 3: free: direct-tcpip: listening port 5900 for >> vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from >> 127.0.0.1 port 37002 to 127.0.0.1 port 5900, nchannels 4 >> >> >> Guillaume Pavese >> Ingénieur Système et Réseau >> Interactiv-Group >> >> >> On Mon, Feb 25, 2019 at 11:57 PM Guillaume Pavese < >> guillaume.pavese@interactiv-group.com> wrote: >> >>> Something was definitely wrong ; as indicated, qemu process >>> for guest=HostedEngineLocal was running but the disk file did not exist >>> anymore... >>> No surprise I could not connect >>> >>> I am retrying >>> >>> >>> Guillaume Pavese >>> Ingénieur Système et Réseau >>> Interactiv-Group >>> >>> >>> On Mon, Feb 25, 2019 at 11:15 PM Guillaume Pavese < >>> guillaume.pavese@interactiv-group.com> wrote: >>> >>>> It fails too : >>>> I made sure PermitTunnel=yes in sshd config but when I try to >>>> connect to the forwarded port I get the following error on the openened >>>> host ssh session : >>>> >>>> [gpavese@sheepora-X230 ~]$ ssh -v -L 5900: >>>> vs-inf-int-kvm-fr-301-210.hostics.fr:5900 >>>> root@vs-inf-int-kvm-fr-301-210.hostics.fr >>>> ... 
>>>> [root@vs-inf-int-kvm-fr-301-210 ~]# >>>> debug1: channel 3: free: direct-tcpip: listening port 5900 for >>>> vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from ::1 >>>> port 42144 to ::1 port 5900, nchannels 4 >>>> debug1: Connection to port 5900 forwarding to >>>> vs-inf-int-kvm-fr-301-210.hostics.fr port 5900 requested. >>>> debug1: channel 3: new [direct-tcpip] >>>> channel 3: open failed: connect failed: Connection refused >>>> debug1: channel 3: free: direct-tcpip: listening port 5900 for >>>> vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from >>>> 127.0.0.1 port 32778 to 127.0.0.1 port 5900, nchannels 4 >>>> >>>> >>>> and in journalctl : >>>> >>>> févr. 25 14:55:38 vs-inf-int-kvm-fr-301-210.hostics.fr >>>> sshd[19595]: error: connect_to >>>> vs-inf-int-kvm-fr-301-210.hostics.fr port 5900: failed. >>>> >>>> >>>> Guillaume Pavese >>>> Ingénieur Système et Réseau >>>> Interactiv-Group >>>> >>>> >>>> On Mon, Feb 25, 2019 at 10:44 PM Simone Tiraboschi < >>>> stirabos@redhat.com> wrote: >>>> >>>>> >>>>> >>>>> >>>>> On Mon, Feb 25, 2019 at 2:35 PM Guillaume Pavese < >>>>> guillaume.pavese@interactiv-group.com> wrote: >>>>> >>>>>> I made sure of everything and even stopped firewalld but still >>>>>> can't connect : >>>>>> >>>>>> [root@vs-inf-int-kvm-fr-301-210 ~]# cat >>>>>> /var/run/libvirt/qemu/HostedEngineLocal.xml >>>>>> <graphics type='vnc' port='*5900*' autoport='yes' >>>>>> *listen='127.0.0.1*'> >>>>>> <listen type='address' address='*127.0.0.1*' >>>>>> fromConfig='1' autoGenerated='no'/> >>>>>> >>>>>> [root@vs-inf-int-kvm-fr-301-210 ~]# netstat -pan | grep 59 >>>>>> tcp 0 0 127.0.0.1:5900 0.0.0.0:* >>>>>> LISTEN 13376/qemu-kvm >>>>>> >>>>> >>>>> >>>>> I suggest to try ssh tunneling, run >>>>> ssh -L 5900:vs-inf-int-kvm-fr-301-210.hostics.fr:5900 >>>>> root@vs-inf-int-kvm-fr-301-210.hostics.fr >>>>> >>>>> and then >>>>> remote-viewer vnc://localhost:5900 >>>>> >>>>> >>>>> >>>>>> >>>>>> [root@vs-inf-int-kvm-fr-301-210 ~]# systemctl status >>>>>> firewalld.service >>>>>> ● firewalld.service - firewalld - dynamic firewall daemon >>>>>> Loaded: loaded (/usr/lib/systemd/system/firewalld.service; >>>>>> enabled; vendor preset: enabled) >>>>>> *Active: inactive (dead)* >>>>>> *févr. 25 14:24:03 vs-inf-int-kvm-fr-301-210.hostics.fr >>>>>> <http://vs-inf-int-kvm-fr-301-210.hostics.fr> systemd[1]: Stopped firewalld >>>>>> - dynamic firewall daemon.* >>>>>> >>>>>> From my laptop : >>>>>> [gpavese@sheepora-X230 ~]$ telnet >>>>>> vs-inf-int-kvm-fr-301-210.hostics.fr *5900* >>>>>> Trying 10.199.210.11... >>>>>> [*nothing gets through...*] >>>>>> ^C >>>>>> >>>>>> For making sure : >>>>>> [gpavese@sheepora-X230 ~]$ telnet >>>>>> vs-inf-int-kvm-fr-301-210.hostics.fr *9090* >>>>>> Trying 10.199.210.11... >>>>>> *Connected* to vs-inf-int-kvm-fr-301-210.hostics.fr. >>>>>> Escape character is '^]'. >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> Guillaume Pavese >>>>>> Ingénieur Système et Réseau >>>>>> Interactiv-Group >>>>>> >>>>>> >>>>>> On Mon, Feb 25, 2019 at 10:24 PM Parth Dhanjal < >>>>>> dparth@redhat.com> wrote: >>>>>> >>>>>>> Hey! >>>>>>> >>>>>>> You can check under /var/run/libvirt/qemu/HostedEngine.xml >>>>>>> Search for 'vnc' >>>>>>> From there you can look up the port on which the HE VM is >>>>>>> available and connect to the same. 
>>>>>>> >>>>>>> >>>>>>> On Mon, Feb 25, 2019 at 6:47 PM Guillaume Pavese < >>>>>>> guillaume.pavese@interactiv-group.com> wrote: >>>>>>> >>>>>>>> 1) I am running in a Nested env, but under libvirt/kvm on >>>>>>>> remote Centos 7.4 Hosts >>>>>>>> >>>>>>>> Please advise how to connect with VNC to the local HE vm. I >>>>>>>> see it's running, but this is on a remote host, not my local machine : >>>>>>>> qemu 13376 100 3.7 17679424 845216 ? Sl 12:46 >>>>>>>> 85:08 /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on >>>>>>>> -S -object >>>>>>>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes >>>>>>>> -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu >>>>>>>> Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp >>>>>>>> 4,sockets=4,cores=1,threads=1 -uuid 6fe7c1c3-ea93-4343-a385-0d9e14bb563a >>>>>>>> -no-user-config -nodefaults -chardev >>>>>>>> socket,id=charmonitor,fd=27,server,nowait -mon >>>>>>>> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown >>>>>>>> -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot >>>>>>>> menu=off,strict=on -device >>>>>>>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive >>>>>>>> file=/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0 >>>>>>>> -device >>>>>>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 >>>>>>>> -drive >>>>>>>> file=/var/tmp/localvmgmyYik/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on >>>>>>>> -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev >>>>>>>> tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device >>>>>>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:3e:fe:28,bus=pci.0,addr=0x3 >>>>>>>> -chardev pty,id=charserial0 -device >>>>>>>> isa-serial,chardev=charserial0,id=serial0 -chardev >>>>>>>> socket,id=charchannel0,fd=31,server,nowait -device >>>>>>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 >>>>>>>> *-vnc 127.0.0.1:0 <http://127.0.0.1:0> -device VGA*,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 >>>>>>>> -object rng-random,id=objrng0,filename=/dev/random -device >>>>>>>> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox >>>>>>>> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny >>>>>>>> -msg timestamp=on >>>>>>>> >>>>>>>> >>>>>>>> 2) [root@vs-inf-int-kvm-fr-301-210 ~]# cat >>>>>>>> /etc/libvirt/qemu/networks/default.xml >>>>>>>> <!-- >>>>>>>> WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE >>>>>>>> LIKELY TO BE >>>>>>>> OVERWRITTEN AND LOST. Changes to this xml configuration >>>>>>>> should be made using: >>>>>>>> virsh net-edit default >>>>>>>> or other application using the libvirt API. 
>>>>>>>> --> >>>>>>>> >>>>>>>> <network> >>>>>>>> <name>default</name> >>>>>>>> <uuid>ba7bbfc8-28b8-459e-a42d-c2d6218e2cb6</uuid> >>>>>>>> <forward mode='nat'/> >>>>>>>> <bridge name='virbr0' stp='on' delay='0'/> >>>>>>>> <mac address='52:54:00:e5:fe:3b'/> >>>>>>>> <ip address='192.168.122.1' netmask='255.255.255.0'> >>>>>>>> <dhcp> >>>>>>>> <range start='192.168.122.2' end='192.168.122.254'/> >>>>>>>> </dhcp> >>>>>>>> </ip> >>>>>>>> </network> >>>>>>>> You have new mail in /var/spool/mail/root >>>>>>>> [root@vs-inf-int-kvm-fr-301-210 ~] >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Guillaume Pavese >>>>>>>> Ingénieur Système et Réseau >>>>>>>> Interactiv-Group >>>>>>>> >>>>>>>> >>>>>>>> On Mon, Feb 25, 2019 at 9:57 PM Simone Tiraboschi < >>>>>>>> stirabos@redhat.com> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Mon, Feb 25, 2019 at 1:14 PM Guillaume Pavese < >>>>>>>>> guillaume.pavese@interactiv-group.com> wrote: >>>>>>>>> >>>>>>>>>> He deployment with "hosted-engine --deploy" fails at TASK >>>>>>>>>> [ovirt.hosted_engine_setup : Get local VM IP] >>>>>>>>>> >>>>>>>>>> See following Error : >>>>>>>>>> >>>>>>>>>> 2019-02-25 12:46:50,154+0100 INFO >>>>>>>>>> otopi.ovirt_hosted_engine_setup.ansible_utils >>>>>>>>>> ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Get >>>>>>>>>> local VM IP] >>>>>>>>>> 2019-02-25 12:55:26,823+0100 DEBUG >>>>>>>>>> otopi.ovirt_hosted_engine_setup.ansible_utils >>>>>>>>>> ansible_utils._process_output:103 {u'_ansible_parsed': True, >>>>>>>>>> u'stderr_lines': [], u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00 >>>>>>>>>> :16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", >>>>>>>>>> u'end': u'2019-02-25 12:55:26.666925', u'_ansible_no_log': False, >>>>>>>>>> u'stdout': u'', u'changed': True, u'invocation': {u'module_args': {u'warn': >>>>>>>>>> True, u'executable': >>>>>>>>>> None, u'_uses_shell': True, u'_raw_params': u"virsh -r >>>>>>>>>> net-dhcp-leases default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | >>>>>>>>>> cut -f1 -d'/'", u'removes': None, u'argv': None, u'creates': None, >>>>>>>>>> u'chdir': None, u'std >>>>>>>>>> in': None}}, u'start': u'2019-02-25 12:55:26.584686', >>>>>>>>>> u'attempts': 50, u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.082239', >>>>>>>>>> u'stdout_lines': []} >>>>>>>>>> 2019-02-25 12:55:26,924+0100 ERROR >>>>>>>>>> otopi.ovirt_hosted_engine_setup.ansible_utils >>>>>>>>>> ansible_utils._process_output:107 fatal: [localhost]: FAILED! => >>>>>>>>>> {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default >>>>>>>>>> | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", "delta": >>>>>>>>>> "0:00:00.082239", "end": "2019-02-25 12:55:26.666925", "rc": 0, "start": >>>>>>>>>> "2019-02-25 12:55:26.584686", "stderr": "", "stderr_lines": [], "stdout": >>>>>>>>>> "", "stdout_lines": []} >>>>>>>>>> >>>>>>>>> >>>>>>>>> Here we are just waiting for the bootstrap engine VM to >>>>>>>>> fetch an IP address from default libvirt network over DHCP but it your case >>>>>>>>> it never happened. >>>>>>>>> Possible issues: something went wrong in the bootstrap >>>>>>>>> process for the engine VM or the default libvirt network is not correctly >>>>>>>>> configured. >>>>>>>>> >>>>>>>>> 1. can you try to reach the engine VM via VNC and check >>>>>>>>> what's happening there? (another question, are you running it nested? AFAIK >>>>>>>>> it will not work if nested over ESXi) >>>>>>>>> 2. 
can you please share the output of >>>>>>>>> cat /etc/libvirt/qemu/networks/default.xml >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Guillaume Pavese >>>>>>>>>> Ingénieur Système et Réseau >>>>>>>>>> Interactiv-Group >>>>>>>>>> _______________________________________________ >>>>>>>>>> Users mailing list -- users@ovirt.org >>>>>>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>>>>>> Privacy Statement: >>>>>>>>>> https://www.ovirt.org/site/privacy-policy/ >>>>>>>>>> oVirt Code of Conduct: >>>>>>>>>> https://www.ovirt.org/community/about/community-guidelines/ >>>>>>>>>> List Archives: >>>>>>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VXRMU3SQWTMB2Y... >>>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>> Users mailing list -- users@ovirt.org >>>>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>>>>>>> oVirt Code of Conduct: >>>>>>>> https://www.ovirt.org/community/about/community-guidelines/ >>>>>>>> List Archives: >>>>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/45UR44ITQTV7YV... >>>>>>>> >>>>>>>

Happy to say that I just passed this "Get local VM IP" step.

There were a lot of leftovers from previous failed attempts (cf. the log I sent earlier: "internal error: Failed to autostart storage pool..."). Those were not cleaned up by ovirt-hosted-engine-cleanup.

I had to do the following so libvirt would be happy again :

rm -rf /etc/libvirt/storage/*.xml
rm -rf /etc/libvirt/storage/autostart/*
rm -rf /var/tmp/local*

ovirt-hosted-engine-cleanup is not doing a really good job.

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group

On Tue, Feb 26, 2019 at 3:49 AM Guillaume Pavese < guillaume.pavese@interactiv-group.com> wrote:
journalctl -u libvirtd.service :
févr. 25 18:47:24 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Stopping Virtualization daemon...
févr. 25 18:47:24 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Stopped Virtualization daemon.
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Starting Virtualization daemon...
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Started Virtualization daemon.
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq[6310]: read /etc/hosts - 4 addresses
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq[6310]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]: read /var/lib/libvirt/dnsmasq/default.hostsfile
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.739+0000: 13551: info : libvirt version: 4.5.0, package: 10.el7_6.4 (CentOS BuildSystem <http://bugs.centos.org>, 2019-01-29-17:31:22, x86-01.bsys.centos.org)
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.739+0000: 13551: info : hostname: vs-inf-int-kvm-fr-301-210.hostics.fr
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.739+0000: 13551: error : virDirOpenInternal:2936 : cannot open directory '/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : storageDriverAutostartCallback:209 : internal error: Failed to autostart storage pool '15023c8a-e3a7-4851-a97d-3b90996b423b': cannot open directory '/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : virDirOpenInternal:2936 : cannot open directory '/var/tmp/localvmdRIozH': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : storageDriverAutostartCallback:209 : internal error: Failed to autostart storage pool 'localvmdRIozH': cannot open directory '/var/tmp/localvmdRIozH': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : virDirOpenInternal:2936 : cannot open directory '/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : storageDriverAutostartCallback:209 : internal error: Failed to autostart storage pool '15023c8a-e3a7-4851-a97d-3b90996b423b-1': cannot open directory '/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : virDirOpenInternal:2936 : cannot open directory '/var/tmp/localvmgmyYik': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]: 2019-02-25 17:47:34.740+0000: 13551: error : storageDriverAutostartCallback:209 : internal error: Failed to autostart storage pool 'localvmgmyYik': cannot open directory '/var/tmp/localvmgmyYik': No such file or directory
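Those stale pool definitions can also be listed and dropped with virsh instead of deleting the XML files by hand (a hedged alternative to the manual cleanup described above; the pool names are the ones from this log):

virsh pool-list --all               # the leftover localvm* pools show up here
virsh pool-destroy localvmgmyYik    # stop the pool if it is still active
virsh pool-undefine localvmgmyYik   # drop the stale definition
# repeat for localvmdRIozH and the 15023c8a-... pools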
/var/log/libvirt/qemu/HostedEngineLocal.log :
2019-02-25 17:50:08.694+0000: starting up libvirt version: 4.5.0, package: 10.el7_6.4 (CentOS BuildSystem <http://bugs.centos.org>, 2019-01-29-17:31:22, x86-01.bsys.centos.org), qemu version: 2.12.0qemu-kvm-ev-2.12.0-18.el7_6.3.1, kernel: 3.10.0-957.5.1.el7.x86_64, hostname: vs-inf-int-kvm-fr-301-210.hostics.fr
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 8ba608c8-b721-4b5b-b839-b62f5e919814 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=27,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,strict=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/tmp/localvmlF5yTM/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/tmp/localvmlF5yTM/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:1d:4b:b6,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=31,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
2019-02-25T17:50:08.904663Z qemu-kvm: -chardev pty,id=charserial0: char device redirected to /dev/pts/4 (label charserial0)
2019-02-25T17:50:08.911239Z qemu-kvm: warning: host doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10]
2019-02-25T17:50:08.917723Z qemu-kvm: warning: host doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10]
2019-02-25T17:50:08.918494Z qemu-kvm: warning: host doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10]
2019-02-25T17:50:08.919217Z qemu-kvm: warning: host doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10]
I guess there may be something to those last warnings? It should be noted that I previously deployed oVirt 4.2 successfully in the same nested environment.
Running libvirt in debug mode will have to wait until tomorrow; my night is already cut down to not much anymore XD
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group
participants (2): Guillaume Pavese, Simone Tiraboschi