Something was definitely wrong; as indicated, the qemu process
for guest=HostedEngineLocal was still running but its disk file did not exist
anymore...
No surprise I could not connect.
I am retrying.
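(A quick way to confirm that a running qemu process is still holding on to a
deleted disk image, sketched here against the qemu-kvm PID 13376 seen further
down in this thread; the PID will of course differ on a new attempt:
[root@vs-inf-int-kvm-fr-301-210 ~]# ls -l /proc/13376/fd | grep -i deleted
Any descriptor still pointing at the old /var/tmp/localvm*/images path confirms
the disk file was removed while the VM kept running.)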
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
On Mon, Feb 25, 2019 at 11:15 PM Guillaume Pavese <
guillaume.pavese(a)interactiv-group.com> wrote:
It fails too:
I made sure PermitTunnel=yes in the sshd config, but when I try to connect to
the forwarded port I get the following error in the opened host ssh
session:
[gpavese@sheepora-X230 ~]$ ssh -v -L 5900:vs-inf-int-kvm-fr-301-210.hostics.fr:5900 root(a)vs-inf-int-kvm-fr-301-210.hostics.fr
...
[root@vs-inf-int-kvm-fr-301-210 ~]#
debug1: channel 3: free: direct-tcpip: listening port 5900 for vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from ::1 port 42144 to ::1 port 5900, nchannels 4
debug1: Connection to port 5900 forwarding to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900 requested.
debug1: channel 3: new [direct-tcpip]
channel 3: open failed: connect failed: Connection refused
debug1: channel 3: free: direct-tcpip: listening port 5900 for vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from 127.0.0.1 port 32778 to 127.0.0.1 port 5900, nchannels 4
and in journalctl:
Feb 25 14:55:38 vs-inf-int-kvm-fr-301-210.hostics.fr sshd[19595]:
error: connect_to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900: failed.
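One likely reason the forward itself is refused: qemu on the host binds VNC
only to 127.0.0.1 (see the netstat output further down in this thread), so when
sshd resolves the public hostname it tries 10.199.210.11:5900, where nothing is
listening. Pointing the tunnel at the loopback address should work instead; a
sketch of the adjusted commands:
[gpavese@sheepora-X230 ~]$ ssh -L 5900:127.0.0.1:5900 root(a)vs-inf-int-kvm-fr-301-210.hostics.fr
[gpavese@sheepora-X230 ~]$ remote-viewer vnc://localhost:5900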
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
On Mon, Feb 25, 2019 at 10:44 PM Simone Tiraboschi <stirabos(a)redhat.com>
wrote:
>
>
>
> On Mon, Feb 25, 2019 at 2:35 PM Guillaume Pavese <
> guillaume.pavese(a)interactiv-group.com> wrote:
>
>> I made sure of everything and even stopped firewalld but still can't
>> connect:
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# cat /var/run/libvirt/qemu/HostedEngineLocal.xml
>>     <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
>>       <listen type='address' address='127.0.0.1' fromConfig='1' autoGenerated='no'/>
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# netstat -pan | grep 59
>> tcp        0      0 127.0.0.1:5900          0.0.0.0:*               LISTEN      13376/qemu-kvm
>>
>
>
> I suggest trying ssh tunneling; run
> ssh -L 5900:vs-inf-int-kvm-fr-301-210.hostics.fr:5900 root(a)vs-inf-int-kvm-fr-301-210.hostics.fr
>
> and then
> remote-viewer vnc://localhost:5900
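> (A quick sanity check on the host before opening the viewer, to confirm
> something is listening on the VNC port, for example:
> [root@vs-inf-int-kvm-fr-301-210 ~]# ss -tlnp | grep 5900
> If it only shows a 127.0.0.1 listener, the tunnel destination has to be an
> address qemu is actually bound to.)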
>
>
>
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# systemctl status firewalld.service
>> ● firewalld.service - firewalld - dynamic firewall daemon
>>    Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
>>    Active: inactive (dead)
>> Feb 25 14:24:03 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Stopped firewalld - dynamic firewall daemon.
>>
>> From my laptop:
>> [gpavese@sheepora-X230 ~]$ telnet vs-inf-int-kvm-fr-301-210.hostics.fr 5900
>> Trying 10.199.210.11...
>> [nothing gets through...]
>> ^C
>>
>> To make sure:
>> [gpavese@sheepora-X230 ~]$ telnet vs-inf-int-kvm-fr-301-210.hostics.fr 9090
>> Trying 10.199.210.11...
>> Connected to vs-inf-int-kvm-fr-301-210.hostics.fr.
>> Escape character is '^]'.
>>
>>
>>
>>
>>
>> Guillaume Pavese
>> Systems and Network Engineer
>> Interactiv-Group
>>
>>
>> On Mon, Feb 25, 2019 at 10:24 PM Parth Dhanjal <dparth(a)redhat.com>
>> wrote:
>>
>>> Hey!
>>>
>>> You can check under /var/run/libvirt/qemu/HostedEngine.xml
>>> Search for 'vnc'
>>> From there you can look up the port on which the HE VM is available and
>>> connect to it.
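>>> For example (a sketch; during deployment the bootstrap VM is defined as
>>> HostedEngineLocal, so the file name may differ):
>>> [root@vs-inf-int-kvm-fr-301-210 ~]# grep -A1 "type='vnc'" /var/run/libvirt/qemu/HostedEngineLocal.xml
>>> or, equivalently:
>>> [root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r vncdisplay HostedEngineLocal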
>>>
>>>
>>> On Mon, Feb 25, 2019 at 6:47 PM Guillaume Pavese <
>>> guillaume.pavese(a)interactiv-group.com> wrote:
>>>
>>>> 1) I am running in a nested env, but under libvirt/KVM on remote
>>>> CentOS 7.4 hosts.
>>>>
>>>> Please advise how to connect with VNC to the local HE VM. I see it's
>>>> running, but this is on a remote host, not my local machine:
>>>> qemu 13376 100 3.7 17679424 845216 ? Sl 12:46 85:08
>>>> /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S
>>>> -object
>>>>
>>>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
>>>> -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>>>> Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp
>>>> 4,sockets=4,cores=1,threads=1 -uuid 6fe7c1c3-ea93-4343-a385-0d9e14bb563a
>>>> -no-user-config -nodefaults -chardev
>>>> socket,id=charmonitor,fd=27,server,nowait -mon
>>>> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
>>>> -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot
>>>> menu=off,strict=on -device
>>>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
>>>> file=/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0
>>>> -device
>>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>>>> -drive
>>>> file=/var/tmp/localvmgmyYik/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on
>>>> -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
>>>> tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
>>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:3e:fe:28,bus=pci.0,addr=0x3
>>>> -chardev pty,id=charserial0 -device
>>>> isa-serial,chardev=charserial0,id=serial0 -chardev
>>>> socket,id=charchannel0,fd=31,server,nowait -device
>>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
>>>> -vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2
>>>> -object rng-random,id=objrng0,filename=/dev/random -device
>>>> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox
>>>> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
>>>> -msg timestamp=on
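>>>> (Side note: -vnc 127.0.0.1:0 means VNC display :0, i.e. TCP port
>>>> 5900 + 0 = 5900, bound to the loopback interface only, which matches the
>>>> netstat output above and explains why the port is unreachable from a
>>>> remote machine without a tunnel.)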
>>>>
>>>>
>>>> 2) [root@vs-inf-int-kvm-fr-301-210 ~]# cat /etc/libvirt/qemu/networks/default.xml
>>>> <!--
>>>> WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
>>>> OVERWRITTEN AND LOST. Changes to this xml configuration should be made
>>>> using:
>>>> virsh net-edit default
>>>> or other application using the libvirt API.
>>>> -->
>>>>
>>>> <network>
>>>> <name>default</name>
>>>> <uuid>ba7bbfc8-28b8-459e-a42d-c2d6218e2cb6</uuid>
>>>> <forward mode='nat'/>
>>>> <bridge name='virbr0' stp='on' delay='0'/>
>>>> <mac address='52:54:00:e5:fe:3b'/>
>>>> <ip address='192.168.122.1' netmask='255.255.255.0'>
>>>> <dhcp>
>>>> <range start='192.168.122.2' end='192.168.122.254'/>
>>>> </dhcp>
>>>> </ip>
>>>> </network>
>>>> [root@vs-inf-int-kvm-fr-301-210 ~]
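>>>> (A quick way to double-check that this network is actually active and
>>>> handing out leases, using read-only virsh calls:
>>>> [root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-list --all
>>>> [root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
>>>> The first should list 'default' as active; the second shows any lease the
>>>> bootstrap VM has obtained.)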
>>>>
>>>>
>>>>
>>>> Guillaume Pavese
>>>> Systems and Network Engineer
>>>> Interactiv-Group
>>>>
>>>>
>>>> On Mon, Feb 25, 2019 at 9:57 PM Simone Tiraboschi <stirabos(a)redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Mon, Feb 25, 2019 at 1:14 PM Guillaume Pavese <
>>>>> guillaume.pavese(a)interactiv-group.com> wrote:
>>>>>
>>>>>> HE deployment with "hosted-engine --deploy" fails at TASK
>>>>>> [ovirt.hosted_engine_setup : Get local VM IP]
>>>>>>
>>>>>> See the following error:
>>>>>>
>>>>>> 2019-02-25 12:46:50,154+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Get local VM IP]
>>>>>> 2019-02-25 12:55:26,823+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:103 {u'_ansible_parsed': True, u'stderr_lines': [], u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", u'end': u'2019-02-25 12:55:26.666925', u'_ansible_no_log': False, u'stdout': u'', u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"virsh -r net-dhcp-leases default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'start': u'2019-02-25 12:55:26.584686', u'attempts': 50, u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.082239', u'stdout_lines': []}
>>>>>> 2019-02-25 12:55:26,924+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:107 fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.082239", "end": "2019-02-25 12:55:26.666925", "rc": 0, "start": "2019-02-25 12:55:26.584686", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
>>>>>>
>>>>>
>>>>> Here we are just waiting for the bootstrap engine VM to fetch an IP
>>>>> address from the default libvirt network over DHCP, but in your case it
>>>>> never happened.
>>>>> Possible issues: something went wrong in the bootstrap process for the
>>>>> engine VM, or the default libvirt network is not correctly configured.
>>>>>
>>>>> 1. can you try to reach the engine VM via VNC and check what's
>>>>> happening there? (another question, are you running it nested? AFAIK it
>>>>> will not work if nested over ESXi)
>>>>> 2. can you please share the output of
>>>>> cat /etc/libvirt/qemu/networks/default.xml
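>>>>> (While the task is retrying, it may also help to watch the host logs for
>>>>> DHCP traffic from the VM's MAC, a sketch assuming dnsmasq logs to
>>>>> /var/log/messages:
>>>>> grep -i 'dnsmasq.*00:16:3e:3e:fe:28' /var/log/messages
>>>>> If no DHCPDISCOVER ever shows up there, the problem is more likely on the
>>>>> VM side than in the libvirt network configuration.)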
>>>>>
>>>>>
>>>>>>
>>>>>> Guillaume Pavese
>>>>>> Systems and Network Engineer
>>>>>> Interactiv-Group
>>>