info about ovirtmgmt bridge config in master

Hello, I'm configuring an all-in-one environment in a Qemu-KVM VM with CentOS 6.5 using nested virtualization. All seems OK except the hypervisor part, which times out during setup and after reboot remains in "Unassigned" state.

I notice that under the /etc/sysconfig/network-scripts directory I don't have the ifcfg-ovirtmgmt file, only the old ifcfg-eth0, and in fact the only bridge I have is:

[root@ovirtita ~]# brctl show
bridge name     bridge id           STP enabled    interfaces
;vdsmdummy;     8000.000000000000   no

Has anything changed for the ovirtmgmt network on master, or does this mean that something went wrong during setup and that I can try to manually create the ovirtmgmt bridge config...?

In case it can interfere: I don't have DNS set up, only an entry in /etc/hosts. With previous versions and this one I got a warning, but previously this didn't compromise the hypervisor part in all-in-one environments.

engine-setup log: https://drive.google.com/file/d/0BwoPbcrMv8mvQVRyZXFqdFpCbnM/edit?usp=sharin...
host deploy (on itself) part: https://drive.google.com/file/d/0BwoPbcrMv8mvLXhlMl9qcWhqTlU/edit?usp=sharin...

Thanks,
Gianluca

On Fri, Aug 15, 2014 at 11:28:58AM +0200, Gianluca Cecchi wrote:
Hello, I'm configuring an all-in-one environment in a Qemu-KVM VM with CentOS 6.5 using nested virtualization. All seems OK except the hypervisor part, which times out during setup and after reboot remains in "Unassigned" state.
I notice that under the /etc/sysconfig/network-scripts directory I don't have the ifcfg-ovirtmgmt file, only the old ifcfg-eth0, and in fact the only bridge I have is:
[root@ovirtita ~]# brctl show
bridge name     bridge id           STP enabled    interfaces
;vdsmdummy;     8000.000000000000   no
Has anything changed for the ovirtmgmt network on master, or does this mean that something went wrong during setup and that I can try to manually create the ovirtmgmt bridge config...?
In case it can interfere: I don't have DNS set up, only an entry in /etc/hosts. With previous versions and this one I got a warning, but previously this didn't compromise the hypervisor part in all-in-one environments.
engine-setup log https://drive.google.com/file/d/0BwoPbcrMv8mvQVRyZXFqdFpCbnM/edit?usp=sharin...
host deploy (on itself) part: https://drive.google.com/file/d/0BwoPbcrMv8mvLXhlMl9qcWhqTlU/edit?usp=sharin...
The ifcfg-ovirtmgmt file should be found on disk after a successful installation of a host. However, if the network configuration phase fails, the ifcfg files are reverted to their original values. Could you check your supervdsm.log/vdsm.log to see if that is the case? Could you explain when the "hypervisor part goes in timeout" happens? Can you correlate this to something in vdsm.log?
Dan.
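Whether the rollback Dan describes happened can be checked directly on the filesystem. A minimal illustrative sketch (the helper name and parameterized paths are mine, not part of vdsm, so the check can be pointed at any tree):

```python
import os

def management_bridge_state(netscripts="/etc/sysconfig/network-scripts",
                            sysfs="/sys/class/net"):
    """Report whether the ovirtmgmt ifcfg file and bridge device exist.

    Illustrative helper only; the directories are parameters so the
    check can also be run against a copied filesystem tree.
    """
    ifcfg = os.path.join(netscripts, "ifcfg-ovirtmgmt")
    bridge = os.path.join(sysfs, "ovirtmgmt", "bridge")
    return {
        "ifcfg_present": os.path.isfile(ifcfg),  # setup wrote the config file
        "bridge_exists": os.path.isdir(bridge),  # kernel bridge device exists
    }
```

If both come back False after installation, the network phase either never ran or was rolled back, which matches the `brctl show` output in the original post.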

On Fri, Aug 15, 2014 at 12:30 PM, Dan Kenigsberg <danken@redhat.com> wrote:
The ifcfg-ovirtmgmt file should be found on disk after a successful installation of a host. However, if the network configuration phase fails, the ifcfg files are reverted to their original values.
ok.
Could you check your supervdsm.log/vdsm.log to see if that is the case? Could you explain when the "hypervisor part goes in timeout" happens? Can you correlate this to something in vdsm.log?
Dan.
So, the initial config had no DNS and I forgot the /etc/hosts part too. So engine-setup complained about the hostname input and asked again. At that point I filled in /etc/hosts and confirmed the hostname at the engine-setup prompt. It complained about the DNS part but continued. As this is an all-in-one setup, it arrived at the hypervisor config part. engine-setup printed the message about timing out while waiting for the hypervisor host to come up a few times, and at the end:

[ INFO  ] Still waiting for VDSM host to become operational...
[ ERROR ] Timed out while waiting for host to start. Please check the logs.

During these recurring time-out warnings I see in vdsm.log:

Detector thread::ERROR::2014-08-13 17:48:59,482::protocoldetector::104::vds.MultiProtocolAcceptor::(serve_forever) Unhandled exception
Traceback (most recent call last):
  File "/usr/share/vdsm/protocoldetector.py", line 100, in serve_forever
    self._process_events()
  File "/usr/share/vdsm/protocoldetector.py", line 117, in _process_events
    self._accept_connection()
  File "/usr/share/vdsm/protocoldetector.py", line 180, in _accept_connection
    client_socket, _ = self._socket.accept()
  File "/usr/lib64/python2.6/site-packages/vdsm/sslutils.py", line 121, in accept
    raise SSL.SSLError("%s, client %s" % (e, address[0]))
SSLError: unexpected eof, client 192.168.122.51

engine-setup went ahead with these messages:

[WARNING] Local storage domain not added because the VDSM host was not up. Please add it manually.
[ INFO  ] Stage: Clean up
          Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140813173448-i47i1l.log
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20140813174959-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Execution of setup completed successfully

Inside supervdsm.log there is no error message. See it here: https://drive.google.com/file/d/0BwoPbcrMv8mvRVpxR3UzYU9hUmM/edit?usp=sharin...
In connectivity.log I see only:

2014-08-13 17:39:41,206:DEBUG:recent_client:True, lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:0 duplex:unknown)
2014-08-13 17:40:11,360:DEBUG:recent_client:False

At this moment it seems I can't do anything on the host: no reinstall, no activate, nothing...
BTW: the VM for all-in-one has 6 GB of RAM configured... could that be the cause in any way?
Gianluca

On Fri, Aug 15, 2014 at 01:01:50PM +0200, Gianluca Cecchi wrote:
On Fri, Aug 15, 2014 at 12:30 PM, Dan Kenigsberg <danken@redhat.com> wrote:
The ifcfg-ovirtmgmt file should be found on disk after a successful installation of a host. However, if the network configuration phase fails, the ifcfg files are reverted to their original values.
ok.
Could you check your supervdsm.log/vdsm.log to see if that is the case? Could you explain when the "hypervisor part goes in timeout" happens? Can you correlate this to something in vdsm.log?
Dan.
So, the initial config had no DNS and I forgot the /etc/hosts part too. So engine-setup complained about the hostname input and asked again. At that point I filled in /etc/hosts and confirmed the hostname at the engine-setup prompt. It complained about the DNS part but continued. As this is an all-in-one setup, it arrived at the hypervisor config part. engine-setup printed the message about timing out while waiting for the hypervisor host to come up a few times, and at the end:
[ INFO  ] Still waiting for VDSM host to become operational...
[ ERROR ] Timed out while waiting for host to start. Please check the logs.
During these recurring time-out warnings I see in vdsm.log:

Detector thread::ERROR::2014-08-13 17:48:59,482::protocoldetector::104::vds.MultiProtocolAcceptor::(serve_forever) Unhandled exception
Traceback (most recent call last):
  File "/usr/share/vdsm/protocoldetector.py", line 100, in serve_forever
    self._process_events()
  File "/usr/share/vdsm/protocoldetector.py", line 117, in _process_events
    self._accept_connection()
  File "/usr/share/vdsm/protocoldetector.py", line 180, in _accept_connection
    client_socket, _ = self._socket.accept()
  File "/usr/lib64/python2.6/site-packages/vdsm/sslutils.py", line 121, in accept
    raise SSL.SSLError("%s, client %s" % (e, address[0]))
SSLError: unexpected eof, client 192.168.122.51
Gianluca, does this show repeatedly when `vdsClient -s 0 getVdsCaps` is called manually?
Piotr, can you guess what may cause this?
engine-setup went ahead with these messages:
[WARNING] Local storage domain not added because the VDSM host was not up. Please add it manually.
[ INFO  ] Stage: Clean up
          Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140813173448-i47i1l.log
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20140813174959-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Execution of setup completed successfully
inside supervdsm.log no error message. See it here: https://drive.google.com/file/d/0BwoPbcrMv8mvRVpxR3UzYU9hUmM/edit?usp=sharin...
I don't think it's in any way related to your problem, but it does require Toni's comment - supervdsm.log has traces of needless calls:

MainThread::DEBUG::2014-08-15 10:38:02,570::vdsm-restore-net-config::56::root::(unified_restoration) Removing all networks ({}) and bonds ({}) in running config.

and

MainThread::DEBUG::2014-08-15 10:38:02,642::api::640::setupNetworks::(setupNetworks) Setting up network according to configuration: networks:{}, bondings:{}, options:{'_inRollback': True, 'connectivityCheck': False}
MainThread::DEBUG::2014-08-15 10:38:02,642::api::644::root::(setupNetworks) Validating configuration

----- Original Message -----
From: "Dan Kenigsberg" <danken@redhat.com>
To: "Gianluca Cecchi" <gianluca.cecchi@gmail.com>, pkliczew@redhat.com, asegurap@redhat.com
Cc: "users" <users@ovirt.org>
Sent: Wednesday, August 27, 2014 1:28:41 PM
Subject: Re: [ovirt-users] info about ovirtmgmt bridge config in master
On Fri, Aug 15, 2014 at 01:01:50PM +0200, Gianluca Cecchi wrote:
On Fri, Aug 15, 2014 at 12:30 PM, Dan Kenigsberg <danken@redhat.com> wrote:
The ifcfg-ovirtmgmt file should be found on disk after a successful installation of a host. However, if the network configuration phase fails, the ifcfg files are reverted to their original values.
ok.
Could you check your supervdsm.log/vdsm.log to see if that is the case? Could you explain when the "hypervisor part goes in timeout" happens? Can you correlate this to something in vdsm.log?
Dan.
So, the initial config had no DNS and I forgot the /etc/hosts part too. So engine-setup complained about the hostname input and asked again. At that point I filled in /etc/hosts and confirmed the hostname at the engine-setup prompt. It complained about the DNS part but continued. As this is an all-in-one setup, it arrived at the hypervisor config part. engine-setup printed the message about timing out while waiting for the hypervisor host to come up a few times, and at the end:
[ INFO  ] Still waiting for VDSM host to become operational...
[ ERROR ] Timed out while waiting for host to start. Please check the logs.
During these recurring time-out warnings I see in vdsm.log:

Detector thread::ERROR::2014-08-13 17:48:59,482::protocoldetector::104::vds.MultiProtocolAcceptor::(serve_forever) Unhandled exception
Traceback (most recent call last):
  File "/usr/share/vdsm/protocoldetector.py", line 100, in serve_forever
    self._process_events()
  File "/usr/share/vdsm/protocoldetector.py", line 117, in _process_events
    self._accept_connection()
  File "/usr/share/vdsm/protocoldetector.py", line 180, in _accept_connection
    client_socket, _ = self._socket.accept()
  File "/usr/lib64/python2.6/site-packages/vdsm/sslutils.py", line 121, in accept
    raise SSL.SSLError("%s, client %s" % (e, address[0]))
SSLError: unexpected eof, client 192.168.122.51
Gianluca, does this show repeatedly when `vdsClient -s 0 getVdsCaps` is called manually?
Piotr, can you guess what may cause this?
From the information above, it looks like the SSL handshake failed or a disconnect happened during the handshake. How easy is it to reproduce?
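The failure mode Piotr describes - the peer going away mid-handshake - can be provoked from the client side with stdlib Python alone. A sketch under the assumption that the server closes the connection before ever answering the ClientHello (the function name is mine, for illustration; it is not part of vdsm):

```python
import socket
import ssl
import threading

def provoke_handshake_eof():
    """Return the exception raised when a TLS handshake hits an early EOF.

    A plain TCP server accepts, swallows the ClientHello, and closes
    without answering; the client's handshake then fails with an
    unexpected EOF, analogous to what vdsm's acceptor logged.
    """
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def close_early():
        conn, _ = srv.accept()
        conn.recv(4096)  # consume the ClientHello so close() sends FIN, not RST
        conn.close()

    t = threading.Thread(target=close_early)
    t.start()

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    raw = socket.create_connection(srv.getsockname())
    try:
        ctx.wrap_socket(raw)  # handshake reads EOF instead of ServerHello
        return None
    except OSError as exc:    # typically ssl.SSLEOFError
        return exc
    finally:
        raw.close()
        t.join()
        srv.close()
```

In the thread's scenario the roles are reversed (vdsm is the server and the engine's probe disconnects early), but the symptom is the same: an SSL error reporting an unexpected EOF instead of a completed handshake.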
engine-setup went ahead with these messages:
[WARNING] Local storage domain not added because the VDSM host was not up. Please add it manually.
[ INFO  ] Stage: Clean up
          Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140813173448-i47i1l.log
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20140813174959-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Execution of setup completed successfully
inside supervdsm.log no error message. See it here: https://drive.google.com/file/d/0BwoPbcrMv8mvRVpxR3UzYU9hUmM/edit?usp=sharin...
I don't think it's in any way related to your problem, but it does require Toni's comment - supervdsm.log has traces of needless calls:
MainThread::DEBUG::2014-08-15 10:38:02,570::vdsm-restore-net-config::56::root::(unified_restoration) Removing all networks ({}) and bonds ({}) in running config.
and
MainThread::DEBUG::2014-08-15 10:38:02,642::api::640::setupNetworks::(setupNetworks) Setting up network according to configuration: networks:{}, bondings:{}, options:{'_inRollback': True, 'connectivityCheck': False}
MainThread::DEBUG::2014-08-15 10:38:02,642::api::644::root::(setupNetworks) Validating configuration

On Wed, Aug 27, 2014 at 1:40 PM, Piotr Kliczewski <pkliczew@redhat.com> wrote:
raise SSL.SSLError("%s, client %s" % (e, address[0])) SSLError: unexpected eof, client 192.168.122.51
Gianluca, does this show repeatedly when `vdsClient -s 0 getVdsCaps` is called manually?
yes
Piotr, can you guess what may cause this?
From the information above, it looks like the SSL handshake failed or a disconnect happened during the handshake. How easy is it to reproduce?
The original problem was configuring AIO on master so that I could merge the Italian translation patch. I also started this thread, where I detailed other information: http://lists.ovirt.org/pipermail/users/2014-August/026824.html

It seems my problem arises because I install packages from master but don't run engine-setup right after that. I shut down and restart the server, then run engine-setup, and I get this reproducible SSL problem (because several services are marked to start at boot during install). If I run engine-setup right after install without a reboot I don't have this problem, but in current master (as of 22/08; I haven't tried later versions yet) I have the other reproducible problem:

[ ERROR ] Failed to execute stage 'Closing up': No permission to read file: /etc/pki/vdsm/keys/vdsmkey.pem

This is for AIO setup. For 3.5 as of 22/08 all was OK (I didn't try the middle "reboot" workflow with 3.5). Master plus the Italian patch as of 12/08 was OK for AIO setup too (assuming engine-setup is run right after install).

Gianluca
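A "No permission to read file" error on a PEM key usually comes down to ownership or mode bits on the file. A minimal diagnostic sketch (the helper is illustrative, not an oVirt tool; the vdsmkey.pem path is just the example from the error above):

```python
import os
import stat

def key_file_permissions(path):
    """Report owner ids and mode bits of a key file.

    Illustrative only: a common cause of 'No permission to read file'
    is a key owned by, or readable as, the wrong user, which this
    makes visible at a glance.
    """
    st = os.stat(path)
    mode = stat.S_IMODE(st.st_mode)  # strip file-type bits, keep rwx bits
    return {
        "uid": st.st_uid,
        "gid": st.st_gid,
        "mode": oct(mode),
        "group_readable": bool(mode & stat.S_IRGRP),
        "world_readable": bool(mode & stat.S_IROTH),
    }
```

Comparing the reported uid/gid against the user the failing stage runs as would show whether the error is a simple ownership mismatch.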
participants (3)
- Dan Kenigsberg
- Gianluca Cecchi
- Piotr Kliczewski