Thanks David for your reply -
I have completely flushed all iptables rules ('iptables --flush') -
iptables -L -v -n
Chain INPUT (policy ACCEPT 1775K packets, 627M bytes)
 pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 1754K packets, 589M bytes)
 pkts bytes target prot opt in out source destination
The base host is Fedora 16 running with a desktop. I first installed vdsm
and then ovirt-engine. A single network bridge is installed, but there is
another 1Gb NIC that isn't being used -
eth0 Link encap:Ethernet HWaddr 00:1B:21:7D:ED:4A
inet6 addr: fe80::21b:21ff:fe7d:ed4a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:99656 errors:0 dropped:0 overruns:0 frame:0
TX packets:51508 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:63007897 (60.0 MiB) TX bytes:18148736 (17.3 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:1814674 errors:0 dropped:0 overruns:0 frame:0
TX packets:1814674 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:646274067 (616.3 MiB) TX bytes:646274067 (616.3 MiB)
ovirtmgmt Link encap:Ethernet HWaddr 00:1B:21:7D:ED:4A
inet addr:192.168.0.118 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::21b:21ff:fe7d:ed4a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:70706 errors:0 dropped:0 overruns:0 frame:0
TX packets:48717 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:52195637 (49.7 MiB) TX bytes:14942359 (14.2 MiB)
vnet0 Link encap:Ethernet HWaddr FE:1A:4A:A8:00:00
inet6 addr: fe80::fc1a:4aff:fea8:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:14 errors:0 dropped:0 overruns:1 carrier:0
collisions:0 txqueuelen:500
RX bytes:1299 (1.2 KiB) TX bytes:2760 (2.6 KiB)
After ovirt-engine was installed, I logged into the interface and configured
the host using 127.0.0.1. The host reboots, then shows up in the admin
interface, only complaining about power management not being configured.
Here is a screen shot of the web interface:
https://picasaweb.google.com/lh/photo/3vclaT_6d3uy2QODU6xp_zyLvDW...
The only configuration settings I've changed are in qemu.conf, setting
either tls=0 or tls=1.
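For reference, the SPICE TLS knobs in /etc/libvirt/qemu.conf look roughly like this (a sketch of the stock options; the certificate directory shown is libvirt's default and the values here are assumptions, not my actual config):

```shell
# /etc/libvirt/qemu.conf - SPICE TLS settings (sketch)
spice_tls = 1                                        # 0 = plaintext only, 1 = enable TLS
spice_tls_x509_cert_dir = "/etc/pki/libvirt-spice"   # needs ca-cert.pem, server-cert.pem, server-key.pem
#spice_listen = "0.0.0.0"                            # address the SPICE servers bind to
```

libvirtd has to be restarted (and the guest restarted) for changes here to take effect, since the options are read when qemu is launched.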
spice-gtk-0.11-4.fc16.x86_64
spice-client-0.10.1-1.fc16.x86_64
spice-glib-0.11-4.fc16.x86_64
spice-gtk3-0.11-4.fc16.x86_64
spice-xpi-2.7-3.fc16.x86_64
spice-gtk-tools-0.11-4.fc16.x86_64
spice-server-0.10.1-1.fc16.x86_64
The console link in the admin interface shows as available (using Firefox).
When I click it, a spicec:0 dialog opens and then just closes.
If I try to open it from a shell, I get things like this -
a brief window opens and then an error:
spicec -h 127.0.0.1 -p 5900
Warning: connect error 5 - need secured connection
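That warning usually means the guest's SPICE server requires TLS on some or all channels, so the plain port alone isn't enough. A sketch of checking the actual ports and retrying with a secure port (the 5901 secure port and the CA path are assumptions; the real values come from the guest's qemu command line or the engine):

```shell
# See which ports the qemu process for the guest is actually listening on
ss -tlnp | grep qemu

# Retry with both the plain and the TLS port (port/paths are assumptions)
spicec -h 127.0.0.1 -p 5900 -s 5901 \
       --ca-file /etc/pki/libvirt-spice/ca-cert.pem
```

If only a single TLS port is configured, the -p option can be dropped and only -s used.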
On Wed, Jul 25, 2012 at 10:04 AM, David Jaša <djasa(a)redhat.com> wrote:
Hi Brent,
first guess: check whether your iptables setup allows connections to the
qemu processes. The RHEV 3.0 documentation (publicly accessible) says that
a host needs these ports open:
port 22 for SSH,
ports 5634 to 6166 for guest console connections,
port 16514 for libvirt virtual machine migration traffic,
ports 49152 to 49216 for VDSM virtual machine migration traffic,
and
port 54321 for the Red Hat Enterprise Virtualization Manager.
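The list above could be opened with iptables rules along these lines (a sketch only; it assumes the default filter table, appending to INPUT, and no other conflicting rules):

```shell
# Sketch: open the ports from the RHEV 3.0 list for a virtualization host
iptables -A INPUT -p tcp --dport 22          -j ACCEPT   # SSH
iptables -A INPUT -p tcp --dport 5634:6166   -j ACCEPT   # guest console connections
iptables -A INPUT -p tcp --dport 16514       -j ACCEPT   # libvirt migration
iptables -A INPUT -p tcp --dport 49152:49216 -j ACCEPT   # VDSM migration
iptables -A INPUT -p tcp --dport 54321       -j ACCEPT   # engine <-> vdsm
```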
If you have ovirt-engine running on the same machine as vdsm, most of
the ports don't need to be accessible from outside, but the "guest console"
ports do.
If it isn't iptables, please share at least:
* what your actual topology is (engine on the physical host?)
* if you use some custom tls settings such as tls switched off
* which spice client & xpi versions you are using
* how exactly the client failed (did it show an error window? with what
error? did it just not launch?)
In your email, you didn't give any debugging hints apart from the setup
being a single-host one...
David
Brent Bolin píše v St 25. 07. 2012 v 09:00 -0500:
> About 6 months ago I asked on this list whether it was possible to install
> oVirt on a single host. The thread got long-winded and I lost interest.
>
> I started looking at the project again about two days ago. What I
> really hadn't understood before was using a base Fedora install, then
> installing vdsm, and then installing ovirt-engine.
>
> So everything is up. Created data center, storage, cluster, host and
> virtual machine.
>
> But I can't get there from here: I can't get a console running to
> configure the booted install.
>
> I've tried VNC, Spice, Firefox with spice-xpi plugin.
>
> I've tried tweaking, turning, touching, and swearing at the
> /etc/libvirt/qemu.conf settings - the tls settings. I'm not even sure
> if this is the right place to be checking.
>
> This is a show stopper.
>
> LSB Version: :core-4.0-amd64:core-4.0-noarch
> Distributor ID: Fedora
> Description: Fedora release 16 (Verne)
> Release: 16
> Codename: Verne
>
> [root@ovirt]# rpm -qa | grep ovirt-engine
> ovirt-engine-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-log-collector-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-iso-uploader-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-backend-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-notification-service-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-jboss-deps-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-tools-common-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-dbscripts-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-setup-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-jbossas-1.2-2.fc16.x86_64
> ovirt-engine-userportal-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-restapi-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-genericapi-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-config-3.0.0_0001-1.6.fc16.x86_64
> ovirt-engine-webadmin-portal-3.0.0_0001-1.6.fc16.x86_64
>
> Any input would be appreciated
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
>
http://lists.ovirt.org/mailman/listinfo/users
--
David Jaša, RHCE
SPICE QE based in Brno
GPG Key: 22C33E24
Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24