[Users] New user to oVirt, and I haz a sad so far...
Einav Cohen
ecohen at redhat.com
Fri Jan 17 12:10:50 EST 2014
> ----- Original Message -----
> From: "Will Dennis (Live.com)" <willarddennis at live.com>
> Sent: Friday, January 17, 2014 11:55:55 AM
>
> Thanks, Joop, for the node platform best practices… I did turn selinux from
> "enforcing" to "permissive", and then when I tried to ping the engine by
> fqdn, I saw that DNS lookups were failing (even though resolv.conf looked
> correct…) I did a 'yum remove NetworkManager' and then fixed
> /etc/sysconfig/network-scripts/ifcfg-<nic>, and after a reboot I can now see
> the info for the node in the WUI on the manager, although the status for the
> node is still "Non Operational"…
In the GUI, within the "Events" sub-tab of that node, there should be an error
message detailing the reason for that node being in the Non-Operational state;
what does this message say?
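
(As an aside, for anyone hitting the same thing later: the recovery sequence
described above boils down to roughly the following. This is only a sketch -
the NIC name, the engine fqdn and the exact service tooling will differ per
setup.)

    setenforce 0                                   # selinux to permissive until the next reboot
    yum remove NetworkManager                      # hand the NIC back to the classic network service
    vi /etc/sysconfig/network-scripts/ifcfg-<nic>  # fix BOOTPROTO/IPADDR/DNS1/GATEWAY for your NIC
    chkconfig network on                           # or: systemctl enable network.service
    reboot
    ping <engine-fqdn>                             # verify DNS and connectivity to the engine afterwards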
>
> Where can I find the node install log (on the engine or the node, and what is
> it named?) (Sorry for the noob status, but I am a quick learner ;)
>
> Thanks,
>
> Will
>
> From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On Behalf Of
> noc
> Sent: Friday, January 17, 2014 3:35 AM
> To: users at ovirt.org
> Subject: Re: [Users] New user to oVirt, and I haz a sad so far...
>
> On 17-1-2014 8:53, Gabi C wrote:
>
> I've been there! :-D
>
> I mean exactly the same issues you had on CentOS, I had on Fedora 19.
>
> Did you disable selinux on the nodes? 'cause that's what is causing the SSH
> connection to close
>
> My setup:
>
> 1 engine on VMware - Fedora 19, up-to-date
>
> 2 nodes on IBM x series 3650 - Fedora 19 based oVirt Node - 3.0.3 - 1.1.fc19,
> with the nodes also being in a glusterfs cluster.....
>
> Right now, I'm banging my head against "Operation Add-Disk failed to
> complete." - a message I got after adding a new virtual machine and trying
> to add its disk
>
> On Fri, Jan 17, 2014 at 6:08 AM, Will Dennis (Live.com) <
> willarddennis at live.com > wrote:
>
> Hi all, ready for a story? (well, more of a rant, but hopefully it will be a
> good UX tale, and may even be entertaining.)
>
> Had one of the groups come to me at work this week and request an OpenStack
> setup. When I sat down and discussed their needs, it turns out that they
> really only need a multi-hypervisor setup where they can spin up VMs for
> their research projects. The VMs should be fairly long-lived, and will have
> persistent storage. Their other request is that the storage should be local
> on the hypervisor nodes (they plan to use Intel servers with 8-10 2TB drives
> for VM storage on each node.) They desire this in order to keep the VM I/O
> local - they do not have a SAN of any sort anyhow, and they do not care
> about live migration, etc.
>
> @Will
> When the installation ends, with or without errors, it will give you the
> location of its log. Upload the log to a pastebin and mail the link.
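
(If it helps: on the engine side, the per-host installation logs usually land
under /var/log/ovirt-engine/host-deploy/ - at least on current 3.x setups -
so something like the following should turn up the relevant file:)

    ls -lt /var/log/ovirt-engine/host-deploy/           # newest host-deploy log first, on the engine
    less /var/log/ovirt-engine/host-deploy/<logfile>    # then read the one for the failing host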
>
> @Gabi,
> There should be more info in either vdsm.log on the SPM server or engine.log
> on the engine server; see above for how to let us know what the error is.
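
(A quick way to fish the error out of those logs, assuming the default log
locations:)

    # on the SPM host:
    grep -iE 'error|traceback' /var/log/vdsm/vdsm.log | tail -n 50
    # on the engine:
    grep -iE 'error|add.disk' /var/log/ovirt-engine/engine.log | tail -n 50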
>
> Having installed oVirt probably dozens of times, I have some guidelines:
> - temporarily disable firewalld/iptables (if everything works, re-enabling
> should still work; scripts with the needed rules are generated and their
> location is given)
> - make selinux permissive, either via setenforce 0 (until next boot) or via
> /etc/selinux/config (survives reboots); don't disable it!
> - make sure fqdn resolution works in both directions between engine and
> host(s) (either via /etc/hosts or DNS)
> - make sure NetworkManager is disabled and network is enabled
>
> Joop
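
(Joop's checklist above, roughly as commands - a sketch only; it assumes a
systemd-based Fedora/CentOS host, and older hosts would use service/chkconfig
instead:)

    systemctl stop firewalld                       # and/or iptables - temporarily, for testing only
    setenforce 0                                   # permissive until the next boot...
    sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config   # ...and across reboots; don't set "disabled"
    getent hosts <engine-fqdn>                     # forward lookup, run on the host...
    getent hosts <host-fqdn>                       # ...and on the engine (reverse: getent hosts <ip>)
    systemctl disable NetworkManager; systemctl stop NetworkManager
    systemctl enable network; systemctl start network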
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>