[Users] New user to oVirt, and I haz a sad so far...

Gabi C gabicr at gmail.com
Fri Jan 17 05:06:29 EST 2014


Hi Joop,

1. I did disable firewalld/iptables on both nodes and on the engine.
2. Disabling SELinux: setting it to permissive via /etc/selinux/config
(and of course unpersisting/persisting the file -> /config/etc........) doesn't
work. As I write this I have just rebooted one node with that change in place,
and when I try to ssh into the node I get "connection closed by..." - of course
SELinux is still enforcing; after setenforce 0, ssh is OK. The only way I
managed to disable SELinux was:

mount -o rw,remount /run/initramfs/live
edit grub.cfg and add selinux=0 to the kernel line
(a quick way to verify this after reboot is sketched below, after the list)

3. One strange issue: after I reboot the nodes, trying to ssh raises the
warning "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!", so I have to
remove the host entry from ~/.ssh/known_hosts in order to connect (see the
ssh-keygen one-liner below the list)
4. I am about to check name resolution (a quick two-way lookup is sketched
below as well)
5. I also have glusterfs on both nodes. After I put the nodes into
maintenance, stopped the gluster volumes and rebooted the nodes, the gluster
volumes disappeared from the engine web interface
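
A quick way to confirm the selinux=0 kernel argument actually took effect
after a reboot (plain stock commands, nothing oVirt-specific; just a sketch):

    cat /proc/cmdline   # the kernel line should now end with selinux=0
    getenforce          # should print "Disabled"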
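
The stale host key can also be dropped without editing the file by hand;
"node01" below is just a placeholder for the real node hostname or IP:

    ssh-keygen -R node01    # removes the old entry from ~/.ssh/known_hosts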
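
For the name resolution check, something like this in both directions between
the engine and each node ("engine.example.com" and the IP are placeholders):

    getent hosts engine.example.com   # forward: fqdn -> IP
    getent hosts 192.0.2.10           # reverse: IP -> fqdn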

Thanks




On Fri, Jan 17, 2014 at 10:35 AM, noc <noc at nieuwland.nl> wrote:

>  On 17-1-2014 8:53, Gabi C wrote:
>
>  I've been there! :-D
>
>  I mean exactly the same issues you had on CentOS, I had on Fedora 19.
>  Did you disable SELinux on the nodes? 'Cause that's what is causing the SSH
> connection closing
>
>  My setup:
>
>  1 engine on VMware - Fedora 19, up-to-date
>
>
>  2 nodes on IBM x series 3650 - Fedora 19 based oVirt Node 3.0.3-1.1.fc19,
> with the nodes being in a glusterfs cluster also.....
>
>
>  Right now, I'm banging my head against "Operation Add-Disk failed to
> complete.", a message I got after adding a new virtual machine and trying
> to add its disk
>
>
> On Fri, Jan 17, 2014 at 6:08 AM, Will Dennis (Live.com) <
> willarddennis at live.com> wrote:
>
>> Hi all, ready for a story? (well, more of a rant, but hopefully it will
>> be a
>> good UX tale, and may even be entertaining.)
>>
>> Had one of the groups come to me at work this week and request an OpenStack
>> setup. When I sat down and discussed their needs, it turns out that they
>> really only need a multi-hypervisor setup where they can spin up VMs for
>> their research projects. The VMs should be fairly long-lived, and will
>> have
>> persistent storage. Their other request is that the storage should be
>> local
>> on the hypervisor nodes (they plan to use Intel servers with 8-10 2TB
>> drives
>> for VM storage on each node.) They desire this in order to keep the VM I/O
>> local - they do not have a SAN of any sort anyhow, and they do not care
>> about live migration, etc.
>>
>   @Will
> When the installation ends, with or without an error, it gives you a log
> location. Upload the log to a pastebin and mail the link.
>
> @Gabi,
> There should be more info in either vdsm.log on the SPM server or
> engine.log on the engine server; see above (the note to Will) for how to
> let us know what the error is.
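>
> If it helps, on a default install those logs usually live in the locations
> below (paths may differ slightly on oVirt Node):
>
>   less /var/log/vdsm/vdsm.log             # on the SPM host
>   less /var/log/ovirt-engine/engine.log   # on the engine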
>
> Having installed oVirt probably dozens of times, I have some guidelines:
> - temporarily disable firewalld/iptables (if everything works, re-enabling
> should still work; scripts with the rules are generated and their location
> is given)
> - make SELinux permissive, either via setenforce 0 (until the next boot) or
> via /etc/selinux/config (survives reboots); don't disable it!
> - make sure FQDNs resolve in both directions between the engine and the
> host(s) (using either /etc/hosts or DNS)
> - make sure NetworkManager is disabled and the network service is enabled
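>
> A rough sketch of the matching commands (Fedora 19 service names; adjust to
> your own setup):
>
>   systemctl stop firewalld            # temporary; re-enable once all works
>   setenforce 0                        # permissive until the next reboot
>   getent hosts engine.example.com     # use your real engine/host fqdn
>   systemctl disable NetworkManager
>   chkconfig network on                # legacy network service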
>
> Joop
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>