Simone,
Thanks for the reply. I really appreciate you taking the time to answer.
Keeping the numbers the same as my original email:
1. Any idea why my cluster is showing as N/A, and where I can look to find the reason?
2. <answered inline below>
3. I changed to the vdsm user (the broker user) and ran a sendmail test from the command line of
my host, and the message was delivered. Is this an adequate test? (Roughly what I ran is
sketched just below this list.)
4. <answered inline below>
5. Any ideas about the "ETL service sampling" error and where to look for answers?
6. Networking on my first host (the one that is running as the NFS server) remains a mess. I
came in on Friday *holiday here* and tried to get oVirt working again on this server. If I
start vdsmd, the host becomes unresponsive and I can't access the network on it. As a test,
I tried to set up another host with "OVS" networking instead of the
"Legacy" network type. It seemed fine, but would not survive a reboot... i.e.
when using OVS networking, the machine would lose all access to the network after a
reboot. I ended up doing a full OS wipe and reinstall on this second host. I then
reinstalled oVirt with legacy networking and it is now back in my cluster. It's like
OVS networking does not function on my machines. Any ideas how I can get my first host
back into the cluster (keeping in mind that it is also the NFS server for the cluster)?
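
For reference, the delivery test I ran was along these lines (a rough reconstruction
with the address swapped out; the vdsm account normally has no login shell, so I had
to force one):

  # run the mail test as the vdsm (broker) user instead of root
  su -s /bin/bash vdsm -c 'echo "Subject: oVirt broker mail test" | sendmail -v someone@example.com'
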
Cheers,
Gervais
On Aug 23, 2016, at 9:28 AM, Simone Tiraboschi
<stirabos(a)redhat.com> wrote:
On Thu, Aug 18, 2016 at 7:32 PM, Gervais de Montbrun
<gervais(a)demontbrun.com> wrote:
> Hey Folks,
>
> I set up an oVirt 3.6.x cluster with one of my host nodes as the NFS server.
> I did this because I was waiting for true Gluster integration (i.e. oVirt 4.0)
> and wanted to go ahead with setting up my test environment. I'd now like to
> move my existing 3-server oVirt cluster to 4.0 with Gluster set up for all
> backend Storage Domains, and then destroy/remove the NFS server that is
> running on one of my nodes.
>
> I've upgraded my servers to 4.0.2 with my existing setup, but I am having
> all kinds of issues. I would like to try to clear them all up prior to
> trying to go to Gluster for the Storage Domains and would love some help. At
> the risk of overwhelming all of y'all, I'm going to try to list the issues I
> am seeing below and hope that folks will let me know what they would like to
> see in terms of logs, etc. If these are known issues that will be cleared up
> soon (or eventually), then I will wait and apply updates as they come, but
> want to make sure that the problems I am having are not unique to me.
>
> 1. The beautiful new dashboard shows that my cluster is N/A:
>
> and I see the following events:
>
> I don't know which log file to search for a solution.
>
> 2. My hosted engine never seems to start properly unless I manually set my
> hosts to Global Maintenance mode. I then have to re-run "engine-setup" and
> accept all the defaults and it will start and function. It does throw an
> error about starting ovirt-imageio-proxy when running engine-setup
>
> [ ERROR ] Failed to execute stage 'Closing up': Failed to start service
> 'ovirt-imageio-proxy'
>
>
> 2(a). Can I just disable ovirt-imageio-proxy somewhere? Is this just
> supposed to make uploading ISO images simpler? I don't really need this.
It's tracked by this:
https://bugzilla.redhat.com/show_bug.cgi?id=1365451
4.0.3 will come with a fix
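
For reference, the manual workaround you describe in 2. maps to roughly this flow (a
sketch, not the long-term fix; engine-setup is run on the engine VM itself):

  # on a host: stop the HA agents from managing the engine VM
  hosted-engine --set-maintenance --mode=global
  # on the engine VM: re-run setup, accepting the defaults
  engine-setup
  # once the engine is up and healthy again, leave global maintenance
  hosted-engine --set-maintenance --mode=none
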
> 2(b). Do I need to have DWH enabled? I am not certain when this was added
> into my hosted-engine setup, but I don't think I need it.
Yes, you have to, since the new dashboard consumes it.
> 3. My vdsm.log is full of entries like, "JsonRpc
> (StompReactor)::ERROR::2016-08-18
> 12:59:56,783::betterAsyncore::113::vds.dispatcher::(recv) SSL error during
> reading data: unexpected eof"
Can you please check if ovirt-ha-broker can successfully reach your SMTP server?
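Something quick like this from the host should be enough to check basic reachability
(assuming the broker's mail settings are in the local copy at
/etc/ovirt-hosted-engine-ha/broker.conf; adjust the server and port to whatever is
configured there):

  # see which SMTP server/port the broker is configured to use
  grep -i smtp /etc/ovirt-hosted-engine-ha/broker.conf
  # verify a plain TCP connection to that server/port (your.smtp.server is a placeholder)
  timeout 5 bash -c '</dev/tcp/your.smtp.server/25' && echo "SMTP port reachable"
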
> 4. Commands throwing deprecation warning. Example:
>
> [root@cultivar2 ~]# hosted-engine --vm-status
> /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15:
> DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
> deprecated, please use vdsm.jsonrpcvdscli
> import vdsm.vdscli
>
> or in the logs:
>
> Aug 18 13:56:17 cultivar3 ovirt-ha-agent:
> /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning:
> Dispatcher.pending is deprecated. Use Dispatcher.socket.pending instead.
They are due to
https://bugzilla.redhat.com/show_bug.cgi?id=1101554
We are going to fix this for 4.1.
The warnings are currently harmless.
> 5. This error in the console: "ETL service sampling has encountered an
> error. Please consult the service log for more details." ETL service
> sampling log? Where's that?
>
> 6. The host that is running the NFS server had network issues on Monday and
> I have had to remove it from the cluster. It was working fine up until I did
> the latest yum updates. The networking just stopped working when vdsm
> restarted. Could this be related to my switching the cluster "Switch type"
> to OVS from Legacy? I have since switched it back to Legacy. I'm going to
> completely remove oVirt and reinstall/re-add it to the cluster. As it is the
> NFS server, I do need the network to continue to function throughout the
> process. Any advice would be appreciated.
>
> Lastly... If I clear up the above, how feasible is it to go GlusterFS
> instead of my existing straight NFS server setup that I have now? Any advice
> on how to best move my hosted_engine, data, ISO, export Storage Domains to
> Gluster?
>
> Cheers,
> Gervais