On Thu, Aug 18, 2016 at 7:32 PM, Gervais de Montbrun
<gervais(a)demontbrun.com> wrote:
Hey Folks,
I set up an oVirt 3.6.x cluster with one of my host nodes acting as the NFS server.
I did this because I was waiting for true Gluster integration (i.e., oVirt 4.0)
and wanted to go ahead with setting up my test environment. I'd now like to
move my existing 3-server oVirt cluster to 4.0, with Gluster backing all of the
backend Storage Domains, and then destroy/remove the NFS server that is
running on one of my nodes.
I've upgraded my servers to 4.0.2 with my existing setup, but I am having
all kinds of issues. I would like to clear them all up before trying to move
the Storage Domains to Gluster, and would love some help. At the risk of
overwhelming all of y'all, I'm going to list the issues I am seeing below and
hope that folks will let me know what they would like to see in terms of logs,
etc. If these are known issues that will be cleared up soon (or eventually),
then I will wait and apply updates as they come, but I want to make sure that
the problems I am having are not unique to me.
1. The beautiful new dashboard shows that my cluster is N/A, and I see a number
of related error events [screenshots omitted].
I don't know which log file to search for a solution.
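As a starting point, and assuming default paths, engine-side problems (like the
dashboard going N/A) usually show up in the engine log on the hosted-engine VM,
and host-side problems in vdsm.log on each host:

  tail -f /var/log/ovirt-engine/engine.log   # on the engine VM
  tail -f /var/log/vdsm/vdsm.log             # on each host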
2. My hosted engine never seems to start properly unless I manually set my
hosts to Global Maintenance mode. I then have to re-run "engine-setup" and
accept all the defaults, and it will start and function. It does throw an
error about starting ovirt-imageio-proxy when running engine-setup:
[ ERROR ] Failed to execute stage 'Closing up': Failed to start service
'ovirt-imageio-proxy'
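For reference, a minimal sketch of the global-maintenance dance, using the
standard hosted-engine commands run from any hosted-engine host:

  hosted-engine --set-maintenance --mode=global
  # ... re-run engine-setup on the engine VM ...
  hosted-engine --set-maintenance --mode=none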
2(a). Can I just disable ovirt-imageio-proxy somewhere? Is this just
supposed to make uploading ISO images simpler? I don't really need this.
It's tracked by this:
https://bugzilla.redhat.com/show_bug.cgi?id=1365451
4.0.3 will come with a fix.
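If you just want the service out of the way until 4.0.3, stopping and disabling
the unit should do it (a sketch, assuming the unit name matches the one in the
error above):

  systemctl stop ovirt-imageio-proxy
  systemctl disable ovirt-imageio-proxy

Note that engine-setup may try to start it again on the next run.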
2(b). Do I need to have DWH enabled? I am not certain when this was added to
my hosted-engine setup, but I don't think I need it.
Yes, you have to, since the new dashboard consumes its data.
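You can verify that the DWH service is actually running on the engine VM;
assuming the default unit name for the 4.0 DWH daemon:

  systemctl status ovirt-engine-dwhd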
3. My vdsm.log is full of entries like, "JsonRpc
(StompReactor)::ERROR::2016-08-18
12:59:56,783::betterAsyncore::113::vds.dispatcher::(recv) SSL error during
reading data: unexpected eof"
Can you please check whether ovirt-ha-broker can successfully reach your SMTP server?
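A quick way to check, assuming the default hosted-engine log locations:

  systemctl status ovirt-ha-broker
  grep -i smtp /var/log/ovirt-hosted-engine-ha/broker.log

The SMTP settings the broker uses for notifications should be in
/etc/ovirt-hosted-engine-ha/broker.conf.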
4. Commands throw deprecation warnings. Example:
[root@cultivar2 ~]# hosted-engine --vm-status
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15:
DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
deprecated, please use vdsm.jsonrpcvdscli
import vdsm.vdscli
or in the logs:
Aug 18 13:56:17 cultivar3 ovirt-ha-agent:
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning:
Dispatcher.pending is deprecated. Use Dispatcher.socket.pending instead.
They are due to
https://bugzilla.redhat.com/show_bug.cgi?id=1101554
We are going to fix this for 4.1; the warning is currently harmless.
5. This error in the console: "ETL service sampling has encountered an error.
Please consult the service log for more details." ETL service sampling log?
Where's that?
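That should be the DWH daemon log on the engine VM; assuming default paths,
something like:

  less /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log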
6. The host that is running the NFS server had network issues on Monday, and I
have had to remove it from the cluster. It was working fine up until I applied
the latest yum updates; the networking just stopped working when vdsm
restarted. Could this be related to my switching the cluster "Switch type"
to OVS from Legacy? I have since switched it back to Legacy. I'm going to
completely remove oVirt and reinstall/re-add it to the cluster. As it is the
NFS server, I need the network to continue to function throughout the
process. Any advice would be appreciated.
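One thing that may be worth trying before a full reinstall, assuming vdsm still
has its persisted (pre-OVS) network config: vdsm-tool can roll a host back to
its last known-good network setup.

  vdsm-tool restore-nets

This is only a guess given the OVS/Legacy switch flip; I can't confirm it
applies to your case.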
Lastly... if I clear up the above, how feasible is it to go GlusterFS
instead of my existing straight-NFS setup? Any advice on how to best move my
hosted_engine, data, ISO, and export Storage Domains to Gluster?
Cheers,
Gervais
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users