Remote PostgreSQL 9.5 (was: Answer file key for "nonlocal postgres")

On Mon, Mar 27, 2017 at 9:07 PM, Jamie Lawrence <jlawrence@squaretrade.com> wrote:
On Mar 25, 2017, at 10:57 PM, Yedidyah Bar David <didi@redhat.com> wrote:
On Fri, Mar 24, 2017 at 3:08 AM, Jamie Lawrence <jlawrence@squaretrade.com> wrote:
[…]
Anyone know what I am missing?
Probably OVESETUP_PROVISIONING/postgresProvisioningEnabled and OVESETUP_DWH_PROVISIONING/postgresProvisioningEnabled.
Appreciate the reply - thanks!
That said, I strongly recommend not trying to write the answer file by hand. Instead, do an interactive setup with the exact conditions […]
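For reference, the keys mentioned above would be set in an otopi answer file roughly like this. This is only an illustrative sketch: the section header, remote-DB keys and value types vary between oVirt versions, so an answer file produced by an interactive run remains the authoritative source.

    [environment:default]
    # Do not provision a local PostgreSQL instance for the engine or DWH
    OVESETUP_PROVISIONING/postgresProvisioningEnabled=bool:False
    OVESETUP_DWH_PROVISIONING/postgresProvisioningEnabled=bool:False
    # Remote DB connection details would then go in the OVESETUP_DB/* and
    # OVESETUP_DWH_DB/* keys; the values below are placeholders
    OVESETUP_DB/host=str:db.example.com
    OVESETUP_DB/port=int:5432
    OVESETUP_DB/database=str:engine
    OVESETUP_DB/user=str:engine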
I know what I was doing is unsupported. I was wandering down the wrong troubleshooting path for a bit there, but I think ultimately what I need is also unsupported.
It was because I was trying to push this into our extant DB infrastructure, which is PG 9.5. Which I found doesn’t work with a local-install, either. (I was thinking it would work due to past experience with things that demand an old Postgres; IME, PG generally has pretty solid forward-compatibility.)
In "local-install" you mean on the engine machine?
From RPMs?
Instead of the OS-packaged PG (and not in parallel)? Were its binaries in /usr/bin (and not some private path)? If answers for all of the above are 'Yes', then please share setup logs, perhaps preferably by opening a bugzilla RFE and attaching them there. It's rather likely that whatever problems you had are quite easy to solve. Otherwise, please try that, or see the bug(s) below for a discussion about this.
So that leads me to my next question: if I install under the supported version and dump/load/reconfigure to PG9.5.3, is anyone aware of any actual problems (other than lack of official support)?
I personally haven't yet reached that point, so I can't say, nor do I know of others who have, but see below.
In doing answerfile-driven installs repeatedly, the point where it now fails is after the DB load, with ovirt-aaa-jdbc-tool choking and failing the run.
The reason I'm considering that as my fallback, nothing-else-worked option is that the DB needs to live in one of our existing clusters. We are a heavy Postgres shop with a lot of hardware, humans and process devoted to maintaining it, and the DBAs would hang my corpse up as a deterrent to others if I started installing ancient instances in random places for them to take care of.
If you do manage to make it work locally, then working with a remote DB should be easy, but it does (currently) require the local client to be of the same version. Please see this bug, and the ones it depends on:

https://bugzilla.redhat.com/show_bug.cgi?id=1324882

Almost all of it is relevant for a "vanilla" 9.5.

I am allowing myself to change the subject of the current email, as I assume the original issue is concluded.

Best,
--
Didi

On Mar 27, 2017, at 10:42 PM, Yedidyah Bar David <didi@redhat.com> wrote:
I know what I was doing is unsupported. I was wandering down the wrong troubleshooting path for a bit there, but I think ultimately what I need is also unsupported.
It was because I was trying to push this into our extant DB infrastructure, which is PG 9.5. Which I found doesn’t work with a local-install, either. (I was thinking it would work due to past experience with things that demand an old Postgres; IME, PG generally has pretty solid forward-compatibility.)
In "local-install" you mean on the engine machine?
From RPMs? Instead of the OS-packaged PG (and not in parallel)? Were its binaries in /usr/bin (and not some private path)?
I should have provided clearer info. By local install, I mean the machine the engine is being installed on. The PG installs I tried were all from RPMs. 9.5 was from the PGDG archive (the installer refuses to run against it); 9.2, I didn't look, but assume that was either the oVirt archive or from CentOS upstream (the installer is happy with it). I just checked, and the Postgres client binaries (pg_dump, psql, etc.) were installed in /bin; not sure why the package would do that.
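A quick way to sanity-check which client binaries the setup would pick up, and which package owns them (standard commands, shown only as an illustrative check):

    $ which psql pg_dump
    $ psql --version
    $ rpm -qf "$(which psql)"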
If the answers to all of the above are 'Yes', then please share the setup logs, preferably by opening a Bugzilla RFE and attaching them there. It's rather likely that whatever problems you had are quite easy to solve.
Otherwise, please try that, or see the bug(s) below for a discussion about this.
Sorry, I no longer have that log. I've probably run 70-80 iterations of the installation, and cleaned up a few times. Unfortunately, the speed with which I've had to do this hasn't matched well with asking for help. I've solved my DB problems (although I'm well off into an entirely manual configuration now), and am currently fighting with LDAP setup. (When it works, it works fine, in that it authenticates, returns the right data, etc. It just times out on connect about 90% of the time, and is the only one of a diverse set of LDAP clients to do so.)
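For what it's worth, one generic way to compare against those other LDAP clients is a plain ldapsearch over TLS against the same server; the CA path, host, bind DN and base DN below are placeholders:

    $ LDAPTLS_CACERT=/etc/pki/ourca/ca.pem \
      ldapsearch -H ldaps://ldap.example.com -x \
      -D "cn=ovirt-svc,ou=services,dc=example,dc=com" -W \
      -b "dc=example,dc=com" "(uid=someuser)"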
So that leads me to my next question: if I install under the supported version and dump/load/reconfigure to PG9.5.3, is anyone aware of any actual problems (other than lack of official support)?
I personally haven't yet reached that point, so I can't say, nor do I know of others who have, but see below.
FTR, as best I can tell so far, there's no issue with 9.5.3 once you get it working. However, at least in my experience, you won't get it working with the installer. I also tried the installer-facilitated migration from local to remote at one point; it blew up and died telling me to downgrade our cluster, change a bunch of vacuum-related config variables(!), and I think there was something else it was fussing about - really picky operational details the installer has no business refusing to run over. (There is simply no way the installer has a better idea of what optimum vacuum settings are for our hardware, environment and load than our DBAs do.)

What I did, which is working (a rough sketch of the dump/load and SSL tweak follows below):

- Let the installer do as it pleases with PG 9.2 on the engine machine.
- pg_dump […] | pg_restore […] to the 9.5 cluster for both DBs.
- Revise as needed:
  /etc/ovirt-engine/aaa/internal.properties
  /etc/ovirt-engine/engine.conf.d/10-database-setup.conf
  /etc/ovirt-engine/engine.conf.d/10-setup-dwh-database.conf
  /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf
- Shut down local PG, verify everything is using the right DB and working.
- Blow away the local DB, leave the ancient PG and related packages installed for oVirt to do whatever with.
- (Not done yet) code and document this madness for our Puppet system.

One additional manual config requirement I ran into is that in the places where DB URLs are used (.properties, DB-related .conf files), when enabling SSL the URL needs to have '?ssl=true' appended, or it fails to attempt SSL on connect in our environment. I assume that's some driver peculiarity, but I haven't looked. (Most of our DB hosts are Debian; a couple are Ubuntu.)
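Roughly, the dump/load step and the SSL tweak looked like the following. Hosts, users, database names and the config variable name are placeholders/assumptions; the exact flags and file contents depend on your setup:

    # Dump each DB from the local 9.2 instance and restore into the 9.5 cluster
    $ pg_dump -Fc -h localhost -U engine engine \
        | pg_restore -h db.example.com -U engine -d engine --no-owner
    $ pg_dump -Fc -h localhost -U ovirt_engine_history ovirt_engine_history \
        | pg_restore -h db.example.com -U ovirt_engine_history \
              -d ovirt_engine_history --no-owner

    # In the DB-related .conf files listed above, append ?ssl=true to the
    # JDBC URL (the exact variable name may differ by file and version), e.g.:
    ENGINE_DB_URL="jdbc:postgresql://db.example.com:5432/engine?ssl=true"

The custom-format (-Fc) dump is used so pg_restore can read it from a pipe; whether you want --no-owner depends on how roles are laid out on the target cluster.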
Please see this bug, and the ones it depends on:
https://bugzilla.redhat.com/show_bug.cgi?id=1324882
Almost all of it is relevant for a "vanilla" 9.5.
Thank you; it was helpful in piecing together what needed to happen. I do fear upgrades now, especially anything touching the DB. But the installer as it works now simply doesn't support our environment. I had/am having issues with:

- remote PG 9.5 cluster + required SSL,
- using our CA,
- OpenLDAP + required SSL,
- bonded NICs.

I know this is being discussed on the dev list, and I've resisted jumping in because I'm not going to be contributing code. But please consider this one emphatic vote for adding options to the installer to selectively disable/skip parts of the 'non-core' config: database, authentication, firewall, NICs, etc. etc. etc. I would love a big fat '--this-voids-the-warranty-use-at-your-own-risk-rabid-dingos-ahead' flag; or, if you want to bury it even further, I'm less thrilled with answer-file keys modulo documentation, but that would be OK, too.

I realize that the installer is considered a big selling point/distinguishing feature, and I'm sure it is great for a lot of folks. And I'm sure the model works much better for RHEL support. But it doesn't work in our environment, and I don't think our environment is all that strange. If this had taken much longer (or, TBH, had I more accurately assessed how long it would take), we probably would have chosen something other than oVirt.

I do think using CM systems as the installer works well for this class of application (apps that need to interoperate with/across a number of other independent software bits) - let the Ansible folks maintain the interop details. It also has the advantage of integrating nicely with at least a subset of users who use the same CM tool. Foreman, of course, is the canonical example of this at the moment.

I am really, really hoping that there is a way to coherently look at DB changes for an upgrade by the time I need to. (This is something our change control policies demand anyway - DBAs apply changes to production only after reviewing SQL. A generic sketch of one way to do that review follows at the end of this message.)

Anyway, I'm mainly just trying to provide feedback on my experience. I'm almost there in terms of getting this working.

I really appreciate your time and help,

-j
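A generic sketch of the kind of upgrade review mentioned above, independent of anything oVirt-specific: run the upgrade against a throwaway copy of the engine DB and hand the DBAs a schema diff. Hosts, database names and file names here are placeholders:

    # On a staging copy of the engine database
    $ pg_dump --schema-only -h staging-db.example.com -U engine engine > schema-before.sql
    # ... run the engine upgrade against the staging copy ...
    $ pg_dump --schema-only -h staging-db.example.com -U engine engine > schema-after.sql
    $ diff -u schema-before.sql schema-after.sql > engine-upgrade-review.diff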