The data Bitergia is collecting for oVirt, Gluster, and RDO is part of a dashboard project
that these teams have been working on for some time now, and the end results are finally
ready to post to the oVirt community to get their feedback. What you found in [1] is
something that we want to host locally on
ovirt.org, using a daily cron job that will pull
the cached data from Bitergia's git repo found at [2].
This has already been done by Rich Bowen at RDO, and the results are at [3].
There is no urgency for this task. I did talk to Karsten about it a while back, but my
mention of it on the [infra] list is really my first effort to get this going.
As you requested, the guidelines sent to me by Rich were:
1) Check out the [Bitergia] git repo somewhere - anywhere - on the web
server. Make sure the directory has +rx permissions so that Apache can
descend into it and read the files. (A sketch of the whole setup
follows after step 4.)
2) git pull every morning with a cron job:
0 4 * * * cd /var/redhat-rdo-dashboard/browser && git pull origin master
(Not sure about the time - if you can tell about what time of day they
push the updates, we could time the pull better. I haven't done that
yet, so I'm probably a day behind. I should check.)
3) Point Apache at it:
Alias /stats /var/redhat-rdo-dashboard/browser
<Directory /var/redhat-rdo-dashboard>
Options FollowSymLinks
Order allow,deny
Allow from all
DirectoryIndex index.html
</Directory>
4) Restart httpd and you should be golden. That'll create a URI of
/stats for that content.
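Putting Rich's steps together, a minimal sketch of the whole setup
might look like this (I'm assuming a /var/redhat-ovirt-dashboard path
here, mirroring RDO's layout; adjust to wherever it actually lands):

    # 1) check out Bitergia's cached data somewhere on the web server
    cd /var
    git clone https://github.com/Bitergia/redhat-ovirt-dashboard
    # make sure Apache can descend into the tree and read the files
    chmod -R a+rX /var/redhat-ovirt-dashboard

    # 2) schedule the daily pull (crontab -e as the owning user), e.g.:
    #    0 4 * * * cd /var/redhat-ovirt-dashboard/browser && git pull origin master

    # 3+4) after adding the Alias/Directory stanza, sanity-check and restart
    apachectl configtest && service httpd restart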
Let me know if you have any additional questions.
Peace,
Brian
[1] http://bitergia.com/projects/redhat-ovirt-dashboard/browser/
[2] https://github.com/Bitergia/redhat-ovirt-dashboard
[3] http://openstack.redhat.com/stats/
----- Original Message -----
From: "R P Herrold" <herrold(a)owlriver.com>
To: "Brian Proffitt" <bproffit(a)redhat.com>
Cc: "oVirt infrastructure ML" <infra(a)ovirt.org>
Sent: Thursday, January 23, 2014 3:30:18 PM
Subject: Re: oVirt.org Access Needed
On Thu, 23 Jan 2014, Brian Proffitt wrote:
Thanks to all for the assist. Rich just sent me the how-to, and
apparently I need to ssh into the actual server to pull the git repo
of Bitergia's data into the web server, set up the alias in the conf
file, and add a cron job to pull the git data down daily.
Could you please forward those instructions into the infra ML,
so the list members can see how it was set up?
and so to some questions for clarification: Is 'bitergia' and its
sub-parts packaged into a form that has landed in Fedora, or ...
where? I found the demo instance we were pointed to very sluggish and
heavy in my local browser, but have not yet run down why with a local
install.
I spent some time looking for a way to retrieve the sources to
do a local setup, and was not able to find them. Are they
under a FOSS license?
Is that linked instance pulling real-time stats from live servers, or
from cached details? If the former, infra probably needs to get an
understanding of the load effects, as the oVirt infrastructure lacks
spare capacity in terms of memory, and in some cases in terms of
network bandwidth. No surprises there, as it has been reported in
infra meetings and in the Wednesday 'sync', but ...
So, is that do-able? Or would it be easier for someone on
the team to set this up?
'do-able' and 'done right' probably are different here. The
demo instance shows it _can_ be done, but ...
/me looks for a soapbox and puts on an infra 'hat':
unpackaged tools are 'magical', and magical is a problem
Unpackaged tools are un-vetted in a traceable manner as to license.
Unpackaged tools in a remote VCS can simply disappear, be invisibly
compromised, go through an API change, or otherwise become NON
re-deployable in the future. A start-up vendor can close its doors
and disappear, re-license, take down archives, ... . Entropy happens
all the time.
The discipline of packaging prevents many of these problems from
gaining a toe-hold: it forces retrieval of a specific version, which
may be checked against published md5sums (sha, whatever); gets a
review (by human eyes); and gets the build process replicated by a
non-human auto-builder. If there is a good 'make test', it also has a
sample set of configs to read, to confirm function.
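As a trivial illustration of that verification step, the retrieval
might look something like this (the URLs and file names below are made
up for the example):

    # fetch a release tarball and its published checksum file
    curl -O https://example.org/releases/tool-1.2.tar.gz
    curl -O https://example.org/releases/tool-1.2.tar.gz.sha256

    # confirm the tarball matches the published digest before review/build
    sha256sum -c tool-1.2.tar.gz.sha256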
Tools which ** require ** a manual [woops -- magical ;) ] content
deployment from git, from instructions conveyed in a non-public
channel (or: unrolling a tarball, or running some tool which
untraceably solves dependencies, such as 'cpan' or 'npm', to get a
point-in-time image which may be broken tomorrow when one goes to
re-deploy it), are broken.
Such a tool may be pulling in encumbered matter (eg: a patent problem
like the old 'gif', or 'rar'). An undocumented manual configuration /
setup process is inherently fragile.
New non-packaged matter needs to explain the path it will follow to
move to being packaged. In the interim, it needs the rigor of
documented software 'engineering'. One road to that documentation of
process is for it to be done via a VCS check-in, with a puppet
checkout in deployment, just like management of configurations, so it
is re-doable and relocatable via puppet.
Also, I do not find that this 'bitergia' facility has a tracking bug
asking that it be installed [1] [2]. If there is an urgency to attain
it, today is the first time it has been mentioned publicly.
The approach of being 'undocumented' except via back-channel
communication (digging through IRC logs, digging through email
archives, whatever) is a broken one. The result can turn into a
single point of outage when the maker of the tool is offline or
otherwise unavailable.
Turning to process for 'infra', it seems to me a person proposing
something should be able to point to a solution which has it 'solved'
in demonstration: a puppet recipe, checked into a testing branch of
the infra git, which handles building a testing instance and then
managing its configuration (and so, when re-pointed to 'live', takes
it live).
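For instance (the repo URL, branch name, and manifest name below are
all hypothetical), exercising such a recipe could look like:

    # check out the testing branch of the infra git
    git clone git://gerrit.ovirt.org/infra-puppet.git && cd infra-puppet
    git checkout testing

    # dry-run the proposed recipe; --noop reports changes without making them
    puppet apply --noop manifests/stats_dashboard.pp

    # if the dry run is sane, apply it on the testing instance
    puppet apply manifests/stats_dashboard.pp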
-- Russ herrold
[1] https://fedorahosted.org/ovirt/report/1?page=1&asc=0&sort=created
[2] http://bitergia.com/projects/redhat-ovirt-dashboard/browser/