[Engine-devel] Change in the location of krb5.conf
by Juan Hernandez
Hello all,
The location of the krb5.conf file used to be indicated by the
jboss.server.config.dir system property, but as part of the change to
generate the JBoss configuration file from a template I had to change
that. The krb5.conf file now has to live in
/etc/ovirt-engine. This was already the location for environments
installed from RPMs, so if you are using RPMs you don't need to change
anything.
In development environments you will need to use the environment
variable ENGINE_ETC:
export ENGINE_ETC=$JBOSS_HOME/standalone/configuration
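A minimal sketch of what that means in practice, assuming the engine looks krb5.conf up under the directory that ENGINE_ETC points to (the source path below is a placeholder, not something from this message):
# With ENGINE_ETC exported as above, make sure krb5.conf is present there:
cp /path/to/your/krb5.conf "$ENGINE_ETC/krb5.conf"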
I am still working on a better solution and will keep you informed.
Regards,
Juan Hernandez
--
Business address: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Registered in the Madrid Mercantile Registry – C.I.F. B82657941 - Red Hat S.L.
12 years, 5 months
[Engine-devel] Please review: Sync Networks enhancement
by Mike Kolesnik
http://www.ovirt.org/wiki/SetupNetworks_SyncNetworks
Please review this enhancement to the way network attachments on host network devices are kept in sync (or not).
Regards,
Mike
12 years, 5 months
[Engine-devel] Requirements for hosts in a non-virt cluster.
by Steve Gordon
Hi guys,
When a cluster is created with 'Enable virt service' selected, we use vdsGetCaps when a host is later added to it to check that the host has the required virtualization extensions, etc. What checks are performed if 'Enable virt service' is *not* checked but 'Enable gluster service' is?
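For reference, the capability set that vdsGetCaps examines can be inspected directly on a host with the VDSM command-line client (a sketch, assuming VDSM is up and listening with SSL on the host itself):
vdsClient -s 0 getVdsCaps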
Steve
12 years, 5 months
[Engine-devel] Minor trouble getting SSL enabled in JBOSS
by Frantz, Chris
Greetings,
I've been getting my feet wet with the ovirt-engine codebase. I've followed the instructions on the Building_oVirt_engine page and everything went rather well, except for enabling SSL in JBoss (7.1.1.Final).
I had to do this, instead of what is in the wiki page:
$ cd /usr/share/jboss-as
$ keytool -genkey -alias jboss -keyalg RSA -keysize 1024 -keystore .keystore -validity 3650
$ chown jboss-as:jboss-as .keystore
$ /usr/share/jboss-as/bin/jboss-cli.sh --connect
[standalone@localhost:9999 /] /subsystem=web/connector=https/ssl=configuration/:add
[standalone@localhost:9999 /] /subsystem=web/connector=https/ssl=configuration/:write-attribute(name=name,value=https)
[standalone@localhost:9999 /] /subsystem=web/connector=https/ssl=configuration/:write-attribute(name=key-alias,value=jboss)
[standalone@localhost:9999 /] /subsystem=web/connector=https/ssl=configuration/:write-attribute(name=password,value=PASSWORD)
[standalone@localhost:9999 /] /subsystem=web/connector=https/ssl=configuration/:write-attribute(name=certificate-key-file,value=/usr/share/jboss-as/.keystore)
[standalone@localhost:9999 /] exit
# service jboss-as restart
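After the restart, a quick sanity check of the connector might look like this (a sketch; 8443 is the default https socket binding in JBoss AS 7, and -k is needed because the certificate is self-signed):
$ curl -k -I https://localhost:8443/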
My knowledge of JBoss is extremely limited, so I don't know whether this different procedure is due to a change between JBoss versions, a misconfiguration on my part, or some other factor.
Would anyone care to comment? Should I update the wiki page with this alternate procedure?
Thanks,
--Chris
12 years, 5 months
[Engine-devel] Changes to engine DB DWH views.
by Yaniv Dary
Good morning,
I have encountered several cases in which people broke the DWH views in the engine DB and then "fixed" this by modifying the views themselves just to make the create-DB scripts work. This breaks the ETL that collects samples from the engine and is a real headache for me, with blockers on builds and so on.
Developers, please email me if changes you make break any of these views. Reviewers, please don't approve changes to these views unless you are sure the DWH side was handled.
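One cheap way to catch this early is to compare the list of DWH views before and after a schema change; a sketch, assuming a local PostgreSQL database named 'engine' and the usual 'dwh_' view-name prefix:
psql -d engine -c "SELECT viewname FROM pg_views WHERE viewname LIKE 'dwh%' ORDER BY viewname;"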
Have a great week!
---
Yaniv Dary
BI Software Engineer
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 43501
Tel : +972 (9) 7692306
72306
Email: ydary(a)redhat.com
IRC : Yaniv D
12 years, 5 months
[Engine-devel] Fwd: Problem in REST API handling/displaying of logical networks
by Mike Kolesnik
Hi All,
I would like to hear opinions about a behaviour that I think is problematic in
REST API handling of logical networks.
-- Intro --
Today in the REST API we are exposing two collections for "logical
network" related entities.
The first is a top-level collection, outside of any specific context, at the address
http://engine/api/networks.
The second is a sub-collection in the context of a cluster:
http://engine/api/cluster/xxx/networks
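For illustration, the two collections can be fetched like this (a sketch; the credentials are placeholders and 'xxx' stands for a cluster id):
curl -u admin@internal:PASSWORD -H 'Accept: application/xml' http://engine/api/networks
curl -u admin@internal:PASSWORD -H 'Accept: application/xml' http://engine/api/cluster/xxx/networks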
The network itself is defined at the DC level, so for each DC you would have
at least one logical network for management, which has some properties such
as STP, MTU, etc.
The top-level collection is used to create/delete such network entities.
The sub-collection in the context of a cluster is used to attach a network
from that cluster's DC to the cluster, or to detach it.
The network in the context of a cluster has some additional information, for
example the 'status' of the network:
If a network is defined on all hosts in the cluster then its status is
'Operational'.
If a network is not defined on some of the hosts in the cluster then its
status is 'Non Operational'[1].
-- Problem --
The problem is that details which are only relevant in the context of a
cluster are still displayed in the root context as well (e.g. 'status').
This can, in certain cases, cause unexpected behaviour.
For example, let's consider this topology:
Data Center A
 |
 |\____ Network 'red'
 |\____ Cluster A1
 |         \______ Network 'red' attached
  \____ Cluster A2
            \______ Network 'red' attached
If the 'status' is the same on all the clusters that the network is attached to
(A1, A2), then there will be one element in the top-level collection, with the
network details and the 'status' field representing the state (which is the
same for the network in all of its cluster contexts).
If, however, the status is not the same (i.e. on A1 the network is
'Operational' and on A2 it is 'Non Operational'), then the top-level
collection will show two elements for the network, where all network
details are the same and only the 'status' field is different.
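To make the duplication concrete, the top-level collection would then contain something along these lines (an illustration only; the id value and exact element layout are made up):
<networks>
  <network id="1234"> <name>red</name> <status><state>Operational</state></status> ... </network>
  <network id="1234"> <name>red</name> <status><state>Non Operational</state></status> ... </network>
</networks>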
This is problematic IMHO for several reasons:
1. Showing one network in certain states and multiple copies of this
network in other states is not optimal, to say the least.
2. In the top-level collection there is no indicator of the cluster for which
the network is displayed, so there is no way to differentiate between the
two 'red' network elements (they will have the same id, name, etc.).
3. There is a certain asymmetry between the remove action[2] and the result
you would expect: you remove a single network, but in the result you would
see several elements removed.
-- Proposed Solutions --
Personally I can think of several solutions to this problem:
1. Declare the top-level collection as a collection of all networks, whether
attached to a cluster or not, and if they are indeed attached then
show the details for each cluster, including a link to the cluster.
2. Declare the top-level collection as a collection of all networks that are
defined in data-centers, but without any cluster-specific
data, so that each entry is unique.
Solution #2 breaks API backwards-compatibility, since it involves removing
certain fields that appear today (namely 'status' and 'display'), but IMO it
would give a better experience, since the top-level collection is actually
used for managing networks, not their attachment to clusters, which should
be done in the context of each cluster.
I would like to hear what suggestions you have to solve this problem or if
you prefer either of the above solutions.
-- Footnotes --
[1] In 3.1 this is slightly different, but for the sake of simplicity I didn't
specify the new behaviour.
[2] Currently you can't update the network if it's attached to any cluster,
but perhaps in the future this would be possible.
Regards,
Mike
12 years, 5 months
[Engine-devel] Separating engine-setup from ovirt-engine
by Ofer Schreiber
These days, ovirt-engine-setup is part of the big ovirt-engine RPM.
This means that each time you need to build a new ovirt-engine-setup RPM, you have to compile the whole engine.
I've started thinking about separating it into another git repository (similar to ovirt-iso-uploader), so we will be able to build this RPM separately.
This change is really easy to implement (actually, I have already done it locally), and it sounds to me like the right thing to do.
Thoughts?
Ofer.
12 years, 5 months