[Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration
by Deepak C Shetty
Hello All,
I have a draft write-up on the VDSM-libstoragemgmt integration.
I wanted to run this through the mailing list(s) to help tune and
crystallize it, before putting it on the ovirt wiki.
I have already run this once past Ayal and Tony, so some of their
comments are incorporated.
I still have a few doubts/questions, which I have posted below as lines
ending with '?'.
Comments / Suggestions are welcome & appreciated.
thanx,
deepak
[Ccing engine-devel and libstoragemgmt lists as this stuff is relevant
to them too]
--------------------------------------------------------------------------------------------------------------
1) Background:
VDSM provides a high-level API for node virtualization management. It acts
in response to requests sent by oVirt Engine, which uses VDSM to do
all node virtualization related tasks, including but not limited to
storage management.
libstoragemgmt aims to provide a vendor-agnostic API for managing external
storage arrays. It should give system administrators using open source
solutions a way to programmatically manage their storage hardware in a
vendor-neutral way. It also aims to facilitate management automation and
ease of use, and to take advantage of storage vendor-supported features
which improve storage performance and space utilization.
Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/
libstoragemgmt (LSM) today supports C and Python plugins for talking to
external storage arrays using SMI-S as well as native interfaces (eg: the
netapp plugin).
The plan is to grow the SMI-S interface as needed over time and to add
more vendor-specific plugins for exploiting features that are not possible
via SMI-S or that have better alternatives than SMI-S.
For example, many of the copy offload features require vendor-specific
commands, which justifies the need for a vendor-specific plugin.
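To make the interface concrete, here is a minimal sketch of what querying
an array via the LSM Python bindings could look like. This is only an
illustration: the client class, URI scheme and attribute names below are
assumptions on my part, not verified against the current LSM API.

    # Hedged sketch -- names are assumptions, check the LSM docs.
    import lsm

    conn = lsm.Client('sim://')    # the URI selects the plugin (simulator here)
    for pool in conn.pools():      # containers (aka pools) on the array
        print pool.name, pool.free_space
    conn.close()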
2) Goals:
2a) Ability to plugin external storage array into oVirt/VDSM
virtualization stack, in a vendor neutral way.
2b) Ability to list features/capabilities and other statistical
info of the array
2c) Ability to utilize the storage array offload capabilities from
oVirt/VDSM.
3) Details:
LSM will sit as a new repository engine in VDSM.
VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192
Current plan is to have LSM co-exist with VDSM on the virtualization nodes.
*Note: 'storage' used below is generic. It can be a file/nfs-export for
NAS targets and LUN/logical-drive for SAN targets.
VDSM can use LSM and do the following...
- Provision storage
- Consume storage
3.1) Provisioning Storage using LSM
Typically this will be done by a storage administrator.
oVirt/VDSM should provide the storage admin the
- ability to list the different storage arrays along with their
types (NAS/SAN), capabilities, free/used space.
- ability to provision storage using any of the array capabilities
(eg: thin provisioned lun or new NFS export )
- ability to manage the provisioned storage (eg: resize/delete storage)
Once the storage is provisioned by the storage admin, VDSM will have to
refresh the host(s) for them to be able to see the newly provisioned
storage.
3.1.1) Potential flows:
Mgmt -> vdsm -> lsm: create LUN + LUN Mapping / Zoning / whatever is
needed to make LUN available to list of hosts passed by mgmt
Mgmt -> vdsm: getDeviceList (refreshes host and gets list of devices)
Repeat above for all relevant hosts (depending on list passed earlier,
mostly relevant when extending an existing VG)
Mgmt -> use LUN in normal flows.
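As a rough illustration, the first flow above could map to a new VDSM verb
along these lines (all names here - createExternalLun, volume_create,
access_grant - are invented for the sketch, not actual VDSM or LSM API):

    # Hypothetical sketch of the create-LUN flow; names are invented.
    def createExternalLun(lsm_client, pool_id, size_bytes, host_initiators):
        vol = lsm_client.volume_create(pool_id, 'new-lun', size_bytes)
        for initiator in host_initiators:
            # mask/zone the LUN so every host passed by mgmt can see it
            lsm_client.access_grant(initiator, vol)
        return vol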
3.1.2) How will oVirt Engine know which LSM to use?
Normally, the way this works today is that the user can choose the host
to use (the default today is the SPM); however, there are a few flows
where mgmt will know which host to use (see the sketch after these flows):
1. extend storage domain (add LUN to existing VG) - Use SPM and make
sure *all* hosts that need access to this SD can see the new LUN
2. attach new LUN to a VM which is pinned to a specific host - use this host
3. attach new LUN to a VM which is not pinned - use a host from the
cluster the VM belongs to and make sure all nodes in cluster can see the
new LUN
Flows for which there is no clear candidate (maybe we can use the SPM
host itself, which is the default?):
1. create a new disk without attaching it to any VM
2. create a LUN for a new storage domain
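Summarizing the above rules in code form, the host selection could look
roughly like this (purely illustrative; the flow names and attributes are
invented):

    # Invented sketch of the flow -> host mapping described above.
    def pick_host(flow, spm, vm=None, cluster=None):
        if flow == 'extend_storage_domain':
            return spm                  # then verify all SD hosts see the LUN
        if flow == 'attach_lun_to_vm' and vm is not None:
            if vm.pinned_host:
                return vm.pinned_host
            return cluster.any_host()   # all cluster nodes must see the LUN
        return spm                      # default for the unclear cases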
3.2) Consuming storage using LSM
Typically this will be done by a virtualization administrator.
oVirt/VDSM should allow the virtualization admin to
- Create a new storage domain using the storage on the array.
- Be able to specify whether VDSM should use the storage offload
capability (default) or override it to use its own internal logic.
4) VDSM potential changes:
4.1) How to represent a VM disk: 1 LUN == 1 vmdisk or 1 LV == 1 vmdisk?
Which brings another question... 1 array == 1 storage domain, OR 1
LUN/nfs-export on the array == 1 storage domain?
Pros & Cons of each...
1 array == 1 storage domain
- Each new vmdisk (aka volume) will be a new lun/file on the array.
- Easier to exploit offload capabilities, as they are available at
the LUN/File granularity
- Will there be any issues when there are too many LUNs/Files
... is there any max-LUN limit on Linux hosts that we might hit?
-- VDSM has been tested with 1K LUNs and it worked fine - ayal
- Storage array limitations on the number of LUNs can be a downside
here.
- Would it be OK to share the array for hosting another storage
domain if need be?
-- Provided the existing domain is not utilising all of the
free space
-- We can create new LUNs and hand them over to whoever needs them?
-- Changes needed in VDSM to work with raw LUNs, today it only has
support for consuming LUNs via VG/LV.
1 LUN/nfs-export on the array == 1 storage domain
- How to represent a new vmdisk (aka vdsm volume) if it's a LUN
provisioned on a SAN target?
-- Will it be VG/LV as is done today for block domains ?
-- If yes, then it will be difficult to exploit offload
capabilities, as they are at LUN level, not at LV level.
- Each new vmdisk will be a new file on the nfs-export; assuming the
offload capability is available at the file level, this should work
for NAS targets?
- Can use the storage array for hosting multiple storage domains.
-- Provision one more LUN and use it for another storage domain
if need be.
- VDSM already supports this today, as part of block storage
domains for LUNs case.
Note that we will allow the user to choose either of the two options
above, depending on need.
4.2) Storage domain metadata will also include the features/capabilities
of the storage array as reported by LSM.
- Capabilities (taken via LSM) will be stored in the domain
metadata during storage domain create flow.
- Need changes in oVirt engine as well ( see 'oVirt Engine
potential changes' section below )
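As an illustration, the capability info stored at create time could be as
simple as a small dictionary persisted with the rest of the domain
metadata (the keys below are invented for the example):

    # Invented example of capability info kept in the domain metadata.
    capabilities = {
        'SNAPSHOT_OFFLOAD': True,
        'COPY_OFFLOAD': True,
        'THIN_PROVISIONING': False,
    }
    domain_metadata['LSM_CAPABILITIES'] = capabilities  # saved in create flow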
4.3) VDSM to poll LSM for array capabilities on a regular basis ?
Per ayal:
1. If we have a 'storage array' entity in oVirt Engine (see 'oVirt
Engine potential changes' section below) then we can have a 'refresh
capabilities' button/verb.
2. We can periodically query the storage array.
3. Query LSM before running operations (sounds redundant to me, but
if it's cheap enough it could be the simplest).
We probably need a combination of 1+2 (query at a very low frequency -
1/hour or 1/day - plus a refresh button).
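A rough sketch of that 1+2 combination (hypothetical helper names; the
capabilities() call stands in for whatever LSM query we end up using):

    # Low-frequency periodic poll; a 'refresh capabilities' verb would
    # simply call refresh_capabilities() directly.
    import threading

    REFRESH_INTERVAL = 3600  # seconds; 1/hour as suggested above

    def refresh_capabilities(lsm_client, domain):
        domain.capabilities = lsm_client.capabilities()
        threading.Timer(REFRESH_INTERVAL, refresh_capabilities,
                        args=(lsm_client, domain)).start()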
5) oVirt Engine potential changes - as described by ayal:
- We will either need a new 'storage array' entity in engine to
keep credentials, or, in case of storage array as storage domain, just
keep this info as part of the domain at engine level.
- Have a 'storage array' entity in oVirt Engine to support
'refresh capabilities' as a button/verb.
- When the user, during storage provisioning, selects a LUN exported
from a storage array (via LSM), oVirt Engine will know from then
on that this LUN is served via LSM.
It will then be able to query the capabilities of the LUN and
show them to the virt admin during the storage consumption flow.
6) Potential flows:
- Create snapshot flow
-- VDSM will check the snapshot offload capability in the
domain metadata
-- If available, and override is not configured, it will use
LSM to offload LUN/File snapshot
-- If override is configured or the capability is not available, it
will use its internal logic to create the snapshot (qcow2).
- Copy/Clone vmdisk flow
-- VDSM will check the copy offload capability in the domain
metadata
-- If available, and override is not configured, it will use
LSM to offload LUN/File copy
-- If override is configured or the capability is not available, it
will use its internal logic to create the copy (eg: dd cmd in the case
of a LUN).
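Both flows share the same decision pattern, roughly (an invented sketch,
where offload_fn stands for the LSM-backed path and fallback_fn for VDSM's
internal logic):

    # Invented sketch of the offload-or-fallback decision used by both flows.
    def run_with_offload(domain, volume, capability, offload_fn, fallback_fn,
                         override=False):
        if domain.capabilities.get(capability) and not override:
            return offload_fn(volume)   # e.g. LUN/File snapshot on the array
        return fallback_fn(volume)      # e.g. qcow2 snapshot or dd copy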
7) LSM potential changes:
- list features/capabilities of the array. Eg: copy offload, thin
prov. etc.
- list containers (aka pools) (present in LSM today)
- Ability to list different types of arrays being managed, their
capabilities and used/free space
- Ability to create/list/delete/resize volumes ( LUN or exports,
available in LSM as of today)
- Get monitoring info with an object (LUN/snapshot/volume) as an optional
parameter for specific info, eg: container/pool free/used space, raid
type, etc.
We need to make sure the above info is listed in a coherent way across
arrays (number of LUNs, raid type used, free/total per container/pool,
per LUN?). We also need I/O statistics wherever possible.
[Engine-devel] [ovirt-engine-sdk] Simplify the process of the RSDL code generation
by ShaoHe Feng
Hi all,
Now I'm using the code generation suite of ovirt-engine-sdk, and I find
it very troublesome.
IMO, we can simplify the process, and I want to engage in it.
There are two tools that parse api.xsd and generate the params.py code:
generateds_gui.py and generateDS.py.
But there is still some code that cannot be generated by these tools;
currently we have to add that code manually.
The non-generated parts of the current params.py code are as follows:
1. the python module imports
2. a new attribute of the GeneratedsSuper class
3. modifications to the get_root_tag function
4. modifications to the parseString function to silence stdout
5. _rootClassMap
6. _elementToClassMap
7. findRootClass
I have two ideas about the code generation.
We should not modify the generateDS.py tool itself, but we can build on
top of it.
I think items 1, 2, 3 and 7 can be hard-coded, and 4, 5 and 6 can be
configured.
So I want to add a configuration file that describes how to add the extra
code that is not generated by the generateDS.py tool, and a new Python
program, as an extension of generateDS.py, that reads the configuration
file and generates this code.
Alternatively, without the configuration file, the new Python program
could support interactive commands.
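To illustrate the idea, the new program could be as simple as appending
configured snippets to the generated file (a made-up sketch; the config
format and file names are invented, Python 2 as used by the SDK at the
time):

    # Invented sketch: post-process generateDS.py output using a config file.
    import ConfigParser

    def inject_extra_code(params_path, conf_path):
        conf = ConfigParser.ConfigParser()
        conf.read(conf_path)
        with open(params_path) as f:
            code = f.read()
        for section in conf.sections():    # e.g. parseString, _rootClassMap
            with open(conf.get(section, 'source')) as s:
                code += '\n' + s.read()    # append the hand-written part
        with open(params_path, 'w') as f:
            f.write(code)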
[Engine-devel] LOCALFS path validation
by Amador pahim
Hello,
I'm starting to get to know the engine code. I picked a small
unstandardized behaviour to follow through the devel process. I have a
patch, and I'd like to know whether you feel it is relevant to correct
this issue:
- Description: Adding a LOCAL storage [1], webadmin does not validate
the path against a regex, sending the invalid path (with a trailing
slash) to vdsm [2] [3]. But when adding an NFS storage, the path is
validated before contacting vdsm [4], avoiding extra vdsm processing and
quickly/clearly informing the user about what's wrong.
- Expected result: Same behaviour for NFS and LOCALFS storage path
validation: validate the LOCALFS path in webadmin before sending it to
vdsm [5] (a sketch of such a check follows the links below).
- Newbie doubt: Wouldn't it be better to validate both the local and NFS
paths on the backend, covering all user interfaces/APIs?
[1] -
https://picasaweb.google.com/lh/photo/FWNiou2Y12GZO3AjfCH6K7QAv8cs6edaj3f...
[2] -
https://picasaweb.google.com/lh/photo/Pof6Z8ohgQAkRTDpEJKG-LQAv8cs6edaj3f...
[3] - https://gist.github.com/2762656
[4] -
https://picasaweb.google.com/lh/photo/Fd3zWegWE0T5C2tDo_tPZrQAv8cs6edaj3f...
[5] -
https://picasaweb.google.com/lh/photo/PgzYrZHkkvm-WtFk_UFZLrQAv8cs6edaj3f...
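Just to illustrate the kind of check meant in [5] (the engine itself is
Java; this Python regex is only an example of the rule, not the actual
engine code):

    # Illustrative only: absolute path, no trailing slash.
    import re

    LOCALFS_PATH_RE = re.compile(r'^/(?:[^/]+/)*[^/]+$')

    def valid_localfs_path(path):
        return bool(LOCALFS_PATH_RE.match(path))

    valid_localfs_path('/data/images')   # True
    valid_localfs_path('/data/images/')  # False - trailing slash rejected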
I look forward to hearing your comments.
Best Regards,
--
Pahim
[Engine-devel] Maven 3 here we come!
by Doron Fediuck
Hi all,
As discussed last month[1], we had to deal with some issues which turned out to be a Maven bug.
Thanks to Juan and Asaf's work, our current sources now build properly using Maven 3.
So you're all invited to migrate into Maven 3. Other than upgrading your local maven package
no other action is needed.
For now, Maven 2 will also work for you, but I expect in the future we'd like to make use
of some advanced features, so migration to 3 is recommended.
Speaking of advanced features, an interesting challenge is getting
feedback on parallel builds [2]. Whoever wants to try them out and report
whether they improve run time without breaking anything will be
appreciated.
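For anyone trying it, parallel mode is driven by Maven 3's -T option, e.g.:
$ mvn -T 4 clean install    # four build threads
$ mvn -T 1C clean install   # one build thread per CPU core
See [2] for the details and the caveats about plugins that are not marked
@threadSafe.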
Happy migration!
[1] http://lists.ovirt.org/pipermail/arch/2012-April/000490.html
[2] https://cwiki.apache.org/MAVEN/parallel-builds-in-maven-3.html
--
/d
"Email returned to sender -- insufficient voltage."
Re: [Engine-devel] [oVirt Jenkins] ovirt_engine_find_bugs - Build # 915 - Still Unstable!
by Eyal Edri
FYI,
This set of patches introduced new HIGH findbugs warnings:
http://jenkins.ovirt.org/job/ovirt_engine_find_bugs/913/changes:
webadmin: Gluster Volume Options - populating from help xml (details)
webadmin: Gluster Brick sub tab - removing columns (details)
webadmin: Support for mode specific Tabs and Sub Tabs (details)
restapi: Gluster Resources Implementation classes (details)
restapi: RSDL metadata for gluster related REST api (details)
restapi: Gluster Volumes Collection implementation (details)
engine: Add ID fields to gluster brick and option (details)
webadmin: Gluster Volume - add bricks enabling (#823284) (details)
webadmin: Gluster Volume - upadting actions (#823273) (details)
webadmin: Gluster Volume - validations fixed (#823277) (details)
bugs appear to be in GlusterVolumeEntity.java:
http://jenkins.ovirt.org/job/ovirt_engine_find_bugs/913/findbugsResult/HI...
http://jenkins.ovirt.org/job/ovirt_engine_find_bugs/913/findbugsResult/HI...
Please review and handle,
Eyal Edri
oVirt Infra Team
----- Original Message -----
> From: "Jenkins oVirt Server" <jenkins(a)ovirt.org>
> To: eedri(a)redhat.com, engine-patches(a)ovirt.org, oliel(a)redhat.com, yzaslavs(a)redhat.com, amureini(a)redhat.com,
> dfediuck(a)redhat.com
> Sent: Tuesday, May 22, 2012 12:11:33 PM
> Subject: [oVirt Jenkins] ovirt_engine_find_bugs - Build # 915 - Still Unstable!
>
> Project: http://jenkins.ovirt.org/job/ovirt_engine_find_bugs/
> Build: http://jenkins.ovirt.org/job/ovirt_engine_find_bugs/915/
> Build Number: 915
> Build Status: Still Unstable
> Triggered By: Started by upstream project "ovirt_engine" build number
> 1,240
>
> -------------------------------------
> Changes Since Last Success:
> -------------------------------------
> Changes for Build #913
> [gchaplik] webadmin: Gluster Volume Options - populating from help
> xml
>
> [gchaplik] webadmin: Gluster Brick sub tab - removing columns
>
> [gchaplik] webadmin: Support for mode specific Tabs and Sub Tabs
>
> [sanjal] restapi: Gluster Resources Implementation classes
>
> [sanjal] restapi: RSDL metadata for gluster related REST api
>
> [sanjal] restapi: Gluster Volumes Collection implementation
>
> [sanjal] engine: Add ID fields to gluster brick and option
>
> [gchaplik] webadmin: Gluster Volume - add bricks enabling (#823284)
>
> [gchaplik] webadmin: Gluster Volume - upadting actions (#823273)
>
> [gchaplik] webadmin: Gluster Volume - validations fixed (#823277)
>
>
> Changes for Build #914
> [emesika] core:dbfunctions.sh script needs to be compatible with DWH
>
> [mpastern] restapi: fix rsdl regression
>
>
> Changes for Build #915
> [dfediuck] core: Use same ids for artifacts and plugins
>
> [amureini] core: Allow admin permissions in user views
>
> [amureini] core: Roles commands cleanup
>
> [amureini] core: Cleanup Permissions Commands
>
> [amureini] core: Roles commands - use the cached getRole()
>
> [amureini] core: is_inheritable property to MLA entities
>
>
>
>
> -----------------
> Failed Tests:
> -----------------
> No tests ran.
>
> ------------------
> Build Log:
> ------------------
> [...truncated 4148 lines...]
> [INFO] Assembling webapp [userportal] in
> [/ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal-gwtp/target/userportal-3.1.0-0001]
> [INFO] Processing war project
> [INFO] Copying webapp resources
> [/ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal-gwtp/src/main/webapp]
> [INFO] Webapp assembled in [147 msecs]
> [INFO] Building war:
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal-gwtp/target/userportal-3.1.0-0001.war
> [INFO] WEB-INF/web.xml already added, skipping
> [INFO]
> [INFO] --- maven-install-plugin:2.3.1:install (default-install) @
> userportal ---
> [INFO] Installing
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal-gwtp/target/userportal-3.1.0-0001.war
> to
> /home/jenkins/workspace/ovirt_engine_find_bugs/.repository/org/ovirt/engine/ui/userportal/3.1.0-0001/userportal-3.1.0-0001.war
> [INFO] Installing
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal-gwtp/pom.xml
> to
> /home/jenkins/workspace/ovirt_engine_find_bugs/.repository/org/ovirt/engine/ui/userportal/3.1.0-0001/userportal-3.1.0-0001.pom
> [INFO]
> [INFO] --- findbugs-maven-plugin:2.4.0:findbugs (default-cli) @
> userportal ---
> [INFO] Fork Value is true
> [INFO]
> [INFO] --- maven-checkstyle-plugin:2.6:check (default) @ webadmin ---
> [INFO] Starting audit...
> Audit done.
>
> [INFO]
> [INFO] --- maven-resources-plugin:2.5:testResources
> (default-testResources) @ webadmin ---
> [debug] execute contextualize
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] skip non existing resourceDirectory
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/src/test/resources
> [INFO]
> [INFO] --- maven-compiler-plugin:2.3.2:testCompile
> (default-testCompile) @ webadmin ---
> [INFO] No sources to compile
> [INFO]
> [INFO] --- maven-surefire-plugin:2.10:test (default-test) @ webadmin
> ---
> [INFO] Tests are skipped.
> [INFO]
> [INFO] --- maven-war-plugin:2.1.1:war (default-war) @ webadmin ---
> [INFO] Packaging webapp
> [INFO] Assembling webapp [webadmin] in
> [/ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/target/webadmin-3.1.0-0001]
> [INFO] Processing war project
> [INFO] Copying webapp resources
> [/ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/src/main/webapp]
> [INFO] Webapp assembled in [147 msecs]
> OpenJDK 64-Bit Server VM warning: CodeCache is full. Compiler has
> been disabled.
> OpenJDK 64-Bit Server VM warning: Try increasing the code cache size
> using -XX:ReservedCodeCacheSize=
> [INFO] Building war:
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/target/webadmin-3.1.0-0001.war
> [INFO] WEB-INF/web.xml already added, skipping
> [INFO]
> [INFO] --- maven-install-plugin:2.3.1:install (default-install) @
> webadmin ---
> [INFO] Installing
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/target/webadmin-3.1.0-0001.war
> to
> /home/jenkins/workspace/ovirt_engine_find_bugs/.repository/org/ovirt/engine/ui/webadmin/3.1.0-0001/webadmin-3.1.0-0001.war
> [INFO] Installing
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/pom.xml
> to
> /home/jenkins/workspace/ovirt_engine_find_bugs/.repository/org/ovirt/engine/ui/webadmin/3.1.0-0001/webadmin-3.1.0-0001.pom
> [INFO]
> [INFO] --- findbugs-maven-plugin:2.4.0:findbugs (default-cli) @
> webadmin ---
> [INFO] Fork Value is true
> [java] Warnings generated: 14
> [INFO] Done FindBugs Analysis....
> [java] Warnings generated: 56
> [INFO] Done FindBugs Analysis....
> [INFO]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Building oVirt Server EAR 3.1.0-0001
> [INFO]
> ------------------------------------------------------------------------
> [WARNING] The POM for
> org.codehaus.mojo:gwt-maven-plugin:jar:1.3.2.google is missing, no
> dependency information available
> [WARNING] Failed to retrieve plugin descriptor for
> org.codehaus.mojo:gwt-maven-plugin:1.3.2.google: Plugin
> org.codehaus.mojo:gwt-maven-plugin:1.3.2.google or one of its
> dependencies could not be resolved: Failed to read artifact
> descriptor for org.codehaus.mojo:gwt-maven-plugin:jar:1.3.2.google
> [WARNING]
> *****************************************************************
> [WARNING] * Your build is requesting parallel execution, but project
> *
> [WARNING] * contains the following plugin(s) that are not marked as
> *
> [WARNING] * @threadSafe to support parallel building.
> *
> [WARNING] * While this /may/ work fine, please look for plugin
> updates *
> [WARNING] * and/or request plugins be made thread-safe.
> *
> [WARNING] * If reporting an issue, report it against the plugin in
> *
> [WARNING] * question, not against maven-core
> *
> [WARNING]
> *****************************************************************
> [WARNING] The following plugins are not marked @threadSafe in oVirt
> Server EAR:
> [WARNING] org.apache.maven.plugins:maven-dependency-plugin:2.1
> [WARNING]
> *****************************************************************
> [INFO]
> [INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @
> engine-server-ear ---
> [INFO] Deleting /ephemeral0/ovirt_engine_find_bugs/ear/target
> [INFO]
> [INFO] --- maven-ear-plugin:2.6:generate-application-xml
> (default-generate-application-xml) @ engine-server-ear ---
> [INFO] Generating application.xml
> [INFO]
> [INFO] --- maven-resources-plugin:2.4.3:resources (default-resources)
> @ engine-server-ear ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] skip non existing resourceDirectory
> /ephemeral0/ovirt_engine_find_bugs/ear/src/main/java
> [INFO] skip non existing resourceDirectory
> /ephemeral0/ovirt_engine_find_bugs/ear/src/main/resources
> [INFO]
> [INFO] --- maven-ear-plugin:2.6:ear (default-ear) @ engine-server-ear
> ---
> [INFO] Copying artifact[jar:org.ovirt.engine.core:common:3.1.0-0001]
> to[lib/engine-common.jar]
> [INFO] Copying artifact[jar:org.ovirt.engine.core:compat:3.1.0-0001]
> to[lib/engine-compat.jar]
> [INFO] Copying artifact[jar:org.ovirt.engine.core:dal:3.1.0-0001]
> to[lib/engine-dal.jar]
> [INFO] Copying artifact[jar:org.ovirt.engine.core:utils:3.1.0-0001]
> to[lib/engine-utils.jar]
> [INFO] Copying
> artifact[jar:org.ovirt.engine.core:engineencryptutils:3.1.0-0001]
> to[lib/engine-encryptutils.jar]
> [INFO] Copying
> artifact[jar:org.ovirt.engine.core:vdsbroker:3.1.0-0001]
> to[lib/engine-vdsbroker.jar]
> [INFO] Copying
> artifact[war:org.ovirt.engine.core:root-war:3.1.0-0001] to[root.war]
> (unpacked)
> [INFO] Copying artifact[war:org.ovirt.engine.ui:rmw-war:3.1.0-0001]
> to[ovirtengineweb.war] (unpacked)
> [INFO] Copying artifact[war:org.ovirt.engine.ui:rm-war:3.1.0-0001]
> to[ovirtengine.war] (unpacked)
> [INFO] Copying
> artifact[war:org.ovirt.engine.ui:components-war:3.1.0-0001]
> to[components.war] (unpacked)
> [INFO] Copying
> artifact[war:org.ovirt.engine.api:restapi-webapp:3.1.0-0001]
> to[restapi.war] (unpacked)
> [INFO] Copying
> artifact[war:org.ovirt.engine.ui:userportal:3.1.0-0001]
> to[userportal.war] (unpacked)
> [INFO] Copying artifact[war:org.ovirt.engine.ui:webadmin:3.1.0-0001]
> to[webadmin.war] (unpacked)
> [INFO] Copying
> artifact[ejb:org.ovirt.engine.ui:genericapi:3.1.0-0001]
> to[engine-genericapi.jar] (unpacked)
> [INFO] Copying
> artifact[ejb:org.ovirt.engine.core:scheduler:3.1.0-0001]
> to[engine-scheduler.jar] (unpacked)
> [INFO] Copying artifact[ejb:org.ovirt.engine.core:bll:3.1.0-0001]
> to[engine-bll.jar] (unpacked)
> [INFO] Copying artifact[jar:commons-codec:commons-codec:1.4]
> to[lib/commons-codec-1.4.jar]
> [INFO] Copying
> artifact[jar:org.hibernate:hibernate-validator:4.0.2.GA]
> to[lib/hibernate-validator-4.0.2.GA.jar]
> [INFO] Copying artifact[jar:javax.validation:validation-api:1.0.0.GA]
> to[lib/validation-api-1.0.0.GA.jar]
> [INFO] Copying artifact[jar:org.slf4j:slf4j-api:1.5.6]
> to[lib/slf4j-api-1.5.6.jar]
> [INFO] Copying artifact[jar:javax.xml.bind:jaxb-api:2.1]
> to[lib/jaxb-api-2.1.jar]
> [INFO] Copying artifact[jar:javax.xml.stream:stax-api:1.0-2]
> to[lib/stax-api-1.0-2.jar]
> [INFO] Copying artifact[jar:javax.activation:activation:1.1]
> to[lib/activation-1.1.jar]
> [INFO] Copying artifact[jar:com.sun.xml.bind:jaxb-impl:2.1.3]
> to[lib/jaxb-impl-2.1.3.jar]
> [INFO] Copying
> artifact[jar:org.hibernate:hibernate-annotations:3.4.0.GA]
> to[lib/hibernate-annotations-3.4.0.GA.jar]
> [INFO] Copying artifact[jar:org.hibernate:ejb3-persistence:1.0.2.GA]
> to[lib/ejb3-persistence-1.0.2.GA.jar]
> [INFO] Copying
> artifact[jar:org.hibernate:hibernate-commons-annotations:3.1.0.GA]
> to[lib/hibernate-commons-annotations-3.1.0.GA.jar]
> [INFO] Copying artifact[jar:org.hibernate:hibernate-core:3.3.0.SP1]
> to[lib/hibernate-core-3.3.0.SP1.jar]
> [INFO] Copying artifact[jar:antlr:antlr:2.7.6]
> to[lib/antlr-2.7.6.jar]
> [INFO] Copying artifact[jar:dom4j:dom4j:1.6.1]
> to[lib/dom4j-1.6.1.jar]
> [INFO] Copying artifact[jar:xml-apis:xml-apis:1.0.b2]
> to[lib/xml-apis-1.0.b2.jar]
> [INFO] Copying
> artifact[jar:org.codehaus.jackson:jackson-mapper-asl:1.9.4]
> to[lib/jackson-mapper-asl-1.9.4.jar]
> [INFO] Copying
> artifact[jar:org.codehaus.jackson:jackson-core-asl:1.9.4]
> to[lib/jackson-core-asl-1.9.4.jar]
> [INFO] Copying
> artifact[jar:org.jboss.spec.javax.interceptor:jboss-interceptors-api_1.1_spec:1.0.0.Final]
> to[lib/jboss-interceptors-api_1.1_spec-1.0.0.Final.jar]
> [INFO] Copying
> artifact[jar:org.ovirt.engine.core:engine-tools-common:3.1.0-0001]
> to[lib/engine-tools-common-3.1.0-0001.jar]
> [INFO] Copying
> artifact[jar:commons-beanutils:commons-beanutils:1.8.2]
> to[lib/commons-beanutils-1.8.2.jar]
> [INFO] Copying artifact[jar:com.jcraft:jsch:0.1.42]
> to[lib/jsch-0.1.42.jar]
> [INFO] Copying artifact[jar:org.apache.mina:mina-core:2.0.1]
> to[lib/mina-core-2.0.1.jar]
> [INFO] Copying artifact[jar:org.apache.sshd:sshd-core:0.6.0]
> to[lib/sshd-core-0.6.0.jar]
> [INFO] Copying artifact[jar:commons-lang:commons-lang:2.4]
> to[lib/commons-lang-2.4.jar]
> [INFO] Copying artifact[jar:org.apache.xmlrpc:xmlrpc-client:3.1.3]
> to[lib/xmlrpc-client-3.1.3.jar]
> [INFO] Copying artifact[jar:org.apache.xmlrpc:xmlrpc-common:3.1.3]
> to[lib/xmlrpc-common-3.1.3.jar]
> [INFO] Copying
> artifact[jar:org.apache.ws.commons.util:ws-commons-util:1.0.2]
> to[lib/ws-commons-util-1.0.2.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-jdbc:2.5.6.SEC02]
> to[lib/spring-jdbc-2.5.6.SEC02.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-tx:2.5.6.SEC02]
> to[lib/spring-tx-2.5.6.SEC02.jar]
> [INFO] Copying
> artifact[jar:org.springframework.ldap:spring-ldap-core:1.3.0.RELEASE]
> to[lib/spring-ldap-core-1.3.0.RELEASE.jar]
> [INFO] Copying
> artifact[jar:commons-httpclient:commons-httpclient:3.1]
> to[lib/commons-httpclient-3.1.jar]
> [INFO] Copying artifact[jar:org.quartz-scheduler:quartz:2.1.2]
> to[lib/quartz-2.1.2.jar]
> [INFO] Copying artifact[jar:c3p0:c3p0:0.9.1.1]
> to[lib/c3p0-0.9.1.1.jar]
> [INFO] Copying
> artifact[jar:org.ovirt.engine.core:searchbackend:3.1.0-0001]
> to[lib/searchbackend-3.1.0-0001.jar]
> [INFO] Copying
> artifact[jar:commons-collections:commons-collections:3.1]
> to[lib/commons-collections-3.1.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-core:2.5.6.SEC02]
> to[lib/spring-core-2.5.6.SEC02.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-beans:2.5.6.SEC02]
> to[lib/spring-beans-2.5.6.SEC02.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-context:2.5.6.SEC02]
> to[lib/spring-context-2.5.6.SEC02.jar]
> [INFO] Copying artifact[jar:aopalliance:aopalliance:1.0]
> to[lib/aopalliance-1.0.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-agent:2.5.6.SEC02]
> to[lib/spring-agent-2.5.6.SEC02.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-aop:2.5.6.SEC02]
> to[lib/spring-aop-2.5.6.SEC02.jar]
> [INFO] Copy ear sources to
> /ephemeral0/ovirt_engine_find_bugs/ear/target/engine
> [INFO] Could not find manifest file:
> /ephemeral0/ovirt_engine_find_bugs/ear/target/engine/META-INF/MANIFEST.MF
> - Generating one
> [INFO] Building jar:
> /ephemeral0/ovirt_engine_find_bugs/ear/target/engine.ear
> [INFO]
> [INFO] --- maven-dependency-plugin:2.1:copy (copy-quartz-jar) @
> engine-server-ear ---
> [INFO] Configured Artifact: org.quartz-scheduler:quartz:2.1.2:jar
> [INFO] Copying quartz-2.1.2.jar to
> /ephemeral0/ovirt_engine_find_bugs/ear/target/quartz/quartz-2.1.2.jar
> [INFO]
> [INFO] --- maven-install-plugin:2.3.1:install (default-install) @
> engine-server-ear ---
> [INFO] Installing
> /ephemeral0/ovirt_engine_find_bugs/ear/target/engine.ear to
> /home/jenkins/workspace/ovirt_engine_find_bugs/.repository/org/ovirt/engine/engine-server-ear/3.1.0-0001/engine-server-ear-3.1.0-0001.ear
> [INFO] Installing /ephemeral0/ovirt_engine_find_bugs/ear/pom.xml to
> /home/jenkins/workspace/ovirt_engine_find_bugs/.repository/org/ovirt/engine/engine-server-ear/3.1.0-0001/engine-server-ear-3.1.0-0001.pom
> [INFO]
> [INFO] --- findbugs-maven-plugin:2.4.0:findbugs (default-cli) @
> engine-server-ear ---
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] oVirt Engine Root Project ......................... SUCCESS
> [11.175s]
> [INFO] oVirt Build Tools root ............................ SUCCESS
> [0.154s]
> [INFO] oVirt checkstyle .................................. SUCCESS
> [2.925s]
> [INFO] oVirt Checkstyle Checks ........................... SUCCESS
> [32.541s]
> [INFO] oVirt Modules - backend ........................... SUCCESS
> [0.137s]
> [INFO] oVirt Manager ..................................... SUCCESS
> [0.633s]
> [INFO] oVirt Modules - manager ........................... SUCCESS
> [1.512s]
> [INFO] CSharp Compatibility .............................. SUCCESS
> [1:18.689s]
> [INFO] Encryption Libraries .............................. SUCCESS
> [42.599s]
> [INFO] oVirt Tools ....................................... SUCCESS
> [0.205s]
> [INFO] oVirt Tools Common Library ........................ SUCCESS
> [25.939s]
> [INFO] Common Code ....................................... SUCCESS
> [2:09.368s]
> [INFO] Common utilities .................................. SUCCESS
> [1:43.075s]
> [INFO] Data Access Layer ................................. SUCCESS
> [1:39.624s]
> [INFO] engine beans ...................................... SUCCESS
> [0.237s]
> [INFO] engine scheduler bean ............................. SUCCESS
> [40.875s]
> [INFO] Vds broker ........................................ SUCCESS
> [1:44.474s]
> [INFO] Search Backend .................................... SUCCESS
> [59.374s]
> [INFO] Backend Logic @Service bean ....................... SUCCESS
> [2:17.939s]
> [INFO] oVirt RESTful API Backend Integration ............. SUCCESS
> [0.154s]
> [INFO] oVirt RESTful API interface ....................... SUCCESS
> [0.315s]
> [INFO] oVirt Engine API Definition ....................... SUCCESS
> [1:32.846s]
> [INFO] oVirt Engine API Commom Parent POM ................ SUCCESS
> [0.328s]
> [INFO] oVirt Engine API Common JAX-RS .................... SUCCESS
> [58.151s]
> [INFO] oVirt RESTful API Backend Integration Type Mappers SUCCESS
> [1:29.592s]
> [INFO] oVirt RESTful API Backend Integration JAX-RS Resources
> SUCCESS [1:34.159s]
> [INFO] oVirt RESTful API Backend Integration Webapp ...... SUCCESS
> [12.297s]
> [INFO] oVirt Engine Web Root ............................. SUCCESS
> [33.235s]
> [INFO] oVirt Configuration Tool .......................... SUCCESS
> [46.202s]
> [INFO] Notifier Service package .......................... SUCCESS
> [0.143s]
> [INFO] Notifier Service .................................. SUCCESS
> [56.794s]
> [INFO] Notifier Service Resources ........................ SUCCESS
> [9.712s]
> [INFO] oVirt Modules - frontend .......................... SUCCESS
> [3.064s]
> [INFO] oVirt APIs ........................................ SUCCESS
> [1.472s]
> [INFO] oVirt generic API ................................. SUCCESS
> [32.572s]
> [INFO] oVirt Modules - webadmin .......................... SUCCESS
> [0.146s]
> [INFO] oVirt Modules - ui ................................ SUCCESS
> [0.250s]
> [INFO] Extensions for GWT ................................ SUCCESS
> [1:17.416s]
> [INFO] UI Utils Compatibility (for UICommon) ............. SUCCESS
> [47.857s]
> [INFO] Frontend for GWT UI Projects ...................... SUCCESS
> [47.153s]
> [INFO] UICommon .......................................... SUCCESS
> [3:17.484s]
> [INFO] UICommonWeb ....................................... SUCCESS
> [3:41.508s]
> [INFO] oVirt GWT UI common infrastructure ................ SUCCESS
> [1:44.412s]
> [INFO] WebAdmin .......................................... SUCCESS
> [3:28.888s]
> [INFO] UserPortal ........................................ SUCCESS
> [2:12.619s]
> [INFO] oVirt WARs ........................................ SUCCESS
> [0.134s]
> [INFO] WPF Application Module ............................ SUCCESS
> [8.143s]
> [INFO] oVirt Web Application Module ...................... SUCCESS
> [32.504s]
> [INFO] Components Web Application Module ................. SUCCESS
> [6.230s]
> [INFO] oVirt Server EAR .................................. SUCCESS
> [17.227s]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 23:52.051s (Wall Clock)
> [INFO] Finished at: Tue May 22 05:10:14 EDT 2012
> [INFO] Final Memory: 302M/781M
> [INFO]
> ------------------------------------------------------------------------
> [FINDBUGS] Collecting findbugs analysis files...
> [FINDBUGS] Parsing 30 files in
> /home/jenkins/workspace/ovirt_engine_find_bugs
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/beans/scheduler/target/findbugsXml.xml
> of module with 1 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/beans/vdsbroker/target/findbugsXml.xml
> of module with 0 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/bll/target/findbugsXml.xml
> of module with 439 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/common/target/findbugsXml.xml
> of module with 335 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/compat/target/findbugsXml.xml
> of module with 71 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/dal/target/findbugsXml.xml
> of module with 27 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/engineencryptutils/target/findbugsXml.xml
> of module with 10 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/restapi/interface/common/jaxrs/target/findbugsXml.xml
> of module with 18 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/restapi/interface/definition/target/findbugsXml.xml
> of module with 1 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/restapi/jaxrs/target/findbugsXml.xml
> of module with 23 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/restapi/types/target/findbugsXml.xml
> of module with 10 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/root/target/findbugsXml.xml
> of module with 5 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/searchbackend/target/findbugsXml.xml
> of module with 13 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/utils/target/findbugsXml.xml
> of module with 122 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/vdsbroker/target/findbugsXml.xml
> of module with 238 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/tools/engine-config/target/findbugsXml.xml
> of module with 7 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/tools/engine-notifier/engine-notifier-service/target/findbugsXml.xml
> of module with 11 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/tools/engine-tools-common/target/findbugsXml.xml
> of module with 0 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/build-tools-root/ovirt-checkstyle-extension/target/findbugsXml.xml
> of module with 1 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/api/genericapi/target/findbugsXml.xml
> of module with 1 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/wars/rmw-war/target/findbugsXml.xml
> of module with 0 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/frontend/target/findbugsXml.xml
> of module with 21 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/gwt-common/target/findbugsXml.xml
> of module with 65 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/gwt-extension/target/findbugsXml.xml
> of module with 29 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/uicommon/target/findbugsXml.xml
> of module with 420 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/uicommonweb/target/findbugsXml.xml
> of module with 602 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/uicompat/target/findbugsXml.xml
> of module with 40 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal-gwtp/target/findbugsXml.xml
> of module with 14 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal/target/findbugsXml.xml
> of module with 118 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/target/findbugsXml.xml
> of module with 56 warnings.
> [FINDBUGS] Computing warning deltas based on reference build #912
> [FINDBUGS] Using set difference to compute new warnings
> Build step 'Publish FindBugs analysis results' changed build result
> to UNSTABLE
> Email was triggered for: Unstable
> Sending email for trigger: Unstable
>
>
[Engine-devel] Build and run with Fedora 17 jboss-as packages
by Juan Hernandez
Hello,
The changes required to build and run the engine using the jboss-as
packages in Fedora 17 have been recently merged:
http://gerrit.ovirt.org/4416
This means that from now on in order to install the RPMs you will need a
Fedora 17 machine.
In addition this also changes how the application server is used. We
have been running the engine as an application deployed to the default
instance of the application server. Starting now the engine will run a
private instance of the application server owned by the user ovirt ovirt
and managed by systemd, so please remember that you don't longer
need/should start the jboss-as service, but the ovirt-engine systemd
service:
systemctl start ovirt-engine.service
systemctl stop ovirt-engine.service
The locations of the files/directories used by this private instance of
the application server are also different:
1. You are probably familiar with the standalone.xml file. This is no
longer used, we use the /etc/ovirt-engine/engine-service.xml instead.
2. The location of the engine.ear file is still the same
(/usr/share/ovirt-engine/engine.ear), but the deployment marker file
engine.ear.dodeploy is not created in the same directory. Instead, a
symlink to the engine.ear file is created in the
/var/lib/ovirt-engine/deployments directory, and the engine.ear.dodeploy
file is created there by the start/stop script.
3. Locations of log files are also slightly different. The main log file
is still /var/log/ovirt-engine/engine.log, but the server.log file is no
longer in the jboss-as directory, but in /var/log/ovirt-engine as well.
In addition there is a /var/log/ovirt-engine/console.log file that
stores the standard and error output of the engine.
There are other changes, but probably less relevant to most of you.
I have made many tests, but I am sure that issues will appear, so please
keep an eye on this and let me know of any issues you encounter.
Regards,
Juan Hernandez
[Engine-devel] Shared Memory
by Amador Pahim
Hello,
Not sure if it was already discussed, but I'd like to talk about oVirt
"Shared Memory".
Webadmin shows the Shared Memory percentage in the host general tab
[1]. Initially, I thought that Shared Memory was the result of KSM
de-duplication. But compared with my KSM stats, it does not make sense.
My env:
3.5 GB virt-host.
6 identical VMs running with 1GB RAM each.
Webadmin host details:
Memory Sharing: Active
Shared Memory: 0%
KSM - how many shared pages are being used:
$ cat /sys/kernel/mm/ksm/pages_shared
109056
KSM - How many more sites are sharing them i.e. how much saved
$ cat /sys/kernel/mm/ksm/pages_sharing
560128
Converting to Mbytes:
$ echo $(( (109056 * $(getconf PAGE_SIZE)) / (1024 * 1024) ))
426
$ echo $(( ( 560128 * $(getconf PAGE_SIZE)) / (1024 * 1024) ))
2188
With those KSM results, I could expect something but 0 in "Shared Memory".
Tracing the origin of "Shared Memory" in the oVirt code, I realized it
comes from the memShared value (from the getVdsStats vdsm command), which
is provided in Mbytes:
$ vdsClient -s 192.168.10.250 getVdsStats | grep memShared
memShared = 9
Looking at the memShared function in vdsm, we have:
$VDSM_ROOT/vdsm/API.py
...
    stats['memShared'] = self._memShared() / Mbytes
...
    def _memShared(self):
        """
        Return an approximation of memory shared by VMs thanks to KSM.
        """
        shared = 0
        for v in self._cif.vmContainer.values():
            if v.conf['pid'] == '0':
                continue
            try:
                # third field of /proc/<pid>/statm is the process's
                # file-backed shared pages, converted here to bytes
                statmfile = file('/proc/' + v.conf['pid'] + '/statm')
                shared += int(statmfile.read().split()[2]) * PAGE_SIZE_BYTES
            except:
                pass
        return shared
...
memShared is calculated by adding up the shared-pages value (3rd field)
from the /proc/<VM_PID>/statm file of every running VM, converting to
bytes using the PAGE_SIZE value and transforming to Mbytes at the end.
Currently (it has changed along kernel history) this field of the statm
file is the count of pages instantiated in the process address space
which are shared with a file, including executable, library or shared
memory pages. Despite the vdsm code comment, KSM shared pages are not
accounted for here: KSM de-duplicates and shares memory pages without
process awareness.
The engine calculates the percentage against the total physical memory -
the memSize value from the getVdsCapabilities vdsm command:
$ vdsClient -s 192.168.10.250 getVdsCapabilities | grep memSize
memSize = 3574
Calculating the percent:
$ echo "scale=2; 9 * 100 / 3574" | bc
.25
So we have around 0.25%, rounded to 0%. "Shared Memory" is coherent,
but it does not reflect the real page de-duplication benefits. And
unsuspecting administrators - me included - are led to think that Shared
Memory is related to the KSM results.
IMO, memShared does not provide any representative information. On the
other hand, the missing KSM results are really important to oVirt
administrators, telling them how much memory they are overcommitted by
(for capacity management) and how much money oVirt is saving them in
memory.
In order to offer those KSM stats to the engine, I sent a patch [2]
(awaiting approval) adding "ksmShared" and "ksmSharing" values to the
vdsm getVdsStats command in a standard way, with key names that fit the
existing KSM ones (ksmCpu, ksmPages and ksmState).
Before patch:
$ vdsClient -s 192.168.10.250 getVdsStats | grep ksm
ksmCpu = 1 -> the ksmd process cpu load
ksmPages = 664 -> pages to scan before ksmd goes to sleep
ksmState = True -> is ksm running?
With the patch:
$ vdsClient -s 192.168.10.250 getVdsStats | grep ksm
ksmCpu = 1
ksmPages = 664
ksmShared = 426 -> how many Mbytes of memory are being shared
ksmSharing = 2188 -> how many more sites are sharing them,
i.e. how many Mbytes are being saved
ksmState = True
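For reference, here is roughly how such values can be derived from the
KSM sysfs counters (a simplified sketch of the approach, not the actual
patch - see [2] for that):

    # Sketch: convert a KSM page counter from sysfs into Mbytes.
    import os

    PAGE_SIZE = os.sysconf('SC_PAGE_SIZE')

    def ksm_mbytes(counter):
        # counter is 'pages_shared' or 'pages_sharing'
        with open('/sys/kernel/mm/ksm/' + counter) as f:
            return int(f.read()) * PAGE_SIZE / (1024 * 1024)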
Finally, my questions:
1 - Is the memShared value (from /proc/PID/statm) significant enough to
be kept in the host details? If yes, why?
2 - What about adding the new vdsm ksmShared and ksmSharing stats to the
host details - and how (%, Mb, ...)?
Sorry about the long story. I look forward to hearing your comments.
All the best,
--
Pahim
[1] -
http://www.pahim.org/wp-content/uploads/2012/05/Screenshot-oVirt-Enterpri...
[2] - http://gerrit.ovirt.org/4755
[Engine-devel] Adding atomic restore snapshot command at backend
by Michael Pasternak
Currently 'restore snapshot' is done in two steps on the client side:
1. TryBackToAllSnapshotsOfVm
2. RestoreAllSnapshots
This implementation creates a race condition after step 1 and is
therefore unstable and bug-prone; I suggest refactoring 2 to include 1
as a single atomic operation at the backend.
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
[Engine-devel] Using 3.1.0 instead of 3.1.0-0001 for next release?
by Juan Hernandez
Hello,
I think we have the opportunity now to clean the version number and use
3.1.0 instead of 3.1.0-0001 for the next release. I submitted the
corresponding change to gerrit for review:
http://gerrit.ovirt.org/4914
As far as I can tell there are no issues introduced by this change and
it allows a cleaner versioning schema for the RPM packages.
Please let me know if you foresee any issues.
Regards,
Juan Hernandez
[Engine-devel] Enabling guest memory balloon device
by Doron Fediuck
Hi All,
The following wiki page has a design for enabling the balloon device,
which is currently disabled in engine setups. Other than enabling the
device, this is also a step forward on the path to vdsm and MoM
sub-project integration.
More details can be found here:
http://www.ovirt.org/wiki/Features/Design/memory-balloon
P.S.
UI mockups should be updated soon.
--
/d
“Funny,” he intoned funereally, “how just when you think life can't possibly get any worse it suddenly does.” --Douglas Adams, The Hitchhiker's Guide to the Galaxy