Devel
Hello,
I'm starting to get to know the engine code. I picked a small, non-standardized
behaviour to follow through the development process. I have a patch and I'd
like to know whether you feel it is relevant to fix this issue:
- Description: When adding a LOCAL storage domain [1], webadmin does not validate
the path against a regex and sends the invalid path (with a trailing slash) to vdsm
[2] [3]. When adding an NFS storage domain, however, the path is validated before
contacting vdsm [4], avoiding extra vdsm processing and quickly and clearly
informing the user about what's wrong.
- Expected result: The same behaviour for NFS and LOCALFS storage path
validation: validate the LOCALFS path in webadmin before sending it to vdsm [5].
- Newbie doubt: Wouldn't it be better to validate both the local and NFS
paths on the backend, covering all user interfaces/APIs?
[1] -
https://picasaweb.google.com/lh/photo/FWNiou2Y12GZO3AjfCH6K7QAv8cs6edaj3fEc…
[2] -
https://picasaweb.google.com/lh/photo/Pof6Z8ohgQAkRTDpEJKG-LQAv8cs6edaj3fEc…
[3] - https://gist.github.com/2762656
[4] -
https://picasaweb.google.com/lh/photo/Fd3zWegWE0T5C2tDo_tPZrQAv8cs6edaj3fEc…
[5] -
https://picasaweb.google.com/lh/photo/PgzYrZHkkvm-WtFk_UFZLrQAv8cs6edaj3fEc…
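To illustrate the kind of check described above, here is a minimal sketch in Python; the regex
and the function name are assumptions made for illustration (the real webadmin validation would
live in the engine's GWT/Java frontend), not the engine's actual validator:

    import re

    # Illustrative pattern only: an absolute Linux path with no trailing slash,
    # roughly what a LOCALFS mount-point validator might enforce.
    LOCAL_PATH_RE = re.compile(r'^/(?:[^/\0]+/)*[^/\0]+$')

    def is_valid_local_path(path):
        """Return True if `path` looks like an acceptable LOCALFS mount point."""
        return bool(LOCAL_PATH_RE.match(path))

    assert is_valid_local_path('/mnt/local-storage')        # accepted
    assert not is_valid_local_path('/mnt/local-storage/')   # trailing slash rejected

Rejecting the trailing slash up front (whether in webadmin or, per the "newbie doubt", in the
backend so that every UI/API benefits) saves the round trip to vdsm described in [2] and [3].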
I look forward to hearing your comments.
Best Regards,
--
Pahim
Re: [Engine-devel] Fwd: [vdsm] RFC: Writeup on VDSM-libstoragemgmt integration
by Deepak C Shetty, 04 Jun '12
(For some reason I never received Adam's note, though I am subscribed to all
three lists Cc'ed here, which is strange!
Replying from the mail forwarded to me by my colleague; please see my
responses inline below. Thanks.)
>
>
> ---------- Forwarded message ----------
> From: *Adam Litke* <agl(a)us.ibm.com>
> Date: Thu, May 31, 2012 at 7:31 PM
> Subject: Re: [vdsm] RFC: Writeup on VDSM-libstoragemgmt integration
> To: Deepak C Shetty <deepakcs(a)linux.vnet.ibm.com>
> Cc: libstoragemgmt-devel(a)lists.sourceforge.net,
> engine-devel(a)ovirt.org, VDSM Project
> Development <vdsm-devel(a)lists.fedorahosted.org>
>
>
> On Wed, May 30, 2012 at 03:08:46PM +0530, Deepak C Shetty wrote:
> > Hello All,
> >
> > I have a draft write-up on the VDSM-libstoragemgmt integration.
> > I wanted to run this thru' the mailing list(s) to help tune and
> > crystallize it, before putting it on the ovirt wiki.
> > I have run this once thru Ayal and Tony, so have some of their
> > comments incorporated.
> >
> > I still have few doubts/questions, which I have posted below with
> > lines ending with '?'
> >
> > Comments / Suggestions are welcome & appreciated.
> >
> > thanx,
> > deepak
> >
> > [Ccing engine-devel and libstoragemgmt lists as this stuff is
> > relevant to them too]
> >
> >
> --------------------------------------------------------------------------------------------------------------
> >
> > 1) Background:
> >
> > VDSM provides high level API for node virtualization management. It
> > acts in response to the requests sent by oVirt Engine, which uses
> > VDSM to do all node virtualization related tasks, including but not
> > limited to storage management.
> >
> > libstoragemgmt aims to provide vendor agnostic API for managing
> > external storage array. It should help system administrators
> > utilizing open source solutions have a way to programmatically
> > manage their storage hardware in a vendor neutral way. It also aims
> > to facilitate management automation, ease of use and take advantage
> > of storage vendor supported features which improve storage
> > performance and space utilization.
> >
> > Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/
> >
> > libstoragemgmt (LSM) today supports C and python plugins for talking
> > to external storage array using SMI-S as well as native interfaces
> > (eg: netapp plugin )
> > Plan is to grow the SMI-S interface as needed over time and add more
> > vendor specific plugins for exploiting features not possible via
> > SMI-S or have better alternatives than using SMI-S.
> > For eg: Many of the copy offload features require to use vendor
> > specific commands, which justifies the need for a vendor specific
> > plugin.
> >
> >
> > 2) Goals:
> >
> > 2a) Ability to plugin external storage array into oVirt/VDSM
> > virtualization stack, in a vendor neutral way.
> >
> > 2b) Ability to list features/capabilities and other statistical
> > info of the array
> >
> > 2c) Ability to utilize the storage array offload capabilities
> > from oVirt/VDSM.
> >
> >
> > 3) Details:
> >
> > LSM will sit as a new repository engine in VDSM.
> > VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192
> >
> > Current plan is to have LSM co-exist with VDSM on the virtualization
> nodes.
> >
> > *Note : 'storage' used below is generic. It can be a file/nfs-export
> > for NAS targets and LUN/logical-drive for SAN targets.
> >
> > VDSM can use LSM and do the following...
> > - Provision storage
> > - Consume storage
> >
> > 3.1) Provisioning Storage using LSM
> >
> > Typically this will be done by a Storage administrator.
> >
> > oVirt/VDSM should provide storage admin the
> > - ability to list the different storage arrays along with their
> > types (NAS/SAN), capabilities, free/used space.
> > - ability to provision storage using any of the array
> > capabilities (eg: thin provisioned lun or new NFS export )
> > - ability to manage the provisioned storage (eg: resize/delete
> storage)
>
> I guess vdsm will need to model a new type of object (perhaps
> StorageTarget) to
> be used for performing the above provisioning operations. Then, to
> consume the
> provisioned storage, we could create a StorageConnectionRef by passing
> in a
> StorageTarget object and some additional parameters. Sound about right?
Sounds right to me, but I am not an expert in the VDSM object model;
Saggi/Ayal/Dan can provide more input here. The (proposed) storage array entity
in oVirt Engine can use this vdsm object to communicate and work with the
storage array when doing the provisioning work.
Going ahead with the change to the new Image Repository, I was envisioning
that LSM, when integrated as a new repo engine, will exhibit "Storage
Provisioning" as an implicit feature/capability; only then will it be picked
up by the StorageTarget, otherwise not.
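A rough sketch, in Python, of the object model being discussed; apart from the names
StorageTarget and StorageConnectionRef taken from Adam's suggestion, every field and method
below is invented for illustration and is not VDSM's actual API:

    from dataclasses import dataclass, field

    @dataclass
    class StorageTarget:
        """A provisionable array/pool as seen through libstoragemgmt (sketch)."""
        array_id: str
        target_type: str                        # e.g. 'SAN' or 'NAS'
        capabilities: set = field(default_factory=set)

        def provision_lun(self, name, size_bytes, hosts=()):
            # A real implementation would call into the LSM plugin to create the
            # LUN and do the mapping/zoning needed to expose it to `hosts`.
            return {'array': self.array_id, 'lun': name,
                    'size': size_bytes, 'hosts': list(hosts)}

    @dataclass
    class StorageConnectionRef:
        """Handle used to consume storage provisioned from a StorageTarget (sketch)."""
        target: StorageTarget
        params: dict                            # connection details (path, portal, ...)

Provisioning (section 3.1) would then go through the StorageTarget, and consumption
(section 3.2) through a StorageConnectionRef built from it, matching the split Adam describes.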
>
> > Once the storage is provisioned by the storage admin, VDSM will have
> > to refresh the host(s) for them to be able to see the newly
> > provisioned storage.
>
> How would this refresh affect currently connected storage and running VMs?
I am not too sure about this... looking for more info from the experts
here. Per Ayal, getDeviceInfo should help refresh, but by 'affect' are you
referring to what happens if, post refresh, the device IDs and/or names of the
existing storage on the host change? What exactly is the concern here?
>
> > 3.1.1) Potential flows:
> >
> > Mgmt -> vdsm -> lsm: create LUN + LUN Mapping / Zoning / whatever is
> > needed to make LUN available to list of hosts passed by mgmt
> > Mgmt -> vdsm: getDeviceList (refreshes host and gets list of devices)
> > Repeat above for all relevant hosts (depending on list passed
> > earlier, mostly relevant when extending an existing VG)
> > Mgmt -> use LUN in normal flows.
> >
> >
> > 3.1.2) How oVirt Engine will know which LSM to use ?
> >
> > Normally the way this works today is that user can choose the host
> > to use (default today is SPM), however there are a few flows where
> > mgmt will know which host to use:
> > 1. extend storage domain (add LUN to existing VG) - Use SPM and make
> > sure *all* hosts that need access to this SD can see the new LUN
> > 2. attach new LUN to a VM which is pinned to a specific host - use
> this host
> > 3. attach new LUN to a VM which is not pinned - use a host from the
> > cluster the VM belongs to and make sure all nodes in cluster can see
> > the new LUN
>
> You are still going to need to worry about locking the shared storage
> resource.
> Will libstoragemgmt have storage clustering support baked in or will
> we continue
> to rely on SPM? If the latter is true, most/all of these operations
> would still
> need to be done by SPM if I understand correctly.
The above scenarios were noted by me on behalf of Ayal.
I don't think LSM will worry about storage clustering; we are just using
LSM to 'talk' with the storage array. I am not sure we need locking for the
above scenarios. We are just ensuring that the newly provisioned LUN is
visible to the relevant hosts, so I'm not sure why we would need locking.
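For what it's worth, the 3.1.1 flow reads roughly like the sketch below; create_lun and
get_device_list are placeholders standing in for the eventual LSM call and VDSM's
getDeviceList verb, not real signatures:

    def provision_and_expose(lsm, hosts, lun_name, size_bytes):
        """Sketch of: Mgmt -> vdsm -> lsm: create LUN, then refresh relevant hosts."""
        # 1. Create the LUN plus whatever mapping/zoning is needed so the
        #    hosts passed by management can see it.
        lun = lsm.create_lun(lun_name, size_bytes, hosts=hosts)

        # 2. Refresh each relevant host (mostly relevant when extending an
        #    existing VG) and check that the new device shows up.
        for host in hosts:
            devices = host.get_device_list()
            if lun['id'] not in {d['id'] for d in devices}:
                raise RuntimeError('LUN %s not visible on %s' % (lun['id'], host))

        # 3. Management can now use the LUN in the normal flows.
        return lun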
>
> > Flows for which there is no clear candidate (Maybe we can use the
> > SPM host itself which is the default ?)
> > 1. create a new disk without attaching it to any VM
> > 2. create a LUN for a new storage domain
>
> Yes, SPM would seem correct to me.
>
> > 3.2) Consuming storage using LSM
> >
> > Typically this will be done by a virtualization administrator
> >
> > oVirt/VDSM should allow virtualization admin to
> > - Create a new storage domain using the storage on the array.
> > - Be able to specify whether VDSM should use the storage offload
> > capability (default) or override it to use its own internal logic.
>
> If vdsm can make the right decisions, I would prefer that vdsm decides
> when to use
> hardware offload and when to use software algorithms without administrator
> intervention. It's another case where oVirt can provide value-add by
> simplifying the configuration and providing optimal performance.
Per Ayal, the thought was that in scenarios where we know the storage array
implementation is not optimal, we can override and tell VDSM to use its
internal logic rather than offload.
>
> > 4) VDSM potential changes:
> >
> > 4.1) How to represent a VM disk, 1 LUN = 1 VMdisk or 1 LV = 1 VMdisk
> > ? which bring another question...1 array == 1 storage domain OR 1
> > LUN/nfs-export on the array == 1 storage domain ?
>
> Saggi has mentioned some ideas on this topic so I will encourage him
> to explain
> his thoughts here.
Looking forward to Saggi's thoughts :)
>
> >
> > Pros & Cons of each...
> >
> > 1 array == 1 storage domain
> > - Each new vmdisk (aka volume) will be a new lun/file on the array.
> > - Easier to exploit offload capabilities, as they are available
> > at the LUN/File granularity
> > - Will there be any issues where there will be too many
> > LUNs/Files ... any maxluns limit on linux hosts that we might hit ?
> > -- VDSM has been tested with 1K LUNs and it worked fine - ayal
> > - Storage array limitations on the number of LUNs can be a
> > downside here.
> > - Would it be ok to share the array for hosting another storage
> > domain if need be ?
> > -- Provided the existing domain is not utilising all of the
> > free space
> > -- We can create new LUNs and hand it over to anyone needed ?
> > -- Changes needed in VDSM to work with raw LUNs, today it only
> > has support for consuming LUNs via VG/LV.
> >
> > 1 LUN/nfs-export on the array == 1 storage domain
> > - How to represent a new vmdisk (aka vdsm volume) if its a LUN
> > provisioned using SAN target ?
> > -- Will it be VG/LV as is done today for block domains ?
> > -- If yes, then it will be difficult to exploit offload
> > capabilities, as they are at LUN level, not at LV level.
> > - Each new vmdisk will be a new file on the nfs-export, assuming
> > offload capability is available at the file level, so this should
> > work for NAS targets ?
> > - Can use the storage array for hosting multiple storage domains.
> > -- Provision one more LUN and use it for another storage
> > domain if need be.
> > - VDSM already supports this today, as part of block storage
> > domains for LUNs case.
> >
> > Note that we will allow user to do either one of the two options
> > above, depending on need.
> >
> > 4.2) Storage domain metadata will also include the
> > features/capabilities of the storage array as reported by LSM.
> > - Capabilities (taken via LSM) will be stored in the domain
> > metadata during storage domain create flow.
> > - Need changes in oVirt engine as well ( see 'oVirt Engine
> > potential changes' section below )
>
> Do we want to store the exact hw capabilities or some set of vdsm
> chosen feature
> bits that are set at create time based on the discovered hw
> capabilities? The
> difference would be that vdsm could choose which features to enable at
> create
> time and update those features later if needed.
IIUC, you are saying VDSM will only look for the capabilities which are of
interest to it and store those? That should be done by way of LSM returning
its capabilities as part of it being an Image Repo.
I am referring to how localFSRepo (def capabilities) is shown in the PoC
Saggi posted @
http://gerrit.ovirt.org/#change,192
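If it helps, Adam's "vdsm-chosen feature bits" idea could look something like the sketch
below; the feature names and the metadata key are invented for illustration:

    # Capabilities vdsm itself knows how to exploit; anything else the
    # array reports through LSM would simply be ignored.
    KNOWN_FEATURES = {'snapshot_offload', 'copy_offload', 'thin_provisioning'}

    def select_domain_features(lsm_capabilities, disabled=()):
        """Choose the feature bits to persist in the storage-domain metadata
        at create time (illustrative only)."""
        features = KNOWN_FEATURES & set(lsm_capabilities)
        features -= set(disabled)          # admin forced vdsm-internal logic
        return sorted(features)

    # e.g. during the storage domain create flow:
    #   metadata['ARRAY_FEATURES'] = ','.join(select_domain_features(caps))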
>
> > 4.3) VDSM to poll LSM for array capabilities on a regular basis ?
> > Per ayal:
> > - If we have a 'storage array' entity in oVirt Engine (see
> > 'oVirt Engine potential changes' section below ) then we can have a
> > 'refresh capabilities' button/verb.
> > - We can periodically query the storage array.
> > - Query LSM before running operations (sounds redundant to me,
> > but if it's cheap enough it could be simplest).
> >
> > Probably need a combination of 1+2 (query at very low frequency
> > - 1/hour or 1/day + refresh button)
>
> This problem can be aleviated by the abstraction I suggested above.
> Then, LSM
> can be queried only when we may want to adjust the policy connected with a
> particular storage target.
This is not clear to me; can you explain more?
LSM might need to be contacted to update the capabilities, because storage
admins can add/remove capabilities over time. Many storage arrays provide the
ability to enable/disable array features on demand.
>
> > 5) oVirt Engine potential changes - as described by ayal :
> >
> > - We will either need a new 'storage array' entity in engine to
> > keep credentials, or, in case of storage array as storage domain,
> > just keep this info as part of the domain at engine level.
> > - Have a 'storage array' entity in oVirt Engine to support
> > 'refresh capabilities' as a button/verb.
> > - When user during storage provisioning, selects a LUN exported
> > from a storage array (via LSM), the oVirt Engine would know from
> > then onwards that this LUN is being served via LSM.
> > It would then be able to query the capabilities of the LUN
> > and show it to the virt admin during storage consumption flow.
> >
> > 6) Potential flows:
> > - Create snapshot flow
> > -- VDSM will check the snapshot offload capability in the
> > domain metadata
> > -- If available, and override is not configured, it will use
> > LSM to offload LUN/File snapshot
> > -- If override is configured or capability is not available,
> > it will use its internal logic to create
> > snapshot (qcow2).
> >
> > - Copy/Clone vmdisk flow
> > -- VDSM will check the copy offload capability in the domain
> > metadata
> > -- If available, and override is not configured, it will use
> > LSM to offload LUN/File copy
> > -- If override is configured or capability is not available,
> > it will use its internal logic to create
> > snapshot (eg: dd cmd in case of LUN).
> >
> > 7) LSM potential changes:
> >
> > - list features/capabilities of the array. Eg: copy offload,
> > thin prov. etc.
> > - list containers (aka pools) (present in LSM today)
> > - Ability to list different types of arrays being managed, their
> > capabilities and used/free space
> > - Ability to create/list/delete/resize volumes ( LUN or exports,
> > available in LSM as of today)
> > - Get monitoring info with object (LUN/snapshot/volume) as
> > optional parameter for specific info. eg: container/pool free/used
> > space, raid type etc.
> >
> > Need to make sure above info is listed in a coherent way across
> > arrays (number of LUNs, raid type used? free/total per
> > container/pool, per LUN?. Also need I/O statistics wherever
> > possible.
I forgot to add this in the original mail... adding it now.
8) Concerns/Issues
- Per Tony of libstoragemgmt
-- Some additional things to consider.
-- Some of the array vendors may not allow multiple points of control at
the same time. e.g. you may not be able to have 2 or more nodes running
libStorageMgmt at the same time talking to the same array. NetApp
limits what things can be done concurrently.
-- LibStorageMgmt currently just provides the bits to control external
storage arrays. The plug-in daemon and the plug-ins themselves execute
unprivileged.
- How will the change from SPM to SDM affect the above discussions?
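To make the section 6 flows concrete, the offload-or-internal decision could be sketched
as below; the metadata key and the two callables are placeholders, not vdsm code:

    def create_snapshot(domain_metadata, volume, offload_fn, internal_fn,
                        override=False):
        """Sketch of the 'create snapshot' flow from section 6.

        offload_fn would wrap the libstoragemgmt snapshot call and internal_fn
        vdsm's own qcow2 logic; both are placeholders here.
        """
        features = domain_metadata.get('ARRAY_FEATURES', ())
        if 'snapshot_offload' in features and not override:
            return offload_fn(volume)      # offload LUN/file snapshot to the array
        return internal_fn(volume)         # fall back to internal qcow2 snapshot

The copy/clone vmdisk flow would follow the same pattern, keyed on a copy-offload bit and
falling back to vdsm's internal logic (e.g. dd for a LUN).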
The following is a new meeting request:
Subject: oVirt - quantum integration
Organiser: "Livnat Peer" <lpeer(a)redhat.com>
Time: Wednesday, 6 June, 2012, 4:00:00 PM - 5:00:00 PM GMT +02:00 Jerusalem
Invitees: engine-devel(a)ovirt.org; arch(a)ovirt.com
*~*~*~*~*~*~*~*~*~*
Hi All,
We'll have a discussion on oVirt-Quantum integration.
In the meeting we'll discuss -
http://www.ovirt.org/wiki/Quantum_and_oVirt
Thanks, Livnat
Bridge ID: 972506565679
Dial-in information:
Reservationless-Plus Toll Free Dial-In Number (US & Canada): (800) 451-8679
Reservationless-Plus International Dial-In Number: (212) 729-5016
Conference code: 972506565679
Global Access Numbers Local:
Australia, Sydney Dial-In #: 0289852326
Austria, Vienna Dial-In #: 012534978196
Belgium, Brussels Dial-In #: 027920405
China Dial-In #: 4006205013
Denmark, Copenhagen Dial-In #: 32729215
Finland, Helsinki Dial-In #: 0923194436
France, Paris Dial-In #: 0170377140
Germany, Berlin Dial-In #: 030300190579
Ireland, Dublin Dial-In #: 014367793
Italy, Milan Dial-In #: 0236269529
Netherlands, Amsterdam Dial-In #: 0207975872
Norway, Oslo Dial-In #: 21033188
Singapore Dial-In #: 64840858
Spain, Barcelona Dial-In #: 935452328
Sweden, Stockholm Dial-In #: 0850513770
Switzerland, Geneva Dial-In #: 0225927881
United Kingdom Dial-In #: 02078970515
United Kingdom Dial-In #: 08445790676
United Kingdom, LocalCall Dial-In #: 08445790678
United States Dial-In #: 2127295016
Global Access Numbers Tollfree:
Argentina Dial-In #: 8004441016
Australia Dial-In #: 1800337169
Austria Dial-In #: 0800005898
Bahamas Dial-In #: 18002054776
Bahrain Dial-In #: 80004377
Belgium Dial-In #: 080048325
Brazil Dial-In #: 08008921002
Bulgaria Dial-In #: 008001100236
Chile Dial-In #: 800370228
Colombia Dial-In #: 018009134033
Costa Rica Dial-In #: 08000131048
Cyprus Dial-In #: 80095297
Czech Republic Dial-In #: 800700318
Denmark Dial-In #: 80887114
Dominican Republic Dial-In #: 18887512313
Estonia Dial-In #: 8000100232
Finland Dial-In #: 0800117116
France Dial-In #: 0805632867
Germany Dial-In #: 8006647541
Greece Dial-In #: 00800127562
Hong Kong Dial-In #: 800930349
Hungary Dial-In #: 0680016796
Iceland Dial-In #: 8008967
India Dial-In #: 0008006501533
Indonesia Dial-In #: 0018030179162
Ireland Dial-In #: 1800932401
Israel Dial-In #: 1809462557
Italy Dial-In #: 800985897
Jamaica Dial-In #: 18002050328
Japan Dial-In #: 0120934453
Korea (South) Dial-In #: 007986517393
Latvia Dial-In #: 80003339
Lithuania Dial-In #: 880030479
Luxembourg Dial-In #: 80026595
Malaysia Dial-In #: 1800814451
Mexico Dial-In #: 0018664590915
New Zealand Dial-In #: 0800888167
Norway Dial-In #: 80012994
Panama Dial-In #: 008002269184
Philippines Dial-In #: 180011100991
Poland Dial-In #: 008001210187
Portugal Dial-In #: 800814625
Russian Federation Dial-In #: 81080028341012
Saint Kitts and Nevis Dial-In #: 18002059252
Singapore Dial-In #: 8006162235
Slovak Republic Dial-In #: 0800001441
South Africa Dial-In #: 0800981148
Spain Dial-In #: 800300524
Sweden Dial-In #: 200896860
Switzerland Dial-In #: 800650077
Taiwan Dial-In #: 00801127141
Thailand Dial-In #: 001800656966
Trinidad and Tobago Dial-In #: 18002024615
United Arab Emirates Dial-In #: 8000650591
United Kingdom Dial-In #: 08006948057
United States Dial-In #: 8004518679
Uruguay Dial-In #: 00040190315
Venezuela Dial-In #: 08001627182
On 03/06/12 23:04, Itamar Heim wrote:
> fyi - gerrit has been upgraded to version 2.3.
> _______________________________________________
> Infra mailing list
> Infra(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
Adding the devel list.
The latest Gerrit introduces the concept of _draft_ branches, which should help all [WIP] users.
From the notes:
" Also adds magic branches refs/drafts/ and refs/publish/ that will handle whether or not a patchset is a draft or goes straight to review. refs/for/ should be deprecated in favor of explicitly marking a patchset as a draft or directly to review. "
I strongly recommend that everyone take a look here:
http://gerrit-documentation.googlecode.com/svn/ReleaseNotes/ReleaseNotes-2.…
(as well as the rest of the new features and updates).
--
/d
"Willyoupleasehelpmefixmykeyboard?Thespacebarisbroken!"
2
1
Hi all,
As discussed last month [1], we had to deal with some issues that turned out to be a Maven bug.
Thanks to Juan and Asaf's work, our current sources now build properly using Maven 3.
So you're all invited to migrate to Maven 3. Other than upgrading your local Maven package,
no other action is needed.
For now, Maven 2 will keep working for you, but I expect that in the future we will want to make use
of some advanced features, so migrating to 3 is recommended.
Speaking of advanced features, one interesting area is parallel builds [2]. Feedback from anyone
who tries them out and reports whether they improve run time without breaking anything
will be appreciated.
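For example, assuming you already have Maven 3 installed, a parallel build can be requested from
the command line like this (the thread counts below are just examples; see [2] for details):
$ mvn -T 4 clean install      # build with a fixed pool of 4 threads
$ mvn -T 1C clean install     # or with one thread per available CPU core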
Happy migration!
[1] http://lists.ovirt.org/pipermail/arch/2012-April/000490.html
[2] https://cwiki.apache.org/MAVEN/parallel-builds-in-maven-3.html
--
/d
"Email returned to sender -- insufficient voltage."
Re: [Engine-devel] [oVirt Jenkins] ovirt_engine_find_bugs - Build # 915 - Still Unstable!
by Eyal Edri 01 Jun '12
FYI,
This set of patches introduced new HIGH FindBugs warnings:
http://jenkins.ovirt.org/job/ovirt_engine_find_bugs/913/changes:
webadmin: Gluster Volume Options - populating from help xml (details)
webadmin: Gluster Brick sub tab - removing columns (details)
webadmin: Support for mode specific Tabs and Sub Tabs (details)
restapi: Gluster Resources Implementation classes (details)
restapi: RSDL metadata for gluster related REST api (details)
restapi: Gluster Volumes Collection implementation (details)
engine: Add ID fields to gluster brick and option (details)
webadmin: Gluster Volume - add bricks enabling (#823284) (details)
webadmin: Gluster Volume - upadting actions (#823273) (details)
webadmin: Gluster Volume - validations fixed (#823277) (details)
The bugs appear to be in GlusterVolumeEntity.java:
http://jenkins.ovirt.org/job/ovirt_engine_find_bugs/913/findbugsResult/HIGH…
http://jenkins.ovirt.org/job/ovirt_engine_find_bugs/913/findbugsResult/HIGH…
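If you want to reproduce the analysis locally before pushing a fix, the same FindBugs goal that
Jenkins runs (findbugs-maven-plugin, default-cli) can be invoked from the command line, for example
(only a suggestion; the exact options may differ from the Jenkins job configuration):
$ mvn -DskipTests install findbugs:findbugs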
Please review and handle,
Eyal Edri
oVirt Infra Team
----- Original Message -----
> From: "Jenkins oVirt Server" <jenkins(a)ovirt.org>
> To: eedri(a)redhat.com, engine-patches(a)ovirt.org, oliel(a)redhat.com, yzaslavs(a)redhat.com, amureini(a)redhat.com,
> dfediuck(a)redhat.com
> Sent: Tuesday, May 22, 2012 12:11:33 PM
> Subject: [oVirt Jenkins] ovirt_engine_find_bugs - Build # 915 - Still Unstable!
>
> Project: http://jenkins.ovirt.org/job/ovirt_engine_find_bugs/
> Build: http://jenkins.ovirt.org/job/ovirt_engine_find_bugs/915/
> Build Number: 915
> Build Status: Still Unstable
> Triggered By: Started by upstream project "ovirt_engine" build number
> 1,240
>
> -------------------------------------
> Changes Since Last Success:
> -------------------------------------
> Changes for Build #913
> [gchaplik] webadmin: Gluster Volume Options - populating from help
> xml
>
> [gchaplik] webadmin: Gluster Brick sub tab - removing columns
>
> [gchaplik] webadmin: Support for mode specific Tabs and Sub Tabs
>
> [sanjal] restapi: Gluster Resources Implementation classes
>
> [sanjal] restapi: RSDL metadata for gluster related REST api
>
> [sanjal] restapi: Gluster Volumes Collection implementation
>
> [sanjal] engine: Add ID fields to gluster brick and option
>
> [gchaplik] webadmin: Gluster Volume - add bricks enabling (#823284)
>
> [gchaplik] webadmin: Gluster Volume - upadting actions (#823273)
>
> [gchaplik] webadmin: Gluster Volume - validations fixed (#823277)
>
>
> Changes for Build #914
> [emesika] core:dbfunctions.sh script needs to be compatible with DWH
>
> [mpastern] restapi: fix rsdl regression
>
>
> Changes for Build #915
> [dfediuck] core: Use same ids for artifacts and plugins
>
> [amureini] core: Allow admin permissions in user views
>
> [amureini] core: Roles commands cleanup
>
> [amureini] core: Cleanup Permissions Commands
>
> [amureini] core: Roles commands - use the cached getRole()
>
> [amureini] core: is_inheritable property to MLA entities
>
>
>
>
> -----------------
> Failed Tests:
> -----------------
> No tests ran.
>
> ------------------
> Build Log:
> ------------------
> [...truncated 4148 lines...]
> [INFO] Assembling webapp [userportal] in
> [/ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal-gwtp/target/userportal-3.1.0-0001]
> [INFO] Processing war project
> [INFO] Copying webapp resources
> [/ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal-gwtp/src/main/webapp]
> [INFO] Webapp assembled in [147 msecs]
> [INFO] Building war:
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal-gwtp/target/userportal-3.1.0-0001.war
> [INFO] WEB-INF/web.xml already added, skipping
> [INFO]
> [INFO] --- maven-install-plugin:2.3.1:install (default-install) @
> userportal ---
> [INFO] Installing
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal-gwtp/target/userportal-3.1.0-0001.war
> to
> /home/jenkins/workspace/ovirt_engine_find_bugs/.repository/org/ovirt/engine/ui/userportal/3.1.0-0001/userportal-3.1.0-0001.war
> [INFO] Installing
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal-gwtp/pom.xml
> to
> /home/jenkins/workspace/ovirt_engine_find_bugs/.repository/org/ovirt/engine/ui/userportal/3.1.0-0001/userportal-3.1.0-0001.pom
> [INFO]
> [INFO] --- findbugs-maven-plugin:2.4.0:findbugs (default-cli) @
> userportal ---
> [INFO] Fork Value is true
> [INFO]
> [INFO] --- maven-checkstyle-plugin:2.6:check (default) @ webadmin ---
> [INFO] Starting audit...
> Audit done.
>
> [INFO]
> [INFO] --- maven-resources-plugin:2.5:testResources
> (default-testResources) @ webadmin ---
> [debug] execute contextualize
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] skip non existing resourceDirectory
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/src/test/resources
> [INFO]
> [INFO] --- maven-compiler-plugin:2.3.2:testCompile
> (default-testCompile) @ webadmin ---
> [INFO] No sources to compile
> [INFO]
> [INFO] --- maven-surefire-plugin:2.10:test (default-test) @ webadmin
> ---
> [INFO] Tests are skipped.
> [INFO]
> [INFO] --- maven-war-plugin:2.1.1:war (default-war) @ webadmin ---
> [INFO] Packaging webapp
> [INFO] Assembling webapp [webadmin] in
> [/ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/target/webadmin-3.1.0-0001]
> [INFO] Processing war project
> [INFO] Copying webapp resources
> [/ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/src/main/webapp]
> [INFO] Webapp assembled in [147 msecs]
> OpenJDK 64-Bit Server VM warning: CodeCache is full. Compiler has
> been disabled.
> OpenJDK 64-Bit Server VM warning: Try increasing the code cache size
> using -XX:ReservedCodeCacheSize=
> [INFO] Building war:
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/target/webadmin-3.1.0-0001.war
> [INFO] WEB-INF/web.xml already added, skipping
> [INFO]
> [INFO] --- maven-install-plugin:2.3.1:install (default-install) @
> webadmin ---
> [INFO] Installing
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/target/webadmin-3.1.0-0001.war
> to
> /home/jenkins/workspace/ovirt_engine_find_bugs/.repository/org/ovirt/engine/ui/webadmin/3.1.0-0001/webadmin-3.1.0-0001.war
> [INFO] Installing
> /ephemeral0/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/pom.xml
> to
> /home/jenkins/workspace/ovirt_engine_find_bugs/.repository/org/ovirt/engine/ui/webadmin/3.1.0-0001/webadmin-3.1.0-0001.pom
> [INFO]
> [INFO] --- findbugs-maven-plugin:2.4.0:findbugs (default-cli) @
> webadmin ---
> [INFO] Fork Value is true
> [java] Warnings generated: 14
> [INFO] Done FindBugs Analysis....
> [java] Warnings generated: 56
> [INFO] Done FindBugs Analysis....
> [INFO]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Building oVirt Server EAR 3.1.0-0001
> [INFO]
> ------------------------------------------------------------------------
> [WARNING] The POM for
> org.codehaus.mojo:gwt-maven-plugin:jar:1.3.2.google is missing, no
> dependency information available
> [WARNING] Failed to retrieve plugin descriptor for
> org.codehaus.mojo:gwt-maven-plugin:1.3.2.google: Plugin
> org.codehaus.mojo:gwt-maven-plugin:1.3.2.google or one of its
> dependencies could not be resolved: Failed to read artifact
> descriptor for org.codehaus.mojo:gwt-maven-plugin:jar:1.3.2.google
> [WARNING]
> *****************************************************************
> [WARNING] * Your build is requesting parallel execution, but project
> *
> [WARNING] * contains the following plugin(s) that are not marked as
> *
> [WARNING] * @threadSafe to support parallel building.
> *
> [WARNING] * While this /may/ work fine, please look for plugin
> updates *
> [WARNING] * and/or request plugins be made thread-safe.
> *
> [WARNING] * If reporting an issue, report it against the plugin in
> *
> [WARNING] * question, not against maven-core
> *
> [WARNING]
> *****************************************************************
> [WARNING] The following plugins are not marked @threadSafe in oVirt
> Server EAR:
> [WARNING] org.apache.maven.plugins:maven-dependency-plugin:2.1
> [WARNING]
> *****************************************************************
> [INFO]
> [INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @
> engine-server-ear ---
> [INFO] Deleting /ephemeral0/ovirt_engine_find_bugs/ear/target
> [INFO]
> [INFO] --- maven-ear-plugin:2.6:generate-application-xml
> (default-generate-application-xml) @ engine-server-ear ---
> [INFO] Generating application.xml
> [INFO]
> [INFO] --- maven-resources-plugin:2.4.3:resources (default-resources)
> @ engine-server-ear ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] skip non existing resourceDirectory
> /ephemeral0/ovirt_engine_find_bugs/ear/src/main/java
> [INFO] skip non existing resourceDirectory
> /ephemeral0/ovirt_engine_find_bugs/ear/src/main/resources
> [INFO]
> [INFO] --- maven-ear-plugin:2.6:ear (default-ear) @ engine-server-ear
> ---
> [INFO] Copying artifact[jar:org.ovirt.engine.core:common:3.1.0-0001]
> to[lib/engine-common.jar]
> [INFO] Copying artifact[jar:org.ovirt.engine.core:compat:3.1.0-0001]
> to[lib/engine-compat.jar]
> [INFO] Copying artifact[jar:org.ovirt.engine.core:dal:3.1.0-0001]
> to[lib/engine-dal.jar]
> [INFO] Copying artifact[jar:org.ovirt.engine.core:utils:3.1.0-0001]
> to[lib/engine-utils.jar]
> [INFO] Copying
> artifact[jar:org.ovirt.engine.core:engineencryptutils:3.1.0-0001]
> to[lib/engine-encryptutils.jar]
> [INFO] Copying
> artifact[jar:org.ovirt.engine.core:vdsbroker:3.1.0-0001]
> to[lib/engine-vdsbroker.jar]
> [INFO] Copying
> artifact[war:org.ovirt.engine.core:root-war:3.1.0-0001] to[root.war]
> (unpacked)
> [INFO] Copying artifact[war:org.ovirt.engine.ui:rmw-war:3.1.0-0001]
> to[ovirtengineweb.war] (unpacked)
> [INFO] Copying artifact[war:org.ovirt.engine.ui:rm-war:3.1.0-0001]
> to[ovirtengine.war] (unpacked)
> [INFO] Copying
> artifact[war:org.ovirt.engine.ui:components-war:3.1.0-0001]
> to[components.war] (unpacked)
> [INFO] Copying
> artifact[war:org.ovirt.engine.api:restapi-webapp:3.1.0-0001]
> to[restapi.war] (unpacked)
> [INFO] Copying
> artifact[war:org.ovirt.engine.ui:userportal:3.1.0-0001]
> to[userportal.war] (unpacked)
> [INFO] Copying artifact[war:org.ovirt.engine.ui:webadmin:3.1.0-0001]
> to[webadmin.war] (unpacked)
> [INFO] Copying
> artifact[ejb:org.ovirt.engine.ui:genericapi:3.1.0-0001]
> to[engine-genericapi.jar] (unpacked)
> [INFO] Copying
> artifact[ejb:org.ovirt.engine.core:scheduler:3.1.0-0001]
> to[engine-scheduler.jar] (unpacked)
> [INFO] Copying artifact[ejb:org.ovirt.engine.core:bll:3.1.0-0001]
> to[engine-bll.jar] (unpacked)
> [INFO] Copying artifact[jar:commons-codec:commons-codec:1.4]
> to[lib/commons-codec-1.4.jar]
> [INFO] Copying
> artifact[jar:org.hibernate:hibernate-validator:4.0.2.GA]
> to[lib/hibernate-validator-4.0.2.GA.jar]
> [INFO] Copying artifact[jar:javax.validation:validation-api:1.0.0.GA]
> to[lib/validation-api-1.0.0.GA.jar]
> [INFO] Copying artifact[jar:org.slf4j:slf4j-api:1.5.6]
> to[lib/slf4j-api-1.5.6.jar]
> [INFO] Copying artifact[jar:javax.xml.bind:jaxb-api:2.1]
> to[lib/jaxb-api-2.1.jar]
> [INFO] Copying artifact[jar:javax.xml.stream:stax-api:1.0-2]
> to[lib/stax-api-1.0-2.jar]
> [INFO] Copying artifact[jar:javax.activation:activation:1.1]
> to[lib/activation-1.1.jar]
> [INFO] Copying artifact[jar:com.sun.xml.bind:jaxb-impl:2.1.3]
> to[lib/jaxb-impl-2.1.3.jar]
> [INFO] Copying
> artifact[jar:org.hibernate:hibernate-annotations:3.4.0.GA]
> to[lib/hibernate-annotations-3.4.0.GA.jar]
> [INFO] Copying artifact[jar:org.hibernate:ejb3-persistence:1.0.2.GA]
> to[lib/ejb3-persistence-1.0.2.GA.jar]
> [INFO] Copying
> artifact[jar:org.hibernate:hibernate-commons-annotations:3.1.0.GA]
> to[lib/hibernate-commons-annotations-3.1.0.GA.jar]
> [INFO] Copying artifact[jar:org.hibernate:hibernate-core:3.3.0.SP1]
> to[lib/hibernate-core-3.3.0.SP1.jar]
> [INFO] Copying artifact[jar:antlr:antlr:2.7.6]
> to[lib/antlr-2.7.6.jar]
> [INFO] Copying artifact[jar:dom4j:dom4j:1.6.1]
> to[lib/dom4j-1.6.1.jar]
> [INFO] Copying artifact[jar:xml-apis:xml-apis:1.0.b2]
> to[lib/xml-apis-1.0.b2.jar]
> [INFO] Copying
> artifact[jar:org.codehaus.jackson:jackson-mapper-asl:1.9.4]
> to[lib/jackson-mapper-asl-1.9.4.jar]
> [INFO] Copying
> artifact[jar:org.codehaus.jackson:jackson-core-asl:1.9.4]
> to[lib/jackson-core-asl-1.9.4.jar]
> [INFO] Copying
> artifact[jar:org.jboss.spec.javax.interceptor:jboss-interceptors-api_1.1_spec:1.0.0.Final]
> to[lib/jboss-interceptors-api_1.1_spec-1.0.0.Final.jar]
> [INFO] Copying
> artifact[jar:org.ovirt.engine.core:engine-tools-common:3.1.0-0001]
> to[lib/engine-tools-common-3.1.0-0001.jar]
> [INFO] Copying
> artifact[jar:commons-beanutils:commons-beanutils:1.8.2]
> to[lib/commons-beanutils-1.8.2.jar]
> [INFO] Copying artifact[jar:com.jcraft:jsch:0.1.42]
> to[lib/jsch-0.1.42.jar]
> [INFO] Copying artifact[jar:org.apache.mina:mina-core:2.0.1]
> to[lib/mina-core-2.0.1.jar]
> [INFO] Copying artifact[jar:org.apache.sshd:sshd-core:0.6.0]
> to[lib/sshd-core-0.6.0.jar]
> [INFO] Copying artifact[jar:commons-lang:commons-lang:2.4]
> to[lib/commons-lang-2.4.jar]
> [INFO] Copying artifact[jar:org.apache.xmlrpc:xmlrpc-client:3.1.3]
> to[lib/xmlrpc-client-3.1.3.jar]
> [INFO] Copying artifact[jar:org.apache.xmlrpc:xmlrpc-common:3.1.3]
> to[lib/xmlrpc-common-3.1.3.jar]
> [INFO] Copying
> artifact[jar:org.apache.ws.commons.util:ws-commons-util:1.0.2]
> to[lib/ws-commons-util-1.0.2.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-jdbc:2.5.6.SEC02]
> to[lib/spring-jdbc-2.5.6.SEC02.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-tx:2.5.6.SEC02]
> to[lib/spring-tx-2.5.6.SEC02.jar]
> [INFO] Copying
> artifact[jar:org.springframework.ldap:spring-ldap-core:1.3.0.RELEASE]
> to[lib/spring-ldap-core-1.3.0.RELEASE.jar]
> [INFO] Copying
> artifact[jar:commons-httpclient:commons-httpclient:3.1]
> to[lib/commons-httpclient-3.1.jar]
> [INFO] Copying artifact[jar:org.quartz-scheduler:quartz:2.1.2]
> to[lib/quartz-2.1.2.jar]
> [INFO] Copying artifact[jar:c3p0:c3p0:0.9.1.1]
> to[lib/c3p0-0.9.1.1.jar]
> [INFO] Copying
> artifact[jar:org.ovirt.engine.core:searchbackend:3.1.0-0001]
> to[lib/searchbackend-3.1.0-0001.jar]
> [INFO] Copying
> artifact[jar:commons-collections:commons-collections:3.1]
> to[lib/commons-collections-3.1.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-core:2.5.6.SEC02]
> to[lib/spring-core-2.5.6.SEC02.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-beans:2.5.6.SEC02]
> to[lib/spring-beans-2.5.6.SEC02.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-context:2.5.6.SEC02]
> to[lib/spring-context-2.5.6.SEC02.jar]
> [INFO] Copying artifact[jar:aopalliance:aopalliance:1.0]
> to[lib/aopalliance-1.0.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-agent:2.5.6.SEC02]
> to[lib/spring-agent-2.5.6.SEC02.jar]
> [INFO] Copying
> artifact[jar:org.springframework:spring-aop:2.5.6.SEC02]
> to[lib/spring-aop-2.5.6.SEC02.jar]
> [INFO] Copy ear sources to
> /ephemeral0/ovirt_engine_find_bugs/ear/target/engine
> [INFO] Could not find manifest file:
> /ephemeral0/ovirt_engine_find_bugs/ear/target/engine/META-INF/MANIFEST.MF
> - Generating one
> [INFO] Building jar:
> /ephemeral0/ovirt_engine_find_bugs/ear/target/engine.ear
> [INFO]
> [INFO] --- maven-dependency-plugin:2.1:copy (copy-quartz-jar) @
> engine-server-ear ---
> [INFO] Configured Artifact: org.quartz-scheduler:quartz:2.1.2:jar
> [INFO] Copying quartz-2.1.2.jar to
> /ephemeral0/ovirt_engine_find_bugs/ear/target/quartz/quartz-2.1.2.jar
> [INFO]
> [INFO] --- maven-install-plugin:2.3.1:install (default-install) @
> engine-server-ear ---
> [INFO] Installing
> /ephemeral0/ovirt_engine_find_bugs/ear/target/engine.ear to
> /home/jenkins/workspace/ovirt_engine_find_bugs/.repository/org/ovirt/engine/engine-server-ear/3.1.0-0001/engine-server-ear-3.1.0-0001.ear
> [INFO] Installing /ephemeral0/ovirt_engine_find_bugs/ear/pom.xml to
> /home/jenkins/workspace/ovirt_engine_find_bugs/.repository/org/ovirt/engine/engine-server-ear/3.1.0-0001/engine-server-ear-3.1.0-0001.pom
> [INFO]
> [INFO] --- findbugs-maven-plugin:2.4.0:findbugs (default-cli) @
> engine-server-ear ---
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] oVirt Engine Root Project ......................... SUCCESS
> [11.175s]
> [INFO] oVirt Build Tools root ............................ SUCCESS
> [0.154s]
> [INFO] oVirt checkstyle .................................. SUCCESS
> [2.925s]
> [INFO] oVirt Checkstyle Checks ........................... SUCCESS
> [32.541s]
> [INFO] oVirt Modules - backend ........................... SUCCESS
> [0.137s]
> [INFO] oVirt Manager ..................................... SUCCESS
> [0.633s]
> [INFO] oVirt Modules - manager ........................... SUCCESS
> [1.512s]
> [INFO] CSharp Compatibility .............................. SUCCESS
> [1:18.689s]
> [INFO] Encryption Libraries .............................. SUCCESS
> [42.599s]
> [INFO] oVirt Tools ....................................... SUCCESS
> [0.205s]
> [INFO] oVirt Tools Common Library ........................ SUCCESS
> [25.939s]
> [INFO] Common Code ....................................... SUCCESS
> [2:09.368s]
> [INFO] Common utilities .................................. SUCCESS
> [1:43.075s]
> [INFO] Data Access Layer ................................. SUCCESS
> [1:39.624s]
> [INFO] engine beans ...................................... SUCCESS
> [0.237s]
> [INFO] engine scheduler bean ............................. SUCCESS
> [40.875s]
> [INFO] Vds broker ........................................ SUCCESS
> [1:44.474s]
> [INFO] Search Backend .................................... SUCCESS
> [59.374s]
> [INFO] Backend Logic @Service bean ....................... SUCCESS
> [2:17.939s]
> [INFO] oVirt RESTful API Backend Integration ............. SUCCESS
> [0.154s]
> [INFO] oVirt RESTful API interface ....................... SUCCESS
> [0.315s]
> [INFO] oVirt Engine API Definition ....................... SUCCESS
> [1:32.846s]
> [INFO] oVirt Engine API Commom Parent POM ................ SUCCESS
> [0.328s]
> [INFO] oVirt Engine API Common JAX-RS .................... SUCCESS
> [58.151s]
> [INFO] oVirt RESTful API Backend Integration Type Mappers SUCCESS
> [1:29.592s]
> [INFO] oVirt RESTful API Backend Integration JAX-RS Resources
> SUCCESS [1:34.159s]
> [INFO] oVirt RESTful API Backend Integration Webapp ...... SUCCESS
> [12.297s]
> [INFO] oVirt Engine Web Root ............................. SUCCESS
> [33.235s]
> [INFO] oVirt Configuration Tool .......................... SUCCESS
> [46.202s]
> [INFO] Notifier Service package .......................... SUCCESS
> [0.143s]
> [INFO] Notifier Service .................................. SUCCESS
> [56.794s]
> [INFO] Notifier Service Resources ........................ SUCCESS
> [9.712s]
> [INFO] oVirt Modules - frontend .......................... SUCCESS
> [3.064s]
> [INFO] oVirt APIs ........................................ SUCCESS
> [1.472s]
> [INFO] oVirt generic API ................................. SUCCESS
> [32.572s]
> [INFO] oVirt Modules - webadmin .......................... SUCCESS
> [0.146s]
> [INFO] oVirt Modules - ui ................................ SUCCESS
> [0.250s]
> [INFO] Extensions for GWT ................................ SUCCESS
> [1:17.416s]
> [INFO] UI Utils Compatibility (for UICommon) ............. SUCCESS
> [47.857s]
> [INFO] Frontend for GWT UI Projects ...................... SUCCESS
> [47.153s]
> [INFO] UICommon .......................................... SUCCESS
> [3:17.484s]
> [INFO] UICommonWeb ....................................... SUCCESS
> [3:41.508s]
> [INFO] oVirt GWT UI common infrastructure ................ SUCCESS
> [1:44.412s]
> [INFO] WebAdmin .......................................... SUCCESS
> [3:28.888s]
> [INFO] UserPortal ........................................ SUCCESS
> [2:12.619s]
> [INFO] oVirt WARs ........................................ SUCCESS
> [0.134s]
> [INFO] WPF Application Module ............................ SUCCESS
> [8.143s]
> [INFO] oVirt Web Application Module ...................... SUCCESS
> [32.504s]
> [INFO] Components Web Application Module ................. SUCCESS
> [6.230s]
> [INFO] oVirt Server EAR .................................. SUCCESS
> [17.227s]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 23:52.051s (Wall Clock)
> [INFO] Finished at: Tue May 22 05:10:14 EDT 2012
> [INFO] Final Memory: 302M/781M
> [INFO]
> ------------------------------------------------------------------------
> [FINDBUGS] Collecting findbugs analysis files...
> [FINDBUGS] Parsing 30 files in
> /home/jenkins/workspace/ovirt_engine_find_bugs
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/beans/scheduler/target/findbugsXml.xml
> of module with 1 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/beans/vdsbroker/target/findbugsXml.xml
> of module with 0 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/bll/target/findbugsXml.xml
> of module with 439 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/common/target/findbugsXml.xml
> of module with 335 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/compat/target/findbugsXml.xml
> of module with 71 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/dal/target/findbugsXml.xml
> of module with 27 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/engineencryptutils/target/findbugsXml.xml
> of module with 10 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/restapi/interface/common/jaxrs/target/findbugsXml.xml
> of module with 18 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/restapi/interface/definition/target/findbugsXml.xml
> of module with 1 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/restapi/jaxrs/target/findbugsXml.xml
> of module with 23 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/restapi/types/target/findbugsXml.xml
> of module with 10 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/root/target/findbugsXml.xml
> of module with 5 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/searchbackend/target/findbugsXml.xml
> of module with 13 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/utils/target/findbugsXml.xml
> of module with 122 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/modules/vdsbroker/target/findbugsXml.xml
> of module with 238 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/tools/engine-config/target/findbugsXml.xml
> of module with 7 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/tools/engine-notifier/engine-notifier-service/target/findbugsXml.xml
> of module with 11 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/backend/manager/tools/engine-tools-common/target/findbugsXml.xml
> of module with 0 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/build-tools-root/ovirt-checkstyle-extension/target/findbugsXml.xml
> of module with 1 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/api/genericapi/target/findbugsXml.xml
> of module with 1 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/wars/rmw-war/target/findbugsXml.xml
> of module with 0 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/frontend/target/findbugsXml.xml
> of module with 21 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/gwt-common/target/findbugsXml.xml
> of module with 65 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/gwt-extension/target/findbugsXml.xml
> of module with 29 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/uicommon/target/findbugsXml.xml
> of module with 420 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/uicommonweb/target/findbugsXml.xml
> of module with 602 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/uicompat/target/findbugsXml.xml
> of module with 40 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal-gwtp/target/findbugsXml.xml
> of module with 14 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/userportal/target/findbugsXml.xml
> of module with 118 warnings.
> [FINDBUGS] Successfully parsed file
> /home/jenkins/workspace/ovirt_engine_find_bugs/frontend/webadmin/modules/webadmin/target/findbugsXml.xml
> of module with 56 warnings.
> [FINDBUGS] Computing warning deltas based on reference build #912
> [FINDBUGS] Using set difference to compute new warnings
> Build step 'Publish FindBugs analysis results' changed build result
> to UNSTABLE
> Email was triggered for: Unstable
> Sending email for trigger: Unstable
>
>
31 May '12
Hello,
The changes required to build and run the engine using the jboss-as
packages in Fedora 17 have been recently merged:
http://gerrit.ovirt.org/4416
This means that from now on in order to install the RPMs you will need a
Fedora 17 machine.
In addition, this also changes how the application server is used. We
have been running the engine as an application deployed to the default
instance of the application server. From now on the engine runs in a
private instance of the application server, owned by the ovirt user
and managed by systemd, so please remember that you no longer need to
(and should not) start the jboss-as service; start the ovirt-engine
systemd service instead:
systemctl start ovirt-engine.service
systemctl stop ovirt-engine.service
The locations of the files/directories used by this private instance of
the application server are also different:
1. You are probably familiar with the standalone.xml file. It is no
longer used; we use /etc/ovirt-engine/engine-service.xml instead.
2. The location of the engine.ear file is still the same
(/usr/share/ovirt-engine/engine.ear), but the deployment marker file
engine.ear.dodeploy is no longer created in that directory. Instead, a
symlink to the engine.ear file is created in the
/var/lib/ovirt-engine/deployments directory, and the engine.ear.dodeploy
marker file is created there by the start/stop script (see the example
listing right after this list).
3. The locations of the log files are also slightly different. The main log
file is still /var/log/ovirt-engine/engine.log, but server.log is no longer
in the jboss-as directory; it is now in /var/log/ovirt-engine as well.
In addition, there is a /var/log/ovirt-engine/console.log file that
stores the standard output and standard error of the engine.
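For example, once the service has created the deployment marker you would expect to see something
like this in the deployments directory (an illustrative, simplified listing only; the marker file
is renamed by the deployment scanner once deployment completes):
$ ls -l /var/lib/ovirt-engine/deployments
engine.ear -> /usr/share/ovirt-engine/engine.ear
engine.ear.dodeploy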
There are other changes, but they are probably less relevant to most of you.
I have done a lot of testing, but I am sure issues will appear, so please keep
an eye on this and let me know of any problems you encounter.
Regards,
Juan Hernandez
Hello,
Not sure if it was already discussed, but I'd like to talk about oVirt
"Shared Memory".
Webadmin shows the Shared Memory percentage in the host's General tab [1].
Initially, I thought that Shared Memory reflected the KSM de-duplication
results, but comparing it with my KSM stats, that does not add up. My environment:
3.5 GB virt host.
6 identical VMs running with 1 GB RAM each.
Webadmin host details:
Memory Sharing: Active
Shared Memory: 0%
KSM - How many shared pages are being used:
$ cat /sys/kernel/mm/ksm/pages_shared
109056
KSM - How many more sites are sharing them i.e. how much saved
$ cat /sys/kernel/mm/ksm/pages_sharing
560128
Converting to Mbytes:
$ echo $(( (109056 * $(getconf PAGE_SIZE)) / (1024 * 1024) ))
426
$ echo $(( ( 560128 * $(getconf PAGE_SIZE)) / (1024 * 1024) ))
2188
With those KSM results, I would expect something other than 0 in "Shared Memory".
Tracing the origin of "Shared Memory" in the oVirt code, I realized it
comes from the memShared value (from the getVdsStats vdsm command), which is
reported in Mbytes:
$ vdsClient -s 192.168.10.250 getVdsStats | grep memShared
memShared = 9
Looking for the memShared calculation in vdsm, we find:
$VDSM_ROOT/vdsm/API.py
...
    stats['memShared'] = self._memShared() / Mbytes
...
    def _memShared(self):
        """
        Return an approximation of memory shared by VMs thanks to KSM.
        """
        shared = 0
        for v in self._cif.vmContainer.values():
            if v.conf['pid'] == '0':
                continue
            try:
                statmfile = file('/proc/' + v.conf['pid'] + '/statm')
                shared += int(statmfile.read().split()[2]) * PAGE_SIZE_BYTES
            except:
                pass
        return shared
...
memShared is calculated by adding up the shared pages value (3rd field) from
the /proc/<VM_PID>/statm file of every running VM, converting to bytes via
the PAGE_SIZE value, and finally converting to Mbytes. This statm field is
currently (its meaning has changed over kernel history) the count of pages
instantiated in the process address space that are backed by a file,
including executables, libraries and shared memory.
Despite the comment in the vdsm code, KSM-shared pages are not accounted for
here: KSM de-duplicates and shares memory pages without any process-level awareness.
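A quick way to see the mismatch on a running host (these are only illustrative checks, not vdsm
code; <VM_PID> stands for the process id of any running VM):
$ awk '{ print $3 }' /proc/<VM_PID>/statm     # the file-backed "shared" pages that vdsm sums up
$ cat /sys/kernel/mm/ksm/pages_shared         # the pages actually de-duplicated by KSM, host-wide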
The engine calculates the percentage against the total physical memory, i.e. the
memSize value from the getVdsCapabilities vdsm command:
$ vdsClient -s 192.168.10.250 getVdsCapabilities | grep memSize
memSize = 3574
Calculating the percent:
$ echo "scale=2; 9 * 100 / 3574" | bc
.25
So we have around 0.25%, rounded down to 0%. "Shared Memory" is therefore
internally consistent, but it does not reflect the real page de-duplication
benefits, and unsuspecting administrators - me included - are led to think
that Shared Memory is related to the KSM results.
IMO, memShared does not provide any representative information. On the
other hand, the missing KSM results are really important to oVirt
administrators: they show how much memory the hosts are overcommitted by
(useful for capacity management) and how much memory, and therefore money,
oVirt is saving.
In order to expose those KSM stats to the engine, I sent a patch [2] (awaiting
approval) adding "ksmShared" and "ksmSharing" values to the vdsm getVdsStats
command, with key names that fit the existing KSM ones (ksmCpu, ksmPages and
ksmState).
Before patch:
$ vdsClient -s 192.168.10.250 getVdsStats | grep ksm
ksmCpu = 1 -> the ksmd process cpu load
ksmPages = 664 -> pages to scan before ksmd goes to sleep
ksmState = True -> is ksm running?
With the patch:
$ vdsClient -s 192.168.10.250 getVdsStats | grep ksm
ksmCpu = 1
ksmPages = 664
ksmShared = 426 -> how many Mbytes of memory are being shared
ksmSharing = 2188 -> how many more sites are sharing them
i.e. how much Mbytes are being saved
ksmState = True
Finally, my questions:
1 - Is the memShared value (from /proc/PID/statm) significant enough to be
kept in the host details? If yes, why?
2 - Should the new vdsm ksmShared and ksmSharing stats be added to the host
details, and if so, in what form (%, Mb, ...)?
Sorry for the long story. I look forward to hearing your comments.
All the best,
--
Pahim
[1] -
http://www.pahim.org/wp-content/uploads/2012/05/Screenshot-oVirt-Enterprise…
[2] - http://gerrit.ovirt.org/4755
31 May '12
Currently 'restore snapshot' is done in two steps on the client side:
1. TryBackToAllSnapshotsOfVm
2. RestoreAllSnapshots
This implementation creates a race condition on step 1 and is therefore unstable and bug prone,
so I suggested refactoring step 2 to include step 1 as a single atomic operation in the backend.
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
30 May '12
Hello,
I think we now have the opportunity to clean up the version number and use
3.1.0 instead of 3.1.0-0001 for the next release. I submitted the
corresponding change to gerrit for review:
http://gerrit.ovirt.org/4914
As far as I can tell, this change introduces no issues, and it allows a
cleaner versioning scheme for the RPM packages.
Please let me know if you foresee any issues.
Regards,
Juan Hernandez