[Engine-devel] SPM Priority Design - Wiki Page

Hi all,

In the link below there is a wiki page describing the requirements and design of the SPM Priority feature. The feature allows the admin to define priorities between hosts regarding the SPM selection process:

http://www.ovirt.org/wiki/Features/SPMPriority

Please feel free to share your comments.

Thanks,
Muli

A few points:
- From a requirements perspective, a 0-10 scale is OK too.
- There is an RFE to allow manual SPM selection; let's make sure that we clarify how we do that in the scenarios.
- There is a case where all hosts are set as "no SPM" (-1); the user should be notified (on the last host?).
- Need to be able to view in the GUI:
  - the SPM priority for all hosts, on the grid?
  - would love to have the spm-priority available in the search criteria, so I could search for all hosts where spm-priority == 0/-1.

Miki
From: "Muli Salem" <msalem@redhat.com> To: engine-devel@ovirt.org Sent: Tuesday, December 27, 2011 3:16:46 PM Subject: [Engine-devel] SPM Priority Design - Wiki Page
Hi all,
In the link below there is a wiki page describing the requirements and design of the SPM Priority feature. The feature allows the admin to define priorities between hosts regarding the SPM selection process
http://www.ovirt.org/wiki/Features/SPMPriority
Please feel free to share your comments.
Thanks, Muli _______________________________________________ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel

On 12/27/2011 05:22 PM, Miki Kenneth wrote:
There is a case where all hosts are set as "no SPM" (-1); the user should be notified (on the last host?).
Hi Miki,

Can you please elaborate on this validation? When we edit a host, it is currently a requirement that the host be in maintenance. So what do you think the validation should be?

Livnat

Sent from my iPhone.

On 28 Dec 2011, at 18:21, Livnat Peer <lpeer@redhat.com> wrote:
Hi Miki, can you please elaborate on this validation? When we edit a host, it is currently a requirement that the host be in maintenance. So what do you think the validation should be?

That there is at least one host in the cluster that can become SPM.
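A minimal sketch of what such a check could look like (Host and getSpmPriority() here are illustrative stand-ins, not the actual ovirt-engine model classes):

    import java.util.List;

    // Illustrative stand-in for the engine's host model.
    class Host {
        private final int spmPriority; // -1 means "never SPM" per the wiki proposal
        Host(int spmPriority) { this.spmPriority = spmPriority; }
        int getSpmPriority() { return spmPriority; }
    }

    class SpmValidation {
        // True if at least one host is still eligible to become SPM.
        static boolean hasSpmCandidate(List<Host> hosts) {
            return hosts.stream().anyMatch(h -> h.getSpmPriority() >= 0);
        }
    }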

On 29/12/11 07:33, Miki Kenneth wrote:
That there is at least one host in the cluster that can become SPM.
What does "the last host" mean? That it is in maintenance mode and not taking part as an SPM candidate ATM?! It can be in the process of moving to another DC, for example. Until the host is activated, there is no value in validating its SPM priority. When the host is activated, there is no point in this validation either, as we simply add a host to the DC; I don't think failing host activation because of its SPM priority is a valid flow.

In short - I don't see where this validation can be valuable.
Livnat

On 12/27/2011 05:22 PM, Miki Kenneth wrote:
From a requirements perspective, a 0-10 scale is OK too.
UI-wise, isn't High-Medium-Low enough? Unless we are going to credit some hosts with extra points (for example, for having a bond) or take off points (for not being connected to the ISO domain, for example)? We can set in the engine 'High=8, Medium=5, Low=2' and have the API set finer granularity if desired.

BTW, I'd do the second-order selection by latency to the master domain, not at random.

Y.
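For illustration, the mapping Yaniv suggests could look like this (the enum and method names are invented, not engine code):

    enum SpmPriority {
        LOW(2), MEDIUM(5), HIGH(8);

        private final int score;

        SpmPriority(int score) {
            this.score = score;
        }

        // Numeric value used by the engine; the API could still accept
        // finer-grained values on the same scale if desired.
        int score() {
            return score;
        }
    }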

On 12/27/2011 05:22 PM, Miki Kenneth wrote:
There is a case where all hosts are set as "no SPM" (-1); the user should be notified (on the last host?).
SPM is DC level, not cluster level.
Need to be able to view in the GUI: the SPM priority for all hosts, on the grid?
Isn't this cluttering the hosts grid? A general subtab, maybe?

On 12/29/2011 12:41 PM, Itamar Heim wrote:
Need to be able to view in the GUI: the SPM priority for all hosts, on the grid?

Isn't this cluttering the hosts grid? A general subtab, maybe?
Unless you re-use the currently almost useless 'SPM status' column, renaming it to 'SPM priority' and having it show High/Low/Med/SPM values.

Y.

----- Original Message -----
From: "Yaniv Kaul" <ykaul@redhat.com> To: "Itamar Heim" <iheim@redhat.com> Cc: "Miki Kenneth" <mkenneth@redhat.com>, engine-devel@ovirt.org Sent: Thursday, December 29, 2011 4:05:05 PM Subject: Re: [Engine-devel] SPM Priority Design - Wiki Page
SPM is DC level, not cluster level.

So?
Unless you re-use the currently almost useless 'SPM status' column, renaming it to 'SPM priority' and having it show High/Low/Med/SPM values.

You need to see the SPM priority (for the DC) in one view in order to compare between hosts; otherwise, you will have to switch between all hosts in order to make sure you have selected the correct hosts from all the rest. It can be a sub-tab if we have search....

On 12/29/2011 04:11 PM, Miki Kenneth wrote:
You need to see the SPM priority (for the DC) in one view in order to compare between hosts; otherwise, you will have to switch between all hosts in order to make sure you have selected the correct hosts from all the rest. It can be a sub-tab if we have search....
My suggestion allows you to see the SPM priority for the DC: simply be on the relevant DC in the tree, and you should be able to sort the hosts by the 'SPM' column.

Y.

----- Original Message -----
From: "Yaniv Kaul" <ykaul@redhat.com> To: "Miki Kenneth" <mkenneth@redhat.com> Cc: engine-devel@ovirt.org, "Itamar Heim" <iheim@redhat.com> Sent: Thursday, December 29, 2011 4:13:26 PM Subject: Re: [Engine-devel] SPM Priority Design - Wiki Page
My suggestion allows you to see the SPM priority for the DC: simply be on the relevant DC in the tree, and you should be able to sort the hosts by the 'SPM' column.

I like it (I was thinking about it myself.... :)

----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com> To: "Miki Kenneth" <mkenneth@redhat.com> Cc: engine-devel@ovirt.org Sent: Thursday, December 29, 2011 5:41:41 AM Subject: Re: [Engine-devel] SPM Priority Design - Wiki Page
I don't agree with/understand the requirements:

<snip>
- Enable setting a priority between -1 and 100 for a host (100 is the highest, -1 means never to choose this host).
- When the SPM selection process takes place, use the SPM priority to select an SPM.
- Default for upgrading oVirt will be 50.
</snip>

Here we are asking the user to define priorities for the SPM selection. A user should be able to influence the selection, but the actual choice should come down to runtime status. If host A is running 100 VMs and host B is running 50, then it seems more efficient for B to be the SPM rather than A. If host C has multiple storage paths to the LUN, then it seems a better fit than host B, which has only one path.

We need to step back and review the requirements here.

Today SPM selection is random; that certainly needs to change, but a user defining priorities is overly complex and won't solve the runtime selection issue.

We need to start by defining an algorithm for selecting the SPM independent of user input. At selection time I'd suggest that we at least need to consider the number of paths to the storage (for block-based domains), the number of VMs on that host, perhaps #cores? Perhaps we should dynamically create a score for a host that takes these factors into account; each may get a higher rating - e.g. maybe # storage paths is more important than # cores?

On top of this we can add some user-defined preference that plays into the score. For example, a user can say "this host can NEVER be an SPM" or can set a preference - e.g. "Preferred SPM" or perhaps even "always SPM".

This algorithm would apply for automated selection of SPMs, but a user should be allowed to override this and at runtime say "make this node the SPM now".
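A rough sketch of the kind of dynamic scoring Andrew describes; all weights, field names, and class names here are invented for illustration, and the real factors and their relative importance are exactly what is under discussion:

    class SpmScorer {
        // Hypothetical weights: storage paths count for more than cores.
        private static final int PATH_WEIGHT = 10;
        private static final int CORE_WEIGHT = 1;

        static int score(HostStats h) {
            if (h.neverSpm) {
                return Integer.MIN_VALUE; // user said this host can NEVER be SPM
            }
            int score = 0;
            score += h.storagePaths * PATH_WEIGHT; // more paths -> better fit
            score -= h.runningVms;                 // busier host -> worse fit
            score += h.cores * CORE_WEIGHT;        // weakest factor in this sketch
            return score;
        }
    }

    class HostStats {
        boolean neverSpm;  // user-defined "never be SPM" override
        int storagePaths;  // paths to block storage
        int runningVms;
        int cores;
    }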

On 12/29/2011 04:34 PM, Andrew Cathrow wrote:

At selection time I'd suggest that we at least need to consider the number of paths to the storage (for block-based domains), the number of VMs on that host, perhaps #cores?

Paths to which storage domain? The master? All? On average? Do you prefer a 2x10Gb iSCSI connection, or 4x1Gb?
- What about available bandwidth to the storage (assuming it may be capped and compete with VM I/O traffic)?
- Latency?

Perhaps we should dynamically create a score for a host that takes these factors into account; each may get a higher rating - e.g. maybe # storage paths is more important than # cores?

This is why I've suggested High(8)-Med(5)-Low(2) and adding the system scoring dynamically, with whatever params we eventually get to. 'Never' could be 0, 'Always' could be 10.

Y.

----- Original Message -----
From: "Yaniv Kaul" <ykaul@redhat.com> To: "Andrew Cathrow" <acathrow@redhat.com> Cc: engine-devel@ovirt.org Sent: Thursday, December 29, 2011 9:47:24 AM Subject: Re: [Engine-devel] SPM Priority Design - Wiki Page

Paths to which storage domain? The master? All? On average? Do you prefer a 2x10Gb iSCSI connection, or 4x1Gb?
- What about available bandwidth to the storage (assuming it may be capped and compete with VM I/O traffic)?
- Latency?

Yep, exactly - we need to use these kinds of inputs in the algorithm, and these can change, hence the need to do this more dynamically rather than by user selection, which is invariably wrong.

This is why I've suggested High(8)-Med(5)-Low(2) and adding the system scoring dynamically, with whatever params we eventually get to. 'Never' could be 0, 'Always' could be 10.

Yep, the big change that I'm suggesting is that these scores are only an input to the algorithm, not the whole basis for selection.

[Engine-devel] restapi - removing template from a specified storage-domain

I'm about to implement removing a template from a specified storage-domain.

A template is meta-data + disks. The meta-data is in the ovirt-engine database; the disks are (potentially) scattered across several storage-domains. Removing a template from a specified storage domain means removing that template's disks from that storage-domain. API-wise, it's better to enable the deletion at the single-disk level; otherwise rollback and all sorts of unnecessary complexities enter the picture. So what I would like to go forward with is the following API:

DELETE /api/templates/{template_xxx}/disks/{disk_yyy}
<storage_domain id="domain_zzz"/>

This means: "delete the disk 'disk_yyy' (which belongs to template 'template_xxx') from the storage-domain 'domain_zzz'".

Any comments?

Thanks,
Ori.
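Since DELETE requests rarely carry a body, here is a sketch of how a client might issue this call; the java.net.http client is used purely for illustration, and the host name and IDs are placeholders taken from the proposal above:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RemoveTemplateDisk {
        public static void main(String[] args) throws Exception {
            // The storage domain to remove the disk from goes in the body.
            String body = "<storage_domain id=\"domain_zzz\"/>";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://engine.example.com/api/templates/template_xxx/disks/disk_yyy"))
                    .header("Content-Type", "application/xml")
                    .method("DELETE", HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }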

----- Original Message -----
From: "Ori Liel" <oliel@redhat.com> To: engine-devel@ovirt.org Sent: Sunday, January 1, 2012 3:15:18 PM Subject: [Engine-devel] restapi - removing template from a specified storage-domain
So what I would like to go forward with is the following API:

DELETE /api/templates/{template_xxx}/disks/{disk_yyy}
<storage_domain id="domain_zzz"/>
Just that the CURRENT backend only allows removing per storage domain(s); this functionality is planned, though (if that was what you meant..).

On 01/01/2012 05:59 PM, Omer Frenkel wrote:
Just that the CURRENT backend only allows removing per storage domain(s); this functionality is planned, though (if that was what you meant..).
I think the implementation for this can/should wait for the new support in the backend if it makes more sense API-wise, rather than implementing based on the current limitations and later changing the API.

The question is: with the move to multiple storage domains, will an atomic [1] verb to delete all disks from a specific storage domain (per VM/template) be provided, or only per disk?

[1] Well, delete is roll-forward, so I'm not sure I understand Ori's point about rollback. OTOH, the API should probably behave the same for move/copy of disks, etc.

----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com> To: "Omer Frenkel" <ofrenkel@redhat.com> Cc: "Ori Liel" <oliel@redhat.com>, engine-devel@ovirt.org, "Geert Jansen" <gjansen@redhat.com> Sent: Sunday, January 1, 2012 9:44:47 PM Subject: Re: [Engine-devel] restapi - removing template from a specified storage-domain
I think the implementation for this can/should wait for the new support in the backend if it makes more sense API-wise, rather than implementing based on the current limitations and later changing the API. The question is: with the move to multiple storage domains, will an atomic [1] verb to delete all disks from a specific storage domain (per VM/template) be provided, or only per disk?

[1] Well, delete is roll-forward, so I'm not sure I understand Ori's point about rollback. OTOH, the API should probably behave the same for move/copy of disks, etc.
What I meant by rollback issues is: how do we handle a scenario in which deletion of disk1 succeeds, but then deletion of disk2 fails? When you say 'delete is roll-forward', I assume you mean that delete operations don't require rollback, and that even if only a partial deletion was done - that's OK (I'm verifying because I'm not familiar with the term roll-forward). Assuming this is what you meant, I still prefer the per-disk deletion, because the scenario of partial success still requires extra handling:

1) The backend would have to generate a complex message saying that disk1 was deleted but disk2 was not.
2) A REST-API user might resend the same request to retry - and fail, because this time disk1 does not exist.
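A sketch of why per-disk deletion keeps the client side simple: each disk can be retried independently, and a 404 on retry can be treated as "already deleted". Class and method names are illustrative, not part of any real SDK:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    class TemplateDiskCleanup {
        static void removeDisks(HttpClient client, String base, String templateId,
                                String domainId, List<String> diskIds) throws Exception {
            String body = "<storage_domain id=\"" + domainId + "\"/>";
            for (String diskId : diskIds) {
                HttpRequest req = HttpRequest.newBuilder()
                        .uri(URI.create(base + "/api/templates/" + templateId
                                + "/disks/" + diskId))
                        .header("Content-Type", "application/xml")
                        .method("DELETE", HttpRequest.BodyPublishers.ofString(body))
                        .build();
                int status = client.send(req, HttpResponse.BodyHandlers.ofString())
                        .statusCode();
                if (status == 404) {
                    continue; // disk already gone, e.g. from an earlier partial run
                }
                // Other failures can be reported per disk; no rollback is needed.
            }
        }
    }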

On 12/29/2011 04:34 PM, Andrew Cathrow wrote:
On top of this we can add some user-defined preference that plays into the score. For example, a user can say "this host can NEVER be an SPM" or can set a preference - e.g. "Preferred SPM" or perhaps even "always SPM".
Or we limit this for now to an "allow using this host as SPM" checkbox, but keep the values as integers rather than booleans to allow this easily in the future; for now, only allow flagging which hosts should not become SPMs.
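A sketch of that suggestion: the UI exposes only a checkbox for now, but the stored value stays an integer, so finer-grained priorities can be added later without a schema change (class and method names are invented; 50 is the default the wiki proposes for upgraded setups):

    class SpmPriorityStorage {
        static final int NEVER = -1;
        static final int DEFAULT = 50; // wiki's proposed default for upgrades

        // Map the "allow using this host as SPM" checkbox onto the integer field.
        static int fromCheckbox(boolean allowSpm) {
            return allowSpm ? DEFAULT : NEVER;
        }
    }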
participants (8)
- Andrew Cathrow
- Itamar Heim
- Livnat Peer
- Miki Kenneth
- Muli Salem
- Omer Frenkel
- Ori Liel
- Yaniv Kaul