
Hi, I'd like to tell you about a new feature which allows specifying distinct MAC pools, currently one per data center: http://www.ovirt.org/Scoped_MacPoolManager Any comments/proposals for improvement are very welcome. Martin.

On 04/10/2014 09:59 AM, Martin Mucha wrote:
(changed title to reflect content)
When MAC ranges are specified for a given "scope" where there was no definition previously, MACs already allocated from the default pool will not be moved to the "scoped" one until the next engine restart. The other way around: when a "scoped" MAC pool definition is removed, all MACs from that pool will be moved to the default one.
Can you please elaborate on this one? As for potential other "scopes", I can think of cluster, VM pool and logical network as potential ones. One more question: how do you know to "return" the MAC address to the correct pool on delete?

Hi, I'll try to describe it a little more. Let's say we've got one data center. It's not yet configured to have its own MAC pool, so there is only one pool in the system, the global one. We create a few VMs, and their NICs obtain their MACs from this global pool, marking them as used. Next we alter the data center definition so that it now uses its own MAC pool. From this point on, two MAC pools exist in the system, one global and one related to this data center, but the already-allocated MACs are still allocated in the global pool, since the data center update does not (yet) contain logic to find all assigned MACs related to this data center and reassign them in the new pool. However, after an app restart all VmNics are read from the DB and placed into the appropriate pools.

Let's assume we've performed such a restart. Now we realize that we actually don't want that data center to have its own MAC pool, so we alter its definition, removing the MAC pool ranges. The pool related to this data center will be removed and its content moved to the scope above this data center -- into the global pool. We know that everything allocated in the pool being removed is still in use, but we need to track it elsewhere, and currently there's just one option, the global pool.

So, to answer your last question: when I remove a scope, its pool is gone and its content moved elsewhere. When a MAC is later returned to the pool, the request goes like: "give me the pool for this virtual machine, and whatever pool it is, I'm returning this MAC to it." Clients of ScopedMacPoolManager do not know which pool they're talking to. The decision about which pool is right for them is made behind the scenes based on how they identify themselves ("I want the pool for this logical network").

Note that there is one "problem" in deciding which scope/pool to use. There are places in the code which require the pool related to a given data center, identified by GUID. For that request, only the data center scope, or something broader like the global scope, can be returned. So even if one wants to use one pool per logical network, requests identified by a data center ID can still return only the data center scope or broader, and there is no chance of returning a pool related to a logical network (except for the situation where there is a sole logical network in that data center).

Thanks for the suggestions for other scopes. One question: if we implement them, would you like to just pick a *sole* non-global scope to use in your system (like data-center-related pools ONLY plus one global, or logical-network-related pools ONLY plus one global), or would it be (more) beneficial to you to have some sort of cascading and overriding implemented? Like: "this data center uses *this* pool, BUT except for *this* logical network, which should use *this* one instead."

I'll update the feature page to contain these paragraphs.

M.

----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com>
To: "Martin Mucha" <mmucha@redhat.com>, users@ovirt.org, devel@ovirt.org
Sent: Thursday, April 10, 2014 9:04:37 AM
Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC (was: new feature)
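The resolution behaviour described above -- clients only identify a scope and then work with whatever pool that identification resolves to -- can be sketched roughly like this. All names here are hypothetical stand-ins, not the actual oVirt engine classes:

```java
import java.util.*;

// Minimal sketch (hypothetical names; NOT the real oVirt classes) of the
// resolution behaviour: callers identify themselves ("give me the pool for
// this data center") and never pick a pool directly, so a MAC is always
// returned to whichever pool that same identification resolves to.
class ScopeResolutionSketch {
    static class MacPool {
        final Set<String> used = new HashSet<>();
        void addMac(String mac) { used.add(mac); }     // allocate this explicit MAC
        void freeMac(String mac) { used.remove(mac); } // return the MAC to the pool
        boolean isInUse(String mac) { return used.contains(mac); }
    }

    final MacPool globalPool = new MacPool();
    final Map<UUID, MacPool> perDataCenter = new HashMap<>();

    // A data-center-scoped request returns that DC's pool when one is
    // configured, otherwise it falls back to the broader (global) scope.
    MacPool poolForDataCenter(UUID dcId) {
        return perDataCenter.getOrDefault(dcId, globalPool);
    }

    public static void main(String[] args) {
        ScopeResolutionSketch mgr = new ScopeResolutionSketch();
        UUID dc = UUID.randomUUID();
        // No scoped pool configured yet: the request falls through to global.
        mgr.poolForDataCenter(dc).addMac("00:1a:4a:00:00:01");
        System.out.println(mgr.globalPool.isInUse("00:1a:4a:00:00:01")); // true
        // Same identification later: the MAC goes back to the same pool.
        mgr.poolForDataCenter(dc).freeMac("00:1a:4a:00:00:01");
        System.out.println(mgr.globalPool.isInUse("00:1a:4a:00:00:01")); // false
    }
}
```

The point of the fallback in poolForDataCenter is exactly the "data center scope or broader" behaviour: the caller cannot tell (and does not need to know) whether it got a scoped pool or the global one.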

Hi, thanks for the very detailed answers. So here is another question: how are MACs handled which were assigned "by hand"? Do they also get registered with the global or with the data center pool? Are they tracked at all? I'm currently assigning MACs via the API directly to the VMs and do not let oVirt decide itself which MAC goes where.

On 18.04.2014 12:17, Martin Mucha wrote:
-- Mit freundlichen Grüßen / Regards Sven Kieske Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +49-5772-293-100 F: +49-5772-293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen

Hi, I like answering questions. The presence of questions in a "motivated environment" means there is a flaw in the documentation/study material which needs to be fixed :)

To answer your question: you get the pool you want to use -- either the global one (explicitly, using the method org.ovirt.engine.core.bll.network.macPoolManager.ScopedMacPoolManager#defaultScope()) or one related to some scope, which you identify somehow, like in the previous mail: "give me the pool for this data center". When you have this pool, you can allocate *some* new MAC (the system decides which one it will be) or you can allocate an *explicit* one, i.e. use a MAC address you've specified. I think the latter is what you meant by "assigning by hand". There is just a performance difference between these two allocations. Once the pool to be used is identified, everything that comes after happens on *this* pool.

Example (I'm using naming from the code here; storagePool is the DB table for data center):

ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool().addMac("00:1a:4a:15:c0:fe");

Let's discuss the parts of this command:

ScopedMacPoolManager.scopeFor()  // means "I want a scope ..."
ScopedMacPoolManager.scopeFor().storagePool(storagePoolId)  // ... which is related to a storagePool and identified by storagePoolId
ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool()  // ... and I want the existing pool for this scope
ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool().addMac("00:1a:4a:15:c0:fe")  // ... and I want to add this MAC address to it.

So in short, whatever you do with a pool, however you obtained it, happens on this pool only. You do not have code-level control over which pool you get: if the system is configured to use a single pool only, then a request for a data-center-related pool still returns that sole one. But once you have that pool, everything happens on this pool, and, unless the data center configuration is altered, the same request in the future should return the same pool.

Now a small spoiler (it's not merged to the production branch yet) about the performance difference between allocating a user-provided MAC and a MAC from the MAC pool range: you should try to avoid allocating a MAC which is outside the ranges of the configured MAC pool (either the global or a scoped one). It's perfectly OK to allocate a specific MAC address from inside these ranges; that's actually a little more efficient than letting the system pick one for you. But if you use one from outside those ranges, your allocated MAC ends up in less memory-efficient storage (approx. 100 times less efficient). So if you want to use user-specified MACs, you can, but tell the system which range those MACs will come from (via the MAC pool configuration).

M.

----- Original Message -----
From: "Sven Kieske" <S.Kieske@mittwald.de>
To: "Martin Mucha" <mmucha@redhat.com>, "Itamar Heim" <iheim@redhat.com>
Cc: users@ovirt.org, devel@ovirt.org
Sent: Tuesday, April 22, 2014 8:31:31 AM
Subject: Re: [ovirt-devel] [ovirt-users] Feature Page: Mac Pool per DC
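A figure like "100 times less efficient" plausibly follows from how in-range and out-of-range MACs can be stored. The following is only my guess at such a scheme -- RangeBackedPool and everything in it is hypothetical, not the engine's actual data structure: inside a configured range a MAC is just an offset from the range start, so one bit tracks it; outside the range each MAC needs its own full entry.

```java
import java.util.*;

// Hypothetical sketch of why out-of-range MACs cost far more memory.
// A MAC inside the configured range is stored as one bit in a BitSet
// (keyed by its offset from the range start); a MAC outside the range
// needs a full per-address entry in a HashSet. None of these names come
// from the oVirt code base.
class RangeBackedPool {
    final long rangeStart;   // first MAC of the range, as a 48-bit value
    final int rangeSize;     // number of addresses in the range
    final BitSet inRange;    // one bit per in-range address
    final Set<String> outOfRange = new HashSet<>(); // full entry per address

    RangeBackedPool(long rangeStart, int rangeSize) {
        this.rangeStart = rangeStart;
        this.rangeSize = rangeSize;
        this.inRange = new BitSet(rangeSize);
    }

    static long parse(String mac) {
        // "00:1a:4a:15:c0:fe" -> 0x001a4a15c0fe
        return Long.parseLong(mac.replace(":", ""), 16);
    }

    void addMac(String mac) {
        long offset = parse(mac) - rangeStart;
        if (offset >= 0 && offset < rangeSize) {
            inRange.set((int) offset);  // ~1 bit of storage
        } else {
            outOfRange.add(mac);        // tens of bytes of storage
        }
    }

    boolean isInUse(String mac) {
        long offset = parse(mac) - rangeStart;
        return (offset >= 0 && offset < rangeSize)
                ? inRange.get((int) offset)
                : outOfRange.contains(mac);
    }

    public static void main(String[] args) {
        RangeBackedPool pool =
                new RangeBackedPool(parse("00:1a:4a:15:c0:00"), 256);
        pool.addMac("00:1a:4a:15:c0:fe"); // inside the range: one bit
        pool.addMac("02:00:00:00:00:01"); // outside: a HashSet entry
        System.out.println(pool.isInUse("00:1a:4a:15:c0:fe")); // true
        System.out.println(pool.isInUse("02:00:00:00:00:01")); // true
        System.out.println(pool.outOfRange.size());            // 1
    }
}
```

Under a scheme like this, a per-address HashSet entry (string object plus hash-table node) easily costs two orders of magnitude more than one bit, which would match the "approx. 100 times" figure above.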

Regarding the UI mockup, I'd suggest having a checkbox next to the MAC ranges: when the data center has no range (meaning the global one is in use), the checkbox is unchecked and that text box shows the global ranges, disabled. In order to specify a specific range, the user has to check the checkbox and modify the range (the same behaviour as in the edit VM interface dialog). I'd also recommend a tooltip with an example for the user (maybe shown when hovering the question mark icon).

----- Original Message -----
From: "Martin Mucha" <mmucha@redhat.com>
To: "Sven Kieske" <S.Kieske@mittwald.de>
Cc: devel@ovirt.org, users@ovirt.org
Sent: Tuesday, April 22, 2014 11:04:31 AM
Subject: Re: [ovirt-devel] [ovirt-users] Feature Page: Mac Pool per DC

On 04/18/2014 01:17 PM, Martin Mucha wrote:
I have to say I really don't like the notion of having to restart the engine for a change done via the webadmin to take effect. Also, if I understand your flow correctly, MAC addresses may not go back to the pool anyway until an engine restart, since the change will only take effect on engine restart, at which point the available MACs per scope will be re-calculated.

Hi, I was describing the current state, the first iteration. The need for a restart is something that should not exist, and I've removed that necessity in the meantime. The altered flow: you allocate a MAC address for a NIC in a data center without its own pool, and it gets registered in the global pool. Then you modify the settings of that data center so that a new pool is created for it. All NICs for that data center are queried from the DB, their MACs released from the global pool and added to the data-center-scoped pool. And the other way around: when you delete this scoped pool, all its content will be moved to the global pool. The feature page is updated.

Note: *previously* a MAC was placed in the wrong pool only after a modification of an existing data center which caused an entirely new pool to be created (there wasn't a pool for this scope; after the modification there is). All other operations were fine. Now all manipulation with scoped pools should be OK.

Note 2: all the scoped pool handling is implemented as a strategy. If we are unsatisfied with this implementation we can create another one and switch to it without modifying the calling code. Many implementations may also coexist, and we can switch between them (on app start-up) via config.

Question: when allocating a MAC not specified by the user, the system picks an available MAC from the given MAC pool. Imagine that after some time the MAC pool ranges change; let's say a whole new interval of MACs is used, not overlapping with the former one. Then all previously allocated MACs will be present in the altered pool as user-specified ones, since they are outside the defined ranges. With a large number of MAC addresses this has a detrimental effect on memory usage. So if this is a real scenario, would it be acceptable (or welcome) to you if we reassigned all MAC addresses which were selected by the system? For example on engine start / VM start.

M.
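The restart-free flow described above can be sketched as follows. The names are again hypothetical stand-ins (the real engine would query VmNics from the DB rather than an in-memory map):

```java
import java.util.*;

// Hypothetical sketch (not the real engine classes) of the restart-free
// flow: creating a scoped pool immediately re-registers the data center's
// NIC MACs, and deleting it moves them back to the global pool.
class LiveRescopeSketch {
    static class MacPool {
        final Set<String> used = new HashSet<>();
    }

    final MacPool globalPool = new MacPool();
    final Map<UUID, MacPool> scoped = new HashMap<>();
    // Stand-in for the DB query "all NIC MACs of this data center".
    final Map<UUID, List<String>> nicMacsByDc = new HashMap<>();

    // The data center gains its own pool: release its MACs from the
    // global pool and register them in the newly created scoped pool.
    void createScopedPool(UUID dcId) {
        MacPool pool = new MacPool();
        for (String mac : nicMacsByDc.getOrDefault(dcId, List.of())) {
            globalPool.used.remove(mac);
            pool.used.add(mac);
        }
        scoped.put(dcId, pool);
    }

    // The scoped pool is removed: everything in it is still in use,
    // so its content moves up to the global scope.
    void deleteScopedPool(UUID dcId) {
        MacPool removed = scoped.remove(dcId);
        if (removed != null) {
            globalPool.used.addAll(removed.used);
        }
    }

    public static void main(String[] args) {
        LiveRescopeSketch engine = new LiveRescopeSketch();
        UUID dc = UUID.randomUUID();
        engine.nicMacsByDc.put(dc, List.of("00:1a:4a:00:00:01"));
        // Allocated before the scoped pool existed, so it sits in global.
        engine.globalPool.used.add("00:1a:4a:00:00:01");
        engine.createScopedPool(dc);
        System.out.println(engine.globalPool.used.contains("00:1a:4a:00:00:01")); // false
        engine.deleteScopedPool(dc);
        System.out.println(engine.globalPool.used.contains("00:1a:4a:00:00:01")); // true
    }
}
```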
----- Original Message ----- From: "Itamar Heim" <iheim@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: users@ovirt.org, devel@ovirt.org Sent: Tuesday, April 22, 2014 5:15:35 PM Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC On 04/18/2014 01:17 PM, Martin Mucha wrote:
Hi,
I'll try to describe it a little bit more. Let's say that we've got one data center. It's not configured yet to have its own MAC pool, so in the system there is only one, global pool. We create a few VMs, and their NICs will obtain their MACs from this global pool, marking them as used. Next we alter the data center definition so that it now uses its own MAC pool. From this point on two MAC pools exist in the system, one global and one related to this data center, but those allocated MACs are still allocated in the global pool, since the data center modification does not (yet) contain logic to get all assigned MACs related to this data center and reassign them in the new pool. However, after an app restart all VmNics are read from the DB and placed into the appropriate pools. Let's assume that we've performed such a restart. Now we realize that we actually don't want that data center to have its own MAC pool, so we alter its definition, removing the MAC pool ranges. The pool related to this data center will be removed and its content will be moved to a scope above this data center -- into the global scope pool. We know that everything allocated in the pool to be removed is still used, but we need to track it elsewhere, and currently there's just one option, the global pool. So, to answer your last question: when I remove a scope, its pool is gone and its content moved elsewhere. Next, when a MAC is returned to the pool, the request goes like: "give me the pool for this virtual machine, and whatever pool it is, I'm returning this MAC to it." Clients of ScopedMacPoolManager do not know which pool they're talking to. The decision which pool is right for them is made behind the scenes based on their identification (I want the pool for this logical network).
Notice that there is one "problem" in deciding which scope/pool to use. There are places in the code which require the pool related to a given data center, identified by GUID. For that request, only the data center scope or something broader, like the global scope, can be returned. So even if one wants to use one pool per logical network, requests identified by a data center id can still return only the data center scope or broader, and there is no chance of returning a pool related to a logical network (except for the situation where there is a sole logical network in that data center).
Thanks for the suggestions for other scopes. One question: if we're implementing them, would you like to just pick a *sole* non-global scope to use in your system (like data-center-related pools ONLY plus one global, or logical-network-related pools ONLY plus one global), or would it be (more) beneficial to you to have some sort of cascading and overriding implemented? Like: "this data center uses *this* pool, BUT except for *this* logical network, which should use *this* one instead."
I'll update the feature page to contain these paragraphs.
I have to say i really don't like the notion of having to restart the engine for a change done via the webadmin to apply. also, if i understand your flow correctly, mac addresses may not go back to the pool anyway until an engine restart, since the change will only take effect on engine restart, and only then will available mac's per scope be re-calculated.

On 04/23/2014 11:12 AM, Martin Mucha wrote:
Question: when allocating a MAC not specified by the user, the system picks an available MAC from the given pool. Imagine that after some time the pool's ranges change, and say a whole new, non-overlapping interval of MACs is now in use. Then all previously allocated MACs will be present in the altered pool as user-specified ones, since they are outside the defined ranges. With a large number of such MAC addresses this has a detrimental effect on memory usage. So if this is a real scenario, would it be acceptable (or welcome) for you to reassign all MAC addresses which were selected by the system? For example on engine start / VM start.
no. you don't change mac addresses on the fly. also, if the mac address isn't in the range of the scope, i don't see why you need to keep it in memory at all?

iiuc, you keep in memory the unused-ranges of the various mac_pools. when a mac address is released, you need to check if it is in the range of the relevant mac_pool for the VM (default, dc, cluster, vm_pool). if it is, you need to return it to that mac_pool. otherwise, the mac_pool is not relevant for this out-of-range mac address, and you just stop using it.

remember, you have to check the released mac address against the specific associated mac_pool, since we do (read: should[1]) allow overlapping mac addresses (hence ranges) in different mac_pools.

so cases to consider:
- mac_pool removed --> use the relevant mac_pool (say, the default one) for the below
- mac_pool range extended - need to check if any affected VMs have mac addresses in the new range, to not use them
- mac_pool range reduced - just need to reduce it, unrelated to current vm's
- mac_pool range changed altogether / new mac_pool defined affecting the VM (instead of the default one) - need to review all mac addresses in affected vm's to check if any are in the range and should be removed from the mac_pool ranges

the last 3 are all basically the same - on any change to a mac_pool range, just re-calculate the ranges in it by creating sub-ranges based on removing sorted groups/ranges of already allocated mac addresses?

[1] iirc, we have a config allowing this today for manually configured mac addresses.
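The sub-range re-calculation suggested above might look something like this sketch (names and the long-based MAC model are my own illustration, not engine code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

// On any change to a pool's range, rebuild the free sub-ranges by carving
// the sorted, already-allocated addresses out of the configured range.
// MAC addresses are modeled as plain longs for simplicity.
class FreeRanges {
    // Free [start, end] sub-ranges of [rangeStart, rangeEnd] left after
    // removing the allocated addresses; allocations outside the range are
    // simply ignored (they no longer concern this pool).
    static List<long[]> subtract(long rangeStart, long rangeEnd, SortedSet<Long> allocated) {
        List<long[]> free = new ArrayList<>();
        long cursor = rangeStart;
        for (long mac : allocated) {
            if (mac < rangeStart || mac > rangeEnd) continue;
            if (mac > cursor) free.add(new long[]{cursor, mac - 1});
            cursor = mac + 1;
        }
        if (cursor <= rangeEnd) free.add(new long[]{cursor, rangeEnd});
        return free;
    }
}
```

This handles the last three cases uniformly: extend, reduce, or replace the range, then re-run the subtraction against the same set of allocated addresses.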

> no. you don't change mac addresses on the fly.

OK, I was just asking if that's an option. No reallocating.
> i don't see why you need to keep it in memory at all?

What I did is not a rewrite but an alteration of existing code -- I just added one layer above the existing pool implementation. I'm not sure about that; that code existed before I started working on it. One explanation could be that if duplicates are not allowed in config, we want to check user input and detect when the user tries to add the same MAC address twice. Yes, *this* can be done using a simple DB query. I'll check that out; I'm not sufficiently aware of the context to be able to say a confident "can be removed"/"must stay".
> iiuc, you keep in memory the unused-ranges of the various mac_pools.
> when a mac address is released, you need to check if it is in the range
> of the relevant mac_pool for the VM (default, dc, cluster, vm_pool).
> if it is, you need to return it to that mac_pool. otherwise, the
> mac_pool is not relevant for this out-of-range mac address, and you just
> stop using it.

Currently it works like this: you ask for the pool you want and get one (based on system config). You release (free) the MAC from this pool without any care for what type of MAC it is. The method returns 'true' if it was released (== the count of its usages reached zero, or it was not used at all). I think it does what you want, maybe with a little less client-code involvement. If the client code provided the wrong pool identification, or is releasing an unused MAC, then it's a coding error and all we can do is log it.
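In other words, the contract described here is roughly the following (a sketch with hypothetical names, not the actual freeMac implementation):

```java
import java.util.HashMap;
import java.util.Map;

// MAC usages are reference counted (duplicates may be allowed by config).
// freeMac decrements the count and reports true once the MAC is unused --
// including the case where it was never used at all.
class UsageCountedPool {
    private final Map<String, Integer> usages = new HashMap<>();

    void useMac(String mac) { usages.merge(mac, 1, Integer::sum); }

    boolean freeMac(String mac) {
        Integer count = usages.get(mac);
        if (count == null) return true;          // never used: nothing to release
        if (count == 1) { usages.remove(mac); return true; }
        usages.put(mac, count - 1);
        return false;                            // still referenced elsewhere
    }
}
```

The caller never distinguishes user-specified from system-allocated MACs; it only identifies the pool and releases.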
> remember, you have to check the released mac address for the specific
> associated mac_pool, since we do (read: should[1]) allow overlapping mac
> addresses (hence ranges) in different mac_pool.
there's no "free user specified mac address" method. There's only "freeMac" method. So the flow is like this: you identify pool somehow. By nic, for which you're releasing mac, by datacenter id, you name it. Then you release mac using freeMac method. If it was used, it'll be released; if it was used multiple times, usage count is decreased. I do not see how is overlapping with another pools related to that. You identified pool, freed mac from it, other pools remain intact. --- about cases you mentioned: I'll check whether those mac addresses, which were custom ones and after ranges alteration lies in the ranges of mac pool, those get marked as used in that pool. It should be true, but I rather write test for it. M. ----- Original Message ----- From: "Itamar Heim" <iheim@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: users@ovirt.org, devel@ovirt.org Sent: Wednesday, April 23, 2014 10:32:33 PM Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC On 04/23/2014 11:12 AM, Martin Mucha wrote:
Hi,
I was describing current state, first iteration. Need of restart is something which should not exist, I've removed that necessity meantime. Altered flow: You allocate mac address for nic in data center without own pool, it gets registered in global pool. Then you modify settings of that data center so that new pool is created for it. All NICs for that data center is queries from DB, it's macs released from global pool and added to data center scope pool. And other way around. When you delete this scoped pool, all its content will be moved to global pool. Feature page is updated.
Note: *previously* there was MAC placed in wrong pool only after modification of existing data center, which caused entirely new pool to be created (there wasn't pool for this scope, after modification there is). All other operations were fine. Now all manipulation with scoped pools should be ok.
Note2: all that scoped pool handling is implemented as strategy. If we are unsatisfied with this implementation we could create another one and switch to it without modifying 'calling' code. Also many implementation may coexist and we can switch between them (on app start up) upon config.
Question: When allocating MAC, not one specified by user, system picks available mac from given mac pool. Imagine, that after some time then mac pool ranges changes, and lets say that whole new interval of macs is used, not overlapping with former one. Then all previously allocated macs will be present in altered pool as a user specified ones -- since they are outside of defined ranges. With large number of this mac address this have detrimental effect on memory usage. So if this is a real scenario, it would be acceptable(or welcomed) for you to reassign all mac address which were selected by system? For example on engine start / vm start.
no. you don't change mac addresses on the fly. also, if the mac address isn't in the range of the scope, i don't see why you need to keep it in memory at all? iiuc, you keep in memory the unused-ranges of the various mac_pools. when a mac address is released, you need to check if it is in the range of the relevant mac_pool for the VM (default, dc, cluster, vm_pool). if it is, you need to return it to that mac_pool. otherwise, the mac_pool is not relevant for this out-of-range mac address, and you just stop using it. remember, you have to check the released mac address for the specific associated mac_pool, since we do (read: should[1]) allow overlapping mac addresses (hence ranges) in different mac_pool. so cases to consider: - mac_pool removed --> use the relevant mac_pool (say, the default one) for the below - mac_pool range extended - need to check if any affected VMs have mac addresses in the new range to not use them - mac_pool range reduced - just need to reduce it, unrelated to current vm's - mac_pool range changed all-together / new mac_pool defined affecting the VM (instead of the default one) - need to review all mac addresses in affected vm's to check if any are in the range and should be removed from the mac_pool ranges. the last 3 all are basically the same - on any change to mac_pool range just re-calculate the ranges in it by creating sub-ranges based on removing sorted groups/ranges of already allocated mac addresses? [1] iirc, we have a config allowing this today for manually configured mac addresses.
M.
----- Original Message ----- From: "Itamar Heim" <iheim@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: users@ovirt.org, devel@ovirt.org Sent: Tuesday, April 22, 2014 5:15:35 PM Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC
On 04/18/2014 01:17 PM, Martin Mucha wrote:
Hi,
I'll try to describe it little bit more. Lets say, that we've got one data center. It's not configured yet to have its own mac pool. So in system is only one, global pool. We create few VMs and it's NICs will obtain its MAC from this global pool, marking them as used. Next we alter data center definition, so now it uses it's own mac pool. In system from this point on exists two mac pools, one global and one related to this data center, but those allocated MACs are still allocated in global pool, since new data center creation does not (yet) contain logic to get all assigned MACs related to this data center and reassign them in new pool. However, after app restart all VmNics are read from db and placed to appropriate pools. Lets assume, that we've performed such restart. Now we realized, that we actually don't want that data center have own mac pool, so we alter it's definition removing mac pool ranges. Pool related to this data center will be removed and it's content will!
!
be moved t o a scope above this data center -- into global scope pool. We know, that everything what's allocated in pool to be removed is still used, but we need to track it elsewhere and currently there's just one option, global pool. So to answer your last question. When I remove scope, it's pool is gone and its content moved elsewhere. Next, when MAC is returned to the pool, the request goes like: "give me pool for this virtual machine, and whatever pool it is, I'm returning this MAC to it." Clients of ScopedMacPoolManager do not know which pool they're talking to. Decision, which pool is right for them, is done behind the scenes upon their identification (I want pool for this logical network).
Notice, that there is one "problem" in deciding which scope/pool to use. There are places in code, which requires pool related to given data center, identified by guid. For that request, only data center scope or something broader like global scope can be returned. So even if one want to use one pool per logical network, requests identified by data center id still can return only data center scope or broader, and there are no chance returning pool related to logical network (except for situation, where there is sole logical network in that data center).
Thanks for suggestion for another scopes. One question: if we're implementing them, would you like just to pick a *sole* non-global scope you want to use in your system (like data center related pools ONLY plus one global, or logical network related pools ONLY plus one global) or would it be (more) beneficial to you to have implemented some sort of cascading and overriding? Like: "this data center uses *this* pool, BUT except for *this* logical network, which should use *this* one instead."
I'll update feature page to contain these paragraphs.
I have to say i really don't like the notion of having to restart the engine for a change done via the webadmin to apply. also, iiuc your flow correctly, mac addresses may not go back to the pool anyway until an engine restart, since the change will only take effect on engine restart anyway, then available mac's per scope will be re-calculated.
M.
----- Original Message ----- From: "Itamar Heim" <iheim@redhat.com> To: "Martin Mucha" <mmucha@redhat.com>, users@ovirt.org, devel@ovirt.org Sent: Thursday, April 10, 2014 9:04:37 AM Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC (was: new feature)
On 04/10/2014 09:59 AM, Martin Mucha wrote:
Hi,
I'd like to notify you about new feature, which allows to specify distinct MAC pools, currently one per data center. http://www.ovirt.org/Scoped_MacPoolManager
any comments/proposals for improvement are very welcomed. Martin. _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
(changed title to reflect content)
When specified mac ranges for given "scope", where there wasn't any definition previously, allocated MAC from default pool will not be moved to "scoped" one until next engine restart. Other way, when removing "scoped" mac pool definition, all MACs from this pool will be moved to default one.
cna you please elaborate on this one?
as for potential other "scopes" - i can think of cluster, vm pool and logical network as potential ones.
one more question - how do you know to "return" the mac address to the correct pool on delete?

----- Original Message -----
From: "Martin Mucha" <mmucha@redhat.com> To: "Itamar Heim" <iheim@redhat.com> Cc: users@ovirt.org, devel@ovirt.org Sent: Thursday, April 24, 2014 12:58:37 PM Subject: Re: [ovirt-devel] [ovirt-users] Feature Page: Mac Pool per DC
As Itamar stated, if a custom MAC address was allocated out of range, then once that MAC address is released (by removing the VM, deleting its vNIC or by changing it to another MAC address), we don't need to preserve it anywhere in the system. Therefore it will not require any memory/management consideration. While in the previous implementation (before this feature) we were able to reach that situation only by providing a custom MAC address, with the new feature such a situation may also occur by modifying an existing range on the data-center level. For example, a user defines a data-center MAC range of 00:00-00:20 and allocates a MAC address of 00:15 (from the range) to a VM. Next the user reduces the range to 00:00-00:10, followed by removing that VM. MAC 00:15 is no longer in use, and it has no meaning any more from the data-center MAC scope's point of view.
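The release rule this describes can be stated in a few lines (a sketch, with hypothetical names; the real pool tracks free ranges rather than a flat set):

```java
import java.util.Set;
import java.util.TreeSet;

// A released MAC rejoins the pool's free set only if it lies inside the
// currently configured range; an out-of-range MAC (custom, or orphaned by
// a range reduction) needs no further tracking and is simply forgotten.
class RangedPool {
    final long rangeStart, rangeEnd;
    final Set<Long> free = new TreeSet<>();

    RangedPool(long start, long end) { rangeStart = start; rangeEnd = end; }

    // Returns true if the MAC was returned to the free set.
    boolean release(long mac) {
        if (mac < rangeStart || mac > rangeEnd) {
            return false; // out of range: nothing to track
        }
        return free.add(mac);
    }
}
```

In the 00:00-00:20 example above, releasing 00:15 after the range shrank to 00:00-00:10 would simply be a no-op.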
When the global pool is the only one in use, there was no option to add the same MAC address twice (blocked by AddVmInterface.canDoAction()). It doesn't look like that's the case with the new implementation, where each data-center scope has its own MAC storage. So this changes the previous behavior. Suppose a couple of data centers share the same physical network - it may lead to issues where a couple of VMs on the same network have the same MAC.
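To make the concern concrete: with per-scope pools, a duplicate check confined to one pool no longer catches the same MAC in another data center; a system-wide check would have to consult every pool. A sketch (hypothetical names, not engine code):

```java
import java.util.Collection;
import java.util.Set;

// Per-scope pools only know their own allocations, so a system-wide
// duplicate check must iterate over all of them.
class DuplicateCheck {
    static boolean macUsedAnywhere(String mac, Collection<Set<String>> allPools) {
        for (Set<String> pool : allPools) {
            if (pool.contains(mac)) {
                return true; // duplicate across scopes
            }
        }
        return false;
    }
}
```

Whether such a cross-scope check is wanted (as opposed to intentionally allowing overlap between isolated networks) is exactly the behavioral question raised here.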
--- about cases you mentioned: I'll check whether those mac addresses, which were custom obnes and after ranges alteration lies in the ranges of mac pool, those get marked as used in that pool. It should be true, but I rather write test for it.
M.
----- Original Message ----- From: "Itamar Heim" <iheim@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: users@ovirt.org, devel@ovirt.org Sent: Wednesday, April 23, 2014 10:32:33 PM Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC
On 04/23/2014 11:12 AM, Martin Mucha wrote:
Hi,
I was describing current state, first iteration. Need of restart is something which should not exist, I've removed that necessity meantime. Altered flow: You allocate mac address for nic in data center without own pool, it gets registered in global pool. Then you modify settings of that data center so that new pool is created for it. All NICs for that data center is queries from DB, it's macs released from global pool and added to data center scope pool. And other way around. When you delete this scoped pool, all its content will be moved to global pool. Feature page is updated.
Note: *previously* there was MAC placed in wrong pool only after modification of existing data center, which caused entirely new pool to be created (there wasn't pool for this scope, after modification there is). All other operations were fine. Now all manipulation with scoped pools should be ok.
Note2: all that scoped pool handling is implemented as strategy. If we are unsatisfied with this implementation we could create another one and switch to it without modifying 'calling' code. Also many implementation may coexist and we can switch between them (on app start up) upon config.
Question: When allocating MAC, not one specified by user, system picks available mac from given mac pool. Imagine, that after some time then mac pool ranges changes, and lets say that whole new interval of macs is used, not overlapping with former one. Then all previously allocated macs will be present in altered pool as a user specified ones -- since they are outside of defined ranges. With large number of this mac address this have detrimental effect on memory usage. So if this is a real scenario, it would be acceptable(or welcomed) for you to reassign all mac address which were selected by system? For example on engine start / vm start.
no. you don't change mac addresses on the fly. also, if the mac address isn't in the range of the scope, i don't see why you need to keep it in memory at all?
iiuc, you keep in memory the unused-ranges of the various mac_pools. when a mac address is released, you need to check if it is in the range of the relevant mac_pool for the VM (default, dc, cluster, vm_pool). if it is, you need to return it to that mac_pool. otherwise, the mac_pool is not relevant for this out-of-range mac address, and you just stop using it.
remember, you have to check the released mac address for the specific associated mac_pool, since we do (read: should[1]) allow overlapping mac addresses (hence ranges) in different mac_pool.
so cases to consider: - mac_pool removed --> use the relevant mac_pool (say, the default one) for the below - mac_pool range extended - need to check if any affected VMs have mac addresses in the new range to not use them - mac_pool range reduced - just need to reduce it, unrelated to current vm's - mac_pool range changed all-together / new mac_pool defined affecting the VM (instead of the default one) - need to review all mac addresses in affected vm's to check if any are in the range and should be removed from the mac_pool ranges.
the last 3 all are basically the same - on any change to mac_pool range just re-calculate the ranges in it by creating sub-ranges based on removing sorted groups/ranges of already allocated mac addresses?
[1] iirc, we have a config allowing this today for manually configured mac addresses.
M.
----- Original Message ----- From: "Itamar Heim" <iheim@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: users@ovirt.org, devel@ovirt.org Sent: Tuesday, April 22, 2014 5:15:35 PM Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC
On 04/18/2014 01:17 PM, Martin Mucha wrote:
Hi,
I'll try to describe it a little bit more. Let's say we've got one data center. It's not yet configured to have its own MAC pool, so there is only one, global pool in the system. We create a few VMs, and their NICs obtain their MACs from this global pool, marking them as used. Next we alter the data center definition so that it uses its own MAC pool. From this point on, two MAC pools exist in the system: one global and one related to this data center. But those allocated MACs are still allocated in the global pool, since data center creation does not (yet) contain logic to collect all assigned MACs related to the data center and reassign them in the new pool. However, after an app restart all VmNics are read from the DB and placed into the appropriate pools.

Let's assume we've performed such a restart. Now we realize that we actually don't want the data center to have its own MAC pool, so we alter its definition, removing the MAC pool ranges. The pool related to this data center will be removed and its content moved to the scope above it -- the global scope pool. We know that everything allocated in the pool to be removed is still in use, but we need to track it elsewhere, and currently there's just one option: the global pool.

So, to answer your last question: when I remove a scope, its pool is gone and its content is moved elsewhere. Later, when a MAC is returned to the pool, the request goes like: "give me the pool for this virtual machine, and whatever pool it is, I'm returning this MAC to it." Clients of ScopedMacPoolManager do not know which pool they're talking to. The decision which pool is right for them is made behind the scenes based on their identification (I want the pool for this logical network).
Notice that there is one "problem" in deciding which scope/pool to use. There are places in the code which require the pool related to a given data center, identified by GUID. For that request, only the data center scope or something broader, like the global scope, can be returned. So even if one wants to use one pool per logical network, requests identified by a data center id can still return only the data center scope or broader; there is no chance of returning a pool related to a logical network (except for the situation where there is a sole logical network in that data center).
Thanks for suggesting other scopes. One question: if we implement them, would you like to just pick a *sole* non-global scope to use in your system (like data center related pools ONLY plus one global, or logical network related pools ONLY plus one global), or would it be (more) beneficial to you to have some sort of cascading and overriding implemented? Like: "this data center uses *this* pool, BUT except for *this* logical network, which should use *this* one instead."

I'll update the feature page to contain these paragraphs.
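The scope-resolution behaviour described above - clients ask for "the pool for this data center" and get the scoped pool if one is defined, otherwise the global one, and a removed scope folds its allocations back into the global pool - could be sketched like this. This is a toy model, not the actual ScopedMacPoolManager; all names are invented.

```python
# Toy model of scoped MAC pool lookup with a global fallback.
class ScopedPoolManager:
    GLOBAL = "global"

    def __init__(self):
        self.pools = {self.GLOBAL: set()}  # scope id -> allocated MACs

    def define_scope(self, dc_id):
        self.pools.setdefault(dc_id, set())

    def remove_scope(self, dc_id):
        # On scope removal, its allocations move up to the global pool.
        self.pools[self.GLOBAL] |= self.pools.pop(dc_id, set())

    def pool_for(self, dc_id):
        # Callers never learn which pool they got.
        return self.pools.get(dc_id, self.pools[self.GLOBAL])

    def allocate(self, dc_id, mac):
        self.pool_for(dc_id).add(mac)

    def release(self, dc_id, mac):
        # "whatever pool it is, I'm returning this MAC to it"
        self.pool_for(dc_id).discard(mac)
```

This also illustrates why a MAC can always be "returned" correctly: the release goes through the same resolution as the allocation, so it lands in whichever pool currently serves that scope.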
I have to say I really don't like the notion of having to restart the engine for a change done via the webadmin to apply. Also, if I understand your flow correctly, MAC addresses may not go back to the pool anyway until an engine restart, since the change will only take effect on engine restart, at which point the available MACs per scope will be re-calculated.
M.
----- Original Message ----- From: "Itamar Heim" <iheim@redhat.com> To: "Martin Mucha" <mmucha@redhat.com>, users@ovirt.org, devel@ovirt.org Sent: Thursday, April 10, 2014 9:04:37 AM Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC (was: new feature)
On 04/10/2014 09:59 AM, Martin Mucha wrote:
Hi,
I'd like to notify you about new feature, which allows to specify distinct MAC pools, currently one per data center. http://www.ovirt.org/Scoped_MacPoolManager
any comments/proposals for improvement are very welcomed. Martin. _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
(changed title to reflect content)
When MAC ranges are specified for a given "scope" where there was no definition previously, MACs already allocated from the default pool will not be moved to the "scoped" one until the next engine restart. The other way around, when removing a "scoped" MAC pool definition, all MACs from this pool will be moved to the default one.
can you please elaborate on this one?
as for potential other "scopes" - i can think of cluster, vm pool and logical network as potential ones.
one more question - how do you know to "return" the mac address to the correct pool on delete?
Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel

I got a question regarding general MAC handling: can you use the same MACs on different datacenters? On 10.04.2014 08:59, Martin Mucha wrote:
Hi,
I'd like to notify you about new feature, which allows to specify distinct MAC pools, currently one per data center. http://www.ovirt.org/Scoped_MacPoolManager
any comments/proposals for improvement are very welcomed. Martin.
-- Mit freundlichen Grüßen / Regards Sven Kieske Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +49-5772-293-100 F: +49-5772-293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen

Hi, sorry for the late answer.

Currently, yes, you can. Formerly there was a sole pool for the whole app, with the possibility to allow/disallow duplicates (the config option is named "ALLOW_DUPLICATES"; one MAC being used multiple times or not). So there was a way to reach a situation where the same MACs are used among multiple data centers at the same time. This implementation of the MAC pool remained the same; only now there are potentially many of them, one per scope -- data center. So if you configure your data centers / global pool such that there is an overlap in MAC intervals, one MAC can be allocated multiple times, even if you've specified in the configuration that you disallow duplicates.

But I've already written some code detecting overlaps and fixing them in the context of one MAC pool. It can easily be refactored and used for the whole ScopedMacPoolManager, so that, if configured so, trying to add a new scope/datacenter with specified MAC pool ranges will fail if those ranges overlap with any other existing pool definition. Would that be beneficial to you? And if the answer is yes, can you describe your expectations/requests a little bit? Whether you want a possibility to change this behavior in the app configuration, whether the aforementioned ALLOW_DUPLICATES is sufficient for this configuration, or whether you'd like another option, ...

M.

----- Original Message ----- From: "Sven Kieske" <S.Kieske@mittwald.de> To: "Martin Mucha" <mmucha@redhat.com>, users@ovirt.org, devel@ovirt.org Sent: Thursday, April 10, 2014 11:51:55 AM Subject: Re: [ovirt-devel] new feature

I got a question regarding general MAC handling: can you use the same MACs on different datacenters? On 10.04.2014 08:59, Martin Mucha wrote:
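The overlap check Martin mentions - refusing to add a scope whose MAC ranges intersect a range already defined in another pool - might look roughly like this. An illustrative sketch with invented names, not the actual engine code.

```python
# Detect MAC range overlaps between a proposed scope and existing pools.
def mac_to_int(mac: str) -> int:
    return int(mac.replace(":", ""), 16)

def ranges_overlap(a, b):
    """True if inclusive MAC ranges a=(start, end) and b=(start, end) intersect."""
    return mac_to_int(a[0]) <= mac_to_int(b[1]) and mac_to_int(b[0]) <= mac_to_int(a[1])

def conflicts(new_ranges, existing_pools):
    """Return (pool_name, range) pairs from existing pools that clash with new_ranges."""
    return [(name, r)
            for name, ranges in existing_pools.items()
            for r in ranges
            for n in new_ranges
            if ranges_overlap(n, r)]
```

Adding the scope would be rejected whenever `conflicts(...)` is non-empty (assuming overlaps are configured to be disallowed).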

Martin,

I'd like to propose a different approach to how the ranges are defined and stored.

Discussing this feature with Moti raised an alternative UX design: defining ranges could be added as a left-tab on the create-DC dialog and a sub-tab on an existing DC. It would be a table of start and end address fields, and we could add a calculated # of MACs in the range and/or a summary for the DC. That would also make string parsing unneeded, prevent possible user mistakes in the string format, and make validating every field of the range on the UI side easier. As you can see on the screenshot you've attached, even a single range doesn't fit in the text box; managing multiple ranges in a single-line textbox would be very uncomfortable.

A range is an object with at least 2 members (start and end), and we have a few of these for each data center. Storing a collection of these objects in a single field in a relational DB seems a bit awkward to me. That has a few disadvantages:
1. it is not normalized
2. it makes data validation nearly impossible
3. it makes querying the data very difficult
4. it restrains our ability to extend the object (e.g. a user might like to give a description to a range)

So IMHO a satellite table with an FK to storage_pool would be a more robust design.

Best regards,
Yevgeny Zaspitsky
Senior Software Engineer
Red Hat Israel
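The satellite-table idea could be sketched as follows. sqlite3 is used here only to keep the example self-contained (the engine actually runs on PostgreSQL), and the table/column names are illustrative, not the real schema.

```python
# One row per MAC range, FK to storage_pool, instead of a parsed varchar blob.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE storage_pool (
        id   TEXT PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE mac_pool_range (
        id              INTEGER PRIMARY KEY,
        storage_pool_id TEXT NOT NULL REFERENCES storage_pool(id),
        range_start     TEXT NOT NULL,
        range_end       TEXT NOT NULL,
        description     TEXT            -- easy to extend, per point 4
    );
""")
conn.execute("INSERT INTO storage_pool VALUES ('dc-1', 'Default')")
conn.execute(
    "INSERT INTO mac_pool_range (storage_pool_id, range_start, range_end) "
    "VALUES ('dc-1', '00:1a:4a:00:00:00', '00:1a:4a:00:00:ff')")
ranges = conn.execute(
    "SELECT range_start, range_end FROM mac_pool_range "
    "WHERE storage_pool_id = 'dc-1'").fetchall()
```

Each range is then individually queryable and validatable, addressing points 1-4 above.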

----- Original Message -----
From: "Yevgeny Zaspitsky" <yzaspits@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: users@ovrit.org, devel@ovirt.org Sent: Sunday, April 27, 2014 2:29:46 PM Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
Martin,
Another major consideration is the API modelling. With the proposed design, the DataCenter element will be extended with a string-typed 'mac_pool_ranges' element which stores the DC MAC pool ranges, and in order to add/update/remove a range, the user will use the PUT method of the api/datacenters/{datacenter:id} API. I think a better approach (which complements the alternative) is having the data-center MAC pool ranges as a sub-collection of a specific data-center:

List all of the data-center ranges:
GET api/datacenters/{datacenter:id}/macpoolranges

Add a new range to a specific data-center:
POST api/datacenters/{datacenter:id}/macpoolranges
<mac_pool_range>
  <start>00:00:00:00:00:00</start>
  <end>00:00:00:00:00:AA</end>
</mac_pool_range>

Retrieve a specific range of a data-center:
GET api/datacenters/{datacenter:id}/macpoolranges/{macpoolrange:id}

Modify a specific data-center range:
PUT api/datacenters/{datacenter:id}/macpoolranges/{macpoolrange:id}
<mac_pool_range>
  <end>00:00:00:00:00:BB</end>
</mac_pool_range>

Remove a specific data-center's range:
DELETE api/datacenters/{datacenter:id}/macpoolranges/{macpoolrange:id}

Such a design will maintain the ability to extend the range, for example reporting the amount of allocated addresses in the range, adding a name/description and so on. Juan, can you share your thoughts about the above from a restapi pov?
Best regards, ____________________ Yevgeny Zaspitsky Senior Software Engineer Red Hat Israel

Now for users@ovirt.org indeed.

----- Original Message ----- From: "Yevgeny Zaspitsky" <yzaspits@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: users@ovrit.org, devel@ovirt.org Sent: Sunday, April 27, 2014 2:29:46 PM Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC

Hi, you're right, I do know about these problems. THIS IS DEFINITELY NOT FINAL CODE.

Why did I do it this way? I come from an agile environment. This is supposed to be the FIRST increment, not the last. I hate the waterfall style of work -- an almighty solution in one swing. I'd like to make sure that the main part, the core principle, is valid and approved. Making the GUI look nice is marginal, and so is the data structure, for a first increment. We can definitely think of thousands of improvements, BUT this RFC already includes more than 10 patch sets and NO core reviews. How can I know that others will approve this and that I'm not completely wrong?

About UX: it's wrong, but just fine for a first increment. It can be used somehow, and that's sufficient. Note: even with a table to enter each from-to range, there would still be validation problems to handle. The GUI can be changed to a better one once we know that this feature works. But in the meantime, others can test this feature's functionality via this ugly, but very fast to write, GUI!

About the DB: I'm aware of DB normalization and of all the implications my "design" has. Yes, storing it in one varchar column is a (very heavily used) DB antipattern, but it is just fine for a first increment and very easy to fix.

If it's up to me, I'd like to wait for approval of the 'core' part of this change (let's call it a spike) and finish the remaining 'marginalities' after it. (Just to make myself clear: proper DB design ISN'T marginal on an absolute scale, but it IS very marginal relative to a situation where most of the code hasn't been approved/reviewed yet.)

m.

----- Original Message ----- From: "Yevgeny Zaspitsky" <yzaspits@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: devel@ovirt.org, users@ovirt.org Sent: Sunday, April 27, 2014 2:22:04 PM Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC

Now for users@ovirt.org indeed.

----- Original Message -----
From: "Martin Mucha" <mmucha@redhat.com> To: "Yevgeny Zaspitsky" <yzaspits@redhat.com> Cc: users@ovirt.org, devel@ovirt.org Sent: Monday, April 28, 2014 9:14:38 AM Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
Hi, you're right, I do know about these problems. THIS IS DEFINITELY NOT A FINAL CODE.
Why I did it this way: I come from agile environment. This supposed to be FIRST increment. Not last. I hate waterfall style of work -- almighty solution in one swing. I'd like to make sure, that main part, that core principle is valid and approved. Making gui look nice is marginal. So it is data structure for first increment. We can definitely think of thousands of improvements, BUT this RFC already include more than 10 patch sets and NO core reviews. How can I know, that others will approve this and I'm not completely wrong?
about UX: it's wrong, but just fine for first increment. It can be used somehow and that just sufficient. Note: even with table to enter each from-to range there can be validation problem needed to be handled. Gui can changed to better one, when we know, that this feature works. But meantime others can test this feature functionality via this ugly, but very fast to write, gui!
about DB: I'm aware of DB normalization, and about all implications my "design" has. Yes, storing it in one varchar column is DB (very heavily used) antipattern, just fine for first increment and very easy to fix.
There is another motivation for using normalized data, specifically for MAC addresses: using the PostgreSQL macaddr type [1] will enforce validity of the input and will allow functionality such as comparison (which is required). [1] http://www.postgresql.org/docs/8.4/static/datatype-net-types.html
If it's up to me, I'd like to wait for approval of 'core' part of this change (lets call it spike), and finish remaining 'marginalities' after it. (just to make myself clear proper db design ISN'T marginal measuring it using absolute scale, but it IS very marginal related to situation where most of code wasn't approved/reviewed yet).
m.
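For illustration, the validation and ordering that a macaddr-style column provides can also be mirrored at the application layer along these lines. A hedged sketch, not engine code; the helper names are invented.

```python
# Validate colon-separated MACs and compare them as integers.
import re

MAC_RE = re.compile(r"^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$")

def parse_mac(mac: str) -> int:
    """Validate a colon-separated MAC and return it as a comparable integer."""
    if not MAC_RE.match(mac):
        raise ValueError(f"invalid MAC address: {mac!r}")
    return int(mac.replace(":", ""), 16)

def in_range(mac: str, start: str, end: str) -> bool:
    """Range membership -- the kind of comparison a search feature would need."""
    return parse_mac(start) <= parse_mac(mac) <= parse_mac(end)
```

Doing this in the DB type as well simply pushes the same guarantee down one more layer.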

Thanks for bringing up these datatypes, I was not aware of them. Are we allowed/supposed to use vendor-specific types where appropriate? Note: using this type will enforce validity, right, but that does not mean much from the perspective of the other code, since one is still obliged to do all the checking on all other app layers, to avoid calls from one layer to another with invalid data (calls to the backend are expensive; calls to the DB are even more expensive considering many users working simultaneously).

"and will allow functionality such as comparison (is required)."

Maybe I do not understand this. Which MAC range comparison is currently required and not possible? Either I don't get it, or I'm not aware of that use case.
m.

----- Original Message ----- From: "Moti Asayag" <masayag@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: "Yevgeny Zaspitsky" <yzaspits@redhat.com>, users@ovirt.org, devel@ovirt.org Sent: Monday, April 28, 2014 8:21:50 AM Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC

----- Original Message -----
From: "Martin Mucha" <mmucha@redhat.com> To: "Moti Asayag" <masayag@redhat.com> Cc: "Yevgeny Zaspitsky" <yzaspits@redhat.com>, users@ovirt.org, devel@ovirt.org Sent: Monday, April 28, 2014 9:38:11 AM Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
thanks for bringing up this datatypes, I was not aware of them.
Are we allowed/supposed to use vendor specific types if appropriate to? note: using this type will enforce a validity, right, but that does not mean that much (from other code perspective) since one's still obliged to do all checking on all other app layers avoiding calls from one layer to another with invalid data (calls to backend are expensive, call to db are even more expensive considering lot of users working simultaneously).
and will allow functionality such as comparison (is required). maybe I do not understand this. Which mac ranges comparison is currently required and not possible? Either I do not get it or I'm not aware of that use case.
If we plan at some point to support the search mechanism for MAC address ranges (the search box in the webadmin on top of the main tabs), that comparison would be required.
m.
----- Original Message ----- From: "Moti Asayag" <masayag@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: "Yevgeny Zaspitsky" <yzaspits@redhat.com>, users@ovirt.org, devel@ovirt.org Sent: Monday, April 28, 2014 8:21:50 AM Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
----- Original Message -----
From: "Martin Mucha" <mmucha@redhat.com> To: "Yevgeny Zaspitsky" <yzaspits@redhat.com> Cc: users@ovirt.org, devel@ovirt.org Sent: Monday, April 28, 2014 9:14:38 AM Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
Hi, you're right, I do know about these problems. THIS IS DEFINITELY NOT A FINAL CODE.
Why I did it this way: I come from agile environment. This supposed to be FIRST increment. Not last. I hate waterfall style of work -- almighty solution in one swing. I'd like to make sure, that main part, that core principle is valid and approved. Making gui look nice is marginal. So it is data structure for first increment. We can definitely think of thousands of improvements, BUT this RFC already include more than 10 patch sets and NO core reviews. How can I know, that others will approve this and I'm not completely wrong?
about UX: it's wrong, but just fine for first increment. It can be used somehow and that just sufficient. Note: even with table to enter each from-to range there can be validation problem needed to be handled. Gui can changed to better one, when we know, that this feature works. But meantime others can test this feature functionality via this ugly, but very fast to write, gui!
about DB: I'm aware of DB normalization, and about all implications my "design" has. Yes, storing it in one varchar column is DB (very heavily used) antipattern, just fine for first increment and very easy to fix.
There is another motivation for using normalized data, specifically for MAC addresses: using the PostgreSQL MAC address type [1] will enforce validity of the input and will allow functionality such as comparison (is required).
[1] http://www.postgresql.org/docs/8.4/static/datatype-net-types.html
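On the application side, the same ordering that the PostgreSQL macaddr column type provides in the database can be obtained by interpreting addresses as 48-bit integers. A minimal, hypothetical Java sketch (names are illustrative, not oVirt engine code):

```java
import java.util.Comparator;

// Hypothetical sketch: comparing MAC addresses by interpreting them as
// 48-bit unsigned integers, analogous to the database-side ordering that
// the PostgreSQL macaddr type gives for free.
public final class MacCompare {

    // Parse colon-separated notation ("00:00:00:00:00:0a") into a long.
    static long macToLong(String mac) {
        return Long.parseLong(mac.replace(":", ""), 16);
    }

    // Natural numeric ordering of MAC addresses.
    static final Comparator<String> MAC_ORDER =
            Comparator.comparingLong(MacCompare::macToLong);

    public static void main(String[] args) {
        // 0x0a (10) sorts before 0x10 (16), so the result is negative.
        System.out.println(MAC_ORDER.compare("00:00:00:00:00:0a",
                                             "00:00:00:00:00:10"));
    }
}
```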
If it were up to me, I'd wait for approval of the 'core' part of this change (let's call it a spike) and finish the remaining 'marginalities' after that. (Just to make myself clear: a proper DB design ISN'T marginal on an absolute scale, but it IS very marginal relative to a situation where most of the code hasn't been approved/reviewed yet.)
m.
----- Original Message ----- From: "Yevgeny Zaspitsky" <yzaspits@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: devel@ovirt.org, users@ovirt.org Sent: Sunday, April 27, 2014 2:22:04 PM Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
Now for users@ovirt.org indeed.
----- Original Message ----- From: "Yevgeny Zaspitsky" <yzaspits@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: users@ovrit.org, devel@ovirt.org Sent: Sunday, April 27, 2014 2:29:46 PM Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
Martin,
I'd like to propose a different approach to how the ranges are defined and stored.
Discussing this feature with Moti raised an alternative UX design: defining ranges could be added as a left-hand tab on the create-DC dialog and as a sub-tab on an existing DC. It would be a table of start- and end-address fields, and we could add a calculated number of MACs per range and/or a summary for the DC. That would also make string parsing unnecessary, prevent possible user mistakes in the string format, and make it easier to validate every field of the range on the UI side. As you can see in the screenshot you attached, even a single range doesn't fit into the text box; managing multiple ranges in a single-line textbox would be very uncomfortable.
A range is an object with at least two members (start and end), and we have a few of these for each data center. Storing a collection of such objects in a single field in a relational DB seems awkward to me. It has several disadvantages: 1. it is not normalized; 2. it makes data validation nearly impossible; 3. it makes querying the data very difficult; 4. it restrains our ability to extend the object (e.g. a user might like to give a description to a range). So IMHO a satellite table with an FK to storage_pool would be a more robust design.
Best regards, ____________________ Yevgeny Zaspitsky Senior Software Engineer Red Hat Israel
----- Original Message ----- From: "Martin Mucha" <mmucha@redhat.com> To: users@ovirt.org, devel@ovirt.org Sent: Thursday, April 10, 2014 9:59:44 AM Subject: [ovirt-devel] new feature
Hi,
I'd like to notify you about a new feature which allows specifying distinct MAC pools, currently one per data center. http://www.ovirt.org/Scoped_MacPoolManager
any comments/proposals for improvement are very welcome. Martin. _______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel

Hi, we would like to propose a slightly better solution from the user-experience side. We should have three fields for each range: 1) Start range, 2) End range, 3) Number of MACs. "Start range" is mandatory, and you fill in either "End range" or "Number of MACs"; the third field is then filled in automatically from the other two. For example: 1) if "Start range" is 00:00:00:00:00:01 and "Number of MACs" is 5, then "End range" should be filled in with 00:00:00:00:00:05; 2) if "Start range" is 00:00:00:00:00:01 and "End range" is 00:00:00:00:00:05, then "Number of MACs" should be filled in with 5. On update the fields should stay in sync as well: if you update "End range", then "Number of MACs" should be updated, and vice versa. For adding several MAC pool ranges to a DC we can use the "+"/"-" signs, as we do for the VNIC profile or Labels fields.
Regards, Genadi
----- Original Message ----- From: "Moti Asayag" <masayag@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: devel@ovirt.org, users@ovirt.org Sent: Monday, April 28, 2014 9:21:50 AM Subject: Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC
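The auto-fill behaviour described above is simple arithmetic once a MAC address is treated as a 48-bit integer. A minimal, hypothetical Java sketch (illustrative only, not oVirt engine code; the helper names are invented):

```java
// Hypothetical helper showing how "End range" and "Number of MACs" could
// each be derived from the other two fields by treating a MAC address as
// a 48-bit unsigned integer.
public final class MacRangeMath {

    // Parse "00:00:00:00:00:01" into a long.
    static long macToLong(String mac) {
        return Long.parseLong(mac.replace(":", ""), 16);
    }

    // Format a long back into colon-separated MAC notation.
    static String longToMac(long value) {
        StringBuilder sb = new StringBuilder(17);
        for (int shift = 40; shift >= 0; shift -= 8) {
            if (sb.length() > 0) sb.append(':');
            sb.append(String.format("%02x", (value >> shift) & 0xff));
        }
        return sb.toString();
    }

    // End range derived from start + count (the count includes the start).
    static String endOf(String start, long numberOfMacs) {
        return longToMac(macToLong(start) + numberOfMacs - 1);
    }

    // Count derived from start and end, inclusive on both sides.
    static long countOf(String start, String end) {
        return macToLong(end) - macToLong(start) + 1;
    }

    public static void main(String[] args) {
        System.out.println(endOf("00:00:00:00:00:01", 5));                     // 00:00:00:00:00:05
        System.out.println(countOf("00:00:00:00:00:01", "00:00:00:00:00:05")); // 5
    }
}
```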

Hi, thanks for your input. I'll try to satisfy your request to be able to set the range 'width' either by an 'end boundary' or by a 'MAC count' in the GUI design. Prior to that there are more fundamental decisions to be made -- like whether the pool definition is mandatory or optional, and how this influences the app for upgrading users. I'm pushing the idea of an optional definition with one global pool as a fallback. And as I said in previous emails, from this point of view the GUI design is marginal, since we do not yet know exactly what should be displayed there (the GUI will be a little different for an optional pool definition). This is to be decided this week; after that we can discuss the final design of the GUI.
m.
----- Original Message ----- From: "Genadi Chereshnya" <gcheresh@redhat.com> To: "Moti Asayag" <masayag@redhat.com> Cc: devel@ovirt.org, users@ovirt.org, "Martin Mucha" <mmucha@redhat.com>, "Martin Pavlik" <mpavlik@redhat.com> Sent: Monday, April 28, 2014 8:47:11 AM Subject: Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC
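The "optional definition with one global pool as a fallback" resolution could be sketched roughly as follows. This is a hypothetical illustration of the lookup, not the actual ScopedMacPoolManager code; all names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a DC with its own MAC pool definition resolves to
// that pool; a DC without one falls back to the single global pool, so
// defining a per-DC pool stays entirely optional.
public final class ScopedPools {
    private final Map<String, String> poolByDataCenter = new HashMap<>();
    private static final String GLOBAL_POOL = "global";

    // Register a dedicated pool for a data center.
    void definePool(String dcId, String poolId) {
        poolByDataCenter.put(dcId, poolId);
    }

    // Callers ask for "the pool for this DC" and never know which scope answers.
    String poolFor(String dcId) {
        return poolByDataCenter.getOrDefault(dcId, GLOBAL_POOL);
    }

    public static void main(String[] args) {
        ScopedPools pools = new ScopedPools();
        System.out.println(pools.poolFor("dc-1")); // no definition yet -> global
        pools.definePool("dc-1", "pool-a");
        System.out.println(pools.poolFor("dc-1")); // now resolves to pool-a
    }
}
```

This mirrors the behaviour described earlier in the thread: clients request a pool by identification (here a DC id) and the decision of which pool serves them happens behind the scenes.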

1) In our opinion the pool definition should be optional; we should preserve the existing behavior. That will be useful especially for upgrade scenarios. 2) Also, for the "Number of MACs" field we proposed earlier, you should take multicast addresses into account (if they fall within the range) and subtract them from the "Number of MACs" count.
Genadi
----- Original Message ----- From: "Martin Mucha" <mmucha@redhat.com> To: "Genadi Chereshnya" <gcheresh@redhat.com> Cc: "Moti Asayag" <masayag@redhat.com>, devel@ovirt.org, users@ovirt.org, "Martin Pavlik" <mpavlik@redhat.com> Sent: Monday, April 28, 2014 9:59:06 AM Subject: Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

ad 1) My thinking was the same. If it's optional, then the upgrade process is "you do not have to do anything", which seemed best to me. ad 2) Yes, this has to be reflected in the GUI. Currently there are checks in the business layer which do not let you use a multicast address (an exception is thrown on such an attempt -- this is appropriate from the MAC pool perspective). When the user specifies MAC ranges containing a multicast address, that address is present in the pool (due to an implementation restriction), but it is flagged as used, so the system never assigns it. And if the user tried to assign it by hand, it would fail, as I said.
m.
----- Original Message ----- From: "Genadi Chereshnya" <gcheresh@redhat.com> To: "Martin Mucha" <mmucha@redhat.com> Cc: "Moti Asayag" <masayag@redhat.com>, devel@ovirt.org, users@ovirt.org, "Martin Pavlik" <mpavlik@redhat.com> Sent: Monday, April 28, 2014 10:12:06 AM Subject: Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC
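The multicast check discussed in the last two messages follows from IEEE 802 addressing: an address is multicast when the least-significant bit of its first octet is set. A hypothetical Java sketch (not the engine's actual validation code):

```java
// Hypothetical sketch: per IEEE 802, the least-significant bit of the
// first octet of a MAC address distinguishes multicast (1) from unicast (0).
// A range's usable size would exclude such addresses from the raw count.
public final class MacMulticast {

    static boolean isMulticast(String mac) {
        int firstOctet = Integer.parseInt(mac.substring(0, 2), 16);
        return (firstOctet & 0x01) != 0;
    }

    public static void main(String[] args) {
        System.out.println(isMulticast("00:1a:4a:00:00:01")); // false: unicast
        System.out.println(isMulticast("01:00:5e:00:00:01")); // true: multicast
    }
}
```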
participants (6)
- Genadi Chereshnya
- Itamar Heim
- Martin Mucha
- Moti Asayag
- Sven Kieske
- Yevgeny Zaspitsky