[ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

Moti Asayag masayag at redhat.com
Sat Apr 26 19:40:33 UTC 2014



----- Original Message -----
> From: "Martin Mucha" <mmucha at redhat.com>
> To: "Itamar Heim" <iheim at redhat.com>
> Cc: users at ovirt.org, devel at ovirt.org
> Sent: Thursday, April 24, 2014 12:58:37 PM
> Subject: Re: [ovirt-devel] [ovirt-users] Feature Page: Mac Pool per DC
> 
> >no. you don't change mac addresses on the fly.
> ok, I was just asking if that's an option. No reallocating.
> 
> >i don't see why you need to keep it in memory at all?
> What I did is not a rewrite, but an alteration of existing code -- I just added
> one layer above the existing pool implementation. I'm not sure about that; that
> code existed before I started working on it. One explanation could be that if
> duplicates are not allowed in the config, we want to check user input and detect
> when someone tries to add the same mac address twice. Yes, *this* can be done using
> a simple db query. I'll check that out; I'm not sufficiently aware of the context
> to be able to say with confidence "can be removed"/"must stay".

As Itamar stated, if a custom mac address was allocated out-of-range, once that
mac address is released (by removing the vm, deleting its vnic or by changing it
to another mac address), we don't need to preserve it anywhere in the system.
Therefore it does not require any memory/management consideration.

While in the previous implementation (before this feature) we could reach that
situation only by providing a custom mac address, with the new feature such a
situation may also occur by modifying an existing range on the data-center level.

For example, a user defines a data-center mac range of 00:00-00:20 and allocates
a mac address of 00:15 (from the range) to a vm.
Next the user reduces the range to 00:00-00:10 and then removes that vm.
Mac 00:15 is no longer in use, but it has no meaning any more from
the data-center mac scope point of view.
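
To make that release behavior concrete, here is a minimal sketch (all names
hypothetical, not the actual engine code): a pool takes a released mac back only
if it falls inside its configured range; anything else is simply forgotten.

    import java.util.NavigableSet;
    import java.util.TreeSet;

    // Hypothetical sketch, not the oVirt engine code.
    class MacRange {
        final long from;
        final long to; // inclusive bounds
        MacRange(long from, long to) { this.from = from; this.to = to; }
        boolean contains(long mac) { return mac >= from && mac <= to; }
    }

    class RangeBoundPool {
        private final MacRange range;
        private final NavigableSet<Long> free = new TreeSet<>();

        RangeBoundPool(MacRange range) {
            this.range = range;
            // Pre-filling is fine for the small example ranges used here.
            for (long mac = range.from; mac <= range.to; mac++) {
                free.add(mac);
            }
        }

        // Hands out the lowest free mac; null when exhausted.
        Long allocate() { return free.pollFirst(); }

        // An out-of-range mac (custom, or orphaned by a range change)
        // is not preserved anywhere in the system -- it just stops
        // being used, exactly like the 00:15 example above.
        void release(long mac) {
            if (range.contains(mac)) {
                free.add(mac);
            }
        }
    }

With the example above, a pool reduced to 00:00-00:10 simply ignores release(0x15).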

> 
> >iiuc, you keep in memory the unused-ranges of the various mac_pools.
> >when a mac address is released, you need to check if it is in the range
> >of the relevant mac_pool for the VM (default, dc, cluster, vm_pool).
> >if it is, you need to return it to that mac_pool. otherwise, the
> >mac_pool is not relevant for this out-of-range mac address, and you just
> >stop using it.
> 
> currently it works like this: you identify the pool you want and get one (based
> on system config). You release (free) the mac from this pool without any care
> what type of mac it is. The method returns 'true' if it was released (== the count
> of its usages reached zero or it was not used at all). I think it does what
> you want, maybe with a little less client code involvement. If the client code
> provided the wrong pool identification or released a mac that is not in use,
> then it's a coding error and all we can do is log it.
> 
> >remember, you have to check the released mac address for the specific
> >associated mac_pool, since we do (read: should[1]) allow overlapping mac
> >addresses (hence ranges) in different mac_pool.
> 
> there's no "free user specified mac address" method. There's only a "freeMac"
> method. So the flow is like this: you identify the pool somehow -- by the nic for
> which you're releasing the mac, by data center id, you name it. Then you release
> the mac using the freeMac method. If it was used, it'll be released; if it was used
> multiple times, the usage count is decreased. I do not see how overlapping
> with other pools is related to that. You identified a pool, freed the mac from it,
> and other pools remain intact.
> 

When the global pool was the only one in use, there was no option to add the
same mac address twice (blocked by AddVmInterface.canDoAction()).
That doesn't look like the case with the new implementation, where each
data-center scope has its own mac storage. So this changes the previous behavior.
Suppose a couple of data centers share the same physical network - it may lead to
issues where several vms on the same network have the same mac.
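
To keep the old guarantee on a shared physical network, the duplicate check
would have to span every pool on that network, roughly like the following
sketch (reusing the UsageCountedPool sketch above; this is not the actual
canDoAction code):

    import java.util.Collection;

    // Sketch: with per-data-center pools, uniqueness within one pool no
    // longer implies uniqueness on a physical network shared by several
    // data centers, so the validation must consult all pools on it.
    class SharedNetworkMacValidator {
        static boolean isMacInUse(long mac, Collection<UsageCountedPool> poolsOnNetwork) {
            for (UsageCountedPool pool : poolsOnNetwork) {
                if (pool.isUsed(mac)) {
                    return true;
                }
            }
            return false;
        }
    }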

> ---
> about the cases you mentioned:
> I'll check whether those mac addresses which were custom ones and which, after
> the range alteration, lie within the ranges of the mac pool get marked as used
> in that pool. It should be true, but I'd rather write a test for it.
> 
> M.
> 
> ----- Original Message -----
> From: "Itamar Heim" <iheim at redhat.com>
> To: "Martin Mucha" <mmucha at redhat.com>
> Cc: users at ovirt.org, devel at ovirt.org
> Sent: Wednesday, April 23, 2014 10:32:33 PM
> Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC
> 
> On 04/23/2014 11:12 AM, Martin Mucha wrote:
> > Hi,
> >
> > I was describing the current state, the first iteration. The need for a restart
> > is something which should not exist; I've removed that necessity in the meantime.
> > Altered flow: you allocate a mac address for a nic in a data center without its
> > own pool, and it gets registered in the global pool. Then you modify the settings
> > of that data center so that a new pool is created for it. All NICs for that data
> > center are queried from the DB, their macs released from the global pool and added
> > to the data-center-scoped pool. And the other way around: when you delete this
> > scoped pool, all its content will be moved to the global pool. The feature page
> > is updated.
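
Conceptually the migration described here is something like the following
sketch (hypothetical names, reusing the UsageCountedPool sketch above; not the
actual patch):

    import java.util.List;

    // Sketch: when a data center gains its own pool, the macs of its nics
    // move from the global pool into the new one; deleting the scoped
    // pool runs the same move in the opposite direction.
    class PoolMigration {
        static void moveMacs(List<Long> dcNicMacs, UsageCountedPool from, UsageCountedPool to) {
            for (long mac : dcNicMacs) {
                from.freeMac(mac);
                to.useMac(mac);
            }
        }
    }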
> >
> > Note: *previously* a MAC was placed in the wrong pool only after a
> > modification of an existing data center which caused an entirely new pool to
> > be created (there wasn't a pool for this scope; after the modification there
> > is). All other operations were fine. Now all manipulations of scoped pools
> > should be ok.
> >
> > Note2: all the scoped pool handling is implemented as a strategy. If we are
> > unsatisfied with this implementation we can create another one and
> > switch to it without modifying the 'calling' code. Also, many implementations
> > may coexist and we can switch between them (on app start-up) via config.
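
The shape of that strategy could look like this sketch (a hypothetical
interface, not the actual ScopedMacPoolManager API; it reuses the
UsageCountedPool sketch above):

    import java.util.UUID;

    // Sketch of the strategy seam: callers ask for "the pool for this
    // scope" and never learn which concrete pool they got. Swapping the
    // implementation (chosen from config at app start-up) changes the
    // scoping policy without touching any calling code.
    interface MacPoolStrategy {
        UsageCountedPool poolForDataCenter(UUID dataCenterId);
        UsageCountedPool globalPool();
    }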
> >
> > Question: when allocating a MAC, not one specified by the user, the system
> > picks an available mac from the given mac pool. Imagine that after some time
> > the mac pool ranges change, and let's say that a whole new interval of macs is
> > used, not overlapping with the former one. Then all previously allocated macs
> > will be present in the altered pool as user-specified ones -- since they are
> > outside of the defined ranges. With a large number of such mac addresses this
> > has a detrimental effect on memory usage. So if this is a real scenario, would
> > it be acceptable (or welcome) for you to reassign all mac addresses which
> > were selected by the system? For example on engine start / vm start.
> 
> no. you don't change mac addresses on the fly.
> also, if the mac address isn't in the range of the scope, i don't see
> why you need to keep it in memory at all?
> 
> iiuc, you keep in memory the unused-ranges of the various mac_pools.
> when a mac address is released, you need to check if it is in the range
> of the relevant mac_pool for the VM (default, dc, cluster, vm_pool).
> if it is, you need to return it to that mac_pool. otherwise, the
> mac_pool is not relevant for this out-of-range mac address, and you just
> stop using it.
> 
> remember, you have to check the released mac address for the specific
> associated mac_pool, since we do (read: should[1]) allow overlapping mac
> addresses (hence ranges) in different mac_pool.
> 
> so cases to consider:
> - mac_pool removed --> use the relevant mac_pool (say, the default one)
>    for the below
> - mac_pool range extended - need to check if any affected VMs have mac
>    addresses in the new range to not use them
> - mac_pool range reduced - just need to reduce it, unrelated to current
>    vm's
> - mac_pool range changed all-together / new mac_pool defined affecting
>    the VM (instead of the default one) - need to review all mac
>    addresses in affected vm's to check if any are in the range and
>    should be removed from the mac_pool ranges.
> 
> the last 3 all are basically the same - on any change to mac_pool range
> just re-calculate the ranges in it by creating sub-ranges based on
> removing sorted groups/ranges of already allocated mac addresses?
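
That recalculation is straightforward to sketch: subtract the sorted,
already-allocated macs from the configured range and keep the gaps as free
sub-ranges (hypothetical code, macs represented as longs):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: given one configured range and the allocated macs inside it
    // (sorted ascending), produce the free sub-ranges between them.
    class RangeRecalculation {
        static List<long[]> freeSubRanges(long from, long to, List<Long> sortedAllocated) {
            List<long[]> free = new ArrayList<>();
            long cursor = from;
            for (long mac : sortedAllocated) {
                if (mac < cursor || mac > to) {
                    continue;               // out-of-range macs are simply ignored
                }
                if (mac > cursor) {
                    free.add(new long[]{cursor, mac - 1});
                }
                cursor = mac + 1;
            }
            if (cursor <= to) {
                free.add(new long[]{cursor, to});
            }
            return free;
        }
    }

For a range 00:00-00:20 with 00:15 already allocated, this yields the free
sub-ranges [00:00-00:14] and [00:16-00:20].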
> 
> 
> [1] iirc, we have a config allowing this today for manually configured
> mac addresses.
> 
> 
> >
> > M.
> >
> > ----- Original Message -----
> > From: "Itamar Heim" <iheim at redhat.com>
> > To: "Martin Mucha" <mmucha at redhat.com>
> > Cc: users at ovirt.org, devel at ovirt.org
> > Sent: Tuesday, April 22, 2014 5:15:35 PM
> > Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC
> >
> > On 04/18/2014 01:17 PM, Martin Mucha wrote:
> >> Hi,
> >>
> >> I'll try to describe it a little bit more. Let's say that we've got one data
> >> center. It's not configured yet to have its own mac pool. So in the system
> >> there is only one, global pool. We create a few VMs and their NICs will obtain
> >> their MACs from this global pool, marking them as used. Next we alter the data
> >> center definition, so now it uses its own mac pool. From this point on two mac
> >> pools exist in the system, one global and one related to this data center, but
> >> those allocated MACs are still allocated in the global pool, since new data
> >> center creation does not (yet) contain the logic to get all assigned MACs
> >> related to this data center and reassign them in the new pool. However, after
> >> an app restart all VmNics are read from the db and placed into the appropriate
> >> pools. Let's assume that we've performed such a restart. Now we realize that
> >> we actually don't want that data center to have its own mac pool, so we alter
> >> its definition, removing the mac pool ranges. The pool related to this data
> >> center will be removed and its content will be moved to a scope above this
> >> data center -- into the global scope pool. We know that everything allocated
> >> in the pool to be removed is still used, but we need to track it elsewhere and
> >> currently there's just one option, the global pool. So to answer your last
> >> question: when I remove a scope, its pool is gone and its content is moved
> >> elsewhere. Next, when a MAC is returned to the pool, the request goes like:
> >> "give me the pool for this virtual machine, and whatever pool it is, I'm
> >> returning this MAC to it." Clients of ScopedMacPoolManager do not know which
> >> pool they're talking to. The decision which pool is right for them is made
> >> behind the scenes based on their identification (I want the pool for this
> >> logical network).
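
The caller-side flow described here then reduces to something like this
fragment (reusing the MacPoolStrategy and UsageCountedPool sketches above; all
names hypothetical):

    import java.util.UUID;

    // Sketch: the client only identifies its scope; whichever pool
    // resolves behind the scenes is the one the mac is returned to.
    class ReleaseFlow {
        static void returnMac(MacPoolStrategy strategy, UUID dataCenterId, long mac) {
            UsageCountedPool pool = strategy.poolForDataCenter(dataCenterId);
            pool.freeMac(mac);
        }
    }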
> >>
> >> Notice that there is one "problem" in deciding which scope/pool to use.
> >> There are places in the code which require the pool related to a given data
> >> center, identified by guid. For that request, only the data center scope or
> >> something broader like the global scope can be returned. So even if one wants
> >> to use one pool per logical network, requests identified by data center
> >> id can still return only the data center scope or broader, and there is no
> >> chance of returning the pool related to a logical network (except for the
> >> situation where there is a sole logical network in that data center).
> >>
> >> Thanks for the suggestion of other scopes. One question: if we're
> >> implementing them, would you like just to pick a *sole* non-global scope
> >> to use in your system (like data-center-related pools ONLY plus
> >> one global, or logical-network-related pools ONLY plus one global), or
> >> would it be (more) beneficial to you to have some sort of
> >> cascading and overriding implemented? Like: "this data center uses *this*
> >> pool, except for *this* logical network, which should use *this* one instead."
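
The cascading/overriding variant asked about here could resolve
most-specific-first, along these lines (a sketch reusing the UsageCountedPool
above; all names hypothetical):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;

    // Sketch of cascading scopes: a logical-network override wins over
    // the data-center pool, which in turn falls back to the global pool.
    class CascadingResolver {
        private final Map<UUID, UsageCountedPool> byLogicalNetwork = new HashMap<>();
        private final Map<UUID, UsageCountedPool> byDataCenter = new HashMap<>();
        private final UsageCountedPool global;

        CascadingResolver(UsageCountedPool global) { this.global = global; }

        UsageCountedPool resolve(UUID logicalNetworkId, UUID dataCenterId) {
            UsageCountedPool pool = byLogicalNetwork.get(logicalNetworkId);
            if (pool == null) {
                pool = byDataCenter.get(dataCenterId);
            }
            return pool != null ? pool : global;
        }
    }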
> >>
> >> I'll update feature page to contain these paragraphs.
> >
> > I have to say i really don't like the notion of having to restart the
> > engine for a change done via the webadmin to apply.
> > also, if i understand your flow correctly, mac addresses may not go back to
> > the pool until an engine restart, since the change will only take effect on
> > engine restart anyway, and then the available macs per scope will be
> > re-calculated.
> >
> >
> >
> >>
> >> M.
> >>
> >>
> >> ----- Original Message -----
> >> From: "Itamar Heim" <iheim at redhat.com>
> >> To: "Martin Mucha" <mmucha at redhat.com>, users at ovirt.org, devel at ovirt.org
> >> Sent: Thursday, April 10, 2014 9:04:37 AM
> >> Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC (was: new
> >> feature)
> >>
> >> On 04/10/2014 09:59 AM, Martin Mucha wrote:
> >>> Hi,
> >>>
> >>> I'd like to notify you about a new feature which allows specifying
> >>> distinct MAC pools, currently one per data center.
> >>> http://www.ovirt.org/Scoped_MacPoolManager
> >>>
> >>> any comments/proposals for improvement are very welcomed.
> >>> Martin.
> >>> _______________________________________________
> >>> Users mailing list
> >>> Users at ovirt.org
> >>> http://lists.ovirt.org/mailman/listinfo/users
> >>>
> >>
> >>
> >> (changed title to reflect content)
> >>
> >>> When mac ranges are specified for a given "scope" where there wasn't any
> >>> definition previously, MACs allocated from the default pool will not be
> >>> moved to the "scoped" one until the next engine restart. The other way
> >>> around, when removing a "scoped" mac pool definition, all MACs from this
> >>> pool will be moved to the default one.
> >>
> >> can you please elaborate on this one?
> >>
> >> as for potential other "scopes" - i can think of cluster, vm pool and
> >> logical network as potential ones.
> >>
> >> one more question - how do you know to "return" the mac address to the
> >> correct pool on delete?
> >>
> _______________________________________________
> Devel mailing list
> Devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
> 


