Caching of data from the database done properly

Hi Everyone,

I wanted to discuss a practice which seems to be pretty common in the engine and which I find very limiting, dangerous, and for some things even a blocker. There are several places in the engine where we use maps as caches in singletons to avoid reloading data from the database. Two prominent ones are the QuotaManager [1] and the MacPoolPerCluster [2]. While it looks tempting to just use a map as a cache, add some locks around it and create an injectable singleton, this has some drawbacks:

1) We have an authoritative source for our data, and it offers transactions to take care of inconsistencies and parallel updates. Doing all of that again in a service duplicates this.
2) Caching on the service layer is definitely not a good idea. It can introduce unwanted side effects when someone invokes the DAOs directly.
3) This point is more about whether a cache is really needed: do I just want that cache because I find it convenient to call #getMacPoolForCluster(Guid clusterId) in a loop instead of loading it once before the loop, or do my usage requirements really force me to use a cache?

If you really need a cache, consider the following (a rough sketch follows at the end of this mail):

1) Do the caching on the DAO layer. This guarantees the best consistency across the data.
2) Yes, this means either locking in the DAOs or a transactional cache. But before you complain, think about what is done in [1] and [2]. We do exactly that there, so the complexity is already introduced anyway.
3) Since we are working with transactions, a custom cache should NEVER cache writes (really just talking about our use case here). This makes checks for existing IDs before adding an entity, or similar checks, unnecessary; don't duplicate constraint checks like in [2].
4) There should always be a way to disable the cache (even if it is just for testing).
5) If I can't convince you to move the cache to the DAO layer, still add a way to disable the cache.

For as long as there is no general caching solution with something like ehcache or infinispan, in my eyes such small things matter a lot for keeping a project maintainable.

Those are some of the best practices I have seen around caching database data. It would be great if we could agree on something like that. Maybe there is already an agreement and I am just not aware of it.

Looking forward to hearing your feedback.

Roman

[1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bl...
[2] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bl...
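A rough sketch of the DAO-level cache meant in points 1), 3) and 4) above; all class and configuration names are made up for illustration, only the pattern matters:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.inject.Inject;
    import javax.inject.Singleton;
    import org.ovirt.engine.core.compat.Guid;

    // Hypothetical decorator around the real, DB-backed DAO. Reads are cached,
    // writes pass straight through and only invalidate: the DB stays authoritative.
    @Singleton
    public class CachingEntityDao {

        private final EntityDao delegate;                  // the real DAO (made-up name)
        private final Map<Guid, Entity> cache = new ConcurrentHashMap<>();
        private final boolean enabled;                     // point 4: a kill switch

        @Inject
        public CachingEntityDao(EntityDao delegate, CacheConfig config) {
            this.delegate = delegate;
            this.enabled = config.isDaoCacheEnabled();     // e.g. switched off in tests
        }

        public Entity get(Guid id) {
            if (!enabled) {
                return delegate.get(id);
            }
            return cache.computeIfAbsent(id, delegate::get);
        }

        public void update(Entity entity) {
            delegate.update(entity);                       // never cache the write itself...
            cache.remove(entity.getId());                  // ...just drop the stale entry
        }
    }

Whether a caller goes through a service or injects the DAO directly, everyone sees the same data, and flipping the switch takes the cache out of the picture entirely.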

On Mon, Jul 4, 2016 at 11:58 PM, Roman Mohr <rmohr@redhat.com> wrote:
Hi Everyone,
I wanted to discuss a practice which seems to be pretty common in the engine and which I find very limiting, dangerous, and for some things even a blocker.
There are several places in the engine where we use maps as caches in singletons to avoid reloading data from the database. Two prominent ones are the QuotaManager [1] and the MacPoolPerCluster [2].
While it looks tempting to just use a map as a cache, add some locks around it and create an injectable singleton, this has some drawbacks:
1) We have an authoritative source for our data, and it offers transactions to take care of inconsistencies and parallel updates. Doing all of that again in a service duplicates this.
2) Caching on the service layer is definitely not a good idea. It can introduce unwanted side effects when someone invokes the DAOs directly.
3) This point is more about whether a cache is really needed: do I just want that cache because I find it convenient to call #getMacPoolForCluster(Guid clusterId) in a loop instead of loading it once before the loop, or do my usage requirements really force me to use a cache?
If you really need a cache, consider the following:
1) Do the caching on the DAO layer. This guarantees the best consistency across the data.
2) Yes, this means either locking in the DAOs or a transactional cache. But before you complain, think about what is done in [1] and [2]. We do exactly that there, so the complexity is already introduced anyway.
3) Since we are working with transactions, a custom cache should NEVER cache writes (really just talking about our use case here). This makes checks for existing IDs before adding an entity, or similar checks, unnecessary; don't duplicate constraint checks like in [2].
4) There should always be a way to disable the cache (even if it is just for testing).
5) If I can't convince you to move the cache to the DAO layer, still add a way to disable the cache.
I forgot to mention one thing: there are of course cases where something is loaded on startup, mostly things which can have multiple sources. For the application configuration itself, for instance, that is pretty common, or, like in the scheduler, the scheduling policies, where some are Java-only and some come from other sources. Loading these on startup is still fine.

But for normal business entities, accessing parts of an entity through services and other parts through DAOs is not the best thing to do (if constructing the whole business entity out of multiple DAOs is complex, Repositories can help, but the cache should still be in the DAO layer; a sketch follows below). I hope you get what I mean.
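If constructing the entity really is complex, the repository composition could look roughly like this; the repository itself is hypothetical, and the DAO and setter names only approximate the engine's:

    // Hypothetical repository: composes one business entity from several DAOs.
    // Caching, if any, stays inside the injected DAOs, so every path agrees.
    public class VmRepository {

        @Inject private VmStaticDao vmStaticDao;
        @Inject private VmDynamicDao vmDynamicDao;
        @Inject private VmNetworkInterfaceDao vmNicDao;

        public VM getFullVm(Guid vmId) {
            VM vm = new VM();
            vm.setStaticData(vmStaticDao.get(vmId));       // each DAO may cache internally
            vm.setDynamicData(vmDynamicDao.get(vmId));
            vm.setInterfaces(vmNicDao.getAllForVm(vmId));
            return vm;
        }
    }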
For as long as there is no general caching solution with something like ehcache or infinispan, in my eyes such small things matter a lot for keeping a project maintainable.
Those are some of the best practices I have seen around caching database data. It would be great if we could agree on something like that. Maybe there is already an agreement and I am just not aware of it.
Looking forward to hearing your feedback.
Roman
[1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bl... [2] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bl...

Hi,

some of the information in the mail is not exactly true. Namely, MacPoolPerCluster *does not do caching*; it does not even have DB-layer structures it could cache. How it works is: the pool has a configuration upon which it initializes itself. After that, it looks into the DB for all used MACs, which currently happens to be querying all MAC addresses of all VmNics. So it is initialized from data in the DB, but it does not cache that data. Clients [of the pool] ask the pool for a MAC address, which is then used somewhere without the pool's supervision (roughly sketched below). I don't want to question this design, and I'm not saying that it wouldn't be possible to move its logic to the DB layer, but once, long, long ago, someone decided this should be done in bll, and so it is on the bll layer.

I understand that these might come up as a problem in Arquillian testing, but that is to be resolved, since not all singletons are for caching. And even if they are, the testing framework should be able to cope with such common beans; we shouldn't limit ourselves to not using singletons. Therefore I wouldn't invest in changing these 'caches', but in allowing more complex setups in our testing. If that's not possible, then a 'reset' method is the second-best solution; we have to use a write lock as suggested in the code review, and then it should be fine.

About the drawbacks:

ad 1) Yes, this comes as an extra problem; one has to deal with transactions on one's own, that's true. This wasn't part of the original solution, but it should be fixed already.

ad 2) No. Caching done correctly is done closest to the consumer. I assume you can similarly ruin the Hibernate L2 cache by accessing data through a different channel. But that's kind of common to all caches: if you bypass them, you'll corrupt them. So do not bypass them, or in this case, use them as they were designed. As has been said, you ask the pool for a MAC, or inform it that you're going to use your own, and then use it. That means it is designed to actually be bypassed on all writes. Therefore, if someone writes code using a MAC without prior notification to the pool, it would be a problem. To avoid this there would have to be a bigger refactor: the pool would have to persist MAC addresses somehow, instead of vmNicDao, or, if moved entirely to the DB layer, there would have to be a trigger on the vmnic table or something like that...

ad 3) It was requested to support at least tens of millions of MACs in a pool. Forget about the loop; initializing this structure for a given clusterId is not acceptable even once per request. Loading that structure is quite cheap (now), but not that cheap to allow what you ask for. Moving the whole thing to the DB layer would probably have been beneficial (when it was originally implemented), but it's not worth doing now.

About the suggestions: none of them applies to MacPoolPerCluster. Point (3) for example: since the pool is not a simple cache of a DB structure and does not have corresponding data in the DB layer, it cannot cache writes, and it does not do any writes at all...

Surely I can imagine a better implementation of this, but it would require bigger changes whose benefits aren't worth the cost of the change. (I hope that) we have to deal with it. This was the original design, and since the testing framework (with changed caches or without) should deal with singletons etc. anyway, it's not worth changing it. If there aren't any better options (I don't know), reinitializing the bean can be used (and I apologize for blocking that). I'd avoid bigger rewrites in this area.

M.
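The contract, roughly (method names paraphrased; not guaranteed to match the actual interface):

    // The pool tracks which MACs are in use; it is not a cache of a DB table.
    public interface MacPool {
        String allocateNewMac();        // ask the pool for a free MAC
        void forceAddMac(String mac);   // or tell it about a MAC you chose yourself
        void freeMac(String mac);       // give it back when the nic goes away
    }

    // Typical client flow: the pool hands out the address, but persisting it
    // happens elsewhere (vmNicDao), outside the pool's supervision.
    VmNic nic = new VmNic();
    nic.setMacAddress(macPoolPerCluster.getMacPoolForCluster(clusterId).allocateNewMac());
    vmNicDao.save(nic);                 // the pool never sees this write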

Hi Martin, Great feedback. Thanks for the clarifications. On Thu, Jul 7, 2016 at 3:25 PM, Martin Mucha <mmucha@redhat.com> wrote:
Hi,
some of the information in the mail is not exactly true. Namely, MacPoolPerCluster *does not do caching*; it does not even have DB-layer structures it could cache. How it works is: the pool has a configuration upon which it initializes itself. After that, it looks into the DB for all used MACs, which currently happens to be querying all MAC addresses of all VmNics. So it is initialized from data in the DB, but it does not cache that data. Clients [of the pool] ask the pool for a MAC address, which is then used somewhere without the pool's supervision. I don't want to question this design, and I'm not saying that it wouldn't be possible to move its logic to the DB layer, but once, long, long ago, someone decided this should be done in bll, and so it is on the bll layer.
I had another look at the MacPoolPerCluster source. You are right: it is caching some calculations, not database data. I agree that this should not be in the DAO layer or the database. Sorry for the wrong accusations regarding the MacPoolPerCluster class.
I understand that these might come up as a problem in Arquillian testing, but that is to be resolved, since not all singletons are for caching. And even if they are, the testing framework should be able to cope with such common beans; we shouldn't limit ourselves to not using singletons. Therefore I wouldn't invest in changing these 'caches', but in allowing more complex setups in our testing. If that's not possible, then a 'reset' method is the second-best solution; we have to use a write lock as suggested in the code review, and then it should be fine.
For the context: [1] Having singletons is fine from my perspective too; it is just about caching data from the database. Spring offers @DirtiesContext (a little bit nicer than with Arquillian, where, as far as I have seen, you would create different test cases and do new @Deployments). But I prefer to reset these rare singletons explicitly in a base class for every test. Otherwise it is always very hard to track down possible side effects in the class because you did not set up a new context. For me, tests are first-class citizens of an application, so having a way to reinitialize singletons directly is what I prefer (sketched below). When it is about caching from the database this is normally not needed, since you can just disable the database cache during the tests.
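A sketch of that explicit reset in a base class (JUnit 4 style; the reinitialize() hook is an assumption, not an existing engine API):

    import javax.inject.Inject;
    import org.junit.Before;

    // Every test extends this base class; the singleton is reset before each
    // test, so no state can leak from one test into the next.
    public abstract class BaseBackendTest {

        @Inject
        protected MacPoolPerCluster macPoolPerCluster;

        @Before
        public void resetSingletons() {
            macPoolPerCluster.reinitialize();   // hypothetical reset hook
        }
    }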
About the drawbacks: ad 1) Yes, this comes as an extra problem; one has to deal with transactions on one's own, that's true. This wasn't part of the original solution, but it should be fixed already.
As long as it is really just the last resort I am fine with it.
ad 2) No. Caching done correctly is done closest to the consumer. I assume you can similarly ruin the Hibernate L2 cache by accessing data through a different channel. But that's kind of common to all caches: if you bypass them, you'll corrupt them. So do not bypass them, or in this case, use them as they were designed. As has been said, you ask the pool for a MAC, or inform it that you're going to use your own, and then use it. That means it is designed to actually be bypassed on all writes. Therefore, if someone writes code using a MAC without prior notification to the pool, it would be a problem. To avoid this there would have to be a bigger refactor: the pool would have to persist MAC addresses somehow, instead of vmNicDao, or, if moved entirely to the DB layer, there would have to be a trigger on the vmnic table or something like that...
I missed the calculation part in the MacPoolPerCluster, so this is OK; most of my comments do not apply there now. Of course you can cache wherever you have to in order to meet the requirements. Still, the best thing you can have is that the loaded entities, when cached in higher layers, get evicted too when you change something in the lower layers (e.g. the DAO). This is the normal expectation and what the whole Hibernate level-2 cache is about. Since we don't use any of these fancy caches which do all the hard lifting for us, it would be even more complex for us to get caching on higher layers right.
ad 3) It was requested to support at least tens of millions of MACs in a pool. Forget about the loop; initializing this structure for a given clusterId is not acceptable even once per request. Loading that structure is quite cheap (now), but not that cheap to allow what you ask for. Moving the whole thing to the DB layer would probably have been beneficial (when it was originally implemented), but it's not worth doing now.
It is really just about caching data from the database. Moving logic to the DAO layer or the database is definitely not what I had in mind.
About the suggestions: none of them applies to MacPoolPerCluster. Point (3) for example: since the pool is not a simple cache of a DB structure and does not have corresponding data in the DB layer, it cannot cache writes, and it does not do any writes at all...
Not caching writes yourself is just a general point. Regarding the MacPoolPerCluster I was referring to the constraint checks. But again, most of that does not really apply to the class anyway.
Surely I can imagine a better implementation of this, but it would require bigger changes whose benefits aren't worth the cost of the change. (I hope that) we have to deal with it. This was the original design, and since the testing framework (with changed caches or without) should deal with singletons etc. anyway, it's not worth changing it. If there aren't any better options (I don't know), reinitializing the bean can be used (and I apologize for blocking that). I'd avoid bigger rewrites in this area.
For the context: [1] Hoping for a continuation of the discussion :) Roman

On Tue, Jul 5, 2016 at 7:14 AM, Roman Mohr <rmohr@redhat.com> wrote:
But for normal business entities, accessing parts of an entity through services and other parts through DAOs is not the best thing to do (if constructing the whole business entity out of multiple DAOs is complex, Repositories can help, but the cache should still be in the DAO layer).
I do not agree that the caching should be on the DAO layer - that might lead to getting an entity that is built of parts that are not coherent with one another if the different DAO caches are not in sync. I'd put the cache on the Repositories (non-existent currently) or a higher layer, just above the transaction boundaries, so the cache would contain service call results rather than raw data (roughly sketched below). Then the cache would prevent the application from accessing the DB connection pool for a connection. Yes, different service caches might have the same entities duplicated in memory, but I do not care about that until it is proven to be a problem, and if it were, I'd go about improving the cache - making it more capable. I hope you get what I mean.
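A sketch of that variant, with made-up names; the point is that a hit never opens a transaction or touches the connection pool:

    // Hypothetical service-level cache of whole call results, sitting just
    // above the transaction boundary.
    @Singleton
    public class CachingVmQueryService {

        @Inject
        private VmQueryService delegate;    // the normal, transactional service

        private final Map<Guid, VM> results = new ConcurrentHashMap<>();

        public VM getFullVm(Guid vmId) {
            // a hit returns immediately; a miss runs the usual transactional call
            return results.computeIfAbsent(vmId, delegate::getFullVm);
        }
    }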

What about JPA? I think in 3.5 it was raised to start using it. In the past (3.3) we found a big performance issue around the host monitoring area (which was eventually fixed by a map).

Roy, can you tell where JPA stands today?

Regards,
-Eldad

Hi Yevgeny, On Mon, Jul 11, 2016 at 7:59 PM, Yevgeny Zaspitsky <yzaspits@redhat.com> wrote:
I do not agree that the caching should be on the DAO layer - that might lead to getting an entity that is built of parts that are not coherent with one another if the different DAO caches are not in sync.
I can't agree here. That is what transactions are for. A second-level cache normally follows transactions: you have interceptors to detect rollbacks and commits. If you don't have JTA in place there is normally a small window where you can read stale data in different transactions (which is fine in most cases). But that has nothing to do with where the cache is. It is much easier to stay in sync when there is no way to bypass the cache.
I'd put the cache on the Repositories (non-existent currently) or a higher layer, just above the transaction boundaries, so the cache would contain service call results rather than raw data.
What does that mean, above the transaction boundaries? Yes, a second-level cache is there to cache across transaction boundaries, and you also have that when you place it in the DAO layer. Furthermore, you would make it very hard to track whether you are allowed to manipulate data through DAOs, Repositories or Services if you don't place the basic cache inside the DAOs, since you might always bypass the cache by accident. For higher-layer caches in singletons it is also almost a prerequisite to have the basic cache in the DAO layer, because you can then also listen for cache changes of dependent entities inside the singleton (all cache implementations I know have listeners) and invalidate or update derived caches (see the sketch below). This, in combination with the possibility to disable the cache completely on all layers, makes the cache completely transparent on every layer, which makes it very easy to write sane code when using all the different services, DAOs, Repositories, ... .
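Roughly what that listening could look like; the listener API stands in for whatever the chosen cache implementation provides, and all names are made up:

    // A service keeping a cache of derived values, subscribed to the DAO-level
    // cache so derived entries are invalidated when the base entity changes.
    @Singleton
    public class QuotaUsageService {

        private final Map<Guid, QuotaUsage> derived = new ConcurrentHashMap<>();

        @Inject
        public QuotaUsageService(CachingQuotaDao quotaDao) {
            // the DAO cache fires this after the owning transaction commits
            quotaDao.addChangeListener(quota -> derived.remove(quota.getId()));
        }

        public QuotaUsage getUsage(Guid quotaId) {
            return derived.computeIfAbsent(quotaId, this::compute);
        }

        private QuotaUsage compute(Guid quotaId) {
            return new QuotaUsage();    // placeholder for the expensive derived calculation
        }
    }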
Then the cache would prevent the application from accessing the DB connection pool for a connection.
The cache inside the DAO sits before you acquire a DB connection; it would not make much sense otherwise.
Yes, different service caches might have the same entities duplicated in memory, but I do not care about that until it is proven to be a problem, and if it were, I'd go about improving the cache - making it more capable.
Sometimes you need caches for derived values, and that is fine too. The best thing you can have then is caches in the DAO layer, where you use cache listeners (which all cache solutions have), or just fire events that some of the basic entities have changed in the current transaction. You then listen for those changes in the service too and update your derived cache according to these events. Like all the other caches, you then commit or roll back your changes by listening to the transaction (and keep them thread-local before that). The benefit is that the caching complexity is invisible from outside and everything behaves as expected.

Another interesting scenario is when you want to get a derived resource from a service which should be unique (let's say a MAC address). In this case you would use a non-transactional cache with locks (like the MacPoolPerCluster does) to make the change visible application-wide. Now if the transaction fails, you would release this resource as part of the rollback process, by listening to the transaction; the resource would then just be visible as used a little bit longer. If you want to return such a resource to a pool because you don't need it anymore, you would keep the resource release thread-local until the transaction succeeds, just in case something goes wrong with the transaction. All this can be done from within the service, which keeps the service easy to use. A sketch follows below.
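A sketch of that rollback handling with plain JTA; javax.transaction.Synchronization and Status are the standard API, while the pool handle and the registration wiring are assumptions:

    import javax.transaction.Status;
    import javax.transaction.Synchronization;

    // Registered when a MAC is allocated inside a transaction: on rollback the
    // MAC goes back to the pool, on commit the allocation simply stands.
    public class ReleaseMacOnRollback implements Synchronization {

        private final MacPool pool;     // hypothetical pool handle
        private final String mac;

        public ReleaseMacOnRollback(MacPool pool, String mac) {
            this.pool = pool;
            this.mac = mac;
        }

        @Override
        public void beforeCompletion() {
            // nothing to do before the outcome is known
        }

        @Override
        public void afterCompletion(int status) {
            if (status != Status.STATUS_COMMITTED) {
                pool.freeMac(mac);      // transaction failed: release the resource
            }
        }
    }

    // At allocation time, e.g. via the TransactionSynchronizationRegistry:
    //   String mac = pool.allocateNewMac();
    //   txSyncRegistry.registerInterposedSynchronization(new ReleaseMacOnRollback(pool, mac));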
Best Regards, Roman
participants (4)
- Eldad Marciano
- Martin Mucha
- Roman Mohr
- Yevgeny Zaspitsky