Repeated access to remote resources or global value objects forms a bottleneck in many services. Caching is a technique that can drastically improve performance, for example by avoiding multiple read operations for the same data.
However, caching comes at a price: every piece of data the application caches increases memory usage. It is therefore important to strike a proper balance between data retrieval cost and memory consumption. How much data to cache and when to load it, either up front when the application initializes or lazily when it is first requested, depends on the requirements of the application.
The cache identifies the buffered resources by unique identifiers. When resources stored in the cache are no longer required, they can be released to lower memory consumption.
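As a minimal sketch of such a keyed cache, the class below is illustrative only and not taken from any particular library; it shows lookup by identifier and explicit release of entries that are no longer needed:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: resources are addressed by a unique identifier
// and can be released individually to reclaim memory.
public class SimpleCache<K, V> {
    private final Map<K, V> entries = new ConcurrentHashMap<>();

    public V get(K id) { return entries.get(id); }

    public void put(K id, V resource) { entries.put(id, resource); }

    // Release an entry that is no longer required to lower memory consumption.
    public void release(K id) { entries.remove(id); }
}
```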
There are two main caching strategies: the primed cache and the demand cache.
A primed cache is initialized up front with default values. A primed cache should be considered whenever it is possible to predict a subset, or the entire set, of the data that clients will request, and to load it into the cache in advance. If a client requests data with a key that matches one of the primed keys but no corresponding value is present in the cache, it is assumed that there is no data in the datasource for that key either. A disadvantage of the primed cache is that it takes longer for the application to start up and become functional. There is also a hidden disadvantage: developers tend to assume that the cache will always be populated, and populated with the entire data set. This assumption can lead to disaster for your application.
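A minimal sketch of a primed cache, assuming the predicted data set is handed in at construction time (the types and method names here are illustrative, not from any specific library):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

public class PrimedCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();

    // Prime the cache with the predicted data set before serving requests.
    public PrimedCache(Map<K, V> predictedDataSet) {
        cache.putAll(predictedDataSet);
    }

    // A miss on a primed key is taken to mean the datasource
    // holds no value for that key either.
    public Optional<V> get(K key) {
        return Optional.ofNullable(cache.get(key));
    }
}
```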
A drawback of not using a primed cache is that when the application is cold and the cache is empty, fetching data from the datastore is inefficient. A solution that bypasses the problems described above is to provide cache-miss logic that operates on sets of identifiers in addition to the single-identifier logic. When the system starts, spawn a couple of workers that fetch ids from the resource (in configurable batch sizes) and insert them into the cache, as sketched below. This way the application becomes functional immediately, the cache warms up in the background, and a miss can still fall back to the datastore.
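A sketch of this batch-priming approach; the `DataSource` interface and its `fetchBatch` method are hypothetical stand-ins for the real resource:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class BackgroundPrimer<K, V> {
    // Hypothetical datasource abstraction: fetchBatch returns at most
    // 'size' entries starting at 'offset', or an empty map when exhausted.
    public interface DataSource<K, V> {
        Map<K, V> fetchBatch(int offset, int size);
    }

    private final Map<K, V> cache = new ConcurrentHashMap<>();

    public void primeInBackground(DataSource<K, V> source, int workers, int batchSize) {
        AtomicInteger nextOffset = new AtomicInteger(0);
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.submit(() -> {
                // Workers claim disjoint batches until the resource is exhausted;
                // the application serves requests while this runs.
                while (true) {
                    int offset = nextOffset.getAndAdd(batchSize);
                    Map<K, V> batch = source.fetchBatch(offset, batchSize);
                    if (batch.isEmpty()) break;
                    cache.putAll(batch);
                }
            });
        }
        pool.shutdown();
    }
}
```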
A demand cache loads information and stores it in the cache whenever that information is first requested by the system. Performance therefore improves over time as the cache fills while the application runs. A demand cache should be considered whenever priming the cache up front is either unfeasible or unnecessary.
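A minimal demand-cache sketch using `ConcurrentHashMap.computeIfAbsent`; the loader function stands in for the actual datasource call:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class DemandCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // e.g. a datasource lookup

    public DemandCache(Function<K, V> loader) {
        this.loader = loader;
    }

    // Load-on-miss: the value is fetched and cached on the first request,
    // and served from memory on every subsequent one.
    public V get(K key) {
        return cache.computeIfAbsent(key, loader);
    }
}
```

Wiring it up is a one-liner, for instance `new DemandCache<>(id -> repository.findById(id))`, where `repository` is whatever accesses the underlying datasource.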
Objects that are applicable for caching are typically the remote resources and global value objects mentioned above: data that is expensive to retrieve and changes infrequently.
A cache requires:

* Max cache size - defines how many elements the cache can hold.
* Eviction policy - defines what to do when the number of elements in the cache exceeds the max cache size. Least Recently Used (LRU) works best in practice; policies like MRU, LFU, etc., are usually not applicable and are expensive from a performance point of view.
* Time to live (TTL) - defines the time after which a cache key is removed from the cache (expires).
* Statistics - hit and miss counters that let you verify the cache is actually effective.

Professional caches like EHCache or Guava Cache provide this kind of functionality out of the box and are constantly tested.
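As one example, Guava's `CacheBuilder` exposes all four of these knobs; the size and timeout below are arbitrary illustrative values:

```java
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.CacheStats;
import com.google.common.cache.LoadingCache;

public class GuavaCacheExample {
    public static void main(String[] args) throws Exception {
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(1000)                      // max cache size; eviction approximates LRU
                .expireAfterWrite(10, TimeUnit.MINUTES) // time to live
                .recordStats()                          // enable statistics
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        // Stand-in for the real datasource lookup (the cache-miss logic).
                        return "value-for-" + key;
                    }
                });

        cache.get("some-id"); // miss: loaded via the CacheLoader
        cache.get("some-id"); // hit: served from memory
        CacheStats stats = cache.stats();
        System.out.println("hit rate: " + stats.hitRate());
    }
}
```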