LRU Cache implementation in Java

A cache is a region of faster memory used to speed up access to data whose full set lives in slower storage. Sometimes the full data set is not stored at all; instead, each item is computed on demand, and the cache stores the results of those computations.

When we need a datum, we first check whether the cache contains it. If it does, the datum is read from the cache without touching the main data store; this is known as a ‘cache hit’. If the datum is not in the cache, it is fetched from the main data store; this is a ‘cache miss’. When the cache fills up, items are ejected from it to make room for new ones.
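The hit/miss flow looks roughly like the following sketch. The names here (getOrLoad, loadFromStore) are placeholders for whatever map and slower data source an application actually uses; they are not part of any particular library.

import java.util.Map;
import java.util.function.Function;

public class CacheLookup {
    // Return the cached value on a hit; on a miss, load it from the slower store
    // and remember it for next time.
    static <K, V> V getOrLoad(Map<K, V> cache, K key, Function<K, V> loadFromStore) {
        V value = cache.get(key);        // cache hit: reuse the stored value
        if (value == null) {             // cache miss: go to the slower main store
            value = loadFromStore.apply(key);
            cache.put(key, value);       // inserting may force an ejection once the cache is full
        }
        return value;
    }
}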

LRU stands for Least Recently Used: items are added to the cache as they are accessed, and when the cache is full, the least recently used item is ejected. This type of cache is typically implemented as a linked list, so that an item already in the cache can be moved back to the head of the queue when it is accessed again; items are ejected from the tail of the queue. Cache access overhead is again constant time. The algorithm is simple and fast, and it has a significant advantage over FIFO: it adapts somewhat to the data access pattern, so frequently used items are less likely to be ejected from the cache. The main disadvantage is that the cache can still fill up with items that are unlikely to be re-accessed soon; in particular, it becomes useless in the face of a scan over more items than fit in the cache. Nonetheless, this is by far the most frequently used caching algorithm.
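In Java, one simple way to get this behaviour is LinkedHashMap in access order, which keeps entries linked from least to most recently used and lets a subclass decide when to drop the eldest entry. A minimal sketch, assuming a fixed capacity chosen by the caller:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        // accessOrder = true: each get() or put() moves the entry to the
        // most-recently-used end of the internal linked list
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Once capacity is exceeded, eject the least recently used entry
        return size() > capacity;
    }
}

For example, new LruCache<String, String>(100) keeps at most 100 entries and silently ejects the least recently used one on each insertion beyond that; both lookups and insertions remain constant time on average.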