Introduction to JCache JSR 107

Caching libraries such as Ehcache, Hazelcast, Infinispan, GridGain, and Apache Ignite are common in the Java ecosystem, and caching is indeed needed in many different kinds of solutions to optimize an application in various ways. The simplest libraries are just in-memory object maps with simple eviction rules, while the most advanced caching libraries offer efficient cross-JVM features, configurable options to write the cache state to disk, and a good basis for memory-heavy computation. Continue reading “Introduction to JCache JSR 107”
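For orientation, here is a minimal sketch of the JSR 107 API itself. It assumes a JCache provider such as Ehcache or Hazelcast is on the classpath, and the cache name “greetings” and the one-minute expiry are arbitrary choices for illustration.

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

public class JCacheExample {
    public static void main(String[] args) {
        // Obtain the default CachingProvider and its CacheManager;
        // a JCache implementation must be on the classpath for this to resolve.
        CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();

        // Configure a typed cache whose entries expire one minute after creation.
        MutableConfiguration<String, String> config = new MutableConfiguration<String, String>()
                .setTypes(String.class, String.class)
                .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ONE_MINUTE));

        Cache<String, String> cache = cacheManager.createCache("greetings", config);

        cache.put("en", "Hello");
        System.out.println(cache.get("en")); // prints "Hello"
    }
}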

Ehcache implementation using Spring framework

The terms “buffer” and “cache” tend to be used interchangeably; note, however, that they represent different things. A buffer is traditionally used as an intermediate temporary store for data between a fast and a slow entity. Since one party would otherwise have to wait for the other, hurting performance, the buffer alleviates this by allowing entire blocks of data to move at once rather than in small chunks. The data is written to and read from the buffer only once. Furthermore, a buffer is visible to at least one of the parties, which is aware of it.
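As a small illustration of the buffer idea, the sketch below counts the bytes of a hypothetical file data.bin: the BufferedInputStream fetches data from the slow entity (the disk) in large blocks, while the fast consumer still reads it byte by byte.

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferExample {
    public static void main(String[] args) throws IOException {
        // The buffer sits between the slow disk and the fast loop: the disk is hit
        // in 8 KB blocks, and each read() call is served from memory.
        try (InputStream in = new BufferedInputStream(new FileInputStream("data.bin"), 8192)) {
            long count = 0;
            while (in.read() != -1) {
                count++; // served from the in-memory buffer, not the disk
            }
            System.out.println("Read " + count + " bytes");
        }
    }
}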

A cache, on the other hand, is by definition hidden, and neither party is aware that caching occurs. It also improves performance, but it does so by allowing the same data to be read multiple times in a fast fashion. Continue reading “Ehcache implementation using Spring framework”
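To make the Ehcache-with-Spring idea concrete, here is a minimal sketch of a service using Spring’s caching abstraction. It assumes caching is enabled with @EnableCaching and that Ehcache is configured as the backing CacheManager; the cache name “products” and the service itself are purely illustrative.

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    // The first call for a given id executes the method body; later calls with the
    // same id return the cached result from the "products" cache without re-running it.
    @Cacheable("products")
    public String findProductName(long id) {
        simulateSlowLookup();              // e.g. a database query, run only on a cache miss
        return "product-" + id;
    }

    private void simulateSlowLookup() {
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

With that configuration in place, Spring wraps the bean in a proxy that consults the “products” cache before invoking the method, so the caller never sees the caching happen.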

LRU Cache implementation in Java

A cache is an amount of faster memory used to improve data access by storing portions of a data set the whole of which is slower to access. Sometimes the total data set is not actually stored at all; instead, each data item is calculated as necessary, in which case the cache stores the results of those calculations.

When we need a datum, we first check whether the cache contains it. If it does, the datum is served from the cache without having to access the main data store; this is known as a ‘cache hit’. If the datum is not found in the cache, it is transferred from the main data store; this is known as a ‘cache miss’. When the cache fills up, items are ejected from it to make space for new items.
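That hit/miss flow can be sketched in a few lines of Java. The Function standing in for the main data store is an assumption made for illustration, and eviction is deliberately left out here; the LRU sketch further below adds it.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative cache-aside lookup: check the cache first (hit), otherwise load the
// datum from the main data store and remember the result (miss).
public class CacheAside<K, V> {

    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> mainStore; // the slower, authoritative source

    public CacheAside(Function<K, V> mainStore) {
        this.mainStore = mainStore;
    }

    public V get(K key) {
        V value = cache.get(key);
        if (value != null) {
            return value;              // cache hit: no trip to the main data store
        }
        value = mainStore.apply(key);  // cache miss: fetch from the main data store
        cache.put(key, value);         // keep it for next time
        return value;
    }
}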

LRU stands for Least Recently Used: items are added to the cache as they are accessed, and when the cache is full, the least recently used item is ejected. This type of cache is typically implemented as a linked list, so that when an item in the cache is accessed again it can be moved back to the head of the queue; items are ejected from the tail of the queue. Cache access overhead is again constant time. This algorithm is simple and fast, and it has a significant advantage over FIFO in being able to adapt somewhat to the data access pattern: frequently used items are less likely to be ejected from the cache. The main disadvantage is that the cache can still fill up with items that are unlikely to be re-accessed soon; in particular, it can become useless in the face of scans over a larger number of items than fit in the cache. Nonetheless, this is by far the most frequently used caching algorithm. Continue reading “LRU Cache implementation in Java”
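One convenient way to get exactly this behaviour in Java is LinkedHashMap in access-order mode; the full post may build the structure by hand, so treat the following as a sketch rather than the implementation discussed there.

import java.util.LinkedHashMap;
import java.util.Map;

// A small LRU cache built on LinkedHashMap's access-order mode: every get() moves
// the entry to the most-recently-used end, and the eldest (least recently used)
// entry is ejected once the capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public LruCache(int capacity) {
        // accessOrder = true keeps iteration order from least to most recently used
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // eject the least recently used entry when full
    }
}

For example, new LruCache<String, String>(3) silently drops the least recently used entry as soon as a fourth key is inserted, while get() refreshes an entry’s position so it survives longer.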