Thursday, May 13, 2010

Scalable Application Architecture with Memory Caches

This post is dedicated to an important feature required for building highly performant and scalable applications. Data has traditionally been stored in specialized software, aptly known as a database. However, applications that let large numbers of users (thousands, if not more) interact with them simultaneously often make heavy use of specific data that is largely static in nature. Serving such data from memory instead of a file on a hard disk yields a huge performance gain, since requests for that data no longer have to be read from a database located on a disk. Think of an in-memory data cache as fast but transient storage for such tasks: the data lives in the RAM of the cache machines, so it should never be treated as robust, persistent storage.

The caching mechanism (the framework responsible for maintaining the cache) has to perform the following main tasks:
  • Maintain a cache (the obvious one)
  • Determine the pattern of requests (which requests occur most frequently)
  • Flush out old entries and load new ones (different eviction strategies can be used here; the most common is LRU)
  • Make sure that the data maintained in the cache is correct (and, if not, to what extent it may be stale)
  • Enforce consistency of the cache across different machines (generally needed in cloud or clustered deployments)
  • Manage resource utilization (evict data from the cache if the server's memory needs increase)
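The LRU eviction strategy mentioned above can be sketched in a few lines of plain Java using LinkedHashMap's access-order mode; the class name and the tiny capacity are illustrative only, not part of any real caching framework:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache sketch: LinkedHashMap with accessOrder = true
// keeps entries ordered from least to most recently used.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true enables LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict least-recently-used entry on overflow
    }
}

public class LruDemo {
    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a", so "b" becomes least recently used
        cache.put("c", "3"); // capacity exceeded: "b" is evicted
        System.out.println(cache.containsKey("b"));
        System.out.println(cache.keySet());
    }
}
```

Production frameworks implement the same idea with concurrency control and size accounting on top, but the eviction decision itself is just this recency test.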

Caching is generally found in distributed applications that are targeted at large numbers of users, whether or not they are already in a production environment. Today, most memory cache frameworks do not offer a synchronization mechanism between the data stored in the database and the memory cache. To work around this, we explicitly set an expiration value on the cached object so that it gets refreshed upon request after a certain period. This performance optimization can be applied not only to database-specific operations, but also to other data such as repeated web service calls, computation results, static content, etc.
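The expiration workaround described above can be sketched as a per-entry time-to-live check; the class and method names here are hypothetical, and the time source is passed in explicitly just to keep the example deterministic:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch of per-entry expiration. An expired or missing entry
// returns null, forcing the caller to reload fresh data from the database.
class ExpiringCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long expiresAtMillis;
        Entry(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, Entry<V>> store = new HashMap<>();

    void put(K key, V value, long ttlMillis, long nowMillis) {
        store.put(key, new Entry<>(value, nowMillis + ttlMillis));
    }

    V get(K key, long nowMillis) {
        Entry<V> e = store.get(key);
        if (e == null || nowMillis >= e.expiresAtMillis) {
            store.remove(key); // drop stale entry so the caller refreshes it
            return null;
        }
        return e.value;
    }
}

public class ExpiryDemo {
    public static void main(String[] args) {
        ExpiringCache<String, String> cache = new ExpiringCache<>();
        cache.put("user:42", "Alice", 1000, 0); // valid until t = 1000 ms
        System.out.println(cache.get("user:42", 500));  // still fresh
        System.out.println(cache.get("user:42", 1500)); // expired
    }
}
```

Real frameworks typically take a wall-clock TTL at put time; the principle is the same.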

A popular caching interface standard for Java is JCache, which was proposed as JSR 107. Memory cache software, e.g. memcached [http://www.memcached.org], stores data as key-value pairs. When a request arrives, the key is looked up, resulting in either a cache hit or a cache miss. JSR 107 has been adopted in different implementations, one of which is on Google App Engine, a cloud platform supporting Python and Java runtimes. There it comes in the form of a memcache service for the Java runtime, as the following example shows:


import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;
// ...

// Obtain the App Engine memcache service
MemcacheService cache = MemcacheServiceFactory.getMemcacheService();

// Store a value under a key
cache.put("key", "value");

// Retrieve it later; get() returns null on a cache miss
Object object = cache.get("key");

// Remove the entry explicitly
cache.delete("key");


This really makes application scalability easier for developers (cloud environments do place the responsibility on developers to create applications that can scale quickly). As of now, a similar feature doesn't exist in Windows Azure, but what the future holds for this technology cannot be speculated.
Thus it is not surprising that this technology is used by prime websites like YouTube, Wikipedia, Amazon, SourceForge, Metacafe, Facebook, Twitter, etc., and in large quantities too (e.g., Facebook uses over 25 TB of memcache). So it is imperative for software developers to understand how this technology works and how to build with it.
