DirectMemory Cache is an open source alternative to Terracotta BigMemory™ whose ultimate goal is to let Java applications use large amounts of memory (10, 20 GB and more) without slowing down the garbage collection process and affecting overall system performance.
Although I started writing it just to understand how the amazing BigMemory works, and as a possible open source replacement for it, the project is now growing into a cache abstraction layer over pluggable storage engines (including disk, network, and NoSQL databases) that implements cache-specific semantics (LRU eviction, item expiry, pluggable eviction strategies, and so on); the off-heap store has become just one of the supported storage engines.
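To make the storage-engine idea concrete, here is a minimal sketch in Java of what a pluggable engine with LRU, over-quota eviction could look like. The `StorageEngine` interface, the `HeapLruEngine` class, and their method names are hypothetical illustrations, not DirectMemory's actual API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical storage-engine abstraction; DirectMemory's real
// interfaces may differ in names and signatures.
interface StorageEngine<K, V> {
    void put(K key, V value);
    V retrieve(K key);
}

// A minimal in-heap engine backed by an access-ordered LinkedHashMap,
// which evicts the least-recently-used entry once the quota is exceeded.
class HeapLruEngine<K, V> implements StorageEngine<K, V> {
    private final Map<K, V> map;

    HeapLruEngine(final int quota) {
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > quota; // over-quota eviction
            }
        };
    }

    public void put(K key, V value) { map.put(key, value); }
    public V retrieve(K key) { return map.get(key); }
}

public class LruSketch {
    public static void main(String[] args) {
        StorageEngine<String, String> engine = new HeapLruEngine<>(2);
        engine.put("a", "1");
        engine.put("b", "2");
        engine.retrieve("a");                     // touch "a" so "b" becomes eldest
        engine.put("c", "3");                     // quota exceeded: "b" is evicted
        System.out.println(engine.retrieve("b")); // null
        System.out.println(engine.retrieve("a")); // 1
    }
}
```

In a layered design like the one described above, an engine of this shape could sit at any layer, with the quota configured per layer.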
DirectMemory itself manages the first (in-heap) layer, which is of course the fastest and acts as a queued buffer for the other layers. It also handles eviction (pluggable as well, covering both expired and over-quota items, with quotas configurable per layer), performance monitoring (leveraging JavaSimon), and so on. DirectMemory uses pluggable serializers: as of today there are two, one based on standard Java serialization and one based on protostuff-runtime, which is more efficient and doesn't require objects to implement Serializable. I have also written a simple, experimental disk storage engine.
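The pluggable-serializer idea can be sketched as a small interface with interchangeable implementations. The `Serializer` interface and `StandardSerializer` class below are illustrative assumptions, not DirectMemory's actual types; this variant uses standard Java serialization, so cached objects must implement `Serializable` (the protostuff-based one would lift that requirement).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Hypothetical serializer abstraction; the real DirectMemory
// interface may use different names and signatures.
interface Serializer {
    byte[] serialize(Object obj) throws IOException;
    <T> T deserialize(byte[] source, Class<T> clazz)
            throws IOException, ClassNotFoundException;
}

// Implementation based on standard Java serialization: simple and
// universal, but requires objects to implement java.io.Serializable.
class StandardSerializer implements Serializer {
    public byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.toByteArray();
    }

    public <T> T deserialize(byte[] source, Class<T> clazz)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(source))) {
            return clazz.cast(in.readObject());
        }
    }
}

public class SerializerSketch {
    public static void main(String[] args) throws Exception {
        Serializer serializer = new StandardSerializer();
        byte[] payload = serializer.serialize("hello off-heap");
        String back = serializer.deserialize(payload, String.class);
        System.out.println(back); // hello off-heap
    }
}
```

An off-heap store would run every `put` through `serialize` to get raw bytes it can place outside the heap, and every read back through `deserialize`.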
The project is in its early stages but is fully usable and well covered by unit tests; you can check out the code and a simple web demo from GitHub, or just download the latest jar. DirectMemory is written in Java, built and managed with Maven, and leverages AOP (AspectJ) for performance monitoring and eviction, and (in the near future) tracing.
Next steps will be:
- Integration of a NoSQL backend (OrientDB would be just perfect, but I was also thinking about Voldemort)
- A network distribution aspect using JGroups or Hazelcast (of course OrientDB and Voldemort could also be a solution to this)
- Extensive testing with huge amounts of RAM (4 GB and more) – any volunteers for this? 🙂