Introducing DirectMemory cache

DirectMemory Cache is an open source alternative to Terracotta BigMemory™ whose goal is to let Java applications use large amounts of memory (10, 20 GB and more) without slowing down the garbage collection process and affecting overall system performance.
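The core trick behind any off-heap store is to keep the payload bytes in memory the garbage collector never scans. A minimal, hypothetical sketch of that idea using the JDK's `ByteBuffer.allocateDirect` (the class and method names here are illustrative, not DirectMemory's actual internals):

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: payload bytes live in a direct buffer allocated
// in native memory, outside the Java heap, so GC pauses do not grow
// with the amount of cached data.
public class OffHeapSlot {
    private final ByteBuffer buffer;

    public OffHeapSlot(int capacityBytes) {
        // allocateDirect reserves native memory, not heap memory
        this.buffer = ByteBuffer.allocateDirect(capacityBytes);
    }

    public void write(byte[] payload) {
        buffer.clear();
        buffer.put(payload);
    }

    public byte[] read(int length) {
        byte[] out = new byte[length];
        buffer.flip();
        buffer.get(out);
        return out;
    }

    public static void main(String[] args) {
        OffHeapSlot slot = new OffHeapSlot(64);
        byte[] data = "hello off-heap".getBytes();
        slot.write(data);
        System.out.println(new String(slot.read(data.length)));
    }
}
```

The price of this approach is that objects must be serialized to bytes on the way in and deserialized on the way out, which is why serializer performance matters so much for an off-heap cache.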

Although I started writing it just to understand how the amazing BigMemory worked, and as a possible open source replacement for it, the project is now growing into a kind of cache abstraction layer over pluggable storage engines (including disk, network and NoSQL databases), implementing cache-specific semantics (LRU eviction, item expiry with pluggable eviction strategies, etc.), and the off-heap store has become just one of the supported storage engines.
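To make the LRU eviction semantic mentioned above concrete, here is a minimal sketch built on the JDK's `LinkedHashMap` in access-order mode; DirectMemory's own eviction strategies are pluggable, so this only illustrates the policy, not the project's implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU sketch: LinkedHashMap in access-order mode keeps the
// least recently used entry at the head, and removeEldestEntry lets
// us evict it once the cache exceeds its quota.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order, the LRU property
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the least recently used entry when over quota
        return size() > maxEntries;
    }
}
```

For example, with `maxEntries = 2`, putting keys 1 and 2, reading key 1, then putting key 3 evicts key 2, because reading key 1 marked it as recently used.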

DirectMemory itself manages the first (in-heap) layer, which is of course the fastest and acts as a queued buffer for the other layers; it also handles eviction (pluggable as well, covering both expired and over-quota items, with quotas configurable per layer), performance monitoring (leveraging JavaSimon), and so on. DirectMemory uses pluggable serializers (two as of today: one based on standard Java serialization and one based on protostuff-runtime, which is more efficient and doesn't require objects to implement Serializable), and I have also written a simple, experimental disk storage engine.
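A pluggable serializer boils down to a tiny contract: object in, bytes out, and back. The interface and class below are a hypothetical sketch of that contract (not DirectMemory's actual API), with an implementation based on standard Java serialization; a protostuff-based implementation would satisfy the same interface without requiring `Serializable`:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Hypothetical serializer contract: the cache only ever sees byte[],
// so any serialization library can be plugged in behind this interface.
interface Serializer {
    byte[] serialize(Object obj) throws IOException;
    Object deserialize(byte[] source) throws IOException, ClassNotFoundException;
}

// Variant based on standard Java serialization: simple, but requires
// cached objects to implement Serializable.
class StandardSerializer implements Serializer {
    public byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
            oos.writeObject(obj);
        }
        return baos.toByteArray();
    }

    public Object deserialize(byte[] source) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(source))) {
            return ois.readObject();
        }
    }
}
```

A round trip is then `serializer.deserialize(serializer.serialize(obj))`, which must return an object equal to the original for the cache to be correct.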

The project is in its early stages but is fully usable and well covered by unit tests; you can check out the code and a simple web demo from GitHub, or just download the latest jar. DirectMemory is written in Java, built and managed with Maven, and leverages AOP (AspectJ) for performance monitoring and, in the near future, tracing and eviction.

Next steps will be:

  • Integration of a NoSQL backend (OrientDB would be just perfect, but I was also thinking about Voldemort)
  • Network distribution, using JGroups or Hazelcast (of course OrientDB and Voldemort could also be a solution to this)
  • Extensive testing with huge quantities (4+ GB) of RAM – any volunteer for this? 🙂

Keep in touch here, on twitter or checkout the project wiki for further information and releases – feedback is much appreciated.


6 Responses to Introducing DirectMemory cache

  1. Andrei Pozolotin says:

    did you try to benchmark DirectMemory against BigMemory?

    • No, BigMemory is a paid product – which I cannot afford – and I'm sure it performs better than DirectMemory (which is in its early stages and also a bit stuck at the moment because of my lack of spare time). In any case, paid software licenses usually prohibit publishing benchmark results without permission.

  2. james says:

    is it stable enough? i want to use it in a production env. thanks

  3. Yuan Chiang says:

    Hi Raffaele,

    I am a beginner using direct memory in my work. Can you recommend some tutorials or examples we could use as a reference? I am trying the following code example posted on the DirectMemory web site but it doesn't work.

    cacheService = new DirectMemory()
        .setNumberOfBuffers( 100 )
        .setSize( Ram.Mb( 1 ) )
        .setInitialCapacity( 100000 )
        .setConcurrencyLevel( 4 )
        .newCacheService();

    try {
        for (int i = 0; i < maxKeys; i++) {
            TestClass t = new TestClass(i);
            cacheService.put(new Integer(i), t);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    I found that cacheService starts losing cached elements gradually after I push them into the cache.

    i also tried cacheService.scheduleDisposalEvery(30000) but still got no luck.

    Can you point out where I got it wrong?

    Many thanks
