single key - multiple values buckets

Rusty Klophaus rusty at
Sat Feb 27 09:01:31 EST 2010

Hi Paweł,

There are currently no plans for bag type functionality in Riak.

For use cases like this, where you have a steady stream of many small
updates hitting the same objects repeatedly, I would recommend that you
batch things up:

   - Put something in front of Riak that will accumulate entries in memory
   for each object. Something like Redis would be ideal for this, but
   memcached would also work.

   - After you have accumulated a certain number of entries (anywhere
   between 10 and 1000, depending on the size and shape of your dataset and
   the amount of RAM you have), or after a certain time interval (depending
   on the durability you need), write the accumulated entries to your Riak
   object.

This lets you use memory to mitigate a CPU-, network-, or disk-bound problem.
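The batching approach above can be sketched roughly as follows. This is a hypothetical illustration, not Riak or Redis client code: the in-memory dict stands in for the Redis/memcached accumulator, and the `flush` callback stands in for a single Riak PUT that appends a whole batch to the stored object. The class name, parameters, and thresholds are all made up for the example.

```python
import time
from collections import defaultdict


class Batcher:
    """Accumulate entries per key in memory; flush a key's batch when it
    grows past max_entries or ages past max_age seconds. (Hypothetical
    sketch: flush would be a single Riak PUT in practice.)"""

    def __init__(self, flush, max_entries=100, max_age=5.0):
        self.flush = flush              # callable(key, entries)
        self.max_entries = max_entries
        self.max_age = max_age
        self.batches = defaultdict(list)
        self.started = {}               # key -> time of first buffered entry

    def add(self, key, entry):
        # Record when this batch started, then buffer the entry.
        if key not in self.started:
            self.started[key] = time.monotonic()
        self.batches[key].append(entry)
        # Size-based flush: the batch is big enough to be worth one PUT.
        if len(self.batches[key]) >= self.max_entries:
            self._flush_key(key)

    def tick(self):
        """Call periodically: time-based flush bounds how long an entry
        can sit in memory, i.e. how much durability you give up."""
        now = time.monotonic()
        stale = [k for k, t in self.started.items()
                 if now - t >= self.max_age]
        for key in stale:
            self._flush_key(key)

    def _flush_key(self, key):
        self.flush(key, self.batches.pop(key))
        del self.started[key]
```

The trade-off is visible in the two thresholds: a larger `max_entries` means fewer, bigger writes to Riak; a smaller `max_age` means less data lost if the accumulator dies before flushing.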

Does that make sense?


On Tue, Feb 23, 2010 at 3:48 AM, Paweł Peregud <paulperegud at> wrote:

> Hi,
> I was wondering if there are plans to introduce buckets with mnesia
> bag functionality? Or perhaps a per-bucket trigger on PUT that could
> run locally and merge the new data with any existing data, using a
> user-provided Erlang function?
> I'm using Riak as storage for my indexing data (the dataset itself is
> in the tens of GB). At the moment my indexer produces data at a rate
> of 2400 records per second on a single machine. Needing to read and
> write each of those keys slows things down. For most of my buckets a
> simple value append would do the trick.
> Best,
> Paul.
> _______________________________________________
> riak-users mailing list
> riak-users at
