Is Riak suitable for a small-record, write-intensive, billion-record application?

Joshua Muzaaya joshmuza at
Mon Oct 22 09:41:34 EDT 2012

Riak can certainly do this, but you will have to be strong-willed and ready to learn. If you want a faster route to the same goal, have you checked out Couchbase Server? It, too, can handle this data, and its setup is painless, with finished SDKs, JSON in and out, and built-in memcached hashing. You can add or remove nodes from the cluster at run-time.

This is not meant to down-market Riak, but you mentioned billions of records, and Riak's storage is known to have a few issues as data grows into the billions. Couchbase 2.0, on the other hand, has been somewhat battle-tested, using SQLite at the storage layer. You can get more information from their site.

So, besides Riak, give Couchbase a test too.

On Mon, Oct 22, 2012 at 11:23 AM, Jens Rantil <jens.rantil at> wrote:

> Hi Yassen,
>
> > Any given node can be stopped or additional nodes can be added with
> > almost no interruption. If the active node is taken down, CARP will appoint
> > a new active node and its front-end will start accepting requests replacing
> > the gone node. New nodes will announce themselves to the front-end apps via
> > multicast.
>
> But CARP only handles the case when the _machine_ goes down, right? Have
> you planned for the scenario where Riak goes down but the machine stays
> responsive? If not, haproxy could be an option.
> Regards,
> Jens
> _______________________________________________
> riak-users mailing list
> riak-users at
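
Jens's haproxy suggestion is worth spelling out. A minimal sketch, with placeholder node addresses: Riak's HTTP interface listens on port 8098 by default and answers GET /ping, so haproxy can use that as a health check and route around a node whose Riak process is down even while the machine itself stays up.

    listen riak
        bind *:8098
        mode http
        balance roundrobin
        # Mark a backend down when Riak stops answering /ping,
        # even if the host is still reachable.
        option httpchk GET /ping
        server riak1 10.0.0.1:8098 check
        server riak2 10.0.0.2:8098 check
        server riak3 10.0.0.3:8098 check

The server names and 10.0.0.x addresses are assumptions for illustration; substitute your own cluster members.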

Muzaaya Joshua
Systems Engineer
"Through it all, I have learned to trust in Jesus. To depend upon His Word"
