Is Riak suitable for a small-record, write-intensive, billion-record application?

Les Mikesell lesmikesell at gmail.com
Fri Oct 19 08:42:42 EDT 2012


On Fri, Oct 19, 2012 at 6:57 AM, Guido Medina <guido.medina at temetra.com> wrote:
> Riak is all about high availability, if eventually consistent data is not a
> problem

What is the 'eventually consistent' result of simultaneous inserts of
different values for a new key at different nodes? Does partitioning
affect this case?
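(For readers following the thread: with allow_mult enabled, Riak answers this by keeping both causally concurrent values as "siblings" and handing the conflict back to the reader. Here is a minimal toy sketch of that semantics -- this is an illustration of vector-clock sibling resolution, not the real Riak client API, and the names `Replica`, `put`, and `merge_from` are invented for the example.)

```python
def dominates(a, b):
    """True if vector clock a causally descends from b (a >= b everywhere, a != b)."""
    return all(a.get(k, 0) >= v for k, v in b.items()) and a != b

class Replica:
    def __init__(self, name):
        self.name = name
        self.store = {}  # key -> list of (vclock, value) siblings

    def put(self, key, value, context=None):
        # A write with no causal context (e.g. a fresh insert of a new key)
        # does not descend from any existing value, so after a merge it
        # cannot overwrite a concurrent write -- it becomes a sibling.
        vclock = dict(context or {})
        vclock[self.name] = vclock.get(self.name, 0) + 1
        self.store.setdefault(key, []).append((vclock, value))

    def merge_from(self, other):
        # Anti-entropy after the partition heals: keep every value whose
        # vector clock is not dominated by some other sibling's clock.
        for key, sibs in other.store.items():
            merged = self.store.get(key, []) + sibs
            survivors = [
                (vc, val) for vc, val in merged
                if not any(dominates(vc2, vc) for vc2, _ in merged)
            ]
            unique = []
            for s in survivors:
                if s not in unique:
                    unique.append(s)
            self.store[key] = unique

# Two sides of a partition insert different values for the same NEW key:
a, b = Replica("a"), Replica("b")
a.put("user:1", "alice")
b.put("user:1", "bob")
# Partition heals; neither write descends from the other, so both
# survive as siblings the application must reconcile on read:
a.merge_from(b)
assert len(a.store["user:1"]) == 2
```

So "eventually consistent" here does not pick a winner for you: both values survive until an application-level read resolves them.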

> OR, you can cover those aspects of the CAP concept with an in-memory
> caching system and a sort of a locking mechanism to emulate the core atomic
> action of your application (put-if-absent) then I would say, you are in the
> right place,

What happens if the partitioning that Riak is so concerned about
happens between the writer and the lock -- or between the nodes
providing redundancy for the lock?

> All this said, it is at your hands and tools to have an in-memory cache and
> locking mechanism.

If you have more than one writer, doesn't this need to be just as
distributed and robust as Riak itself?
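(To make the concern concrete: if the "check" and the "put" of put-if-absent consult only a local view, two writers on opposite sides of a partition can both succeed. A toy sketch -- the replica dictionaries and `put_if_absent` helper are invented for illustration, not anything in Riak's API:)

```python
# Two replicas that cannot talk to each other during a partition.
replica_a, replica_b = {}, {}

def put_if_absent(replica, key, value):
    # Without coordination spanning ALL replicas, this check-then-set
    # is local only, so it cannot enforce a global uniqueness invariant.
    if key not in replica:
        replica[key] = value
        return True
    return False

ok_a = put_if_absent(replica_a, "user:1", "alice")  # writer 1, side A
ok_b = put_if_absent(replica_b, "user:1", "bob")    # writer 2, side B
assert ok_a and ok_b  # both "won" -- uniqueness was not enforced
```

Any in-memory cache or lock that prevents this has to survive the same partitions Riak is designed around, which is the point of the question above.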

-- 
   Les Mikesell
     lesmikesell at gmail.com




More information about the riak-users mailing list