Is Riak suitable for a small-record, write-intensive, billion-record application?

Guido Medina guido.medina at temetra.com
Thu Oct 18 08:06:52 EDT 2012


Hi,

   That's exactly the kind of workload Riak is designed for; there is 
hardly a better fit for Riak than the scenario you describe. Do take 
the consistency, availability and concurrency of your writes into 
account, though: you may want to implement a locking mechanism combined 
with an in-memory cache, so that you can lock per key and make the 
operation atomic (a sort of put-if-absent). We use a similar model with 
an in-memory cache, and it works like a charm.
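
For illustration, here is a rough Java sketch of that per-key 
put-if-absent idea, not our exact code; riakFetch and riakStore below 
are placeholders for whatever client calls you use, not the real Riak 
Java client API:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class PutIfAbsentCache {

        // In-memory cache of records already known to exist.
        private final ConcurrentMap<String, Record> cache =
                new ConcurrentHashMap<>();

        public Record putIfAbsent(String key, Record candidate) {
            // Fast path: the record is already cached.
            Record cached = cache.get(key);
            if (cached != null) {
                return cached;
            }
            // computeIfAbsent runs the mapping function atomically per
            // key, so concurrent writers of the same key are serialized.
            return cache.computeIfAbsent(key, k -> {
                Record existing = riakFetch(k);  // placeholder Riak read
                if (existing != null) {
                    return existing;             // stored by someone else
                }
                riakStore(k, candidate);         // placeholder Riak write
                return candidate;
            });
        }

        private Record riakFetch(String key) { /* client fetch */ return null; }
        private void riakStore(String key, Record value) { /* client store */ }

        public static class Record { /* ~6 string fields, ~160 bytes */ }
    }

Note this only makes the check-then-write atomic within one JVM; with 
several application nodes you would still need to route all writes for 
a given key to the same node, or tolerate the occasional race and 
resolve it through Riak's sibling mechanism.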

Best regards,

Guido.

On 18/10/12 12:42, Yassen Damyanov wrote:
> Hi everyone,
>
> I'm absolutely new (and ignorant) when it comes to NoSQL solutions and
> to Riak (my apologies), though I have extensive experience with SQL
> RDBMSs.
>
> We are considering a NoSQL DB deployment for a mission-critical
> application where we need to store several hundreds of MILLIONS of
> data records, each record consisting of about 6 string fields, with a
> total record length of 160 bytes. Each record has a unique key that
> seems suitable for hashing (a 20+ byte string, e.g.
> "cle01_tpls01_2105328884").
>
> The application should be able to write several hundreds of new
> records per second, but it must first check whether the unique key
> already exists. The record is written only if the key is not there;
> if it is, the app retrieves the whole existing record and returns it
> to the client, and no write happens in that case.
>
> I need to know whether Riak would be suitable for such an
> application. Please advise, thanks!
>
> (Again, apologies for my ignorance. If we choose Riak, I promise to
> get educated ;)
>
> Yassen
>
> _______________________________________________
> riak-users mailing list
> riak-users at lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




