Is Riak suitable for a small-record, write-intensive, billion-record application?

Yassen Damyanov yassen.tis at gmail.com
Thu Oct 18 07:42:41 EDT 2012


Hi everyone,

I'm absolutely new (and ignorant) when it comes to NoSQL solutions and
to Riak, my apologies; I do have extensive experience with SQL RDBMSs.

We are considering a NoSQL DB deployment for a mission-critical
application where we need to store several hundred MILLION data
records, each consisting of about 6 string fields, for a total record
length of about 160 bytes. Each record has a unique key that seems
suitable for hashing (a 20+ byte string, e.g. "cle01_tpls01_2105328884").
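For concreteness, here is roughly what one such record might look like
as a Python dict. The field names and values are made up; only the
shape (6 string fields, ~160 bytes, 20+ byte key) matches what I
described:

    # Hypothetical field names and placeholder values; only the shape
    # is real: 6 string fields, ~160 bytes total.
    record = {
        "field_1": "aaa",
        "field_2": "bbb",
        "field_3": "ccc",
        "field_4": "ddd",
        "field_5": "eee",
        "field_6": "fff",
    }
    key = "cle01_tpls01_2105328884"  # the 20+ byte unique key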

The application should be able to write several hundred new records
per second, but it must first check whether the unique key already
exists. The record is written only if the key is not there; if it is,
the app needs to retrieve the whole existing record and return it to
the client, and no write is done in that case.
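To make the intended flow concrete, here is a rough sketch of that
check-before-write as I picture it, assuming the Python Riak client
(exact property and method names may differ between client versions)
and a made-up bucket called "records". As far as I can tell, the get
and the subsequent store would be two separate requests, not one
atomic operation:

    import riak

    # Connect with the client's defaults (local node); "records" is a
    # hypothetical bucket name.
    client = riak.RiakClient()
    bucket = client.bucket("records")

    def get_or_create(key, fields):
        # Fetch first: the existence check and the write are two
        # separate operations against the cluster.
        obj = bucket.get(key)
        if obj.exists:
            return obj.data                   # key taken: return record
        bucket.new(key, data=fields).store()  # key free: write record
        return None

    # Using the key and record sketched above:
    existing = get_or_create(key, record)
    if existing is not None:
        print("key already present:", existing)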

I need to know whether Riak would be suitable for such an application.
Please advise, thanks!

(Again, apologies for my ignorance. If we choose Riak, I promise to
get educated ;)

Yassen



