Is Riak suitable for a small-record, write-intensive, billion-record application?
yassen.tis at gmail.com
Tue Oct 23 13:50:36 EDT 2012
Thanks Jens, Joshua, Sean, Jared; I'll surely give Couchbase a try (no offense).
CARP works for machines, not for services, right. I'll think about that aspect.
Again, any suggestion for a *storage backend* for our use case?
> We are considering a NoSQL DB deployment for a mission-critical application where we need to store several hundred million data records, each consisting of about 6 string fields, with a total record length of about 160 bytes. Each record has a unique key that seems suitable for hashing (a 20+ byte string, e.g. "cle01_tpls01_2105328884").
> The application should be able to write several hundred new records per second, but must first check whether the unique key already exists. The write is performed only if the key is not there. If it is, the app needs to retrieve the whole existing record and return it to the client, and no write is done in this case.
> We need to have a cluster of at least 2-3 nodes, which must be able to grow easily if need be.
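The check-then-write path described above can be sketched roughly as follows. This is a minimal illustration using an in-memory dict as a stand-in for the key/value backend; `put_if_absent` and the record layout are hypothetical names for this sketch, not a Riak client API (and note that in a distributed store this check-then-write would need to be made atomic, e.g. via conditional puts or sibling resolution, to avoid races between concurrent writers).

```python
# Stand-in for the KV backend (a real deployment would use a Riak bucket
# or similar). Keys are the unique 20+ byte strings from the records.
store = {}

def put_if_absent(key, record):
    """Write `record` under `key` only if the key is new.

    Returns (created, stored_record). On a duplicate key nothing is
    written, and the already-stored record is returned so the app can
    hand it back to the client.
    """
    existing = store.get(key)
    if existing is not None:
        return False, existing   # duplicate key: no write, return stored record
    store[key] = record          # new key: perform the write
    return True, record

# First insert succeeds; a second call with the same key returns the
# original record unchanged.
created, rec = put_if_absent("cle01_tpls01_2105328884", {"field1": "value"})
```

Under these requirements the backend mostly needs a fast existence check plus a read on the miss path, which is why a hash-friendly unique key helps.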
More information about the riak-users mailing list