Is Riak suitable for a small-record, write-intensive, billion-record application?
guido.medina at temetra.com
Fri Oct 19 07:57:22 EDT 2012
Riak is all about high availability. If eventually consistent data is
not a problem, or you can cover those aspects of the CAP theorem with an
in-memory caching system and some kind of locking mechanism to emulate
the core atomic action of your application (put-if-absent), then I would
say you are in the right place. As for hashing, Riak uses bloom filters
and a hashing mechanism from Google code (this is not my expertise,
though, so I could be wrong); you should be fine letting Riak manage
your hashing and equality concepts.
And from the Java world: if object A equals object B, then A's hash
equals B's hash, but not the other way around; two objects can have the
same hash and still not be equal. If that is what you are referring to.
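To make that contract concrete, here is a minimal sketch (the `Key` class is purely illustrative, not anything from Riak): equal objects must produce the same hash code, while a hash collision alone does not imply equality. `"Aa"` and `"BB"` are a well-known `String.hashCode()` collision pair.

```java
// Illustrative value class showing the Java equals/hashCode contract:
// if a.equals(b) then a.hashCode() == b.hashCode(), but two objects
// with the same hash code are not necessarily equal.
public class HashContractDemo {
    static final class Key {
        private final String value;
        Key(String value) { this.value = value; }
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).value.equals(value);
        }
        @Override public int hashCode() { return value.hashCode(); }
    }

    public static void main(String[] args) {
        Key a = new Key("riak");
        Key b = new Key("riak");
        // Equal objects must share a hash code
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // true
        // "Aa" and "BB" collide under String.hashCode() yet are not equal
        System.out.println("Aa".hashCode() == "BB".hashCode()); // true
        System.out.println("Aa".equals("BB")); // false
    }
}
```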
Hashing a key is a no-brainer if Riak delegates it to whatever
best-practice algorithm should be used for hashing. In this case I
strongly believe they are using Google's algorithms; all indications are
that they do, given the bloom filters from Google they are already
using (which ties the two concepts together).
All that said, it is in your hands, with your own tools, to provide the
in-memory cache and locking mechanism.
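A minimal sketch of what I mean (the class and method names here are illustrative, not a real Riak API): a `ConcurrentHashMap` gives you the atomic put-if-absent decision in memory, and only the caller that wins that race would go on to write the record to Riak.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of an in-memory cache emulating an atomic put-if-absent
// in front of an eventually consistent store.
public class PutIfAbsentCache {
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

    /** Returns true if this caller claimed the key (and should write the record). */
    public boolean claim(String key, String value) {
        // putIfAbsent is atomic: of any set of concurrent callers,
        // exactly one gets null back and wins the race
        return cache.putIfAbsent(key, value) == null;
    }

    public static void main(String[] args) {
        PutIfAbsentCache c = new PutIfAbsentCache();
        System.out.println(c.claim("record-1", "v1")); // true: first writer wins
        System.out.println(c.claim("record-1", "v2")); // false: key already claimed
    }
}
```

In a multi-node setup this map would of course have to live somewhere shared (or be backed by a distributed lock), which is exactly the part Riak does not hand you for free.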
On 19/10/12 07:10, Yassen Damyanov wrote:
> On Fri, Oct 19, 2012, Yassen Damyanov <yassen.tis at gmail.com> wrote:
>> Whatever the solution, it needs to be symmetric, that is, all
>> nodes must be equivalent.
> With "symmetric" I mean more "interchangeable" than "functionally
> equal". That is,
> if a node plays a central role and goes down, the system should be
> able to pick a new "master" on its own and any other node should be
> able to become such.
> Guys, your input is MUCH appreciated. Thank you!
> riak-users mailing list
> riak-users at lists.basho.com