Is Riak suitable for a small-record, write-intensive, billion-record application?
guido.medina at temetra.com
Fri Oct 19 11:59:20 EDT 2012
About a distributed locking mechanism, you might want to take a look at
Google's service called Chubby. Ctrl + F on that link:
On 19/10/12 16:47, Guido Medina wrote:
> A locking mechanism is easy on a single server, but not on a cluster;
> that's why you don't see many multi-master databases, right? Riak
> instead focuses on high availability and partition tolerance, not
> consistency. Notice that consistency is tied to locking, i.e. a
> single access per key, so you have to decide which one to focus on.
> From the Java world, and specifically from the ConcurrentMap<K, V>
> idiom, you use put-if-absent.
> In pseudo-language you:
> *lock(key)* (synchronized or a re-entrant lock, it doesn't matter).
> Once you hold the lock, check whether the key exists; if it doesn't,
> create it; if it does, exit the lock ASAP, since this is meant to be
> a very quick atomic operation.
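The put-if-absent idiom and the lock-then-check pseudocode above can be sketched in Java. `ConcurrentMap.putIfAbsent` is the real API; the explicit-lock version below mirrors the pseudocode:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentExample {
    public static void main(String[] args) {
        ConcurrentMap<String, String> map = new ConcurrentHashMap<>();

        // Atomic create-if-missing: no explicit lock needed.
        // Returns null if we created the entry, the existing value otherwise.
        String previous = map.putIfAbsent("user:1", "created");
        System.out.println(previous == null ? "created" : "already existed");

        // Equivalent explicit-lock version of the pseudocode:
        Object lock = new Object();
        synchronized (lock) {
            if (!map.containsKey("user:2")) {
                map.put("user:2", "created");
            }
            // exit the lock ASAP
        }
        System.out.println(map.size()); // 2
    }
}
```

Note that both forms only serialize access within a single JVM, which is exactly the point of the thread: on a cluster there is no such shared lock.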
> Regarding siblings: Riak allows many concurrent copies of the same
> key, and when you fetch that key you get all of the copies, so YOU
> have to figure out how to assemble a consistent value from all the
> written versions (because there is no distributed lock per key).
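One hypothetical way to assemble a consistent value from sibling versions, assuming your values are sets (e.g. tags per key), is to take the union of all siblings so no concurrent write is lost. This sketch uses plain Java collections rather than the Riak client API; the `resolve` function is an illustration, not Riak's own method:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SiblingMerge {
    // Hypothetical resolver: merge all sibling versions by set union,
    // so concurrent writes from different clients are all preserved.
    static Set<String> resolve(List<Set<String>> siblings) {
        Set<String> merged = new HashSet<>();
        for (Set<String> sibling : siblings) {
            merged.addAll(sibling);
        }
        return merged;
    }

    public static void main(String[] args) {
        List<Set<String>> siblings = List.of(
                Set.of("a", "b"),   // version written by client 1
                Set.of("b", "c"));  // concurrent version from client 2
        System.out.println(resolve(siblings)); // contains a, b, c (order varies)
    }
}
```

This only works for naturally mergeable data; for non-mergeable values you have to pick a winner yourself, which is the difficulty the thread is describing.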
> I don't think I can explain it in two more paragraphs; you will have
> to watch this presentation:
> I'm limited to a certain level...
> On 19/10/12 16:32, Les Mikesell wrote:
>> On Fri, Oct 19, 2012 at 8:02 AM, Guido Medina<guido.medina at temetra.com> wrote:
>>> It depends. If you have siblings enabled on the bucket, then you need to
>>> resolve the conflicts using the object's vclock;
>> How does that work for simultaneous initial inserts?
>>> if you are not using siblings, last write wins. Either way, I
>>> haven't had good results delegating that task to Riak: with
>>> siblings, I eventually outran Riak's write speed and made it fail
>>> (due to LevelDB write speed?), and with last write wins you get
>>> unexpected results. Hence my recommendation: we use two things to
>>> resolve such issues, an in-memory cache plus a locking mechanism.
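A minimal single-JVM sketch of the "in-memory cache plus locking mechanism" idea, assuming all writers go through this one process (the class name and methods are illustrative, not from the thread). A per-key lock object serializes writers for the same key while leaving different keys uncontended:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PerKeyLockCache {
    private final ConcurrentMap<String, Object> locks = new ConcurrentHashMap<>();
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

    // One lock object per key: writers for the same key serialize,
    // writers for different keys do not contend.
    String insertOnce(String key, String value) {
        Object lock = locks.computeIfAbsent(key, k -> new Object());
        synchronized (lock) {
            String existing = cache.get(key);
            if (existing != null) {
                return existing;   // insert lost: key already present
            }
            cache.put(key, value); // here you would also write to Riak
            return value;
        }
    }

    public static void main(String[] args) {
        PerKeyLockCache c = new PerKeyLockCache();
        System.out.println(c.insertOnce("k", "first"));  // first
        System.out.println(c.insertOnce("k", "second")); // first (second insert ignored)
    }
}
```

The return value tells the caller whether its insert won, which addresses the point below about the client needing to know its insert was ignored; it does not help once writers span multiple machines.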
>> The problem is when the inserting client should handle new keys and
>> updates differently, or at least be aware that its insert failed or
>> will be ignored later.
>>> On that last point: a well-designed locking mechanism will always
>>> take care of that.
>> If it is easy, why doesn't riak handle it?