Riak Recap for Dec. 13 - 14
daniel.y.woo at gmail.com
Thu Dec 16 21:01:25 EST 2010
Thanks for your explanation. So in this case the partitions would be:
Node1: p1 ~ p16
Node2: p17 ~ p32
Node3: p33 ~ p48
Node4: p49 ~ p64
Node1: p1 ~ p13 (remove 3 partitions)
Node2: p17 ~ p29 (remove 3 partitions)
Node3: p33 ~ p45 (remove 3 partitions)
Node4: p49 ~ p61 (remove 3 partitions)
Node5: p14, 15, 16, 30, 31, 32, 46, 47, 48, 62, 63, 64 (approx. 1/5 of the
partitions will be transferred to this new node)
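The reassignment in the table above can be reproduced with a toy sketch (Python). This is just the arithmetic of the table, not Riak's actual claim algorithm: each old node keeps the first 13 partitions of its block of 16 and hands the last 3 to the new node.

```python
PARTITIONS = 64

def owners_4_nodes():
    # Contiguous blocks: p1..p16 -> Node1, p17..p32 -> Node2, ...
    return {p: (p - 1) // 16 + 1 for p in range(1, PARTITIONS + 1)}

def owners_5_nodes():
    # Each old node keeps the first 13 partitions of its block of 16;
    # the last 3 of every block go to the new Node5.
    owners = {}
    for p in range(1, PARTITIONS + 1):
        offset = (p - 1) % 16  # position within the old node's block
        owners[p] = (p - 1) // 16 + 1 if offset < 13 else 5
    return owners

before, after = owners_4_nodes(), owners_5_nodes()
moved = [p for p in before if before[p] != after[p]]
print(moved)            # [14, 15, 16, 30, 31, 32, 46, 47, 48, 62, 63, 64]
print(len(moved) / 64)  # 0.1875, roughly 1/5 of the partitions
```

Only the 12 transferred partitions change owner; the other 52 stay where they were.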
Since there is no centralized node in Riak, how does the client caller know
that partition 32 has moved to node 5? Cassandra seems to split half of the
adjacent node's data onto the new node, which makes it easy for the client
to find a datum by walking the node ring clockwise, although it causes
unbalanced data distribution and you have to rebalance with command-line
tools. Riak seems to solve this by moving partitions onto the new node
evenly, which is very interesting. How do you do that? If the client caller
queries for partition 32, which was originally on node 2, how does the
client know it is on a new node now?
On Fri, Dec 17, 2010 at 6:56 AM, Mark Phillips <mark at basho.com> wrote:
> Hey Daniel,
> > So, I guess Riak would have to re-hash all the partitions across all the
> > nodes, right? Is this done lazily, when a node finds the requested data
> > missing?
> > Or is there a way to handle this with consistent re-hashing, so we can
> > avoid moving data around when new nodes are added?
> Riak won't rehash all the partitions in the ring when new nodes are
> added. When you go from 4 -> 5 nodes, for example, approx. 1/5 of the
> existing partitions are transferred to the new node. The other 4/5s of
> the partitions will remain unchanged. As far as moving data around
> when new nodes are added, this is impossible to avoid. Data needs to
> be handed off to be spread around the ring.
> Hope that helps.
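The "only ~1/5 of partitions move" behavior Mark describes is the classic consistent-hashing property. Here is a minimal, generic sketch (Python; textbook consistent hashing with virtual points per node, not Riak's actual ring/claim code):

```python
# Generic consistent hashing: each node places several "virtual" points on
# a fixed hash ring; a key belongs to the first node point clockwise from
# the key's hash. Adding a node only captures the arcs its new points
# claim, so on average only ~1/N of the keys change owner.
import bisect
import hashlib

RING = 2 ** 32
VNODES = 50  # virtual points per node, to spread its claim around the ring

def hpos(s: str) -> int:
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % RING

def build_ring(nodes):
    points = [(hpos(f"{n}-{v}"), n) for n in nodes for v in range(VNODES)]
    points.sort()
    return points

def owner(key, ring):
    # First point at or clockwise from the key's hash (wrapping at the top).
    i = bisect.bisect(ring, (hpos(key),)) % len(ring)
    return ring[i][1]

keys = [f"user:{i}" for i in range(2000)]
r4 = build_ring(["node1", "node2", "node3", "node4"])
r5 = build_ring(["node1", "node2", "node3", "node4", "node5"])
moved = [k for k in keys if owner(k, r4) != owner(k, r5)]
print(f"{len(moved) / len(keys):.0%} of keys changed owner")
```

Because adding node5 only inserts new points into the ring, every key whose owner changes is captured by node5; all other keys stay put. Riak achieves the same effect with a fixed number of equal-size partitions that are reclaimed evenly when membership changes, and the updated ring state is what tells any node where partition 32 now lives.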
Thanks & Regards,