Partition distribution between nodes

Luke Bakken lbakken at basho.com
Thu Jun 5 09:55:11 EDT 2014


Hi Manu,

Partition distribution is determined by the claim algorithm. The current
algorithm produces a more even distribution when a cluster is built from
scratch than when nodes are added to an existing ring. There has been work
to improve the algorithm, which you can find here:
https://github.com/basho/riak_core/pull/183
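As a quick illustration (this is just the arithmetic, not the actual claim
algorithm), the best possible split of a 64-partition ring across 5 nodes is
necessarily uneven, which matches the "from scratch" percentages below:

```python
RING_SIZE = 64

def partition_counts(num_nodes, ring_size=RING_SIZE):
    # Ideal split: each node gets floor(ring_size / num_nodes) partitions,
    # and the remainder is handed out as one extra partition per node.
    base, extra = divmod(ring_size, num_nodes)
    return [base + 1] * extra + [base] * (num_nodes - extra)

counts = partition_counts(5)
print(counts)  # [13, 13, 13, 13, 12]
print([round(100 * c / RING_SIZE, 1) for c in counts])
# [20.3, 20.3, 20.3, 20.3, 18.8]
```

So even a perfect rebalance leaves one node at 18.8%; the add-a-node case is
worse (25% / 18.8%) because the claim algorithm does not fully rebalance.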

--
Luke Bakken
CSE
lbakken at basho.com


On Mon, Jun 2, 2014 at 11:51 PM, Manu Mäki - Compare Group <
m.maki at comparegroup.eu> wrote:

>  Hi Luke,
>
>  Do you have any idea why creating the cluster from scratch results in a
> “more balanced” cluster? Is this because the actual partitions are not of
> equal size?
>
>
>  Manu
>
>   From: Luke Bakken <lbakken at basho.com>
> Date: Monday 2 June 2014 19:34
> To: Manu Maki <m.maki at comparegroup.eu>
> Cc: "riak-users at lists.basho.com" <riak-users at lists.basho.com>
> Subject: Re: Partition distribution between nodes
>
>   Hi Manu,
>
>  I see similar vnode distribution in my local dev cluster. This is due to
> 64 not being evenly divisible by 5.
>
>  4 nodes:
>
>  $ dev1/bin/riak-admin member-status
> ================================= Membership ==================================
> Status     Ring    Pending    Node
> -------------------------------------------------------------------------------
> valid      25.0%      --      'dev1 at 127.0.0.1'
> valid      25.0%      --      'dev2 at 127.0.0.1'
> valid      25.0%      --      'dev3 at 127.0.0.1'
> valid      25.0%      --      'dev4 at 127.0.0.1'
> -------------------------------------------------------------------------------
>
>  5th node added:
>
>  $ dev1/bin/riak-admin member-status
> ================================= Membership ==================================
> Status     Ring    Pending    Node
> -------------------------------------------------------------------------------
> valid      18.8%      --      'dev1 at 127.0.0.1'
> valid      18.8%      --      'dev2 at 127.0.0.1'
> valid      18.8%      --      'dev3 at 127.0.0.1'
> valid      25.0%      --      'dev4 at 127.0.0.1'
> valid      18.8%      --      'dev5 at 127.0.0.1'
> -------------------------------------------------------------------------------
>
>  Cluster *from scratch* with 5 nodes:
>
>  $ dev1/bin/riak-admin member-status
> ================================= Membership ==================================
> Status     Ring    Pending    Node
> -------------------------------------------------------------------------------
> valid      20.3%      --      'dev1 at 127.0.0.1'
> valid      20.3%      --      'dev2 at 127.0.0.1'
> valid      20.3%      --      'dev3 at 127.0.0.1'
> valid      20.3%      --      'dev4 at 127.0.0.1'
> valid      18.8%      --      'dev5 at 127.0.0.1'
> -------------------------------------------------------------------------------
>
>  --
> Luke Bakken
> CSE
> lbakken at basho.com
>
>
> On Mon, Jun 2, 2014 at 6:52 AM, Manu Mäki - Compare Group <
> m.maki at comparegroup.eu> wrote:
>
>>  Hi all,
>>
>>  In the beginning we were running four nodes with an n-value of 2. The
>> partitions were distributed 25% per node. When we added a fifth node
>> (still with an n-value of 2), the partitions were distributed as follows:
>> 25%, 19%, 19%, 19% and 19%. The ring size in use is 64. Is this normal
>> behavior? The cluster seems to be working correctly; however, I was
>> expecting each node to hold 20% of the partitions.
>>
>>
>>  Best regards,
>> Manu Mäki
>>
>> _______________________________________________
>> riak-users mailing list
>> riak-users at lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>