Riak One partition handoff stall

Gaurav Sood gaurav.sood at mediologysoftware.com
Mon May 28 09:11:06 EDT 2018


Thanks Bryan

Below is the output of the command `riak-admin vnode-status`. Maybe data transfer
has stopped on the claimant node.

Output of all commands is constant.
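Since the transfer looks stalled rather than just slow, one thing worth trying (a sketch, not something suggested earlier in this thread; it assumes console access to a running node via `riak attach`) is asking riak_core to retry any pending handoffs:

```shell
# Attach to the running node's Erlang console
# (Ctrl-C then Ctrl-C again, or Ctrl-D, detaches without stopping the node)
riak attach

# At the Erlang prompt, ask the vnode manager to retry pending handoffs:
#
#   riak_core_vnode_manager:force_handoffs().
#
# Then watch `riak-admin transfers` from another shell to see whether
# the stalled partition starts moving.
```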

1)

VNode: 342539446249430371453988632667878832731859189760
Backend: riak_kv_eleveldb_backend
Status:
[{stats,<<"                               Compactions\nLevel  Files Size(MB) Time(sec) Read(MB) Write(MB)\n--------------------------------------------------\n  0        1        0         0        0         0\n">>},
 {read_block_error,<<"0">>},
 {fixed_indexes,true}]


2) 30GB data per server
3) I am not sure about the number of objects. Is there a way to get a
count of the objects?
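For a rough object count, one option (a sketch; it assumes Riak's default HTTP interface on port 8098 and uses the hypothetical bucket name "mybucket") is to list buckets and stream their keys. Note that key listing walks the entire keyspace and is expensive on a production cluster:

```shell
# List all buckets (assumes the HTTP API on localhost:8098)
curl -s 'http://127.0.0.1:8098/buckets?buckets=true'

# Count the keys in one bucket ("mybucket" is a placeholder).
# WARNING: keys=true streams every key and is costly in production.
curl -s 'http://127.0.0.1:8098/buckets/mybucket/keys?keys=true' | \
  python -c 'import sys, json; print(len(json.load(sys.stdin)["keys"]))'
```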

On Mon, May 28, 2018 at 4:57 PM, Bryan Hunt <bryan.hunt at erlang-solutions.com
> wrote:

> Are you constantly executing a particular riak command, in your system
> monitoring scripts, for example: `riak-admin vnode-status` ?
>
> What size is your data per server ?
>
> How many objects are you storing ?
>
>
> On 28 May 2018, at 08:29, Gaurav Sood <gaurav.sood at mediologysoftware.com>
> wrote:
>
> Hi All - Good Day!
>
> I have a 7-node Riak KV cluster. I recently upgraded it from 1.4.2 to
> 1.4.12 on Ubuntu 16.04. Since the upgrade, whenever I make a node leave the
> cluster, one partition handoff stalls every time and the node shows
> "waiting to handoff 1 partitions" under Active Transfers. To complete the
> process I have to restart the riak service on all nodes, one by one.
>
> I am not sure if it's a configuration problem. Here is the current state
> of the cluster.
>
> *#output of riak-admin member-status*
> ================================= Membership ==================================
> Status     Ring    Pending    Node
> -------------------------------------------------------------------------------
> leaving     0.0%      --      'riak at 192.168.2.10'
> valid      14.1%      --      'riak at 192.168.2.11'
> valid      14.1%      --      'riak at 192.168.2.12'
> valid      15.6%      --      'riak at 192.168.2.13'
> valid      14.1%      --      'riak at 192.168.2.14'
> valid      14.1%      --      'riak at 192.168.2.15'
> valid      14.1%      --      'riak at 192.168.2.16'
> valid      14.1%      --      'riak at 192.168.2.17'
> -------------------------------------------------------------------------------
> Valid:7 / Leaving:1 / Exiting:0 / Joining:0 / Down:0
>
> *#output of riak-admin transfers*
>
> 'riak at 192.168.2.10' waiting to handoff 1 partitions
>
> Active Transfers:
>
> (nothing here)
>
>
> *#Output of riak-admin ring_status*
> ================================== Claimant ===================================
> Claimant:  'riak at 192.168.2.10'
> Status:     up
> Ring Ready: true
>
> ============================== Ownership Handoff ==============================
> No pending changes.
>
> ============================== Unreachable Nodes ==============================
> All nodes are up and reachable
>
> *current Transfer Limit is 2.*
>
> Thanks
> Gaurav
> _______________________________________________
> riak-users mailing list
> riak-users at lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>