WARNING: Not all replicas will be on distinct nodes

Daniel Miller dmiller at dimagi.com
Thu Dec 14 14:49:02 EST 2017


I have a 6-node cluster (now 7) with ring size 128. On adding the most
recent node I got "WARNING: Not all replicas will be on distinct nodes".
After the initial plan produced this warning, I cleared the plan and ran
the following sequence many times, but always got the same plan output:

sudo riak-admin cluster clear && \
sleep 10 && \
sudo service riak start && \
sudo riak-admin wait-for-service riak_kv && \
sudo riak-admin cluster join riak at hqriak20.internal && \
sudo riak-admin cluster plan


The plan looked the same every time, and I eventually committed it because
the cluster capacity is running low:


Success: staged join request for 'riak at riak29.internal' to
'riak at riak20.internal'
=============================== Staged Changes ================================
Action         Details(s)
-------------------------------------------------------------------------------
join           'riak at riak29.internal'
-------------------------------------------------------------------------------


NOTE: Applying these changes will result in 1 cluster transition

###############################################################################
                         After cluster transition 1/1
###############################################################################

================================= Membership ==================================
Status     Ring    Pending    Node
-------------------------------------------------------------------------------
valid      17.2%     14.1%    'riak at riak20.internal'
valid      17.2%     14.8%    'riak at riak21.internal'
valid      16.4%     14.1%    'riak at riak22.internal'
valid      16.4%     14.1%    'riak at riak23.internal'
valid      16.4%     14.1%    'riak at riak24.internal'
valid      16.4%     14.8%    'riak at riak28.internal'
valid       0.0%     14.1%    'riak at riak29.internal'
-------------------------------------------------------------------------------
Valid:7 / Leaving:0 / Exiting:0 / Joining:0 / Down:0

WARNING: Not all replicas will be on distinct nodes

Transfers resulting from cluster changes: 18
  2 transfers from 'riak at riak28.internal' to 'riak at riak29.internal'
  3 transfers from 'riak at riak21.internal' to 'riak at riak29.internal'
  3 transfers from 'riak at riak23.internal' to 'riak at riak29.internal'
  3 transfers from 'riak at riak24.internal' to 'riak at riak29.internal'
  4 transfers from 'riak at riak20.internal' to 'riak at riak29.internal'
  3 transfers from 'riak at riak22.internal' to 'riak at riak29.internal'


My understanding is that if some replicas are not on distinct nodes, then
losing a single physical node could cause permanent data loss (please let
me know if that is not correct). Questions:

How do I diagnose which node(s) have duplicate replicas?
What can I do to fix this situation?
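For context on what I believe the warning means: it fires when some preflist
(a window of n_val consecutive partitions around the ring) places more than
one replica on the same node. The actual ownership list would come from the
ring itself (e.g. riak_core_ring:all_owners/1 via `riak attach`); the sketch
below uses a small hypothetical ownership list just to illustrate the check:

```python
# Sketch: detect preflists (windows of n_val consecutive partitions)
# that place two replicas on the same node. The ownership list here is
# hypothetical; a real one has ring-size entries, one node per partition.
def preflist_violations(owners, n_val=3):
    """Return (ring position, window) pairs whose preflist repeats a node."""
    size = len(owners)
    bad = []
    for i in range(size):
        # The preflist starting at partition i wraps around the ring.
        window = [owners[(i + j) % size] for j in range(n_val)]
        if len(set(window)) < n_val:
            bad.append((i, window))
    return bad

# Toy 8-partition ring across 3 nodes (hypothetical):
owners = ['a', 'b', 'c', 'a', 'b', 'c', 'a', 'b']
for pos, window in preflist_violations(owners):
    print(pos, window)
```

On this toy ring the last two preflists each repeat a node, which is the
kind of placement the warning is (I think) complaining about.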

Thanks!
Daniel


P.S. I am unable to get anything useful out of `riak-admin diag`. It
appears to be broken on the version of Riak I'm using (2.2.1). Here's the
output I get:

$ sudo riak-admin diag
RPC to 'riak at hqriak20.internal' failed:
{'EXIT',
 {undef,
  [{lager,get_loglevels,[],[]},
   {riaknostic,run,1,[{file,"src/riaknostic.erl"},{line,118}]},
   {rpc,'-handle_call_call/6-fun-0-',5,[{file,"rpc.erl"},{line,205}]}]}}