Cannot get third node joined to cluster. Says it is already in a cluster of its own.

Ray Cote rgacote at appropriatesolutions.com
Fri Jun 29 15:57:43 EDT 2012


Hello all:

I've managed to get my new deployment into an odd state.

I have a three-node cluster.
After installation, I ran the riak-admin join commands.
Node #3 happened to be down because of a configuration error -- but something seems to have been configured anyway.
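For reference, the joins were of this general form, run on nodes #2 and #3 (the target node name is taken from the stats output below; my exact invocations may have differed):

```shell
# Run on each of nodes #2 and #3 to join node #1's cluster.
# Node name is an assumption based on the stats output below.
riak-admin join riak@192.168.231.231
```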

Now, when I run stats on my first node, I see:

    "nodename": "riak@192.168.231.231",
    "connected_nodes": [
        "riak@192.168.231.232",
        "riak@192.168.231.233"
    ],
    "ring_members": [
        "riak@192.168.231.231",
        "riak@192.168.231.232"
    ],
    "ring_ownership": "[{'riak@192.168.231.231',32},{'riak@192.168.231.232',32}]",
On the problematic node (#3), I see:

    "connected_nodes": [
        "riak@192.168.231.231",
        "riak@192.168.231.232"
    ],
    "ring_members": [
        "riak@192.168.231.233"
    ],
    "ring_ownership": "[{'riak@192.168.231.233',64}]",

My understanding is that all three nodes should appear in ring_ownership.

Now, when I try to add node #3, I'm told it is already a member of a cluster.
When I try to force-remove node #3, I'm told it is not a member of the cluster.
When I try to use leave on node #3, I'm told it is the only member.

Any recommendations/thoughts on how to correct this? 
(short of re-installing node #3)

Thanks
--Ray

-- 
Ray Cote, President Appropriate Solutions, Inc. 
We Build Software 
www.AppropriateSolutions.com 603.924.6079 



