EC2 and node names

Jeff Pollard jeff.pollard at gmail.com
Wed May 4 12:38:04 EDT 2011


Grant,

Thanks for the info; that makes total sense.

On Tue, May 3, 2011 at 10:52 AM, Grant Schofield <grant at basho.com> wrote:

> On May 3, 2011, at 12:35 PM, Jeff Pollard wrote:
>
> Grant,
>
> Thanks for the reply.  I think I understand what you're saying, but it's
> not 100% clear to me how exactly it applies to my question.  I'll try
> explaining my plan another way if that helps.
>
> Imagine we have 4 nodes running in a ring on EC2: riak@10.5.0.1,
> riak@10.5.0.2, riak@10.5.0.3, riak@10.5.0.4.  Then one day node
> riak@10.5.0.3 goes down and is unrecoverable.  Our plan had been:
>
> # On an existing node in the cluster
> riak-admin remove riak@10.5.0.3
> riak-admin ringready
> # .. wait for TRUE
>
> # Boot a new EC2 machine configured as riak@10.5.0.5 (brand new IP)
> # Then on that new machine...
> riak start
> riak-admin join riak@10.5.0.1
> riak-admin ringready
> # .. wait for TRUE
>
> In that scenario, I imagine the issue is that removing the node and adding a
> new one under a new node name puts extra strain on the cluster to shift data
> around?  Simply replacing it with a new node under the same hostname (even
> though it has a different IP) would mean less work for the cluster?  And how
> much extra work are we talking about?
>
>
> You're correct in assuming that there is extra strain on the cluster from
> moving that data around, but the amount of churn will depend on how much data
> you have in your Riak cluster and on your use case. Removing and adding a node
> is a great way to replace a broken node, but if you have a large amount of
> data it may be quicker, and put less load on the cluster, to replace the broken
> node with one that has the same hostname or IP and the old node's data
> directory.
>
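A rough sketch of that "same hostname/IP plus old data directory" swap, assuming a stock package install (vm.args under /etc/riak, data under /var/lib/riak; adjust paths for your layout) and that a copy of the dead node's data survives somewhere:

# On the replacement machine
riak stop
# Restore the dead node's ring and backend data, e.g. from an EBS volume or snapshot
rsync -a /mnt/old-node-data/riak/ /var/lib/riak/
# In /etc/riak/vm.args, reuse the dead node's name, e.g.:
#   -name riak@10.5.0.3
riak start
riak-admin ringready
# .. wait for TRUE
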
> If riak@10.5.0.3 dies due to a disk problem and your backups don't contain
> the data, you would need to replace the node as you described. The best
> solution for replacing a dead node will vary based on circumstances, your
> use case, and your availability needs.
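
And if we do go the remove/join route, one way to watch the resulting handoff (assuming riak-admin transfers is available in this Riak version):

# On any node in the cluster
riak-admin transfers
# .. lists nodes still waiting to hand off partitions; repeat until none remain
riak-admin ringready
# .. wait for TRUE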
>
> Grant
>
>
>
> Thanks!
>
> On Tue, May 3, 2011 at 6:24 AM, Grant Schofield <grant at basho.com> wrote:
>
>>
>> On May 2, 2011, at 7:58 PM, Jeff Pollard wrote:
>>
>> I was reviewing the Riak Operations webinar<http://blog.basho.com/2011/04/15/follow-up-to-riak-operations-webinar/>,
>> and it was mentioned that the preferred vm.args -name for EC2 environments
>> should be "riak@hostname" because you don't have to "rename data or do
>> anything weird" like you would if your nodes were named "riak@ip.address"
>> (approximately 40:05 in the video).
>>
>> I was looking for some elaboration on this tip, namely:
>>
>>    1. What is meant by "rename data or do anything weird"?
>>
>> When you bring a cluster together using data copied from a different set
>> of nodes, you need to re-ip the first node you plan to start, and you also
>> have to change the ring manually on that node so that when the subsequent
>> nodes join, everything works properly.
>>
>>
>>    2. Is "hostname" in riak@hostname a public DNS host that you configure
>>    in your DNS to map to the EC2 public hostname (ec2-50-18-...)?
>>
>> You can use DNS (public or private) or hosts file entries that reference
>> the private IP of the node if you choose to use a hostname.
>>
>>
>>    3. Does anyone have any best practices around vm.args -name in EC2
>>    environments?
>>
>> We haven't outlined any best practices ourselves, but I tend to believe that
>> using a hostname whose IP you can change via DNS or a hosts file is a more
>> flexible way of approaching the problem.
>>
>> Grant
>>
>>
>> Thanks!
>>
>>
>>
>
>