Tuning a Riak cluster.

Jeremiah Peschka jeremiah.peschka at gmail.com
Fri Feb 22 00:48:23 EST 2013


http://aws.amazon.com/dedicated-instances/

--
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop

On Feb 21, 2013, at 7:29 PM, "Kevin Burton" <rkevinburton at charter.net> wrote:

> This has been most helpful. Thank you. Hopefully these “knobs” have been added to the AWS EC2 instances. Since you use Linode, you don’t know what the AWS, Azure, Rackspace, Joyent, etc. policies are regarding the hosting hardware.
>  
> From: Alexander Sicular [mailto:siculars at gmail.com] 
> Sent: Thursday, February 21, 2013 8:54 PM
> To: Kevin Burton
> Cc: 'Sean Carey'; riak-users at lists.basho.com
> Subject: Re: Tuning a Riak cluster.
>  
> Well, I would say in any circumstance where you care about performance or the availability of your data. Obviously the gold standard is bare metal. A Google search for "aws guaranteed different physical machines" yielded this AWS forum thread from 2006, https://forums.aws.amazon.com/message.jspa?messageID=55112. Things may have changed since then. But I use Linode, which tells you which physical hardware your VM is on.
>  
>  
> On Feb 21, 2013, at 9:43 PM, Kevin Burton <rkevinburton at charter.net> wrote:
> 
> 
> How strict is this “Under no circumstances should you have more than one VM (one logical node in a Riak cluster) on the same physical hardware” rule? It doesn’t fit my situation but there has to be some leniency because Riak has to work in a cloud and you are not guaranteed that your provisioned VM will be on different physical hardware than the other nodes.
>  
> From: Alexander Sicular [mailto:siculars at gmail.com] 
> Sent: Thursday, February 21, 2013 8:27 PM
> To: Kevin Burton
> Cc: 'Sean Carey'; riak-users at lists.basho.com
> Subject: Re: Tuning a Riak cluster.
>  
> It can't be said enough times, but the number one thing you can do to ensure that you are getting true performance (not to mention redundancy) is to use different physical hardware for each of your nodes. Under no circumstances should you have more than one VM (one logical node in a Riak cluster) on the same physical hardware. Also, use multiple connections/threads/parallelism on the client side, and be sure to hit all the nodes in the cluster in round-robin fashion (as haproxy would) when writing to Riak. Everything else is in the noise.
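[Editor's note: the round-robin client advice above can be sketched as follows. This is a minimal illustration, not Basho's client library: the node addresses are placeholders for your own cluster, and the `/riak/<bucket>/<key>` path is Riak's plain HTTP API endpoint. A real deployment would more likely put haproxy or an official Riak client in front of the cluster.]

```python
import itertools

# Placeholder host:port pairs -- substitute your own four Riak nodes.
NODES = ["10.0.0.1:8098", "10.0.0.2:8098", "10.0.0.3:8098", "10.0.0.4:8098"]

# itertools.cycle gives an endless round-robin over the node list,
# so successive writes spread across every node in the cluster.
_cycle = itertools.cycle(NODES)

def next_node():
    """Return the next node in round-robin order, haproxy-style."""
    return next(_cycle)

def put_url(bucket, key):
    """Build the Riak HTTP API URL for a write against the next node."""
    return "http://%s/riak/%s/%s" % (next_node(), bucket, key)

# Each call targets a different node; issue the actual PUTs from
# multiple threads or connections for client-side parallelism.
```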
> 
> -Alexander Sicular
>  
> @siculars
>  
> On Feb 21, 2013, at 9:04 PM, Kevin Burton <rkevinburton at charter.net> wrote:
> 
> 
> 
>  
> They each have about 20-30GB of disk space. They each are a VM, so I am not sure how to specify the CPU. They all seem to be 64-bit Intel processors, but I couldn't tell you the clock speed. The network is 1Gb Ethernet.
>  
> From: Sean Carey [mailto:carey at basho.com] 
> Sent: Thursday, February 21, 2013 7:59 PM
> To: Kevin Burton
> Cc: riak-users at lists.basho.com
> Subject: Re: Tuning a Riak cluster.
>  
> Kevin,
> Disk and CPU, and Network?
>  
> 
> Sean Carey
> @densone
> 
> On Thursday, February 21, 2013 at 20:31, Kevin Burton wrote:
> 
>  
> I have a cluster of 4 machines (4 Linux VMs, each allocated about 1 GB of memory – yeah, I know it isn’t a lot). I would like to get some pointers on getting the fastest query time possible given these meager resources. Thank you.
> _______________________________________________
> riak-users mailing list
> riak-users at lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com