Minimal number of nodes for production

Christian Dahlqvist christian at basho.com
Thu Apr 11 08:05:51 EDT 2013


Hi Daniel,

If you have 3 nodes in the cluster, you should not lose any data if one node goes down, but some records, those for which 2 of the replicas were on the failed node, may return false "not found" responses before read-repair can fix them. If you therefore retry whenever you cannot find a key you expect to exist, you should be fine.
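As an illustration, a retry wrapper along these lines is usually enough (a minimal sketch; 'fetch' stands in for whatever read call your client provides, assumed here to return None on "not found"):

    import time

    def get_with_retry(fetch, key, attempts=3, delay=0.2):
        # 'fetch' is the client's read call (assumed to return None when the
        # key is not found); retrying gives read-repair a chance to restore
        # the missing replicas after a node failure.
        for attempt in range(attempts):
            value = fetch(key)
            if value is not None:
                return value
            time.sleep(delay * (attempt + 1))  # brief backoff before the next try
        return None

The attempt count and delay above are placeholders; tune them to how quickly your cluster typically completes read-repair.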

The other issue with having few nodes in the cluster is that during a node failure each of the remaining nodes has to take on a significant number of fallback partitions, which can quickly increase the load on those nodes. There is also a risk that the fallback partitions are not divided evenly between the two remaining nodes, pushing the load on one of them even higher.

It is to avoid this, and to ensure that nodes remain reasonably evenly loaded in the event of a failure, that we recommend at least 5 nodes in any production cluster.
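To give a rough sense of the numbers (assuming the default ring size of 64 and perfectly even partition ownership, which is a simplification):

    RING_SIZE = 64  # Riak's default ring_creation_size

    for nodes in (3, 5):
        owned = RING_SIZE / float(nodes)   # partitions owned per node before the failure
        fallbacks = owned / (nodes - 1)    # extra fallback partitions per surviving node
        extra = fallbacks / owned * 100
        print("%d nodes: ~%.0f owned each, ~%.0f fallbacks per survivor (~%.0f%% extra load)"
              % (nodes, owned, fallbacks, extra))

With 3 nodes each survivor picks up roughly 50% more partitions when a node fails; with 5 nodes the increase is closer to 25%, which is much easier to absorb.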

Best regards,

Christian



On 11 Apr 2013, at 12:01, ivenhov <iwan.daniel at gmail.com> wrote:

> With only 3 nodes, if one of the nodes is down I should still be able to read and
> write if I'm using the default setting (quorum). Am I wrong?
> I appreciate there is a case where the SHA key space does not divide equally by 3,
> so there is a slim possibility that replicas for some keys would be on the
> same physical hardware. But still, 99% of keys should be available. Right?
> 
> Daniel
> 
> 
> 
