Large ring_creation_size

Dave Barnes dbarnes001 at gmail.com
Thu Apr 14 09:09:24 EDT 2011


Sorry I feel compelled to chime in.

Maybe you could assess your physical node limits and start with a small
configuration, then increase it step by step until you hit a limit.

Work small to large.

Once you find the pain point, let us know what resource ran out.

You will learn a lot along the way about how your servers behave, and we'll
discover a lot when you share the results.

Thanks for digging in,

Dave

On Wed, Apr 13, 2011 at 5:11 PM, Greg Nelson <grourk at dropcam.com> wrote:

>  Ok, how about in this case I described?  It runs out of memory with a
> single pair of nodes...
>
> (Or did you mean there's a connection between each pair of vnodes?)
>
> On Wednesday, April 13, 2011 at 1:56 PM, Jon Meredith wrote:
>
> Hi Greg et al,
>
> As you say largest known is not largest possible.  Internally within Basho,
> the largest cluster we've experimented with so far had 50 nodes.
>
> Going beyond that it's speculation from me about pain points.
>
> 1) It is true that you need enough file descriptors to start up all
> partitions when a node restarts - Riak checks whether there is any handoff
> data pending for each partition.  We have work scheduled to address that in
> the medium term. The plan is to only spin up partitions the node owns, plus
> any that have been started as fallbacks for which handoff has not completed.
> Until that work is done you will need a high ulimit with large ring sizes.
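A quick way to sanity-check those limits on a given box (a sketch, assuming a Linux host; the `riak` user name and the 65536 value are illustrative, not prescriptive):

```shell
# Current soft open-file limit for this shell (what a Riak node started
# from here would inherit)
ulimit -Sn

# Hard limit: the ceiling the soft limit can be raised to without root
ulimit -Hn

# To raise it persistently, entries like these typically go in
# /etc/security/limits.conf (assumption: Linux with PAM limits):
#   riak  soft  nofile  65536
#   riak  hard  nofile  65536
```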
>
> 2) It is also true that Erlang runs a fully connected network, so there
> will be connections between each node pair in the cluster.  We haven't
> determined the point at which it becomes a problem.
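To put a rough number on the full mesh: every node pair holds one distribution connection, so an n-node cluster maintains n*(n-1)/2 inter-node TCP connections. A quick sketch of how that grows:

```shell
# Connections in a fully connected Erlang cluster: n*(n-1)/2
for n in 50 100 300; do
  echo "$n nodes -> $(( n * (n - 1) / 2 )) connections"
done
# prints:
# 50 nodes -> 1225 connections
# 100 nodes -> 4950 connections
# 300 nodes -> 44850 connections
```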
>
> So it looks like you'll be pushing the known limits.  Basho will do our
> very best to help overcome any obstacles as you encounter them.
>
> Jon Meredith
> Basho Technologies.
>
> On Wed, Apr 13, 2011 at 1:41 PM, Greg Nelson <grourk at dropcam.com> wrote:
>
>  The largest known riak cluster != the largest possible riak cluster.  ;-)
>
> The inter-node communication of the cluster depends on the data set and
> usage pattern, doesn't it?  Or is there some constant overhead that tops out
> at a few hundred nodes?  I should point out that we'll have big data, but
> not a huge number of keys.
>
> The number of vnodes in the cluster should be equal to the
> ring_creation_size under normal circumstances, shouldn't it?  So when I have
> a one-node cluster, that node is running ring_creation_size vnodes...  File
> descriptors probably aren't a problem -- these machines won't be doing
> anything else, and the limits are set to 65536.
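A back-of-the-envelope version of that: a single node hosts every vnode, and each node that joins takes roughly an equal share (a sketch assuming an illustrative ring size of 1024 and an even claim):

```shell
# Approximate vnodes per physical node as the cluster grows
# (assumed ring_creation_size of 1024, evenly claimed)
ring_size=1024
for nodes in 1 2 8 64; do
  echo "$nodes node(s) -> ~$(( ring_size / nodes )) vnodes each"
done
# prints:
# 1 node(s) -> ~1024 vnodes each
# 2 node(s) -> ~512 vnodes each
# 8 node(s) -> ~128 vnodes each
# 64 node(s) -> ~16 vnodes each
```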
>
> Thinking about the inter-node communication you mentioned, that's probably
> where the resource hog is: socket buffers, etc.
>
> Anyway, I'd also love to hear more from basho.  :)
>
> On Wednesday, April 13, 2011 at 12:33 PM, siculars at gmail.com wrote:
>
> I'll just chime in and say that this is not practical for a few reasons.
> The largest known Riak cluster has around 50 or 60 nodes. AFAIK, inter-node
> communication of Erlang clusters tops out at a few hundred nodes. I'm also
> under the impression that each physical node has to have enough file
> descriptors to accommodate every virtual node in the cluster.
>
> I'd love to hear more from basho.
>
> -alexander
>
>
> Sent from my Verizon Wireless BlackBerry
>
> -----Original Message-----
> From: Greg Nelson <grourk at dropcam.com>
> Sender: riak-users-bounces at lists.basho.com
> Date: Wed, 13 Apr 2011 12:13:34
> To: <riak-users at lists.basho.com>
> Subject: Large ring_creation_size
>
> _______________________________________________
> riak-users mailing list
> riak-users at lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
>
>
>
>
>
>

