Hitting per-node limitations

Michael Russo mjrusso at gmail.com
Tue Jul 13 12:16:44 EDT 2010


Thanks Dean, this makes sense.

Best,
Michael
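
(As a follow-up for the archive: the keydir limitation discussed below can be
turned into a rough capacity estimate. The sketch is a back-of-envelope
calculation only; the 40-byte per-entry overhead is an assumed ballpark, not
an official Bitcask figure, and the real keydir stores additional per-entry
metadata that varies by Riak version.)

```python
# Back-of-envelope estimate of per-node RAM needed for the Bitcask keydir,
# which must fit entirely in memory. The per-entry overhead below is an
# assumption for illustration, not an official Bitcask number.
PER_ENTRY_OVERHEAD_BYTES = 40  # assumed fixed metadata cost per key

def keydir_bytes(num_keys, avg_key_size_bytes,
                 overhead=PER_ENTRY_OVERHEAD_BYTES):
    """Approximate keydir RAM for one node: (key + overhead) per entry."""
    return num_keys * (avg_key_size_bytes + overhead)

# Example: 100 million keys with 36-byte keys (e.g. UUID strings).
total = keydir_bytes(100_000_000, 36)
print(f"{total / 2**30:.1f} GiB")  # roughly 7.1 GiB
```

Running the numbers this way, well before the node is provisioned, is one way
to decide when "add more nodes" crosses from optional to mandatory.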

On Tue, Jul 13, 2010 at 11:30 AM, Dean Cookson <cookson at basho.com> wrote:

> Michael,
> The answer to your first question is basically "yes".  The sysadmin really
> should be monitoring memory usage and disk space and adding nodes to the
> cluster when/if needed.  The SNMP hooks in the EnterpriseDS version are very
> handy for that type of thing and I'd be happy to have that conversation off
> list, if you're interested.  That being said, the system does automatically
> balance objects across the cluster evenly and adding a new node will trigger
> a re-balance that does not interrupt cluster function.  Currently, there is
> not really a way to force an uneven distribution across the cluster.
>
> On the best-practices front, there is a section of the wiki that addresses
> exactly that at: https://wiki.basho.com/display/RIAK/Best+Practices
>
> Beyond that, I'd add that Riak is generally happier with more nodes rather
> than fewer (e.g., if N=3, 5 nodes is a good place to start).
>
> Best regards,
> Dean
>
> --
> Dean Cookson
> VP Business Development
> Basho Technologies, Inc.
> Google Voice: +1 415 692 1775
> cookson at basho.com
>
>
> On Jul 12, 2010, at 2:59 PM, Michael Russo wrote:
>
> > Hi all,
> >
> > I'm currently evaluating Riak and am looking to understand the system in
> more depth.  One of the best ways I know to do this is to examine the
> edge cases.  So, without further ado:
> >
> > There are several per-node limitations that, if hit, could cause a Riak
> node to stop functioning properly.  For example,
> >
> > - disk: each node has a file system of finite size and cannot
> indefinitely accept writes
> > - memory: the Bitcask keydir structure must fit entirely in memory and
> thus cannot indefinitely accept writes
> >
> > From a management perspective, is the onus on the sysadmin to monitor
> disk and memory usage and to increase the number of nodes in the cluster as
> appropriate, or are there any built-in mechanisms to automatically
> re-balance data across the cluster?
> >
> > Furthermore, if a limit is inadvertently hit, is there a re-balancing
> mechanism available that can be manually triggered to compensate, or is it a
> requirement that every node take an equal share of partitions?
> >
> > Are there any other related best-practices?
> >
> > Thanks,
> > Michael
> > _______________________________________________
> > riak-users mailing list
> > riak-users at lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>