Hitting per-node limitations
mjrusso at gmail.com
Mon Jul 12 17:59:14 EDT 2010
I'm currently evaluating Riak and am looking to understand the system in
more depth. One of the best ways I know of doing this is to examine the
edge cases. So, without further ado:
There are several per-node limitations that, if hit, could cause a Riak node
to stop functioning properly. For example,
- disk: each node has a file system of finite size and thus cannot
indefinitely accept writes
- memory: the Bitcask keydir structure must fit entirely in memory and thus
cannot indefinitely accept writes
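To make the memory limitation concrete, here is a rough back-of-envelope sketch of keydir sizing. The per-key overhead constant is an assumption for illustration only; the real figure depends on the Bitcask version and the Erlang VM, and isn't stated anywhere in this thread.

```python
# Hedged sketch: estimate whether a Bitcask keydir fits in RAM.
# KEYDIR_OVERHEAD_BYTES is an assumed per-key fixed cost (file id,
# offsets, timestamps, pointers) -- not an official Bitcask figure.
KEYDIR_OVERHEAD_BYTES = 40

def keydir_bytes(num_keys: int, avg_key_len: int) -> int:
    """Approximate total keydir memory for num_keys keys."""
    return num_keys * (KEYDIR_OVERHEAD_BYTES + avg_key_len)

# Example: 100 million keys with 36-byte keys (e.g. UUID strings)
total = keydir_bytes(100_000_000, 36)
print(f"keydir needs roughly {total / 2**30:.1f} GiB of RAM")
```

Under these assumptions, a node storing 100 million UUID-keyed values would need on the order of 7 GiB of RAM just for the keydir, which is exactly the kind of ceiling the question is about.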
From a management perspective, is the onus on the sysadmin to monitor disk
and memory usage and to increase the number of nodes in the cluster as
appropriate, or are there any built-in mechanisms to automatically
re-balance data across the cluster?
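If monitoring does fall to the sysadmin, the check could be as simple as the following sketch. The data path and threshold are illustrative assumptions, not Riak defaults.

```python
# Hedged sketch: a per-node disk check a sysadmin might script while
# waiting for an answer on built-in rebalancing. The path and the 80%
# threshold are assumptions chosen for illustration.
import shutil

def disk_usage_fraction(path: str) -> float:
    """Fraction of the filesystem at `path` that is in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def needs_more_nodes(path: str, threshold: float = 0.8) -> bool:
    """True when usage crosses the threshold, suggesting cluster growth."""
    return disk_usage_fraction(path) >= threshold

# Example: check the filesystem holding the (assumed) Riak data dir
print(needs_more_nodes("/"))
```

A cron job wrapping this kind of check would cover the disk half of the question; the memory half would need a similar probe of the Riak process's resident set size.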
Furthermore, if a limit is inadvertently hit, is there a re-balancing
mechanism available that can be manually triggered to compensate, or is it a
requirement that every node take an equal share of partitions?
Are there any other related best-practices?