How to prevent Riak crashes when out of memory?

Steve Edwards steve at edmodo.com
Mon Jun 4 16:52:18 EDT 2012


I have a cluster of Riak nodes, and one of them crashed because of a
failure to allocate memory.

eheap_alloc: Cannot allocate 2850821240 bytes of memory

I restarted the node and it came back up gracefully, but all of my
clients reported it as offline until I restarted every single node in
the cluster.

How can I tell what was trying to allocate that memory, so that I can
give it more? (See the console snippet below for the sort of thing I
can check.)
When the node is running, it looks like it's only using 8 of the 16 GB
of memory on the box.
Is there a configuration option somewhere that I can modify to
increase the amount of memory available to the node?
I've tweaked all the JavaScript VM settings, but even when there are
no MapReduce calls, memory usage still tops out at 8 GB.
I should also mention that I am using riak_search.
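
As mentioned above, here is the kind of thing I can run from the
console ("riak attach") to look at per-process memory. It's only a
rough sketch and the exact expression may need adjusting, but it lists
the largest processes on the node:

%% Rough sketch, run from "riak attach": the ten largest processes by
%% memory, with their registered name (if any). process_info/2 returns
%% 'undefined' for processes that have already exited, so the generator
%% pattern below silently skips those.
lists:sublist(
    lists:reverse(lists:sort(
        [{M, Pid, erlang:process_info(Pid, registered_name)}
         || Pid <- erlang:processes(),
            {memory, M} <- [erlang:process_info(Pid, memory)]])),
    10).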


Here are some of the config settings:

ERL_MAX_ETS_TABLES 4800
{map_js_vm_count, 16},
{reduce_js_vm_count, 6},
{hook_js_vm_count, 8},

{js_max_vm_mem, 16},
{js_thread_stack, 16},
{segment_full_read_size, 20},
{buffer_delayed_write_size, 1024},
{buffer_rollover_size, 4194304},
{max_compact_segments, 20}
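
For clarity, those settings are split across two files: ERL_MAX_ETS_TABLES
is set in vm.args, the js_* settings sit in the riak_kv section of
app.config, and the segment/buffer/compaction settings sit in the
merge_index section used by riak_search. Roughly like this (I'm showing
the stock layout here, so take the exact structure with a grain of salt):

# vm.args
-env ERL_MAX_ETS_TABLES 4800

%% app.config (excerpt)
{riak_kv, [
    {map_js_vm_count, 16},
    {reduce_js_vm_count, 6},
    {hook_js_vm_count, 8},
    {js_max_vm_mem, 16},
    {js_thread_stack, 16}
    %% ... other riak_kv settings ...
]},
{merge_index, [
    {segment_full_read_size, 20},
    {buffer_delayed_write_size, 1024},
    {buffer_rollover_size, 4194304},
    {max_compact_segments, 20}
]}

As I understand it, js_max_vm_mem and js_thread_stack are per-VM limits
in MB, so even with 30 JavaScript VMs configured that should only account
for a few hundred MB of the 8 GB I see.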


Here is some more detail about the memory error, from erl_crash.dump:
=erl_crash_dump:0.1
Fri Jun  1 19:46:03 2012
Slogan: eheap_alloc: Cannot allocate 2850821240 bytes of memory (of
type "heap").
System version: Erlang R14B03 (erts-5.8.4) [source] [64-bit] [smp:4:4]
[rq:4] [async-threads:64] [kernel-poll:true]
Compiled: Tue Sep  6 10:22:01 2011
Taints: crypto,bitcask_nifs
Atoms: 15614
=memory
total: 12474692688
processes: 6076294256
processes_used: 6076132696
system: 6398398432
atom: 1068865
atom_used: 1048912
binary: 402089120
code: 9947817
ets: 89367152
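
If I'm reading the =memory section right, the VM had already committed
about 11.6 GiB at crash time (12,474,692,688 bytes / 2^30), split almost
evenly between processes (~5.7 GiB) and system (~6.0 GiB), and the failed
heap allocation would have added another ~2.65 GiB, for roughly 14.3 GiB
on a 16 GB box. So the crash happens well above the ~8 GB the node
normally sits at.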



