riak 2.1.1 : Erlang crash dump
jmeredith at basho.com
Thu Oct 1 17:06:44 EDT 2015
It looks like Riak was unable to allocate about 4 GB of memory. You may have to
reduce the amount of memory allocated to leveldb from the default of 70%. Try
setting this in your /etc/riak/riak.conf file:
leveldb.maximum_memory.percent = 50
The memory footprint for Riak should stabilize after a few hours, and on
servers with smaller amounts of memory, the 30% left over may not be enough.
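As a rough sanity check, here is the arithmetic behind that recommendation, assuming a 16 GB node as described below (the figures are from this thread; the script itself is just an illustration):

```python
# Rough memory arithmetic for a 16 GB Riak node (figures from this thread).
node_ram_gb = 16

# Default: leveldb gets 70% of RAM, leaving 30% for the Erlang VM and OS.
leveldb_default = node_ram_gb * 0.70          # 11.2 GB for leveldb
leftover_default = node_ram_gb - leveldb_default  # ~4.8 GB left over

# The failed allocation from the crash dump was 3936326656 bytes (~3.9 GB),
# i.e. roughly the size of everything left outside leveldb's share.
failed_alloc_gb = 3936326656 / 10**9

# With leveldb.maximum_memory.percent = 50, the split is even:
leveldb_tuned = node_ram_gb * 0.50            # 8 GB for leveldb
leftover_tuned = node_ram_gb - leveldb_tuned  # 8 GB for everything else

print(f"default: {leveldb_default:.1f} GB leveldb, {leftover_default:.1f} GB free")
print(f"failed allocation: {failed_alloc_gb:.2f} GB")
print(f"tuned:   {leveldb_tuned:.1f} GB leveldb, {leftover_tuned:.1f} GB free")
```

With the default split, the ~4.8 GB remaining for the Erlang VM is barely larger than the single heap allocation that failed, which is why dropping the percentage to 50 gives the VM more headroom.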
Please let us know how you get on.
On Wed, Sep 30, 2015 at 5:31 PM Girish Shankarraman <
gshankarraman at vmware.com> wrote:
> I have a 7-node cluster for Riak with a ring_size of 128.
> *System Details:*
> Each node is a VM with 16GB of memory.
> The backend is using leveldb.
> sys_system_architecture : <<"x86_64-unknown-linux-gnu">>
> sys_system_version : <<"Erlang R16B02_basho8 (erts-5.10.3) [source]
> [64-bit] [smp:4:4] [async-threads:64] [kernel-poll:true] [frame-pointer]">>
> riak_control_version : <<"2.1.1-0-g5898c40">>
> cluster_info_version : <<"2.0.2-0-ge231144">>
> yokozuna_version : <<"2.1.0-0-gcb41c27">>
> We have 400-1000 JSON records being written per second. Each record might
> be a few hundred bytes.
> I see the following crash message in the Erlang logs after a few hours of
> processing. Any suggestions on what could be going on here?
> ===== Tue Sep 29 20:20:56 UTC 2015
> [os_mon] memory supervisor port (memsup): Erlang has closed
> [os_mon] cpu supervisor port (cpu_sup): Erlang has closed
> Crash dump was written to: /var/log/riak/erl_crash.dump
> eheap_alloc: Cannot allocate 3936326656 bytes of memory (of type "heap").
> Also tested running this at 50 GB per Riak node (VM) and things work, but
> memory keeps growing, so throwing hardware at it doesn't seem very scalable.
> — Girish Shankarraman
> riak-users mailing list
> riak-users at lists.basho.com