Fwd: Riak Cannot allocate bytes of memory (of type "heap")

Luke Bakken lbakken at basho.com
Mon Jan 25 14:52:34 EST 2016


Hi Byron -

I strongly suggest you monitor the number of siblings and the object
sizes for your Riak objects. These sorts of allocation errors are
often caused by a very large object somewhere in your cluster.

This page gives information about which statistics to monitor:
http://docs.basho.com/riak/latest/ops/running/stats-and-monitoring/
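
If it helps, here is a minimal sketch of polling those statistics over
HTTP from one node. It assumes the default HTTP listener on
localhost:8098 and Python 3; the stat names come from the page above,
but the thresholds are illustrative only, not official guidance:

    import json
    import urllib.request

    STATS_URL = "http://localhost:8098/stats"  # default Riak HTTP listener

    # Percentile stats worth watching; the limits are examples only.
    WATCH = {
        "node_get_fsm_siblings_100": 25,          # most siblings on a fetch
        "node_get_fsm_objsize_100": 5 * 1024**2,  # largest object, in bytes
    }

    with urllib.request.urlopen(STATS_URL) as resp:
        stats = json.load(resp)

    for name, limit in WATCH.items():
        value = stats.get(name, 0)
        flag = "  <-- investigate" if value > limit else ""
        print("%s = %s%s" % (name, value, flag))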

> Are m3.large instances an issue for Riak?

That depends on your workload. Since you are running into these
allocation errors, either something about how Riak is being used is
driving memory up, or those instances simply do not have enough
resources available.

> Can you let me know what we might expect if we disable Active Anti-Entropy - will that make our solr queries return stale data?

Not necessarily. I will point others who can speak to this better at
this email thread.
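
For reference, disabling AAE is a one-line riak.conf change (a sketch,
assuming the riak.conf-style configuration that ships with Riak 2.x;
note that Riak Search keeps its own AAE trees for index repair, which
is why your stale-data question deserves an answer from the Yokozuna
folks):

    ## /etc/riak/riak.conf -- illustrative, not a recommendation
    ## "passive" stops active exchange of entropy trees between nodes
    anti_entropy = passive

    ## To turn it back on later (trees rebuild over time):
    ## anti_entropy = active

The node needs a restart for the setting to take effect.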

Thanks -

--
Luke Bakken
Engineer
lbakken at basho.com

On Mon, Jan 25, 2016 at 11:34 AM, Sakoulas, Byron
<ByronSakoulas at catholichealth.net> wrote:
> Luke - thanks for replying.
>
> Are m3.large instances an issue for Riak? We were originally told those would be fine by Dimitri at Basho.
> We raised the Solr RAM to 2 GB after having issues with Solr running out of memory.
>
> Can you let me know what we might expect if we disable Active Anti-Entropy - will that make our solr queries return stale data?
>
> On 1/25/16, 12:17 PM, "Luke Bakken" <lbakken at basho.com> wrote:
>
>>Hello Byron -
>>
>>m3.large instances only support 7.5 GiB of RAM. You can see that Riak
>>crashed while attempting to allocate 2.12 GiB of RAM for leveldb.
>>
>>I suggest decreasing the JVM (Solr) RAM back to the 1 GiB setting
>>that ships with Riak. You can also experiment with disabling Active
>>Anti-Entropy to reduce memory usage. Hopefully someone with more
>>experience with Riak Search's (Yokozuna) interaction with Active
>>Anti-Entropy will chime in on this thread.
>>
>>Or, increase the amount of RAM available to these VMs.
>>
>>Thanks
>>
>>--
>>Luke Bakken
>>Engineer
>>lbakken at basho.com
>>
>>
>>On Mon, Jan 25, 2016 at 10:10 AM, Sakoulas, Byron
>><ByronSakoulas at catholichealth.net> wrote:
>>> We are running an 8-node cluster of Riak at AWS, and our nodes are consistently crashing with the error: Cannot allocate x bytes of memory (of type "heap").
>>>
>>> Here are some of the specs for our env:
>>>
>>> 8 nodes - running on m3.large instances
>>> LevelDB with 50% of memory allocated
>>> Solr with 2 GB
>>> We use only immutable and CRDT data
>>> We have a custom search schema
>>> System config matches Basho recommendations
>>> CentOS 7
>>> Riak 2.0.2
>>> Riak Java client 2.0.0



