Riak consumes too much memory

Matthew Von-Maszewski matthewv at basho.com
Fri Oct 18 14:37:11 EDT 2013


Darren,

The file cache is favored over the block cache because the cost of a miss to the file cache is much larger than a miss to the block cache.  The block cache will release data to make room for a new file cache entry until it reaches a minimum of 2 MB.  Both caches use Google's original LRU formula to evict the least recently used entry when space is needed.

The file cache will also release any entry that has not been accessed in 4 days, which keeps old, stale files from taking up memory for no reason.
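To make the interplay concrete, here is a minimal sketch of the two rules above (illustrative Erlang only; the module and function names are made up, and leveldb actually enforces this internally in C++): the block cache yields space only while it is above its 2 MB floor, and a file cache entry idle for more than 4 days is released.

    %% Illustrative sketch only -- not leveldb's actual implementation.
    -module(cache_policy_sketch).
    -export([block_cache_can_yield/1, file_entry_expired/2]).

    -define(BLOCK_CACHE_FLOOR_BYTES, 2 * 1024 * 1024).    %% block cache never shrinks below 2 MB
    -define(FILE_CACHE_MAX_IDLE_SECS, 4 * 24 * 60 * 60).  %% file cache entries idle 4 days are dropped

    %% The block cache releases data for a new file cache entry
    %% only while it is still above its 2 MB floor.
    block_cache_can_yield(BlockCacheBytes) ->
        BlockCacheBytes > ?BLOCK_CACHE_FLOOR_BYTES.

    %% A file cache entry not accessed for 4 days is released,
    %% so stale files stop holding memory.
    file_entry_expired(LastAccessSecs, NowSecs) ->
        NowSecs - LastAccessSecs > ?FILE_CACHE_MAX_IDLE_SECS.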

Matthew

On Oct 18, 2013, at 2:24 PM, Darren Govoni <darren at ontrenet.com> wrote:

> Sounds nice. And then the question is what happens when that limit is reached on a node?
> 
> On 10/18/2013 02:21 PM, Matthew Von-Maszewski wrote:
>> The user has the option of setting a default memory limit in the app.config / riak.conf file (either absolute number or percentage of total system memory).  There is a default percentage (which I am still adjusting) if the user takes no action.
>> 
>> The single memory value is then dynamically partitioned to each Riak vnode (and AAE vnodes) as the server takes on more or fewer vnodes throughout normal operations and node failures.
>> 
>> There is no human interaction required once the memory limit is established.
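>> To make the dynamic split concrete, a rough sketch of the arithmetic (the function is hypothetical, not the actual flexcache code, and the real code may weight data and AAE vnodes differently): each vnode's share is on the order of the node's memory limit divided by its current vnode count, recomputed as vnodes come and go.
>> 
>>     %% Hypothetical sketch of the per-vnode share; flexcache computes this internally.
>>     per_vnode_bytes(TotalLimitBytes, VnodeCount) when VnodeCount > 0 ->
>>         TotalLimitBytes div VnodeCount.
>> 
>>     %% e.g. a 4 GB limit spread over 32 vnodes (data + AAE) is 128 MB each:
>>     %% per_vnode_bytes(4 * 1024 * 1024 * 1024, 32) -> 134217728.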
>> 
>> Matthew
>> 
>> 
>> On Oct 18, 2013, at 2:08 PM, darren <darren at ontrenet.com> wrote:
>> 
>>> Is it smart enough to manage itself?
>>> Or does it require human babysitting?
>>> 
>>> 
>>> Sent from my Verizon Wireless 4G LTE Smartphone
>>> 
>>> 
>>> 
>>> -------- Original message --------
>>> From: Matthew Von-Maszewski <matthewv at basho.com> 
>>> Date: 10/18/2013 1:48 PM (GMT-05:00) 
>>> To: Dave Martorana <dave at flyclops.com> 
>>> Cc: darren <darren at ontrenet.com>, riak-users at lists.basho.com
>>> Subject: Re: Riak consumes too much memory 
>>> 
>>> 
>>> Dave,
>>> 
>>> flexcache will be a new feature in Riak 2.0.  There are some subscribers to this mailing list who like to download and try things early.  I was directing those subscribers to the GitHub branch that contains the work-in-progress code.
>>> 
>>> flexcache is a new method for sizing and accounting for the memory used by leveldb.  It replaces the current method completely, so flexcache is not an optional setting to turn on but an upgrade to the existing logic.
>>> 
>>> Again, the detailed discussion is here:  https://github.com/basho/leveldb/wiki/mv-flexcache
>>> 
>>> Matthew
>>> 
>>> 
>>> On Oct 18, 2013, at 12:33 PM, Dave Martorana <dave at flyclops.com> wrote:
>>> 
>>>> Matthew,
>>>> 
>>>> For those of us who don't quite understand, can you explain: does this mean mv-flexcache is a feature that just comes with 2.0, or is it something that will need to be turned on?
>>>> 
>>>> Thanks!
>>>> 
>>>> Dave
>>>> 
>>>> 
>>>> On Thu, Oct 17, 2013 at 9:45 PM, Matthew Von-Maszewski <matthewv at basho.com> wrote:
>>>> It is already in test and available for your download now:
>>>> 
>>>> https://github.com/basho/leveldb/tree/mv-flexcache
>>>> 
>>>> Discussion is here:
>>>> 
>>>> https://github.com/basho/leveldb/wiki/mv-flexcache
>>>> 
>>>> This code is slated for Riak 2.0.  Enjoy!!
>>>> 
>>>> Matthew
>>>> 
>>>> On Oct 17, 2013, at 20:50, darren <darren at ontrenet.com> wrote:
>>>> 
>>>>> But why isn't Riak smart enough to adjust itself to the available memory, or lack thereof?
>>>>> 
>>>>> No serious enterprise technology should just consume everything and crash.
>>>>> 
>>>>> 
>>>>> Sent from my Verizon Wireless 4G LTE Smartphone
>>>>> 
>>>>> 
>>>>> 
>>>>> -------- Original message --------
>>>>> From: Matthew Von-Maszewski <matthewv at basho.com> 
>>>>> Date: 10/17/2013 8:38 PM (GMT-05:00) 
>>>>> To: ZhouJianhua <jh.zhou at outlook.com> 
>>>>> Cc: riak-users at lists.basho.com 
>>>>> Subject: Re: Riak consumes too much memory 
>>>>> 
>>>>> 
>>>>> Greetings,
>>>>> 
>>>>> The default config targets 5 servers and 16 to 32 GB of RAM.  Yes, the app.config needs some adjustment to achieve happiness for you:
>>>>> 
>>>>> - change ring_creation_size from 64 to 16 (remove the % from the beginning of the line)
>>>>> - add this line before "{data_root, <path>}" in the eleveldb section: "{max_open_files, 40}," (be sure the comma is at the end of the line; see the sketch below).
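>>>>> 
>>>>> A minimal sketch of how those two edits could look in app.config (an excerpt only; keep your existing data_root path and all other settings as they are):
>>>>> 
>>>>>     %% app.config excerpt -- illustrative only
>>>>>     {riak_core, [
>>>>>         %% uncommented and lowered from the default of 64
>>>>>         {ring_creation_size, 16}
>>>>>     ]},
>>>>>     {eleveldb, [
>>>>>         %% added line, note the trailing comma
>>>>>         {max_open_files, 40},
>>>>>         {data_root, "/var/lib/riak/leveldb"}   %% keep whatever path is already here
>>>>>     ]},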
>>>>> 
>>>>> Good luck,
>>>>> Matthew
>>>>> 
>>>>> 
>>>>> On Oct 17, 2013, at 8:23 PM, ZhouJianhua <jh.zhou at outlook.com> wrote:
>>>>> 
>>>>>> Hi
>>>>>> 
>>>>>> I installed Riak v1.4.2 on Ubuntu 12.04 (64-bit, 4 GB RAM) with apt-get, ran it with the default app.config but changed the backend to leveldb, and tested it with https://github.com/tpjg/goriakpbc.
>>>>>> 
>>>>>> I just kept putting (key, value) pairs into a bucket; the memory kept increasing, and in the end it crashed because it could not allocate memory.
>>>>>> 
>>>>>> Should I change the configuration or something else?
>>>>>> _______________________________________________
>>>>>> riak-users mailing list
>>>>>> riak-users at lists.basho.com
>>>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>>> 
>>>> 
>>>> _______________________________________________
>>>> riak-users mailing list
>>>> riak-users at lists.basho.com
>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>> 
>>>> 
>>> 
>> 
> 
