eLevelDB max_open_files in 1.2.0
dbrady at weborama.com
Thu Aug 16 09:41:24 EDT 2012
I have a somewhat-related follow up question: is there a recommended maximum for the amount of data held on a machine?
I ask because at my previous company Cassandra was used, and we were advised to put no more than 250 GB per physical box. The reasoning was that in a failure situation, rebuilding any more than this amount of data would cause too great a performance degradation.
----- Original Message -----
From: "Mark Phillips" <mark at basho.com>
To: "Dave Brady" <dbrady at weborama.com>
Cc: riak-users at lists.basho.com
Sent: Tuesday, August 14, 2012 7:34:10 PM GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
Subject: Re: eLevelDB max_open_files in 1.2.0
On Sun, Aug 12, 2012 at 3:58 PM, Dave Brady <dbrady at weborama.com> wrote:
> First I want to thank Basho for greatly expanding the documentation on the Wiki for configuring/tuning Riak and eLevelDB in 1.2.0! Big improvement over 1.1.x.
> My question is about max_open_files: the documentation here is confusing to me.
> It says to allocate one open file per 2 MB, then divide by the number of partitions. This is the same formula used in 1.1.x.
> It goes on to say that if you manually set this parameter in 1.1.x, to divide that value by two for 1.2.0.
> Should not the formula for 1.2.0, in that case, read as "use one file per 4 MB"?
Long story short, the answer is "yes" :)
With 1.2, 4 MB is the advised file size, and you should be running with
no fewer than 20 files per node. I'll take a pass at updating the docs to
make this a bit easier to understand. Thanks for pointing that out.
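The sizing rule discussed above can be sketched as a small calculation. This is only an illustration of the arithmetic implied by the thread (one open file per 4 MB of data in 1.2.0, divided across the node's partitions, with the 20-file floor Mark mentions); the function name, example data size, and partition count are hypothetical, not from Basho's docs.

```python
# Sketch of the 1.2.0 max_open_files arithmetic described in this thread.
# Assumptions (hypothetical, not from the official docs): one open file
# per 4 MB of data on the node, divided by the number of partitions
# (vnodes) the node hosts, with a floor of 20 files.

def max_open_files(data_bytes, partitions_on_node, mb_per_file=4, floor=20):
    files = data_bytes // (mb_per_file * 1024 * 1024)  # total files for the node
    per_partition = files // partitions_on_node        # split across vnodes
    return max(per_partition, floor)                   # never below the floor

# Example: 100 GB of data on a node hosting 16 partitions.
print(max_open_files(100 * 1024**3, 16))  # → 1600
```

For very small datasets the per-partition figure drops below 20, at which point the floor takes over.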
> Dave Brady
> riak-users mailing list
> riak-users at lists.basho.com