Riak 1.4 test on Azure - Webmachine error at path ...

Christian Rosnes christian.rosnes at gmail.com
Tue Jul 30 04:51:34 EDT 2013


On Sun, Jul 28, 2013 at 10:08 PM, Matthew Von-Maszewski
<matthewv at basho.com> wrote:

>
> leveldb has two independent caches: a file cache and a data block cache.
> You have raised the data block cache from its default 8MB to 256MB per your
> earlier note.  I would recommend the following:
>
> {max_open_files, 50},     %% 50 * 4Mbytes allocation for file cache
> {cache_size, 104857600},  %% 100Mbytes for data block cache
>
> The max_open_files default is 20 (which is internally reduced by 10).  You
> are likely thrashing file opens.  The file cache is far more important to
> performance than the data block cache.
>
> Find the LOG file within one of your database "vnode" directories.  Look
> for a line like this ' compacted to: files[ 0 9 25 14 2 0 0 ]'.  You would
> like to be covering that total count of files (plus 10) with your
> max_open_files setting.  Take the cache_size down to as low as 8Mbytes to
> achieve the coverage.  Once you are down to 8Mbytes of cache_size, you
> should go no lower and give up on full max_open_files coverage.
>
> Summary: total memory per vnode in 1.4 is
> (max_open_files - 10) * 4Mbytes + cache_size
>
>
Thank you.

In app.config I have now set this in the eleveldb section:

  {cache_size, 8388608},    %% 8MB
  {max_open_files, 260}
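
Plugging those values into the summary formula above, that works out to,
per vnode:

  (max_open_files - 10) * 4MB + cache_size
  = (260 - 10) * 4MB + 8MB
  = 1008MB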

The 'max_open_files' value was based on the highest sum of the seven numbers
I read in the 'compacted to ...' lines. Is this the right way to set this
parameter, or is it too high to have any significant benefit?

After the sysctl.conf and app.config changes I've inserted about
50 million JSON objects via HTTP and not seen any errors. The performance
for each 1-hour test has ranged from 1700 to 1850 insert req/s.
The bucket used during testing now contains over 84 million JSON objects.

Btw - when I now check the logs:

[root@riak01 leveldb]# grep 'compacted' */LOG
2013/07/29-19:38:14.650538 7f96a1bdc700 compacted to: files[ 0 1 21 289 0 0 0 ]
2013/07/29-19:34:58.838520 7f9692138700 compacted to: files[ 2 0 30 288 0 0 0 ]
2013/07/29-19:38:02.037188 7f96c51b8700 compacted to: files[ 1 0 27 301 0 0 0 ]
2013/07/29-19:38:12.214409 7f96c51b8700 compacted to: files[ 1 0 26 302 0 0 0 ]
2013/07/29-19:37:06.503530 7f96a1bdc700 compacted to: files[ 1 1 22 284 0 0 0 ]
2013/07/29-19:31:41.932370 7f96bffff700 compacted to: files[ 0 1 25 291 0 0 0 ]
2013/07/29-19:32:18.097417 7f96bffff700 compacted to: files[ 0 1 24 292 0 0 0 ]
2013/07/29-19:30:20.986832 7f968e538700 compacted to: files[ 2 1 24 278 0 0 0 ]
2013/07/29-19:37:47.139039 7f96c51b8700 compacted to: files[ 3 0 20 300 0 0 0 ]
2013/07/29-19:15:10.950633 7f968db37700 compacted to: files[ 0 2 33 263 0 0 0 ]
2013/07/29-19:33:01.001246 7f968e538700 compacted to: files[ 1 1 30 280 0 0 0 ]
2013/07/29-19:31:41.494208 7f96bffff700 compacted to: files[ 1 1 25 283 0 0 0 ]
2013/07/29-19:31:57.008503 7f96bffff700 compacted to: files[ 1 1 24 284 0 0 0 ]
2013/07/29-19:39:00.008635 7f96a1bdc700 compacted to: files[ 0 1 31 289 0 0 0 ]
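
To pull the totals out of those lines, something like this should work
(assuming gawk; it prints the largest files[...] sum plus the internal
reduction of 10):

  grep 'compacted to' */LOG | \
    awk -F'[][]' '{ s = 0                 # reset sum for each log line
                    n = split($2, a, " ") # the per-level file counts
                    for (i = 1; i <= n; i++) s += a[i]
                    if (s > max) max = s }
                  END { print max + 10 }' # +10 for the internal reduction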

Could there be a benefit in increasing 'max_open_files' even further, say to
340 (assuming there is enough memory)?
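(340 would just cover the largest line above: 1 + 0 + 26 + 302 + 0 + 0 + 0
= 329 files, plus the internal reduction of 10, gives 339.)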

Christian
@NorSoulx