How do I improve Level DB performance?

Tim Haines tmhaines at
Thu May 10 16:33:02 EDT 2012

Hi there,

I've set up a new cluster and have been running pre-deployment benchmarks on
it. The benchmark slowly sank from 1000 TPS to 250 TPS over the course of a
single 8-hour run doing 1 read + 1 update with 1 KB values. I'm wondering if
anyone has suggestions on how I can improve this.

Here's a graph of the benchmark results:
And the basho_bench config:

This is on a cluster of 5 nodes of identical hardware, with the benchmark
running from a separate server. Each node has a single quad-core Sandy
Bridge CPU, 8 GB of RAM, and a 600 GB 15K RPM drive. They're running
Ubuntu 10.04 with the ext4 filesystem and the noop I/O scheduler.

For the benchmark above, Riak was set up with the LevelDB backend, a ring
size of 1024, and the default LevelDB config settings.

There was about 10 GB of benchmark data already on each node. While the
benchmarks are running, iowait hits about 20%, disk utilization about 80%,
and each server consumes about 4 GB of RAM.

Can anyone confirm this is expected performance from this kind of setup, or
offer suggestions on how to improve performance?

As an aside, I've since repaved the 5 nodes with a ring size of 128 (though
I'm concerned about the scaling cap that implies) and LevelDB settings like
this:

        {write_buffer_size, 16777216},
        {max_open_files, 100},
        {block_size, 262144},
        {cache_size, 168430088}
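For anyone checking my math, here's a rough per-node memory estimate for
these settings. This is a back-of-envelope sketch resting on an assumption
I haven't verified: that Riak opens one LevelDB instance per vnode, and
that write_buffer_size and cache_size each apply per instance.

```python
import math

# Assumption (unverified): one LevelDB instance per vnode, with
# write_buffer_size and cache_size allocated per instance.
ring_size = 128
nodes = 5
write_buffer_size = 16777216   # 16 MiB, from the config above
cache_size = 168430088         # ~161 MiB, from the config above

vnodes_per_node = math.ceil(ring_size / nodes)    # 26
bytes_per_vnode = write_buffer_size + cache_size
total_bytes = vnodes_per_node * bytes_per_vnode

print(f"{vnodes_per_node} vnodes x ~{bytes_per_vnode / 2**20:.0f} MiB "
      f"= ~{total_bytes / 2**30:.1f} GiB per node")
# → 26 vnodes x ~177 MiB = ~4.5 GiB per node
```

If that assumption holds, these settings put me near the 8 GB ceiling once
the OS page cache is accounted for; at the original ring size of 1024 (~205
vnodes per node) they would be far past it, which is part of why I dropped
the ring size.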

I've set it running on another 8-hour benchmark to see what it turns up.


Tim Haines.

More information about the riak-users mailing list