How do I improve LevelDB performance?

Ryan Zezeski rzezeski at
Fri May 11 13:05:18 EDT 2012

On Fri, May 11, 2012 at 12:42 PM, Tim Haines <tmhaines at> wrote:

> I guess I was hoping that someone could look at these results and say
> "Given the use case and the hardware, Riak should be performing 10x what
> you're seeing, so something is configured wrong."  I'm not hearing that
> though.  What I'm hearing is "Is that a realistic use case?".
> So given this use case, and the hardware I have, these are expected
> results?
> Tim.

Yeah, I don't want to come off as dodging the question.  I've seen lots of
people run benchmarks for use cases they don't even have.  That doesn't
seem to be the case here.

I put very little stock in absolute numbers for the most part.  I'm not
sure what numbers you should see, because I've never tried this particular
case with that hardware.  One question to ask is whether it's truly LevelDB
or Riak itself causing the slowness.  I'm assuming you chose LevelDB either
for secondary indexes or because your keys don't fit in memory, but I
imagine if you ran the same benchmark with Bitcask you'd see much better
results.
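As a quick sanity check, you could rerun the same benchmark after switching
the storage backend in each node's app.config.  A sketch of the relevant
fragment (backend module names as shipped in Riak 1.x):

```erlang
%% app.config -- riak_kv section (fragment)
{riak_kv, [
    %% LevelDB backend: supports secondary indexes, keys need not
    %% fit in memory.
    %% {storage_backend, riak_kv_eleveldb_backend}

    %% Bitcask backend: keydir held in memory, no 2i support, but
    %% typically faster gets/puts for this kind of workload.
    {storage_backend, riak_kv_bitcask_backend}
]}
```

If the Bitcask run is dramatically faster, the bottleneck is likely in the
backend rather than in Riak itself.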

Since the application's semantics are to always write a unique key, you can
also take advantage of the `last_write_wins` bucket property.  It avoids
some work in your case, but Riak still has to read the existing object when
the backend has index capabilities (in order to delete the old index
entries).  Using it with something like Bitcask avoids that read entirely.
It seems to me that, for use cases like this, it would be good to have a
'just_write_it' option with the semantics of "I know this key is unique,
and even if for some weird reason it isn't, I don't care, so just write
what I pass you."
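Enabling the property is a one-line change per bucket.  The sketch below
assumes a node listening on the default HTTP port 8098 and a hypothetical
bucket named "events":

```shell
# Enable last_write_wins on the (hypothetical) "events" bucket.
# With this set, Riak can skip reconciling vector clocks on write,
# though (as noted above) an index-capable backend still reads the
# old object to clean up its index entries.
curl -X PUT http://127.0.0.1:8098/riak/events \
  -H "Content-Type: application/json" \
  -d '{"props": {"last_write_wins": true}}'

# Confirm the property took effect:
curl http://127.0.0.1:8098/riak/events/props
```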

All that said, there is work currently going on to add bloom filters to
LevelDB to alleviate the not-found issue.  I'm not sure what the status is,
but perhaps someone else will chime in on that.


More information about the riak-users mailing list