How do I improve LevelDB performance?
tmhaines at gmail.com
Fri May 11 12:42:15 EDT 2012
On Fri, May 11, 2012 at 9:20 AM, Tim Haines <tmhaines at gmail.com> wrote:
> On Fri, May 11, 2012 at 9:13 AM, Ryan Zezeski <rzezeski at basho.com> wrote:
>> On Thu, May 10, 2012 at 11:14 PM, Tim Haines <tmhaines at gmail.com> wrote:
>>> With the adjusted ring size and settings, and adjusted to only do puts
>>> (so no missed reads), my cluster is doing about 400 puts per second:
>> Actually, every put (a put at the Riak API level) does a read on the
>> backend. This is needed to merge contents from the two objects.
>> As Dave already mentioned, the key-generation strategy, along with
>> LevelDB's degrading performance on not-found, means your benchmark will
>> just get worse the longer it runs.
>> Are you testing an actual use case here? Do you envision 100M objects
>> being written in a constant stream? Will your objects have a median size
>> of 1000 bytes? Basho bench also provides a pareto key generator which uses
>> a fraction of the key space most of the time. I'm not sure it matches your
>> use case but thought I'd mention it is there.
> Hi Ryan,
> Thanks. Greg just mentioned the reads on puts too. I'd changed the config
> to 250-byte values (roughly what I store for a tweet) and reran it
> overnight, and observed performance drop from 400 puts/s to 250 puts/s.
> Right now my use case has me constantly writing about 200 new tweets per
> second, so unless I'm missing something, this throughput measurement is a
> realistic indicator for me.
I guess I was hoping that someone could look at these results and say
"Given the use case and the hardware, Riak should be performing 10x what
you're seeing, so something is configured wrong." I'm not hearing that
though. What I'm hearing is "Is that a realistic use case?".
So given this use case, and the hardware I have, these are expected results?
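For anyone following along, a basho_bench configuration exercising the pareto key generator Ryan mentioned might look roughly like the sketch below. The parameter values (key space size, concurrency, duration, endpoint) are illustrative assumptions, not recommendations:

```erlang
%% Sketch of a basho_bench config: the pareto key generator skews most
%% operations toward a small fraction of the key space, so the backend
%% read-before-write finds existing keys more often than with random keys.
{mode, max}.
{duration, 30}.                            %% minutes
{concurrent, 8}.
{driver, basho_bench_driver_riakc_pb}.
{riakc_pb_ips, [{127,0,0,1}]}.             %% illustrative endpoint
{key_generator, {pareto_int, 100000000}}.  %% skewed draw over 100M keys
{value_generator, {fixed_bin, 250}}.       %% ~tweet-sized values
{operations, [{put, 1}]}.                  %% puts only, as in the test above
```

Whether the skewed distribution matches a tweet-ingestion workload (mostly new keys) is a separate question; it mainly helps when re-writes of hot keys are expected.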