Python client performance issue
tburdick at wrightwoodtech.com
Mon Feb 14 19:01:37 EST 2011
I would highly recommend looking into the cProfile and pstats modules and
profiling the code that is running slowly. If you're using the protocol
buffers client, the slowdown could well be the Python protocol buffers
implementation, which is well known to be slow. Profile until proven otherwise.
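To make that concrete, here is a minimal sketch of profiling a write path with cProfile and pstats. `write_snapshot()` is a hypothetical stand-in for whatever function serializes and stores the snapshot; substitute your actual Riak write code.

```python
import cProfile
import io
import pstats

def write_snapshot():
    # Placeholder for the real code that builds and stores the JSON
    # snapshot (roughly the shape described below: ~60 sub-objects of
    # ~22 values each). Profile your actual function instead.
    data = {"sub%d" % i: {"v%d" % j: j for j in range(22)}
            for i in range(60)}
    return len(str(data))

profiler = cProfile.Profile()
profiler.enable()
write_snapshot()
profiler.disable()

# Sort by cumulative time to see which calls dominate the CPU usage,
# and print the top ten entries.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

If protocol buffer serialization dominates the cumulative-time column, that would confirm the suspicion above; if the time is elsewhere, you've saved yourself a wrong turn.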
On Mon, Feb 14, 2011 at 7:09 AM, Mike Stoddart <stodge at gmail.com> wrote:
> I added some code to my system to test writing data into Riak. I'm
> using the Python client library with protocol buffers. I'm writing a
> snapshot of my current data, which is one JSON object containing on
> average 60 individual JSON sub-objects. Each sub-object contains about
> 22 values.
> # Archived entry. ts is a formatted timestamp.
> entry = self._bucket.new(ts, data=data)
> # Now write the current entry.
> entry = self._bucket.new("current", data=data)
> I'm writing the same data twice: the archived copy and the current
> copy, which I can easily retrieve later. Performance is lower than
> expected; top shows a constant CPU usage of 10-12%.
> I haven't decided whether to use Riak; this test is to help me decide.
> But for now, are there any optimisations I can make here? A similar
> test with MongoDB shows a steady CPU usage of 1%. The CPU usages are
> for my client, not Riak's own processes. The only difference between
> the two test apps is the code that writes the data to the database;
> all other code is 100% the same.
> Any suggestions appreciated.
> riak-users mailing list
> riak-users at lists.basho.com