Python client performance issue
stodge at gmail.com
Mon Feb 14 08:09:20 EST 2011
I added some code to my system to test writing data into Riak. I'm
using the Python client library with protocol buffers. I'm writing a
snapshot of my current data, which is one JSON object containing on
average 60 individual JSON sub-objects. Each sub-object contains about
# Archived entry. ts is a formatted timestamp.
entry = self._bucket.new(ts, data=data)
entry.store()
# Now write the current entry under a fixed key.
entry = self._bucket.new("current", data=data)
entry.store()
So I'm writing the same data twice: an archived copy and a current
copy that I can easily retrieve later. Performance is lower than
expected; top shows my client at a constant 10-12% CPU.
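To rule out serialization cost before blaming the client library, I timed JSON encoding of a payload shaped roughly like mine in isolation (a sketch; the sub-object field names and counts below are made up, not my real schema):

```python
import json
import time

# Approximate payload: one object holding ~60 sub-objects,
# each with a handful of fields (shape is invented for the test).
data = {"item_%d" % i: {"field_%d" % j: j for j in range(10)}
        for i in range(60)}

start = time.perf_counter()
for _ in range(1000):
    encoded = json.dumps(data)
elapsed = time.perf_counter() - start
print("1000 encodes took %.3fs" % elapsed)
```

On my machine the encode time alone is small, so the CPU usage seems to come from the write path itself rather than from building the JSON.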
I haven't decided to use Riak yet; this test is to help me decide. But
for now, are there any optimisations I can make here? A similar test
with MongoDB shows a steady CPU usage of 1%. These CPU figures are for
my client process, not Riak's own processes. The only difference
between the two test apps is the code that writes the data to the
database; all other code is 100% the same.
Any suggestions appreciated.