dzagidulin at basho.com
Thu Oct 31 12:15:38 EDT 2013
How large are the objects that you're requesting (in the batch of 1000)?
Also, what does your cluster configuration look like? How many nodes? Are
you load-balancing the GETs to your riak nodes (via something like
HAProxy), or are you making requests to a single riak node?
It sounds like the first thing to investigate, in your case, is where the
slowdown is happening: on the client side or on the server side.
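One quick way to get a client-side data point is to time the GETs yourself. This is a minimal sketch using Ruby's standard Benchmark module; `fetch` here is a hypothetical stand-in for a single Riak GET (with the riak-client gem it would be something like `client.bucket('my_bucket').get(key)`):

```ruby
require 'benchmark'

# Hypothetical stand-in for one Riak GET; swap in a real
# riak-client call (e.g. bucket.get(key)) when measuring.
def fetch(key)
  sleep(0.001) # simulate network + server latency
  "value-for-#{key}"
end

keys = (1..100).map { |i| "key#{i}" }

elapsed = Benchmark.realtime do
  keys.each { |k| fetch(k) }
end

puts format('%d GETs in %.2fs (%.1f req/s)', keys.size, elapsed, keys.size / elapsed)
```

Comparing that number against what an external load-generation tool reports for the same cluster tells you which side of the wire the time is going.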
To get a reference data point on the cluster-side multiget performance, you
can use an external tool like https://github.com/basho/riak-data-migrator or
https://github.com/basho/basho_bench to get a rough idea of how many
requests/sec your Riak cluster can handle.
For example, you can download the Data Migrator (which has pretty good
multithreaded connection-pooled GETs that use the Java Riak Client) and do
an export of your cluster (or the bucket in question), and look at the
resulting objs/sec number.
If the bottleneck turns out to be on the Ruby client side, you should
investigate JRuby for better multithreaded performance, or use a
concurrency library like Celluloid.
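Even with plain MRI threads (which release the GIL on network I/O) you can get decent concurrency for a client-side multiget. Here is a minimal thread-pool sketch; `fetch_one` is a hypothetical stand-in for a single Riak GET, and the pool size is an assumption you'd tune against your cluster:

```ruby
require 'thread'

# Hypothetical single-key GET; in real code this would call the
# riak-client gem, e.g. bucket.get(key).
def fetch_one(key)
  "value-for-#{key}"
end

# Fetch all keys using a fixed-size pool of worker threads.
def multiget(keys, pool_size: 10)
  queue = Queue.new
  keys.each { |k| queue << k }
  results = {}
  mutex = Mutex.new

  workers = pool_size.times.map do
    Thread.new do
      loop do
        key = begin
          queue.pop(true) # non-blocking pop; raises ThreadError when empty
        rescue ThreadError
          break
        end
        value = fetch_one(key)
        mutex.synchronize { results[key] = value }
      end
    end
  end

  workers.each(&:join)
  results
end

results = multiget((1..1000).map { |i| "key#{i}" })
puts "fetched #{results.size} objects"
```

Under JRuby the same pattern gets true parallelism; Celluloid would replace the hand-rolled pool with actors or futures, but the shape of the work is the same.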
On Tue, Oct 29, 2013 at 6:24 AM, Vincent Chavelle <
vincent.chavelle at gmail.com> wrote:
> I fell in love with riak and riak-cs, and I have migrated my whole stack to
> them (originally from mongodb).
> But I have one big issue. I have a lot of keys to request simultaneously,
> and thanks to the multi_get implementation (ruby) it's already optimised on
> the client side (concurrent requests). But I would like to know if any
> server-side implementation is coming because, in my case, it is very very
> slow to request 1000 objects (unlike mongodb).
> You will make me the happiest man in the world. And I could take off my
> hideous memory caching solution :-)
> Vincent Chavelle
> riak-users mailing list
> riak-users at lists.basho.com