riak TS max concurrent queries + overload error

Chris.Johnson at vaisala.com Chris.Johnson at vaisala.com
Wed Jul 27 18:19:01 EDT 2016


We are experiencing error messages from the client that we don’t totally understand. They look like the following:

<Riak::ProtobuffsErrorResponse: Expected success from Riak but received 1013. no response from backend>
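For what it's worth, here is the kind of stopgap we have considered on the client side: retry with exponential backoff when the overload response comes back. This is only a sketch — `run_query` below stands in for whatever riak-ruby-client call actually issues the TS query; it is not a real client method:

```ruby
# Hedged sketch: retry a TS query with exponential backoff when the
# cluster reports overload (code 1013 / "no response from backend").
# The block passed in is a stand-in for the real client call.
def query_with_backoff(max_attempts: 5)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError => e
    # Only retry overload-style responses; re-raise everything else.
    raise unless e.message.include?('1013') || e.message.include?('overload')
    raise if attempts >= max_attempts
    sleep(0.05 * (2**attempts)) # back off: 0.1s, 0.2s, 0.4s, ...
    retry
  end
end
```

Backoff at least keeps a burst of clients from hammering an already-overloaded node, but it doesn't answer the sizing question below.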

Checking the Riak error and crash logs, I'm seeing "overload" errors, which I assume are causing the "no response from backend" client errors.


I'm curious whether these overload errors are caused by clients requesting more concurrent TS queries than our current timeseries_max_concurrent_queries setting allows, or whether timeseries_max_concurrent_queries is set too high and we are overloading Riak to the point of crashing.
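For reference, this is the knob I mean — set per node in riak.conf (the parameter name is as I've seen it in our config; the value shown is illustrative, not a recommendation):

```
## riak.conf — cap on concurrently executing TS queries per node.
## Queries beyond this limit are rejected with an overload error
## rather than queued indefinitely.
timeseries_max_concurrent_queries = 3
```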

Do you have any recommendations on what timeseries_max_concurrent_queries should be set to relative to hardware specs? I assume it should be limited based on disk I/O bandwidth.

Also, does anyone have recommendations on query pooling so we can guarantee that multiple clients will not generate more queries than the cluster can handle? I like HAProxy for HTTP connection pooling, but it doesn't seem well suited to limiting the global number of queries from multiple PBC clients.
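The closest thing we've sketched so far is a per-process limiter: a token bucket built on Ruby's SizedQueue so at most N queries are in flight at once, however many threads issue them. The class and names below are illustrative, not part of any Riak client API:

```ruby
require 'thread'

# Hedged sketch of client-side query limiting: hold a fixed pool of
# tokens in a SizedQueue; each query must take a token before running
# and returns it when done, so at most `limit` queries run concurrently.
class QueryLimiter
  def initialize(limit)
    @tokens = SizedQueue.new(limit)
    limit.times { @tokens << :token }
  end

  # Blocks until a slot is free, runs the block, then releases the slot.
  def with_slot
    @tokens.pop
    begin
      yield
    ensure
      @tokens << :token
    end
  end
end
```

Of course this only bounds one process; to bound the whole fleet you'd need the limit shared across clients (a broker or proxy that understands queries, not just connections), which is exactly where HAProxy falls short for us.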

Thank you!

