Retrying requests to Riak

matthew hawthorne mhawthorne at
Thu Apr 7 23:07:22 EDT 2011

I don't use haproxy (we use hardware load balancers), but if I did, I
would want to handle it at that layer instead of in the Riak client.
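To make that concrete, here's roughly what the haproxy side of this looks
like (server names and addresses here are made up). One caveat: `retries`
plus `option redispatch` only re-dispatches a *failed connection* to another
node -- older haproxy versions won't resend a request once a backend has
already answered with a 5xx (newer releases added a `retry-on` directive
for that; check your version's docs):

```
backend riak_http
    balance roundrobin
    option httpchk GET /ping
    retries 3
    option redispatch
    server riak1 10.0.0.1:8098 check
    server riak2 10.0.0.2:8098 check
    server riak3 10.0.0.3:8098 check
```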

However, if you're using the Java client, you can implement a custom
HttpClient retry handler to do this:

I believe that doc is for a newer version of HttpClient than the java
client depends on, but the concept is similar across versions.
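Just to sketch the idea in plain Java (this is not the Riak client's or
HttpClient's actual API -- `RetryHandler` here is a hypothetical stand-in
for HttpClient's retry-handler interface, and everything uses only the JDK
so you can run it as-is): a handler decides whether to try again, and the
loop fails over to the next node when it sees a 5xx.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.List;

public class RetryAcrossNodes {
    /** Hypothetical stand-in for HttpClient's retry-handler concept. */
    interface RetryHandler {
        boolean retryRequest(int statusCode, int executionCount);
    }

    /** GET the path from each node in turn until success or the handler declines. */
    static int getWithFailover(List<String> nodes, String path, RetryHandler handler)
            throws IOException {
        int attempt = 0;
        int status = -1;
        for (String node : nodes) {
            attempt++;
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(node + path).openConnection();
            status = conn.getResponseCode();
            conn.disconnect();
            // Stop on anything below 5xx, or when the handler says don't retry.
            if (status < 500 || !handler.retryRequest(status, attempt)) {
                return status;
            }
        }
        return status;  // all nodes exhausted
    }

    public static void main(String[] args) throws IOException {
        // Two local "nodes": the first always returns 503, the second 200.
        HttpServer bad = HttpServer.create(new InetSocketAddress(0), 0);
        bad.createContext("/ping", ex -> { ex.sendResponseHeaders(503, -1); ex.close(); });
        bad.start();
        HttpServer good = HttpServer.create(new InetSocketAddress(0), 0);
        good.createContext("/ping", ex -> { ex.sendResponseHeaders(200, -1); ex.close(); });
        good.start();

        List<String> nodes = List.of(
                "http://127.0.0.1:" + bad.getAddress().getPort(),
                "http://127.0.0.1:" + good.getAddress().getPort());

        // Retry on any 5xx, up to 3 attempts total.
        int status = getWithFailover(nodes, "/ping", (code, count) -> count < 3);
        System.out.println(status);  // 200: failed over past the 503 node

        bad.stop(0);
        good.stop(0);
    }
}
```

A handler that always returns false gives you the never-retry behavior we
use; returning true up to some attempt count gives the failover Greg is
after.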

We actually do the opposite of what you're looking for -- we override
it to never retry after failures.  But it all depends on how errors
are handled in your architecture.


On Thu, Apr 7, 2011 at 9:47 PM, Greg Nelson <grourk at> wrote:
> Hello,
> I have a simple three node cluster that I have been using for testing and
> benchmarking Riak.  Lately I've been simulating various failure scenarios --
> like a node going down, disk going bad, etc.
> My application talks to Riak through an haproxy instance running locally on
> each application server.  It's configured to round-robin over the nodes in
> the cluster for both HTTP and PBC interfaces, and uses the HTTP /ping health
> check.  I assume this is a rather typical setup.
> I'm only using the HTTP interface right now...  When there's a failure and a
> node returns a 5XX error, I'd like to have my application retry the request
> on a different node.  I could of course build this retry logic into my
> application, but what I'm wondering is if there's another way that people
> are typically doing this.  Is there a way to configure haproxy to do this?
>  Do any of the Riak client libraries have this logic built-in?
> Thanks!
> Greg
> _______________________________________________
> riak-users mailing list
> riak-users at