Recovering data when a node rejoins the cluster (with all node data lost)

Germain Maurice germain.maurice at
Tue May 18 04:54:24 EDT 2010

Hi Dan,

Thank you for this "trick", it's faster than a GET operation on objects.
HEAD requests on all docs will rebalance the replication for the node 
where we make the requests.
However, I make only about 100,000 HEAD requests per hour - does that 
seem normal to you?
The HEAD requests caused the node to be repopulated with more than 120GB 
of data.
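For anyone reading along, the HEAD-based repair loop could be sketched in Python like this. This is only a sketch: the host, port, bucket name, and key list are placeholders, and it assumes Riak's HTTP interface on its default port 8098 with the classic /riak/<bucket>/<key> path.

```python
import http.client

RIAK_HOST = "127.0.0.1"   # placeholder: your Riak node's address
RIAK_PORT = 8098          # Riak's default HTTP port

def head_path(bucket, key):
    """Build the REST path for a HEAD request on bucket/key."""
    return "/riak/%s/%s" % (bucket, key)

def repair_keys(bucket, keys):
    """HEAD every key; Riak read-repairs divergent replicas as a side
    effect, without transferring object bodies over the wire."""
    conn = http.client.HTTPConnection(RIAK_HOST, RIAK_PORT)
    statuses = {}
    for key in keys:
        conn.request("HEAD", head_path(bucket, key))
        resp = conn.getresponse()
        resp.read()  # drain the (empty) body so the connection is reusable
        statuses[key] = resp.status  # 200 if found, 404 if missing
    conn.close()
    return statuses
```

The main design point is reusing one HTTP connection for the whole key list, since a new TCP handshake per key would dominate the cost of body-less HEAD requests.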

Is there a "riak-admin" command to do this without knowing all the 
keys of the bucket?

See you for the next question ;)

Have a good day !

On 17/05/10 18:11, Dan Reverri wrote:
> Hi Germain,
> You can make a HEAD request to the bucket/key path. It will return 404 
> or 200 without the document body.
> On Mon, May 17, 2010 at 9:04 AM, Germain Maurice 
> <germain.maurice at 
> <mailto:germain.maurice at>> wrote:
>     On 17/05/10 15:34, Paul R wrote:
>         What should the user do to come back to the previous level of
>         replication ? A forced read repair, in other words a GET with
>         R=2 on all
>         objects of all buckets ?
>     Yes, I wonder too what is the best thing to do after a node crash.
>     In the end, I'm doing read requests on all keys of the bucket.
>     I found that R=1 (on all bucket keys) on the new node will adjust
>     the replication level...
>     I wonder if R=3 and R=1, on the node to repopulate, lead to the
>     same result?
>     In order to do a read repair, we have to make read requests, but
>     it implies reading the stored object.
>     On a read repair, I assume that returning bodies is unnecessary,
>     especially on large objects (which I don't have).
>     It would be useful to provide an API operation to test the
>     existence of an object without reading it...
>     I read the REST API documentation; I didn't find this kind of
>     operation.
>     Thanks.
>     _______________________________________________
>     riak-users mailing list
>     riak-users at <mailto:riak-users at>

