Recovering Riak data if it can no longer load in memory

Vikram Lalit vikramlalit at gmail.com
Tue Jul 12 11:56:57 EDT 2016


Hi - I've been testing a Riak cluster (of 3 nodes) behind an ejabberd
messaging cluster that writes data to the Riak nodes. While load testing
the platform (creating 0.5 million ejabberd users via Tsung), I found
that the Riak nodes suddenly crashed. My question is how we can recover
from such a situation if it were to occur in production.

To provide further context / details: the leveldb log files storing the
data suddenly grew too large, so the AWS Riak instances could no longer
load them into memory. As a result, 'riak start' produces a core dump on
those instances. I had n_val = 2, and all 3 nodes went down almost
simultaneously, so in this scenario we cannot even rely on a second copy
of the data. One way to prevent this in the first place would of course
be auto-scaling, but I'm wondering whether there is an ex post facto
recovery that can be performed after the event. Is it possible to simply
copy the leveldb data to an instance with more memory, or to trim the
data down so it can be loaded on the same instance?
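(For what it's worth, the "copy to a bigger instance" route I'm imagining
is just a file-level move of the leveldb partition directories while Riak
is stopped - something like the sketch below. All paths here are
hypothetical and `migrate_leveldb` is my own helper name, not anything
from Riak itself; the real data root depends on your platform's
app.config.)

```python
import shutil
from pathlib import Path

# Hypothetical paths - adjust to your platform's layout.
SRC = Path("/var/lib/riak/leveldb")        # leveldb data root on the crashed node
DST = Path("/mnt/bigger-volume/leveldb")   # data root on the larger instance

def migrate_leveldb(src: Path, dst: Path) -> int:
    """Copy each leveldb partition directory verbatim; return bytes copied.

    Riak must be stopped on the source node before copying, so the
    .sst / LOG / MANIFEST files are not being rewritten mid-copy.
    """
    total = 0
    for partition in sorted(src.iterdir()):
        if not partition.is_dir():
            continue
        dest = dst / partition.name
        shutil.copytree(partition, dest, dirs_exist_ok=True)
        total += sum(f.stat().st_size for f in dest.rglob("*") if f.is_file())
    return total
```

After the copy, the idea would be to point the eleveldb data_root on the
larger instance at the new location and try 'riak start' there - but I'd
like to know whether that is actually a supported recovery path.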

I'd appreciate any input - I'm a tad concerned about how we would
recover from such a situation if it happened in production (apart from
leveraging auto-scaling as a preventive measure).

Thanks!
