riak failure on concurrent writes

Evan Vigil-McClanahan emcclanahan at basho.com
Wed Oct 3 12:08:31 EDT 2012


For the fastest loading, you might try hitting more than one node; simply
round-robining your writes around the cluster should work. A rough sketch
is below.
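
Here's an untested sketch using the riak-erlang-client; the host names
and the standard protocol buffers port (8087) are placeholders for your
own setup:

load(Objects) ->
    Hosts = ["riak1", "riak2", "riak3", "riak4", "riak5"],
    Pids = [begin
                %% one protocol buffers connection per node
                {ok, Pid} = riakc_pb_socket:start_link(Host, 8087),
                Pid
            end || Host <- Hosts],
    %% Objects is a list of riakc_obj records; rotate through the
    %% connections so no single node absorbs every write.
    lists:foldl(fun(Obj, N) ->
                        Pid = lists:nth((N rem length(Pids)) + 1, Pids),
                        ok = riakc_pb_socket:put(Pid, Obj),
                        N + 1
                end, 0, Objects),
    ok.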

Additionally, make sure you've applied basic tuning to the cluster, and
try raising max_open_files in the eleveldb section of app.config to a
higher value (but not too high, as you don't have much memory on those
nodes). Once leveldb grows past its first level, low values can cause
file handle contention. A config sketch follows.
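
For example, in each node's app.config (the 150 here is illustrative,
not a recommendation; size it to your 4 GB nodes):

{eleveldb, [
    {data_root, "/var/lib/riak/leveldb"},
    %% each open file costs memory, so leave headroom
    {max_open_files, 150}
]},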

Basic sysctl tuning values:

net.core.wmem_default=8388608
net.core.rmem_default=8388608
net.core.wmem_max=8388608
net.core.rmem_max=8388608
net.core.netdev_max_backlog=10000
net.core.somaxconn=4000
net.ipv4.tcp_max_syn_backlog=40000
net.ipv4.tcp_fin_timeout=15
net.ipv4.tcp_tw_reuse=1
vm.swappiness=0
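
These can be set at runtime with sysctl -w, or added to /etc/sysctl.conf
so they survive a reboot; for example (run as root):

# apply one setting immediately
sysctl -w vm.swappiness=0
# or append the full list above to /etc/sysctl.conf and reload it
sysctl -p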

Also try changing your I/O scheduler to deadline or noop (noop is mostly for SSDs):

http://doc.opensuse.org/products/draft/SLES/SLES-tuning_sd_draft/cha.tuning.io.html
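
Assuming your data volume is /dev/sda (substitute your actual device),
you can check and switch the scheduler at runtime:

# the scheduler shown in brackets is the active one
cat /sys/block/sda/queue/scheduler
# switch now; add elevator=deadline to the kernel command line
# (or your distro's equivalent) to make it stick across reboots
echo deadline > /sys/block/sda/queue/scheduler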

On Wed, Oct 3, 2012 at 5:42 AM, Venki Yedidha
<venkatesh.yedidha at gmail.com> wrote:
> Hi All,
>
> I now have a 5-node Riak cluster, with every node running the
> eleveldb backend (4 GB of RAM per node). I am trying to insert
> approximately 34,000 objects through one of the Riak nodes
> asynchronously via the riak-erlang client. After some time it stops
> accepting inserts. Do I need to change any settings in the config
> files so that Riak handles around 35k concurrent writes without
> failing?
>
>
> Please help me with the above.
>
> Thanks,
> Venkatesh



