riak-users Digest, Vol 33, Issue 30

Elias Levy fearsome.lucidity at gmail.com
Fri Apr 20 13:11:15 EDT 2012

On Fri, Apr 20, 2012 at 9:01 AM, <riak-users-request at lists.basho.com> wrote:

> Eventually this becomes the primary workload of the cluster and individual
> deletion latencies grow (more detailed measurements on the shape of this
> degradation are forthcoming if that is helpful).

Are you running in EC2 or on bare metal?

How are you deleting the values? One call per key?

We perform our deletes using MapReduce and an Erlang reduce function.  If you
set the pre-reduce option, this keeps each delete on the node that holds
the value.



% Data is a list of bucket/key pairs, intermixed with counts of objects
% deleted by earlier reduce passes. Returns a count of deleted objects.
delete(List, _None) ->
  {ok, C} = riak:local_client(),
  % Delete a single object; count 1 on success, 0 otherwise.
  Delete = fun(Bucket, Key) ->
    case C:delete(Bucket, Key, 0) of
      ok -> 1;
      _ -> 0
    end
  end,
  % Accept the shapes a reduce phase may receive: a {{Bucket, Key},
  % KeyData} tuple, a {Bucket, Key} pair, a [Bucket, Key] list, or a
  % count carried over from a previous reduce pass.
  F = fun(Elem, Acc) ->
    case Elem of
      {{Bucket, Key}, _KeyData} ->
        Acc + Delete(Bucket, Key);
      {Bucket, Key} ->
        Acc + Delete(Bucket, Key);
      [Bucket, Key] ->
        Acc + Delete(Bucket, Key);
      _ ->
        Acc + Elem
    end
  end,
  [lists:foldl(F, 0, List)].
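For reference, invoking a reduce function like the one above through Riak's MapReduce might look roughly like the sketch below. The module name `my_deletes` is hypothetical, and the `do_prereduce` flag and `C:mapred/2` call should be verified against your Riak version before relying on them.

```erlang
%% Sketch: run the delete/2 reduce from the Erlang console on a Riak node.
%% Requires a live node; `my_deletes` is a hypothetical module holding delete/2.
{ok, C} = riak:local_client(),
Inputs = [{<<"bucket">>, <<"key1">>}, {<<"bucket">>, <<"key2">>}],
Query = [{reduce,
          {modfun, my_deletes, delete},
          [do_prereduce],   %% keep each delete on the node holding the value
          true}],           %% keep = return this phase's result to the caller
{ok, [Deleted]} = C:mapred(Inputs, Query).
```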

> We are using riak 1.1 directly from
> https://github.com/basho/riak/tree/1.1 with the eleveldb backend. The
> eleveldb specific configuration follows, but
> fiddling with these settings hasn't noticeably impacted behavior we've
> seen. Planning to set delete_mode to immediate and see if that helps.
> Here's some other info that might be helpful but feel free to ask for
> anything else.
> N = 3 (changing to 2) on 9 physical nodes with 32GB memory each
> Our leveldb config looks like this:
>  %% eLevelDB Config
>  {eleveldb, [
>             {data_root, "/srv/riak/leveldb"},
>             {max_open_files, 400},
>             {block_size, 262144},
>             {cache_size, 1932735280},
>             {sync, false}
>            ]},
You can also increase the leveldb write_buffer_size config option.

                %% Amount of data to build up in memory (backed by an
                %% unsorted log on disk) before converting to a sorted
                %% on-disk file. Larger values increase performance,
                %% especially during bulk loads. Up to two write buffers
                %% may be held in memory at the same time, so you may
                %% wish to adjust this parameter to control memory usage.
                %% Also, a larger write buffer will result in a longer
                %% recovery time the next time the database is opened.
                %% Default is: 4MB.
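Concretely, write_buffer_size slots into the same eleveldb section quoted above. The 60MB value below is purely illustrative, not a recommendation; tune it against your workload and memory budget.

```erlang
 %% eLevelDB Config (illustrative -- adjust write_buffer_size for your workload)
 {eleveldb, [
            {data_root, "/srv/riak/leveldb"},
            {max_open_files, 400},
            {block_size, 262144},
            {cache_size, 1932735280},
            {write_buffer_size, 62914560},  %% 60MB; leveldb default is 4MB
            {sync, false}
           ]},
```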
