Fwd: Riak Map Reduce Performance

Sean Cribbs sean at basho.com
Fri Aug 26 09:33:36 EDT 2011


---------- Forwarded message ----------
From: Sean Cribbs <sean at basho.com>
Date: Fri, Aug 26, 2011 at 9:33 AM
Subject: Re: Riak Map Reduce Performance
To: "Fisher, Ryan" <rfisher at cyberpointllc.com>


Yes, list-keys is affected by the entire keyspace. Backends other than
Bitcask may have better performance when listing keys (the newish LevelDB
backend being one) because they store them in sorted order and can skip
around. That said, it will still invoke the key-listing across 1/N of the
vnodes in the cluster (aka "spamming").  If you want to compare it to RDBMS,
consider it a "full-table scan" across a table with no indexes. It's going
to be slow!
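
For instance, when you already know which keys you care about, feeding
explicit bucket/key pairs into the job skips the key-listing entirely. A
rough sketch against the Python client (the bucket and key names here are
invented):

<code>
import riak

client = riak.RiakClient()  # connection details assumed

# Explicit inputs: only the named objects are read; no list-keys phase.
query = client.add("archive", "2011-08-23T14:00")
query.add("archive", "2011-08-23T15:00")
query.reduce(["riak_kv_mapreduce", "reduce_identity"])
print(query.run())
</code>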

2011/8/26 Fisher, Ryan <rfisher at cyberpointllc.com>

> Hi Sean,
>
> Thanks for the reply!
>
> I have seen the 'list-keys' performance hit mentioned on the website and in
> a number of forum postings.  It certainly makes sense, and I will need to
> think about a way to structure our data to minimize or completely avoid the
> full-bucket map reduce.  I have seen people store the key names in another
> bucket under a second key/value pair, and others have suggested using
> external solutions to track keys.  I have also considered moving to
> riak-search, which might be closer to what I need for the complex range
> queries I want to build in the future anyway.
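>
> A minimal sketch of that key-index pattern, assuming the old Python client
> API and made-up bucket/key names (it also ignores concurrent writers):
>
> <code>
> import riak
>
> client = riak.RiakClient()
> index_bucket = client.bucket("archive_index")
>
> def remember_key(key_name):
>     # Append the new key name to one well-known index object so a later
>     # MapReduce can use explicit inputs instead of list-keys.
>     obj = index_bucket.get("all_keys")
>     keys = obj.get_data() or []
>     keys.append(key_name)
>     obj.set_data(keys)
>     obj.store()
> </code>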
>
> A couple follow up questions if you don't mind:
>
> Does the 'list-keys' performance hit only apply to a single bucket, or is
> it affected by the entire riak cluster keyspace?  As a test I added a new
> bucket named 'bucket1' with a single key in it named 'key1'.  I also changed
> my map reduce to Erlang (to avoid any JavaScript marshaling overhead), and I
> believe this particular query should grab the key from memory and avoid
> hitting the disk, right?  I'm using the Python client, by the way…
>
> <code>
>
> import riak
>
> # connection details assumed; the original used an existing self.client
> client = riak.RiakClient()
>
> # identity reduce over 'bucket1': even with one key, this starts with a
> # cluster-wide list-keys
> query = client.add("bucket1")
> query.reduce(["riak_kv_mapreduce", "reduce_identity"])
> resultList = query.run(timeout=10 * 60 * 1000)  # timeout in milliseconds
>
> </code>
>
> The query takes 62 seconds to complete, and the result is the single bucket
> and key pair, as expected; it is just a lot slower than I would expect.
> Alternatively, if I retrieve the key directly (even including the value,
> which should incur the disk I/O), it returns in less than 1 second.
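>
> For comparison, the direct fetch is just (same assumed client object):
>
> <code>
> obj = client.bucket("bucket1").get("key1")  # single-key read; no list-keys
> print(obj.get_data())
> </code>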
>
> As an additional data point, the JavaScript map/reduce I mentioned below,
> over the 'archive' bucket with more than 50,000 keys, is taking 86 seconds
> this morning.  This is better than it was a day ago, but it still seems
> like something is not running properly.
>
> I also noticed some errors / crashes in the erlang and sasl logs (which I
> attached to this email).
>
> I do have around 200k keys across all my buckets, so could this be a memory
> thing (I am using bitcask)?  I have 4 nodes w/ 8 GB each, which, according
> to all the capacity calculators, should be more than adequate for the
> number and length of my buckets + keys + overhead…
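>
> Back-of-envelope, the bitcask keydir should indeed be tiny at this scale
> (every number below is an assumption, not a measurement):
>
> <code>
> keys = 200000         # total keys across all buckets
> avg_key_len = 36      # assumed average bucket+key size in bytes
> overhead = 40         # rough bitcask per-key keydir overhead (assumed)
> n_val = 3             # default replication factor
> total = keys * (avg_key_len + overhead) * n_val
> print("~%.1f MB cluster-wide" % (total / 1048576.0))  # ~43.5 MB
> </code>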
>
> Thanks again,
> ryan
>
>
> From: Sean Cribbs <sean at basho.com>
> Date: Wed, 24 Aug 2011 08:29:15 -0400
> To: Ryan Fisher <rfisher at cyberpointllc.com>
> Subject: Re: Riak Map Reduce Performance
>
> Ryan,
>
> It is most likely that you are running into the list-keys problem. That is,
> as the number of keys stored in your cluster grows, the time it takes to
> list the keys in a single bucket gets longer and longer. If possible you
> will want to avoid doing full-bucket MapReduce queries (they start with
> list-keys), especially when your writes are so frequently creating new keys.
>
> On Tue, Aug 23, 2011 at 3:08 PM, Fisher, Ryan <rfisher at cyberpointllc.com> wrote:
>
>> Hi all,
>>
>> We have been using riak for a few months now (started with 0.14.0 and
>> recently upgraded to 0.14.2).  Development of our app has been going well,
>> and I am now integrating my code w/ a larger system.  Testing of the
>> overall read / write performance of our cluster looks good as well.
>>
>> I am now starting to dive further into map reduce queries, and unlike the
>> regular reads / writes, which are very fast, map reduce performance is
>> getting worse as our data set grows.
>>
>> The query I am using to test the map / reduce speed and get a key count is
>> this:
>>
>> // emit a 1 for every object, then sum: a key count for the bucket
>> map = function (v) { return [1]; }
>> reduce = Riak.reduceSum
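>>
>> Run through the Python client, that is roughly (connection details
>> assumed):
>>
>> <code>
>> import riak
>>
>> client = riak.RiakClient()
>> query = client.add("archive")   # full bucket: a list-keys happens first
>> query.map("function (v) { return [1]; }")
>> query.reduce("Riak.reduceSum")  # named built-in JS reduce
>> print(query.run(timeout=10 * 60 * 1000))
>> </code>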
>>
>>
>> It takes 138 seconds using that query on a bucket w/ 50,000 keys.
>> It takes around 20 seconds using that query on a bucket w/ 108 keys.
>>
>> Do these query times for map reduce seem appropriate?
>>
>> I'll try and give an overall picture of how we currently use riak and
>> maybe someone can say if the performance of our map / reduce operations is
>> on par or if there are things I could tweak to try and get the query times
>> to come down a bit.
>>
>> The system we have sends data to riak at a fairly fast pace, and we need
>> to keep all incoming data for 30 minutes so we can examine the data and
>> retrieve any individual key.  After 30 minutes we can aggregate messages
>> into groups to reduce the overall number of keys and the data volume.
>>
>> We currently have an "incoming" bucket where keys are written at a rate of
>> around 20 / second.  An archiving thread periodically checks for keys
>> older than 30 minutes; if it finds any, it removes them from the
>> 'incoming' bucket and aggregates them into an 'archive' bucket for the
>> given hour.
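>>
>> A rough sketch of that archiving pass (the key layout, bucket names, and
>> external key tracking are all assumptions, and it ignores concurrent
>> archivers):
>>
>> <code>
>> import time
>> import riak
>>
>> client = riak.RiakClient()
>> incoming = client.bucket("incoming")
>> archive = client.bucket("archive")
>>
>> def archive_old_keys(keys, now=None):
>>     # 'keys' is tracked outside riak; listing the bucket is exactly the
>>     # expensive operation we want to avoid
>>     now = now or time.time()
>>     for key in keys:
>>         ts = float(key.split(":", 1)[0])   # assumed layout: "<ts>:<id>"
>>         if now - ts < 30 * 60:
>>             continue
>>         obj = incoming.get(key)
>>         hour = time.strftime("%Y-%m-%dT%H", time.gmtime(ts))
>>         slot = archive.get(hour)
>>         batch = slot.get_data() or []      # hourly aggregate object
>>         batch.append(obj.get_data())
>>         slot.set_data(batch)
>>         slot.store()
>>         obj.delete()
>> </code>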
>>
>> As you can imagine, this causes the bitcask files to fragment and grow
>> fairly large.  However, it seems like the best way to maintain some
>> granularity in the data without being forced to keep every single data
>> point that flows into the system.  It also gives us a predictable growth
>> rate for the 'archive' bucket even if the 'incoming' data rate increases
>> beyond 20 / second.
>>
>> One thing I was planning to try is lowering the bitcask merge threshold
>> and trigger settings to keep the files a little smaller, which might help
>> the map reduce performance somewhat.  Or should that not matter, since I'm
>> only looking through keys and those should be in memory anyway?
>>
>> We currently have a 4-node cluster running Ubuntu 11.04 x64.  Each riak
>> node has 8 GB of memory, and 'free -m' on the nodes reports around 4000 MB
>> used and 4000 MB free on average.
>>
>> A 5-minute Basho Bench run using riakc_pb.config with get=1, update=2, and
>> put=3 looks good... Here is the graph:
>> http://tinypic.com/r/30tm977/7
>>
>> So, does anyone have a similarly sized system where they use map reduce?
>> Or can anyone recommend performance tweaks that would help accelerate
>> these queries?
>>
>> Thank you,
>> -ryan
>>
>
>
> --
> Sean Cribbs <sean at basho.com>
> Developer Advocate
> Basho Technologies, Inc.
> http://www.basho.com/
>
>


-- 
Sean Cribbs <sean at basho.com>
Developer Advocate
Basho Technologies, Inc.
http://www.basho.com/