examples of map reduce in post commit hook

Alexander Sicular siculars at gmail.com
Thu Feb 19 21:17:17 EST 2015

Map/reduce aside, in the general case, I do time series in Riak with deterministic materialized keys at specific time granularities.


So my device or app stack drops data into a one-second-resolution key (if second resolution is needed) in the Riak memory backend, under a materialized key (the combination of ID and timestamp). I can then retrieve the data deterministically (so you skip search entirely) at one-minute resolution by issuing a 60-key multi-get, one key per second in that minute. A trailing process sweeps memory, does larger-granularity rollups/analytics/math/whatever, and writes those results back to memory or disk. It all depends on frequency.
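To make the idea concrete, here's a minimal sketch of the key scheme in Python. The device ID ("sensor-42"), the key format, and the helper names are all my own illustration, not anything Riak-specific; in practice you'd pass each generated key to your Riak client's fetch call (a multi-get is just fetching all 60 keys):

```python
from datetime import datetime, timezone

def second_key(device_id, ts):
    # Materialized key: device ID plus the timestamp truncated to the second.
    # Anyone who knows the ID and the time window can reconstruct the key.
    return "%s_%s" % (device_id, ts.strftime("%Y%m%dT%H%M%S"))

def minute_keys(device_id, ts):
    # The 60 second-resolution keys covering the minute containing ts.
    # Fetching these is the deterministic "multi-get" -- no search needed.
    base = ts.replace(second=0, microsecond=0)
    return [second_key(device_id, base.replace(second=s)) for s in range(60)]

# Example: all second-level keys for the minute 21:17 on 2015-02-19.
keys = minute_keys("sensor-42", datetime(2015, 2, 19, 21, 17, 17, tzinfo=timezone.utc))
```

The trailing rollup process would walk the same key space at a coarser granularity (e.g. one key per minute or hour), aggregate the values it fetched, and write the aggregate under its own deterministic key.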



Sent from my iRotaryPhone

> On Feb 19, 2015, at 19:08, Christopher Meiklejohn <cmeiklejohn at basho.com> wrote:
>> On Feb 19, 2015, at 8:01 PM, Fred Grim <fgrim at vmware.com> wrote:
>> Given a specific data blob I want to move a time series into a search
>> bucket.  So first I have to build out the time series and then move it
>> over.  Maybe I should use the rabbitmq post commit hook to send the data
>> somewhere else for the query to be run or something like that?
> Given your scenario, it seems that a portion of these writes would have 
> MapReduce jobs that resulted in nothing happening — I assume you only
> bucket the series every so many writes or time period, correct?
> I’d highly recommend doing this externally, or identifying a method for
> pre-bucket’ing the data given the rate of ingestion.
> - Chris
> Christopher Meiklejohn
> Senior Software Engineer
> Basho Technologies, Inc.
> cmeiklejohn at basho.com
> _______________________________________________
> riak-users mailing list
> riak-users at lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
