Advice for storing records in Riak

Toby Corkindale toby.corkindale at strategicdata.com.au
Fri Apr 12 03:09:52 EDT 2013


Hi,
I wondered if I could get a little advice on good practices for storing 
my records in Riak, such that they perform reasonably well in map-reduce 
queries?

I have a little over 200 million records, currently stored in a regular 
SQL database. I'm expecting this dataset to continue to grow, of course.
Each record is reasonably small - some get up to a couple of hundred 
bytes, but most are smaller, and consist of around a dozen numeric 
fields and some small alphanumeric identifier fields.

My initial trial of importing these into Riak was to take each database 
row and convert it into a small JSON object of key=>value pairs.
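
For reference, the import loop is roughly like the sketch below; the 
bucket name, key format and the use of Python's requests library are 
just illustrative choices on my part, not anything Riak prescribes:

import json
import requests

RIAK_URL = "http://localhost:8098"   # one of the Riak nodes
BUCKET = "records"                   # hypothetical bucket name

def store_record(row):
    # row is a dict of the dozen-or-so fields from one SQL row
    key = "%s:%s:%s" % (row["id"], row["recordtype"], row["recordid"])
    url = "%s/riak/%s/%s" % (RIAK_URL, BUCKET, key)
    resp = requests.put(url,
                        data=json.dumps(row),
                        headers={"Content-Type": "application/json"})
    resp.raise_for_status()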

I'm finding two issues with this, though.
1) It takes a really long time to import everything into Riak, at least 
compared to ingesting into PostgreSQL. (I'm using Riak's HTTP API)
2) An initial trial of some map-reduce queries was significantly slower 
than I was hoping for; I suspect this is down to my data structure, though.
My initial map phase was iterating over a high percentage of the keys, 
decoding the JSON, and then returning just one or two of the fields from 
each JSON structure, which may be an inefficient way to go about things.
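
Roughly speaking, the job I'm submitting looks like the sketch below 
(the field name and bucket are placeholders); the whole-bucket input is 
what forces Riak to walk so many keys:

import json
import requests

# Anonymous JavaScript map function: decode the stored JSON and return
# just the one field the query cares about ("field1" is a placeholder).
MAP_JS = """
function (value, keyData, arg) {
  var rec = Riak.mapValuesJson(value)[0];
  return [rec.field1];
}
"""

job = {
    "inputs": "records",   # whole-bucket input: Riak walks every key
    "query": [
        {"map": {"language": "javascript", "source": MAP_JS}}
    ]
}

resp = requests.post("http://localhost:8098/mapred",
                     data=json.dumps(job),
                     headers={"Content-Type": "application/json"})
print(resp.json())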


So I was wondering if there's a better way to approach the problem. One 
thought was to break the records up further and store individual fields 
against their own keys, rather than the whole record as a single JSON 
object.

E.g. this was my initial method:
Key: {id}:{recordtype}:{recordid}
Value: { field1: "foo", field2: "bar", field3: "baz" }

The broken-up version would create one key for each field:
{id}:{recordtype}:{recordid}:field1 ==> "foo"
{id}:{recordtype}:{recordid}:field2 ==> "bar"
{id}:{recordtype}:{recordid}:field3 ==> "baz"

That would avoid the need for one of the map phases; on the other hand, 
I'd now be creating an order of magnitude more keys in the database overall.
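
One attraction of that layout, if I understand the built-in functions 
correctly, is that the map phase could use Riak.mapValues and skip the 
JSON decoding entirely; a sketch (the explicit bucket/key input pairs 
are just made-up examples):

import json
import requests

job = {
    # Explicit [bucket, key] inputs, one per field key we want back.
    "inputs": [["records", "123:typeA:456:field1"],
               ["records", "123:typeA:457:field1"]],
    "query": [
        # Built-in map function: return each value as-is, no JSON decode.
        {"map": {"language": "javascript", "name": "Riak.mapValues"}}
    ]
}

resp = requests.post("http://localhost:8098/mapred",
                     data=json.dumps(job),
                     headers={"Content-Type": "application/json"})
print(resp.json())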


Alternatively, I wondered about going the other way and grouping several 
records under one key. So instead of having keys 100, 101, 102 .. 109, I 
would have a single key 10x containing a JSON structure with an array of 
records. (I don't know whether I'd store 10, 50 or 100 records per key.)
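
Just to illustrate the layout I have in mind (field values made up):

Key: {id}:{recordtype}:10x
Value: [ { "recordid": 100, "field1": "foo", "field2": "bar" },
         { "recordid": 101, "field1": "foo", "field2": "baz" },
         ...
         { "recordid": 109, "field1": "qux", "field2": "bar" } ]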

This would speed up ingesting the data into Riak, and reduce the total 
number of queries made by the map phase; but it would increase the work 
done in the map phase and add inefficiency, since sometimes only a few 
rows of a given set would actually be required by a query.



The third consideration is that maybe I just need to scale the cluster 
out to more machines. Currently it's running on a small cluster of four 
nodes while trialling Riak. (And I'm comparing performance with a 
single, but significantly more powerful, PostgreSQL node.)


There's nothing to stop me trying out all of these methods, but I thought 
I'd poll the community for advice, since no doubt others have implemented 
similar things before and have a rough idea of what may or may not work well.


Thanks,
Toby



