Migration from memcachedb to riak
dkrotkine at gmail.com
Wed Jul 10 05:13:59 EDT 2013
( first post here, hi everybody... )
If you don't need MapReduce, secondary indexes (2i), etc., then Bitcask will be
faster. You just need to make sure all your keys fit in memory, which should
not be a problem. How many keys do you have, and what's their average length?
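To see whether the keys fit, you can do a back-of-envelope estimate of Bitcask's in-memory keydir. This is a rough Python sketch, not an official formula: the per-key overhead constant and the replica/node counts below are assumptions (Bitcask's keydir carries a few tens of bytes of static overhead per key on 64-bit systems; check the Bitcask capacity-planning docs for your Riak version).

```python
# Assumed static keydir overhead per key, in bytes (verify against
# the Bitcask capacity docs for your Riak release).
PER_KEY_OVERHEAD = 40

def keydir_bytes_per_node(n_keys, avg_key_len, n_replicas=3, n_nodes=6):
    """Rough RAM needed per node to hold every key in the keydir.

    Each key is stored n_replicas times across the cluster (Riak's
    n_val), and the load is spread evenly over n_nodes.
    """
    total = n_keys * n_replicas * (PER_KEY_OVERHEAD + avg_key_len)
    return total / n_nodes

# Hypothetical example: 100 million keys, 36-byte keys, n_val=3, 6 nodes.
per_node = keydir_bytes_per_node(100_000_000, 36)
print(f"{per_node / 2**30:.1f} GiB of keydir RAM per node")
```

If the result is a small fraction of each node's RAM, Bitcask is comfortable; otherwise LevelDB (which keeps keys on disk) is the safer choice.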
As for the values, you can save a lot of space by choosing an appropriate
serialization. We use Sereal to serialize our data, and the output is small
enough that we don't need to compress it further (Sereal can automatically
apply Snappy compression itself). There is a PHP client for Sereal as well.
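The point about serialization formats is easy to measure. Sereal is a Perl/PHP ecosystem format, so as a stand-in here is a Python sketch comparing two serializers on the same record, with and without zlib on top; the record itself is made up for illustration:

```python
import json
import pickle
import zlib

# Hypothetical record standing in for a serialized PHP object.
record = {"user_id": 12345, "name": "edgar",
          "tags": ["a", "b", "c"], "visits": list(range(50))}

as_json = json.dumps(record).encode()
as_pickle = pickle.dumps(record, protocol=pickle.HIGHEST_PROTOCOL)

# Compare raw vs. zlib-compressed sizes for each serialization.
for label, blob in [("json", as_json), ("pickle", as_pickle)]:
    print(f"{label}: {len(blob)} bytes raw, "
          f"{len(zlib.compress(blob))} bytes zlib-compressed")
```

Run the same kind of comparison with your real PHP objects (e.g. `serialize()` vs. Sereal vs. JSON) before committing 2 TB of data to one format.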
If you use LevelDB, it can compress with Snappy, but I've been a bit
disappointed by Snappy: it didn't work well with our data. If you serialize
your PHP objects as verbose strings (I don't know what the usual way to
serialize PHP objects is), then you should probably benchmark different
compression algorithms on the application side.
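A minimal benchmark of that kind could look like the Python sketch below, which compares the standard-library compressors on a repetitive payload imitating verbose PHP `serialize()` output (the payload is invented; Snappy itself is not in the stdlib, but the third-party python-snappy package could be added to the same loop):

```python
import bz2
import lzma
import time
import zlib

# Fake, repetitive payload imitating verbose PHP serialize() output.
payload = (b'O:8:"stdClass":2:{s:4:"name";s:5:"edgar";'
           b's:4:"data";a:3:{i:0;i:1;i:1;i:2;i:2;i:3;}}') * 200

for name, compress in [("zlib", zlib.compress),
                       ("bz2", bz2.compress),
                       ("lzma", lzma.compress)]:
    t0 = time.perf_counter()
    out = compress(payload)
    dt = time.perf_counter() - t0
    print(f"{name}: {len(payload)} -> {len(out)} bytes "
          f"in {dt * 1000:.2f} ms")
```

Ratio alone isn't the whole story: at memcached-style request rates, compression CPU cost per request matters as much as the size reduction, so weigh both columns.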
On 10 July 2013 10:49, Edgar Veiga <edgarmveiga at gmail.com> wrote:
> Hello all!
> I have a couple of questions that I would like to address all of you guys,
> in order to start this migration the best as possible.
> - I'm responsible for the migration of a pure key/value store that is
> currently stored on memcacheDB.
> - We're serializing PHP objects and storing them.
> - The total size occupied is ~2 TB.
> - The idea is to migrate this data to a riak cluster with the elevelDB
> backend (starting with 6 nodes, 256 partitions). This thing is scaling very
> well.
> - We only need to access the information by key. *We won't need map/reduce,
> search, or secondary indexes.* It's a pure key/value store!
> My questions are:
> - Do you have any riak fine-tuning tips for this use case (given that we
> will only use the key/value capabilities of riak)?
> - We expect those 2 TB to shrink thanks to levelDB compression. Do you
> think we should also compress our objects on the client side?
> Best regards,
> Edgar Veiga
> riak-users mailing list
> riak-users at lists.basho.com