Migration from memcachedb to riak
guido.medina at temetra.com
Wed Jul 10 05:29:24 EDT 2013
If you are using Java, you could store Riak values as binaries using the
Jackson Smile format; supposedly it will compress faster and better than
default Java serialization. We use it for very large values (say, a key
holding a large collection of entries). The drawback is that you won't be
able to easily read those values with other clients; say, if you write in
Java and read from another language's client.
Application-side compression usually comes at the cost of performance and
CPU usage; you surely want to compress without taxing the CPU too much.
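To make that trade-off concrete, here is a minimal sketch (Python's zlib standing in for whichever compressor you pick; the payload is invented) that times different compression levels against the size they achieve:

```python
import time
import zlib

# Hypothetical payload standing in for a serialized PHP object:
# repetitive data, which compresses well.
payload = b'{"user_id": 12345, "events": ' + b'[1,2,3,4,5]' * 500 + b'}'

for level in (1, 6, 9):  # fast, default, best compression
    start = time.perf_counter()
    compressed = zlib.compress(payload, level)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(payload)
    print(f"level={level} size_ratio={ratio:.3f} time={elapsed * 1e6:.0f}us")
```

Running something like this against your real objects tells you whether the extra CPU spent at higher levels actually buys you meaningful space.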
On 10/07/13 10:13, damien krotkine wrote:
> ( first post here, hi everybody... )
> If you don't need MR, 2i, etc., then Bitcask will be faster. You just
> need to make sure all your keys fit in memory, which should not be a
> problem. How many keys do you have, and what's their average length?
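The "keys fit in memory" constraint above can be sanity-checked with a back-of-envelope calculation. A sketch follows; the key count, average key length, and the ~40-byte per-key overhead are all assumptions (confirm the overhead against Basho's Bitcask capacity-planning documentation):

```python
# Back-of-envelope check that all keys fit in RAM with Bitcask.
num_keys = 100_000_000   # hypothetical key count
avg_key_len = 36         # hypothetical average key length, bytes
overhead_per_key = 40    # assumed keydir overhead per key, bytes
n_val = 3                # replicas: each key is kept n_val times
num_nodes = 6            # cluster size from the original question

total_bytes = num_keys * (avg_key_len + overhead_per_key) * n_val
per_node_gib = total_bytes / num_nodes / 1024**3
print(f"~{per_node_gib:.1f} GiB of keydir RAM per node")
```

Even at 100 million keys this lands in the low single-digit GiB per node under these assumptions, which is why key volume is rarely the limiting factor.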
> About the values, you can save a lot of space by choosing an
> appropriate serialization. We use Sereal to serialize our data, and
> it's small enough that we don't need to compress it further (Sereal
> can automatically use Snappy internally). There is a PHP client for
> Sereal as well (see the links below).
> If you use LevelDB, it can compress using Snappy, but I've been a bit
> disappointed by Snappy, because it didn't work well with our data. If
> you serialize your PHP objects as verbose strings (I don't know what
> the usual way to serialize PHP objects is), then you should probably
> benchmark different compression algorithms on the application side.
> : https://github.com/Sereal/Sereal/wiki/Sereal-Comparison-Graphs
> : https://github.com/tobyink/php-sereal/tree/master/PHP
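Echoing the advice above to benchmark on the application side, here is a minimal sketch of such a comparison (Python standard-library compressors as stand-ins for Snappy and friends; the sample object and the use of pickle are made up for illustration):

```python
import bz2
import lzma
import pickle
import zlib

# Invented object standing in for a serialized PHP object.
obj = {"id": 42, "tags": ["riak", "kv"] * 200, "note": "x" * 1000}
blob = pickle.dumps(obj)  # any serializer works; pickle is stdlib

# Compare how much each algorithm shrinks the same serialized blob.
for name, compress in (("zlib", zlib.compress),
                       ("bz2", bz2.compress),
                       ("lzma", lzma.compress)):
    print(f"{name}: {len(blob)} -> {len(compress(blob))} bytes")
```

The relative ranking depends heavily on the shape of your data, which is exactly why measuring against your own objects beats any general recommendation.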
> On 10 July 2013 10:49, Edgar Veiga <edgarmveiga at gmail.com> wrote:
> Hello all!
> I have a couple of questions that I would like to put to all of
> you, in order to start this migration as well as possible.
> - I'm responsible for the migration of a pure key/value store that
> is currently held in memcachedb.
> - We're serializing PHP objects and storing them.
> - The total size occupied is ~2TB.
> - The idea is to migrate this data to a Riak cluster with the
> eLevelDB backend (starting with 6 nodes, 256 partitions. This
> thing is scaling very fast).
> - We only need to access the information by key. *We won't need
> map/reduce, search, or secondary indexes*. It's a pure
> key/value store!
> My questions are:
> - Do you have any Riak fine-tuning tips for this use case
> (given that we will only use the key/value capabilities
> of Riak)?
> - It's expected that those 2TB will be reduced by the LevelDB
> compression. Do you think we should also compress our objects on
> the application side?
> Best regards,
> Edgar Veiga
> riak-users mailing list
> riak-users at lists.basho.com