Is storing billions of small files a good Riak-CS/KV usecase?
daniel.abrahamsson at klarna.com
Wed Oct 7 09:56:26 EDT 2015
If you don't use any leveldb-specific features, you could switch to
bitcask and use its expiry_secs option to handle deletes. That way you
don't have to worry about deleting data at all. Note that older bitcask
versions (prior to Riak 2.0) had issues with deletes that could make
deleted data re-appear.
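For reference, a sketch of what that configuration might look like, assuming the Riak 2.x riak.conf syntax (the exact option names are worth double-checking against your version's docs):

```
## riak.conf (Riak 2.x) -- use the bitcask backend and let it
## expire objects automatically after the retention window.
storage_backend = bitcask
bitcask.expiry = 7d
```

On pre-2.0 installs the equivalent would live in app.config under the bitcask section, e.g. {expiry_secs, 604800}.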
When it comes to leveldb and deletes, you should be aware that mass
deletion may trigger leveldb compaction, which can put strain on your
cluster. I don't have enough experience with leveldb to give you any
concrete advice there, though.
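Since Riak has no bulk "delete bucket" operation, deleting a day's data means listing the keys in that daily bucket and deleting them one by one. The date arithmetic for picking which daily buckets have aged out is simple; a minimal sketch (the function and parameter names here are my own, not a Riak API):

```python
from datetime import date, timedelta

def expired_buckets(today, retention_days, history_days):
    """Return daily bucket names (YYYY-MM-DD) that have aged out
    of the retention window, scanning back history_days past the
    cutoff. Each returned name is a bucket whose keys you would
    then list and delete individually via your Riak client."""
    cutoff = today - timedelta(days=retention_days)
    return [
        (cutoff - timedelta(days=n)).isoformat()
        for n in range(1, history_days + 1)
    ]

# Example: with a 30-day retention on 2015-10-07, buckets named
# for days before 2015-09-07 are candidates for deletion.
print(expired_buckets(date(2015, 10, 7), 30, 3))
# → ['2015-09-06', '2015-09-05', '2015-09-04']
```

Note that listing keys itself is an expensive full-scan operation in Riak, which is another argument for letting bitcask expiry handle this instead.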
On Wed, Oct 7, 2015 at 3:43 PM, David Heidt <david.heidt at msales.com> wrote:
> Hi List,
> would you say that storing billions of very small (json) files is a good
> usecase for riak kv or cs?
> here's what I would do:
> * create daily buckets (i.e. 2015-10-07)
> * up to 130 Million inserts per day
> * about 150.000 read-only accesses/day
> * no updates on existing keys/files
> * delete buckets (including keys/files) older than x days
> I already have a working riak-kv/leveldb cluster (inserts and lookups are
> going smoothly), but when it comes to mass deletion of keys I found no way
> to do this.
> riak-users mailing list
> riak-users at lists.basho.com