maximum limit of keys per bucket

Karthik K oss.akk at gmail.com
Fri Jan 6 20:37:05 EST 2012


I am using Riak with LevelDB as the storage engine.

app.config:

    {storage_backend, riak_kv_eleveldb_backend},


    {eleveldb, [
                {data_root, "/var/lib/riak/leveldb"},
                {write_buffer_size, 4194304},  %% 4 MB in bytes
                {max_open_files, 50},          %% maximum number of files open at once, per partition
                {block_size, 65536},           %% 64 KB blocks
                {cache_size, 33554432},        %% 32 MB cache size, per partition
                {verify_checksums, true}       %% make sure data is what we expected it to be
               ]},
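As a sanity check on the sizes in the config above, the raw byte values decode as follows (plain arithmetic, nothing Riak-specific; class and variable names here are just for illustration):

    public class ConfigCheck {
        public static void main(String[] args) {
            final long KB = 1024L;
            final long MB = 1024L * 1024L;
            // values copied from the eleveldb section above
            long writeBufferSize = 4194304L;
            long blockSize = 65536L;
            long cacheSize = 33554432L;
            System.out.println(writeBufferSize == 4 * MB);  // true: 4 MB
            System.out.println(blockSize == 64 * KB);       // true: 64 KB
            System.out.println(cacheSize == 32 * MB);       // true: 32 MB
        }
    }
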




I want to insert a million keys into the store (into a given bucket).

pseudo-code:
            riakClient = RiakFactory.pbcClient();
            myBucket = riakClient.createBucket("myBucket").nVal(1).execute();
            for (int i = 1; i <= 1000000; ++i) {
                final String key = String.valueOf(i);
                myBucket.store(key, new String(payload)).returnBody(false);
            }


After this operation, when I do:

            int count = 0;
            for (String key : myBucket.keys()) {
                ++count;
            }
            return count;

This returns a total of roughly 14K keys, while I was expecting close to 1 million.

I am using riak-java-client (pbc).

Which setting or missing client code could explain the discrepancy? Thanks.


More information about the riak-users mailing list