Does RiakKV require a lot of memory?

wakuda_tsutomu at wakuda_tsutomu at
Sun Mar 26 20:38:24 EDT 2017


The following problem occurred. Please advise me on a solution.

I stored files ranging from 10 MB to 1000 MB.
RiakKV then hung while a batch program was accessing the 100 MB to 600 MB files.
RiakKV hung due to insufficient memory.

1.Is this problem an effect of accessing the 20-1000 MB files?
  Or is it a different problem?

2.When accessing RiakKV data while it is being updated,
  do we need to do anything special?
  Is a large amount of memory required?

files: 3,882,892
file size (total): 332.46GB (346,243,500,913 bytes)
file objects:
  1-  10MB = 3,800,000(all)
 20-  50MB = 40 to 100
100-1000MB = 50
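A rough back-of-envelope check on question 2 (a sketch only; the 12 GB RAM figure is an assumption inferred from the 5.9 GB tmpfs mount, which Linux sizes to half of RAM by default):

```python
# Memory budget sketch. Assumptions: ~12 GB total RAM (inferred from
# the 5.9 GB tmpfs) and the riak.conf values quoted in this mail.
ram_gb = 12.0
leveldb_pct = 50            # leveldb.maximum_memory.percent
leveldb_cache_gb = ram_gb * leveldb_pct / 100

# RAM left over for the OS, the Erlang VM, and objects in flight.
remaining_gb = ram_gb - leveldb_cache_gb

# A 1000 MB object is materialized in memory more than once on a read
# (backend value -> Erlang binary -> HTTP response buffer), so even a
# few concurrent large reads can eat most of the remaining headroom.
largest_object_gb = 1.0
concurrent_reads_before_trouble = remaining_gb / (largest_object_gb * 3)

print(f"leveldb cache budget : {leveldb_cache_gb:.1f} GB")
print(f"remaining headroom   : {remaining_gb:.1f} GB")
print(f"~{concurrent_reads_before_trouble:.0f} concurrent 1 GB reads could exhaust it")
```

On these assumptions the leveldb cache alone claims ~6 GB, leaving only ~6 GB for everything else, so concurrent reads of the largest objects plausibly explain the hang.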

(Riak Server)
OS:Red Hat Enterprise Linux Server release 6.7 (Santiago)
 (Linux patdevsrv02 2.6.32-573.el6.x86_64)
CPU:Intel(R) Xeon(R) CPU E5640  @2.67GHz * 2
 Filesystem      Size  Used Avail Use% Mounted on
 /dev/sda3       261G   67G  182G  27% /
 tmpfs           5.9G  300K  5.9G   1% /dev/shm
 /dev/sda1       477M   71M  381M  16% /boot
 /dev/sdb1       275G  243G   19G  93% /USR1
 /dev/sdc1       1.7T  1.5T   76G  96% /USR2
 /dev/sdd1       1.1T  736G  309G  71% /USR3 <<< Store
 /dev/sde1       1.1T  1.1T   18G  99% /USR4
 /dev/sdf1       1.4T  1.1T  365G  74% /media/USB-HDD1

RiakKV 2.2.0 (riak-2.2.0-1.el6.x86_64.rpm)

(Riak Node)
1 Node.

!!!We plan to add two nodes to the cluster at a later date.!!!

Java1.8 (jre-8u121-linux-x64.rpm)

(riak setting)
storage_backend = leveldb
leveldb.maximum_memory.percent = 50
object.size.maximum = 2GB
listener.http.internal =
platform_data_dir = /USR3/riak
nodename = riak at ...
riak_control = on

!!!Settings other than these remain the default.!!!
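For comparison, a more conservative riak.conf fragment (a sketch only; the 35% figure is an assumption, while the 5MB/50MB values are the defaults Riak 2.2 ships, which the 2GB setting above overrides):

```
## Illustrative values - not a tested recommendation.
leveldb.maximum_memory.percent = 35    # leave headroom for large reads (default is 70)
object.size.warning_threshold = 5MB    # Riak default: logs a warning above this size
object.size.maximum = 50MB             # Riak default cap; 2GB is far beyond the
                                       # object sizes Riak is designed to handle
```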

(linux setting)
* soft nofile 65536
* hard nofile 65536

Thank you.

Tsutomu Wakuda

More information about the riak-users mailing list