Issues with garbage collection on RiakCS

Luke Bakken lbakken at
Mon Aug 18 17:46:29 EDT 2014

Hi David,

I can see from your BOSH specs that you're using Riak 1.4.7 and Riak
CS 1.4.4. I'd like to first recommend upgrading to Riak 1.4.10 and
Riak CS 1.5.0 as both versions have had important bug fixes. CS 1.5.0
specifically has had fixes related to GC.

I'll dig further into your repo but I thought I'd start by pointing
out the GC docs here:
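In the meantime, two settings in the `riak_cs` section of app.config control how quickly deleted blocks become eligible for collection. A sketch of what that section might look like (the values here are illustrative, not recommendations; check the docs for the defaults in your version):

```erlang
%% app.config -- riak_cs section (illustrative values only)
{riak_cs, [
    %% Seconds a deleted object is retained before it becomes
    %% eligible for garbage collection (the default is 86400,
    %% i.e. 24 hours -- which would explain not seeing space
    %% reclaimed over a weekend with default settings).
    {leeway_seconds, 3600},

    %% Seconds between runs of the GC daemon (default 900,
    %% i.e. 15 minutes).
    {gc_interval, 300}
]}.
```

You can also trigger a collection run by hand with `riak-cs-gc batch` on a CS node. One caveat: even after CS collects the blocks, `df` may not drop immediately, because the underlying Bitcask backend only reclaims disk space when it merges its data files.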

Luke Bakken
lbakken at

On Mon, Aug 18, 2014 at 10:34 AM, David Sabeti <dsabeti at> wrote:
> Hi all,
> Our team at Cloud Foundry is building a RiakCS service for CF users and one
> of our deployments is seeing an issue with deleting objects from the
> blobstore.
> We noticed that our disk usage was approaching 100%, so we deleted some of
> the stale objects in the blobstore using s3cmd. If we run `s3cmd du`, it
> appears that we successfully freed up space, but when we run `df` inside the
> RiakCS host, disk usage is still close to 100%.
> We understand now that Riak will remove deleted keys asynchronously, but we
> haven't succeeded in configuring GC so that it is more responsive to
> deletions, despite having tried tweaking several parameters. On Friday, we
> uploaded several files and deleted them, hoping to see that they were gone
> from the disk on Monday. When we came back after the weekend, we saw that
> garbage collection still had not occurred. If it helps, you can look at our
> configuration for Riak and RiakCS.
> Has anyone else encountered this issue, where garbage collection appears
> never to occur? It would be great to get help configuring RiakCS so that GC
> happens more often. Is there a way to run GC manually when the disk is
> filling up?
> Thanks,
> David & Raina
> CF Services Team
> _______________________________________________
> riak-users mailing list
> riak-users at
