anti_entropy and solr taking up suspiciously large amounts of space

Rohit Sanbhadti sanbhadtirohit at vmware.com
Mon Apr 3 16:38:55 EDT 2017


Thanks for the help Matthew. We’ve changed anti_entropy to passive for now. We’re aware of the repercussions for solr (we can already see some inconsistencies in returned results) but the alternative seems much worse for our use case (heavy write throughput). We’re looking at ways of reducing our reliance on riak search on our application’s side.

--
Rohit S.

From: Matthew Von-Maszewski <matthewv at basho.com>
Date: Monday, April 3, 2017 at 1:20 PM
To: Rohit Sanbhadti <sanbhadtirohit at vmware.com>
Cc: "riak-users at lists.basho.com" <riak-users at lists.basho.com>
Subject: Re: anti_entropy and solr taking up suspiciously large amounts of space

Rohit,

My apologies for the delayed reply.  Too many conflicting demands on my time the past two weeks.

I reviewed the riak-debug package you shared.  I also discussed its contents with other Riak developers.

There does not appear to be anything unexpected.  The anti_entropy bloat is due to the bitcask backend not actively communicating TTL expirations to AAE.  This is a known issue.  Similarly, bitcask is not communicating TTL expirations to solr.  (The leveldb backend recently added expiry / TTL, and it fails in this scenario the same way bitcask does.)
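To make the mechanism concrete, here is a toy model (illustrative only, not Riak internals): the backend silently drops expired keys on its own schedule, but nothing issues a delete that the AAE hash trees would see, so their entries, and the disk they occupy under anti_entropy, linger.

```python
# Toy model of why silent TTL expiry bloats AAE: the backend drops
# expired keys itself, but the AAE "tree" is only updated on explicit
# puts and deletes, so it never learns the keys are gone.
import hashlib

backend = {}   # key -> (value, write_time); expires keys on its own
aae_tree = {}  # key -> hash; updated only on explicit put/delete

def key_hash(key, value):
    return hashlib.sha1(f"{key}:{value}".encode()).hexdigest()

def put(key, value, now):
    backend[key] = (value, now)
    aae_tree[key] = key_hash(key, value)  # AAE is told about writes

def expire(now, ttl):
    # bitcask-style expiry: keys vanish from the backend only...
    for k in [k for k, (_, t) in backend.items() if now - t > ttl]:
        del backend[k]                    # ...AAE is never notified

put("a", "1", now=0)
put("b", "2", now=0)
expire(now=100, ttl=50)

assert len(backend) == 0   # the data is gone
assert len(aae_tree) == 2  # the AAE entries (and disk) remain
```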

We have engineering designs in the works that will eventually correct this situation.  But the designs do not translate to code that you can use today.  My apologies.

--> The 100% accurate approach today is to disable bitcask's TTL and create external jobs that prune your data via Delete operations.  Yes, this is going to create a bunch of extra disk operations.  But I am being honest.
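A minimal sketch of such an external pruning job, in Python. Note that `keys_with_age` and `delete_key` are hypothetical stand-ins for however your application enumerates keys (with write timestamps) and deletes them; they are not Riak client API. The point is only that each removal is an explicit Delete, which AAE and solr both observe.

```python
# Hedged sketch of an external pruning job: instead of relying on
# bitcask TTL, issue explicit Delete operations for keys older than
# the retention window. `keys_with_age` (pairs of key and write
# timestamp) and `delete_key` are hypothetical stand-ins for your
# application's own enumeration and delete mechanisms.
import time

def keys_to_prune(keys_with_age, ttl_seconds, now=None):
    """Return the keys whose age exceeds ttl_seconds."""
    now = time.time() if now is None else now
    return [k for k, written_at in keys_with_age
            if now - written_at > ttl_seconds]

def prune(keys_with_age, ttl_seconds, delete_key, now=None):
    """Delete expired keys explicitly, so AAE and solr see the removal."""
    expired = keys_to_prune(keys_with_age, ttl_seconds, now=now)
    for key in expired:
        delete_key(key)  # a real Delete operation, not a silent expiry
    return len(expired)
```

A job like this would run on a schedule (e.g. cron) with one TTL per bucket, mirroring the 1d/2d/4d/8d windows in the multi_backend config.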

--> You could reduce only the anti_entropy disk usage by changing the "anti_entropy" setting in riak.conf from "active" to "passive".  But this does nothing for solr.
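For reference, that change is a one-line edit to riak.conf (it takes effect on node restart):

```
## riak.conf -- stop active anti-entropy exchanges for the KV hash trees.
## As noted above, this does nothing for the solr (yz) side.
anti_entropy = passive
```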

Matthew

On Mar 22, 2017, at 10:56 AM, Matthew Von-Maszewski <matthewv at basho.com> wrote:

Rohit,

Would you please run “riak-debug” on one server from the command line and send the tar.gz file it creates directly to me.  Do not copy the email list.

Notes on its usage are here:  http://docs.basho.com/riak/kv/2.2.1/using/cluster-operations/inspecting-node/#riak-debug

The resulting debug package will give me and others at Basho a better picture of the problem.  The alternative is about twenty rounds of “what about this, oh, then what about that”.

Thanks,
Matthew



On Mar 21, 2017, at 9:53 PM, Rohit Sanbhadti <sanbhadtirohit at vmware.com> wrote:

Matthew,

To clarify, this happens on all nodes in our cluster (10+ nodes), although the exact size varies by tens of GB. I performed a rolling restart of the cluster the last time this happened (last week), with no significant change in size, although the output of riak-admin aae-status and riak-admin search aae-status shows empty after the restart.

--
Rohit S.

On 3/21/17, 5:25 PM, "Matthew Von-Maszewski" <matthewv at basho.com> wrote:

   Rohit,

   If you restart the node does the elevated anti_entropy size decline after the restart?

   Matthew



On Mar 21, 2017, at 8:00 PM, Rohit Sanbhadti <sanbhadtirohit at vmware.com> wrote:

Running Riak 2.2.0 on Ubuntu 16.04.1, we’ve noticed that anti_entropy is taking up far too much space on all of our nodes. We use multi_backend with mostly bitcask backends (the relevant part of the config is pasted below). Has anyone seen this before, or does anyone have an idea what might be causing it?

storage_backend = multi
multi_backend.bitcask_1day.storage_backend = bitcask
multi_backend.bitcask_1day.bitcask.data_root = /var/lib/riak/bitcask_1day
multi_backend.bitcask_1day.bitcask.expiry = 1d
multi_backend.bitcask_1day.bitcask.expiry.grace_time = 1h
multi_backend.bitcask_2day.storage_backend = bitcask
multi_backend.bitcask_2day.bitcask.data_root = /var/lib/riak/bitcask_2day
multi_backend.bitcask_2day.bitcask.expiry = 2d
multi_backend.bitcask_2day.bitcask.expiry.grace_time = 1h
multi_backend.bitcask_4day.storage_backend = bitcask
multi_backend.bitcask_4day.bitcask.data_root = /var/lib/riak/bitcask_4day
multi_backend.bitcask_4day.bitcask.expiry = 4d
multi_backend.bitcask_4day.bitcask.expiry.grace_time = 1h
multi_backend.bitcask_8day.storage_backend = bitcask
multi_backend.bitcask_8day.bitcask.data_root = /var/lib/riak/bitcask_8day
multi_backend.bitcask_8day.bitcask.expiry = 8d
multi_backend.bitcask_8day.bitcask.expiry.grace_time = 1h
multi_backend.leveldb_mult.storage_backend = leveldb
multi_backend.leveldb_mult.leveldb.maximum_memory.percent = 30
multi_backend.leveldb_mult.leveldb.data_root = /var/lib/riak/leveldb_mult
multi_backend.bitcask_mult.storage_backend = bitcask
multi_backend.bitcask_mult.bitcask.data_root = /var/lib/riak/bitcask_mult
multi_backend.default = leveldb_mult

$ du -h -d 1 /var/lib/riak

4.0K    /var/lib/riak/riak_kv_exchange_fsm
52K     /var/lib/riak/generated.configs
224K    /var/lib/riak/cluster_meta
424K    /var/lib/riak/ring
1.1M    /var/lib/riak/kv_vnode
2.1M    /var/lib/riak/bitcask_1day
3.6M    /var/lib/riak/bitcask_4day
33M     /var/lib/riak/yz_temp
1.1G    /var/lib/riak/bitcask_2day
5.9G    /var/lib/riak/yz_anti_entropy
20G     /var/lib/riak/yz
24G     /var/lib/riak/leveldb_mult
25G     /var/lib/riak/bitcask_mult
27G     /var/lib/riak/bitcask_8day
139G    /var/lib/riak/anti_entropy
240G    /var/lib/riak


--
Rohit S.

_______________________________________________
riak-users mailing list
riak-users at lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




