Riak YZ/Solr creating invalid segments

Josh Yudaken josh at smyte.com
Mon Jan 11 02:06:52 EST 2016


We've been hitting random issues with corrupted Solr indices on single
machines within our Riak cluster. This seems to happen fairly randomly
(possibly handoff-related) but has also been triggered by clean
shutdown/startup of nodes.

The error in question is the following in our solr.log:
2016-01-11 05:08:20,283 [ERROR]
null:org.apache.solr.common.SolrException: SolrCore
'entity_search20151228_2' is not available due to init failure:
Error opening new searcher
Caused by: java.io.FileNotFoundException:
        at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:340)
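For context on the failure mode, here is a minimal sketch (Python, purely
illustrative; the example filenames are assumptions) of how Lucene picks the
commit point that SegmentInfos.read then opens: each commit is written as
segments_N with N the generation in base 36, and the reader opens the highest
generation present, so a truncated or missing file at that generation surfaces
as the FileNotFoundException above.

```python
def generation(name):
    # "segments_a" -> 10: the suffix after the underscore is the
    # commit generation encoded in base 36.
    return int(name.split("_", 1)[1], 36)

def latest_segments_file(names):
    # Lucene opens the segments_N file with the highest generation;
    # "segments.gen" and per-segment files like "_0.cfs" are ignored here.
    candidates = [n for n in names if n.startswith("segments_")]
    return max(candidates, key=generation) if candidates else None

# Hypothetical directory listing: the reader would try segments_b first,
# and a missing/truncated segments_b is exactly the error quoted above.
files = ["_0.cfs", "segments_1", "segments_a", "segments_b"]
print(latest_segments_file(files))  # -> segments_b
```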

Unfortunately the standard Lucene CheckIndex is unable to recover from this
error, but there is a patch available at:

After modifying the patch to run on Lucene 4.7 [any plans to upgrade?] we
managed to bring our nodes back up, and they seem to be functioning fine.

Have you seen these issues anywhere else? Any advice on how to solve them,
besides continually running the patched CheckIndex script after each
failure?
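Until a real fix lands, one interim option is to trigger the patched
CheckIndex only when a core actually breaks. A hypothetical helper (not part
of Riak or Solr; the log format is taken from the error quoted above) that
scans solr.log for the init-failure message and reports the affected cores:

```python
import re

# Matches the SolrException line quoted earlier and captures the core name.
INIT_FAILURE = re.compile(
    r"SolrCore '([^']+)' is not available due to init failure")

def failed_cores(log_lines):
    # Return the distinct core names that failed to initialize,
    # so a repair job knows which index directories to check.
    cores = set()
    for line in log_lines:
        m = INIT_FAILURE.search(line)
        if m:
            cores.add(m.group(1))
    return sorted(cores)

log = [
    "2016-01-11 05:08:20,283 [ERROR] null:org.apache.solr.common."
    "SolrException: SolrCore 'entity_search20151228_2' is not available "
    "due to init failure: Error opening new searcher",
]
print(failed_cores(log))  # -> ['entity_search20151228_2']
```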


More information about the riak-users mailing list