Yokozuna max practical bucket limit

Elias Levy fearsome.lucidity at gmail.com
Mon Apr 8 18:25:37 EDT 2013

Thinking about Yokozuna, it would appear that for a given set of hardware
specs there must be some maximum practical number of indexed buckets.
Yokozuna creates one Solr core per bucket per node.  Scaling out the Riak
cluster will reduce the amount of data indexed per core, but not the number
of cores per node.  I assume there is some static overhead per Solr core,
and thus a maximum number of indexed buckets per cluster based on that
per-node overhead.
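To make the concern concrete, here is a back-of-the-envelope sketch of that per-node math.  The per-core overhead figure is purely an assumption for illustration, not a measured Solr number:

```python
# Sketch of Yokozuna's one-core-per-bucket-per-node model.
# PER_CORE_OVERHEAD_MB is a hypothetical figure, not a measurement.

PER_CORE_OVERHEAD_MB = 50  # assumed static memory cost per Solr core


def cores_per_node(indexed_buckets: int) -> int:
    """Every node hosts one core per indexed bucket, regardless of cluster size."""
    return indexed_buckets


def static_overhead_mb(indexed_buckets: int) -> int:
    """Total assumed static overhead on a single node."""
    return cores_per_node(indexed_buckets) * PER_CORE_OVERHEAD_MB


# Adding nodes shrinks the data held by each core, but not the core count
# on any one node, so the static overhead per node stays the same:
print(cores_per_node(800))      # 800
print(static_overhead_mb(800))  # 40000
```

Under those assumed numbers, 800 indexed buckets would pin roughly 40 GB per node before any data is indexed, which is the kind of ceiling the question is asking about.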

Any idea what this may be, roughly?  Has anyone tried to max out the
number of indexed buckets?

Searching the Solr mailing list, it seems some folks run up to 800 cores
per slave, but their hardware is unknown and queries are being served by
the slaves, so those cores are only indexing.

It looks like there is some ongoing work in Solr to support large numbers
of cores by dynamically loading and unloading them (
http://wiki.apache.org/solr/LotsOfCores).  Is this something Yokozuna might
make use of?  It may be too expensive a hit for query latency.
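The latency concern can be illustrated with a rough model of the LotsOfCores idea: keep only the N most recently queried cores loaded and evict the least recently used.  The class name, cache size, and load cost below are all hypothetical, chosen only to show why a query that hits an unloaded core pays a penalty:

```python
# Toy LRU model of dynamic core loading/unloading (LotsOfCores-style).
# CORE_LIMIT and LOAD_COST_MS are illustrative assumptions, not Solr settings.

from collections import OrderedDict

CORE_LIMIT = 2       # assumed max number of cores kept loaded at once
LOAD_COST_MS = 500   # assumed cost to open a cold core


class CoreCache:
    def __init__(self, limit: int = CORE_LIMIT):
        self.limit = limit
        self.loaded = OrderedDict()  # core name -> True, in LRU order

    def query(self, core: str) -> int:
        """Return the extra latency (ms) paid to load the core, if any."""
        if core in self.loaded:
            self.loaded.move_to_end(core)    # mark as most recently used
            return 0
        if len(self.loaded) >= self.limit:
            self.loaded.popitem(last=False)  # evict least recently used core
        self.loaded[core] = True
        return LOAD_COST_MS


cache = CoreCache()
print(cache.query("bucket_a"))  # 500: cold load
print(cache.query("bucket_a"))  # 0: already loaded
print(cache.query("bucket_b"))  # 500: cold load
print(cache.query("bucket_c"))  # 500: cold load, evicts bucket_a
print(cache.query("bucket_a"))  # 500: was evicted, must load again
```

The sketch shows the trade-off: unloading cores bounds the static overhead, but any bucket outside the working set pays the cold-load cost on its next query.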

Elias Levy
