Yokozuna max practical bucket limit

Ryan Zezeski rzezeski at basho.com
Mon Apr 8 22:09:44 EDT 2013


Elias,

This is exactly why I chose not to make a core per partition.  My gut
feeling was that most users are likely to have more partitions than indexed
buckets.  I don't know the overhead per core or what the limits might be.
I would recommend the Solr mailing list for questions like that.  I've
also looked at that "LotsOfCores" page before.  One benefit of building on
Solr is that any improvements made to it should trickle down to Yokozuna.
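For rough intuition, here is a back-of-envelope sketch in Python.  Every
number in it is an assumption for illustration, not a measurement:

    # Rough ceiling on cores (and thus indexed buckets) per node.
    # All figures below are assumptions, not measurements.
    heap_mb = 8 * 1024            # assumed JVM heap available to Solr
    per_core_overhead_mb = 50     # assumed static cost per loaded core
    cache_headroom_mb = 2048      # assumed room for caches/merges/queries

    max_cores = (heap_mb - cache_headroom_mb) // per_core_overhead_mb
    print("~%d cores per node under these assumptions" % max_cores)  # ~122

Since Yokozuna currently creates one core per indexed bucket per node,
that ceiling is also a rough ceiling on indexed buckets per cluster.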

That said, I still plan to allow a one-to-many mapping from index to
buckets.  That would allow many KV buckets to index under the same core.  I
have an idea of how to implement it.  I'm fairly certain it would work just
fine.  I just need to add a GitHub issue and then it's a "simple matter of
coding."
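For illustration, a sketch of what the client-facing side of that mapping
might look like.  The "yz_index" property name and the exact mechanism are
placeholders, not a committed API (Riak's /buckets/<name>/props endpoint
itself is real):

    import json
    import urllib.request

    # Hypothetical: point several KV buckets at one shared index by
    # setting a bucket property.  "yz_index" is illustrative only.
    def set_index(bucket, index, host="localhost", port=8098):
        url = "http://%s:%d/buckets/%s/props" % (host, port, bucket)
        body = json.dumps({"props": {"yz_index": index}}).encode()
        req = urllib.request.Request(
            url, data=body, method="PUT",
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    # Many buckets, one Solr core on each node:
    for b in ["users", "sessions", "profiles"]:
        set_index(b, "people_index")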

-Z


On Mon, Apr 8, 2013 at 6:25 PM, Elias Levy <fearsome.lucidity at gmail.com> wrote:

> Thinking about Yokozuna it would appear that for some set of hardware
> specs there must be some maximum practical number of indexed buckets.
>  Yokozuna creates one Solr core per bucket per node.  Scaling the Riak
> cluster will reduce the amount of data indexed per core, but not the number
> of cores per node.  I assume there is some static overhead per Solr core,
> and
> thus a maximum number of indexed buckets per cluster based on the per node
> resources.
>
> Any idea what this may be, roughly?  Has anyone tried to max out the
> number of indexed buckets?
>
> Searching the Solr mailing list, it seems some folks run up to 800 cores
> per master, but their hardware is unknown and queries are served by
> slaves, so those cores only handle indexing.
>
> It looks like there is some ongoing work in Solr to support a large
> number of cores by dynamically loading and unloading them (
> http://wiki.apache.org/solr/LotsOfCores).  Is this something Yokozuna may
> make use of?  It may be too expensive a hit for latency.  (A sketch of
> the CoreAdmin calls involved follows the quoted message.)
>
> Elias Levy
>
>
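For context on what the LotsOfCores approach Elias mentions would look
like operationally: Solr's CoreAdmin HTTP API can already create and
unload cores at runtime.  A minimal sketch, assuming a Solr 4.x node on
localhost:8983 and an instance directory that already holds the core's
configuration; the open question above is whether the load/unload latency
is acceptable on the query path:

    import urllib.parse
    import urllib.request

    CORES = "http://localhost:8983/solr/admin/cores"

    def core_admin(**params):
        # Issue a CoreAdmin command (CREATE, UNLOAD, STATUS, ...).
        url = CORES + "?" + urllib.parse.urlencode(params)
        return urllib.request.urlopen(url).read()

    # Unload a cold core to reclaim its static overhead...
    core_admin(action="UNLOAD", core="bucket_a")

    # ...and recreate it on demand.  instanceDir is assumed to already
    # contain the core's conf/ and data/ directories.
    core_admin(action="CREATE", name="bucket_a", instanceDir="bucket_a")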