Expected vs Actual Bucket Behavior

Eric Filson efilson at gmail.com
Tue Jul 20 18:00:14 EDT 2010

On Tue, Jul 20, 2010 at 3:02 PM, Justin Sheehy <justin at basho.com> wrote:

> Hi, Eric!  Thanks for your thoughts.
> On Tue, Jul 20, 2010 at 12:39 PM, Eric Filson <efilson at gmail.com> wrote:
> > I would think that this requirement,
> > retrieving all objects in a bucket, to be a _very_ common
> > place occurrence for modern web development and perhaps (depending on
> > requirements) _the_ most common function aside from retrieving a
> > single k/v pair.
> I tend to see people that mostly try to write applications that don't
> select everything from a whole bucket/table/whatever as a very
> frequent occurrence, but different people have different requirements.
>  Certainly, it is sometimes unavoidable.

Indeed, in my case it is :(

> > In my mind, this seems to leave the only advantage to buckets in this
> > application to be namespacing... While certainly important, I'm fuzzy on
> > what the downside would be to allowing buckets to exist as a separate
> > partition/pseudo-table/etc... so that retrieving all objects in a bucket
> > would not need to read all objects in the entire system
> The namespacing aspect is a huge advantage for many people.  Besides
> the obvious way in which that allows people to avoid collisions, it is
> a powerful tool for data modeling.  For example, sets of 1-to-1
> relationships can be very nicely represented as something like
> "bucket1/keyA, bucket2/keyA, bucket3/keyA", which allows related items
> to be fetched without any intermediate queries at all.

I agree; however, the same thing can be accomplished by prefixing your keys
with a "namespace"...

bucket_1_keyA, bucket_2_keyA, bucket_3_keyA

Obviously, buckets in Riak have additional functionality and allow for more
complex but easier-to-use map/reduce functions across multiple buckets.
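The prefix scheme above can be sketched against any flat key/value store. Here is a minimal Python illustration (the dict stands in for the store, and the helper names are hypothetical); note how listing a "bucket" still degenerates into a scan of every key, which is the cost this thread is discussing:

```python
# Minimal sketch: simulating bucket namespacing with key prefixes.
# The flat dict stands in for a key/value store; helper names are hypothetical.

store = {}

def ns_key(bucket, key):
    # Join bucket and key with a separator unlikely to appear in either.
    return f"{bucket}::{key}"

def put(bucket, key, value):
    store[ns_key(bucket, key)] = value

def get(bucket, key):
    return store[ns_key(bucket, key)]

def keys_in(bucket):
    # "List all keys in a bucket" becomes a scan over every key in the
    # store -- the same full-scan behavior discussed in the thread.
    prefix = bucket + "::"
    return [k[len(prefix):] for k in store if k.startswith(prefix)]

# Related 1-to-1 records share a key and differ only by namespace:
put("bucket_1", "keyA", {"name": "alice"})
put("bucket_2", "keyA", {"email": "alice@example.com"})
put("bucket_3", "keyA", {"last_login": "2010-07-20"})

profile = get("bucket_1", "keyA")
```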

> One of the things that many users have become happily used to is that
> buckets in Riak are generally "free"; they come into existence on
> demand, and you can use as many of them as you want in the above or
> any other fashion.  This is in essence what conflicts with your
> desire.  Making buckets more fundamentally isolated from each other
> would be difficult without incurring some incremental cost per bucket.

For me, I am more than willing to add a small amount of overhead to the
storage engine in exchange for increased functionality and reduced overhead
at the application layer.  Again, this is obviously application specific,
and I'm not saying every bucket should exist in its own space for every
implementation, but a different storage engine or a configuration option
allowing this level/type of access would certainly be nice :)

> > I might recommend a hybrid
> > solution (based in my limited knowledge of Riak)... What about allowing a
> > bucket property named something like "key_index" that points to a key
> > containing a value of "keys in bucket".  Then, when calling GET
> > /riak/bucket, Riak would use the key_index to immediately reduce its
> > result set before applying m/r funcs.  While I understand this is
> > essentially what a developer would do, it would certainly alleviate some
> > code requirements (application side) as well as make the behavior of
> > retrieving a bucket's contents more "expected" and efficient.
> A much earlier incarnation of Riak actually stored bucket keylists
> explicitly in a fashion somewhat like what you describe.  We removed
> this as one of our biggest goals is predictable and understandable
> behavior in a distributed systems sense, and a model like this one
> turns each write operation into at least two operations.  This isn't
> just a performance issue, but also adds complexity.  For instance, it
> is not immediately obvious what should be returned to the client if a
> data item write succeeds, but the read/write of the index fails?

Haha, these are the exact reasons I would cite, as a developer, for
handling this on Riak's side... Without the option of automatic bucket
indexing, the double write is effectively pushed into the application,
where it costs more cycles and more data across the wire.  Instead of a
single write that lets Riak handle the bookkeeping, you have to GET
index_key, UPDATE index_key, and ADD new_key... So rather than one
transaction with Riak, you have three transactions with Riak plus the
application-side logic.  Inherently, this adds another layer of complexity
to the application code base for something that could be done more
efficiently by the DB engine itself.

I would think a separate error number and message would suffice as the
return error; obviously, though, developers would need to be made aware of
it so they can code for the exception.

Also, this would be optional: if the index_key wasn't set for the bucket,
this behavior wouldn't be used.  That would at least make the system more
flexible to application requirements and developer preferences.
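The application-side bookkeeping described above can be sketched concretely. In this hypothetical Python stand-in (a dict plays the role of the store, and the helper names are my own), every write becomes three operations, and the comment marks the failure window raised in the quoted reply:

```python
# Sketch of application-side index maintenance: without a server-kept
# key index, one logical write becomes three store operations.
# The dict-backed store and helper names are hypothetical stand-ins.

import json

store = {}

def put_with_index(bucket, key, value, index_key="_keys"):
    idx_id = f"{bucket}/{index_key}"
    # 1. GET the index key (one round trip).
    keys = json.loads(store.get(idx_id, "[]"))
    # 2. UPDATE the index with the new key (second round trip).
    if key not in keys:
        keys.append(key)
        store[idx_id] = json.dumps(keys)
    # 3. ADD the new object itself (third round trip).  If this step
    # fails after step 2 succeeded, the index and the data disagree --
    # the exact consistency question raised in the quoted reply.
    store[f"{bucket}/{key}"] = json.dumps(value)

def keys_in_bucket(bucket, index_key="_keys"):
    # Reading the index avoids scanning every key in the store.
    return json.loads(store.get(f"{bucket}/{index_key}", "[]"))

put_with_index("users", "keyA", {"name": "alice"})
put_with_index("users", "keyB", {"name": "bob"})
```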

> Most people using distributed data systems (including but not limited
> to Riak) do explicit data modeling, using things like key identity as
> above, or objects that contain links to each other (Riak has great
> support for this) or other data modeling means to plan out their
> expected queries in advance.
> > Anyway, information is pretty limited on riak right now, seeing as how
> > it's so new, but talk in my development circles is very positive and
> > lively.
> Please do let us know any aspects of information on Riak that you
> think are missing.  We think that between the wiki, the web site, and
> various other materials, the information is pretty good.  Riak's been
> open source for about a year, and in use longer than that; while there
> are many things much older than Riak, we don't see relative youth as a
> reason not to do things right.
> Thanks again for your thoughts, and I hope that this helps with your
> understanding.

Some very valuable information, for me, would be a breakdown of how Riak
scales out...

Something like showing how many keys in how many buckets take how long with
how many nodes... (extended by: now with 2 more machines, now with more
complex m/r funcs, now with twice as many keys, etc.)  I know this largely
depends on whatever map/reduce functions are being run, but even a simple
example would be nice to see.  As it is, I have no idea how many queries
per second, of what type, I can run with how many active nodes.  Again, I
realize this is something that needs to be benchmarked for any real
accuracy, but I'm speaking of targeting developers like myself who are
evaluating this newer technology.  It is a very large commitment of time
and resources to design and implement something and then benchmark it just
to answer "will this work efficiently for my application?"  Having some
baseline stats to start from might prompt more developers to explore Riak
as a storage solution.

And once more, thanks for hearing me out and for your feedback.  I'd also
like to reiterate that I'm coming from a limited NoSQL background...
though I feel that's the case for the majority of developers out there
today.  My recommendations are based on the real-world application design
challenges I've personally been presented with over my career, and that I
feel may be common to many other developers as well.  Obviously, even
adding a single option such as the one I've mentioned is a massive
undertaking on Basho's part, but it is definitely the kind of functionality
that would make me say "done, Riak it is" rather than "is there something
else that would better suit my needs?"... and when vying for adoption,
that's a major factor :)
