Deciphering Key Filter Errors

Jason J. W. Williams jasonjwwilliams at gmail.com
Thu Mar 3 15:32:12 EST 2011


...btw, I've found that with the key filter, I have to supply a JavaScript
map function if I want to use a JS reduce function. If I use just a JS
reduce function, I get bad_json errors. However, my understanding is
that supplying a map function makes Riak load the key data, whereas
supplying just a reduce function won't. So I'm in a bit of a conundrum
about how to preformat the output without issuing a map, since the
formatting I want to do is splitting the key name. Thank you for your
help.
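[A pass-through JS map of the kind described, whose only job is to split the
key name, might look like this. This is a sketch, not code from the thread;
it assumes the index-key layout mentioned later ("ccode/23", tokenized on
"/"), and mapSplitKey is a hypothetical name.]

```javascript
// Sketch: a map phase that only reformats the key name before the reduce.
// Assumes index keys of the form "<data_key_name>/<count>", e.g. "ccode/23".
function mapSplitKey(value) {
  var parts = value.key.split("/");
  // Emit just the data-key portion, e.g. "ccode", for the reduce to consume.
  return [parts[0]];
}
```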

-J

On Thu, Mar 3, 2011 at 1:26 PM, Dan Reverri <dan at basho.com> wrote:
> A bug has already been filed for this issue:
> https://issues.basho.com/show_bug.cgi?id=1002
> Thanks,
> Dan
> Daniel Reverri
> Developer Advocate
> Basho Technologies, Inc.
> dan at basho.com
>
>
> On Thu, Mar 3, 2011 at 12:19 PM, Dan Reverri <dan at basho.com> wrote:
>>
>> Hi Jason,
>> I'm able to reproduce the issue when the keys I am filtering do not
>> contain the token. For example, if my token is "-" and my key is
>> "helloworld" the tokens for that key become:
>> ["helloworld"]
>> Grabbing the second element of that list returns an error:
>> 3> string:tokens("helloworld", "-").
>> ["helloworld"]
>> 4> lists:nth(2, ["helloworld"]).
>> ** exception error: no function clause matching lists:nth(1,[])
>> It seems Riak should catch errors during the filtering process and discard
>> keys that cause errors. I will file a bug.
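[In JavaScript terms, the failure Dan reproduced above can be sketched like
this; secondToken is a hypothetical helper, not Riak code.]

```javascript
// Sketch: take the 2nd token only when the key actually splits that far.
// "helloworld".split("-") yields ["helloworld"], so tokens[1] is undefined --
// the JS analogue of lists:nth(2, ["helloworld"]) crashing in Erlang.
function secondToken(key) {
  var tokens = key.split("-");
  return tokens.length >= 2 ? tokens[1] : null; // discard keys without the token
}
```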
>> Thanks,
>> Dan
>> Daniel Reverri
>> Developer Advocate
>> Basho Technologies, Inc.
>> dan at basho.com
>>
>>
>> On Thu, Mar 3, 2011 at 10:28 AM, Jason J. W. Williams
>> <jasonjwwilliams at gmail.com> wrote:
>>>
>>> If someone could help me understand just this error, that would help a
>>> lot: https://gist.github.com/852450
>>>
>>> Thank you in advance.
>>>
>>> -J
>>>
>>> On Wed, Mar 2, 2011 at 11:55 PM, Jason J. W. Williams
>>> <jasonjwwilliams at gmail.com> wrote:
>>> > Hi,
>>> >
>>> > I'm experimenting with using key filters to implement indexes. My
>>> > approach is, for each data key in bucket A, to create a new empty key
>>> > in a dedicated index bucket, encoding the original key name and the
>>> > value of the indexed field in the new index key's name.
>>> >
>>> > Data key looks like this:
>>> >
>>> > Bucket - riak_perf_test
>>> > Key - ccode_<unique_6_digit_ID> : {"redeemed_count": 23 }
>>> >
>>> > For each data key created, an index key is created:
>>> >
>>> > Bucket - idx=redeemed_count=ccode
>>> > Key - ccode/23
>>> >
>>> > (in both keys, 23 varies per key according to the value of
>>> > "redeemed_count")
>>> >
>>> >
>>> > My goal is to run a key-filtered MapReduce job against
>>> > idx=redeemed_count=ccode that generates a list of all data key names
>>> > with a redeemed_count < 50.
>>> >
>>> > The job (using curl) is here: https://gist.github.com/852451
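[A job of the sort described might be shaped like this. This is a sketch
assembled from details in the thread, not the actual gist; tokenize,
string_to_int, and less_than are standard Riak key filters, and the map
source here is illustrative.]

```javascript
// Sketch: key-filtered MapReduce job. The filter splits "ccode/23" on "/",
// takes the 2nd token, converts it to an integer, and keeps counts < 50.
var job = {
  inputs: {
    bucket: "idx=redeemed_count=ccode",
    key_filters: [
      ["tokenize", "/", 2], // 2nd token, e.g. "23"
      ["string_to_int"],
      ["less_than", 50]
    ]
  },
  query: [
    // A JS map is still needed (see the top of the thread) to reformat
    // the key name before any reduce runs.
    { map: { language: "javascript",
             source: "function(v) { return [v.key.split('/')[0]]; }" } }
  ]
};
```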
>>> >
>>> > The job errors out almost immediately in sasl-error.log
>>> > (https://gist.github.com/852450), but the request doesn't fail
>>> > immediately on the client side; the only error the client sees is an
>>> > eventual timeout.
>>> >
>>> > So my question is: what is the error in sasl-error.log telling me is
>>> > wrong with my job construction? And why does the client only get a
>>> > timeout error, instead of the map_reduce_error I've seen for
>>> > non-key-filtered jobs?
>>> >
>>> > Thank you in advance for any help. I greatly appreciate it.
>>> >
>>> > -J
>>> >
>>>
>>> _______________________________________________
>>> riak-users mailing list
>>> riak-users at lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



