Updating data in a production setup
mark at basho.com
Wed Apr 18 13:28:32 EDT 2012
On Wed, Apr 18, 2012 at 6:43 AM, vijayakumar <forgetvijay at gmail.com> wrote:
> Thanks for your help. The number of keys would be in the range of
> millions, and the total number of buckets is 4.
What hardware are you running this on?
> On Tue, Apr 17, 2012 at 9:15 PM, Mark Phillips <mark at basho.com> wrote:
>> Hi Vijayakumar,
>> On Sat, Apr 14, 2012 at 2:57 AM, vijayakumar <forgetvijay at gmail.com> wrote:
>>> As our application moves from one release to another, we need to alter
>>> the existing records in riak [Say changes like adding new field to the json
>>> with a default value]. What's the ideal way to handle such schema changes,
>>> especially if indexing is required for those fields? Is it possible to run a
>>> MapReduce job over the existing buckets and update the records? I could not
>>> find any help links for this kind of migration.
>>> Riak Version: 1.1
>> The short answer is "yes, that's possible". That said, at the moment I'm
>> not aware of any existing code/resources that could walk you through it.
>> Anyone have anything they can share?
>> Keep in mind that running something like this over all your data is going
>> to put a lot of load on your cluster and might lead to some timeouts and
>> interesting debugging. Out of curiosity, how many keys do you need to update?
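In the absence of an existing walkthrough, the common pattern for this kind of migration is to enumerate the keys (via key listing or a keys-only MapReduce phase, since Riak's map phases are read-only and don't write back) and rewrite each object client-side. A minimal sketch, assuming hypothetical `fetch`/`store` callables wrapping your Riak client's get/put; the transform is idempotent, so the job can be safely re-run after a partial failure:

```python
import json

def add_default_field(doc_json, field, default):
    """Return the JSON document with `field` set to `default` if absent.

    Idempotent: documents that already carry the field come back unchanged,
    so re-running the migration is harmless.
    """
    doc = json.loads(doc_json)
    if field not in doc:
        doc[field] = default
    return json.dumps(doc)

def migrate_keys(fetch, store, keys, field, default):
    """Walk a set of keys, rewriting only documents missing the field.

    `fetch(key)` and `store(key, body)` are hypothetical wrappers around
    your Riak client's get/put; `keys` would come from key listing or a
    keys-only MapReduce phase. Skipping unchanged documents avoids
    needless writes (and vector-clock churn) on a loaded cluster.
    """
    for key in keys:
        body = fetch(key)
        updated = add_default_field(body, field, default)
        if updated != body:
            store(key, updated)
```

With millions of keys, you would also want to throttle the loop (batching, a small sleep between puts) so the rewrite doesn't starve live traffic, per the load caveat above.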
>>> Thanks and Regards,
>>> riak-users mailing list
>>> riak-users at lists.basho.com