Slow performance using linkwalk, help wanted

Karsten Thygesen karthy at netic.dk
Mon Nov 8 15:26:21 EST 2010


Hi Kevin

I'm tuning in here as well, as I'm involved in the POC and have configured the cluster.

We use the default ring size of 64. We chose this after reading the recommendations on this list - especially a posting suggesting that about 10 partitions per node is not far off - so we figured 64 (16 partitions per node) might be the right value for a 4-node cluster.

Regarding the vector clock explosion, that is worth looking into - we were hoping to achieve performance in the range you have found; otherwise we cannot use this technology :-(

Jan, can you please elaborate on the vector clock issue?

Best regards,
Karsten 

On Nov 8, 2010, at 17:51, Kevin Smith wrote:

> Jan - 
> 
> I've run some tests using an 8 GB, 4-core Linux box I had handy, with my MBP as a client using riak-java-client over HTTP. For the test I configured a user record as you described, linked to 250 1 KB entries in a separate bucket named "documents". I spun up 5 Java threads to simulate 5 concurrent users. Each thread performed the link walk from the user to the documents 2500 times. From that I observed the following performance, all times in milliseconds (a sketch of the request each thread issued follows the numbers):
> 
> Average runtime: 124
> 99th percentile: 220
> 99.5th percentile: 263
> 99.9th percentile: 949
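> 
> For reference, here is roughly what each worker thread did per iteration, sketched as a raw HTTP request in plain Java rather than through riak-java-client (host, bucket, and key names are made up):
> 
> import java.io.BufferedReader;
> import java.io.InputStreamReader;
> import java.net.HttpURLConnection;
> import java.net.URL;
> 
> public class WalkTimer {
>     public static void main(String[] args) throws Exception {
>         // Link-walk spec "documents,_,1": follow links into the
>         // "documents" bucket, any tag ("_"), and keep ("1") the results.
>         URL url = new URL("http://127.0.0.1:8098/riak/users/user42/documents,_,1");
> 
>         long start = System.nanoTime();
>         HttpURLConnection conn = (HttpURLConnection) url.openConnection();
>         conn.setRequestMethod("GET");
> 
>         // Drain the multipart/mixed response body.
>         BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
>         while (in.readLine() != null) { /* discard */ }
>         in.close();
> 
>         long elapsedMs = (System.nanoTime() - start) / 1000000;
>         System.out.println("link walk took " + elapsedMs + " ms");
>     }
> }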
> 
> The large difference between the 99.5th and 99.9th percentiles correlates with the beginning of the run, so I think those times reflect the time required for Java's server JIT to fully kick in as well as for GC behavior to stabilize.
> 
> I was able to degrade performance by triggering "vector clock explosion". Setting a bucket's "allow_mult" value to true and then overwriting existing entries with new values, while omitting the old entries' vector clock information, causes the object's vector clock data to bloat, which impacts read times. Is there any chance this is occurring in your application?
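> 
> If it is, the usual fix is a read-modify-write: fetch the object, keep its vector clock, and send it back on the store. A minimal sketch over raw HTTP in plain Java (bucket and key names are hypothetical):
> 
> import java.io.OutputStream;
> import java.net.HttpURLConnection;
> import java.net.URL;
> 
> public class VclockSafeUpdate {
>     public static void main(String[] args) throws Exception {
>         URL url = new URL("http://127.0.0.1:8098/riak/documents/doc1");
> 
>         // 1. GET the current object and capture its vector clock header.
>         //    (On the very first write there is none, hence the null check.)
>         HttpURLConnection get = (HttpURLConnection) url.openConnection();
>         String vclock = get.getHeaderField("X-Riak-Vclock");
>         get.getInputStream().close();
> 
>         // 2. PUT the new value, echoing the vector clock back. Omitting
>         //    this header while overwriting is what bloats the object.
>         HttpURLConnection put = (HttpURLConnection) url.openConnection();
>         put.setRequestMethod("PUT");
>         put.setDoOutput(true);
>         put.setRequestProperty("Content-Type", "application/json");
>         if (vclock != null) {
>             put.setRequestProperty("X-Riak-Vclock", vclock);
>         }
>         OutputStream out = put.getOutputStream();
>         out.write("{\"title\":\"updated\"}".getBytes("UTF-8"));
>         out.close();
>         System.out.println("PUT returned " + put.getResponseCode());
>     }
> }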
> 
> Another possibility is the number of partitions in your cluster is not large enough to provide good parallelization for your workload. What's the value of ring_creation_size in your cluster's app.config? Riak will run with a default ring size of 64 partitions if the entry isn't present.
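> 
> For reference, the setting lives in the riak_core section of app.config; it is fixed when the cluster is first started, so changing it means rebuilding the ring. An example entry (the value shown is illustrative only):
> 
> {riak_core, [
>     %% number of ring partitions, fixed at cluster creation
>     {ring_creation_size, 256}
> ]},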
> 
> --Kevin
> 
> On Nov 8, 2010, at 9:45 AM, Jan Buchholdt wrote:
> 
>> Kevin
>> 
>> We are using HTTP (we have tried PB without any performance gain), with
>> riak-java-client as the client lib.
>> 
>> --
>> Jan Buchholdt
>> Software Pilot
>> Trifork A/S
>> Cell +45 50761121
>> 
>> 
>> 
>> On 2010-11-08 14:20, Kevin Smith wrote:
>>> Jan -
>>> 
>>> Which protocol (HTTP or protocol buffers) and client lib are you using?
>>> 
>>> --Kevin
>>> On Nov 8, 2010, at 6:36 AM, Jan Buchholdt wrote:
>>> 
>>>> We are evaluating Riak for a project, but we are having a hard time making it fast enough for our needs.
>>>> 
>>>> Our model is very simple and looks like this:
>>>> 
>>>> ---------------------                       * ---------------------
>>>> |       Person      | ----------------------> |      Document     |
>>>> ---------------------                         ---------------------
>>>> 
>>>> We have a set of persons and each person can have many documents.
>>>> 
>>>> Our typical queries are:
>>>> 
>>>> Get an overview of all of a person's documents. This query returns the person along with a subset of data from each of the person's documents.
>>>> Get a document by id.
>>>> 
>>>> Our requirement is that these queries complete in under 100 ms at a load of 10 requests per second or less.
>>>> 
>>>> The size of the data:
>>>> A document is approximately 1 KB.
>>>> No data is stored for a person except the person identifier.
>>>> There are around 6 million persons.
>>>> Each person has from 0 to a couple of thousand documents.
>>>> All in all we have 120 million documents.
>>>> Most persons have no more than 1 to 10 documents, but a few "heavy" persons have 500 to 1000 documents.
>>>> 
>>>> Riak setup:
>>>> 4 Nodes.
>>>> Hardware configuration for each node:
>>>> HP ProLiant DL360 G7
>>>> 18 GB RAM
>>>> SAS disks
>>>> Intel(R) Xeon(R) CPU E5620 @ 2.40GHz Proc 1
>>>> Solaris 10 update 9
>>>> 
>>>> We use the default Bitcask storage engine.
>>>> Data is replicated to 3 nodes when it is written (n=3).
>>>> Reads are served from just one replica (r=1).
>>>> 
>>>> We tried implementing our data model using Riak links, as described below:
>>>> 
>>>> Persons are stored in a "person" bucket using their person identifier as key: /person/{personid}
>>>> Documents are stored in another bucket: /document/{documentid}
>>>> On each person we store links to the person's documents (see the sketch after this list).
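>>>> 
>>>> For illustration, a link from a person to a document is just a Link header on the person object, and the link walk is a GET with a walk spec appended to the person URL (the ids below are made up):
>>>> 
>>>> PUT /riak/person/1234567890 HTTP/1.1
>>>> Content-Type: application/json
>>>> Link: </riak/document/doc-1>; riaktag="document", </riak/document/doc-2>; riaktag="document"
>>>> 
>>>> {"personid": "1234567890"}
>>>> 
>>>> GET /riak/person/1234567890/document,_,1 HTTP/1.1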
>>>> 
>>>> We are having problems with the query that fetches all the documents for a person. Reading all the documents for a person is done using a link walk: the link walk first reads all the document keys via the person id, then fetches all the documents.
>>>> For persons with 1-5 documents the response times are often over 100 ms, and for the "heavy" persons with many documents the response times are several seconds. But we are very new to Riak and are probably using the wrong approach.
>>>> 
>>>> Below are our thoughts (we have almost no experience with Riak):
>>>> 
>>>> The chosen data model is good for writes. Writing a new document results in 3 operations against Riak (sketched below): write the document using its id as key; read the person to get the person's existing document links; append the new document's key to the person's links and write the person back.
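>>>> 
>>>> At the HTTP level the three operations look roughly like this (ids abbreviated, not literal requests):
>>>> 
>>>> PUT /riak/document/doc-99      # 1. store the new document
>>>> GET /riak/person/1234567890    # 2. read the person (existing Link headers + X-Riak-Vclock)
>>>> PUT /riak/person/1234567890    # 3. write the person back, with the old Link headers
>>>>                                #    plus one new link and the vclock from step 2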
>>>> 
>>>> Reading via link walk is slow because it is expensive to fetch many documents, even though the link walk can read their keys right away from the person's links. Even though we have 4 nodes and link walks are parallelized, many documents still have to be retrieved from a single node. Fetching, for example, 100 documents from one node (one disk) is expensive. We do not know how the data is laid out on disk, but we are afraid Riak is doing a lot of disk seeks.
>>>> 
>>>> We are considering another, more denormalized approach where we write all of a person's documents in one "blob". But then we are afraid our writes will become slow, because when adding a new document the blob must be read, the new document inserted, and the blob written back.
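>>>> 
>>>> As a sketch, such a blob could be one JSON value per person (the layout here is invented for illustration):
>>>> 
>>>> /personblob/{personid}:
>>>> {
>>>>   "personid": "1234567890",
>>>>   "documents": [
>>>>     {"id": "doc-1", "summary": "..."},
>>>>     {"id": "doc-2", "summary": "..."}
>>>>   ]
>>>> }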
>>>> 
>>>> We could really use some input. Are our assumptions wrong? (We have not yet dug into the internals.) Is there a good data model for our requirements?
>>>> We haven't looked at Riak Search at all. Maybe it could solve some of our problems.
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Jan Buchholdt
>>>> Software Pilot
>>>> Trifork A/S
>>>> Cell +45 50761121
>>>> 
>>>> 
>> 
>> 
> 
> 
> _______________________________________________
> riak-users mailing list
> riak-users at lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


