Performance Tuning in OmniOS

Jared Morrow jared at basho.com
Tue Jan 21 12:47:29 EST 2014


What type of RAID did you choose for your zpool of 5 volumes?  If you chose
the default of raidz, you will not be getting much of a performance boost
over vanilla EBS, just a big integrity boost.  Also, unless you are using
provisioned IOPS for EBS, you are starting from an extremely slow
base case, so adding ZFS on top might not help matters much.
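
(A quick way to check: "zpool status tank" shows whether the five volumes
ended up in a raidz vdev or a plain stripe; I'm assuming the pool is named
tank, as in your zfs commands. If raw throughput is the goal, a simple
stripe such as

  zpool create tank <vol1> <vol2> <vol3> <vol4> <vol5>

spreads writes across all five volumes, at the cost that losing any one
EBS volume loses the whole pool. The device names above are placeholders;
use the ones reported by format on your instance.)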

If speed is the concern, I'm willing to bet that if you run another test
against the two instance storage disks on that m1.large, you will beat
those 5 EBS volumes pretty easily.
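
(Sketch of that comparison, with illustrative pool and device names; check
format for the actual instance-store disks:

  zpool create scratch c1t0d0 c1t1d0
  zfs create scratch/riak
  zfs set recordsize=4k scratch/riak
  zfs set atime=off scratch/riak

then point the node's LevelDB data directory at scratch/riak and re-run the
same Basho Bench workload.)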

-Jared


On Tue, Jan 21, 2014 at 9:22 AM, Hari John Kuriakose <ejhari at gmail.com> wrote:

> Hello,
>
> I am using standard EBS devices, with a zpool in each instance comprising
> five 40GB volumes. Each of the Riak instances is of the m1.large type.
>
> I have made the following changes in zfs properties:
>
> # My reason: the default sst block size for leveldb is 4k.
> zfs set recordsize=4k tank/riak
> # My reason: by default, leveldb verifies checksums automatically.
> zfs set checksum=off tank/riak
> zfs set atime=off tank/riak
> zfs set snapdir=visible tank/riak
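> # Sanity check (same dataset as above): confirm the properties actually applied.
> zfs get recordsize,checksum,atime,snapdir tank/riak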
>
> And I did the following with help from the Basho AWS tuning docs:
>
> projadd -c "riak" -K "process.max-file-descriptor=(basic,65536,deny)" user.riak
> bash -c "echo 'set rlim_fd_max=65536' >> /etc/system"
> bash -c "echo 'set rlim_fd_cur=65536' >> /etc/system"
> ndd -set /dev/tcp tcp_conn_req_max_q0 40000
> ndd -set /dev/tcp tcp_conn_req_max_q 4000
> ndd -set /dev/tcp tcp_tstamp_always 0
> ndd -set /dev/tcp tcp_sack_permitted 2
> ndd -set /dev/tcp tcp_wscale_always 1
> ndd -set /dev/tcp tcp_time_wait_interval 60000
> ndd -set /dev/tcp tcp_keepalive_interval 120000
> ndd -set /dev/tcp tcp_xmit_hiwat 2097152
> ndd -set /dev/tcp tcp_recv_hiwat 2097152
> ndd -set /dev/tcp tcp_max_buf 8388608
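> # Note: ndd settings are runtime-only; to survive a reboot they have to be
> # reapplied at boot, e.g. from an init or SMF start script.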
>
> Thanks again.
>
>
> On Tue, Jan 21, 2014 at 9:12 PM, Hector Castro <hector at basho.com> wrote:
>
>> Hello,
>>
>> Can you please clarify what type of disk you are using within AWS?
>> EBS, EBS with PIOPS, instance storage? In addition, maybe some details
>> on volume sizes and instance types.
>>
>> These details may help someone attempting to answer your question.
>>
>> --
>> Hector
>>
>>
>> On Tue, Jan 21, 2014 at 8:11 AM, Hari John Kuriakose <ejhari at gmail.com> wrote:
>> >
>> > I am running LevelDB on ZFS on Solaris (OmniOS, specifically) in Amazon AWS.
>> > The IOPS are very, very low, and tuning has not brought any significant
>> > progress either.
>> >
>> > I chose ZFS because LevelDB requires the node to be stopped before taking a
>> > backup, so I needed a filesystem with snapshot capability. And the most
>> > favourable Amazon community AMI seemed to be using OmniOS (a fork of
>> > Solaris). Everything is fine, except the performance.
>> >
>> > I did all the AWS tuning proposed by Basho, but Basho Bench still gave twice
>> > the IOPS on Ubuntu as compared to OmniOS, under the same conditions. Also, I
>> > am using the riak-js client library, and it's a 5-node Riak cluster with 8GB
>> > of RAM each.
>> >
>> > I could not yet figure out what is really causing the congestion in OmniOS.
>> > Any pointers will be really helpful.
>> >
>> > Thanks and regards,
>> > Hari John Kuriakose.
>> >
>> >