A Non-Blocking Benchmark
A couple of weeks ago I asked the question “Why non-blocking?”. And I didn’t reach a definitive answer, although it seemed that writing non-blocking code is not the better option – it’s not supposed to be faster or have higher throughput, even though conventional wisdom says it should.
So, leaving behind the theoretical questions, I decided to do a benchmark. The code is quite simple – it reads a 46KB file into memory and then writes it to the response. That’s the simplest scenario that’s still close to the regular use case of a web application – reading stuff from the database, performing some logic on it, and then writing a view to the client (it’s disk I/O vs network I/O in case the database is on another server, but let’s disregard that for now).
There are 5 distinct scenarios: Servlet using a BIO connector, Servlet using a NIO connector, Node.js, Node.js using sync file reading, and Spray (a Scala non-blocking web framework). Gatling was used to perform the tests, and was run on a t2.small AWS instance; the application code was run on a separate m3.large instance.
The code used in the benchmark as well as the full results are available on GitHub. (Note: please let me know if you spot something really wrong with the benchmark that skews the results)
What do the results tell us? That it doesn’t matter whether it’s blocking or non-blocking. Differences in response time and requests/sec (as well as the other factors) are negligible.
Spray appears to be slightly better when the load is not so high, whereas BIO happens to have more errors under really high load (while being fastest at the same time). Node.js is surprisingly fast for a JavaScript runtime (kudos to Google for V8).
The differences in the different runs are way more likely to be due to the host VM current CPU and disk utilization or the network latency, rather than the programming model or the framework used.
After reaching this conclusion, the fact that Spray is seemingly faster bugged me (especially given that I executed the Spray tests half an hour after the rest), so I wanted to rerun the tests this morning. And my assumption about the role of infrastructure factors could not have been proven more right. I ran the 60 thousand requests test and the mean time was 3 seconds (for both Spray and the servlet), with a couple of hundred failures and only 650 requests/sec. This aligned with my observation that AWS works a lot faster when I start and delete CloudFormation stacks early in the morning (GMT+2, when Europe is still sleeping and the US is already in bed).
The benchmark is still valid, as I executed it within 1 hour on a Sunday afternoon. But the whole experiment convinced me even more of what I concluded in my previous post – that non-blocking doesn’t have visible benefits and one should not force themselves to use the possibly unfriendly callback programming model for the sake of imaginary performance gains. Niche cases aside, for the general scenario you should pick the framework, language and programming model that the people in the team are most comfortable with.
Which of the Servlet tests are non-blocking? I don’t see any callbacks in the only .java file – https://github.com/Glamdring/bozho-benchmarks/blob/master/nonblocking/code/servlet/src/main/java/com/test/TestServlet.java
None – it’s just that they were using a NIO connector.
So you are comparing a non-blocking Java Servlet to other technologies? Why not compare blocking Java to non-blocking Java? Also, your tests clearly show that non-blocking does have value. On every occasion the non-blocking Node beats the blocking Node, and it also beats blocking Java. How do you conclude that non-blocking has no value?
In addition the results for non-blocking code should improve even further (or rather the blocking results should get worse) as the latency for the blocking call (in this case reading the file) increases.
The blocking comes in a couple of places – reading the request, reading the file, and writing the response. The BIO test case is blocking everywhere, while the NIO test case is blocking only on the file read. Also, why do you say Node always beats blocking Java, when for 52500 requests BIO is both faster and has more req/sec? The point is, even where Node is better, it is by a tiny margin that is way more likely to be gained or lost due to other factors.
52500 is the only case where blocking performs better, and on every other test the non-blocking code manages more requests per second. Also, NIO is better than BIO, although, as you said, it does block on occasion.
and the 52500 case is the one you should be in most of the time – not overloaded with requests. So which one is “better”? :)
What are the limits of the machines in terms of network bandwidth, CPU overhead for the network operations, I/O overhead, and how much data is transferred in each direction during the test runs? For example, the results for small instances here are quite far from 1Gbps bandwidth: http://serverbear.com/compare?Sort=Host&Order=asc&Server+Type=Cloud&Monthly+Cost=-&RAM=-&Bandwidth+Benchmark=- They have instructions on how to run the benchmark and test your instances.
Well, it depends on what your goals are. Also, as I said, you will see results more in favor of async if you had more waiting. For example, read 10 files instead of one in a single request, or have the request block longer in some way.
What async programming tries to solve (at least on the JVM, where threads are available) is issues related to Little’s Law. As long as you’re not hitting Little’s Law’s limits, you really *shouldn’t* see much of a difference between threads and async. But as your machine probably can’t handle 30K threads well, sooner or later you will be hitting those limits.
See a theoretical analysis here: http://blog.paralleluniverse.co/2014/02/04/littles-law/
and a benchmark here: http://blog.paralleluniverse.co/2014/05/29/cascading-failures/
showing some very clear results (the benchmark uses Quasar, so you can keep your simple, blocking, synchronous code, while the library turns that to async code behind the scenes).
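As a rough illustration of what Little’s Law says about the numbers in the post (using the 650 req/sec at ~3 s mean from the morning rerun):

```javascript
// Little's Law: L = λ × W
// (average concurrency = throughput × mean time in the system)
const throughput = 650;  // requests/sec, from the morning rerun in the post
const meanLatency = 3;   // seconds, mean response time from the same run
const concurrency = throughput * meanLatency;
console.log(concurrency); // 1950 requests in flight on average
```

So a thread-per-request server would need on the order of 2,000 threads at that load – well below the 30K figure mentioned above, which may be one reason blocking and non-blocking performed so similarly here.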
btw, for the sake of correctness :)
In your Spray example it’s better not to block in the directive directly, but rather to use a Future with a stand-alone execution context.
That’s what’s suggested by all the Spray guidelines.
Just google it – here’s a comparison of the different methods:
https://github.com/zcox/spray-blocking-test
Wanted to also put a link to the TechEmpower benchmarks, but they removed the Spray results as outdated :(
https://www.techempower.com/benchmarks/#section=data-r9&hw=peak&test=json
And here is a slightly old but worth-reading blog post about benchmarking Spray as well :)
http://spray.io/blog/2013-05-24-benchmarking-spray/
The file is not actually “large” enough – consider transactions that take a really long time.
And during that time, resources may be occupied, causing contention in other places of the system, etc..
Good start, but you should always benchmark from a more powerful machine than the machine you’re benchmarking!
>The file is not actually “large” enough, consider transactions that take a really long time.
+++
Exactly! Longer processing time dramatically increases the benefit of non-blocking: https://www.tandemseven.com/blog/performance-java-vs-node/