Tomcat
- Servlet spec, thread-per-request model
- Traditional synchronous-style programming with blocking operations – easier to program
Vert.x
- Not based on the Servlet spec – based on Netty – more flexible threading options, including event loop(s) and thread pool(s)
- Performance comes from the async (event-loop-based) implementation, with decent support for falling back to blocking operations when needed
Windows 7 64-bit | 24 GB RAM | Intel Xeon X5550 @ 2.67 GHz
Note: I had JMeter running on the same machine as the test server; I am only interested in relative performance, not an absolute throughput benchmark.
Objective and Setup
Determine the relative performance of the two frameworks in a scenario where handling a request involves an operation which blocks for 100ms. There is no other significant load.
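To make the scenario concrete, here is a hypothetical stand-in for the 100 ms blocking operation used in the test (the class and method names are illustrative, not from the original test code; a JDBC query or remote API call would look the same to the caller):

```java
public class BlockingOp {
    // Simulated blocking operation: holds the calling thread for ~100 ms,
    // just as a slow database or downstream service call would.
    static String doBlocking() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "done";
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        String result = doBlocking();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(result + " after ~" + elapsedMs + " ms");
    }
}
```

In a thread-per-request server, every in-flight request pins a thread for the full 100 ms; in an event-loop server, the call must be moved off the loop.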
The event loop's speed advantage over thread-per-request models has been demonstrated by NodeJS for a while now. I am more interested in investigating how Vert.x holds up if it is introduced to an existing, real-world code base, which will invariably have some blocking operations.
Vert.x test runs
This uses a thread pool to offload all blocking operations from the event loop
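A stdlib sketch of what this offloading looks like mechanically (Vert.x provides this via worker verticles / `executeBlocking`; the executors below are illustrative stand-ins, not Vert.x API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A single-threaded "event loop" never blocks: blocking work runs on a
// worker pool, and the result is handed back to the loop thread.
public class Offload {
    static final ExecutorService eventLoop = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "event-loop"); t.setDaemon(true); return t;
    });
    static final ExecutorService workers = Executors.newFixedThreadPool(100, r -> {
        Thread t = new Thread(r, "worker"); t.setDaemon(true); return t;
    });

    static String blockingOp() {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "data";
    }

    static CompletableFuture<String> handleRequest() {
        return CompletableFuture
                .supplyAsync(Offload::blockingOp, workers)               // 100 ms block, off the loop
                .thenApplyAsync(body -> "response: " + body, eventLoop); // resume on the loop
    }

    public static void main(String[] args) {
        System.out.println(handleRequest().join()); // prints "response: data"
    }
}
```

The event loop only ever schedules and resumes; the 100 ms wait is absorbed by the worker pool.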
1a. Reducing the size of the worker thread pool
The machine only has 16 virtual cores, so I would have thought 100 threads would suffice. It seems that if the worker pool is not big enough to take requests off the event loop quickly, throughput drops significantly.
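One way to reason about the pool size (my arithmetic, not from the original measurements): since the blocking operation holds a worker thread for its full duration, Little's law gives the pool a hard throughput ceiling of N threads / S seconds.

```java
// Throughput ceiling of a worker pool: N threads, each request holding a
// thread for S seconds, can complete at most N / S requests per second.
public class PoolBound {
    static double maxThroughput(int poolSize, double blockSeconds) {
        return poolSize / blockSeconds;
    }

    public static void main(String[] args) {
        // 100 workers with a 100 ms blocking op: at most 1000 req/s.
        // Arrivals above that rate queue up behind the pool, which is
        // why an undersized pool slows the whole system down.
        System.out.println(maxThroughput(100, 0.1)); // prints 1000.0
    }
}
```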
1c. Under load – double the number of users
The degradation is graceful; I did not notice anything out of the ordinary. No significant change in CPU/memory usage. Note that performance is bound by the blocking operation.
The verticles communicate via the Vert.x EventBus, an alternative design pattern available within Vert.x. I wanted to see the overhead of communicating via the EventBus; it seems minimal.
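The EventBus is essentially asynchronous request/reply over named addresses. A stdlib sketch of that shape (the `MiniBus` class and its queue are illustrative stand-ins, not the Vert.x API):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Request/reply over an "address": a consumer services messages from a
// queue and replies asynchronously; sender and receiver never call each
// other directly, which is what decouples the verticles.
public class MiniBus {
    static class Message {
        final String body;
        final CompletableFuture<String> reply = new CompletableFuture<>();
        Message(String body) { this.body = body; }
    }

    static final BlockingQueue<Message> address = new LinkedBlockingQueue<>();

    static void startConsumer() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    Message m = address.take();
                    m.reply.complete("echo: " + m.body); // reply to the sender
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    static CompletableFuture<String> send(String body) throws InterruptedException {
        Message m = new Message(body);
        address.put(m);
        return m.reply;
    }

    public static void main(String[] args) throws Exception {
        startConsumer();
        System.out.println(send("ping").get(1, TimeUnit.SECONDS)); // prints "echo: ping"
    }
}
```

The per-message cost is one enqueue/dequeue and a future completion, which is consistent with the overhead looking minimal in the test.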
Tomcat test runs
Tomcat's maxThreads setting is 1000 to match Vert.x. Interestingly, Tomcat only launched a maximum of 267 threads during the test.
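For reference, `maxThreads` is set on the HTTP Connector in `server.xml`; a minimal sketch (other attributes omitted):

```xml
<!-- server.xml: cap the request-processing pool at 1000 threads.
     Tomcat grows the pool on demand up to this cap, which is why
     only ~267 threads were actually launched under this load. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="1000"
           connectionTimeout="20000" />
```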
The system was generally stable, with resource usage comparable to Vert.x. Notice the high number of context switches compared to Vert.x.
2. Under load – double users
No significant change in CPU/memory usage, but response time degrades significantly.
Writing async/reactive code isn't free: it involves more cognitive overhead, and fewer people are comfortable with it. Adopting a new paradigm is also likely to introduce some bugs initially.
So the question is: is the performance jump in Vert.x worth the trouble of the async programming model?
My answer, based on the above, is yes.
The test compared apples to apples and arguably put Vert.x at a disadvantage. Still, Vert.x comes out comfortably ahead. It also involves less ceremony (no Servlet spec) and feels very light to work with.
The downside: you have to be aware of the threading choices you make in Vert.x – it's a sharper knife. Tomcat is more production-proven, and synchronous programming feels more natural.