LMAX recently open-sourced The Disruptor – one of the core frameworks upon which we build our ultra-high performance financial exchange. Today, we published a white paper detailing how The Disruptor works, and highlighting the sorts of performance benefits that can be achieved by using it.
The Disruptor is essentially a library which we (and now you!) can use to do message passing within your application. If you like, it’s a queue on steroids. But it’s far more interesting than that, for a number of reasons.
Firstly, the raw performance figures. Our testing shows that the latency you can achieve with The Disruptor is 3 orders of magnitude lower than you can achieve with ArrayBlockingQueue. And with that comes throughput that’s an order of magnitude higher! Win-win. But there’s more. The Disruptor actually goes faster under higher load. We’ve had this monster passing messages with latencies as low as 50ns. That’s approaching the theoretical limit of what you can achieve with the hardware. Still think Java is slow?
Here’s a chart from the white paper, showing the relative latencies. It’s worth keeping in mind that this chart uses a log-log scale.
Secondly, the implementation approach. The Disruptor has been designed and built by stepping away from the problem, and re-evaluating it from a CS101 perspective. A lot of the principles used fly in the face of modern, mainstream concurrency ideas. For example, most deployments of The Disruptor will allow you to pass messages from multiple producers to multiple consumers without a single lock. Not locking means not going to the kernel for lock arbitration, and that means no latency spikes.
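To give a flavour of the lock-free idea, here’s a minimal sketch of the core technique for the simplest case: a single producer and a single consumer coordinating over a pre-allocated ring buffer with two monotonically increasing sequence counters, instead of a lock. This is an illustration of the principle only, not the Disruptor’s actual API — the class and method names (`SpscRing`, `publish`, `take`) are made up for this example.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a lock-free single-producer/single-consumer ring buffer.
// Two sequence counters do all the coordination: no locks, no kernel
// arbitration. Names here are illustrative, not the Disruptor's real API.
final class SpscRing {
    private final long[] buffer;  // pre-allocated slots, reused forever
    private final int mask;       // size must be a power of two
    private final AtomicLong cursor = new AtomicLong(-1);  // last published sequence
    private final AtomicLong gating = new AtomicLong(-1);  // last consumed sequence

    SpscRing(int size) {
        if (Integer.bitCount(size) != 1) {
            throw new IllegalArgumentException("size must be a power of 2");
        }
        buffer = new long[size];
        mask = size - 1;
    }

    // Producer: spin until a slot is free, write the value,
    // then publish it by advancing the cursor (a volatile-style write).
    void publish(long value) {
        long next = cursor.get() + 1;
        while (next - gating.get() > buffer.length) {
            Thread.onSpinWait();  // buffer full: wait for the consumer
        }
        buffer[(int) (next & mask)] = value;
        cursor.lazySet(next);     // makes the slot visible to the consumer
    }

    // Consumer: spin until the cursor passes our sequence, read the value,
    // then advance the gating sequence so the producer can reuse the slot.
    long take() {
        long next = gating.get() + 1;
        while (cursor.get() < next) {
            Thread.onSpinWait();  // buffer empty: wait for the producer
        }
        long value = buffer[(int) (next & mask)];
        gating.lazySet(next);
        return value;
    }
}
```

Notice there’s no `synchronized`, no `Lock`, no blocking at all — just busy-spinning on plain sequence counters, which is exactly why a stalled thread never has to take a trip through the kernel.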
Thirdly, the consistency. In our tests, ArrayBlockingQueue was giving us a mean latency of over 30,000ns, and a 99.99% tail of 4,000,000ns. The Disruptor was showing a mean latency of just 52ns, and a 99.99% tail of around 8,000ns. The consistency with which The Disruptor outperforms traditional queueing/message-passing techniques leads to less jitter and fewer latency spikes. This is vital in a financial environment. Latency spikes lead to unhappy market makers who have no confidence in the prices they’re making. That leads to wider spreads. And that, ultimately, leads to unhappy customers.
EDIT: Turns out Mike Barker has written a similar post today!
EDIT 2: Also, Trisha Gee has written a post which goes into some of the basics about what a ring buffer is. Can you tell we’re all quite excited?