From Stock Markets To Ledgers, Part I
Why fairness, not speed, is the ultimate validation of stock markets
From 9:30 am to 4:00 pm, Eastern Time, all bets are off at 18 Broad Street.
Of course, the people who move most of the trading volume aren’t around. They’re in their own offices, far away from the action. They might not even be in New York City. On their behalf, a machine no bigger than a wardrobe starts humming along, systematically opening and closing positions, fast.
Trading is algorithmic trading.
We’re still in love with the idea that trading is done by men shouting at each other. The Dark Knight Rises, released in 2012, features a packed Gotham Stock Exchange trading floor where Bane leads a hit to execute a trading order that bankrupts Bruce Wayne.
Now, you’re sophisticated enough to know that you don’t need to hit the stock market to bankrupt anybody. You just need to reuse a feature flag, as happened to Knight Capital.
What you may not know is that since the passing of Regulation NMS in 2005, the trading industry has revamped itself from the ground up. Pushed by that regulation and by massive improvements in technology, stock markets have been forced to trade their traditional business model of taking a cut on every execution for something much more subtle, and much more difficult to achieve operationally.
And that is fairness.

Over the next few weeks, The Payments Engineer Playbook dives deep into the technology that keeps stock markets moving. We’re going to look at it from the perspective of those who’ve built it, and of the veterans who made it possible in the first place and have witnessed its rapid evolution.
Not only that. We’re going to translate most of the learnings of this somewhat mysterious industry into practical takeaways for those who are building ledgers and payment systems.
In today’s article, I’ll be answering the question that has probably just popped into your mind: what has stock market technology got to do with the wider fintech industry?
The short answer is: much more than you think.
In this article, we’ll see:
Why speed isn’t the most important thing for stock markets
Why horizontal scaling didn’t work for stock markets
How markets determine time
The link between ledgers and markets
Enough intro, let’s dive in.
A Newsletter For The Engineers That Keep Money Moving
Designing payment systems for interviews is easy. Designing them for millions of transactions is not.
After nearly a decade building and maintaining large-scale money software, I’ve seen what works (and what doesn’t) in software that moves money around. In The Payments Engineer Playbook, I share one in-depth article every Wednesday with breakdowns of how money actually moves.
If you’re an engineer or founder who needs to understand money software in depth, join close to 2,000 subscribers from companies like Shopify, Modern Treasury, Coinbase or Flywire to learn how real payment engineers plan, scale, and build money software.
Fast Stock Markets Don’t Matter
OK. Here we go. Focus. Speed. I am speed. One winner, 42 losers. I eat losers for breakfast.
— Lightning McQueen, Cars
The most important quality of stock market systems isn’t speed.
Of course, stock markets must be fast enough. But these systems aren’t built to be the fastest systems around. That’s because they’re not in a race with anybody.
Stock markets aren’t players. They’re the playing field.
There are two ways to interact with stock markets: place orders and fetch market data.
Traditionally, stock markets were in the business of intermediating buy and sell orders placed in their ecosystem. The fetching data part mostly took care of itself: loud traders shouting at each other were broadcasting all the relevant information to a wide enough audience.
Now, the tables have completely turned: stock markets make way less money on intermediation. They’re now in the business of broadcasting data.
It’s not that players don’t trade in stock markets. Rather, it is that brokers try to avoid placing orders in stock markets as much as possible, doing their best instead to cross buy and sell orders from their own clients.
This is called “internalizing”.
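To make the idea concrete, here’s a minimal sketch of what internalizing might look like. Everything in it is illustrative (the InternalizingBroker class and its fields are made up, not any real broker’s API): the broker first tries to cross an incoming client order against opposite orders already resting with it, and only whatever is left over would have to be routed somewhere else.

```python
from dataclasses import dataclass

@dataclass
class Order:
    client_id: str
    side: str       # "buy" or "sell"
    quantity: int
    limit: float    # limit price

class InternalizingBroker:
    """Hypothetical sketch: cross client orders against each other, rest the remainder."""

    def __init__(self):
        self.book: list[Order] = []   # unmatched client orders resting with the broker

    def handle(self, incoming: Order) -> None:
        remaining = incoming.quantity
        for resting in list(self.book):
            if resting.side == incoming.side or remaining == 0:
                continue
            # A buy crosses a sell when the buy's limit is at or above the sell's limit.
            buy, sell = (incoming, resting) if incoming.side == "buy" else (resting, incoming)
            if buy.limit < sell.limit:
                continue
            fill = min(remaining, resting.quantity)
            print(f"internal cross: {fill} shares at {sell.limit}")
            resting.quantity -= fill
            remaining -= fill
            if resting.quantity == 0:
                self.book.remove(resting)
        if remaining > 0:
            # Whatever couldn't be crossed rests here; a real broker would instead
            # route it to an exchange or to a wholesaler (enter payment for order flow).
            incoming.quantity = remaining
            self.book.append(incoming)

broker = InternalizingBroker()
broker.handle(Order("alice", "sell", 100, limit=10.00))
broker.handle(Order("bob", "buy", 60, limit=10.05))   # crosses 60 shares internally
```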
Tiny brokers can’t possibly internalize orders. They simply don’t manage that much volume, and if they execute one leg of a trade against their own inventory, they run the very real risk of the market going the wrong way.
That’s why they often go not to the stock markets but to larger brokers, called “wholesalers”, who paradoxically give better prices than the stock markets and at the same time pay rebates to the smaller brokers: the infamous Payment For Order Flow.
There are two ways to understand this paradox, what Matt Levine calls the Good and the Bad Models.
The Good Model goes like this [...] Look, we hate trading with all these hedge funds on the public stock exchange. So much adverse selection. If we could just trade with your delightful retail customers, who trade small lots and never know anything we don’t know, we would never lose money. So we could afford to charge them a much lower spread.
The Bad Model goes like this [...] only naive rubes pay those posted prices. Look. Instead of sending your customers’ orders to the exchange [...], send their orders to us. We’ll give them a better price [...] It turns around and buys stock at the real price, [making] instant risk-free profit.
— Matt Levine, What Does Payment for Order Flow Buy?
Adverse selection, that is, the risk of being on the other side of a trade made by someone who knows something you don’t, has created a clean divide between wholesalers, who trade mostly with small brokers or even directly with punks like you and me, and market makers, who trade mostly with institutional investors.
Wholesalers can provide better prices, and even pay rebates, because those placing buy and sell orders on their platform are naive, unsophisticated, or plain gamblers: the so-called retail investors. Trading volume at that level is mostly random, and easy to match internally. Price improvements and rebates abound to increase trading volume, because trading volume correlates with profit.
Market makers can’t do that, because they get their volume from massive, professionalized, consolidated players, the so-called institutional investors: pension funds, large trading firms, Wall Street behemoths. Those won’t place small orders, and won’t place them unless they know what they’re up to. And market makers can’t internalize their orders so easily, must assume their clients know something they don’t, and therefore can’t give them better prices.
What they can give them is fairness.
Fairness, that is, the guarantee that data is disseminated to all players at the same time, used to be taken for granted. Those loud traders, shouting at each other, were broadcasting their bids and offers to a wide enough audience.
But now that markets are electronic, market data is propagated through the network, and one-to-many communication isn’t that easy when everyone is competing to get that data before everyone else.
Fair stock markets are desirable, because they provide an ecosystem that’s attractive to institutional investors. They don’t care so much about the trading fees as they care about having a level playing field.
Speed is irrelevant; fairness is not.
And that’s too bad, because speed is more or less straightforward: big upfront costs, amortized over big volumes thanks to economies of scale.
Fairness is a totally different beast.
Fair, Deterministic Matching Engines
In the beginning, scale was achieved at the gateways.
Organizing a bunch of machines to accept requests in parallel is the single most common approach to adapting to increased traffic. Every website with a modicum of traffic uses it. It’s called horizontal scaling, and it works everywhere.
But it doesn’t work in stock markets.
Not that they didn’t try it. In the early 2000s, horizontal scaling was part of the playbook, so to speak. Most stock markets put a bunch of gateways in parallel, let connections come in, and disseminated the data in a round-robin fashion. Fair, and simple.
Turns out, traders caught on to this, and began to game the system.
In retrospect, it was too easy not to. Moments before the market opened, savvy traders bombarded the gateways with connection requests, making sure they were connected to every possible gateway. They then monitored latency to figure out which gateway was the most reliable and responsive, and routed their orders there.
And then, cheekily, they stuffed the other gateways with bogus orders, priced far away from the best bid and ask, so that they would clog those gateways without any chance of getting executed.
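Here’s a toy sketch of the setup that was being gamed. The gateway names and the latency probe are made up for illustration: connections are handed out round-robin, but a trader who opens enough of them lands on every gateway, measures which one answers fastest, and keeps the slow ones busy.

```python
import itertools
import random

GATEWAYS = ["gw-1", "gw-2", "gw-3", "gw-4"]

# The exchange hands out gateway assignments round-robin: simple and, on paper, fair.
assignment = itertools.cycle(GATEWAYS)

def connect() -> str:
    """Open a connection and get whatever gateway the round-robin hands out."""
    return next(assignment)

def probe_latency(gateway: str) -> float:
    """Stand-in for a real round-trip measurement against each gateway."""
    return random.uniform(0.1, 2.0)

# A savvy trading firm opens enough connections to land on every gateway...
my_gateways = {connect() for _ in range(len(GATEWAYS) * 4)}

# ...measures which one answers fastest, and keeps it for the real orders.
latencies = {gw: probe_latency(gw) for gw in my_gateways}
fastest = min(latencies, key=latencies.get)

print(f"route real orders to {fastest}")
for gw in my_gateways - {fastest}:
    print(f"stuff {gw} with orders priced far from the best bid and ask")
```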
Eventually, stock markets abandoned horizontal scaling for gateways, but the growing traffic made it impossible to serve all traders with one of everything. So, they ended up parallelizing the matching engines, the component responsible for executing trades by pairing buy and sell orders.
But here’s the thing: matching engines cannot be naively parallelized.
At the matching engine layer, fairness is equivalent to making sure that events are processed in order. This is extremely difficult to do when you parallelize machines, because you can’t rely on each machine’s internal clock any longer.
See, when someone in a movie says “let’s synchronize our clocks”, it’s the physicist’s equivalent of someone saying “enhance that image”.
You can’t compare timestamps from two different clocks with absolute precision. And Einstein’s paper on why you can’t is wonderful, and very accessible.
This is why Google uses atomic clocks and GPS to keep its Spanner database consistent within 7 milliseconds (by contrast, your iPhone gives you a synchronization of around 100 to 250 milliseconds).
And if you can’t compare two timestamps from two different clocks, then there is a limit on how close together two events can be before you can no longer confidently say which one came first.
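One way to picture that limit is to treat every timestamp as an interval of uncertainty rather than a point, which is roughly the trick behind Spanner’s TrueTime. Two events can only be ordered with confidence if their intervals don’t overlap. A minimal sketch, with illustrative numbers:

```python
from dataclasses import dataclass

@dataclass
class UncertainTimestamp:
    """A clock reading plus that clock's error bound, both in milliseconds."""
    reading: float
    error: float   # a handful of ms with atomic clocks + GPS, far more with plain NTP

    @property
    def earliest(self) -> float:
        return self.reading - self.error

    @property
    def latest(self) -> float:
        return self.reading + self.error

def definitely_before(a: UncertainTimestamp, b: UncertainTimestamp) -> bool:
    """True only if a happened before b no matter how wrong either clock is."""
    return a.latest < b.earliest

# Two orders stamped 5 ms apart on machines whose clocks agree to within ~7 ms:
order_a = UncertainTimestamp(reading=1_000.0, error=7.0)
order_b = UncertainTimestamp(reading=1_005.0, error=7.0)
print(definitely_before(order_a, order_b))   # False: the intervals overlap, order is unknowable
```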
In the end, matching engines have been parallelized using two techniques.
One is using atomic clocks, just like Google’s Spanner. This is helpful because matching engines turn into an event sourcing mechanism of sorts, where time is derived not from the internal clock but from millisecond-separated pulses that get added to the event log. This keeps time consistent across machines and enables things like time-in-force orders, which expire at a specific time if they haven’t been executed.
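Here’s a minimal sketch of that idea, with all names and numbers made up: pulses are just another kind of event appended to the log, the log’s notion of “now” only advances when a pulse arrives, and a time-in-force order expires once the log’s time passes its deadline.

```python
from dataclasses import dataclass, field

@dataclass
class LoggedEvent:
    seq: int                 # position in the single, totally-ordered log
    kind: str                # "pulse" or "order"
    payload: dict = field(default_factory=dict)

class SequencedLog:
    """Time is derived from pulse events in the log, not from the machine's own clock."""

    def __init__(self):
        self.events: list[LoggedEvent] = []
        self.current_ms = 0   # advances only when a pulse event is appended

    def append_pulse(self, ms: int) -> None:
        self.current_ms = ms
        self.events.append(LoggedEvent(len(self.events), "pulse", {"ms": ms}))

    def append_order(self, order_id: str, expires_ms: int) -> None:
        self.events.append(LoggedEvent(len(self.events), "order",
                                       {"id": order_id, "expires_ms": expires_ms}))

    def expired_orders(self) -> list[str]:
        # An order expires once the log's notion of time has passed its deadline.
        return [e.payload["id"] for e in self.events
                if e.kind == "order" and e.payload["expires_ms"] <= self.current_ms]

log = SequencedLog()
log.append_pulse(1)
log.append_order("order-42", expires_ms=3)   # hypothetical time-in-force order
log.append_pulse(2)
log.append_pulse(3)
print(log.expired_orders())   # ['order-42']
```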
The other is splitting matching engines by security and customer type: one matching engine for a bunch of equities, another for bonds, another for a specific set of commodities, and so on.
This keeps matching engines not only fair, but also deterministic: the order of correlated events is consistent, and the machines don’t get overwhelmed by the traffic.
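A sketch of that split, with made-up symbols and partitions: each matching engine owns a fixed set of securities and consumes its own strictly-ordered queue, so ordering within a partition stays deterministic while the load is spread across engines.

```python
from collections import deque

# Each matching engine owns a fixed, static set of securities (illustrative groupings).
PARTITIONS = {
    "equities-1": {"AAPL", "MSFT"},
    "equities-2": {"TSLA", "NVDA"},
    "bonds-1":    {"UST-10Y", "UST-2Y"},
}

# One strictly-ordered queue per engine: within a partition, events keep their
# arrival order, which is what makes matching deterministic.
queues = {name: deque() for name in PARTITIONS}

def route(order: dict) -> str:
    """Send the order to the one engine that owns its symbol."""
    for name, symbols in PARTITIONS.items():
        if order["symbol"] in symbols:
            queues[name].append(order)
            return name
    raise ValueError(f"no matching engine owns {order['symbol']}")

route({"symbol": "AAPL", "side": "buy", "qty": 100})
route({"symbol": "UST-10Y", "side": "sell", "qty": 5})
print({name: len(q) for name, q in queues.items()})
```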
Fair, Deterministic Ledgers
Many of the lessons learned by stock markets can be applied to ledgers.
Both markets and ledgers maintain a single source of truth under heavy traffic, and must produce consistent results while at the same time keeping data highly available.
Ledgers, in other words, are to accounting what stock markets are to trading.
Both ledgers and stock markets benefit from speed, but are ultimately defined by consistency and accuracy.
That’s why many of the lessons from stock markets will ultimately be applied to ledgers:
From centralized trust to algorithmic fairness.
From “fast transactions” to provably correct, ordered transactions.
From “process it quickly” to process it fairly and reproducibly.
In other words: ledgers that are “fast” may look cool now, but the ultimate validation of ledgers is fairness and integrity.
The future of ledgers is not speed; it’s fairness, determinism, auditability.
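To make that concrete on the ledger side, here’s a hedged sketch of what fair, deterministic, and auditable might translate to: a single, totally-ordered, append-only journal in which every entry is hash-chained to the previous one, so a replay always reproduces the same balances and any tampering breaks the chain. The schema and field names are illustrative.

```python
import hashlib
import json

class AuditableLedger:
    """Append-only journal: total order plus a hash chain, so replays are reproducible."""

    def __init__(self):
        self.entries: list[dict] = []
        self.balances: dict[str, int] = {}

    def append(self, debit: str, credit: str, amount_cents: int) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "seq": len(self.entries),          # total order, assigned by the ledger itself
            "debit": debit,
            "credit": credit,
            "amount_cents": amount_cents,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        # Apply the entry deterministically: same journal in, same balances out, always.
        self.balances[debit] = self.balances.get(debit, 0) - amount_cents
        self.balances[credit] = self.balances.get(credit, 0) + amount_cents
        return entry

ledger = AuditableLedger()
ledger.append("alice", "bob", 1_500)
ledger.append("bob", "carol", 400)
print(ledger.balances)   # {'alice': -1500, 'bob': 1100, 'carol': 400}
```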
Over the next few weeks, we’re going to dive deep into stock market technology from the point of view of building superior ledger products. We’ll begin by discussing the problems with traditional fault tolerance strategies, and how stock markets eventually tackled them.
But that will be next week. I’ll see you then.






