A couple of quants had built a random forest regression model that could take inputs like time of day, exchange, order volume, etc. and spit out an interval of what latency had historically been under those conditions.
If the latency moved outside that range, an alert would fire and I would coordinate a response with a variety of teams, e.g. trading, networking, Linux, etc.
If we ruled out changes on our side as the culprit, we would reach out to the exchange and talk to our sales rep there, who might also pull in their networking folks, etc.
Some exchanges, EUREX comes to mind, were phenomenal at helping us identify issues. e.g. they once traced a latency increase to a replacement cable that was a few feet longer than the old one.
One day, it's IEX, of Flash Boys fame, that triggers an alert. Nothing changed on our side, so we call them. We are going back and forth with their networking engineer when the sales rep says, in almost hushed tones:
"Look, I've worked at other exchange so I get where you are coming from in asking these questions. Problem is, b/c of our founding ethos, we are actually not allowed to track our own internal latency so we really can't help you identify the root cause. I REALLY wish it was different."
I love this story b/c HN, as a technology-focused site, often assumes all problems have technical solutions, but sometimes the solution is actually about people or process.
Also, incentives and "philosophy of the founders" matter a lot too.
First, it is of course possible to apply horizontal scaling through sharding. My order on Tesla doesn't affect your order on Apple, so it's possible to run each product on its own matching engine, its own set of gateways, etc. Most exchanges don't go this far: they might have one cluster for stocks starting with A-E, and so on. So they don't even exhaust the benefits available from horizontal scaling, partly because this would be expensive.
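To make the sharding idea concrete, here's a rough Python sketch of symbol-to-shard routing (my own illustration, not any particular venue's design; the shard count is an assumed value):

    import hashlib

    NUM_SHARDS = 4  # assumed value; a real venue would size this from capacity planning

    def shard_for(symbol: str) -> int:
        # Deterministically map a symbol to a matching-engine shard.
        # A hash split is shown here; an alphabetical split (A-E, F-J, ...) is another common choice.
        return hashlib.md5(symbol.encode()).digest()[0] % NUM_SHARDS

    for symbol, side, qty in [("TSLA", "BUY", 10), ("AAPL", "SELL", 5), ("MSFT", "BUY", 7)]:
        print(f"{symbol} {side} {qty} -> shard {shard_for(symbol)}")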
On the other hand, it's not just the sequencer that has to process all these events in strict order; if it were, this would just be a matter of handing back a single increasing sequence number for every request. The matching engine, which sits downstream of the sequencer, also has to consume all the events and apply a much more complicated algorithm: the matching algorithm, described in the article as "a pure function of the log".
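As a toy sketch of that split (entirely my own illustration, not the article's implementation): the sequencer only stamps every event with a strictly increasing number, while the matching engine consumes the resulting log in order, so its book is a pure function of that log.

    from dataclasses import dataclass
    from itertools import count

    @dataclass
    class Order:
        order_id: int
        side: str    # "BUY" or "SELL"
        price: int
        qty: int

    class Sequencer:
        # The "easy" part: hand out one strictly increasing sequence number per event
        # and append everything to a single totally ordered log.
        def __init__(self):
            self._next = count(1)
            self.log = []

        def submit(self, order: Order) -> int:
            seq = next(self._next)
            self.log.append((seq, order))
            return seq

    class MatchingEngine:
        # The "hard" part: consume the whole log in order and run price-time matching,
        # so the book state is determined entirely by the log.
        def __init__(self):
            self.bids, self.asks = [], []

        def replay(self, log):
            for _, order in log:
                self._match(order)

        def _match(self, o: Order):
            same, opposite = (self.bids, self.asks) if o.side == "BUY" else (self.asks, self.bids)
            crosses = (lambda p: p <= o.price) if o.side == "BUY" else (lambda p: p >= o.price)
            while o.qty and opposite and crosses(opposite[0].price):
                resting = opposite[0]
                fill = min(o.qty, resting.qty)
                print(f"trade {fill} @ {resting.price}")
                o.qty -= fill
                resting.qty -= fill
                if resting.qty == 0:
                    opposite.pop(0)
            if o.qty:
                same.append(o)
                same.sort(key=(lambda x: -x.price) if o.side == "BUY" else (lambda x: x.price))

    sequencer = Sequencer()
    for o in [Order(1, "SELL", 101, 10), Order(2, "BUY", 101, 4), Order(3, "BUY", 102, 10)]:
        sequencer.submit(o)

    # Replaying the same log always reproduces the same trades and the same book.
    MatchingEngine().replay(sequencer.log)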
Components outside of that can generally be scaled more easily: for example, a gateway cares only about activity on the orders it originally received.
The article is largely correct that separating the sequencer from the matching engine allows you to recover if the latter crashes. But this may only be a theoretical benefit. Replaying and reprocessing a day's worth of messages takes a substantial fraction of the day, because the system is already operating close to its capacity. And after a crash, you still need to figure out which customers think they got their orders executed, and allow them to cancel outstanding orders.
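A back-of-the-envelope way to see why (the session length and utilization figures below are assumptions for illustration, not numbers from the article):

    TRADING_DAY_HOURS = 6.5   # assumed: a US-equities-style session
    AVG_UTILIZATION = 0.6     # assumed: the live system averages 60% of the engine's max throughput

    # Even replaying at 100% of capacity, catching up on a day's log takes
    # utilization * session length, ignoring any new traffic arriving meanwhile.
    replay_hours = TRADING_DAY_HOURS * AVG_UTILIZATION
    print(f"~{replay_hours:.1f} hours to replay a {TRADING_DAY_HOURS}-hour session")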
A notable edge case here is that if EVERYTHING (e.g. market data AND orders) goes through the sequencer, then you can, essentially, end up with a denial of service on key parts of the trading flow.
e.g. one of the first exchanges to switch to a sequencer model was famous for having big market data bursts and then huge order entry delays b/c each order got stuck in the sequencer queue. In other words, the queue would be 99.99% market data with orders sprinkled in randomly.
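A toy simulation of that head-of-line blocking (burst size and per-event cost are invented numbers, purely to show the shape of the problem):

    from collections import deque

    PROCESS_US = 1        # assumed: 1 microsecond to sequence one event
    BURST = 100_000       # assumed: market-data events in one burst

    queue = deque(("md", i) for i in range(BURST))   # the burst is already queued
    queue.append(("order", 0))                       # our order arrives right behind it

    delay_us = 0
    while queue:
        kind, _ = queue.popleft()
        delay_us += PROCESS_US
        if kind == "order":
            break

    print(f"shared queue:    order sequenced after ~{delay_us / 1000:.1f} ms")
    print(f"dedicated queue: order sequenced after ~{PROCESS_US / 1000:.3f} ms")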
But given the stringent latency requirements, Kafka's head-of-line (HoL) blocking under concurrent events can be an issue [1].
[1] What If We Could Rebuild Kafka from Scratch?
What happens outside the exchange really doesn’t matter. The ordering will not happen until it hits the exchange.
And that is why algorithmic traders want their algos in a closet as close to the exchange as possible, both physically and in terms of network hops.
GPS can provide fairly accurate timestamps. There are a few other GNSS constellations (GLONASS, Galileo, BeiDou) as well for extra reliability.
How is this avoiding data loss if the lead sequencer goes down after acking but without the replica receiving the write?
Of the many things trading platforms are attempting to do, the two most relevant here are the overall latency and, more importantly, where serialization occurs in the system.
Latency itself is only relevant as it applies to the “uncertainty” period where capital is tied up before the result of the instruction is acknowledged. Firms can only carry so much capital at risk, and so these moments end up being little dead periods. As long as the latency is reasonably deterministic, though, it's mostly inconsequential whether a platform takes 25us or 25ms to return an order acknowledgement (this is slightly more relevant in environments where there are potentially multiple venues to trade a product on, but in terms of global financial systems those environments are exceptions and not the norm). Latency is really only important when factored alongside some metric indicating a failure of business logic (failures to execute on aggressive orders or failures to cancel in time are two typical metrics).
The more important of the two to many participants is where serialization occurs on the trading venue (what the initial portion of this blog is about: determining who was “first”). Usually this is to the tune of 1-2ns (in some cases lower). There are diminishing returns, however, to making this absolute in physical terms. A small handful of venues have attempted to address serialization at the very edge of their systems, but the net result is just a change in how firms that are extremely sensitive to being first apply technical expertise to the problem.
Most “good” venues permit an amount of slop in their systems (usually to the tune of 5-10% of the overall latency), which reduces the benefit of playing the sorts of ridiculous games needed to be “first”. There ends up being a hard limit to the economic benefit of throwing man-hours and infrastructure at the problem.
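To illustrate the effect (the parameters below are invented, not any venue's actual numbers): once a couple of microseconds of non-deterministic slop sits between the door and the sequencer, a few-nanosecond head start wins the race only marginally more than half the time.

    import random

    EDGE_NS = 2          # assumed: firm A reaches the venue 2 ns before firm B
    JITTER_NS = 2_000    # assumed: ~2 us of uniform internal jitter on each path
    TRIALS = 100_000

    # A is sequenced first if its jitter doesn't eat up more than its head start plus B's jitter.
    wins = sum(
        random.uniform(0, JITTER_NS) < EDGE_NS + random.uniform(0, JITTER_NS)
        for _ in range(TRIALS)
    )
    print(f"A is sequenced first in ~{100 * wins / TRIALS:.1f}% of trials")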