In the News:
The Trading Mesh
The Trading Mesh spoke with Mark Bisker, Head of EPAM’s Capital Markets Competency Center, to find out more about the ICE iMpact Market Data Handler, a low-latency feed handler for ICE market data built on EPAM’s proprietary technology.
Regular followers of The Trading Mesh will be aware that we often discuss FPGAs (Field Programmable Gate Arrays) and their use in the high-performance trading logistics chain, particularly in the areas of market data distribution and, increasingly, order handling. In fact, a whole section of The Trading Mesh website is dedicated to the use of FPGA technology in trading.
However, it’s interesting to note that there are some technology vendors who are taking a different route to achieving low latency in both market data feed handling and order/trade messaging.
One such firm doing some very interesting things in this space is EPAM Systems, who last month announced the release of their ICE iMpact Market Data Handler, a low latency feed handler for ICE market data based on EPAM’s B2BITS FIX Antenna C++ technology.
Despite not using FPGA technology, EPAM claims an impressive mean latency of 400 nanoseconds for the feed handler (measured from network socket read to user callback). And for order/trade messaging, the product offers a tick-to-trade (wire-to-wire) latency of just 6 microseconds.
We spoke with Mark Bisker, Head of Capital Markets Competency Center at EPAM Systems, to find out more about the technology and who it is aimed at.
The Trading Mesh: I understand that you’re not aiming this solution at the ultra-low latency HFT firms, but at the much wider community of market participants who need to be fast without necessarily needing to be the fastest. Is that correct?
Mark Bisker: Yes, this is not specifically aimed at HFT firms who are doing pure “trading-ahead” arbitrage. It is more for firms who are running algorithms where speed is important but not the only factor.
This could be for strategies like statistical arbitrage for example. Or market making, particularly in options, where many of our clients are continuously making markets across thousands and thousands of instruments.
TTM: Looking specifically at firms running cross-market stat-arb strategies, are you able to offer a low-latency feed of market data and an order/trading gateway for multiple markets through a single API for example?
MB: Yes indeed. If a firm is trading equities on BATS and futures on ICE and CME, for example, or running arbitrage strategies for energy trading across multiple venues, all of that is accessible through a single interface. So if you’re a quant hedge fund running value-based strategies covering a range of markets, and you have to get in and out of the market fast but you don’t have to be the absolute fastest (you’re not in the one- or two-microsecond game), then this solution would work for you.
TTM: We’ve talked about trading firms, market makers and stat-arb players, but I understand your software is used by a number of exchanges too. How does that work?
MB: Exchanges need to be optimised for both latency and throughput when receiving messages delivered to the matching engine and responding back. When managing their connections with what might be hundreds of members, they need a model for consuming the data from those members in the right sequence and with high performance.
Let’s say the exchange has 200-300 TCP connections. It needs to be able to allocate those connections fairly through to the matching engine. Our library lets applications launch multiple threads and take advantage of multi-core architectures: particular threads are bound to particular cores, and CPU-cache-aware, user-driven task scheduling becomes the key factor.
TTM: And there are exchanges using this live?
MB: Yes, MoEx (Moscow Exchange) is the biggest user of this product and IEX is also using it. In fact, there are a number of exchanges and ATSes around the world that are using this as a critical component of their connectivity layer for feeding their matching engines efficiently. And it’s important both ways, where the exchange is consuming data but also publishing data. It depends on the strategy of the matching engine.
The key elements here are the high-performance computing, the optimisation, and the way we capitalise on modern Intel multi-core architecture, together with our efficient use of cache.
TTM: Thank you Mark.
Original publication is here.