NATAS 2013 – Re-Use, Re-Use and More Re-Use

Theo Hildyard

I was in New York last week, on a perfect spring day, for the Waters North America Trading Architecture Summit. Unfortunately, I was locked in a cavernous hotel for the entire time, but Waters put on a great show as always. The summit was well attended, with standing room only for some panel discussions.

Of the two streams during the Waters Summit – Trading Technology and Enterprise Infrastructure – I spent most of my time in the former, working for a CEP platform vendor as I do. While the stream was focused on electronic trading, algorithms, latency and related areas, the conversation broadened to innovation, OTC clearing, software quality and a bunch of other issues. I was struck, however, by how often three themes cropped up: 1) TCO, 2) buy vs. build and its impact on software quality, and 3) operational risk from erroneous algorithms.

Without attributing comments or themes to people or firms, here are a few sound-bites that caught my attention.

One speaker named TCO reduction as his number one priority. And his number two priority, and his number three priority – a telling sign that firms continue to wrestle with the competing objectives of offering class-leading, differentiated services at a reduced cost. Looking at individual components of cost, and how they change over time, was cited as a key part of managing this dilemma. For example, a firm may find that raw processing power is the largest single contributor to cost today, but that the storage demands of big data might soon claim that top spot. Hence, planning for technologies like database compression should figure on people's agendas sooner rather than later. Equally, re-use of software across multiple business areas or asset classes could yield huge savings. Firms might need to combat the "not invented here" syndrome, but if fixed income trading is to approach the levels of efficiency seen in cash equities, for example, then surely it makes sense to re-use as much of the cash equity infrastructure as possible: asset-class-agnostic, customizable smart order routers, order management services, real-time risk services and the like should all be looked at (a sketch of the idea follows below).
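To make that concrete, here is a minimal sketch of what an asset-class-agnostic, customizable router might look like. It is illustrative only; every name in it (Order, SmartOrderRouter, the venue rankings) is invented for the example rather than drawn from any product:

from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    asset_class: str  # "equity", "fixed_income", "fx", ...
    side: str         # "buy" or "sell"
    qty: int

class SmartOrderRouter:
    """One router re-used across asset classes: the routing logic never
    branches on asset class; venue ranking is injected as configuration."""
    def __init__(self, venues_for):
        self._venues_for = venues_for  # maps asset_class -> ranked venues
    def route(self, order):
        venues = self._venues_for(order.asset_class)
        return venues[0], order  # simplistic: send to the best-ranked venue

# The cash equity and fixed income desks share the router code and
# differ only in configuration:
RANKINGS = {"equity": ["VENUE_A", "VENUE_B"], "fixed_income": ["VENUE_C"]}
router = SmartOrderRouter(RANKINGS.get)
router.route(Order("XYZ 4.5% 2023", "fixed_income", "buy", 100))

The point of the sketch is that nothing in the routing logic inspects the asset class; onboarding fixed income becomes a configuration change, not a rewrite.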

The subject of software quality permeated several panels, along with the buy vs. build debate. A particularly insightful comment alluded to how the buy vs. build debate is actually moot nowadays: we have all lived in a hybrid world for some time. What is actually important is that we engineer software for re-use (there's that word again) as a means to dramatically improve software quality. In fact, according to one panelist, industries renowned for high software quality, such as defense and aerospace, place re-use at the top of their list of contributors to software quality; apparently, re-use does not even feature on the capital markets list. There is surely a lesson there. And if re-use across subtly (or even dramatically) different business use cases is required of software, that software must be flexible enough to adapt, yet cost-effective enough to implement, through a rich set of re-usable core components (see the sketch below).
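As a hedged illustration of that last point (the component and function names here are invented, not taken from any panelist), the same small core components can be composed into validators for subtly different business use cases:

# Re-usable core components: small, single-purpose checks.
def max_qty(limit):
    return lambda order: order["qty"] <= limit

def price_band(lo, hi):
    return lambda order: lo <= order["price"] <= hi

def compose(*checks):
    # Build a use-case-specific validator out of shared components.
    return lambda order: all(check(order) for check in checks)

# Two different desks, one shared component set:
equity_checks = compose(max_qty(10_000), price_band(0.01, 10_000))
bond_checks = compose(max_qty(1_000_000))

equity_checks({"qty": 500, "price": 42.0})  # True

Each desk customizes behaviour by composition and configuration; the components themselves are written, tested and hardened once.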

Related to improving software quality, the subject of managing the operational risk of algos and HFT was another dominant feature of the discussion. One panel described how regulators seem to want to manage this risk by regulating complexity out of existence, mandating simplicity. But the feeling in the room was that the complexity cannot be undone, and that measurement and management of operational risk should be the focus – i.e. monitoring those operational risks (aka algorithms) in as near to real-time as possible and interceding if boundaries, such as order-to-trade ratios or price- or volume-based limits, are broken. But if we introduce improved risk checks very early in the cycle, what of latency? Well, one speaker used the term "concurrent trade" checks, as opposed to pre-trade or post-trade. An interesting term – I take it to mean leaving your algo events on a low-latency critical path while a parallel, less critical path does the checking. When the checks detect a breach, they act as quickly as possible. Sure, they will have let some erroneous order flow through, but not much: only as much as can be traded in the latency difference between the critical and less critical paths, which is guaranteed to be better than the difference between the critical path and a human being hitting the kill switch. And no latency has been introduced to the critical path.
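No one described an implementation, so the sketch below is only my reading of the idea, in Python with a background thread standing in for the parallel path; ConcurrentChecker, the limit values and send_order are all hypothetical:

import queue
import threading

# Hypothetical limits; in practice these would come from risk configuration.
MAX_ORDER_TO_TRADE_RATIO = 50.0
MAX_ORDER_QTY = 10_000

class ConcurrentChecker:
    """Checks a copy of the order stream on a parallel, less critical path."""
    def __init__(self):
        self._orders = queue.Queue()  # asynchronous hand-off from the hot path
        self.killed = False
        self._sent = 0
        self._filled = 0
        threading.Thread(target=self._run, daemon=True).start()

    def observe(self, order):
        # Called from the critical path: a non-blocking enqueue, so the
        # hot path pays essentially nothing for the check.
        self._orders.put_nowait(order)

    def on_fill(self, fill):
        self._filled += 1

    def _run(self):
        while not self.killed:
            order = self._orders.get()
            self._sent += 1
            ratio = self._sent / max(self._filled, 1)
            if order["qty"] > MAX_ORDER_QTY or ratio > MAX_ORDER_TO_TRADE_RATIO:
                self.killed = True  # the "kill switch": halt the algo

def send_order(gateway, checker, order):
    if checker.killed:
        raise RuntimeError("algo halted by concurrent risk check")
    gateway.send(order)     # critical path: the order goes out immediately...
    checker.observe(order)  # ...and a copy goes to the parallel path

The ordering inside send_order is the whole point: the order goes out first and the check runs in parallel, so the only erroneous flow that escapes is whatever can be sent during the checker's lag.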

So, all in all, a good day, with the biggest single take-home point for me being how a hybrid (buy and build) approach to software development can support re-use, and hence reduce TCO and raise quality. Oh, and can someone please keep an eye on those algos.

Congratulations again to Waters for hosting a successful event.

If you would like to know more about our thoughts on buy vs. build and the merits of a hybrid approach, please see our white paper at http://www.progress.com/buybuild
