Taking Stock

John Parkinson, Affiliate Partner, Waterstone Management Group

It’s been hard to miss the recent hype and media misunderstanding surrounding BlackRock’s decision to make more use of model-based investment strategies for its actively managed stock funds. Generally (and often hysterically) reported in the media as “man vs. machine,” the coverage makes it clear to me that most people have no idea what’s actually being done by BlackRock (and others) or what that means for continuing human employment in the financial investment markets.

A couple of years ago I led a project at a large hedge fund aimed at (a) investigating whether there were measurable differences in performance between investment managers (controlling for factors such as style, strategy, product definition, experience, and duration) and (b) figuring out what the best and most consistent performers did that made them successful. The underlying motive was not actually to weed out bad performers; the fund did that effectively already. It was to see if we could free up additional capacity among the top performers so that they could handle more client money without burning out. The fund owners had the sense that some of what the top managers spent their time on did not require whatever it was that made them so good. If we could understand what those activities were, they could be delegated or automated.

We used a combination of big data analytics and supervised machine learning to deconstruct what was actually going on when investment research, analysis, and decision-making were performed. As we built and trained a series of models (the first sketch after the list below shows the flavor of these models), several things became clear.

  • There were indeed some outstanding and consistent performers, but they all used different strategies to achieve their top-quartile performance levels. Given the same input data and analytic tools, they would often arrive at different decisions and strategies, but all the decisions worked out well.
  • How the data was organized and presented made a huge difference to the operation of the analytics and decision-making processes, but not to the outcomes. Adding automation for filtering and classifying the available data made it easier to use, and in fact helped less talented managers improve their performance, but did not affect the top-performing cohort in a statistically significant way (the second sketch after this list illustrates that kind of check). What it did do was relieve them of a lot of routine chores, giving them more time to think through strategies and product designs.
  • Once the managers became comfortable with the automation we deployed, their capacity to focus on results-oriented activities rather than routine tasks increased significantly. Until they became comfortable and started to trust the automation, it actually reduced their performance. We nearly missed this, but by running the models on previous data and outcomes, with no humans involved, we saw that the drop-off was driven by a lack of trust in the suggestions the automation presented, causing delays as the managers redid the work before decisions were made.
  • To get optimum performance, the support automation needed to be “tuned” to each investment manager’s preferred ways of working. Although we built a common toolchain, how it was used in practice varied widely. Trying to enforce too much standardization also reduced performance.
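
To make the modeling concrete, here is a minimal sketch of the kind of supervised model a project like this builds. Everything in it is a hypothetical stand-in: the file, the feature names, and the outcome label are illustrative, not the fund’s actual schema or pipeline.

```python
# Minimal sketch only: the data file, feature names, and outcome label are
# hypothetical stand-ins, not the fund's actual schema or pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# One row per historical investment decision, with its realized outcome.
decisions = pd.read_csv("decision_history.csv")

feature_cols = ["strategy", "holding_period_days", "signal_strength",
                "sector_exposure", "research_hours"]
X = pd.get_dummies(decisions[feature_cols], columns=["strategy"])
y = decisions["beat_benchmark"]  # 1 if the decision outperformed its benchmark

# Time-ordered split (shuffle=False) so the model never trains on the future.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Inspecting which inputs drive the model hints at what the decision
# process actually relies on.
top_drivers = sorted(zip(X.columns, model.feature_importances_),
                     key=lambda kv: -kv[1])[:5]
print(top_drivers)
```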
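
The statistical check mentioned in the second bullet can be illustrated the same way. This is a synthetic example of comparing a cohort’s returns before and after an automation rollout using Welch’s t-test; the numbers are invented to show the shape of the test, not to reproduce our results.

```python
# Synthetic illustration of the cohort significance check; all numbers invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Monthly excess returns (%) for 48 months before and after rollout.
top_before = rng.normal(1.8, 0.2, 48)   # top cohort: no underlying shift
top_after  = rng.normal(1.8, 0.2, 48)
mid_before = rng.normal(0.9, 0.2, 48)   # middle cohorts: a real, modest lift
mid_after  = rng.normal(1.1, 0.2, 48)

for name, before, after in [("top", top_before, top_after),
                            ("middle", mid_before, mid_after)]:
    _, p = stats.ttest_ind(before, after, equal_var=False)  # Welch's t-test
    print(f"{name} cohort: p = {p:.3f}")
```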

By the time we were done, we had some interesting conclusions (and the data to support them).

  • Our “fully automated” decision engine never consistently beat the best managers, or even the average ones. It did, mostly, beat the lowest-performing quartile of human decision makers.
  • The “assisted” and “augmented” toolchain, once it had become trusted and was actually being used, lifted the performance of the middle two quartiles by between 10% and 30% (depending on a variety of factors that I am not at liberty to divulge), but did not do much for their productivity. They were better at what they did, but only did about the same amount of it.
  • The same toolchain, deployed for effective use by the top quartile of investment managers, did not significantly improve their performance but nearly doubled their productivity (a sketch of this quartile-level comparison follows the list).
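
For the curious, the quartile-level comparison behind those last two bullets has roughly this shape, assuming a per-manager extract with before-and-after performance and decision counts (the file and column names here are hypothetical):

```python
# Hypothetical per-manager extract; file and column names are illustrative only.
import pandas as pd

df = pd.read_csv("manager_outcomes.csv")

# Relative lift in performance and in productivity (decisions handled).
df["perf_lift"] = df["perf_after"] / df["perf_before"] - 1
df["prod_lift"] = df["decisions_after"] / df["decisions_before"] - 1

# Bucket managers into quartiles by their pre-automation performance.
df["quartile"] = pd.qcut(df["perf_before"], 4,
                         labels=["Q1 (bottom)", "Q2", "Q3", "Q4 (top)"])

# The pattern described above would show up as perf_lift concentrated in
# Q2/Q3 and prod_lift concentrated in Q4.
print(df.groupby("quartile", observed=True)[["perf_lift", "prod_lift"]]
        .mean().round(3))
```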

The lesson here?

Man plus machine beat man alone and machine alone almost every time, but not in the same way in every scenario we tried.

We are not yet at the stage where algorithms can routinely outperform expert human intellect at everything, but where automation can do better, using it to augment, guide and inform human decision-making can pay off big time. At least when making financial investment decisions.

John Parkinson
Affiliate Partner
Waterstone Management Group
