Genetic Algorithms and the Hedge Fund Technological Arms Race
May 7, 2015
May 6, 2015. Aidyia, a hedge fund that uses artificial intelligence to trade markets, is getting publicity. It is not the first hedge fund to do this; Rebellion Research has been doing it for years. But Aidyia is getting buzz because it is new, because it is based in Hong Kong, and because of its chief scientist, Ben Goertzel, who keeps an active blog at http://multiverseaccordingtoben.blogspot.com/.
Here is a sample of why he is getting buzz:
“I personally think the whole world financial and economic system is going to transform into something utterly different, once robots and AIs eliminate the need for (and relative value of) human effort in most domains of practical endeavor. So I view these issues with AIs and asset management as "transitional", in a sense. But that doesn't make them unimportant, obviously -- for the period between now and Singularity [my emphasis], they will be relevant.”
So you can tell just from his terminology that he is of the Ray Kurzweil-Walther Rathenau-Star Trek science-fiction mold. And that quirky coolness creates some market infatuation, at least until the market eats its little darling. Google has Geoff Hinton on staff, bought DeepMind, and purchased a quantum computer for horsepower. Baidu hired Andrew Ng. Facebook has Yann LeCun directing AI research. Perhaps IBM should be concerned: while it was challenging Garry Kasparov, others are running money with their toys.
Dr. Goertzel goes on to state his case for what makes Aidyia stand out from the crowd of HFT bots:
“Most HFT systems have minimal AI in them -- they're based on reacting super-quickly not super-smartly. The use of HFT shouldn't be conflated with the use of AI. HFT could be pretty much eliminated from any market by imposing a per-transaction tax like we have here in Hong Kong; but this wouldn't get rid of AI. Our AI predictors at Aidyia are currently being used to predict asset price movements 20 days in advance, not microseconds in advance.
…AI inside a trading system is not a total protection against the stupidity -- or emotional pathology -- of the humans trading that system.”
That’s an unfair generalization, as there are many learning and pattern-recognition systems built into the HFT industry, including sophisticated web-scrapers that comb Twitter for information. But it does draw a clear line on some differences. Aidyia is using machine learning to position or swing trade, and its advantage is the absence of what he refers to as human “emotional pathology”.
He has a point about emotional pathology. In 2011, the Proceedings of the National Academy of Sciences published a study showing that parole approval rates were highest just after meal breaks (about 65%) and dropped to nearly zero just before the next break. Someone’s life is ruined, over and over again, because the judge is hungry. Not only does this completely suck, it points to a deep flaw in the subconscious wiring that informs human decision-making.
There are two ways to circumvent these human pathologies. The first is to use artificial intelligence to obtain decisions free of emotional bias. This presumably results in sound choices based on meaningful data that selectively buy chaos and sell order. In short, it is a mean-reversion strategy that exploits out-of-the-money optionalities when emotions are running high. The second is to accurately anticipate how collective emotions will herd money. Keynes nailed this when he said “Successful investing is anticipating the anticipations of others.” This is called momentum trading when it works and being a lemming when it doesn’t.
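The contrast between the two approaches can be reduced to a toy sketch. Nothing below resembles Aidyia’s actual models; the price series, window, and threshold are invented for illustration, and real signals would be far richer.

```python
def mean_reversion_signal(prices, window=5, band=0.02):
    """Buy when price falls well below its recent average (buying chaos),
    sell when it rises well above it (selling order)."""
    if len(prices) < window:
        return "hold"
    avg = sum(prices[-window:]) / window
    if prices[-1] < avg * (1 - band):
        return "buy"
    if prices[-1] > avg * (1 + band):
        return "sell"
    return "hold"

def momentum_signal(prices, window=5):
    """Anticipate the herd: buy what has been rising, sell what has been falling."""
    if len(prices) <= window:
        return "hold"
    return "buy" if prices[-1] > prices[-1 - window] else "sell"

prices = [100, 101, 99, 97, 95, 94]       # a falling series
print(mean_reversion_signal(prices))      # buy -- the reverter buys the dip
print(momentum_signal(prices))            # sell -- the trend-follower rides it down
```

On the same falling tape, the two strategies take opposite sides of the trade, which is exactly why both can coexist in a liquid market.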
Both have very human implementations, but the presumption is that a machine without human biases can better uncover pricing inefficiencies and value a company undergoing OTM moves; or that a machine can ride prices up and short them down more efficiently. This is not necessarily an untenable presumption: maybe they can. Maybe Aidyia can do it better than everyone else in the space.
However, machine learning has its own potential weaknesses that counterbalance emotional pathologies. Call these weaknesses spurious learning.
Imagine a robot observes all available information on a trader with the goal of identifying his successful days and the days he gets his face ripped off. Completely unbiased and undiscriminating, the robot observes that on 60% of good trading days, the trader scratched his left butt cheek before he brushed his teeth. On bad days, 55% of the time the trader didn’t scratch at all. The robot concludes that left-butt-cheek scratching is decidedly important to trading in general. Before the machine learns that a scratch matters not, it has lost money and the plug gets pulled. Humor aside, the point is that unbiased learning weights spurious factors far too high until the losses arrive. This is the same criticism applied to neural nets and other sophisticated techniques that find a significant connection between doughnut sales and the hyperbolic tangency of your dog’s pedigree. Human emotional pathology and machine spurious learning are both mechanisms that attempt to reach an elusive quality called expertness.
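The butt-scratch trap can be made concrete with a deterministic toy. The datasets below are invented to match the frequencies in the story (60% of good days scratched; 55% of bad days did not); the "fresh" data restores what was true all along, that scratching is independent of performance.

```python
def p_good_given_scratch(days):
    """days: list of (scratched, good_day) booleans.
    Returns the observed frequency of good days among scratch days."""
    scratch_days = [good for scratched, good in days if scratched]
    return sum(scratch_days) / len(scratch_days)

# Training window the robot sees: 60% of the 20 good days had a scratch,
# and on the 20 bad days, 55% had none -- the story's numbers exactly.
train = ([(True, True)] * 12 + [(False, True)] * 8 +
         [(True, False)] * 9 + [(False, False)] * 11)

# Fresh data where scratching is, as it always was, a coin flip either way.
fresh = ([(True, True)] * 10 + [(False, True)] * 10 +
         [(True, False)] * 10 + [(False, False)] * 10)

print(round(p_good_given_scratch(train), 3))  # 0.571 -- looks like an edge
print(round(p_good_given_scratch(fresh), 3))  # 0.5   -- the edge was never there
```

An unconstrained learner will happily trade on the 0.571 until the 0.5 arrives with real money attached.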
Expertness is really nothing more than an algorithm that searches for optimal solutions to a given problem. In artificial intelligence, one of the chief methods for doing this is genetic algorithms. Genetic algorithms compute optimal solutions from data inputs while adapting to changes in the data. To do this, start with a random population of candidate computational algorithms operating semi-independently of overt control, like robots. Each algorithm is ranked within every data environment by how well it performs a given task. The most successful algorithms are allowed to grow in size and reach across more informational environments through reproduction. As they move through different informational environments, mutation and cross-breeding are allowed to further improve fitness. This process continues for many generations. Changes in the information environment trigger the rankings to change, and a degree of (pseudo-)random chance plays a role in the changing fitness as well. This kind of computation is better thought of as a search method than a means of reasoning deductively, and it has a straightforward biological analogue.
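The rank-reproduce-mutate loop described above can be sketched in a few lines. The fitness function here (OneMax: count the 1-bits in a genome) is an invented stand-in for a fixed "landscape"; a trading application would instead score candidate predictors against market data, and the landscape would keep shifting underneath them.

```python
import random

random.seed(42)  # deterministic run for illustration

def fitness(genome):
    """Toy fitness landscape: how many 1-bits the candidate carries."""
    return sum(genome)

def evolve(pop_size=30, length=20, generations=40, mutation_rate=0.02):
    # Start with a random population of candidate "algorithms".
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness; the fitter half survives to reproduce.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)        # cross-breeding
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < mutation_rate else g
                     for g in child]                 # random mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the maximum of 20 after 40 generations
```

Note that nothing in the loop reasons about *why* a genome scores well; it only searches, which is the sense in which expertness here is search rather than deduction.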
In the article “Bacterial natural transformation by highly fragmented and damaged DNA” it was shown that bacteria evolve in their environments in the same way that genetic algorithms do. Bacteria absorb DNA molecules that are continuously released through decomposition of organic matter and are ubiquitous in most environments. This fragmented DNA is a nutrient source for microbes, but it is also a source of natural genetic exchange that induces random mutation. The findings reveal that DNA fragments (including truly ancient ones) are present in large quantities in the environment, and are acquired by bacteria through natural transformation. Thus mutations can arise from purely random occurrences. Fitness over a landscape—whether it is physical or informational—implies expertness.
Mutations induced by genetic algorithms do not necessarily enhance expertness. Just as in nature, most random mutations lead to defects that inhibit reproduction and lead to hereditary extinction. Genetic algorithmic mutation and cross-breeding can produce extremely good fitness within a very specific type of landscape, but fitness drops off a cliff when the algorithm operates in even a slightly different landscape. For example, the cheetah can run down fast-moving prey like a champ, but it lacks the brute-force power to take out the wider diversity of prey the lion does.
IBM’s Deep Blue actually offers some interesting insights for Aidyia. Aidyia is, after all, expecting a machine learning algorithm to recognize patterns (chess openings) and optimize a search over a subset of all possible moves (leading to checkmate). Deep Blue beat Kasparov when the play came as expected—Kasparov playing the Grünfeld and being positional. Kasparov owned Deep Blue in game two when he played an aggressive off-tempo game. It is the unexpected—out-of-the-money moves meant in a very general sense—that rattles emotional pathology and spurious learning alike. I’m not saying one is better than the other, but I wouldn’t be surprised if the machine performed better in near-the-money situations. When risk premia are screaming and you have invested money, all bets are off as to who wins.
In one sense, markets are and always will be ultimately unpredictable systems. If a machine were able to actually predict market trends, the markets would cease to exist, because everyone would in time take the same side of the trade. That’s why you can only have true liquidity in markets when you have many players trying to do different things for different reasons. No liquidity, no market.
In another sense, a market is predictable because it is a system that follows rules, just like a chess game. However, the market equivalent of chess requires that the rules change and the board changes, subject to random triggers. A square can vanish, the piece on it lost forever. The total number of squares can grow geometrically. The queen may not work well, or at all, under a newly arrived rule set.
Winning reduces to having informational advantages about the rules and the board. This implies that some players are more equal than others, able to bend, break, add, usurp, and subtract rules. Add the behavior of the manipulators to the analysis and you get better predictive results, until nobody takes the other side. And ultimately you have no market.
Machine learning may well outperform emotive reasoning. If not now, technological improvement may well make it possible in the future. But when it comes to predicting the future, it is nothing more than a gimmick.
Gimmicks serve a purpose in the investment industry. They can draw in new money like nothing else. Based in Hong Kong, this one seems likely to draw in new Chinese money eager to try out the latest gizmo. I’ve seen René Thom’s catastrophe theory, Smale’s chaos theory, and Mandelbrot’s fractals all draw their share. They often usefully call a big-time top on markets. After all, it takes real euphoria to buy into something untested and hard to understand.
It typically ends with an investor letter something like Black Mesa’s:
August 7, 2007: “We are responding to unprecedented market events. In brief, we believe a very large (or several very large) trading entities are liquidating massive market-neutral portfolios. Black Mesa bets on historically-precedented, value-oriented security selections. We are therefore susceptible to high or sustained levels of market activity that run contrary to such selections.”
August 8: “There is the possibility that, as great as the liquidations have been so far, it is just the beginning of a spiral of me-too liquidations. The question: when will it end? History never repeats in the same way twice, and we aren’t forecasting that we will be up 70% (or at all) for 2007. But we are keen observers of past experiences and hope to gain useful perspective thereby. Perhaps in a few months we will be better able to understand current market conditions. We are doing our best to make sense of what we see in the noisy present and act in the best interest of our investors.”
September, 2007: Black Mesa Capital was BK.
When it comes to market prediction, Ultron will do no better than Isaac Newton did.