I thought I would take the opportunity of the long weekend to put down some of the design principles we use in developing our model. In particular, I will focus on the unique aspects of financial market trading that perhaps make it more challenging to solve than many other AI problems.
If we consider the typical problem space that an AI is used to address, the underlying process is generally static. By this I mean that training examples can be drawn (within reason) from history, and the underlying process can be considered not to change dramatically between the training set and the live sample.
This is, I agree, not strictly true… language does evolve, and semantics shift over the course of decades, but the key point is that this is an exceptionally slow evolution of the underlying process.
Financial markets are entirely different.
The underlying process and the factors that influence the direction of the market change over time – sometimes over a very short period. To be useful, any model needs either to model the state change when input parameters switch between relevant and less relevant, or (the approach adopted by this team) to have a fluid model structure that uses different parameters over time and evolves with changing market structure.
The challenge is to provide continual evolution without over-fitting through excessive optimization.
In this blog post I’ll discuss the general approach adopted, and in subsequent posts I’ll drill down into more detail on specific aspects of the overall model.
The first thing to highlight is that we don’t use a straightforward machine learning algorithm as the whole picture. We have three key elements that together make up the evolving AI trading models.
Step 1. Starting from a seed pool of selected parameters, market attributes, market relationships and so on, a very large population of candidate models is created; initially, each model draws random parameters from the pre-seeded pool. A mutation process is then applied in which a small number of these attributes are modified, combined or otherwise altered.
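As a rough sketch of this seeding-and-mutation step (the attribute names and pool values below are invented for illustration, not our actual trading parameters):

```python
import random

# Hypothetical seed pool: attribute names and value ranges are
# illustrative only, not the real model attributes.
SEED_POOL = {
    "lookback": [5, 10, 20, 50, 100],
    "signal": ["momentum", "mean_reversion", "breakout"],
    "stop_loss_pct": [0.5, 1.0, 2.0, 5.0],
    "market_filter": ["volatility", "volume", "none"],
}

def random_candidate(pool):
    """Draw one candidate model: a random value for each attribute."""
    return {attr: random.choice(values) for attr, values in pool.items()}

def mutate(candidate, pool, rate=0.1):
    """Re-draw a small fraction of a candidate's attributes from the pool."""
    mutated = dict(candidate)
    for attr in candidate:
        if random.random() < rate:
            mutated[attr] = random.choice(pool[attr])
    return mutated

# A very large initial population of random candidates, then mutation.
population = [random_candidate(SEED_POOL) for _ in range(10_000)]
population = [mutate(c, SEED_POOL) for c in population]
```

In practice the pool would be far richer (market relationships, cross-market features and so on), but the shape of the step is the same: random draws from a curated pool, followed by light mutation.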
Step 2. Each model is then trained to trade the specified markets, its parameters having been selected by the GA process above. If we are training on related markets we will often use just one model to cover all of them (this is the approach adopted for the cryptocurrency markets). This technique is especially useful when we are dealing with markets with limited data histories.
Step 3. Finally, the resulting models and their results are cross-analysed and the best models are selected. The selection criterion is a combination of performance and robustness against over-fitting. The best-performing models are then combined using GA techniques to create a new generation, which undergoes the same process… and so on.
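The selection-and-breeding step can be sketched as below. The `score` function is a stand-in: in practice it would combine trading performance with a penalty for signs of over-fitting (for example, the gap between in-sample and out-of-sample returns), and the crossover scheme shown (uniform crossover) is just one of several standard GA operators.

```python
import random

def fitness(in_sample, out_of_sample):
    """Illustrative robustness-aware score: reward out-of-sample
    performance, penalise the in-sample/out-of-sample gap
    (a common symptom of over-fitting)."""
    return out_of_sample - abs(in_sample - out_of_sample)

def crossover(parent_a, parent_b):
    """Uniform crossover: each attribute is taken from either parent."""
    return {k: random.choice([parent_a[k], parent_b[k]]) for k in parent_a}

def next_generation(population, score, elite_frac=0.1):
    """Rank by score, keep the elite, breed a same-sized new generation."""
    ranked = sorted(population, key=score, reverse=True)
    elite = ranked[: max(2, int(len(ranked) * elite_frac))]
    return [crossover(*random.sample(elite, 2)) for _ in range(len(population))]
```

Repeating `next_generation` over many iterations, with mutation applied to each new population, gives the generational loop described above.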
Over time this process converges on a set of models that perform best on the market as it stands at that point in time. The best candidate models are then used for trading the relevant markets.
That’s not, however, where the story ends. As discussed earlier, the underlying market process alters over time… and the models must do likewise. The process defined above does not end with a live candidate set of models: new candidates are created continually and matched head to head against the live models. When a challenger outperforms a live model on the market as it is now, the live model is replaced.
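The head-to-head replacement logic amounts to a champion/challenger comparison per market. A minimal sketch, where `recent_score` is an assumed stand-in for performance measured on recent market data only:

```python
def promote_challengers(live_models, challengers, recent_score):
    """For each market, replace the live model whenever the best
    challenger scores better on recent data (the market as it is now)."""
    for market, live in live_models.items():
        best = max(challengers.get(market, []), key=recent_score, default=None)
        if best is not None and recent_score(best) > recent_score(live):
            live_models[market] = best
    return live_models
```

Scoring on recent data only is the point: a challenger does not need to beat the champion's historical track record, just its performance under current conditions.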
As you can see, this makes prior performance a less reliable indicator of future performance, but with the benefit that the models continually evolve to match market conditions as they currently stand.
Given our belief that markets are a non-stationary process, the prior performance of any model, static or otherwise, is highly suspect, so we believe this is a small price to pay.
I will drill into more details of the modelling approach over coming weeks.
Have a great holiday break…
— Wintermute —