And we exit Bitcoin…

The bots have closed out their partial bitcoin position overnight at a rate of $2320.01 (incl fees).

We have been pretty flat throughout this period of consolidation (for bitcoin) and outright decline (for many of the altcoins). This is encouraging overall, given that the bots, at this stage, can’t go short. The bots achieved an excellent return of more than 100% during the rapid bull run in April and May, then over June and July pulled back and retained the greater part of that profit. The final profit is a little over 120%, and drawdowns have been extremely limited.

We are now suspending the real-time experiment to allow the funds to be moved across into the AICoin ICO (http://www.aicoin.ico), but we will keep publishing here on the more technical aspects of the bots’ performance once they start trading on behalf of the AICoin collective.

We also intend to publish the results of further research we undertake and to answer any technical questions on the models or architecture.

To tie up this experiment I am now pulling together the statements from the two brokers used (Bitstamp and Poloniex) and will publish them as proof of the results achieved. Before doing this I will move the funds from Poloniex into Bitstamp so we are comparing like with like (the funds were originally deposited into Bitstamp).

Thank you all for following our experiment; hopefully there will be many more exciting and interesting things to discuss in the future.

— Wintermute —

The Power Of Genetic Algorithms

Much has rightly been made of the power of deep learning models to sift through large amounts of data to find nuggets of value … but what defines value? … and how is that value to be measured?

This is a problem we are constantly up against when developing an AI model for trading markets. How do you define the fitness function? What criteria are used in the trading decision making, and how are they to be represented?

A simple change we wanted to make illustrates this problem clearly and also shows why our architecture helps feed in these changes in a controlled manner.

When we first trained the models they were bootstrapped from trading decisions made by conventional (static) trading models. Instead of learning to trade from first principles they were given the task of learning to improve upon what was already working – a much simpler learning task. This gave us a good starting point in the model evolution but, given the training requirements, also created a limitation. Instead of giving the models the full range of decision-making capability (to take partial positions), each model was always either fully in or fully out of a particular market – unlikely to be an optimal trading style.

So now that we have working models that have identified some key trading characteristics and are working well, how do we change this without breaking what we already have? How do we implement a modified behaviour function; in this case, how do we include the capability to scale in and out of positions over time?

With our architecture we can implement this change in a simple manner and let the overall model take the strain.

Earlier I posted a high-level overview of the structure of our trading models. Key to this is a genetic algorithm front end which is continually shifting the pieces about and trying to improve upon the characteristics of the deep learning models. It has the flexibility to change network structures, input parameters and, central to this discussion, output expectations.

To add the capability of taking partial positions we simply needed to add it as a possible behaviour in the pool of attributes open to GA selection. We can bias the mutation to select this attribute for a seeding period, to force the attribute into the population, then step back and let natural selection take its course. If this capability a) can be learned by the deep learning element of the architecture and b) adds true value to the function then, over time, the attribute will start to gain a foothold in the model population and, eventually, candidate models will be presented to trade against the current lead models for dominance.
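
To make this concrete, here is a minimal sketch of what that seeding bias might look like, assuming a simple attribute-list genome. Everything here (the attribute names, the `seed_bias` parameter, the ten-generation seeding window) is illustrative rather than our production code:

```python
import random

# Hypothetical attribute pool; PARTIAL_POSITIONS is the newly added behaviour.
ATTRIBUTE_POOL = ["TREND_INPUT", "VOLUME_INPUT", "BINARY_POSITIONS", "PARTIAL_POSITIONS"]

def mutate(genome, seeding=False, seed_bias=0.5):
    """Swap one attribute in the genome. During the seeding period the new
    behaviour is chosen with elevated probability, forcing it into the
    population; afterwards selection reverts to uniform."""
    genome = list(genome)
    if seeding and random.random() < seed_bias:
        new_attr = "PARTIAL_POSITIONS"
    else:
        new_attr = random.choice(ATTRIBUTE_POOL)
    genome[random.randrange(len(genome))] = new_attr
    return genome

# Seed for a few generations, then let natural selection take its course.
population = [["TREND_INPUT", "BINARY_POSITIONS"] for _ in range(100)]
for generation in range(50):
    population = [mutate(g, seeding=generation < 10) for g in population]
```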

We have been seeding this capability for a few weeks now; it will be interesting to see if the model takes on a more staged approach to position taking or sticks with its binary style.

It has also been suggested that we allow the models to sell short as well as take long or flat positions. While our initial bias was to exclude short positions because of the disproportionate risk/reward profile, this is certainly something we could add into the mix in the future and let the models decide.

— Wintermute —

AI Model Architecture

I’ve been promising since I started this blog to present some of the key design decisions and architectural choices we have made. Time constraints have limited that, but this weekend I have finally put together an overview of what we’re doing and how we approach the problem.

Just for clarity, this architecture is the full solution for when we go into production. The infrastructure we are using for our live trading diary is identical except that it doesn’t link through to the hedging engine. With the relatively small amount of capital we are trading, this level of integration wasn’t required – but it will be essential as we move onto a full production footing. As always, comments and questions are gratefully received, as often the best ideas come from an open discussion about the pros and cons of a solution.

Introduction

When designing an AI solution to trade cryptocurrency markets we imposed some key design requirements which we believe are critical to utilising AI to trade these markets:

  1. Any solution must evolve over time. Financial markets are not stationary processes: the factors that drive performance change over time, and the mathematical characteristics of a market change too – sometimes rapidly. Any solution must be capable of adapting, and sometimes radically restructuring the model, to track changes in the market as they happen.
  2. Any model developed must be robust and not over-optimised to a market or a specific market condition. This creates some interesting tension between requirements 1 and 2. Any solution must be designed so that the fitness function considers not just performance but consistency; any indication of over-fitting or over-optimisation needs to be identified and rectified (a minimal sketch of such a fitness function follows this list).
  3. Ability to bootstrap with limited trade history. By working with an exchange we have had access to a significant amount of anonymized trade data, but this alone is insufficient to give the history required to analyse trade opportunities and identify the best ones. To overcome this limitation we created a significant trade history using “naïve” standard trading models and indicators. In the first instance the AI model was set the task of beating these static indicators: instead of trying to learn from first principles, the models were targeted with improving upon an existing trade set. By starting with a simpler problem, the model learning could be kicked off successfully. We then extended this approach to improve upon some of the learned models, building layer upon layer of expertise.
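
As an illustration of requirement 2, here is a minimal sketch of a fitness function that rewards consistency as well as raw performance. The dispersion penalty and its weighting are assumptions for the example, not our actual scoring:

```python
import statistics

def fitness(period_returns, consistency_weight=0.5):
    """Score a candidate on its per-period backtest returns. The total
    rewards performance; the dispersion penalty rewards consistency, so
    a model that made all its profit in one lucky period ranks below
    one that earned the same amount steadily."""
    total_return = sum(period_returns)
    dispersion = statistics.pstdev(period_returns)
    return total_return - consistency_weight * dispersion

steady = [0.02, 0.03, 0.02, 0.03]   # consistent performer
lucky = [0.00, 0.00, 0.00, 0.10]    # same total, one lucky spike
assert fitness(steady) > fitness(lucky)
```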

Architecture

The central engine of the AI model is the deep learning algorithm that learns the key characteristics needed to profitably trade the cryptocurrency markets; but this is only one part of the whole. The AI trading model, in its entirety, consists of four parts:

  • Pre-processing to generate model ideas and structure
  • Deep Learning training module
  • Live Candidate Evaluation
  • Trade Execution

These four elements work together to make the entire solution effective and scalable as the funds traded by the model increase.

Step 1. Pre-processing.

The pre-processing stage is driven by genetic-algorithm-based models. We start with a large population that sets the “DNA” for the models that are then trained. This DNA controls the structure of the deep learning network, the learning parameters, the iteration count / early-stopping criteria, and the inputs to the model (which price series to include, which static models to use, etc.).
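
As a sketch, the DNA for a single candidate might look something like the following; the field names are assumptions for illustration, not our actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Genome:
    """Illustrative 'DNA' for one candidate model."""
    hidden_layers: list                  # network structure, e.g. [128, 64]
    learning_rate: float                 # learning parameter
    max_iterations: int                  # iteration count
    early_stop_patience: int             # early-stopping criterion
    price_series: list = field(default_factory=list)   # inputs, e.g. ["BTC/USD"]
    static_models: list = field(default_factory=list)  # e.g. ["SMA_CROSS", "RSI"]

candidate = Genome(hidden_layers=[128, 64], learning_rate=1e-3,
                   max_iterations=5000, early_stop_patience=50,
                   price_series=["BTC/USD", "ETH/USD"],
                   static_models=["SMA_CROSS"])
```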

There are two unique twists we have added to this technology to make it more suitable for our purposes: gender and mutation.

The entire population is split 50:50 into “male” and “female”. The elements of the DNA for both genders are identical, but the fitness function differs between male and female models when we rank them for fitness to produce offspring.

This difference in fitness function prevents the models from tending to cluster around a single “strongest” candidate. This is critical for our domain space, as the population must maintain diversity to support Key Requirement 1: the ability to adapt as market conditions change.
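
The actual male/female fitness criteria are proprietary, but a sketch of the mechanism might look like this, with raw performance on one side and consistency-adjusted performance on the other (both placeholder criteria):

```python
def fitness_male(model):
    """Rank 'male' genomes on raw performance (placeholder criterion)."""
    return model["total_return"]

def fitness_female(model):
    """Rank 'female' genomes on consistency-adjusted performance
    (placeholder criterion)."""
    return model["total_return"] - model["return_stddev"]

def select_parents(population, top_n=10):
    """Rank each gender with its own fitness function and pair the best
    of each, so the population never collapses onto a single 'strongest'
    candidate scored by one criterion."""
    males = sorted((m for m in population if m["gender"] == "M"),
                   key=fitness_male, reverse=True)[:top_n]
    females = sorted((m for m in population if m["gender"] == "F"),
                     key=fitness_female, reverse=True)[:top_n]
    return list(zip(males, females))
```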

Mutation is used, again, to prevent a stale population with all parameters in a similar range, but we have extended it to the model inputs: a model can combine other models as part of its DNA structure. Over time this will lead to entirely new static models being evaluated and, if found successful, included in the population.
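
A sketch of that input-level mutation, with a hypothetical `COMBINE` operator standing in for however two indicators are actually merged:

```python
import random

def mutate_inputs(genome, model_pool, rate=0.1):
    """Occasionally replace one input with a composite of two existing
    static models, so entirely new indicators enter the population and
    survive only if they prove useful."""
    if genome["static_models"] and random.random() < rate:
        a, b = random.sample(model_pool, 2)
        idx = random.randrange(len(genome["static_models"]))
        genome["static_models"][idx] = f"COMBINE({a},{b})"
    return genome

genome = {"static_models": ["SMA_CROSS", "RSI"]}
genome = mutate_inputs(genome, model_pool=["SMA_CROSS", "RSI", "MACD", "BOLLINGER"])
```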

Step 2. Deep Learning Algorithm.

Rather than re-invent the wheel we have utilised TensorFlow, a set of deep learning libraries developed by Google’s Machine Intelligence research organization. The libraries are optimised to run on NVIDIA GPUs, allowing incredibly efficient learning runs to be executed on large amounts of data and complex models.

By parallelizing the control software we can run multiple models simultaneously against a bank of NVIDIA cards, powering through the entire population rapidly to generate new candidate models.
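
A sketch of that parallelisation using one worker process per GPU. The population and result handling are stubbed out; pinning via `CUDA_VISIBLE_DEVICES` is the standard way to give each TensorFlow process its own card:

```python
import os
from multiprocessing import Pool

N_GPUS = 4  # size of the GPU bank (assumption for this sketch)

def train_candidate(args):
    """Train one genome on a GPU chosen by worker index. The pin must be
    set before TensorFlow is imported, hence the import inside the worker."""
    index, genome = args
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index % N_GPUS)
    import tensorflow as tf  # noqa: F401  (model-building code omitted)
    # ... build and train the network described by `genome` ...
    return {"genome": genome, "score": 0.0}  # placeholder result

if __name__ == "__main__":
    population = [{"id": i} for i in range(64)]  # stub population
    with Pool(processes=N_GPUS) as pool:
        results = pool.map(train_candidate, list(enumerate(population)))
```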

The models learn using the criteria controlled by the GA element of the model, and each evolves its own set of characteristics that distinguishes it from the other trained models.

Step 3. Live Candidate Evaluation.

With any deep learning implementation, one of the key control points is preventing “over-fitting” to the presented examples. If care isn’t taken, the models can go beyond learning the characteristics of a market and instead “memorize” its behaviour: recalling perfectly what has happened in the past, but with no valid model to project forward and trade markets profitably.

There are two approaches one can take to overcome this problem. One is to analyse the degrees of freedom in the model structure and try to limit the model to a complexity appropriate to the data available for learning. The alternative, and the one that we use, is to let the GA models present a very large number of candidate models. These are then weeded out over time using live trading data to see if the results stack up. Unless a model shows return characteristics in live trading similar to those achieved on the training data, it is discarded. We believe the power of genetic algorithms to evolve the correct learning characteristics far outweighs the ability of any group of people to analyse the structure manually and determine the best network structure.
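
A minimal sketch of that weeding test; the similarity measure (a mean-return ratio) and the threshold are illustrative only:

```python
def keep_candidate(backtest_returns, live_returns, tolerance=0.5):
    """Keep a model only while its live return characteristics resemble
    those seen in training. A model that averaged 1% per period in the
    backtest must average at least 0.5% live, or it is discarded."""
    if not live_returns:
        return True                      # not enough live data yet
    backtest_mean = sum(backtest_returns) / len(backtest_returns)
    live_mean = sum(live_returns) / len(live_returns)
    if backtest_mean <= 0:
        return False                     # never promote an unprofitable fit
    return live_mean >= tolerance * backtest_mean
```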

Step 4. Market Execution.

The result of Step 3 will be a set of live models that are the best candidates at any point in time. These candidates evaluate live market data 24×7 and execute orders automatically to take advantage of the opportunities they identify in the marketplace. To allow automatic execution to be implemented safely, and to allow the model to trade in relatively large size in markets that are relatively illiquid, we have integrated the model with an automated hedging engine.

This engine has access to multiple exchanges, with trading capability on all of them. Rather than hitting a single exchange at a single point in time, the model gradually feeds orders into the market across multiple exchanges. It starts with the optimum exchanges (if selling, those with the highest prices) and begins selling there. As the price falls below that available on other exchanges, the model switches to the new “best” exchanges. By continually switching and selling relatively small parcels, the hedging engine both achieves the optimum execution price and ensures that the bot’s activity doesn’t have a detrimental impact on the market as a whole.
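
A toy sketch of that routing logic. A real engine tracks order-book depth, fees and transfer limits; here each exchange is reduced to a single best bid, and our own price impact is simulated with a small decay:

```python
def sell_across_exchanges(quantity, parcel_size, best_bids):
    """Feed small sell parcels to whichever exchange currently bids
    highest, re-ranking after every fill."""
    fills = []
    remaining = quantity
    while remaining > 0:
        venue = max(best_bids, key=best_bids.get)      # current best price
        parcel = min(parcel_size, remaining)
        fills.append((venue, parcel, best_bids[venue]))
        remaining -= parcel
        best_bids[venue] *= 0.999   # our selling nudges that bid down, so
                                    # the next parcel may route elsewhere
    return fills

fills = sell_across_exchanges(10.0, 0.5,
    {"ExchangeA": 2641.0, "ExchangeB": 2639.5, "ExchangeC": 2640.2})
```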

Summary

By combining proprietary software with the power of Google’s TensorFlow we believe we have produced an architecture that is innovative yet robust. Our unique combination of genetic algorithms for structure and control with deep learning technology for specific learning and model development has created a solution that meets the key design requirements stated at the outset.

By delivering on these design requirements we have a solution that is not only effective now but has the capability to remain profitable in the future, adapting as conditions in the market change and learning to take advantage of new opportunities as they present themselves.

Finally, by making the trading entirely automated and integrating with a hedging solution that controls execution and risk, we can operate in a truly 24×7 manner, taking advantage of opportunities whenever they present themselves.

— Wintermute —

XRP Underperformance

I just thought a short post would be worthwhile to highlight an interesting indication of the models’ behaviour. Just before the latest spike up in all cryptocurrencies, the models re-entered their long positions across the board – all except Ripple (XRP), where, having divested, the models remained flat.

This behaviour is encouraging given the follow-on performance since the weekend. All the cryptocurrencies we monitor have made large gains since then, and the models have benefitted – all except Ripple, which has slipped from 0.35 (where the models sold out) to 0.28 (as we write).

This is the type of divergence we have been waiting to see: selective, profitable behaviour by the models.

Scores on the doors: we’re now up 147.4% on original capital. Long positions are currently held in Bitcoin (BTC), Ethereum (ETH), ZCash (ZEC), Ethereum Classic (ETC), DASH, and Monero (XMR). We are flat Ripple (XRP).

Currency         | Starting Balance | USD       | Crypto      | Rate   | USD Equiv  | TOTAL      | Gain/Loss
Bitcoin          | $16,900.00       | $0.00     | 13.36       | 2640   | $35,270.40 | $35,270.40 | 108.70%
Ethereum         | $4,800.00        | $0.00     | 70.10779112 | 199.43 | $13,981.60 | $13,981.60 | 191.28%
ZCash            | $610.00          | $0.00     | 7.14789692  | 228.3  | $1,631.86  | $1,631.86  | 167.52%
Ethereum Classic | $203.00          | $0.00     | 65.23240393 | 18.475 | $1,205.17  | $1,205.17  | 493.68%
DASH             | $610.00          | $0.00     | 8.10813346  | 142.9  | $1,158.65  | $1,158.65  | 89.94%
Monero           | $305.00          | $0.00     | 12.3506515  | 45.5   | $561.95    | $561.95    | 84.25%
Ripple           | $510.00          | $5,411.43 | 0           | 0.2783 | $0.00      | $5,411.43  | 961.06%
TOTAL            | $23,938.00       | $5,411.43 |             |        | $53,809.64 | $59,221.07 | 147.39%
Hold BTC         | $23,938.00       | $0.00     | 22.90717703 | 2640   | $60,474.95 | $60,474.95 | 152.63%

— Wintermute —

Training Approach – High Level Overview

I thought I would take the opportunity of the long weekend to set down some of the design principles we use in developing our models. In particular I will focus on the unique aspects of financial market trading that perhaps make it more challenging to solve than many AI problems.

If we consider the usual problem space that an AI is used to address, the process is generally static. By this I mean that training examples can be drawn (within reason) from history, and the underlying process can be considered not to change dramatically between the training set and the live sample.

This is, I agree, not strictly true: language does evolve over time and semantics change over the course of decades, but the key here is that we are talking about an exceptionally slow evolution of the underlying model.

Financial markets are entirely different.

The underlying process, and the factors that influence the direction of the market, change over time – sometimes over a very short period. To be useful, any model needs either to model the state change as input parameters switch between relevant and less relevant or (the approach adopted by this team) to have a fluid model structure that uses different parameters over time and evolves with changing market structure.

The challenge is to provide continual evolution without over-fitting through excessive over-optimisation.

In this blog post I’ll discuss the general approach adopted; in subsequent posts I’ll drill down into more detail on specific aspects of the overall model.

The first thing to highlight is that we don’t use a single, straightforward machine learning algorithm as the whole picture. Three key elements together make up the evolving AI trading models.

Step 1. We start from a seed pool of selected parameters, market attributes, market relationships and so on. At the outset a very large population of candidate models is created; this population initially uses random parameters drawn from the pre-seeded pool. A mutation process is then applied in which a small number of these attributes are modified, combined or otherwise altered.

Step 2. Each model is then trained to trade the specified markets, with its parameters determined by the prior GA process. If we are training on related markets we will often use just one model to cover them all (the approach adopted for the cryptocurrency markets). This technique is especially useful when dealing with markets with limited data histories.

Step 3. Finally, the resulting models and their results are cross-analysed and the best models selected. The selection criteria are a combination of performance and robustness against over-fitting. The best performing models are then combined using GA techniques to create a new generation, which undergoes the same process… and so on.

Over time this process finds a set of models that perform best on the market as it stands at that point in time. The best candidate models are then used for trading the relevant markets.
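
A compact sketch of one generation of that loop; `train` and `fitness` are stand-ins for the deep learning run and the selection scoring described above:

```python
import random

def crossover(a, b):
    """Take each gene from one parent at random."""
    return {key: random.choice([a[key], b[key]]) for key in a}

def mutate(genome, rate=0.05):
    """Occasionally perturb a numeric gene to keep the population diverse."""
    return {key: (value * random.uniform(0.8, 1.2)
                  if isinstance(value, float) and random.random() < rate
                  else value)
            for key, value in genome.items()}

def run_generation(population, train, fitness, elite_n=20):
    """Train every candidate (step 2), keep the most fit (step 3), then
    breed and mutate a new population from the elite (back to step 1)."""
    ranked = sorted(population, key=lambda g: fitness(train(g)), reverse=True)
    elite = ranked[:elite_n]
    return [mutate(crossover(*random.sample(elite, 2)))
            for _ in range(len(population))]
```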

That’s not, however, where the story ends. As discussed earlier, the process underlying the markets alters over time, and the models must do likewise. The process defined above does not end with a live candidate set of models: new candidates are created continually and matched head to head with the live models. When a candidate outperforms a live model on the market as it is now, the live model is replaced.

As you can see, this makes prior performance less reliable as an indicator of future performance, but with the benefit that the models continually evolve to match market conditions as they currently stand. Given our belief that markets are a non-stationary process, the prior performance of any model, static or otherwise, is highly suspect anyway, so we believe this is a small price to pay.

I will drill into more details of the modelling approach over the coming weeks.

Have a great holiday break…

— Wintermute —

So What Are Cryptocurrencies?

As promised, I thought a brief overview of what cryptocurrencies are might be a useful intro.

Some of you may be familiar with Bitcoin. Bitcoin is the leading cryptocurrency of the moment and was released into the world by its anonymous creator under the name Satoshi Nakamoto.

The concept of Bitcoin was first published on the internet in October 2008 under the title “Bitcoin: A Peer-to-Peer Electronic Cash System”. From this white paper the original Bitcoin code was developed and released in January 2009 as an open source project, and Satoshi mined the first block of bitcoins ever (the genesis block). The value of the first bitcoin transactions was negotiated by individuals on the bitcointalk forums, with one notable transaction of 10,000 BTC used to purchase two pizzas delivered by Papa John’s (the current value of that 10,000 BTC is in excess of $10m).

Cryptocurrencies are digital or virtual currencies that use cryptography for security. A defining feature of a cryptocurrency (until recently) is that it is organic in nature: it is not issued by any central authority, rendering it theoretically immune to government interference or manipulation.

This feature may be watered down by some of the private blockchain based currencies now being proposed by central banks to issue their own currencies. For our purposes we will differentiate between Public Cryptocurrencies (those held on a public blockchain, such as bitcoin) and Private Cryptocurrencies (those released on a private chain under the control of an issuing authority).

The other unique element of cryptocurrencies is that all transactions are executed by “signing” the transaction with a private key. This private key is known only to the holder of the funds and, by signing the transaction, the holder issues a transfer from their own wallet address to the recipient’s wallet address. Without this private key the funds cannot be transferred – another feature that prevents automatic confiscation or removal of your funds.
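
A minimal illustration of key-based signing using the Python `ecdsa` package and the secp256k1 curve that bitcoin uses. Real bitcoin transactions have a specific binary format, so treat this purely as a demonstration of the principle:

```python
# pip install ecdsa
from ecdsa import SigningKey, SECP256k1

private_key = SigningKey.generate(curve=SECP256k1)  # known only to the holder
public_key = private_key.get_verifying_key()        # shared with the network

transaction = b"send 1.5 BTC from wallet A to wallet B"
signature = private_key.sign(transaction)

# Anyone can check the signature against the public key, but only the
# private key holder could have produced it.
assert public_key.verify(signature, transaction)
```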

These transactions are all held on the peer-to-peer network and are validated by participants on that network. For bitcoin and most of the other leading public blockchains, this validation uses a “Proof of Work” model. This sets “miners” the task of solving a cryptographic puzzle that allows them to fix the next block of transactions onto the blockchain. By rewarding these miners with newly distributed bitcoin the network encourages mining activity, and the collective participants validating the chain ensure that the blockchain is not corrupted or rewritten by a “bad actor” in the infrastructure.
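
A toy version of the proof-of-work puzzle: find a nonce whose hash meets a difficulty target. Real Bitcoin double-SHA-256-hashes a structured block header against a full 256-bit target, but the principle (hard to solve, trivial to verify) is the same:

```python
import hashlib

def mine(block_data: bytes, difficulty: int = 4):
    """Search for a nonce whose SHA-256 digest starts with `difficulty`
    hex zeros. Raising the difficulty makes the search exponentially
    harder, while verification stays a single hash computation."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine(b"block of transactions")
print(nonce, digest)
```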

It’s worth noting that other validation schemes are being proposed, especially for private blockchains; in the public chain domain some are proposing alternatives such as “Proof of Stake”. These are untried at this stage but would overcome one of the leading concerns with Proof of Work: the power consumption and computing capacity spent on “wasted” effort.

For the purpose of our test trading we will focus entirely on Public Cryptocurrencies. Our starting markets will be:

Bitcoin – $16,900 trading capital
Ethereum – $4,800 trading capital
ZCash – $610 trading capital
Ethereum Classic – $203 trading capital
DASH – $610 trading capital
Monero – $305 trading capital
Ripple – $510 trading capital

We will follow a simple model of switching into the cryptocurrencies when the model believes the market will rise and moving back into USD when the model believes the market is going to decline.
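
In code, the switching rule is as simple as it sounds. A sketch, with the model’s view reduced to a single boolean:

```python
def next_order(expects_rise: bool, usd_balance: float,
               coin_balance: float, price: float):
    """Fully in the coin when the model expects a rise, fully back to
    USD otherwise."""
    if expects_rise and usd_balance > 0:
        return ("BUY", usd_balance / price)   # convert all USD into coin
    if not expects_rise and coin_balance > 0:
        return ("SELL", coin_balance)         # convert all coin into USD
    return ("HOLD", 0.0)

print(next_order(True, usd_balance=16900.0, coin_balance=0.0, price=1264.7))
```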

The exchanges we will use are First Global Credit for the bitcoin trading (www.firstglobalcredit.com) and Poloniex for the other coins (www.poloniex.com). Poloniex does not support USD holdings, so we will switch into Tether-USD, a USD proxy tradable on Poloniex.

We will publish a trading update whenever trades are executed or weekly if no trades have executed during the week.

— Wintermute —

Welcome to Our AI Bot

Hi, we’re really excited about our new project so we wanted to share some of the details… and there’s our first hurdle: we’re in stealth mode and are not permitted to divulge too much information on the end product we are working towards. So what can we say? We’ve discussed with the founders what is proprietary and what can be shared, and the good news is we can give a lot of insight into our modelling, the general purpose of the model and some testing results. Hopefully over time we can also start to reveal some of the specifics of the primary models we are working on.

So, as I’m sure you’ve guessed, we are working on utilising AI. We’ve set ourselves up to compete in one of the most challenging arenas out there: financial trading. As I said, we can’t publish results yet for the primary model, but what we have done is start training an AI trading bot to trade cryptocurrencies.

For those who are unfamiliar with the field I will present a summary in a later post, but suffice to say it is a volatile set of markets that will provide a real testing ground for the models.

Our intention with this blog is to use it as a real-time lab report, giving details of trades executed in the markets and also giving some insight into the underlying structure of the models we are developing and the thinking behind them.

We welcome any questions, comments, or insights – the purpose is to share ideas and learn from others – not just spout our views to everyone.

Thanks for reading

— Wintermute —