The ability to optimise your algorithms


The optimisation feature allows you to try to improve upon the results of a previously run backtest simulation. By 'improve' we assume that, as a user, you are trying to accomplish one or both of:

  1. Increase profit and/or
  2. Reduce risk

As everyone familiar with the financial markets knows, increased returns rarely come without some form of increased risk. Therefore, when you start an optimisation run you first need to indicate your preference for increasing profits and/or reducing risk. After each backtest simulation, a set of statistics is available showing how profitable the model was with the chosen parameters; the optimisation process uses these statistics to help determine whether an optimisation run has been successful. Where applicable, each statistic is classed as either a 'profit indicator' or a 'risk indicator', and you select your preference for increasing a profit indicator or reducing a risk indicator. You can currently select up to 5 profit or risk indicators, and the system will compare your chosen indicators from the original backtest simulation with those produced by any optimisation run. If there is a net improvement, the program considers the optimisation successful.
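
To make the idea concrete, here is a minimal sketch of how such a net-improvement check could work. The indicator names, the dictionary layout and the simple vote-counting rule are illustrative assumptions, not the product's actual implementation.

```python
# Hypothetical sketch of a net-improvement check across chosen indicators.
# Each chosen indicator is tagged with the direction the user wants:
# "increase" for a profit indicator, "decrease" for a risk indicator.
CHOSEN_INDICATORS = {
    "net_profit": "increase",                    # profit indicator
    "max_consecutive_losing_trades": "decrease", # risk indicator
}

def is_improvement(original: dict, optimised: dict) -> bool:
    """Return True if, on balance, more chosen indicators moved in the
    desired direction than moved against it."""
    score = 0
    for name, direction in CHOSEN_INDICATORS.items():
        delta = optimised[name] - original[name]
        if delta == 0:
            continue  # no change contributes nothing either way
        moved_as_desired = delta > 0 if direction == "increase" else delta < 0
        score += 1 if moved_as_desired else -1
    return score > 0  # net improvement across the chosen indicators
```

A run that lifts net profit while also cutting the longest losing streak would pass this check; one that trades a small profit gain for a much worse losing streak would come out flat or negative.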


To take an example, suppose you ran a backtest which produced the following statistics:

Net profit (pips)                   0.09852
Gross profit (pips)                 0.24132
Gross loss (pips)                  -0.1428
Profit factor                       1.68992
Largest drawdown (pips)            -0.00325
Net profit as a %age of drawdown    3,031.385%
Max consecutive losing trades       11

Now, suppose you would like to try to improve your model, so you initiate an optimisation run and assign the following priorities (highest first):

  1. (increase) net profit
  2. (decrease) max consecutive losing trades
  3. (increase) profit factor
  4. (increase) net profit as a %age of drawdown
  5. (decrease) gross loss

Even though you have both profit and risk indicators selected, the net average of the priorities suggests that whilst you don't want to completely ignore risk, your overall priority is to increase profit.

Now, let's say the results from an optimisation run were:

Net profit (pips)                   0.09458
Gross profit (pips)                 0.22587
Gross loss (pips)                  -0.13129
Profit factor                       1.72039
Largest drawdown (pips)            -0.00325
Net profit as a %age of drawdown    2,910.154%
Max consecutive losing trades       11

The changes we see are:

  • increased: profit factor
  • no change: max consecutive losing trades
  • decreased: net profit, gross loss, net profit as a %age of drawdown

So this reflects a deterioration in performance (on a net weighted basis) from our initial backtest simulation, and hence would NOT be considered a successful optimisation. In contrast, had an optimisation run produced the following results:

Net profit (pips)                   0.10184
Gross profit (pips)                 0.24195
Gross loss (pips)                  -0.14011
Profit factor                       1.72684
Largest drawdown (pips)            -0.00325
Net profit as a %age of drawdown    3,133.538%
Max consecutive losing trades       8

This represents a clear improvement in performance over our initial backtest simulation, and hence WOULD be considered a successful optimisation.
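
To illustrate how a "net weighted basis" comparison could play out for the two runs above, here is a sketch that scores each prioritised indicator. The weighting scheme (priority 1 scores 5 points, priority 5 scores 1 point) and the use of gross loss magnitudes are illustrative assumptions only; the product's actual weighting may differ.

```python
# Illustrative weighted comparison of the two example optimisation runs.
# Gross loss is stored as a magnitude (a positive number), so "decrease"
# means reducing the size of the loss.
PRIORITIES = [  # (indicator, desired direction), highest priority first
    ("net_profit", "increase"),
    ("max_consecutive_losing_trades", "decrease"),
    ("profit_factor", "increase"),
    ("net_profit_pct_of_drawdown", "increase"),
    ("gross_loss_magnitude", "decrease"),
]

ORIGINAL = {
    "net_profit": 0.09852,
    "max_consecutive_losing_trades": 11,
    "profit_factor": 1.68992,
    "net_profit_pct_of_drawdown": 3031.385,
    "gross_loss_magnitude": 0.1428,
}

def weighted_score(run: dict) -> int:
    """Add the indicator's weight if it moved the desired way,
    subtract it if it moved the wrong way; unchanged scores zero."""
    score = 0
    for rank, (name, direction) in enumerate(PRIORITIES):
        weight = len(PRIORITIES) - rank  # 5 down to 1
        delta = run[name] - ORIGINAL[name]
        if delta == 0:
            continue
        improved = delta > 0 if direction == "increase" else delta < 0
        score += weight if improved else -weight
    return score

failed_run = {"net_profit": 0.09458, "max_consecutive_losing_trades": 11,
              "profit_factor": 1.72039, "net_profit_pct_of_drawdown": 2910.154,
              "gross_loss_magnitude": 0.13129}
successful_run = {"net_profit": 0.10184, "max_consecutive_losing_trades": 8,
                  "profit_factor": 1.72684, "net_profit_pct_of_drawdown": 3133.538,
                  "gross_loss_magnitude": 0.14011}
```

Under this scheme the first run scores negative (the falls in net profit and net profit as a %age of drawdown outweigh the gains in profit factor and gross loss), while the second run improves every indicator and scores strongly positive.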

Avoiding curve-fitting


A potential problem with trying to optimise any algorithmic trading rule is that you may end up curve-fitting: continuously tuning input parameters until you get the absolute best possible outcome on the historical data. By the very nature of financial markets, it is impossible to use past prices to predict future prices with 100% accuracy. We attempt to address this curve-fitting problem by comparing an 'in-sample' data set with an 'out-of-sample' data set. In effect, your original backtest simulation (the 'in-sample' data set) is repeatedly adjusted and re-run over a wider date range (the 'out-of-sample' data set). By running the model on the wider date range, the system checks whether the results observed on the in-sample backtest can be improved upon against the wider data set. If the results show an improvement in the risk/return statistics, the system considers the optimisation to have been successful.

Although there is no hard-and-fast rule on exactly what ratio of 'in-sample' to 'out-of-sample' data to use, a commonly accepted figure is around 70%-80%; we currently use 75%. It is therefore important to note that you will not be able to optimise a previously run simulation if the range between its start and end dates is greater than 75% of the entire range of data we have.
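
The 75% rule above amounts to a simple date-range check, sketched below. The helper name, the assumed end of the available data, and the day-based arithmetic are illustrative assumptions, not the product's actual code.

```python
from datetime import date

IN_SAMPLE_LIMIT = 0.75          # in-sample range may cover at most 75% of all data
DATA_START = date(2008, 1, 1)   # FX data goes back to the start of 2008
DATA_END = date(2016, 1, 1)     # assumed end of the available data (illustrative)

def can_optimise(backtest_start: date, backtest_end: date) -> bool:
    """Return True if the backtest's date range leaves enough
    out-of-sample data to satisfy the 75% rule."""
    in_sample_days = (backtest_end - backtest_start).days
    total_days = (DATA_END - DATA_START).days
    return in_sample_days <= IN_SAMPLE_LIMIT * total_days
```

For example, a two-year backtest against roughly eight years of data is comfortably within the limit, whereas a seven-year backtest is not.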

Whilst it is naturally up to you to choose the start and end dates for any backtest simulation, we would generally suggest starting with dates that cover no more than approximately one or two years' worth of data. Those dates then form the 'in-sample' range in any future optimisation run, and since we currently have data going back to the start of 2008 (for FX data), there is enough out-of-sample data to satisfy the 75% rule.

Results of an optimisation run

If an optimised backtest run results in an improvement over your initial results, the rule which led to the improvement will be saved under your list of models, so you can see the exact parameters used and run further analysis or optimisations as you wish.

The details for any optimisation run that DID NOT lead to an improvement over your initial backtest simulation are discarded.

Register for a pilot account and stay up to date

We will be inviting people to join our group of pilot users. If you are interested in joining us and being one of the first to use what we're working on, we would love to hear from you. Please fill out the form below with your preferences and we'll be in touch. (We will not share your details with any third party.)

If you are interested in joining us in an investor or partnership capacity, please click here.

Follow us on Twitter for updates

Or you can drop us an email

© 2014 - 2016 AlgoReplay