Hi Traders,
Although a few of these topics have been covered briefly in other threads, I thought I’d start a new thread as this one is more related to the overall workflow in EA Studio.
I have a lot to cover here, but I will try to keep it as short as possible; the foundation of this is based on the teachings in Petko’s courses. I have been trading in demo environments for over a year now. However, EA Studio opens up a whole new world of possibilities and combinations of factors I had never considered before when creating strategies, and I am still struggling to find an efficient workflow to manage the process, as each step presents its own issues.
I will start with generating strategies:
Firstly, the number of possible combinations of acceptance criteria is a bit overwhelming and in some instances counterproductive. We all want winning strategies, but setting a high win/loss ratio does not necessarily mean you have a great strategy, because your average loss could be larger than your average win. So typically I just set a minimum of 200 trades and a profit factor of 1.2 over one year of data; otherwise the criteria become too strict and I do not get many strategies at all. I know we’re only looking for the good ones, but more on this shortly.
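To illustrate the win-rate point with a minimal sketch (hypothetical trade numbers, nothing to do with EA Studio’s own calculations): a strategy can win 80% of the time and still be a net loser if the losing trades are much larger than the winners.

```python
# Hypothetical trade results: a high win rate can still give a losing strategy
# when the average loss is bigger than the average win.
wins = [10.0] * 8        # 8 winning trades of 10 units each
losses = [-50.0] * 2     # 2 losing trades of 50 units each

trades = wins + losses
win_rate = len(wins) / len(trades)                    # 0.80 -> 80% winners
gross_profit = sum(t for t in trades if t > 0)        # 80
gross_loss = abs(sum(t for t in trades if t < 0))     # 100
profit_factor = gross_profit / gross_loss             # 0.80 -> net losing strategy

print(f"Win rate: {win_rate:.0%}, Profit factor: {profit_factor:.2f}")
```

That is why I lean on profit factor and trade count rather than win/loss ratio in the acceptance criteria.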
Typically I generate strategies on premium data, because I cannot download enough data from my broker to generate any strategies unless I wait a few weeks for more data to become available.
For my own sanity, I ran a test on the EURUSD currency pair using the same generation settings across 4 different scenarios. The generator ran for 12 hours. I have a high-end PC in trading terms, and latency to the server is ~20 ms.
1. Generator Only
2. Generator + Monte Carlo
3. Generator + Normalisation
4. Generator + Normalisation + Monte Carlo
I chose the top 10 strategies and ran them on 3 demo accounts from one broker and an additional demo account from another broker, both as individual EAs and as Portfolio EAs.
There were quite a few differences in order data between the individual EAs and the Portfolio EAs, and between the different accounts. The second broker was vastly different, although I did not account for the difference in account currency or the currency pair specifications. Nevertheless, the results are all over the place, but generally speaking, scenario 1 was the worst, scenarios 2 and 3 performed better, and scenario 4 had very mixed results.
So the question is: should I do any form of optimisation or not?
But with the strategies being generated on premium data, and with no way to generate strategies on actual broker data, the only solution is to run them through the Validator to recalculate on the broker data, and of course the results are very different. This means I would need to optimise the strategies for my broker’s data, but then we run the risk of curve fitting again. Therefore, I use Out of Sample (OOS) testing to discard the “over-optimised” historical portion and focus only on current market performance, as we’re really only concerned about performance in current market conditions. The issue here is that there is no easy way to focus on strategy-specific OOS data when running a collection through the Validator, without going into each strategy to check the OOS data. Some strategies that don’t look great on the complete backtest perform quite well in the OOS period, but all the visible generation stats are focused on the complete backtest. Could this be a product development suggestion?
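For now, the only way I can approximate this outside the tool is to export the trades and split them myself. A minimal sketch of what I mean, assuming the backtest journal has been exported to a CSV (the file name and the "close_time"/"profit" column names are just my own assumption, not EA Studio’s actual export format):

```python
import csv
from datetime import datetime

# Everything closed on or after this date is treated as out-of-sample (hypothetical cut-off).
OOS_START = datetime(2023, 1, 1)

def profit_factor(profits):
    gross_profit = sum(p for p in profits if p > 0)
    gross_loss = abs(sum(p for p in profits if p < 0))
    return gross_profit / gross_loss if gross_loss else float("inf")

with open("strategy_backtest.csv", newline="") as f:
    rows = list(csv.DictReader(f))

in_sample = [float(r["profit"]) for r in rows
             if datetime.fromisoformat(r["close_time"]) < OOS_START]
out_sample = [float(r["profit"]) for r in rows
              if datetime.fromisoformat(r["close_time"]) >= OOS_START]

print(f"In-sample PF:     {profit_factor(in_sample):.2f} ({len(in_sample)} trades)")
print(f"Out-of-sample PF: {profit_factor(out_sample):.2f} ({len(out_sample)} trades)")
```

Having that kind of OOS-only view directly in the collection table would save a lot of clicking into individual strategies.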
Although you can set OOS acceptance criteria at the generation stage, some strategies don’t meet the OOS criteria until they have been optimised. I have run the same collection through the Validator with different optimisation and robustness testing methods, and each produces a vastly different result. Same basic acceptance criteria, of course.
There also doesn’t seem to be any easy way to keep track of your EAs, because each time a strategy goes through the Validator or gets saved as an EA or in a Portfolio, the strategy ID and magic number change, so tracking live performance against the backtest result is also a very daunting task, even with a platform like FXBlue. This also makes the change from demo to live very difficult, because you cannot compare the two in the different environments, particularly in a Portfolio, and a large number of individual EAs is a nightmare to manage.
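The workaround I have settled on for now is to keep my own register of magic numbers outside the tool, so that whatever FXBlue or the terminal groups by magic number can be traced back to the original collection and backtest. A rough sketch of my bookkeeping (the file name, fields and example values are purely my own convention, not an EA Studio feature):

```python
import csv
import os

# My own tracking file, maintained by hand every time an EA or Portfolio is exported.
FIELDS = ["magic_number", "strategy_id", "collection", "export_date", "account", "notes"]

def register_ea(path, **entry):
    """Append one exported EA to the register so live trades (grouped by magic
    number) can be matched back to the strategy and backtest they came from."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(entry)

# Example entry (all values hypothetical):
register_ea("ea_register.csv",
            magic_number=123456, strategy_id="EURUSD-H1-042",
            collection="Generator+Normalisation+MC", export_date="2024-05-01",
            account="Demo broker A", notes="exported inside Portfolio v3")
```

It is manual and clumsy, which is exactly why I think keeping a persistent strategy ID across the Validator/export steps would be such a useful product improvement.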
So, with all of the combinations to choose from when generating strategies, the differences in execution between single EAs and Portfolios, broker data differences, execution challenges, the risk of curve fitting with different broker data, and everything else mentioned above, how do you actually know if you have a good strategy or not?
As you can see, I have run numerous tests, but I seem to be going in circles trying to find a decent and efficient workflow. I must say, I have learnt a lot through Petko’s courses and the EA Studio software, and I am open to learning more, so any comments or suggestions on these problems or my workflow will be greatly appreciated.