Part of the galgoz series: article #3
To view all the articles in chronological order, go to gzarruk.com/galgoz. Or download the latest version of the code from the galgoz repository.
It is time to start preparing the grounds for backtesting whichever strategy I come up with. And to backtest we need data, lots of data. Therefore, galgoz will need a method to fetch and store historical data for any given instrument.
At the time of writing, galgoz has a method for fetching data (fetch_candles) and another for transforming the OANDA API response into a pandas DataFrame (candles_df). However, the API returns a maximum of 5,000 candles per request, a limit I knew about but didn’t factor in when coding the first version. Therefore, fetch_candles will need to be refactored to work recursively: fetch the data in batches and append them together before saving.
To successfully download historical data from the OANDA API without hitting the 5,000-candle limit, three things have to be implemented:
- Use time increments according to the data granularity
- Refactor the fetch_candles method to download data in batches
- Store the data
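Before getting into each step, the overall batching logic can be sketched as follows. This is a minimal, hedged sketch, not galgoz’s actual implementation: the function name fetch_all and the candle_delta parameter are my own inventions, and the fetch_candles callable is assumed to return a DataFrame with a "time" column, as candles_df does.

```python
from datetime import datetime, timedelta, timezone
import pandas as pd

MAX_CANDLES = 5000  # OANDA's per-request cap


def fetch_all(fetch_candles, instrument, granularity, candle_delta, start, end):
    """Walk from `start` to `end` in windows of at most 5,000 candles,
    calling `fetch_candles` for each window and concatenating the results.

    `fetch_candles(instrument, granularity, frm, to)` is assumed to return
    a DataFrame with a 'time' column; `candle_delta` is the duration of one
    candle at the given granularity (e.g. timedelta(hours=1) for "H1").
    """
    step = candle_delta * MAX_CANDLES  # widest window a single request may cover
    frames = []
    cursor = start
    while cursor < end:
        window_end = min(cursor + step, end)
        frames.append(fetch_candles(instrument, granularity, cursor, window_end))
        cursor = window_end
    df = pd.concat(frames, ignore_index=True)
    # Drop any candle duplicated at a window boundary before saving.
    return df.drop_duplicates(subset="time").reset_index(drop=True)
```

For 12,000 hourly candles, this would issue three requests (5,000 + 5,000 + 2,000) and return a single deduplicated DataFrame ready to store.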
Time Increments