Merge branch 'freqtrade:develop' into develop

This commit is contained in:
Stefano Ariestasia 2023-01-06 08:04:33 +08:00 committed by GitHub
commit 148338cdf8
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
105 changed files with 2626 additions and 1191 deletions

View File

@ -20,7 +20,7 @@ Please do not use bug reports to request new features.
* Operating system: ____
* Python Version: _____ (`python -V`)
* CCXT version: _____ (`pip freeze | grep ccxt`)
* Freqtrade Version: ____ (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)
* Freqtrade Version: ____ (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
Note: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.

View File

@ -18,7 +18,7 @@ Have you searched for this feature before requesting it? It's highly likely that a
* Operating system: ____
* Python Version: _____ (`python -V`)
* CCXT version: _____ (`pip freeze | grep ccxt`)
* Freqtrade Version: ____ (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)
* Freqtrade Version: ____ (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
## Describe the enhancement

View File

@ -18,7 +18,7 @@ Please do not use the question template to report bugs or to request new feature
* Operating system: ____
* Python Version: _____ (`python -V`)
* CCXT version: _____ (`pip freeze | grep ccxt`)
* Freqtrade Version: ____ (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)
* Freqtrade Version: ____ (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
## Your question

View File

@ -88,7 +88,7 @@ jobs:
run: |
cp config_examples/config_bittrex.example.json config.json
freqtrade create-userdir --userdir user_data
freqtrade hyperopt --datadir tests/testdata -e 5 --strategy SampleStrategy --hyperopt-loss SharpeHyperOptLossDaily --print-all
freqtrade hyperopt --datadir tests/testdata -e 6 --strategy SampleStrategy --hyperopt-loss SharpeHyperOptLossDaily --print-all
- name: Flake8
run: |
@ -148,6 +148,19 @@ jobs:
if: runner.os == 'macOS'
run: |
brew update
# homebrew fails to update python due to unlinking failures
# https://github.com/actions/runner-images/issues/6817
rm /usr/local/bin/2to3 || true
rm /usr/local/bin/2to3-3.11 || true
rm /usr/local/bin/idle3 || true
rm /usr/local/bin/idle3.11 || true
rm /usr/local/bin/pydoc3 || true
rm /usr/local/bin/pydoc3.11 || true
rm /usr/local/bin/python3 || true
rm /usr/local/bin/python3.11 || true
rm /usr/local/bin/python3-config || true
rm /usr/local/bin/python3.11-config || true
brew install hdf5 c-blosc
python -m pip install --upgrade pip wheel
export LD_LIBRARY_PATH=${HOME}/dependencies/lib:$LD_LIBRARY_PATH
@ -410,7 +423,7 @@ jobs:
python setup.py sdist bdist_wheel
- name: Publish to PyPI (Test)
uses: pypa/gh-action-pypi-publish@v1.6.1
uses: pypa/gh-action-pypi-publish@v1.6.4
if: (github.event_name == 'release')
with:
user: __token__
@ -418,7 +431,7 @@ jobs:
repository_url: https://test.pypi.org/legacy/
- name: Publish to PyPI
uses: pypa/gh-action-pypi-publish@v1.6.1
uses: pypa/gh-action-pypi-publish@v1.6.4
if: (github.event_name == 'release')
with:
user: __token__

View File

@ -15,9 +15,9 @@ repos:
additional_dependencies:
- types-cachetools==5.2.1
- types-filelock==3.2.7
- types-requests==2.28.11.5
- types-requests==2.28.11.7
- types-tabulate==0.9.0.0
- types-python-dateutil==2.8.19.4
- types-python-dateutil==2.8.19.5
# stages: [push]
- repo: https://github.com/pycqa/isort

View File

@ -1,6 +1,7 @@
# ![freqtrade](https://raw.githubusercontent.com/freqtrade/freqtrade/develop/docs/assets/freqtrade_poweredby.svg)
[![Freqtrade CI](https://github.com/freqtrade/freqtrade/workflows/Freqtrade%20CI/badge.svg)](https://github.com/freqtrade/freqtrade/actions/)
[![DOI](https://joss.theoj.org/papers/10.21105/joss.04864/status.svg)](https://doi.org/10.21105/joss.04864)
[![Coverage Status](https://coveralls.io/repos/github/freqtrade/freqtrade/badge.svg?branch=develop&service=github)](https://coveralls.io/github/freqtrade/freqtrade?branch=develop)
[![Documentation](https://readthedocs.org/projects/freqtrade/badge/)](https://www.freqtrade.io)
[![Maintainability](https://api.codeclimate.com/v1/badges/5737e6d668200b7518ff/maintainability)](https://codeclimate.com/github/freqtrade/freqtrade/maintainability)

View File

@ -79,9 +79,7 @@
"test_size": 0.33,
"random_state": 1
},
"model_training_parameters": {
"n_estimators": 1000
}
"model_training_parameters": {}
},
"bot_name": "",
"force_entry_enable": true,

View File

@ -300,7 +300,11 @@ A backtesting result will look like that:
| Absolute profit | 0.00762792 BTC |
| Total profit % | 76.2% |
| CAGR % | 460.87% |
| Sortino | 1.88 |
| Sharpe | 2.97 |
| Calmar | 6.29 |
| Profit factor | 1.11 |
| Expectancy | -0.15 |
| Avg. stake amount | 0.001 BTC |
| Total trade volume | 0.429 BTC |
| | |
@ -400,7 +404,11 @@ It contains some useful key metrics about performance of your strategy on backte
| Absolute profit | 0.00762792 BTC |
| Total profit % | 76.2% |
| CAGR % | 460.87% |
| Sortino | 1.88 |
| Sharpe | 2.97 |
| Calmar | 6.29 |
| Profit factor | 1.11 |
| Expectancy | -0.15 |
| Avg. stake amount | 0.001 BTC |
| Total trade volume | 0.429 BTC |
| | |
@ -447,6 +455,9 @@ It contains some useful key metrics about performance of your strategy on backte
- `Absolute profit`: Profit made in stake currency.
- `Total profit %`: Total profit. Aligned to the `TOTAL` row's `Tot Profit %` from the first table. Calculated as `(End capital - Starting capital) / Starting capital`.
- `CAGR %`: Compound annual growth rate.
- `Sortino`: Annualized Sortino ratio.
- `Sharpe`: Annualized Sharpe ratio.
- `Calmar`: Annualized Calmar ratio.
- `Profit factor`: Profit / loss (gross profit of winners divided by gross loss of losers; see the sketch below).
- `Avg. stake amount`: Average stake amount, either `stake_amount` or the average when using dynamic stake amount.
- `Total trade volume`: Volume generated on the exchange to reach the above profit.
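As a rough illustration of how two of these metrics derive from the trade list, consider the following sketch (the `profit_abs` column name matches freqtrade's backtest output; this is a simplified rendition, not the exact backtesting code):

``` python
import pandas as pd

def profit_factor(trades: pd.DataFrame) -> float:
    # Gross profit of winning trades divided by gross loss of losing trades
    wins = trades.loc[trades["profit_abs"] > 0, "profit_abs"].sum()
    losses = abs(trades.loc[trades["profit_abs"] < 0, "profit_abs"].sum())
    return wins / losses if losses else float("inf")

def expectancy(trades: pd.DataFrame) -> float:
    # ((1 + risk_reward_ratio) * winrate) - 1, per trade
    winners = trades.loc[trades["profit_abs"] > 0, "profit_abs"]
    losers = trades.loc[trades["profit_abs"] < 0, "profit_abs"]
    if winners.empty or losers.empty:
        return 0.0
    risk_reward = winners.mean() / abs(losers.mean())
    winrate = len(winners) / len(trades)
    return ((1 + risk_reward) * winrate) - 1
```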

View File

@ -5,7 +5,7 @@ You can analyze the results of backtests and trading history easily using Jupyte
## Quick start with docker
Freqtrade provides a docker-compose file which starts up a jupyter lab server.
You can run this server using the following command: `docker-compose -f docker/docker-compose-jupyter.yml up`
You can run this server using the following command: `docker compose -f docker/docker-compose-jupyter.yml up`
This will create a docker container running jupyter lab, which will be accessible using `https://127.0.0.1:8888/lab`.
Please use the link that's printed in the console after startup for simplified login.

View File

@ -4,20 +4,22 @@ This page explains how to run the bot with Docker. It is not meant to work out o
## Install Docker
Start by downloading and installing Docker CE for your platform:
Start by downloading and installing Docker / Docker Desktop for your platform:
* [Mac](https://docs.docker.com/docker-for-mac/install/)
* [Windows](https://docs.docker.com/docker-for-windows/install/)
* [Linux](https://docs.docker.com/install/)
To simplify running freqtrade, [`docker-compose`](https://docs.docker.com/compose/install/) should be installed and available to follow the below [docker quick start guide](#docker-quick-start).
!!! Info "Docker compose install"
Freqtrade documentation assumes the use of Docker desktop (or the docker compose plugin).
While the docker-compose standalone installation still works, it will require changing all `docker compose` commands to `docker-compose` (e.g. `docker compose up -d` will become `docker-compose up -d`).
## Freqtrade with docker-compose
## Freqtrade with docker
Freqtrade provides an official Docker image on [Dockerhub](https://hub.docker.com/r/freqtradeorg/freqtrade/), as well as a [docker-compose file](https://github.com/freqtrade/freqtrade/blob/stable/docker-compose.yml) ready for usage.
Freqtrade provides an official Docker image on [Dockerhub](https://hub.docker.com/r/freqtradeorg/freqtrade/), as well as a [docker compose file](https://github.com/freqtrade/freqtrade/blob/stable/docker-compose.yml) ready for usage.
!!! Note
- The following section assumes that `docker` and `docker-compose` are installed and available to the logged in user.
- The following section assumes that `docker` is installed and available to the logged in user.
- All below commands use relative directories and will have to be executed from the directory containing the `docker-compose.yml` file.
### Docker quick start
@ -31,13 +33,13 @@ cd ft_userdata/
curl https://raw.githubusercontent.com/freqtrade/freqtrade/stable/docker-compose.yml -o docker-compose.yml
# Pull the freqtrade image
docker-compose pull
docker compose pull
# Create user directory structure
docker-compose run --rm freqtrade create-userdir --userdir user_data
docker compose run --rm freqtrade create-userdir --userdir user_data
# Create configuration - Requires answering interactive questions
docker-compose run --rm freqtrade new-config --config user_data/config.json
docker compose run --rm freqtrade new-config --config user_data/config.json
```
The above snippet creates a new directory called `ft_userdata`, downloads the latest compose file and pulls the freqtrade image.
@ -64,7 +66,7 @@ The `SampleStrategy` is run by default.
Once this is done, you're ready to launch the bot in trading mode (Dry-run or Live-trading, depending on your answer to the corresponding question you made above).
``` bash
docker-compose up -d
docker compose up -d
```
!!! Warning "Default configuration"
@ -84,27 +86,27 @@ You can now access the UI by typing localhost:8080 in your browser.
#### Monitoring the bot
You can check for running instances with `docker-compose ps`.
You can check for running instances with `docker compose ps`.
This should list the service `freqtrade` as `running`. If that's not the case, best check the logs (see next point).
#### Docker-compose logs
#### Docker compose logs
Logs will be written to: `user_data/logs/freqtrade.log`.
You can also check the latest log with the command `docker-compose logs -f`.
You can also check the latest log with the command `docker compose logs -f`.
#### Database
The database will be located at: `user_data/tradesv3.sqlite`
#### Updating freqtrade with docker-compose
#### Updating freqtrade with docker
Updating freqtrade when using `docker-compose` is as simple as running the following 2 commands:
Updating freqtrade when using `docker` is as simple as running the following 2 commands:
``` bash
# Download the latest image
docker-compose pull
docker compose pull
# Restart the image
docker-compose up -d
docker compose up -d
```
This will first pull the latest image, and will then restart the container with the just pulled version.
@ -116,43 +118,43 @@ This will first pull the latest image, and will then restart the container with
Advanced users may edit the docker-compose file further to include all possible options or arguments.
All freqtrade arguments will be available by running `docker-compose run --rm freqtrade <command> <optional arguments>`.
All freqtrade arguments will be available by running `docker compose run --rm freqtrade <command> <optional arguments>`.
!!! Warning "`docker-compose` for trade commands"
Trade commands (`freqtrade trade <...>`) should not be ran via `docker-compose run` - but should use `docker-compose up -d` instead.
!!! Warning "`docker compose` for trade commands"
Trade commands (`freqtrade trade <...>`) should not be run via `docker compose run` - but should use `docker compose up -d` instead.
This makes sure that the container is properly started (including port forwardings) and will make sure that the container will restart after a system reboot.
If you intend to use freqUI, please also ensure to adjust the [configuration accordingly](rest-api.md#configuration-with-docker), otherwise the UI will not be available.
!!! Note "`docker-compose run --rm`"
!!! Note "`docker compose run --rm`"
Including `--rm` will remove the container after completion, and is highly recommended for all modes except trading mode (running with `freqtrade trade` command).
??? Note "Using docker without docker-compose"
"`docker-compose run --rm`" will require a compose file to be provided.
??? Note "Using docker without docker"
"`docker compose run --rm`" will require a compose file to be provided.
Some freqtrade commands that don't require authentication such as `list-pairs` can be run with "`docker run --rm`" instead.
For example `docker run --rm freqtradeorg/freqtrade:stable list-pairs --exchange binance --quote BTC --print-json`.
This can be useful for fetching exchange information to add to your `config.json` without affecting your running containers.
#### Example: Download data with docker-compose
#### Example: Download data with docker
Download backtesting data for 5 days for the pair ETH/BTC and 1h timeframe from Binance. The data will be stored in the directory `user_data/data/` on the host.
``` bash
docker-compose run --rm freqtrade download-data --pairs ETH/BTC --exchange binance --days 5 -t 1h
docker compose run --rm freqtrade download-data --pairs ETH/BTC --exchange binance --days 5 -t 1h
```
Head over to the [Data Downloading Documentation](data-download.md) for more details on downloading data.
#### Example: Backtest with docker-compose
#### Example: Backtest with docker
Run backtesting in docker-containers for SampleStrategy and specified timerange of historical data, on 5m timeframe:
``` bash
docker-compose run --rm freqtrade backtesting --config user_data/config.json --strategy SampleStrategy --timerange 20190801-20191001 -i 5m
docker compose run --rm freqtrade backtesting --config user_data/config.json --strategy SampleStrategy --timerange 20190801-20191001 -i 5m
```
Head over to the [Backtesting Documentation](backtesting.md) to learn more.
### Additional dependencies with docker-compose
### Additional dependencies with docker
If your strategy requires dependencies not included in the default image - it will be necessary to build the image on your host.
For this, please create a Dockerfile containing installation steps for the additional dependencies (have a look at [docker/Dockerfile.custom](https://github.com/freqtrade/freqtrade/blob/develop/docker/Dockerfile.custom) for an example).
@ -166,15 +168,15 @@ You'll then also need to modify the `docker-compose.yml` file and uncomment the
dockerfile: "./Dockerfile.<yourextension>"
```
You can then run `docker-compose build --pull` to build the docker image, and run it using the commands described above.
You can then run `docker compose build --pull` to build the docker image, and run it using the commands described above.
### Plotting with docker-compose
### Plotting with docker
Commands `freqtrade plot-profit` and `freqtrade plot-dataframe` ([Documentation](plotting.md)) are available by changing the image to `*_plot` in your docker-compose.yml file.
You can then use these commands as follows:
``` bash
docker-compose run --rm freqtrade plot-dataframe --strategy AwesomeStrategy -p BTC/ETH --timerange=20180801-20180805
docker compose run --rm freqtrade plot-dataframe --strategy AwesomeStrategy -p BTC/ETH --timerange=20180801-20180805
```
The output will be stored in the `user_data/plot` directory, and can be opened with any modern browser.
@ -185,7 +187,7 @@ Freqtrade provides a docker-compose file which starts up a jupyter lab server.
You can run this server using the following command:
``` bash
docker-compose -f docker/docker-compose-jupyter.yml up
docker compose -f docker/docker-compose-jupyter.yml up
```
This will create a docker container running jupyter lab, which will be accessible using `https://127.0.0.1:8888/lab`.
@ -194,7 +196,7 @@ Please use the link that's printed in the console after startup for simplified l
Since part of this image is built on your machine, it is recommended to rebuild the image from time to time to keep freqtrade (and dependencies) up-to-date.
``` bash
docker-compose -f docker/docker-compose-jupyter.yml build --no-cache
docker compose -f docker/docker-compose-jupyter.yml build --no-cache
```
## Troubleshooting

View File

@ -26,10 +26,7 @@ FreqAI is configured through the typical [Freqtrade config file](configuration.m
},
"data_split_parameters" : {
"test_size": 0.25
},
"model_training_parameters" : {
"n_estimators": 100
},
}
}
```

View File

@ -15,7 +15,7 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
| `identifier` | **Required.** <br> A unique ID for the current model. If models are saved to disk, the `identifier` allows for reloading specific pre-trained models/data. <br> **Datatype:** String.
| `live_retrain_hours` | Frequency of retraining during dry/live runs. <br> **Datatype:** Float > 0. <br> Default: `0` (models retrain as often as possible).
| `expiration_hours` | Avoid making predictions if a model is more than `expiration_hours` old. <br> **Datatype:** Positive integer. <br> Default: `0` (models never expire).
| `purge_old_models` | Delete obsolete models. <br> **Datatype:** Boolean. <br> Default: `False` (all historic models remain on disk).
| `purge_old_models` | Delete all unused models during live runs (not relevant to backtesting). If set to false (not default), dry/live runs will accumulate all unused models on disk. <br> **Datatype:** Boolean. <br> Default: `True`.
| `save_backtest_models` | Save models to disk when running backtesting. Backtesting operates most efficiently by saving the prediction data and reusing them directly for subsequent runs (when you wish to tune entry/exit parameters). Saving backtesting models to disk also allows to use the same model files for starting a dry/live instance with the same model `identifier`. <br> **Datatype:** Boolean. <br> Default: `False` (no models are saved).
| `fit_live_predictions_candles` | Number of historical candles to use for computing target (label) statistics from prediction data, instead of from the training dataset (more information can be found [here](freqai-configuration.md#creating-a-dynamic-target-threshold)). <br> **Datatype:** Positive integer.
| `follow_mode` | Use a `follower` that will look for models associated with a specific `identifier` and load those for inferencing. A `follower` will **not** train new models. <br> **Datatype:** Boolean. <br> Default: `False`.

View File

@ -247,14 +247,40 @@ where `unique-id` is the `identifier` set in the `freqai` configuration file. Th
![tensorboard](assets/tensorboard.jpg)
### Custom logging
FreqAI also provides a built-in episodic summary logger called `self.tensorboard_log` for adding custom information to the Tensorboard log. By default, this function is already called once per step inside the environment to record the agent's actions. All values accumulated for all steps in a single episode are reported at the conclusion of each episode, followed by a full reset of all metrics to 0 in preparation for the subsequent episode.
`self.tensorboard_log` can also be used anywhere inside the environment, for example, it can be added to the `calculate_reward` function to collect more detailed information about how often various parts of the reward were called:
```py
class MyRLEnv(Base5ActionRLEnv):
"""
User made custom environment. This class inherits from BaseEnvironment and gym.env.
Users can override any functions from those parent classes. Here is an example
of a user customized `calculate_reward()` function.
"""
def calculate_reward(self, action: int) -> float:
if not self._is_valid(action):
self.tensorboard_log("is_valid")
return -2
```
!!! Note
The `self.tensorboard_log()` function is designed for tracking incremented objects only, i.e. events or actions inside the training environment. If the event of interest is a float, the float can be passed as the second argument, e.g. `self.tensorboard_log("float_metric1", 0.23)` would add 0.23 to `float_metric1`. In this case you can also disable incrementing using the `inc=False` parameter (see the sketch below).
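Building on the example above, a sketch combining an incremented event counter with a float metric might look like this (it assumes `get_unrealized_profit()` is available from the base environment):

```py
class MyRLEnv(Base5ActionRLEnv):

    def calculate_reward(self, action: int) -> float:
        if not self._is_valid(action):
            self.tensorboard_log("is_valid")  # event counter, incremented per call
            return -2
        pnl = self.get_unrealized_profit()
        # float metric, stored as-is rather than incremented
        self.tensorboard_log("unrealized_pnl", pnl, inc=False)
        return float(pnl)
```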
### Choosing a base environment
FreqAI provides two base environments, `Base4ActionEnvironment` and `Base5ActionEnvironment`. As the names imply, the environments are customized for agents that can select from 4 or 5 actions. In the `Base4ActionEnvironment`, the agent can enter long, enter short, hold neutral, or exit position. Meanwhile, in the `Base5ActionEnvironment`, the agent has the same actions as Base4, but instead of a single exit action, it separates exit long and exit short. The main changes stemming from the environment selection include:
FreqAI provides three base environments, `Base3ActionRLEnvironment`, `Base4ActionEnvironment` and `Base5ActionEnvironment`. As the names imply, the environments are customized for agents that can select from 3, 4 or 5 actions. The `Base3ActionRLEnvironment` is the simplest: the agent can select from hold, long, or short. This environment can also be used for long-only bots (it automatically follows the `can_short` flag from the strategy), where long is the enter condition and short is the exit condition. Meanwhile, in the `Base4ActionEnvironment`, the agent can enter long, enter short, hold neutral, or exit position. Finally, in the `Base5ActionEnvironment`, the agent has the same actions as Base4, but instead of a single exit action, it separates exit long and exit short. The main changes stemming from the environment selection include:
* the actions available in the `calculate_reward`
* the actions consumed by the user strategy
Both of the FreqAI provided environments inherit from an action/position agnostic environment object called the `BaseEnvironment`, which contains all shared logic. The architecture is designed to be easily customized. The simplest customization is the `calculate_reward()` (see details [here](#creating-a-custom-reward-function)). However, the customizations can be further extended into any of the functions inside the environment. You can do this by simply overriding those functions inside your `MyRLEnv` in the prediction model file. Or for more advanced customizations, it is encouraged to create an entirely new environment inherited from `BaseEnvironment`.
All of the FreqAI provided environments inherit from an action/position agnostic environment object called the `BaseEnvironment`, which contains all shared logic. The architecture is designed to be easily customized. The simplest customization is the `calculate_reward()` (see details [here](#creating-a-custom-reward-function)). However, the customizations can be further extended into any of the functions inside the environment. You can do this by simply overriding those functions inside your `MyRLEnv` in the prediction model file. Or for more advanced customizations, it is encouraged to create an entirely new environment inherited from `BaseEnvironment`.
!!! Note
FreqAI does not provide by default, a long-only training environment. However, creating one should be as simple as copy-pasting one of the built in environments and removing the `short` actions (and all associated references to those).
Only the `Base3ActionRLEnv` can do long-only training/trading (set the user strategy attribute `can_short = False`, as sketched below).
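For instance, a long-only reinforcement-learning strategy only needs to set the flag in the strategy class; a minimal sketch (the strategy name is illustrative):

```py
from freqtrade.strategy import IStrategy

class MyLongOnlyRLStrategy(IStrategy):
    # With Base3ActionRLEnv, can_short = False makes "long" the entry
    # and "short" the exit condition, as described above.
    can_short = False
```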

View File

@ -72,11 +72,25 @@ pip install -r requirements-freqai.txt
If you are using docker, a dedicated tag with FreqAI dependencies is available as `:freqai`. As such - you can replace the image line in your docker-compose file with `image: freqtradeorg/freqtrade:develop_freqai`. This image contains the regular FreqAI dependencies. Similar to native installs, Catboost will not be available on ARM based devices.
### FreqAI position in open-source machine learning landscape
Forecasting chaotic time-series based systems, such as equity/cryptocurrency markets, requires a broad set of tools geared toward testing a wide range of hypotheses. Fortunately, a recent maturation of robust machine learning libraries (e.g. `scikit-learn`) has opened up a wide range of research possibilities. Scientists from a diverse range of fields can now easily prototype their studies on an abundance of established machine learning algorithms. Similarly, these user-friendly libraries enable "citizen scientists" to use their basic Python skills for data-exploration. However, leveraging these machine learning libraries on historical and live chaotic data sources can be logistically difficult and expensive. Additionally, robust data-collection, storage, and handling presents a disparate challenge. [`FreqAI`](#freqai) aims to provide a generalized and extensible open-sourced framework geared toward live deployments of adaptive modeling for market forecasting. The `FreqAI` framework is effectively a sandbox for the rich world of open-source machine learning libraries. Inside the `FreqAI` sandbox, users find they can combine a wide variety of third-party libraries to test creative hypotheses on a free live 24/7 chaotic data source - cryptocurrency exchange data.
### Citing FreqAI
FreqAI is [published in the Journal of Open Source Software](https://joss.theoj.org/papers/10.21105/joss.04864). If you find FreqAI useful in your research, please use the following citation:
```bibtex
@article{Caulk2022,
  doi = {10.21105/joss.04864},
  url = {https://doi.org/10.21105/joss.04864},
  year = {2022},
  publisher = {The Open Journal},
  volume = {7},
  number = {80},
  pages = {4864},
  author = {Robert A. Caulk and Elin Törnquist and Matthias Voppichler and Andrew R. Lawless and Ryan McMullan and Wagner Costa Santos and Timothy C. Pogue and Johan van der Vlugt and Stefan P. Gehring and Pascal Schmidt},
  title = {FreqAI: generalizing adaptive modeling for chaotic time-series market forecasts},
  journal = {Journal of Open Source Software}
}
```
## Common pitfalls
FreqAI cannot be combined with dynamic `VolumePairlists` (or any pairlist filter that adds and removes pairs dynamically).
@ -99,6 +113,8 @@ Code review and software architecture brainstorming:
Software development:
Wagner Costa @wagnercosta
Emre Suzen @aemr3
Timothy Pogue @wizrds
Beta testing and bug reporting:
Stefan Gehring @bloodhunter4rc, @longyu, Andrew Lawless @paranoidandy, Pascal Schmidt @smidelis, Ryan McMullan @smarmau, Juha Nykänen @suikula, Johan van der Vlugt @jooopiert, Richárd Józsa @richardjosza, Timothy Pogue @wizrds
Stefan Gehring @bloodhunter4rc, @longyu, Andrew Lawless @paranoidandy, Pascal Schmidt @smidelis, Ryan McMullan @smarmau, Juha Nykänen @suikula, Johan van der Vlugt @jooopiert, Richárd Józsa @richardjosza

View File

@ -365,7 +365,7 @@ class MyAwesomeStrategy(IStrategy):
timeframe = '15m'
minimal_roi = {
"0": 0.10
},
}
# Define the parameter spaces
buy_ema_short = IntParameter(3, 50, default=5)
buy_ema_long = IntParameter(15, 200, default=50)

View File

@ -23,6 +23,7 @@ You may also use something like `.*DOWN/BTC` or `.*UP/BTC` to exclude leveraged
* [`StaticPairList`](#static-pair-list) (default, if not configured differently)
* [`VolumePairList`](#volume-pair-list)
* [`ProducerPairList`](#producerpairlist)
* [`RemotePairList`](#remotepairlist)
* [`AgeFilter`](#agefilter)
* [`OffsetFilter`](#offsetfilter)
* [`PerformanceFilter`](#performancefilter)
@ -173,6 +174,48 @@ You can limit the length of the pairlist with the optional parameter `number_ass
`ProducerPairList` can also be used multiple times in sequence, combining the pairs from multiple producers.
Obviously, in such complex configurations, the Producer may not provide data for all pairs, so the strategy must be fit for this.
#### RemotePairList
`RemotePairList` allows the user to fetch a pairlist from a remote server, or from a locally stored JSON file within the freqtrade directory, enabling dynamic updates and customization of the trading pairlist.
The RemotePairList is defined in the pairlists section of the configuration settings. It uses the following configuration options:
```json
"pairlists": [
{
"method": "RemotePairList",
"pairlist_url": "https://example.com/pairlist",
"number_assets": 10,
"refresh_period": 1800,
"keep_pairlist_on_failure": true,
"read_timeout": 60,
"bearer_token": "my-bearer-token"
}
]
```
The `pairlist_url` option specifies the URL of the remote server where the pairlist is located, or the path to a local file (if `file:///` is prepended). This allows the user to use either a remote server or a local file as the source for the pairlist.
The user is responsible for providing a server or local file that returns a JSON object with the following structure (a minimal server sketch is shown at the end of this section):
```json
{
"pairs": ["XRP/USDT", "ETH/USDT", "LTC/USDT"],
"refresh_period": 1800,
}
```
The `pairs` property should contain a list of strings with the trading pairs to be used by the bot. The `refresh_period` property is optional and specifies the number of seconds that the pairlist should be cached before being refreshed.
The optional `keep_pairlist_on_failure` specifies whether the previously received pairlist should be used if the remote server is not reachable or returns an error. The default value is true.
The optional `read_timeout` specifies the maximum amount of time (in seconds) to wait for a response from the remote source. The default value is 60.
The optional `bearer_token` will be included in the request's `Authorization` header.
!!! Note
In case of a server error, the last received pairlist will be kept if `keep_pairlist_on_failure` is set to true; when set to false, an empty pairlist is returned.
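As an illustration, a minimal pairlist server satisfying the structure above can be sketched with Python's standard library (a hedged example: host, port, and pairs are arbitrary, and any server returning this JSON shape should work):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PAIRLIST = {"pairs": ["XRP/USDT", "ETH/USDT", "LTC/USDT"], "refresh_period": 1800}

class PairlistHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the pairlist as JSON on every GET request
        body = json.dumps(PAIRLIST).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8000), PairlistHandler).serve_forever()
```

The corresponding `pairlist_url` for this sketch would be `http://127.0.0.1:8000/`.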
#### AgeFilter
Removes pairs that have been listed on the exchange for less than `min_days_listed` days (defaults to `10`) or more than `max_days_listed` days (default `None`, meaning no upper limit).

View File

@ -1,6 +1,7 @@
![freqtrade](assets/freqtrade_poweredby.svg)
[![Freqtrade CI](https://github.com/freqtrade/freqtrade/workflows/Freqtrade%20CI/badge.svg)](https://github.com/freqtrade/freqtrade/actions/)
[![DOI](https://joss.theoj.org/papers/10.21105/joss.04864/status.svg)](https://doi.org/10.21105/joss.04864)
[![Coverage Status](https://coveralls.io/repos/github/freqtrade/freqtrade/badge.svg?branch=develop&service=github)](https://coveralls.io/github/freqtrade/freqtrade?branch=develop)
[![Maintainability](https://api.codeclimate.com/v1/badges/5737e6d668200b7518ff/maintainability)](https://codeclimate.com/github/freqtrade/freqtrade/maintainability)

View File

@ -92,6 +92,8 @@ One account is used to share collateral between markets (trading pairs). Margin
"margin_mode": "cross"
```
Please read the [exchange specific notes](exchanges.md) for exchanges that support this mode and how they differ.
## Set leverage to use
Different strategies and risk profiles will require different levels of leverage.

View File

@ -11,9 +11,6 @@
{% endif %}
<div class="md-sidebar md-sidebar--primary" data-md-component="sidebar" data-md-type="navigation" {{ hidden }}>
<div class="md-sidebar__scrollwrap">
<div id="widget-wrapper">
</div>
<div class="md-sidebar__inner">
{% include "partials/nav.html" %}
</div>
@ -44,25 +41,4 @@
<script src="https://code.jquery.com/jquery-3.4.1.min.js"
integrity="sha256-CSXorXvZcTkaix6Yvo6HppcZGetbYMGWSFlBw8HfCJo=" crossorigin="anonymous"></script>
<!-- Load binance SDK -->
<script async defer src="https://public.bnbstatic.com/static/js/broker-sdk/broker-sdk@1.0.0.min.js"></script>
<script>
window.onload = function () {
var sidebar = document.getElementById('widget-wrapper')
var newDiv = document.createElement("div");
newDiv.id = "widget";
try {
sidebar.prepend(newDiv);
window.binanceBrokerPortalSdk.initBrokerSDK('#widget', {
apiHost: 'https://www.binance.com',
brokerId: 'R4BD3S82',
slideTime: 4e4,
});
} catch(err) {
console.log(err)
}
}
</script>
{% endblock %}

View File

@ -13,12 +13,12 @@ Feel free to use a visual Database editor like SqliteBrowser if you feel more co
sudo apt-get install sqlite3
```
### Using sqlite3 via docker-compose
### Using sqlite3 via docker
The freqtrade docker image does contain sqlite3, so you can edit the database without having to install anything on the host system.
``` bash
docker-compose exec freqtrade /bin/bash
docker compose exec freqtrade /bin/bash
sqlite3 <database-file>.sqlite
```

View File

@ -773,7 +773,7 @@ class DigDeeperStrategy(IStrategy):
* Sell 100@10\$ -> Avg price: 8.5\$, realized profit 150\$, 17.65%
* Buy 150@11\$ -> Avg price: 10\$, realized profit 150\$, 17.65%
* Sell 100@12\$ -> Avg price: 10\$, total realized profit 350\$, 20%
* Sell 150@14\$ -> Avg price: 10\$, total realized profit 950\$, 40%
* Sell 150@14\$ -> Avg price: 10\$, total realized profit 950\$, 40% <- *This will be the last "Exit" message*
The total profit for this trade was 950$ on a 3350$ investment (`100@8$ + 100@9$ + 150@11$`). As such - the final relative profit is 28.35% (`950 / 3350`).

View File

@ -364,8 +364,8 @@ class AwesomeStrategy(IStrategy):
timeframe_mins = timeframe_to_minutes(timeframe)
minimal_roi = {
"0": 0.05, # 5% for the first 3 candles
str(timeframe_mins * 3)): 0.02, # 2% after 3 candles
str(timeframe_mins * 6)): 0.01, # 1% After 6 candles
str(timeframe_mins * 3): 0.02, # 2% after 3 candles
str(timeframe_mins * 6): 0.01, # 1% After 6 candles
}
```
@ -989,38 +989,18 @@ from freqtrade.persistence import Trade
The following example queries for the current pair and trades from today; however, other filters can easily be added.
``` python
if self.config['runmode'].value in ('live', 'dry_run'):
trades = Trade.get_trades([Trade.pair == metadata['pair'],
Trade.open_date > datetime.utcnow() - timedelta(days=1),
Trade.is_open.is_(False),
trades = Trade.get_trades_proxy(pair=metadata['pair'],
open_date=datetime.now(timezone.utc) - timedelta(days=1),
is_open=False,
]).order_by(Trade.close_date).all()
# Summarize profit for this pair.
curdayprofit = sum(trade.close_profit for trade in trades)
# Summarize profit for this pair.
curdayprofit = sum(trade.close_profit for trade in trades)
```
Get amount of stake_currency currently invested in Trades:
``` python
if self.config['runmode'].value in ('live', 'dry_run'):
total_stakes = Trade.total_open_trades_stakes()
```
Retrieve performance per pair.
Returns a List of dicts per pair.
``` python
if self.config['runmode'].value in ('live', 'dry_run'):
performance = Trade.get_overall_performance()
```
Sample return value: ETH/BTC had 5 trades, with a total profit of 1.5% (ratio of 0.015).
``` json
{"pair": "ETH/BTC", "profit": 0.015, "count": 5}
```
For a full list of available methods, please consult the [Trade object](trade-object.md) documentation.
!!! Warning
Trade history is not available during backtesting or hyperopt.
Trade history is not available in `populate_*` methods during backtesting or hyperopt, and will result in empty results.
## Prevent trades from happening for a specific pair

View File

@ -2,12 +2,37 @@
Debugging a strategy can be time-consuming. Freqtrade offers helper functions to visualize raw data.
The following assumes you work with SampleStrategy, data for the 5m timeframe from Binance, and have downloaded it into the data directory in the default location.
Please follow the [documentation](https://www.freqtrade.io/en/stable/data-download/) for more details.
## Setup
### Change Working directory to repository root
```python
import os
from pathlib import Path
# Change directory
# Modify this cell to ensure that the output shows the correct path.
# Define all paths relative to the project root shown in the cell output
project_root = "somedir/freqtrade"
i=0
try:
os.chdir(project_root)
assert Path('LICENSE').is_file()
except:
while i<4 and (not Path('LICENSE').is_file()):
os.chdir(Path(Path.cwd(), '../'))
i+=1
project_root = Path.cwd()
print(Path.cwd())
```
### Configure Freqtrade environment
```python
from freqtrade.configuration import Configuration
# Customize these according to your needs.
@ -15,14 +40,14 @@ from freqtrade.configuration import Configuration
# Initialize empty configuration object
config = Configuration.from_files([])
# Optionally (recommended), use existing configuration file
# config = Configuration.from_files(["config.json"])
# config = Configuration.from_files(["user_data/config.json"])
# Define some constants
config["timeframe"] = "5m"
# Name of the strategy class
config["strategy"] = "SampleStrategy"
# Location of the data
data_location = config['datadir']
data_location = config["datadir"]
# Pair to analyze - Only use one pair here
pair = "BTC/USDT"
```
@ -36,12 +61,12 @@ from freqtrade.enums import CandleType
candles = load_pair_history(datadir=data_location,
timeframe=config["timeframe"],
pair=pair,
data_format = "hdf5",
data_format = "json", # Make sure to update this to your data
candle_type=CandleType.SPOT,
)
# Confirm success
print("Loaded " + str(len(candles)) + f" rows of data for {pair} from {data_location}")
print(f"Loaded {len(candles)} rows of data for {pair} from {data_location}")
candles.head()
```

View File

@ -11,18 +11,3 @@
.rst-versions .rst-other-versions {
color: white;
}
#widget-wrapper {
height: calc(220px * 0.5625 + 18px);
width: 220px;
margin: 0 auto 16px auto;
border-style: solid;
border-color: var(--md-code-bg-color);
border-width: 1px;
border-radius: 5px;
}
@media screen and (max-width: calc(76.25em - 1px)) {
#widget-wrapper { display: none; }
}

docs/trade-object.md (new file)
View File

@ -0,0 +1,148 @@
# Trade Object
## Trade
A position freqtrade enters is stored in a `Trade` object - which is persisted to the database.
It's a core concept of freqtrade - and something you'll come across in many sections of the documentation, which will most likely point you to this location.
It will be passed to the strategy in many [strategy callbacks](strategy-callbacks.md). The object passed to the strategy cannot be modified directly. Indirect modifications may occur based on callback results.
## Trade - Available attributes
The following attributes / properties are available for each individual trade - and can be used with `trade.<property>` (e.g. `trade.pair`). A short usage sketch follows the table.
| Attribute | DataType | Description |
|------------|-------------|-------------|
`pair`| string | Pair of this trade
`is_open`| boolean | Is the trade currently open, or has it been concluded
`open_rate`| float | Rate this trade was entered at (Avg. entry rate in case of trade-adjustments)
`close_rate`| float | Close rate - only set when is_open = False
`stake_amount`| float | Amount in Stake (or Quote) currency.
`amount`| float | Amount in Asset / Base currency that is currently owned.
`open_date`| datetime | Timestamp when trade was opened **use `open_date_utc` instead**
`open_date_utc`| datetime | Timestamp when trade was opened - in UTC
`close_date`| datetime | Timestamp when trade was closed **use `close_date_utc` instead**
`close_date_utc`| datetime | Timestamp when trade was closed - in UTC
`close_profit`| float | Relative profit at the time of trade closure. `0.01` == 1%
`close_profit_abs`| float | Absolute profit (in stake currency) at the time of trade closure.
`leverage` | float | Leverage used for this trade - defaults to 1.0 in spot markets.
`enter_tag`| string | Tag provided on entry via the `enter_tag` column in the dataframe
`is_short` | boolean | True for short trades, False otherwise
`orders` | Order[] | List of order objects attached to this trade (includes both filled and cancelled orders)
`date_last_filled_utc` | datetime | Time of the last filled order
`entry_side` | "buy" / "sell" | Order Side the trade was entered
`exit_side` | "buy" / "sell" | Order Side that will result in a trade exit / position reduction.
`trade_direction` | "long" / "short" | Trade direction in text - long or short.
`nr_of_successful_entries` | int | Number of successful (filled) entry orders
`nr_of_successful_exits` | int | Number of successful (filled) exit orders
## Class methods
The following are class methods - which return generic information, and usually result in an explicit query against the database.
They can be used as `Trade.<method>` - e.g. `open_trades = Trade.get_open_trade_count()`
!!! Warning "Backtesting/hyperopt"
Most methods will work in both backtesting / hyperopt and live/dry modes.
During backtesting, it's limited to usage in [strategy callbacks](strategy-callbacks.md). Usage in `populate_*()` methods is not supported and will result in wrong results.
### get_trades_proxy
When your strategy needs some information on existing (open or closed) trades - it's best to use `Trade.get_trades_proxy()`.
Usage:
``` python
from freqtrade.persistence import Trade
from datetime import timedelta
# ...
trade_hist = Trade.get_trades_proxy(pair='ETH/USDT', is_open=False, open_date=current_date - timedelta(days=2))
```
`get_trades_proxy()` supports the following keyword arguments. All arguments are optional - calling `get_trades_proxy()` without arguments will return a list of all trades in the database.
* `pair` e.g. `pair='ETH/USDT'`
* `is_open` e.g. `is_open=False`
* `open_date` e.g. `open_date=current_date - timedelta(days=2)`
* `close_date` e.g. `close_date=current_date - timedelta(days=5)`
### get_open_trade_count
Get the number of currently open trades
``` python
from freqtrade.persistence import Trade
# ...
open_trades = Trade.get_open_trade_count()
```
### get_total_closed_profit
Retrieve the total profit the bot has generated so far.
Aggregates `close_profit_abs` for all closed trades.
``` python
from freqtrade.persistence import Trade
# ...
profit = Trade.get_total_closed_profit()
```
### total_open_trades_stakes
Retrieve the total stake_amount that's currently in trades.
``` python
from freqtrade.persistence import Trade
# ...
total_stakes = Trade.total_open_trades_stakes()
```
### get_overall_performance
Retrieve the overall performance - similar to the `/performance` telegram command.
``` python
from freqtrade.persistence import Trade
# ...
if self.config['runmode'].value in ('live', 'dry_run'):
performance = Trade.get_overall_performance()
```
Sample return value: ETH/BTC had 5 trades, with a total profit of 1.5% (ratio of 0.015).
``` json
{"pair": "ETH/BTC", "profit": 0.015, "count": 5}
```
## Order Object
An `Order` object represents an order on the exchange (or a simulated order in dry-run mode).
An `Order` object will always be tied to its corresponding [`Trade`](#trade-object), and only really makes sense in the context of a trade.
### Order - Available attributes
An Order object is typically attached to a trade.
Most properties here can be None as they are dependent on the exchange response.
| Attribute | DataType | Description |
|------------|-------------|-------------|
`trade` | Trade | Trade object this order is attached to
`ft_pair` | string | Pair this order is for
`ft_is_open` | boolean | is the order filled?
`order_type` | string | Order type as defined on the exchange - usually market, limit or stoploss
`status` | string | Status as defined by ccxt. Usually open, closed, expired or canceled
`side` | string | Buy or Sell
`price` | float | Price the order was placed at
`average` | float | Average price the order filled at
`amount` | float | Amount in base currency
`filled` | float | Filled amount (in base currency)
`remaining` | float | Remaining amount
`cost` | float | Cost of the order - usually average * filled
`order_date` | datetime | Order creation date **use `order_date_utc` instead**
`order_date_utc` | datetime | Order creation date (in UTC)
`order_fill_date` | datetime | Order fill date **use `order_fill_date_utc` instead**
`order_fill_date_utc` | datetime | Order fill date

View File

@ -6,14 +6,14 @@ To update your freqtrade installation, please use one of the below methods, corr
Breaking changes / changed behavior will be documented in the changelog that is posted alongside every release.
For the develop branch, please follow PR's to avoid being surprised by changes.
## docker-compose
## docker
!!! Note "Legacy installations using the `master` image"
We're switching from master to stable for the release Images - please adjust your docker compose file and replace `freqtradeorg/freqtrade:master` with `freqtradeorg/freqtrade:stable`
``` bash
docker-compose pull
docker-compose up -d
docker compose pull
docker compose up -d
```
## Installation via setup script

View File

@ -652,7 +652,7 @@ Common arguments:
You can also use webserver mode via docker.
Starting a one-off container requires the configuration of the port explicitly, as ports are not exposed by default.
You can use `docker-compose run --rm -p 127.0.0.1:8080:8080 freqtrade webserver` to start a one-off container that'll be removed once you stop it. This assumes that port 8080 is still available and no other bot is running on that port.
You can use `docker compose run --rm -p 127.0.0.1:8080:8080 freqtrade webserver` to start a one-off container that'll be removed once you stop it. This assumes that port 8080 is still available and no other bot is running on that port.
Alternatively, you can reconfigure the docker-compose file to have the command updated:
@ -662,7 +662,7 @@ Alternatively, you can reconfigure the docker-compose file to have the command u
--config /freqtrade/user_data/config.json
```
You can now use `docker-compose up` to start the webserver.
You can now use `docker compose up` to start the webserver.
This assumes that the configuration has a webserver enabled and configured for docker (listening port = `0.0.0.0`).
!!! Tip

View File

@ -1,5 +1,5 @@
""" Freqtrade bot """
__version__ = '2022.12.dev'
__version__ = '2023.1.dev'
if 'dev' in __version__:
try:

View File

@ -355,6 +355,13 @@ def _validate_freqai_include_timeframes(conf: Dict[str, Any]) -> None:
f"Main timeframe of {main_tf} must be smaller or equal to FreqAI "
f"`include_timeframes`.Offending include-timeframes: {', '.join(offending_lines)}")
# Ensure that the base timeframe is included in the include_timeframes list
if main_tf not in freqai_include_timeframes:
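# Auto-correct the configuration by prepending the main timeframe,
# rather than failing validation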
feature_parameters = conf.get('freqai', {}).get('feature_parameters', {})
include_timeframes = [main_tf] + freqai_include_timeframes
conf.get('freqai', {}).get('feature_parameters', {}) \
.update({**feature_parameters, 'include_timeframes': include_timeframes})
def _validate_freqai_backtest(conf: Dict[str, Any]) -> None:
if conf.get('runmode', RunMode.OTHER) == RunMode.BACKTEST:

View File

@ -31,7 +31,7 @@ HYPEROPT_LOSS_BUILTIN = ['ShortTradeDurHyperOptLoss', 'OnlyProfitHyperOptLoss',
'CalmarHyperOptLoss',
'MaxDrawDownHyperOptLoss', 'MaxDrawDownRelativeHyperOptLoss',
'ProfitDrawDownHyperOptLoss']
AVAILABLE_PAIRLISTS = ['StaticPairList', 'VolumePairList', 'ProducerPairList',
AVAILABLE_PAIRLISTS = ['StaticPairList', 'VolumePairList', 'ProducerPairList', 'RemotePairList',
'AgeFilter', 'OffsetFilter', 'PerformanceFilter',
'PrecisionFilter', 'PriceFilter', 'RangeStabilityFilter',
'ShuffleFilter', 'SpreadFilter', 'VolatilityFilter']
@ -61,6 +61,7 @@ USERPATH_FREQAIMODELS = 'freqaimodels'
TELEGRAM_SETTING_OPTIONS = ['on', 'off', 'silent']
WEBHOOK_FORMAT_OPTIONS = ['form', 'json', 'raw']
FULL_DATAFRAME_THRESHOLD = 100
ENV_VAR_PREFIX = 'FREQTRADE__'
@ -608,8 +609,7 @@ CONF_SCHEMA = {
"backtest_period_days",
"identifier",
"feature_parameters",
"data_split_parameters",
"model_training_parameters"
"data_split_parameters"
]
},
},

View File

@ -20,8 +20,8 @@ from freqtrade.persistence import LocalTrade, Trade, init_db
logger = logging.getLogger(__name__)
# Newest format
BT_DATA_COLUMNS = ['pair', 'stake_amount', 'amount', 'open_date', 'close_date',
'open_rate', 'close_rate',
BT_DATA_COLUMNS = ['pair', 'stake_amount', 'max_stake_amount', 'amount',
'open_date', 'close_date', 'open_rate', 'close_rate',
'fee_open', 'fee_close', 'trade_duration',
'profit_ratio', 'profit_abs', 'exit_reason',
'initial_stop_loss_abs', 'initial_stop_loss_ratio', 'stop_loss_abs',
@ -241,6 +241,33 @@ def find_existing_backtest_stats(dirname: Union[Path, str], run_ids: Dict[str, s
return results
def _load_backtest_data_df_compatibility(df: pd.DataFrame) -> pd.DataFrame:
"""
Compatibility support for older backtest data.
"""
df['open_date'] = pd.to_datetime(df['open_date'],
utc=True,
infer_datetime_format=True
)
df['close_date'] = pd.to_datetime(df['close_date'],
utc=True,
infer_datetime_format=True
)
# Compatibility support for pre short Columns
if 'is_short' not in df.columns:
df['is_short'] = False
if 'leverage' not in df.columns:
df['leverage'] = 1.0
if 'enter_tag' not in df.columns:
df['enter_tag'] = df['buy_tag']
df = df.drop(['buy_tag'], axis=1)
if 'max_stake_amount' not in df.columns:
df['max_stake_amount'] = df['stake_amount']
if 'orders' not in df.columns:
df['orders'] = None
return df
def load_backtest_data(filename: Union[Path, str], strategy: Optional[str] = None) -> pd.DataFrame:
"""
Load backtest data file.
@ -269,24 +296,7 @@ def load_backtest_data(filename: Union[Path, str], strategy: Optional[str] = Non
data = data['strategy'][strategy]['trades']
df = pd.DataFrame(data)
if not df.empty:
df['open_date'] = pd.to_datetime(df['open_date'],
utc=True,
infer_datetime_format=True
)
df['close_date'] = pd.to_datetime(df['close_date'],
utc=True,
infer_datetime_format=True
)
# Compatibility support for pre short Columns
if 'is_short' not in df.columns:
df['is_short'] = 0
if 'leverage' not in df.columns:
df['leverage'] = 1.0
if 'enter_tag' not in df.columns:
df['enter_tag'] = df['buy_tag']
df = df.drop(['buy_tag'], axis=1)
if 'orders' not in df.columns:
df['orders'] = None
df = _load_backtest_data_df_compatibility(df)
else:
# old format - only with lists.

View File

@ -9,14 +9,16 @@ from collections import deque
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional, Tuple
from pandas import DataFrame
from pandas import DataFrame, to_timedelta
from freqtrade.configuration import TimeRange
from freqtrade.constants import Config, ListPairsWithTimeframes, PairWithTimeframe
from freqtrade.constants import (FULL_DATAFRAME_THRESHOLD, Config, ListPairsWithTimeframes,
PairWithTimeframe)
from freqtrade.data.history import load_pair_history
from freqtrade.enums import CandleType, RPCMessageType, RunMode
from freqtrade.exceptions import ExchangeError, OperationalException
from freqtrade.exchange import Exchange, timeframe_to_seconds
from freqtrade.misc import append_candles_to_dataframe
from freqtrade.rpc import RPCManager
from freqtrade.util import PeriodicCache
@ -120,7 +122,7 @@ class DataProvider:
'type': RPCMessageType.ANALYZED_DF,
'data': {
'key': pair_key,
'df': dataframe,
'df': dataframe.tail(1),
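# Broadcast only the most recent candle; consumers append it to
# their cached copy (see _add_external_df below)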
'la': datetime.now(timezone.utc)
}
}
@ -131,7 +133,7 @@ class DataProvider:
'data': pair_key,
})
def _add_external_df(
def _replace_external_df(
self,
pair: str,
dataframe: DataFrame,
@ -157,6 +159,85 @@ class DataProvider:
self.__producer_pairs_df[producer_name][pair_key] = (dataframe, _last_analyzed)
logger.debug(f"External DataFrame for {pair_key} from {producer_name} added.")
def _add_external_df(
self,
pair: str,
dataframe: DataFrame,
last_analyzed: datetime,
timeframe: str,
candle_type: CandleType,
producer_name: str = "default"
) -> Tuple[bool, int]:
"""
Append a candle to the existing external dataframe. The incoming dataframe
must have at least 1 candle.
:param pair: pair to get the data for
:param timeframe: Timeframe to get data for
:param candle_type: Any of the enum CandleType (must match trading mode!)
:returns: False if the candle could not be appended, or the int number of missing candles.
"""
pair_key = (pair, timeframe, candle_type)
if dataframe.empty:
# The incoming dataframe must have at least 1 candle
return (False, 0)
if len(dataframe) >= FULL_DATAFRAME_THRESHOLD:
# This is likely a full dataframe
# Add the dataframe to the dataprovider
self._replace_external_df(
pair,
dataframe,
last_analyzed=last_analyzed,
timeframe=timeframe,
candle_type=candle_type,
producer_name=producer_name
)
return (True, 0)
if (producer_name not in self.__producer_pairs_df
or pair_key not in self.__producer_pairs_df[producer_name]):
# We don't have data from this producer yet,
# or we don't have data for this pair_key
# return False and 1000 for the full df
return (False, 1000)
existing_df, _ = self.__producer_pairs_df[producer_name][pair_key]
# CHECK FOR MISSING CANDLES
timeframe_delta = to_timedelta(timeframe) # Convert the timeframe to a timedelta for pandas
local_last = existing_df.iloc[-1]['date'] # We want the last date from our copy
incoming_first = dataframe.iloc[0]['date'] # We want the first date from the incoming
# Remove existing candles that are newer than the incoming first candle
existing_df1 = existing_df[existing_df['date'] < incoming_first]
candle_difference = (incoming_first - local_last) / timeframe_delta
# If the difference divided by the timeframe is 1, then this
# is the candle we want and the incoming data isn't missing any.
# If the candle_difference is more than 1, that means
# we missed some candles between our data and the incoming
# so return False and candle_difference.
if candle_difference > 1:
return (False, candle_difference)
if existing_df1.empty:
appended_df = dataframe
else:
appended_df = append_candles_to_dataframe(existing_df1, dataframe)
# Everything is good, we appended
self._replace_external_df(
pair,
appended_df,
last_analyzed=last_analyzed,
timeframe=timeframe,
candle_type=candle_type,
producer_name=producer_name
)
return (True, 0)
def get_producer_df(
self,
pair: str,

View File

@ -52,7 +52,7 @@ def _process_candles_and_indicators(pairlist, strategy_name, trades, signal_cand
return analysed_trades_dict
def _analyze_candles_and_indicators(pair, trades, signal_candles):
def _analyze_candles_and_indicators(pair, trades: pd.DataFrame, signal_candles: pd.DataFrame):
buyf = signal_candles
if len(buyf) > 0:
@ -120,7 +120,7 @@ def _do_group_table_output(bigdf, glist):
else:
agg_mask = {'profit_abs': ['count', 'sum', 'median', 'mean'],
'profit_ratio': ['sum', 'median', 'mean']}
'profit_ratio': ['median', 'mean', 'sum']}
agg_cols = ['num_buys', 'profit_abs_sum', 'profit_abs_median',
'profit_abs_mean', 'median_profit_pct', 'mean_profit_pct',
'total_profit_pct']

View File

@ -1,4 +1,6 @@
import logging
import math
from datetime import datetime
from typing import Dict, Tuple
import numpy as np
@ -190,3 +192,119 @@ def calculate_cagr(days_passed: int, starting_balance: float, final_balance: flo
:return: CAGR
"""
return (final_balance / starting_balance) ** (1 / (days_passed / 365)) - 1
def calculate_expectancy(trades: pd.DataFrame) -> float:
"""
Calculate expectancy
:param trades: DataFrame containing trades (requires columns close_date and profit_ratio)
:return: expectancy
"""
if len(trades) == 0:
return 0
expectancy = 1
profit_sum = trades.loc[trades['profit_abs'] > 0, 'profit_abs'].sum()
loss_sum = abs(trades.loc[trades['profit_abs'] < 0, 'profit_abs'].sum())
nb_win_trades = len(trades.loc[trades['profit_abs'] > 0])
nb_loss_trades = len(trades.loc[trades['profit_abs'] < 0])
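# Expectancy = ((1 + risk_reward_ratio) * winrate) - 1, where
# risk_reward_ratio = average_win / average_loss.
# Values > 0 indicate a positive expected outcome per unit risked.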
if (nb_win_trades > 0) and (nb_loss_trades > 0):
average_win = profit_sum / nb_win_trades
average_loss = loss_sum / nb_loss_trades
risk_reward_ratio = average_win / average_loss
winrate = nb_win_trades / len(trades)
expectancy = ((1 + risk_reward_ratio) * winrate) - 1
elif nb_win_trades == 0:
expectancy = 0
return expectancy
def calculate_sortino(trades: pd.DataFrame, min_date: datetime, max_date: datetime,
starting_balance: float) -> float:
"""
Calculate sortino
:param trades: DataFrame containing trades (requires columns profit_abs)
:return: sortino
"""
if (len(trades) == 0) or (min_date is None) or (max_date is None) or (min_date == max_date):
return 0
total_profit = trades['profit_abs'] / starting_balance
days_period = max(1, (max_date - min_date).days)
expected_returns_mean = total_profit.sum() / days_period
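# Downside deviation: standard deviation of losing trades' returns only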
down_stdev = np.std(trades.loc[trades['profit_abs'] < 0, 'profit_abs'] / starting_balance)
if down_stdev != 0 and not np.isnan(down_stdev):
sortino_ratio = expected_returns_mean / down_stdev * np.sqrt(365)
else:
# Define high (negative) sortino ratio to be clear that this is NOT optimal.
sortino_ratio = -100
# print(expected_returns_mean, down_stdev, sortino_ratio)
return sortino_ratio
def calculate_sharpe(trades: pd.DataFrame, min_date: datetime, max_date: datetime,
starting_balance: float) -> float:
"""
Calculate sharpe
:param trades: DataFrame containing trades (requires column profit_abs)
:return: sharpe
"""
if (len(trades) == 0) or (min_date is None) or (max_date is None) or (min_date == max_date):
return 0
total_profit = trades['profit_abs'] / starting_balance
days_period = max(1, (max_date - min_date).days)
expected_returns_mean = total_profit.sum() / days_period
up_stdev = np.std(total_profit)
if up_stdev != 0:
sharp_ratio = expected_returns_mean / up_stdev * np.sqrt(365)
else:
# Define high (negative) sharpe ratio to be clear that this is NOT optimal.
sharp_ratio = -100
# print(expected_returns_mean, up_stdev, sharp_ratio)
return sharp_ratio
def calculate_calmar(trades: pd.DataFrame, min_date: datetime, max_date: datetime,
starting_balance: float) -> float:
"""
Calculate calmar
:param trades: DataFrame containing trades (requires columns close_date and profit_abs)
:return: calmar
"""
if (len(trades) == 0) or (min_date is None) or (max_date is None) or (min_date == max_date):
return 0
total_profit = trades['profit_abs'].sum() / starting_balance
days_period = max(1, (max_date - min_date).days)
# adding slippage of 0.1% per trade
# total_profit = total_profit - 0.0005
expected_returns_mean = total_profit / days_period * 100
# calculate max drawdown
try:
_, _, _, _, _, max_drawdown = calculate_max_drawdown(
trades, value_col="profit_abs", starting_balance=starting_balance
)
except ValueError:
max_drawdown = 0
if max_drawdown != 0:
calmar_ratio = expected_returns_mean / max_drawdown * math.sqrt(365)
else:
# Define high (negative) calmar ratio to be clear that this is NOT optimal.
calmar_ratio = -100
# print(expected_returns_mean, max_drawdown, calmar_ratio)
return calmar_ratio
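All three ratios annualize a mean daily return and differ only in the risk denominator: downside deviation, overall deviation, or max drawdown. Below is a sketch exercising them on toy data, assuming the freqtrade.data.metrics import path used by the reporting code further down; calculate_calmar also needs a close_date column, since calculate_max_drawdown orders trades by it.

```python
from datetime import datetime, timedelta

import pandas as pd

from freqtrade.data.metrics import (calculate_calmar, calculate_sharpe,
                                    calculate_sortino)

start = datetime(2022, 1, 1)
# Toy trades: two winners, two losers over a ten-day window.
trades = pd.DataFrame({
    'close_date': [start + timedelta(days=i) for i in range(1, 5)],
    'profit_abs': [10.0, -5.0, 8.0, -3.0],
})
min_date, max_date = start, start + timedelta(days=10)

print(calculate_sortino(trades, min_date, max_date, starting_balance=1000))
print(calculate_sharpe(trades, min_date, max_date, starting_balance=1000))
print(calculate_calmar(trades, min_date, max_date, starting_balance=1000))
```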

View File

@ -3,7 +3,6 @@
from freqtrade.exchange.common import remove_credentials, MAP_EXCHANGE_CHILDCLASS
from freqtrade.exchange.exchange import Exchange
# isort: on
from freqtrade.exchange.bibox import Bibox
from freqtrade.exchange.binance import Binance
from freqtrade.exchange.bitpanda import Bitpanda
from freqtrade.exchange.bittrex import Bittrex

View File

@ -1,28 +0,0 @@
""" Bibox exchange subclass """
import logging
from typing import Dict
from freqtrade.exchange import Exchange
logger = logging.getLogger(__name__)
class Bibox(Exchange):
"""
Bibox exchange class. Contains adjustments needed for Freqtrade to work
with this exchange.
Please note that this exchange is not included in the list of exchanges
officially supported by the Freqtrade development team. So some features
may still not work as expected.
"""
# fetchCurrencies API point requires authentication for Bibox,
# so switch it off for Freqtrade load_markets()
@property
def _ccxt_config(self) -> Dict:
# Parameters to add directly to ccxt sync/async initialization.
config = {"has": {"fetchCurrencies": False}}
config.update(super()._ccxt_config)
return config

View File

@ -11,7 +11,7 @@ from freqtrade.enums import CandleType, MarginMode, TradingMode
from freqtrade.exceptions import DDosProtection, OperationalException, TemporaryError
from freqtrade.exchange import Exchange
from freqtrade.exchange.common import retrier
from freqtrade.exchange.types import Tickers
from freqtrade.exchange.types import OHLCVResponse, Tickers
from freqtrade.misc import deep_merge_dicts, json_load
@ -31,7 +31,7 @@ class Binance(Exchange):
"ccxt_futures_name": "future"
}
_ft_has_futures: Dict = {
"stoploss_order_types": {"limit": "limit", "market": "market"},
"stoploss_order_types": {"limit": "stop", "market": "stop_market"},
"tickers_have_price": False,
}
@ -112,7 +112,7 @@ class Binance(Exchange):
since_ms: int, candle_type: CandleType,
is_new_pair: bool = False, raise_: bool = False,
until_ms: Optional[int] = None
) -> Tuple[str, str, str, List]:
) -> OHLCVResponse:
"""
Overwrite to introduce "fast new pair" functionality by detecting the pair's listing date
Does not work for other exchanges, which don't return the earliest data when called with "0"

View File

@ -36,7 +36,7 @@ from freqtrade.exchange.exchange_utils import (CcxtModuleType, amount_to_contrac
price_to_precision, timeframe_to_minutes,
timeframe_to_msecs, timeframe_to_next_date,
timeframe_to_prev_date, timeframe_to_seconds)
from freqtrade.exchange.types import Ticker, Tickers
from freqtrade.exchange.types import OHLCVResponse, Ticker, Tickers
from freqtrade.misc import (chunks, deep_merge_dicts, file_dump_json, file_load_json,
safe_value_fallback2)
from freqtrade.plugins.pairlist.pairlist_helpers import expand_pairlist
@ -1813,32 +1813,18 @@ class Exchange:
:param candle_type: '', mark, index, premiumIndex, or funding_rate
:return: List with candle (OHLCV) data
"""
pair, _, _, data = self.loop.run_until_complete(
pair, _, _, data, _ = self.loop.run_until_complete(
self._async_get_historic_ohlcv(pair=pair, timeframe=timeframe,
since_ms=since_ms, until_ms=until_ms,
is_new_pair=is_new_pair, candle_type=candle_type))
logger.info(f"Downloaded data for {pair} with length {len(data)}.")
return data
def get_historic_ohlcv_as_df(self, pair: str, timeframe: str,
since_ms: int, candle_type: CandleType) -> DataFrame:
"""
Minimal wrapper around get_historic_ohlcv - converting the result into a dataframe
:param pair: Pair to download
:param timeframe: Timeframe to get data for
:param since_ms: Timestamp in milliseconds to get history from
:param candle_type: Any of the enum CandleType (must match trading mode!)
:return: OHLCV DataFrame
"""
ticks = self.get_historic_ohlcv(pair, timeframe, since_ms=since_ms, candle_type=candle_type)
return ohlcv_to_dataframe(ticks, timeframe, pair=pair, fill_missing=True,
drop_incomplete=self._ohlcv_partial_candle)
async def _async_get_historic_ohlcv(self, pair: str, timeframe: str,
since_ms: int, candle_type: CandleType,
is_new_pair: bool = False, raise_: bool = False,
until_ms: Optional[int] = None
) -> Tuple[str, str, str, List]:
) -> OHLCVResponse:
"""
Download historic ohlcv
:param is_new_pair: used by binance subclass to allow "fast" new pair downloading
@ -1869,15 +1855,16 @@ class Exchange:
continue
else:
# Deconstruct tuple if it's not an exception
p, _, c, new_data = res
p, _, c, new_data, _ = res
if p == pair and c == candle_type:
data.extend(new_data)
# Sort data again after extending the result - above calls return in "async order"
data = sorted(data, key=lambda x: x[0])
return pair, timeframe, candle_type, data
return pair, timeframe, candle_type, data, self._ohlcv_partial_candle
def _build_coroutine(self, pair: str, timeframe: str, candle_type: CandleType,
since_ms: Optional[int], cache: bool) -> Coroutine:
def _build_coroutine(
self, pair: str, timeframe: str, candle_type: CandleType,
since_ms: Optional[int], cache: bool) -> Coroutine[Any, Any, OHLCVResponse]:
not_all_data = cache and self.required_candle_call_count > 1
if cache and (pair, timeframe, candle_type) in self._klines:
candle_limit = self.ohlcv_candle_limit(timeframe, candle_type)
@ -1914,7 +1901,7 @@ class Exchange:
"""
Build Coroutines to execute as part of refresh_latest_ohlcv
"""
input_coroutines = []
input_coroutines: List[Coroutine[Any, Any, OHLCVResponse]] = []
cached_pairs = []
for pair, timeframe, candle_type in set(pair_list):
if (timeframe not in self.timeframes
@ -1978,7 +1965,6 @@ class Exchange:
:return: Dict of [{(pair, timeframe): Dataframe}]
"""
logger.debug("Refreshing candle (OHLCV) data for %d pairs", len(pair_list))
drop_incomplete = self._ohlcv_partial_candle if drop_incomplete is None else drop_incomplete
# Gather coroutines to run
input_coroutines, cached_pairs = self._build_ohlcv_dl_jobs(pair_list, since_ms, cache)
@ -1996,8 +1982,9 @@ class Exchange:
if isinstance(res, Exception):
logger.warning(f"Async code raised an exception: {repr(res)}")
continue
# Deconstruct tuple (has 4 elements)
pair, timeframe, c_type, ticks = res
# Deconstruct tuple (has 5 elements)
pair, timeframe, c_type, ticks, drop_hint = res
drop_incomplete = drop_hint if drop_incomplete is None else drop_incomplete
ohlcv_df = self._process_ohlcv_df(
pair, timeframe, c_type, ticks, cache, drop_incomplete)
@ -2025,7 +2012,7 @@ class Exchange:
timeframe: str,
candle_type: CandleType,
since_ms: Optional[int] = None,
) -> Tuple[str, str, str, List]:
) -> OHLCVResponse:
"""
Asynchronously get candle history data using fetch_ohlcv
:param candle_type: '', mark, index, premiumIndex, or funding_rate
@ -2035,8 +2022,8 @@ class Exchange:
# Fetch OHLCV asynchronously
s = '(' + arrow.get(since_ms // 1000).isoformat() + ') ' if since_ms is not None else ''
logger.debug(
"Fetching pair %s, interval %s, since %s %s...",
pair, timeframe, since_ms, s
"Fetching pair %s, %s, interval %s, since %s %s...",
pair, candle_type, timeframe, since_ms, s
)
params = deepcopy(self._ft_has.get('ohlcv_params', {}))
candle_limit = self.ohlcv_candle_limit(
@ -2050,11 +2037,12 @@ class Exchange:
limit=candle_limit, params=params)
else:
# Funding rate
data = await self._api_async.fetch_funding_rate_history(
pair, since=since_ms,
limit=candle_limit)
# Convert funding rate to candle pattern
data = [[x['timestamp'], x['fundingRate'], 0, 0, 0, 0] for x in data]
data = await self._fetch_funding_rate_history(
pair=pair,
timeframe=timeframe,
limit=candle_limit,
since_ms=since_ms,
)
# Some exchanges sort OHLCV in ASC order and others in DESC.
# Ex: Bittrex returns the list of OHLCV in ASC order (oldest first, newest last)
# while GDAX returns the list of OHLCV in DESC order (newest first, oldest last)
@ -2064,9 +2052,9 @@ class Exchange:
data = sorted(data, key=lambda x: x[0])
except IndexError:
logger.exception("Error loading %s. Result was %s.", pair, data)
return pair, timeframe, candle_type, []
return pair, timeframe, candle_type, [], self._ohlcv_partial_candle
logger.debug("Done fetching pair %s, interval %s ...", pair, timeframe)
return pair, timeframe, candle_type, data
return pair, timeframe, candle_type, data, self._ohlcv_partial_candle
except ccxt.NotSupported as e:
raise OperationalException(
@ -2082,6 +2070,24 @@ class Exchange:
raise OperationalException(f'Could not fetch historical candle (OHLCV) data '
f'for pair {pair}. Message: {e}') from e
async def _fetch_funding_rate_history(
self,
pair: str,
timeframe: str,
limit: int,
since_ms: Optional[int] = None,
) -> List[List]:
"""
Fetch funding rate history - used to selectively override this by subclasses.
"""
# Funding rate
data = await self._api_async.fetch_funding_rate_history(
pair, since=since_ms,
limit=limit)
# Convert funding rate to candle pattern
data = [[x['timestamp'], x['fundingRate'], 0, 0, 0, 0] for x in data]
return data
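The conversion at the end of this helper simply squeezes each funding-rate record into OHLCV shape, storing the rate in the "open" slot and zeroing the rest. A standalone sketch with made-up records:

```python
# Illustrative funding-rate records in the shape returned by ccxt's
# fetch_funding_rate_history (timestamps and rates are made up).
funding_history = [
    {'timestamp': 1672531200000, 'fundingRate': 0.0001},
    {'timestamp': 1672560000000, 'fundingRate': -0.0002},
]
# Same comprehension as above: rate in the "open" slot, rest zeroed.
candles = [[x['timestamp'], x['fundingRate'], 0, 0, 0, 0] for x in funding_history]
print(candles)
# [[1672531200000, 0.0001, 0, 0, 0, 0], [1672560000000, -0.0002, 0, 0, 0, 0]]
```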
# Fetch historic trades
@retrier_async
@ -2745,11 +2751,16 @@ class Exchange:
"""
Important: Must be fetching data from cached values as this is used by backtesting!
PERPETUAL:
gateio: https://www.gate.io/help/futures/perpetual/22160/calculation-of-liquidation-price
gateio: https://www.gate.io/help/futures/futures/27724/liquidation-price-bankruptcy-price
> Liquidation Price = (Entry Price ± Margin / Contract Multiplier / Size) /
[ 1 ± (Maintenance Margin Ratio + Taker Rate)]
Wherein, "+" or "-" depends on whether the contract goes long or short:
"-" for long, and "+" for short.
okex: https://www.okex.com/support/hc/en-us/articles/
360053909592-VI-Introduction-to-the-isolated-mode-of-Single-Multi-currency-Portfolio-margin
:param exchange_name:
:param pair: Pair to calculate liquidation price for
:param open_rate: Entry price of position
:param is_short: True if the trade is a short, false otherwise
:param amount: Absolute value of position size incl. leverage (in base currency)
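For intuition, the gate.io formula quoted in the docstring can be transcribed literally. The helper below is a hypothetical standalone sketch, not freqtrade's implementation, reading the quoted signs as "-" for longs and "+" for shorts in numerator and denominator alike:

```python
def gateio_liquidation_price(entry_price: float, margin: float,
                             contract_multiplier: float, size: float,
                             maintenance_margin_ratio: float,
                             taker_rate: float, is_short: bool) -> float:
    # Literal transcription of the quoted formula (hypothetical helper).
    sign = 1 if is_short else -1
    numerator = entry_price + sign * margin / contract_multiplier / size
    denominator = 1 + sign * (maintenance_margin_ratio + taker_rate)
    return numerator / denominator

# A long's liquidation sits below entry, the mirrored short's above it.
print(gateio_liquidation_price(100.0, 10.0, 1.0, 1.0, 0.005, 0.0005, False))  # ~90.50
print(gateio_liquidation_price(100.0, 10.0, 1.0, 1.0, 0.005, 0.0005, True))   # ~109.40
```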
@ -2789,7 +2800,7 @@ class Exchange:
def get_maintenance_ratio_and_amt(
self,
pair: str,
nominal_value: float = 0.0,
nominal_value: float,
) -> Tuple[float, Optional[float]]:
"""
Important: Must be fetching data from cached values as this is used by backtesting!

View File

@ -1,4 +1,6 @@
from typing import Dict, Optional, TypedDict
from typing import Dict, List, Optional, Tuple, TypedDict
from freqtrade.enums import CandleType
class Ticker(TypedDict):
@ -14,3 +16,6 @@ class Ticker(TypedDict):
Tickers = Dict[str, Ticker]
# pair, timeframe, candleType, OHLCV, drop last?
OHLCVResponse = Tuple[str, str, CandleType, List, bool]
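Callers unpack the new 5-tuple positionally, with the trailing bool hinting whether the last (incomplete) candle should be dropped. A minimal sketch with made-up candle values:

```python
from freqtrade.enums import CandleType
from freqtrade.exchange.types import OHLCVResponse

res: OHLCVResponse = (
    'BTC/USDT', '5m', CandleType.SPOT,
    [[1672531200000, 16500.0, 16550.0, 16480.0, 16520.0, 12.3]],  # one OHLCV row
    True,  # drop the last (incomplete) candle
)
pair, timeframe, candle_type, ticks, drop_hint = res
```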

View File

@ -0,0 +1,125 @@
import logging
from enum import Enum
from gym import spaces
from freqtrade.freqai.RL.BaseEnvironment import BaseEnvironment, Positions
logger = logging.getLogger(__name__)
class Actions(Enum):
Neutral = 0
Buy = 1
Sell = 2
class Base3ActionRLEnv(BaseEnvironment):
"""
Base class for a 3 action environment
"""
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.actions = Actions
def set_action_space(self):
self.action_space = spaces.Discrete(len(Actions))
def step(self, action: int):
"""
Logic for a single step (incrementing one candle in time)
by the agent
:param action: int = the action type that the agent plans
to take for the current step.
:returns:
observation = current state of environment
step_reward = the reward from `calculate_reward()`
_done = if the agent "died" or if the candles finished
info = dict passed back to openai gym lib
"""
self._done = False
self._current_tick += 1
if self._current_tick == self._end_tick:
self._done = True
self._update_unrealized_total_profit()
step_reward = self.calculate_reward(action)
self.total_reward += step_reward
self.tensorboard_log(self.actions._member_names_[action])
trade_type = None
if self.is_tradesignal(action):
if action == Actions.Buy.value:
if self._position == Positions.Short:
self._update_total_profit()
self._position = Positions.Long
trade_type = "long"
self._last_trade_tick = self._current_tick
elif action == Actions.Sell.value and self.can_short:
if self._position == Positions.Long:
self._update_total_profit()
self._position = Positions.Short
trade_type = "short"
self._last_trade_tick = self._current_tick
elif action == Actions.Sell.value and not self.can_short:
self._update_total_profit()
self._position = Positions.Neutral
trade_type = "neutral"
self._last_trade_tick = None
else:
print("case not defined")
if trade_type is not None:
self.trade_history.append(
{'price': self.current_price(), 'index': self._current_tick,
'type': trade_type})
if (self._total_profit < self.max_drawdown or
self._total_unrealized_profit < self.max_drawdown):
self._done = True
self._position_history.append(self._position)
info = dict(
tick=self._current_tick,
action=action,
total_reward=self.total_reward,
total_profit=self._total_profit,
position=self._position.value,
trade_duration=self.get_trade_duration(),
current_profit_pct=self.get_unrealized_profit()
)
observation = self._get_observation()
self._update_history(info)
return observation, step_reward, self._done, info
def is_tradesignal(self, action: int) -> bool:
"""
Determine if the signal is a trade signal
e.g.: agent wants an Actions.Buy while it is in a Positions.Short
"""
return (
(action == Actions.Buy.value and self._position == Positions.Neutral)
or (action == Actions.Sell.value and self._position == Positions.Long)
or (action == Actions.Sell.value and self._position == Positions.Neutral
and self.can_short)
or (action == Actions.Buy.value and self._position == Positions.Short
and self.can_short)
)
def _is_valid(self, action: int) -> bool:
"""
Determine if the signal is valid.
e.g.: agent wants a Actions.Sell while it is in a Positions.Long
"""
if self.can_short:
return action in [Actions.Buy.value, Actions.Sell.value, Actions.Neutral.value]
else:
if action == Actions.Sell.value and self._position != Positions.Long:
return False
return True

View File

@ -46,9 +46,9 @@ class Base4ActionRLEnv(BaseEnvironment):
self._done = True
self._update_unrealized_total_profit()
step_reward = self.calculate_reward(action)
self.total_reward += step_reward
self.tensorboard_log(self.actions._member_names_[action])
trade_type = None
if self.is_tradesignal(action):
@ -88,7 +88,8 @@ class Base4ActionRLEnv(BaseEnvironment):
{'price': self.current_price(), 'index': self._current_tick,
'type': trade_type})
if self._total_profit < 1 - self.rl_config.get('max_training_drawdown_pct', 0.8):
if (self._total_profit < self.max_drawdown or
self._total_unrealized_profit < self.max_drawdown):
self._done = True
self._position_history.append(self._position)

View File

@ -49,6 +49,7 @@ class Base5ActionRLEnv(BaseEnvironment):
self._update_unrealized_total_profit()
step_reward = self.calculate_reward(action)
self.total_reward += step_reward
self.tensorboard_log(self.actions._member_names_[action])
trade_type = None
if self.is_tradesignal(action):

View File

@ -2,7 +2,7 @@ import logging
import random
from abc import abstractmethod
from enum import Enum
from typing import Optional, Type
from typing import Optional, Type, Union
import gym
import numpy as np
@ -11,9 +11,6 @@ from gym import spaces
from gym.utils import seeding
from pandas import DataFrame
from freqtrade.data.dataprovider import DataProvider
from freqtrade.enums import RunMode
logger = logging.getLogger(__name__)
@ -47,8 +44,8 @@ class BaseEnvironment(gym.Env):
def __init__(self, df: DataFrame = DataFrame(), prices: DataFrame = DataFrame(),
reward_kwargs: dict = {}, window_size=10, starting_point=True,
id: str = 'baseenv-1', seed: int = 1, config: dict = {},
dp: Optional[DataProvider] = None):
id: str = 'baseenv-1', seed: int = 1, config: dict = {}, live: bool = False,
fee: float = 0.0015, can_short: bool = False):
"""
Initializes the training/eval environment.
:param df: dataframe of features
@ -59,32 +56,31 @@ class BaseEnvironment(gym.Env):
:param id: string id of the environment (used in backend for multiprocessed env)
:param seed: Sets the seed of the environment higher in the gym.Env object
:param config: Typical user configuration file
:param dp: dataprovider from freqtrade
:param live: Whether or not this environment is active in dry/live/backtesting
:param fee: The fee to use for environmental interactions.
:param can_short: Whether or not the environment can short
"""
self.config = config
self.rl_config = config['freqai']['rl_config']
self.add_state_info = self.rl_config.get('add_state_info', False)
self.id = id
self.seed(seed)
self.reset_env(df, prices, window_size, reward_kwargs, starting_point)
self.max_drawdown = 1 - self.rl_config.get('max_training_drawdown_pct', 0.8)
self.compound_trades = config['stake_amount'] == 'unlimited'
if self.config.get('fee', None) is not None:
self.fee = self.config['fee']
elif dp is not None:
self.fee = dp._exchange.get_fee(symbol=dp.current_whitelist()[0]) # type: ignore
else:
self.fee = 0.0015
self.fee = fee
# set here to default 5Ac, but all children envs can override this
self.actions: Type[Enum] = BaseActions
self.custom_info: dict = {}
self.live: bool = False
if dp:
self.live = dp.runmode in (RunMode.DRY_RUN, RunMode.LIVE)
self.tensorboard_metrics: dict = {}
self.can_short = can_short
self.live = live
if not self.live and self.add_state_info:
self.add_state_info = False
logger.warning("add_state_info is not available in backtesting. Deactivating.")
self.seed(seed)
self.reset_env(df, prices, window_size, reward_kwargs, starting_point)
def reset_env(self, df: DataFrame, prices: DataFrame, window_size: int,
reward_kwargs: dict, starting_point=True):
@ -139,20 +135,38 @@ class BaseEnvironment(gym.Env):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def tensorboard_log(self, metric: str, value: Union[int, float] = 1, inc: bool = True):
"""
Function builds the tensorboard_metrics dictionary
to be parsed by the TensorboardCallback. This
function is designed for tracking incremented objects,
events, actions inside the training environment.
For example, a user can call this to track the
frequency of occurrence of an `is_valid` call in
their `calculate_reward()`:
def calculate_reward(self, action: int) -> float:
if not self._is_valid(action):
self.tensorboard_log("is_valid")
return -2
:param metric: metric to be tracked and incremented
:param value: value to increment `metric` by
:param inc: sets whether the `value` is incremented or not
"""
if not inc or metric not in self.tensorboard_metrics:
self.tensorboard_metrics[metric] = value
else:
self.tensorboard_metrics[metric] += value
def reset_tensorboard_log(self):
self.tensorboard_metrics = {}
def reset(self):
"""
Reset is called at the beginning of every episode
"""
# custom_info is used for episodic reports and tensorboard logging
self.custom_info["Invalid"] = 0
self.custom_info["Hold"] = 0
self.custom_info["Unknown"] = 0
self.custom_info["pnl_factor"] = 0
self.custom_info["duration_factor"] = 0
self.custom_info["reward_exit"] = 0
self.custom_info["reward_hold"] = 0
for action in self.actions:
self.custom_info[f"{action.name}"] = 0
self.reset_tensorboard_log()
self._done = False
@ -195,7 +209,7 @@ class BaseEnvironment(gym.Env):
"""
features_window = self.signal_features[(
self._current_tick - self.window_size):self._current_tick]
if self.add_state_info and self.live:
if self.add_state_info:
features_and_state = DataFrame(np.zeros((len(features_window), 3)),
columns=['current_profit_pct',
'position',

View File

@ -143,18 +143,14 @@ class BaseReinforcementLearningModel(IFreqaiModel):
train_df = data_dictionary["train_features"]
test_df = data_dictionary["test_features"]
env_info = self.pack_env_dict()
self.train_env = self.MyRLEnv(df=train_df,
prices=prices_train,
window_size=self.CONV_WIDTH,
reward_kwargs=self.reward_params,
config=self.config,
dp=self.data_provider)
**env_info)
self.eval_env = Monitor(self.MyRLEnv(df=test_df,
prices=prices_test,
window_size=self.CONV_WIDTH,
reward_kwargs=self.reward_params,
config=self.config,
dp=self.data_provider))
**env_info))
self.eval_callback = EvalCallback(self.eval_env, deterministic=True,
render=False, eval_freq=len(train_df),
best_model_save_path=str(dk.data_path))
@ -162,6 +158,21 @@ class BaseReinforcementLearningModel(IFreqaiModel):
actions = self.train_env.get_actions()
self.tensorboard_callback = TensorboardCallback(verbose=1, actions=actions)
def pack_env_dict(self) -> Dict[str, Any]:
"""
Create dictionary of environment arguments
"""
env_info = {"window_size": self.CONV_WIDTH,
"reward_kwargs": self.reward_params,
"config": self.config,
"live": self.live,
"can_short": self.can_short}
if self.data_provider:
env_info["fee"] = self.data_provider._exchange \
.get_fee(symbol=self.data_provider.current_whitelist()[0]) # type: ignore
return env_info
@abstractmethod
def fit(self, data_dictionary: Dict[str, Any], dk: FreqaiDataKitchen, **kwargs):
"""
@ -383,8 +394,8 @@ class BaseReinforcementLearningModel(IFreqaiModel):
def make_env(MyRLEnv: Type[gym.Env], env_id: str, rank: int,
seed: int, train_df: DataFrame, price: DataFrame,
reward_params: Dict[str, int], window_size: int, monitor: bool = False,
config: Dict[str, Any] = {}) -> Callable:
monitor: bool = False,
env_info: Dict[str, Any] = {}) -> Callable:
"""
Utility function for multiprocessed env.
@ -392,13 +403,14 @@ def make_env(MyRLEnv: Type[gym.Env], env_id: str, rank: int,
:param num_env: (int) the number of environments you wish to have in subprocesses
:param seed: (int) the initial seed for RNG
:param rank: (int) index of the subprocess
:param env_info: (dict) all required arguments to instantiate the environment.
:return: (Callable)
"""
def _init() -> gym.Env:
env = MyRLEnv(df=train_df, prices=price, window_size=window_size,
reward_kwargs=reward_params, id=env_id, seed=seed + rank, config=config)
env = MyRLEnv(df=train_df, prices=price, id=env_id, seed=seed + rank,
**env_info)
if monitor:
env = Monitor(env)
return env

View File

@ -42,19 +42,18 @@ class TensorboardCallback(BaseCallback):
)
def _on_step(self) -> bool:
custom_info = self.training_env.get_attr("custom_info")[0]
self.logger.record("_state/position", self.locals["infos"][0]["position"])
self.logger.record("_state/trade_duration", self.locals["infos"][0]["trade_duration"])
self.logger.record("_state/current_profit_pct", self.locals["infos"]
[0]["current_profit_pct"])
self.logger.record("_reward/total_profit", self.locals["infos"][0]["total_profit"])
self.logger.record("_reward/total_reward", self.locals["infos"][0]["total_reward"])
self.logger.record_mean("_reward/mean_trade_duration", self.locals["infos"]
[0]["trade_duration"])
self.logger.record("_actions/action", self.locals["infos"][0]["action"])
self.logger.record("_actions/_Invalid", custom_info["Invalid"])
self.logger.record("_actions/_Unknown", custom_info["Unknown"])
self.logger.record("_actions/Hold", custom_info["Hold"])
for action in self.actions:
self.logger.record(f"_actions/{action.name}", custom_info[action.name])
local_info = self.locals["infos"][0]
tensorboard_metrics = self.training_env.get_attr("tensorboard_metrics")[0]
for info in local_info:
if info not in ["episode", "terminal_observation"]:
self.logger.record(f"_info/{info}", local_info[info])
for info in tensorboard_metrics:
if info in [action.name for action in self.actions]:
self.logger.record(f"_actions/{info}", tensorboard_metrics[info])
else:
self.logger.record(f"_custom/{info}", tensorboard_metrics[info])
return True

View File

@ -95,9 +95,14 @@ class BaseClassifierModel(IFreqaiModel):
self.data_cleaning_predict(dk)
predictions = self.model.predict(dk.data_dictionary["prediction_features"])
if self.CONV_WIDTH == 1:
predictions = np.reshape(predictions, (-1, len(dk.label_list)))
pred_df = DataFrame(predictions, columns=dk.label_list)
predictions_prob = self.model.predict_proba(dk.data_dictionary["prediction_features"])
if self.CONV_WIDTH == 1:
predictions_prob = np.reshape(predictions_prob, (-1, len(self.model.classes_)))
pred_df_prob = DataFrame(predictions_prob, columns=self.model.classes_)
pred_df = pd.concat([pred_df, pred_df_prob], axis=1)

View File

@ -95,6 +95,9 @@ class BaseRegressionModel(IFreqaiModel):
self.data_cleaning_predict(dk)
predictions = self.model.predict(dk.data_dictionary["prediction_features"])
if self.CONV_WIDTH == 1:
predictions = np.reshape(predictions, (-1, len(dk.label_list)))
pred_df = DataFrame(predictions, columns=dk.label_list)
pred_df = dk.denormalize_labels_from_metadata(pred_df)

View File

@ -104,6 +104,7 @@ class IFreqaiModel(ABC):
self.metadata: Dict[str, Any] = self.dd.load_global_metadata_from_disk()
self.data_provider: Optional[DataProvider] = None
self.max_system_threads = max(int(psutil.cpu_count() * 2 - 2), 1)
self.can_short = True # overridden in start() with strategy.can_short
record_params(config, self.full_path)
@ -133,6 +134,7 @@ class IFreqaiModel(ABC):
self.live = strategy.dp.runmode in (RunMode.DRY_RUN, RunMode.LIVE)
self.dd.set_pair_dict_info(metadata)
self.data_provider = strategy.dp
self.can_short = strategy.can_short
if self.live:
self.inference_timer('start')

View File

@ -61,7 +61,7 @@ class ReinforcementLearner(BaseReinforcementLearningModel):
model = self.MODELCLASS(self.policy_type, self.train_env, policy_kwargs=policy_kwargs,
tensorboard_log=Path(
dk.full_path / "tensorboard" / dk.pair.split('/')[0]),
**self.freqai_info['model_training_parameters']
**self.freqai_info.get('model_training_parameters', {})
)
else:
logger.info('Continual training activated - starting training from previously '
@ -100,7 +100,7 @@ class ReinforcementLearner(BaseReinforcementLearningModel):
"""
# first, penalize if the action is not valid
if not self._is_valid(action):
self.custom_info["Invalid"] += 1
self.tensorboard_log("is_valid")
return -2
pnl = self.get_unrealized_profit()
@ -109,15 +109,12 @@ class ReinforcementLearner(BaseReinforcementLearningModel):
# reward agent for entering trades
if (action == Actions.Long_enter.value
and self._position == Positions.Neutral):
self.custom_info[f"{Actions.Long_enter.name}"] += 1
return 25
if (action == Actions.Short_enter.value
and self._position == Positions.Neutral):
self.custom_info[f"{Actions.Short_enter.name}"] += 1
return 25
# discourage agent from not entering trades
if action == Actions.Neutral.value and self._position == Positions.Neutral:
self.custom_info[f"{Actions.Neutral.name}"] += 1
return -1
max_trade_duration = self.rl_config.get('max_trade_duration_candles', 300)
@ -131,22 +128,18 @@ class ReinforcementLearner(BaseReinforcementLearningModel):
# discourage sitting in position
if (self._position in (Positions.Short, Positions.Long) and
action == Actions.Neutral.value):
self.custom_info["Hold"] += 1
return -1 * trade_duration / max_trade_duration
# close long
if action == Actions.Long_exit.value and self._position == Positions.Long:
if pnl > self.profit_aim * self.rr:
factor *= self.rl_config['model_reward_parameters'].get('win_reward_factor', 2)
self.custom_info[f"{Actions.Long_exit.name}"] += 1
return float(pnl * factor)
# close short
if action == Actions.Short_exit.value and self._position == Positions.Short:
if pnl > self.profit_aim * self.rr:
factor *= self.rl_config['model_reward_parameters'].get('win_reward_factor', 2)
self.custom_info[f"{Actions.Short_exit.name}"] += 1
return float(pnl * factor)
self.custom_info["Unknown"] += 1
return 0.

View File

@ -34,17 +34,20 @@ class ReinforcementLearner_multiproc(ReinforcementLearner):
train_df = data_dictionary["train_features"]
test_df = data_dictionary["test_features"]
env_info = self.pack_env_dict()
env_id = "train_env"
self.train_env = SubprocVecEnv([make_env(self.MyRLEnv, env_id, i, 1, train_df, prices_train,
self.reward_params, self.CONV_WIDTH, monitor=True,
config=self.config) for i
self.train_env = SubprocVecEnv([make_env(self.MyRLEnv, env_id, i, 1,
train_df, prices_train,
monitor=True,
env_info=env_info) for i
in range(self.max_threads)])
eval_env_id = 'eval_env'
self.eval_env = SubprocVecEnv([make_env(self.MyRLEnv, eval_env_id, i, 1,
test_df, prices_test,
self.reward_params, self.CONV_WIDTH, monitor=True,
config=self.config) for i
monitor=True,
env_info=env_info) for i
in range(self.max_threads)])
self.eval_callback = EvalCallback(self.eval_env, deterministic=True,
render=False, eval_freq=len(train_df),

View File

@ -155,6 +155,8 @@ class FreqtradeBot(LoggingMixin):
self.cancel_all_open_orders()
self.check_for_open_trades()
except Exception as e:
logger.warning(f'Exception during cleanup: {e.__class__.__name__} {e}')
finally:
self.strategy.ft_bot_cleanup()
@ -162,8 +164,13 @@ class FreqtradeBot(LoggingMixin):
self.rpc.cleanup()
if self.emc:
self.emc.shutdown()
Trade.commit()
self.exchange.close()
try:
Trade.commit()
except Exception:
# Exceptions here will happen if the db disappeared.
# At which point we can no longer commit anyway.
pass
def startup(self) -> None:
"""
@ -905,6 +912,7 @@ class FreqtradeBot(LoggingMixin):
stake_amount=stake_amount,
min_stake_amount=min_stake_amount,
max_stake_amount=max_stake_amount,
trade_amount=trade.stake_amount if trade else None,
)
return enter_limit_requested, stake_amount, leverage

View File

@ -301,3 +301,21 @@ def remove_entry_exit_signals(dataframe: pd.DataFrame):
dataframe[SignalTagType.EXIT_TAG.value] = None
return dataframe
def append_candles_to_dataframe(left: pd.DataFrame, right: pd.DataFrame) -> pd.DataFrame:
"""
Append the `right` dataframe to the `left` dataframe
:param left: The full dataframe you want appended to
:param right: The new dataframe containing the data you want appended
:returns: The dataframe with the right data in it
"""
if left.iloc[-1]['date'] != right.iloc[-1]['date']:
left = pd.concat([left, right])
# Only keep the last 1500 candles in memory
left = left[-1500:] if len(left) > 1500 else left
left.reset_index(drop=True, inplace=True)
return left
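A quick sketch of the helper's behavior on toy data, assuming it is importable from freqtrade.misc (the module that defines remove_entry_exit_signals above): the right dataframe is appended only when its newest 'date' differs, and the result is trimmed to the most recent 1500 rows.

```python
import pandas as pd

from freqtrade.misc import append_candles_to_dataframe  # assumed import path

left = pd.DataFrame({'date': pd.to_datetime(['2023-01-01 00:00', '2023-01-01 00:05']),
                     'close': [100.0, 101.0]})
right = pd.DataFrame({'date': pd.to_datetime(['2023-01-01 00:10']),
                      'close': [102.0]})

merged = append_candles_to_dataframe(left, right)
print(len(merged))  # 3 - the new candle was appended

# Calling again with the same `right` is a no-op: the last dates now match.
print(len(append_candles_to_dataframe(merged, right)))  # 3
```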

View File

@ -769,6 +769,7 @@ class Backtesting:
stake_amount=stake_amount,
min_stake_amount=min_stake_amount,
max_stake_amount=max_stake_amount,
trade_amount=trade.stake_amount if trade else None
)
return propose_rate, stake_amount_val, leverage, min_stake_amount
@ -1176,6 +1177,7 @@ class Backtesting:
open_trade_count_start = self.backtest_loop(
row, pair, current_time, end_date, max_open_trades,
open_trade_count_start)
continue
detail_data.loc[:, 'enter_long'] = row[LONG_IDX]
detail_data.loc[:, 'exit_long'] = row[ELONG_IDX]
detail_data.loc[:, 'enter_short'] = row[SHORT_IDX]

View File

@ -9,8 +9,9 @@ from tabulate import tabulate
from freqtrade.constants import (DATETIME_PRINT_FORMAT, LAST_BT_RESULT_FN, UNLIMITED_STAKE_AMOUNT,
Config)
from freqtrade.data.metrics import (calculate_cagr, calculate_csum, calculate_market_change,
calculate_max_drawdown)
from freqtrade.data.metrics import (calculate_cagr, calculate_calmar, calculate_csum,
calculate_expectancy, calculate_market_change,
calculate_max_drawdown, calculate_sharpe, calculate_sortino)
from freqtrade.misc import decimals_per_coin, file_dump_joblib, file_dump_json, round_coin_value
from freqtrade.optimize.backtest_caching import get_backtest_metadata_filename
@ -448,6 +449,10 @@ def generate_strategy_stats(pairlist: List[str],
'profit_total_long_abs': results.loc[~results['is_short'], 'profit_abs'].sum(),
'profit_total_short_abs': results.loc[results['is_short'], 'profit_abs'].sum(),
'cagr': calculate_cagr(backtest_days, start_balance, content['final_balance']),
'expectancy': calculate_expectancy(results),
'sortino': calculate_sortino(results, min_date, max_date, start_balance),
'sharpe': calculate_sharpe(results, min_date, max_date, start_balance),
'calmar': calculate_calmar(results, min_date, max_date, start_balance),
'profit_factor': profit_factor,
'backtest_start': min_date.strftime(DATETIME_PRINT_FORMAT),
'backtest_start_ts': int(min_date.timestamp() * 1000),
@ -785,8 +790,13 @@ def text_table_add_metrics(strat_results: Dict) -> str:
strat_results['stake_currency'])),
('Total profit %', f"{strat_results['profit_total']:.2%}"),
('CAGR %', f"{strat_results['cagr']:.2%}" if 'cagr' in strat_results else 'N/A'),
('Sortino', f"{strat_results['sortino']:.2f}" if 'sortino' in strat_results else 'N/A'),
('Sharpe', f"{strat_results['sharpe']:.2f}" if 'sharpe' in strat_results else 'N/A'),
('Calmar', f"{strat_results['calmar']:.2f}" if 'calmar' in strat_results else 'N/A'),
('Profit factor', f'{strat_results["profit_factor"]:.2f}' if 'profit_factor'
in strat_results else 'N/A'),
('Expectancy', f"{strat_results['expectancy']:.2f}" if 'expectancy'
in strat_results else 'N/A'),
('Trades per day', strat_results['trades_per_day']),
('Avg. daily profit %',
f"{(strat_results['profit_total'] / strat_results['backtest_days']):.2%}"),

View File

@ -109,11 +109,10 @@ def migrate_trades_and_orders_table(
else:
is_short = get_column_def(cols, 'is_short', '0')
# Margin Properties
# Futures Properties
interest_rate = get_column_def(cols, 'interest_rate', '0.0')
# Futures properties
funding_fees = get_column_def(cols, 'funding_fees', '0.0')
max_stake_amount = get_column_def(cols, 'max_stake_amount', 'stake_amount')
# If ticker-interval existed use that, else null.
if has_column(cols, 'ticker_interval'):
@ -162,7 +161,8 @@ def migrate_trades_and_orders_table(
timeframe, open_trade_value, close_profit_abs,
trading_mode, leverage, liquidation_price, is_short,
interest_rate, funding_fees, realized_profit,
amount_precision, price_precision, precision_mode, contract_size
amount_precision, price_precision, precision_mode, contract_size,
max_stake_amount
)
select id, lower(exchange), pair, {base_currency} base_currency,
{stake_currency} stake_currency,
@ -190,7 +190,8 @@ def migrate_trades_and_orders_table(
{is_short} is_short, {interest_rate} interest_rate,
{funding_fees} funding_fees, {realized_profit} realized_profit,
{amount_precision} amount_precision, {price_precision} price_precision,
{precision_mode} precision_mode, {contract_size} contract_size
{precision_mode} precision_mode, {contract_size} contract_size,
{max_stake_amount} max_stake_amount
from {trade_back_name}
"""))
@ -310,8 +311,8 @@ def check_migrate(engine, decl_base, previous_tables) -> None:
# if ('orders' not in previous_tables
# or not has_column(cols_orders, 'funding_fee')):
migrating = False
# if not has_column(cols_trades, 'contract_size'):
if not has_column(cols_orders, 'funding_fee'):
# if not has_column(cols_orders, 'funding_fee'):
if not has_column(cols_trades, 'max_stake_amount'):
migrating = True
logger.info(f"Running database migration for trades - "
f"backup: {table_back_name}, {order_table_bak_name}")

View File

@ -293,6 +293,7 @@ class LocalTrade():
close_profit: Optional[float] = None
close_profit_abs: Optional[float] = None
stake_amount: float = 0.0
max_stake_amount: float = 0.0
amount: float = 0.0
amount_requested: Optional[float] = None
open_date: datetime
@ -397,12 +398,6 @@ class LocalTrade():
def close_date_utc(self):
return self.close_date.replace(tzinfo=timezone.utc)
@property
def enter_side(self) -> str:
""" DEPRECATED, please use entry_side instead"""
# TODO: Please remove me after 2022.5
return self.entry_side
@property
def entry_side(self) -> str:
if self.is_short:
@ -475,8 +470,8 @@ class LocalTrade():
'amount': round(self.amount, 8),
'amount_requested': round(self.amount_requested, 8) if self.amount_requested else None,
'stake_amount': round(self.stake_amount, 8),
'max_stake_amount': round(self.max_stake_amount, 8) if self.max_stake_amount else None,
'strategy': self.strategy,
'buy_tag': self.enter_tag,
'enter_tag': self.enter_tag,
'timeframe': self.timeframe,
@ -513,7 +508,6 @@ class LocalTrade():
'profit_pct': round(self.close_profit * 100, 2) if self.close_profit else None,
'profit_abs': self.close_profit_abs,
'sell_reason': self.exit_reason, # Deprecated
'exit_reason': self.exit_reason,
'exit_order_status': self.exit_order_status,
'stop_loss_abs': self.stop_loss,
@ -882,6 +876,7 @@ class LocalTrade():
ZERO = FtPrecise(0.0)
current_amount = FtPrecise(0.0)
current_stake = FtPrecise(0.0)
max_stake_amount = FtPrecise(0.0)
total_stake = 0.0 # Total stake after all buy orders (does not subtract!)
avg_price = FtPrecise(0.0)
close_profit = 0.0
@ -923,7 +918,9 @@ class LocalTrade():
exit_rate, amount=exit_amount, open_rate=avg_price)
else:
total_stake = total_stake + self._calc_open_trade_value(tmp_amount, price)
max_stake_amount += (tmp_amount * price)
self.funding_fees = funding_fees
self.max_stake_amount = float(max_stake_amount)
if close_profit:
self.close_profit = close_profit
@ -1175,6 +1172,7 @@ class Trade(_DECL_BASE, LocalTrade):
close_profit = Column(Float)
close_profit_abs = Column(Float)
stake_amount = Column(Float, nullable=False)
max_stake_amount = Column(Float)
amount = Column(Float)
amount_requested = Column(Float)
open_date = Column(DateTime, nullable=False, default=datetime.utcnow)

View File

@ -0,0 +1,206 @@
"""
Remote PairList provider
Provides pair list fetched from a remote source
"""
import json
import logging
from pathlib import Path
from typing import Any, Dict, List, Tuple
import requests
from cachetools import TTLCache
from freqtrade import __version__
from freqtrade.constants import Config
from freqtrade.exceptions import OperationalException
from freqtrade.exchange.types import Tickers
from freqtrade.plugins.pairlist.IPairList import IPairList
logger = logging.getLogger(__name__)
class RemotePairList(IPairList):
def __init__(self, exchange, pairlistmanager,
config: Config, pairlistconfig: Dict[str, Any],
pairlist_pos: int) -> None:
super().__init__(exchange, pairlistmanager, config, pairlistconfig, pairlist_pos)
if 'number_assets' not in self._pairlistconfig:
raise OperationalException(
'`number_assets` not specified. Please check your configuration '
'for "pairlist.config.number_assets"')
if 'pairlist_url' not in self._pairlistconfig:
raise OperationalException(
'`pairlist_url` not specified. Please check your configuration '
'for "pairlist.config.pairlist_url"')
self._number_pairs = self._pairlistconfig['number_assets']
self._refresh_period: int = self._pairlistconfig.get('refresh_period', 1800)
self._keep_pairlist_on_failure = self._pairlistconfig.get('keep_pairlist_on_failure', True)
self._pair_cache: TTLCache = TTLCache(maxsize=1, ttl=self._refresh_period)
self._pairlist_url = self._pairlistconfig.get('pairlist_url', '')
self._read_timeout = self._pairlistconfig.get('read_timeout', 60)
self._bearer_token = self._pairlistconfig.get('bearer_token', '')
self._init_done = False
self._last_pairlist: List[Any] = list()
@property
def needstickers(self) -> bool:
"""
Boolean property defining if tickers are necessary.
If no Pairlist requires tickers, an empty Dict is passed
as tickers argument to filter_pairlist
"""
return False
def short_desc(self) -> str:
"""
Short whitelist method description - used for startup-messages
"""
return f"{self.name} - {self._pairlistconfig['number_assets']} pairs from RemotePairlist."
def process_json(self, jsonparse) -> List[str]:
pairlist = jsonparse.get('pairs', [])
remote_refresh_period = int(jsonparse.get('refresh_period', self._refresh_period))
if self._refresh_period < remote_refresh_period:
self.log_once(f'Refresh Period has been increased from {self._refresh_period}'
f' to minimum allowed: {remote_refresh_period} from Remote.', logger.info)
self._refresh_period = remote_refresh_period
self._pair_cache = TTLCache(maxsize=1, ttl=remote_refresh_period)
self._init_done = True
return pairlist
def return_last_pairlist(self) -> List[str]:
if self._keep_pairlist_on_failure:
pairlist = self._last_pairlist
self.log_once('Keeping last fetched pairlist', logger.info)
else:
pairlist = []
return pairlist
def fetch_pairlist(self) -> Tuple[List[str], float]:
headers = {
'User-Agent': 'Freqtrade/' + __version__ + ' Remotepairlist'
}
if self._bearer_token:
headers['Authorization'] = f'Bearer {self._bearer_token}'
try:
response = requests.get(self._pairlist_url, headers=headers,
timeout=self._read_timeout)
content_type = response.headers.get('content-type')
time_elapsed = response.elapsed.total_seconds()
if "application/json" in str(content_type):
jsonparse = response.json()
try:
pairlist = self.process_json(jsonparse)
except Exception as e:
if self._init_done:
pairlist = self.return_last_pairlist()
logger.warning(f'Error while processing JSON data: {type(e)}')
else:
raise OperationalException(f'Error while processing JSON data: {type(e)}')
else:
if self._init_done:
self.log_once(f'Error: RemotePairList is not of type JSON: '
f' {self._pairlist_url}', logger.info)
pairlist = self.return_last_pairlist()
else:
raise OperationalException('RemotePairList is not of type JSON, abort.')
except requests.exceptions.RequestException:
self.log_once(f'Was not able to fetch pairlist from:'
f' {self._pairlist_url}', logger.info)
pairlist = self.return_last_pairlist()
time_elapsed = 0
return pairlist, time_elapsed
def gen_pairlist(self, tickers: Tickers) -> List[str]:
"""
Generate the pairlist
:param tickers: Tickers (from exchange.get_tickers). May be cached.
:return: List of pairs
"""
if self._init_done:
pairlist = self._pair_cache.get('pairlist')
else:
pairlist = []
time_elapsed = 0.0
if pairlist:
# Item found - no refresh necessary
return pairlist.copy()
else:
if self._pairlist_url.startswith("file:///"):
filename = self._pairlist_url.split("file:///", 1)[1]
file_path = Path(filename)
if file_path.exists():
with open(filename) as json_file:
# Load the JSON data into a dictionary
jsonparse = json.load(json_file)
try:
pairlist = self.process_json(jsonparse)
except Exception as e:
if self._init_done:
pairlist = self.return_last_pairlist()
logger.warning(f'Error while processing JSON data: {type(e)}')
else:
raise OperationalException('Error while processing '
f'JSON data: {type(e)}')
else:
raise ValueError(f"{self._pairlist_url} does not exist.")
else:
# Fetch Pairlist from Remote URL
pairlist, time_elapsed = self.fetch_pairlist()
self.log_once(f"Fetched pairs: {pairlist}", logger.debug)
pairlist = self._whitelist_for_active_markets(pairlist)
pairlist = pairlist[:self._number_pairs]
self._pair_cache['pairlist'] = pairlist.copy()
if time_elapsed != 0.0:
self.log_once(f'Pairlist Fetched in {time_elapsed} seconds.', logger.info)
else:
self.log_once('Fetched Pairlist.', logger.info)
self._last_pairlist = list(pairlist)
return pairlist
def filter_pairlist(self, pairlist: List[str], tickers: Dict) -> List[str]:
"""
Filters and sorts pairlist and returns the whitelist again.
Called on each bot iteration - please use internal caching if necessary
:param pairlist: pairlist to filter or sort
:param tickers: Tickers (from exchange.get_tickers). May be cached.
:return: new whitelist
"""
rpl_pairlist = self.gen_pairlist(tickers)
merged_list = pairlist + rpl_pairlist
merged_list = sorted(set(merged_list), key=merged_list.index)
return merged_list
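A configuration sketch wiring up the new handler, mirroring the keys read in __init__ above. It is shown as a Python dict, though in practice it lives in the pairlists section of config.json, and the URL is a placeholder; the remote endpoint is expected to answer with JSON containing a 'pairs' list and an optional 'refresh_period'.

```python
# Hypothetical pairlists entry for the new RemotePairList handler.
pairlists_entry = {
    "method": "RemotePairList",
    "pairlist_url": "https://example.com/pairlist.json",  # or "file:///path/to/pairlist.json"
    "number_assets": 10,            # required, as is pairlist_url
    "refresh_period": 1800,         # seconds; the remote may raise this
    "keep_pairlist_on_failure": True,
    "read_timeout": 60,
    "bearer_token": "",             # sent as an Authorization header when set
}
```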

View File

@ -135,7 +135,7 @@ class VolumePairList(IPairList):
filtered_tickers = [
v for k, v in tickers.items()
if (self._exchange.get_pair_quote_currency(k) == self._stake_currency
and (self._use_range or v[self._sort_key] is not None)
and (self._use_range or v.get(self._sort_key) is not None)
and v['symbol'] in _pairlist)]
pairlist = [s['symbol'] for s in filtered_tickers]
else:
@ -218,7 +218,7 @@ class VolumePairList(IPairList):
else:
filtered_tickers[i]['quoteVolume'] = 0
else:
# Tickers mode - filter based on incomming pairlist.
# Tickers mode - filter based on incoming pairlist.
filtered_tickers = [v for k, v in tickers.items() if k in pairlist]
if self._min_value > 0:

View File

@ -11,6 +11,7 @@ from freqtrade.configuration.config_validation import validate_config_consistenc
from freqtrade.data.btanalysis import get_backtest_resultlist, load_and_merge_backtest_result
from freqtrade.enums import BacktestState
from freqtrade.exceptions import DependencyException
from freqtrade.misc import deep_merge_dicts
from freqtrade.rpc.api_server.api_schemas import (BacktestHistoryEntry, BacktestRequest,
BacktestResponse)
from freqtrade.rpc.api_server.deps import get_config, is_webserver_mode
@ -37,10 +38,11 @@ async def api_start_backtest(bt_settings: BacktestRequest, background_tasks: Bac
btconfig = deepcopy(config)
settings = dict(bt_settings)
if settings.get('freqai', None) is not None:
settings['freqai'] = dict(settings['freqai'])
# Pydantic models will contain all keys, but non-provided ones are None
for setting in settings.keys():
if settings[setting] is not None:
btconfig[setting] = settings[setting]
btconfig = deep_merge_dicts(settings, btconfig, allow_null_overrides=False)
try:
btconfig['stake_amount'] = float(btconfig['stake_amount'])
except ValueError:

View File

@ -217,8 +217,8 @@ class TradeSchema(BaseModel):
amount: float
amount_requested: float
stake_amount: float
max_stake_amount: Optional[float]
strategy: str
buy_tag: Optional[str] # Deprecated
enter_tag: Optional[str]
timeframe: int
fee_open: Optional[float]
@ -243,7 +243,6 @@ class TradeSchema(BaseModel):
profit_pct: Optional[float]
profit_abs: Optional[float]
profit_fiat: Optional[float]
sell_reason: Optional[str] # Deprecated
exit_reason: Optional[str]
exit_order_status: Optional[str]
stop_loss_abs: Optional[float]
@ -372,6 +371,10 @@ class StrategyListResponse(BaseModel):
strategies: List[str]
class FreqAIModelListResponse(BaseModel):
freqaimodels: List[str]
class StrategyResponse(BaseModel):
strategy: str
code: str
@ -410,6 +413,10 @@ class PairHistory(BaseModel):
}
class BacktestFreqAIInputs(BaseModel):
identifier: str
class BacktestRequest(BaseModel):
strategy: str
timeframe: Optional[str]
@ -419,6 +426,9 @@ class BacktestRequest(BaseModel):
stake_amount: Optional[str]
enable_protections: bool
dry_run_wallet: Optional[float]
backtest_cache: Optional[str]
freqaimodel: Optional[str]
freqai: Optional[BacktestFreqAIInputs]
class BacktestResponse(BaseModel):

View File

@ -13,12 +13,13 @@ from freqtrade.rpc import RPC
from freqtrade.rpc.api_server.api_schemas import (AvailablePairs, Balances, BlacklistPayload,
BlacklistResponse, Count, Daily,
DeleteLockRequest, DeleteTrade, ForceEnterPayload,
ForceEnterResponse, ForceExitPayload, Health,
Locks, Logs, OpenTradeSchema, PairHistory,
PerformanceEntry, Ping, PlotConfig, Profit,
ResultMsg, ShowConfig, Stats, StatusMsg,
StrategyListResponse, StrategyResponse, SysInfo,
Version, WhitelistResponse)
ForceEnterResponse, ForceExitPayload,
FreqAIModelListResponse, Health, Locks, Logs,
OpenTradeSchema, PairHistory, PerformanceEntry,
Ping, PlotConfig, Profit, ResultMsg, ShowConfig,
Stats, StatusMsg, StrategyListResponse,
StrategyResponse, SysInfo, Version,
WhitelistResponse)
from freqtrade.rpc.api_server.deps import get_config, get_exchange, get_rpc, get_rpc_optional
from freqtrade.rpc.rpc import RPCException
@ -38,7 +39,8 @@ logger = logging.getLogger(__name__)
# 2.17: Forceentry - leverage, partial force_exit
# 2.20: Add websocket endpoints
# 2.21: Add new_candle messagetype
API_VERSION = 2.21
# 2.22: Add FreqAI to backtesting
API_VERSION = 2.22
# Public API, requires no auth.
router_public = APIRouter()
@ -279,6 +281,16 @@ def get_strategy(strategy: str, config=Depends(get_config)):
}
@router.get('/freqaimodels', response_model=FreqAIModelListResponse, tags=['freqai'])
def list_freqaimodels(config=Depends(get_config)):
from freqtrade.resolvers.freqaimodel_resolver import FreqaiModelResolver
strategies = FreqaiModelResolver.search_all_objects(
config, False)
strategies = sorted(strategies, key=lambda x: x['name'])
return {'freqaimodels': [x['name'] for x in strategies]}
@router.get('/available_pairs', response_model=AvailablePairs, tags=['candle data'])
def list_available_pairs(timeframe: Optional[str] = None, stake_currency: Optional[str] = None,
candletype: Optional[CandleType] = None, config=Depends(get_config)):

View File

@ -91,9 +91,10 @@ async def _process_consumer_request(
elif type == RPCRequestType.ANALYZED_DF:
# Limit the amount of candles per dataframe to 'limit' or 1500
limit = min(data.get('limit', 1500), 1500) if data else None
pair = data.get('pair', None) if data else None
# For every pair in the generator, send a separate message
for message in rpc._ws_request_analyzed_df(limit):
for message in rpc._ws_request_analyzed_df(limit, pair):
# Format response
response = WSAnalyzedDFMessage(data=message)
await channel.send(response.dict(exclude_none=True))

View File

@ -27,7 +27,8 @@ class WebSocketChannel:
self,
websocket: WebSocketType,
channel_id: Optional[str] = None,
serializer_cls: Type[WebSocketSerializer] = HybridJSONWebSocketSerializer
serializer_cls: Type[WebSocketSerializer] = HybridJSONWebSocketSerializer,
send_throttle: float = 0.01
):
self.channel_id = channel_id if channel_id else uuid4().hex[:8]
self._websocket = WebSocketProxy(websocket)
@ -41,6 +42,7 @@ class WebSocketChannel:
self._send_times: Deque[float] = deque([], maxlen=10)
# High limit defaults to 3 to start
self._send_high_limit = 3
self._send_throttle = send_throttle
# The subscribed message types
self._subscriptions: List[str] = []
@ -106,7 +108,8 @@ class WebSocketChannel:
# Explicitly give control back to event loop as
# websockets.send does not
await asyncio.sleep(0.01)
# Also throttles how fast we send
await asyncio.sleep(self._send_throttle)
async def recv(self):
"""

View File

@ -47,7 +47,7 @@ class WSWhitelistRequest(WSRequestSchema):
class WSAnalyzedDFRequest(WSRequestSchema):
type: RPCRequestType = RPCRequestType.ANALYZED_DF
data: Dict[str, Any] = {"limit": 1500}
data: Dict[str, Any] = {"limit": 1500, "pair": None}
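With the added 'pair' key, a consumer can ask for a bounded slice of a single pair's history instead of the whole whitelist. A minimal sketch:

```python
from freqtrade.rpc.api_server.ws_schemas import WSAnalyzedDFRequest

# Ask the producer for the last 100 analyzed candles of one pair.
request = WSAnalyzedDFRequest(data={"limit": 100, "pair": "BTC/USDT"})
payload = request.dict(exclude_none=True)  # serialized before channel.send()
```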
# ------------------------------ MESSAGE SCHEMAS ----------------------------

View File

@ -8,15 +8,17 @@ import asyncio
import logging
import socket
from threading import Thread
from typing import TYPE_CHECKING, Any, Callable, Dict, List, TypedDict
from typing import TYPE_CHECKING, Any, Callable, Dict, List, TypedDict, Union
import websockets
from pydantic import ValidationError
from freqtrade.constants import FULL_DATAFRAME_THRESHOLD
from freqtrade.data.dataprovider import DataProvider
from freqtrade.enums import RPCMessageType
from freqtrade.misc import remove_entry_exit_signals
from freqtrade.rpc.api_server.ws import WebSocketChannel
from freqtrade.rpc.api_server.ws.channel import WebSocketChannel, create_channel
from freqtrade.rpc.api_server.ws.message_stream import MessageStream
from freqtrade.rpc.api_server.ws_schemas import (WSAnalyzedDFMessage, WSAnalyzedDFRequest,
WSMessageSchema, WSRequestSchema,
WSSubscribeRequest, WSWhitelistMessage,
@ -38,6 +40,10 @@ class Producer(TypedDict):
logger = logging.getLogger(__name__)
def schema_to_dict(schema: Union[WSMessageSchema, WSRequestSchema]):
return schema.dict(exclude_none=True)
class ExternalMessageConsumer:
"""
The main controller class for consuming external messages from
@ -92,6 +98,8 @@ class ExternalMessageConsumer:
RPCMessageType.ANALYZED_DF: self._consume_analyzed_df_message,
}
self._channel_streams: Dict[str, MessageStream] = {}
self.start()
def start(self):
@ -118,6 +126,8 @@ class ExternalMessageConsumer:
logger.info("Stopping ExternalMessageConsumer")
self._running = False
self._channel_streams = {}
if self._sub_tasks:
# Cancel sub tasks
for task in self._sub_tasks:
@ -175,7 +185,6 @@ class ExternalMessageConsumer:
:param producer: Dictionary containing producer info
:param lock: An asyncio Lock
"""
channel = None
while self._running:
try:
host, port = producer['host'], producer['port']
@ -190,19 +199,21 @@ class ExternalMessageConsumer:
max_size=self.message_size_limit,
ping_interval=None
) as ws:
channel = WebSocketChannel(ws, channel_id=name)
async with create_channel(
ws,
channel_id=name,
send_throttle=0.5
) as channel:
logger.info(f"Producer connection success - {channel}")
# Create the message stream for this channel
self._channel_streams[name] = MessageStream()
# Now request the initial data from this Producer
for request in self._initial_requests:
await channel.send(
request.dict(exclude_none=True)
# Run the channel tasks while connected
await channel.run_channel_tasks(
self._receive_messages(channel, producer, lock),
self._send_requests(channel, self._channel_streams[name])
)
# Now receive data, if none is within the time limit, ping
await self._receive_messages(channel, producer, lock)
except (websockets.exceptions.InvalidURI, ValueError) as e:
logger.error(f"{ws_url} is an invalid WebSocket URL - {e}")
break
@ -229,11 +240,19 @@ class ExternalMessageConsumer:
# An unforeseen error has occurred, log and continue
logger.error("Unexpected error has occurred:")
logger.exception(e)
await asyncio.sleep(self.sleep_time)
continue
finally:
if channel:
await channel.close()
async def _send_requests(self, channel: WebSocketChannel, channel_stream: MessageStream):
# Send the initial requests
for init_request in self._initial_requests:
await channel.send(schema_to_dict(init_request))
# Now send any subsequent requests published to
# this channel's stream
async for request, _ in channel_stream:
logger.debug(f"Sending request to channel - {channel} - {request}")
await channel.send(request)
async def _receive_messages(
self,
@ -270,19 +289,31 @@ class ExternalMessageConsumer:
latency = (await asyncio.wait_for(pong, timeout=self.ping_timeout) * 1000)
logger.info(f"Connection to {channel} still alive, latency: {latency}ms")
continue
except (websockets.exceptions.ConnectionClosed):
# Just eat the error and continue reconnecting
logger.warning(f"Disconnection in {channel} - retrying in {self.sleep_time}s")
await asyncio.sleep(self.sleep_time)
break
except Exception as e:
# Just eat the error and continue reconnecting
logger.warning(f"Ping error {channel} - {e} - retrying in {self.sleep_time}s")
logger.debug(e, exc_info=e)
await asyncio.sleep(self.sleep_time)
raise
break
def send_producer_request(
self,
producer_name: str,
request: Union[WSRequestSchema, Dict[str, Any]]
):
"""
Publish a message to the producer's message stream to be
sent by the channel task.
:param producer_name: The name of the producer to publish the message to
:param request: The request to send to the producer
"""
if isinstance(request, WSRequestSchema):
request = schema_to_dict(request)
if channel_stream := self._channel_streams.get(producer_name):
channel_stream.publish(request)
def handle_producer_message(self, producer: Producer, message: Dict[str, Any]):
"""
@ -336,16 +367,45 @@ class ExternalMessageConsumer:
pair, timeframe, candle_type = key
if df.empty:
logger.debug(f"Received Empty Dataframe for {key}")
return
# If set, remove the Entry and Exit signals from the Producer
if self._emc_config.get('remove_entry_exit_signals', False):
df = remove_entry_exit_signals(df)
# Add the dataframe to the dataprovider
self._dp._add_external_df(pair, df,
logger.debug(f"Received {len(df)} candle(s) for {key}")
did_append, n_missing = self._dp._add_external_df(
pair,
df,
last_analyzed=la,
timeframe=timeframe,
candle_type=candle_type,
producer_name=producer_name)
producer_name=producer_name
)
if not did_append:
# We want an overlap in candles in case some data has changed
n_missing += 1
# Request a full dataframe's worth (1500 candles) if we missed more than the threshold
n_missing = n_missing if n_missing < FULL_DATAFRAME_THRESHOLD else 1500
logger.warning(f"Holes in data or no existing df, requesting {n_missing} candles "
f"for {key} from `{producer_name}`")
self.send_producer_request(
producer_name,
WSAnalyzedDFRequest(
data={
"limit": n_missing,
"pair": pair
}
)
)
return
logger.debug(
f"Consumed message from `{producer_name}` of type `RPCMessageType.ANALYZED_DF`")
f"Consumed message from `{producer_name}` "
f"of type `RPCMessageType.ANALYZED_DF` for {key}")

View File

@ -167,6 +167,7 @@ class RPC:
results = []
for trade in trades:
order: Optional[Order] = None
current_profit_fiat: Optional[float] = None
if trade.open_order_id:
order = trade.select_order_by_order_id(trade.open_order_id)
# calculate profit and send message to user
@ -176,23 +177,26 @@ class RPC:
trade.pair, side='exit', is_short=trade.is_short, refresh=False)
except (ExchangeError, PricingError):
current_rate = NAN
else:
current_rate = trade.close_rate
if len(trade.select_filled_orders(trade.entry_side)) > 0:
current_profit = trade.calc_profit_ratio(
current_rate) if not isnan(current_rate) else NAN
current_profit_abs = trade.calc_profit(
current_rate) if not isnan(current_rate) else NAN
current_profit_fiat: Optional[float] = None
else:
current_profit = current_profit_abs = current_profit_fiat = 0.0
else:
# Closed trade ...
current_rate = trade.close_rate
current_profit = trade.close_profit
current_profit_abs = trade.close_profit_abs
# Calculate fiat profit
if self._fiat_converter:
if not isnan(current_profit_abs) and self._fiat_converter:
current_profit_fiat = self._fiat_converter.convert_amount(
current_profit_abs,
self._freqtrade.config['stake_currency'],
self._freqtrade.config['fiat_display_currency']
)
else:
current_profit = current_profit_abs = current_profit_fiat = 0.0
# Calculate guaranteed profit (in case of trailing stop)
stoploss_entry_dist = trade.calc_profit(trade.stop_loss)
@ -1058,15 +1062,26 @@ class RPC:
return self._convert_dataframe_to_dict(self._freqtrade.config['strategy'],
pair, timeframe, _data, last_analyzed)
def __rpc_analysed_dataframe_raw(self, pair: str, timeframe: str,
limit: Optional[int]) -> Tuple[DataFrame, datetime]:
""" Get the dataframe and last analyze from the dataprovider """
def __rpc_analysed_dataframe_raw(
self,
pair: str,
timeframe: str,
limit: Optional[int]
) -> Tuple[DataFrame, datetime]:
"""
Get the dataframe and last analyze from the dataprovider
:param pair: The pair to get
:param timeframe: The timeframe of data to get
:param limit: The amount of candles in the dataframe
"""
_data, last_analyzed = self._freqtrade.dataprovider.get_analyzed_dataframe(
pair, timeframe)
_data = _data.copy()
if limit:
_data = _data.iloc[-limit:]
return _data, last_analyzed
def _ws_all_analysed_dataframes(
@ -1074,7 +1089,16 @@ class RPC:
pairlist: List[str],
limit: Optional[int]
) -> Generator[Dict[str, Any], None, None]:
""" Get the analysed dataframes of each pair in the pairlist """
"""
Get the analysed dataframes of each pair in the pairlist.
If specified, only return the most recent `limit` candles for
each dataframe.
:param pairlist: A list of pairs to get
:param limit: If an integer, limits the size of the dataframe.
If a list of string datetimes, only returns those candles
:returns: A generator of dictionaries with the key, dataframe, and last analyzed timestamp
"""
timeframe = self._freqtrade.config['timeframe']
candle_type = self._freqtrade.config.get('candle_type_def', CandleType.SPOT)
@ -1087,10 +1111,15 @@ class RPC:
"la": last_analyzed
}
def _ws_request_analyzed_df(self, limit: Optional[int]):
def _ws_request_analyzed_df(
self,
limit: Optional[int] = None,
pair: Optional[str] = None
):
""" Historical Analyzed Dataframes for WebSocket """
whitelist = self._freqtrade.active_pair_whitelist
return self._ws_all_analysed_dataframes(whitelist, limit)
pairlist = [pair] if pair else self._freqtrade.active_pair_whitelist
return self._ws_all_analysed_dataframes(pairlist, limit)
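A hedged consumption sketch (assumes an `rpc` instance of RPC; the docstring above promises the key, dataframe and last-analyzed timestamp per message - only the "la" key is visible in this hunk, so "key" and "df" are assumptions):
# Hypothetical iteration over the generator for a single pair:
for message in rpc._ws_request_analyzed_df(limit=100, pair="BTC/USDT"):
    key, df, last_analyzed = message["key"], message["df"], message["la"]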
def _ws_request_whitelist(self):
""" Whitelist data for WebSocket """

View File

@ -28,7 +28,7 @@ class FreqaiExampleStrategy(IStrategy):
plot_config = {
"main_plot": {},
"subplots": {
"prediction": {"prediction": {"color": "blue"}},
"&-s_close": {"prediction": {"color": "blue"}},
"do_predict": {
"do_predict": {"color": "brown"},
},
@ -140,7 +140,8 @@ class FreqaiExampleStrategy(IStrategy):
# If user wishes to use multiple targets, they can add more by
# appending more columns with '&'. User should keep in mind that multi targets
# require a multioutput prediction model such as
# templates/CatboostPredictionMultiModel.py,
# freqai/prediction_models/CatboostRegressorMultiTarget.py,
# freqtrade trade --freqaimodel CatboostRegressorMultiTarget
# df["&-s_range"] = (
# df["close"]

View File

@ -7,14 +7,17 @@
"# Strategy analysis example\n",
"\n",
"Debugging a strategy can be time-consuming. Freqtrade offers helper functions to visualize raw data.\n",
"The following assumes you work with SampleStrategy, data for 5m timeframe from Binance and have downloaded them into the data directory in the default location."
"The following assumes you work with SampleStrategy, data for 5m timeframe from Binance and have downloaded them into the data directory in the default location.\n",
"Please follow the [documentation](https://www.freqtrade.io/en/stable/data-download/) for more details."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
"## Setup\n",
"\n",
"### Change Working directory to repository root"
]
},
{
@ -23,7 +26,38 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from pathlib import Path\n",
"\n",
"# Change directory\n",
"# Modify this cell to insure that the output shows the correct path.\n",
"# Define all paths relative to the project root shown in the cell output\n",
"project_root = \"somedir/freqtrade\"\n",
"i=0\n",
"try:\n",
" os.chdirdir(project_root)\n",
" assert Path('LICENSE').is_file()\n",
"except:\n",
" while i<4 and (not Path('LICENSE').is_file()):\n",
" os.chdir(Path(Path.cwd(), '../'))\n",
" i+=1\n",
" project_root = Path.cwd()\n",
"print(Path.cwd())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure Freqtrade environment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from freqtrade.configuration import Configuration\n",
"\n",
"# Customize these according to your needs.\n",
@ -31,14 +65,14 @@
"# Initialize empty configuration object\n",
"config = Configuration.from_files([])\n",
"# Optionally (recommended), use existing configuration file\n",
"# config = Configuration.from_files([\"config.json\"])\n",
"# config = Configuration.from_files([\"user_data/config.json\"])\n",
"\n",
"# Define some constants\n",
"config[\"timeframe\"] = \"5m\"\n",
"# Name of the strategy class\n",
"config[\"strategy\"] = \"SampleStrategy\"\n",
"# Location of the data\n",
"data_location = config['datadir']\n",
"data_location = config[\"datadir\"]\n",
"# Pair to analyze - Only use one pair here\n",
"pair = \"BTC/USDT\""
]
@ -56,12 +90,12 @@
"candles = load_pair_history(datadir=data_location,\n",
" timeframe=config[\"timeframe\"],\n",
" pair=pair,\n",
" data_format = \"hdf5\",\n",
" data_format = \"json\", # Make sure to update this to your data\n",
" candle_type=CandleType.SPOT,\n",
" )\n",
"\n",
"# Confirm success\n",
"print(\"Loaded \" + str(len(candles)) + f\" rows of data for {pair} from {data_location}\")\n",
"print(f\"Loaded {len(candles)} rows of data for {pair} from {data_location}\")\n",
"candles.head()"
]
},
@ -365,7 +399,7 @@
"metadata": {
"file_extension": ".py",
"kernelspec": {
"display_name": "Python 3.9.7 64-bit ('trade_397')",
"display_name": "Python 3.9.7 64-bit",
"language": "python",
"name": "python3"
},

View File

@ -291,12 +291,17 @@ class Wallets:
return self._check_available_stake_amount(stake_amount, available_amount)
def validate_stake_amount(self, pair: str, stake_amount: Optional[float],
min_stake_amount: Optional[float], max_stake_amount: float):
min_stake_amount: Optional[float], max_stake_amount: float,
trade_amount: Optional[float]):
if not stake_amount:
logger.debug(f"Stake amount is {stake_amount}, ignoring possible trade for {pair}.")
return 0
max_stake_amount = min(max_stake_amount, self.get_available_stake_amount())
if trade_amount:
# If in a trade, the resulting trade size cannot go beyond the max stake.
# Otherwise we could no longer exit.
max_stake_amount = min(max_stake_amount, max_stake_amount - trade_amount)
if min_stake_amount is not None and min_stake_amount > max_stake_amount:
if self._log:
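For intuition, a small hedged sketch with made-up numbers showing why the cap shrinks by the amount already committed to the trade:
# Hypothetical values: 800 max stake for the pair, 300 already in the trade.
max_stake_amount = min(800.0, 800.0 - 300.0)
assert max_stake_amount == 500.0  # room left for adjustment before hitting the cap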

View File

@ -41,6 +41,7 @@ nav:
- Backtest analysis: advanced-backtesting.md
- Advanced Topics:
- Advanced Post-installation Tasks: advanced-setup.md
- Trade Object: trade-object.md
- Advanced Strategy: strategy-advanced.md
- Advanced Hyperopt: advanced-hyperopt.md
- Producer/Consumer mode: producer-consumer.md

View File

@ -10,24 +10,24 @@ coveralls==3.3.1
flake8==6.0.0
flake8-tidy-imports==4.8.0
mypy==0.991
pre-commit==2.20.0
pre-commit==2.21.0
pytest==7.2.0
pytest-asyncio==0.20.2
pytest-asyncio==0.20.3
pytest-cov==4.0.0
pytest-mock==3.10.0
pytest-random-order==1.1.0
isort==5.10.1
isort==5.11.4
# For datetime mocking
time-machine==2.8.2
time-machine==2.9.0
# fastapi testing
httpx==0.23.1
# Convert jupyter notebooks to markdown documents
nbconvert==7.2.5
nbconvert==7.2.7
# mypy types
types-cachetools==5.2.1
types-filelock==3.2.7
types-requests==2.28.11.5
types-requests==2.28.11.7
types-tabulate==0.9.0.0
types-python-dateutil==2.8.19.4
types-python-dateutil==2.8.19.5

View File

@ -2,7 +2,7 @@
-r requirements-freqai.txt
# Required for freqai-rl
torch==1.13.0
torch==1.13.1
stable-baselines3==1.6.2
sb3-contrib==1.6.2
# Gym is forced to this version by stable-baselines3.

View File

@ -7,5 +7,5 @@ scikit-learn==1.1.3
joblib==1.2.0
catboost==1.1.1; platform_machine != 'aarch64'
lightgbm==3.3.3
xgboost==1.7.1
xgboost==1.7.2
tensorboard==2.11.0

View File

@ -5,5 +5,5 @@
scipy==1.9.3
scikit-learn==1.1.3
scikit-optimize==0.9.0
filelock==3.8.0
filelock==3.9.0
progressbar2==4.2.0

View File

@ -1,14 +1,14 @@
numpy==1.23.5
numpy==1.24.1
pandas==1.5.2
pandas-ta==0.3.14b
ccxt==2.2.67
ccxt==2.4.60
# Pin cryptography for now due to rust build errors with piwheels
cryptography==38.0.1; platform_machine == 'armv7l'
cryptography==38.0.4; platform_machine != 'armv7l'
aiohttp==3.8.3
SQLAlchemy==1.4.44
python-telegram-bot==13.14
SQLAlchemy==1.4.45
python-telegram-bot==13.15
arrow==1.2.3
cachetools==4.2.2
requests==2.28.1
@ -19,8 +19,8 @@ technical==1.3.0
tabulate==0.9.0
pycoingecko==3.1.0
jinja2==3.1.2
tables==3.7.0
blosc==1.10.6
tables==3.8.0
blosc==1.11.1
joblib==1.2.0
pyarrow==10.0.1; platform_machine != 'armv7l'
@ -37,7 +37,7 @@ sdnotify==0.3.2
# API Server
fastapi==0.88.0
pydantic==1.10.2
pydantic==1.10.4
uvicorn==0.20.0
pyjwt==2.6.0
aiofiles==22.1.0
@ -47,7 +47,7 @@ psutil==5.9.4
colorama==0.4.6
# Building config files interactively
questionary==1.10.0
prompt-toolkit==3.0.33
prompt-toolkit==3.0.36
# Extensions to datetime library
python-dateutil==2.8.2

View File

@ -25,6 +25,11 @@ freqai_rl = [
'sb3-contrib'
]
hdf5 = [
'tables',
'blosc',
]
develop = [
'coveralls',
'flake8',
@ -44,7 +49,7 @@ jupyter = [
'nbconvert',
]
all_extra = plot + develop + jupyter + hyperopt + freqai + freqai_rl
all_extra = plot + develop + jupyter + hyperopt + hdf5 + freqai + freqai_rl
setup(
tests_require=[
@ -78,8 +83,6 @@ setup(
'prompt-toolkit',
'numpy',
'pandas',
'tables',
'blosc',
'joblib>=1.2.0',
'pyarrow; platform_machine != "armv7l"',
'fastapi',
@ -97,6 +100,7 @@ setup(
'plot': plot,
'jupyter': jupyter,
'hyperopt': hyperopt,
'hdf5': hdf5,
'freqai': freqai,
'freqai_rl': freqai_rl,
'all': all_extra,
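With hdf5 split into an optional extra, HDF5 support becomes opt-in; installing it presumably follows the same pattern as the other extras, e.g. `pip install -e .[hdf5]` from a source checkout.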

View File

@ -746,9 +746,7 @@ def test_download_data_no_exchange(mocker, caplog):
start_download_data(pargs)
def test_download_data_no_pairs(mocker, caplog):
mocker.patch.object(Path, "exists", MagicMock(return_value=False))
def test_download_data_no_pairs(mocker):
mocker.patch('freqtrade.commands.data_commands.refresh_backtest_ohlcv_data',
MagicMock(return_value=["ETH/BTC", "XRP/BTC"]))
@ -770,8 +768,6 @@ def test_download_data_no_pairs(mocker, caplog):
def test_download_data_all_pairs(mocker, markets):
mocker.patch.object(Path, "exists", MagicMock(return_value=False))
dl_mock = mocker.patch('freqtrade.commands.data_commands.refresh_backtest_ohlcv_data',
MagicMock(return_value=["ETH/BTC", "XRP/BTC"]))
patch_exchange(mocker)
@ -1529,7 +1525,7 @@ def test_backtesting_show(mocker, testdatadir, capsys):
args = [
"backtesting-show",
"--export-filename",
f"{testdatadir / 'backtest_results/backtest-result_new.json'}",
f"{testdatadir / 'backtest_results/backtest-result.json'}",
"--show-pair-list"
]
pargs = get_args(args)

View File

@ -408,6 +408,11 @@ def create_mock_trades_usdt(fee, is_short: Optional[bool] = False, use_db: bool
Trade.commit()
@pytest.fixture(autouse=True)
def patch_gc(mocker) -> None:
mocker.patch("freqtrade.main.gc_set_threshold")
@pytest.fixture(autouse=True)
def patch_coingekko(mocker) -> None:
"""

View File

@ -12,9 +12,11 @@ from freqtrade.data.btanalysis import (BT_DATA_COLUMNS, analyze_trade_parallelis
get_latest_hyperopt_file, load_backtest_data,
load_backtest_metadata, load_trades, load_trades_from_db)
from freqtrade.data.history import load_data, load_pair_history
from freqtrade.data.metrics import (calculate_cagr, calculate_csum, calculate_market_change,
calculate_max_drawdown, calculate_underwater,
combine_dataframes_with_mean, create_cum_profit)
from freqtrade.data.metrics import (calculate_cagr, calculate_calmar, calculate_csum,
calculate_expectancy, calculate_market_change,
calculate_max_drawdown, calculate_sharpe, calculate_sortino,
calculate_underwater, combine_dataframes_with_mean,
create_cum_profit)
from freqtrade.exceptions import OperationalException
from tests.conftest import CURRENT_TEST_STRATEGY, create_mock_trades
from tests.conftest_trades import MOCK_TRADE_COUNT
@ -30,10 +32,10 @@ def test_get_latest_backtest_filename(testdatadir, mocker):
testdir_bt = testdatadir / "backtest_results"
res = get_latest_backtest_filename(testdir_bt)
assert res == 'backtest-result_new.json'
assert res == 'backtest-result.json'
res = get_latest_backtest_filename(str(testdir_bt))
assert res == 'backtest-result_new.json'
assert res == 'backtest-result.json'
mocker.patch("freqtrade.data.btanalysis.json_load", return_value={})
@ -81,7 +83,7 @@ def test_load_backtest_data_old_format(testdatadir, mocker):
def test_load_backtest_data_new_format(testdatadir):
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
assert isinstance(bt_data, DataFrame)
assert set(bt_data.columns) == set(BT_DATA_COLUMNS)
@ -182,7 +184,7 @@ def test_extract_trades_of_period(testdatadir):
def test_analyze_trade_parallelism(testdatadir):
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
res = analyze_trade_parallelism(bt_data, "5m")
@ -256,7 +258,7 @@ def test_combine_dataframes_with_mean_no_data(testdatadir):
def test_create_cum_profit(testdatadir):
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
timerange = TimeRange.parse_timerange("20180110-20180112")
@ -268,11 +270,11 @@ def test_create_cum_profit(testdatadir):
"cum_profits", timeframe="5m")
assert "cum_profits" in cum_profits.columns
assert cum_profits.iloc[0]['cum_profits'] == 0
assert pytest.approx(cum_profits.iloc[-1]['cum_profits']) == 8.723007518796964e-06
assert pytest.approx(cum_profits.iloc[-1]['cum_profits']) == 9.0225563e-05
def test_create_cum_profit1(testdatadir):
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
# Move close-time to "off" the candle, to make sure the logic still works
bt_data['close_date'] = bt_data.loc[:, 'close_date'] + DateOffset(seconds=20)
@ -286,7 +288,7 @@ def test_create_cum_profit1(testdatadir):
"cum_profits", timeframe="5m")
assert "cum_profits" in cum_profits.columns
assert cum_profits.iloc[0]['cum_profits'] == 0
assert pytest.approx(cum_profits.iloc[-1]['cum_profits']) == 8.723007518796964e-06
assert pytest.approx(cum_profits.iloc[-1]['cum_profits']) == 9.0225563e-05
with pytest.raises(ValueError, match='Trade dataframe empty.'):
create_cum_profit(df.set_index('date'), bt_data[bt_data["pair"] == 'NOTAPAIR'],
@ -294,18 +296,18 @@ def test_create_cum_profit1(testdatadir):
def test_calculate_max_drawdown(testdatadir):
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
_, hdate, lowdate, hval, lval, drawdown = calculate_max_drawdown(
bt_data, value_col="profit_abs")
assert isinstance(drawdown, float)
assert pytest.approx(drawdown) == 0.12071099
assert pytest.approx(drawdown) == 0.29753914
assert isinstance(hdate, Timestamp)
assert isinstance(lowdate, Timestamp)
assert isinstance(hval, float)
assert isinstance(lval, float)
assert hdate == Timestamp('2018-01-25 01:30:00', tz='UTC')
assert lowdate == Timestamp('2018-01-25 03:50:00', tz='UTC')
assert hdate == Timestamp('2018-01-16 19:30:00', tz='UTC')
assert lowdate == Timestamp('2018-01-16 22:25:00', tz='UTC')
underwater = calculate_underwater(bt_data)
assert isinstance(underwater, DataFrame)
@ -318,14 +320,15 @@ def test_calculate_max_drawdown(testdatadir):
def test_calculate_csum(testdatadir):
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
csum_min, csum_max = calculate_csum(bt_data)
assert isinstance(csum_min, float)
assert isinstance(csum_max, float)
assert csum_min < 0.01
assert csum_max > 0.02
assert csum_min < csum_max
assert csum_min < 0.0001
assert csum_max > 0.0002
csum_min1, csum_max1 = calculate_csum(bt_data, 5)
assert csum_min1 == csum_min + 5
@ -335,6 +338,69 @@ def test_calculate_csum(testdatadir):
csum_min, csum_max = calculate_csum(DataFrame())
def test_calculate_expectancy(testdatadir):
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
expectancy = calculate_expectancy(DataFrame())
assert expectancy == 0.0
expectancy = calculate_expectancy(bt_data)
assert isinstance(expectancy, float)
assert pytest.approx(expectancy) == 0.07151374226574791
def test_calculate_sortino(testdatadir):
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
sortino = calculate_sortino(DataFrame(), None, None, 0)
assert sortino == 0.0
sortino = calculate_sortino(
bt_data,
bt_data['open_date'].min(),
bt_data['close_date'].max(),
0.01,
)
assert isinstance(sortino, float)
assert pytest.approx(sortino) == 35.17722
def test_calculate_sharpe(testdatadir):
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
sharpe = calculate_sharpe(DataFrame(), None, None, 0)
assert sharpe == 0.0
sharpe = calculate_sharpe(
bt_data,
bt_data['open_date'].min(),
bt_data['close_date'].max(),
0.01,
)
assert isinstance(sharpe, float)
assert pytest.approx(sharpe) == 44.5078669
def test_calculate_calmar(testdatadir):
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
calmar = calculate_calmar(DataFrame(), None, None, 0)
assert calmar == 0.0
calmar = calculate_calmar(
bt_data,
bt_data['open_date'].min(),
bt_data['close_date'].max(),
0.01,
)
assert isinstance(calmar, float)
assert pytest.approx(calmar) == 559.040508
@pytest.mark.parametrize('start,end,days, expected', [
(64900, 176000, 3 * 365, 0.3945),
(64900, 176000, 365, 1.7119),

View File

@ -2,13 +2,13 @@ from datetime import datetime, timezone
from unittest.mock import MagicMock
import pytest
from pandas import DataFrame
from pandas import DataFrame, Timestamp
from freqtrade.data.dataprovider import DataProvider
from freqtrade.enums import CandleType, RunMode
from freqtrade.exceptions import ExchangeError, OperationalException
from freqtrade.plugins.pairlistmanager import PairListManager
from tests.conftest import get_patched_exchange
from tests.conftest import generate_test_data, get_patched_exchange
@pytest.mark.parametrize('candle_type', [
@ -144,7 +144,7 @@ def test_available_pairs(mocker, default_conf, ohlcv_history):
assert dp.available_pairs == [("XRP/BTC", timeframe), ("UNITTEST/BTC", timeframe), ]
def test_producer_pairs(mocker, default_conf, ohlcv_history):
def test_producer_pairs(default_conf):
dataprovider = DataProvider(default_conf, None)
producer = "default"
@ -161,9 +161,9 @@ def test_producer_pairs(mocker, default_conf, ohlcv_history):
assert dataprovider.get_producer_pairs("bad") == []
def test_get_producer_df(mocker, default_conf, ohlcv_history):
def test_get_producer_df(default_conf):
dataprovider = DataProvider(default_conf, None)
ohlcv_history = generate_test_data('5m', 150)
pair = 'BTC/USDT'
timeframe = default_conf['timeframe']
candle_type = CandleType.SPOT
@ -221,7 +221,7 @@ def test_emit_df(mocker, default_conf, ohlcv_history):
assert send_mock.call_count == 0
def test_refresh(mocker, default_conf, ohlcv_history):
def test_refresh(mocker, default_conf):
refresh_mock = MagicMock()
mocker.patch("freqtrade.exchange.Exchange.refresh_latest_ohlcv", refresh_mock)
@ -412,3 +412,80 @@ def test_dp_send_msg(default_conf):
dp = DataProvider(default_conf, None)
dp.send_msg(msg, always_send=True)
assert msg not in dp._msg_queue
def test_dp__add_external_df(default_conf_usdt):
timeframe = '1h'
default_conf_usdt["timeframe"] = timeframe
dp = DataProvider(default_conf_usdt, None)
df = generate_test_data(timeframe, 24, '2022-01-01 00:00:00+00:00')
last_analyzed = datetime.now(timezone.utc)
res = dp._add_external_df('ETH/USDT', df, last_analyzed, timeframe, CandleType.SPOT)
assert res[0] is False
# 1000 candles are requested by default when no data exists for this producer yet
assert res[1] == 1000
# Hard add dataframe
dp._replace_external_df('ETH/USDT', df, last_analyzed, timeframe, CandleType.SPOT)
# BTC is not stored yet
res = dp._add_external_df('BTC/USDT', df, last_analyzed, timeframe, CandleType.SPOT)
assert res[0] is False
df_res, _ = dp.get_producer_df('ETH/USDT', timeframe, CandleType.SPOT)
assert len(df_res) == 24
# Add the same dataframe again - dataframe size shall not change.
res = dp._add_external_df('ETH/USDT', df, last_analyzed, timeframe, CandleType.SPOT)
assert res[0] is True
assert res[1] == 0
df, _ = dp.get_producer_df('ETH/USDT', timeframe, CandleType.SPOT)
assert len(df) == 24
# Add a new day.
df2 = generate_test_data(timeframe, 24, '2022-01-02 00:00:00+00:00')
res = dp._add_external_df('ETH/USDT', df2, last_analyzed, timeframe, CandleType.SPOT)
assert res[0] is True
assert res[1] == 0
df, _ = dp.get_producer_df('ETH/USDT', timeframe, CandleType.SPOT)
assert len(df) == 48
# Add a dataframe with a 12 hour offset - so 12 candles are overlapping, and 12 valid.
df3 = generate_test_data(timeframe, 24, '2022-01-02 12:00:00+00:00')
res = dp._add_external_df('ETH/USDT', df3, last_analyzed, timeframe, CandleType.SPOT)
assert res[0] is True
assert res[1] == 0
df, _ = dp.get_producer_df('ETH/USDT', timeframe, CandleType.SPOT)
# New length = 48 + 12 (since we have a 12 hour offset).
assert len(df) == 60
assert df.iloc[-1]['date'] == df3.iloc[-1]['date']
assert df.iloc[-1]['date'] == Timestamp('2022-01-03 11:00:00+00:00')
# Generate 1 new candle
df4 = generate_test_data(timeframe, 1, '2022-01-03 12:00:00+00:00')
res = dp._add_external_df('ETH/USDT', df4, last_analyzed, timeframe, CandleType.SPOT)
# assert res[0] is True
# assert res[1] == 0
df, _ = dp.get_producer_df('ETH/USDT', timeframe, CandleType.SPOT)
# New length = 60 + 1
assert len(df) == 61
assert df.iloc[-2]['date'] == Timestamp('2022-01-03 11:00:00+00:00')
assert df.iloc[-1]['date'] == Timestamp('2022-01-03 12:00:00+00:00')
# Gap in the data ...
df4 = generate_test_data(timeframe, 1, '2022-01-05 00:00:00+00:00')
res = dp._add_external_df('ETH/USDT', df4, last_analyzed, timeframe, CandleType.SPOT)
assert res[0] is False
# 36 hours - from 2022-01-03 12:00:00+00:00 to 2022-01-05 00:00:00+00:00
assert res[1] == 36
df, _ = dp.get_producer_df('ETH/USDT', timeframe, CandleType.SPOT)
# Length unchanged - the gap data was not appended
assert len(df) == 61
# Empty dataframe
df4 = generate_test_data(timeframe, 0, '2022-01-05 00:00:00+00:00')
res = dp._add_external_df('ETH/USDT', df4, last_analyzed, timeframe, CandleType.SPOT)
assert res[0] is False
# Empty dataframe - no missing candles to report
assert res[1] == 0
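In short, a hedged restatement of the contract exercised above (names as in the test):
# _add_external_df returns (did_append, n_missing): overlapping candles are
# merged, while a hole of more than one candle rejects the append and reports
# the gap size so the consumer can re-request exactly that many candles.
did_append, n_missing = dp._add_external_df(
    'ETH/USDT', df4, last_analyzed, timeframe, CandleType.SPOT)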

View File

@ -23,7 +23,7 @@ from tests.exchange.test_exchange import ccxt_exceptionhandlers
def test_stoploss_order_binance(default_conf, mocker, limitratio, expected, side, trademode):
api_mock = MagicMock()
order_id = 'test_prod_buy_{}'.format(randint(0, 10 ** 6))
order_type = 'stop_loss_limit' if trademode == TradingMode.SPOT else 'limit'
order_type = 'stop_loss_limit' if trademode == TradingMode.SPOT else 'stop'
api_mock.create_order = MagicMock(return_value={
'id': order_id,
@ -557,7 +557,7 @@ async def test__async_get_historic_ohlcv_binance(default_conf, mocker, caplog, c
exchange._api_async.fetch_ohlcv = get_mock_coro(ohlcv)
pair = 'ETH/BTC'
respair, restf, restype, res = await exchange._async_get_historic_ohlcv(
respair, restf, restype, res, _ = await exchange._async_get_historic_ohlcv(
pair, "5m", 1500000000000, is_new_pair=False, candle_type=candle_type)
assert respair == pair
assert restf == '5m'
@ -566,7 +566,7 @@ async def test__async_get_historic_ohlcv_binance(default_conf, mocker, caplog, c
assert exchange._api_async.fetch_ohlcv.call_count > 400
# assert res == ohlcv
exchange._api_async.fetch_ohlcv.reset_mock()
_, _, _, res = await exchange._async_get_historic_ohlcv(
_, _, _, res, _ = await exchange._async_get_historic_ohlcv(
pair, "5m", 1500000000000, is_new_pair=True, candle_type=candle_type)
# Called twice - one "init" call - and one to get the actual data.

View File

@ -8,16 +8,19 @@ suitable to run with freqtrade.
from copy import deepcopy
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Tuple
import pytest
from freqtrade.enums import CandleType
from freqtrade.exchange import timeframe_to_minutes, timeframe_to_prev_date
from freqtrade.exchange.exchange import timeframe_to_msecs
from freqtrade.exchange.exchange import Exchange, timeframe_to_msecs
from freqtrade.resolvers.exchange_resolver import ExchangeResolver
from tests.conftest import get_default_conf_usdt
EXCHANGE_FIXTURE_TYPE = Tuple[Exchange, str]
# Exchanges that should be tested
EXCHANGES = {
'bittrex': {
@ -141,19 +144,19 @@ def exchange_futures(request, exchange_conf, class_mocker):
@pytest.mark.longrun
class TestCCXTExchange():
def test_load_markets(self, exchange):
exchange, exchangename = exchange
def test_load_markets(self, exchange: EXCHANGE_FIXTURE_TYPE):
exch, exchangename = exchange
pair = EXCHANGES[exchangename]['pair']
markets = exchange.markets
markets = exch.markets
assert pair in markets
assert isinstance(markets[pair], dict)
assert exchange.market_is_spot(markets[pair])
assert exch.market_is_spot(markets[pair])
def test_has_validations(self, exchange):
def test_has_validations(self, exchange: EXCHANGE_FIXTURE_TYPE):
exchange, exchangename = exchange
exch, exchangename = exchange
exchange.validate_ordertypes({
exch.validate_ordertypes({
'entry': 'limit',
'exit': 'limit',
'stoploss': 'limit',
@ -162,13 +165,13 @@ class TestCCXTExchange():
if exchangename == 'gateio':
# gateio doesn't have market orders on spot
return
exchange.validate_ordertypes({
exch.validate_ordertypes({
'entry': 'market',
'exit': 'market',
'stoploss': 'market',
})
def test_load_markets_futures(self, exchange_futures):
def test_load_markets_futures(self, exchange_futures: EXCHANGE_FIXTURE_TYPE):
exchange, exchangename = exchange_futures
if not exchange:
# exchange_futures only returns values for supported exchanges
@ -181,11 +184,11 @@ class TestCCXTExchange():
assert exchange.market_is_future(markets[pair])
def test_ccxt_fetch_tickers(self, exchange):
exchange, exchangename = exchange
def test_ccxt_fetch_tickers(self, exchange: EXCHANGE_FIXTURE_TYPE):
exch, exchangename = exchange
pair = EXCHANGES[exchangename]['pair']
tickers = exchange.get_tickers()
tickers = exch.get_tickers()
assert pair in tickers
assert 'ask' in tickers[pair]
assert tickers[pair]['ask'] is not None
@ -195,11 +198,11 @@ class TestCCXTExchange():
if EXCHANGES[exchangename].get('hasQuoteVolume'):
assert tickers[pair]['quoteVolume'] is not None
def test_ccxt_fetch_ticker(self, exchange):
exchange, exchangename = exchange
def test_ccxt_fetch_ticker(self, exchange: EXCHANGE_FIXTURE_TYPE):
exch, exchangename = exchange
pair = EXCHANGES[exchangename]['pair']
ticker = exchange.fetch_ticker(pair)
ticker = exch.fetch_ticker(pair)
assert 'ask' in ticker
assert ticker['ask'] is not None
assert 'bid' in ticker
@ -208,21 +211,21 @@ class TestCCXTExchange():
if EXCHANGES[exchangename].get('hasQuoteVolume'):
assert ticker['quoteVolume'] is not None
def test_ccxt_fetch_l2_orderbook(self, exchange):
exchange, exchangename = exchange
def test_ccxt_fetch_l2_orderbook(self, exchange: EXCHANGE_FIXTURE_TYPE):
exch, exchangename = exchange
pair = EXCHANGES[exchangename]['pair']
l2 = exchange.fetch_l2_order_book(pair)
l2 = exch.fetch_l2_order_book(pair)
assert 'asks' in l2
assert 'bids' in l2
assert len(l2['asks']) >= 1
assert len(l2['bids']) >= 1
l2_limit_range = exchange._ft_has['l2_limit_range']
l2_limit_range_required = exchange._ft_has['l2_limit_range_required']
l2_limit_range = exch._ft_has['l2_limit_range']
l2_limit_range_required = exch._ft_has['l2_limit_range_required']
if exchangename == 'gateio':
# TODO: Gateio is unstable here at the moment, ignoring the limit partially.
return
for val in [1, 2, 5, 25, 100]:
l2 = exchange.fetch_l2_order_book(pair, val)
l2 = exch.fetch_l2_order_book(pair, val)
if not l2_limit_range or val in l2_limit_range:
if val > 50:
# Orderbooks are not always this deep.
@ -232,7 +235,7 @@ class TestCCXTExchange():
assert len(l2['asks']) == val
assert len(l2['bids']) == val
else:
next_limit = exchange.get_next_limit_in_list(
next_limit = exch.get_next_limit_in_list(
val, l2_limit_range, l2_limit_range_required)
if next_limit is None:
assert len(l2['asks']) > 100
@ -245,23 +248,23 @@ class TestCCXTExchange():
assert len(l2['asks']) == next_limit
assert len(l2['bids']) == next_limit
def test_ccxt_fetch_ohlcv(self, exchange):
exchange, exchangename = exchange
def test_ccxt_fetch_ohlcv(self, exchange: EXCHANGE_FIXTURE_TYPE):
exch, exchangename = exchange
pair = EXCHANGES[exchangename]['pair']
timeframe = EXCHANGES[exchangename]['timeframe']
pair_tf = (pair, timeframe, CandleType.SPOT)
ohlcv = exchange.refresh_latest_ohlcv([pair_tf])
ohlcv = exch.refresh_latest_ohlcv([pair_tf])
assert isinstance(ohlcv, dict)
assert len(ohlcv[pair_tf]) == len(exchange.klines(pair_tf))
# assert len(exchange.klines(pair_tf)) > 200
assert len(ohlcv[pair_tf]) == len(exch.klines(pair_tf))
# assert len(exch.klines(pair_tf)) > 200
# Assume 90% uptime ...
assert len(exchange.klines(pair_tf)) > exchange.ohlcv_candle_limit(
assert len(exch.klines(pair_tf)) > exch.ohlcv_candle_limit(
timeframe, CandleType.SPOT) * 0.90
# Check if last-timeframe is within the last 2 intervals
now = datetime.now(timezone.utc) - timedelta(minutes=(timeframe_to_minutes(timeframe) * 2))
assert exchange.klines(pair_tf).iloc[-1]['date'] >= timeframe_to_prev_date(timeframe, now)
assert exch.klines(pair_tf).iloc[-1]['date'] >= timeframe_to_prev_date(timeframe, now)
def ccxt__async_get_candle_history(self, exchange, exchangename, pair, timeframe, candle_type):
@ -289,17 +292,17 @@ class TestCCXTExchange():
assert len(candles) >= min(candle_count, candle_count1)
assert candles[0][0] == since_ms or candles[0][0] == (since_ms + timeframe_ms)
def test_ccxt__async_get_candle_history(self, exchange):
exchange, exchangename = exchange
def test_ccxt__async_get_candle_history(self, exchange: EXCHANGE_FIXTURE_TYPE):
exc, exchangename = exchange
# For some weird reason, this test returns random lengths for bittrex.
if not exchange._ft_has['ohlcv_has_history'] or exchangename in ('bittrex'):
if not exc._ft_has['ohlcv_has_history'] or exchangename in ('bittrex',):
return
pair = EXCHANGES[exchangename]['pair']
timeframe = EXCHANGES[exchangename]['timeframe']
self.ccxt__async_get_candle_history(
exchange, exchangename, pair, timeframe, CandleType.SPOT)
exc, exchangename, pair, timeframe, CandleType.SPOT)
def test_ccxt__async_get_candle_history_futures(self, exchange_futures):
def test_ccxt__async_get_candle_history_futures(self, exchange_futures: EXCHANGE_FIXTURE_TYPE):
exchange, exchangename = exchange_futures
if not exchange:
# exchange_futures only returns values for supported exchanges
@ -309,7 +312,7 @@ class TestCCXTExchange():
self.ccxt__async_get_candle_history(
exchange, exchangename, pair, timeframe, CandleType.FUTURES)
def test_ccxt_fetch_funding_rate_history(self, exchange_futures):
def test_ccxt_fetch_funding_rate_history(self, exchange_futures: EXCHANGE_FIXTURE_TYPE):
exchange, exchangename = exchange_futures
if not exchange:
# exchange_futures only returns values for supported exchanges
@ -347,7 +350,7 @@ class TestCCXTExchange():
(rate['open'].min() != rate['open'].max())
)
def test_ccxt_fetch_mark_price_history(self, exchange_futures):
def test_ccxt_fetch_mark_price_history(self, exchange_futures: EXCHANGE_FIXTURE_TYPE):
exchange, exchangename = exchange_futures
if not exchange:
# exchange_futures only returns values for supported exchanges
@ -371,7 +374,7 @@ class TestCCXTExchange():
assert mark_candles[mark_candles['date'] == prev_hour].iloc[0]['open'] != 0.0
assert mark_candles[mark_candles['date'] == this_hour].iloc[0]['open'] != 0.0
def test_ccxt__calculate_funding_fees(self, exchange_futures):
def test_ccxt__calculate_funding_fees(self, exchange_futures: EXCHANGE_FIXTURE_TYPE):
exchange, exchangename = exchange_futures
if not exchange:
# exchange_futures only returns values for supported exchanges
@ -387,16 +390,16 @@ class TestCCXTExchange():
# TODO: tests fetch_trades (?)
def test_ccxt_get_fee(self, exchange):
exchange, exchangename = exchange
def test_ccxt_get_fee(self, exchange: EXCHANGE_FIXTURE_TYPE):
exch, exchangename = exchange
pair = EXCHANGES[exchangename]['pair']
threshold = 0.01
assert 0 < exchange.get_fee(pair, 'limit', 'buy') < threshold
assert 0 < exchange.get_fee(pair, 'limit', 'sell') < threshold
assert 0 < exchange.get_fee(pair, 'market', 'buy') < threshold
assert 0 < exchange.get_fee(pair, 'market', 'sell') < threshold
assert 0 < exch.get_fee(pair, 'limit', 'buy') < threshold
assert 0 < exch.get_fee(pair, 'limit', 'sell') < threshold
assert 0 < exch.get_fee(pair, 'market', 'buy') < threshold
assert 0 < exch.get_fee(pair, 'market', 'sell') < threshold
def test_ccxt_get_max_leverage_spot(self, exchange):
def test_ccxt_get_max_leverage_spot(self, exchange: EXCHANGE_FIXTURE_TYPE):
spot, spot_name = exchange
if spot:
leverage_in_market_spot = EXCHANGES[spot_name].get('leverage_in_spot_market')
@ -406,7 +409,7 @@ class TestCCXTExchange():
assert (isinstance(spot_leverage, float) or isinstance(spot_leverage, int))
assert spot_leverage >= 1.0
def test_ccxt_get_max_leverage_futures(self, exchange_futures):
def test_ccxt_get_max_leverage_futures(self, exchange_futures: EXCHANGE_FIXTURE_TYPE):
futures, futures_name = exchange_futures
if futures:
leverage_tiers_public = EXCHANGES[futures_name].get('leverage_tiers_public')
@ -419,7 +422,7 @@ class TestCCXTExchange():
assert (isinstance(futures_leverage, float) or isinstance(futures_leverage, int))
assert futures_leverage >= 1.0
def test_ccxt_get_contract_size(self, exchange_futures):
def test_ccxt_get_contract_size(self, exchange_futures: EXCHANGE_FIXTURE_TYPE):
futures, futures_name = exchange_futures
if futures:
futures_pair = EXCHANGES[futures_name].get(
@ -430,7 +433,7 @@ class TestCCXTExchange():
assert (isinstance(contract_size, float) or isinstance(contract_size, int))
assert contract_size >= 0.0
def test_ccxt_load_leverage_tiers(self, exchange_futures):
def test_ccxt_load_leverage_tiers(self, exchange_futures: EXCHANGE_FIXTURE_TYPE):
futures, futures_name = exchange_futures
if futures and EXCHANGES[futures_name].get('leverage_tiers_public'):
leverage_tiers = futures.load_leverage_tiers()
@ -463,7 +466,7 @@ class TestCCXTExchange():
oldminNotional = tier['minNotional']
oldmaxNotional = tier['maxNotional']
def test_ccxt_dry_run_liquidation_price(self, exchange_futures):
def test_ccxt_dry_run_liquidation_price(self, exchange_futures: EXCHANGE_FIXTURE_TYPE):
futures, futures_name = exchange_futures
if futures and EXCHANGES[futures_name].get('leverage_tiers_public'):
@ -494,7 +497,7 @@ class TestCCXTExchange():
assert (isinstance(liquidation_price, float))
assert liquidation_price >= 0.0
def test_ccxt_get_max_pair_stake_amount(self, exchange_futures):
def test_ccxt_get_max_pair_stake_amount(self, exchange_futures: EXCHANGE_FIXTURE_TYPE):
futures, futures_name = exchange_futures
if futures:
futures_pair = EXCHANGES[futures_name].get(

View File

@ -1955,7 +1955,7 @@ def test_get_historic_ohlcv(default_conf, mocker, caplog, exchange_name, candle_
pair = 'ETH/BTC'
async def mock_candle_hist(pair, timeframe, candle_type, since_ms):
return pair, timeframe, candle_type, ohlcv
return pair, timeframe, candle_type, ohlcv, True
exchange._async_get_candle_history = Mock(wraps=mock_candle_hist)
# one_call calculation * 1.8 should do 2 calls
@ -1988,62 +1988,6 @@ def test_get_historic_ohlcv(default_conf, mocker, caplog, exchange_name, candle_
assert log_has_re(r"Async code raised an exception: .*", caplog)
@pytest.mark.parametrize("exchange_name", EXCHANGES)
@pytest.mark.parametrize('candle_type', ['mark', ''])
def test_get_historic_ohlcv_as_df(default_conf, mocker, exchange_name, candle_type):
exchange = get_patched_exchange(mocker, default_conf, id=exchange_name)
ohlcv = [
[
arrow.utcnow().int_timestamp * 1000, # unix timestamp ms
1, # open
2, # high
3, # low
4, # close
5, # volume (in quote currency)
],
[
arrow.utcnow().shift(minutes=5).int_timestamp * 1000, # unix timestamp ms
1, # open
2, # high
3, # low
4, # close
5, # volume (in quote currency)
],
[
arrow.utcnow().shift(minutes=10).int_timestamp * 1000, # unix timestamp ms
1, # open
2, # high
3, # low
4, # close
5, # volume (in quote currency)
]
]
pair = 'ETH/BTC'
async def mock_candle_hist(pair, timeframe, candle_type, since_ms):
return pair, timeframe, candle_type, ohlcv
exchange._async_get_candle_history = Mock(wraps=mock_candle_hist)
# one_call calculation * 1.8 should do 2 calls
since = 5 * 60 * exchange.ohlcv_candle_limit('5m', CandleType.SPOT) * 1.8
ret = exchange.get_historic_ohlcv_as_df(
pair,
"5m",
int((arrow.utcnow().int_timestamp - since) * 1000),
candle_type=candle_type
)
assert exchange._async_get_candle_history.call_count == 2
# Returns twice the above OHLCV data
assert len(ret) == 2
assert isinstance(ret, DataFrame)
assert 'date' in ret.columns
assert 'open' in ret.columns
assert 'close' in ret.columns
assert 'high' in ret.columns
@pytest.mark.asyncio
@pytest.mark.parametrize("exchange_name", EXCHANGES)
@pytest.mark.parametrize('candle_type', [CandleType.MARK, CandleType.SPOT])
@ -2063,7 +2007,7 @@ async def test__async_get_historic_ohlcv(default_conf, mocker, caplog, exchange_
exchange._api_async.fetch_ohlcv = get_mock_coro(ohlcv)
pair = 'ETH/USDT'
respair, restf, _, res = await exchange._async_get_historic_ohlcv(
respair, restf, _, res, _ = await exchange._async_get_historic_ohlcv(
pair, "5m", 1500000000000, candle_type=candle_type, is_new_pair=False)
assert respair == pair
assert restf == '5m'
@ -2074,7 +2018,7 @@ async def test__async_get_historic_ohlcv(default_conf, mocker, caplog, exchange_
exchange._api_async.fetch_ohlcv.reset_mock()
end_ts = 1_500_500_000_000
start_ts = 1_500_000_000_000
respair, restf, _, res = await exchange._async_get_historic_ohlcv(
respair, restf, _, res, _ = await exchange._async_get_historic_ohlcv(
pair, "5m", since_ms=start_ts, candle_type=candle_type, is_new_pair=False,
until_ms=end_ts
)
@ -2306,7 +2250,7 @@ async def test__async_get_candle_history(default_conf, mocker, caplog, exchange_
pair = 'ETH/BTC'
res = await exchange._async_get_candle_history(pair, "5m", CandleType.SPOT)
assert type(res) is tuple
assert len(res) == 4
assert len(res) == 5
assert res[0] == pair
assert res[1] == "5m"
assert res[2] == CandleType.SPOT
@ -2393,7 +2337,7 @@ async def test__async_get_candle_history_empty(default_conf, mocker, caplog):
pair = 'ETH/BTC'
res = await exchange._async_get_candle_history(pair, "5m", CandleType.SPOT)
assert type(res) is tuple
assert len(res) == 4
assert len(res) == 5
assert res[0] == pair
assert res[1] == "5m"
assert res[2] == CandleType.SPOT
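A hedged note on the widened tuple: the mocks in these tests return True as the fifth element; it appears to flag how the incomplete candle was handled, so treat the name below as an assumption:
# Hypothetical unpacking of the new 5-element result:
pair_, timeframe_, candle_type_, data, drop_incomplete = res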
@ -4014,9 +3958,6 @@ def test_validate_trading_mode_and_margin_mode(
("binance", "spot", {}),
("binance", "margin", {"options": {"defaultType": "margin"}}),
("binance", "futures", {"options": {"defaultType": "future"}}),
("bibox", "spot", {"has": {"fetchCurrencies": False}}),
("bibox", "margin", {"has": {"fetchCurrencies": False}, "options": {"defaultType": "margin"}}),
("bibox", "futures", {"has": {"fetchCurrencies": False}, "options": {"defaultType": "swap"}}),
("bybit", "spot", {"options": {"defaultType": "spot"}}),
("bybit", "futures", {"options": {"defaultType": "linear"}}),
("gateio", "futures", {"options": {"defaultType": "swap"}}),

View File

@ -27,20 +27,23 @@ def is_mac() -> bool:
return "Darwin" in machine
@pytest.mark.parametrize('model, pca, dbscan, float32', [
('LightGBMRegressor', True, False, True),
('XGBoostRegressor', False, True, False),
('XGBoostRFRegressor', False, False, False),
('CatboostRegressor', False, False, False),
('ReinforcementLearner', False, True, False),
('ReinforcementLearner_multiproc', False, False, False),
('ReinforcementLearner_test_4ac', False, False, False)
@pytest.mark.parametrize('model, pca, dbscan, float32, can_short', [
('LightGBMRegressor', True, False, True, True),
('XGBoostRegressor', False, True, False, True),
('XGBoostRFRegressor', False, False, False, True),
('CatboostRegressor', False, False, False, True),
('ReinforcementLearner', False, True, False, True),
('ReinforcementLearner_multiproc', False, False, False, True),
('ReinforcementLearner_test_3ac', False, False, False, False),
('ReinforcementLearner_test_3ac', False, False, False, True),
('ReinforcementLearner_test_4ac', False, False, False, True)
])
def test_extract_data_and_train_model_Standard(mocker, freqai_conf, model, pca, dbscan, float32):
def test_extract_data_and_train_model_Standard(mocker, freqai_conf, model, pca,
dbscan, float32, can_short):
if is_arm() and model == 'CatboostRegressor':
pytest.skip("CatBoost is not supported on ARM")
if is_mac() and 'Reinforcement' in model:
if is_mac() and not is_arm() and 'Reinforcement' in model:
pytest.skip("Reinforcement learning module not available on intel based Mac OS")
model_save_ext = 'joblib'
@ -58,9 +61,6 @@ def test_extract_data_and_train_model_Standard(mocker, freqai_conf, model, pca,
freqai_conf['freqai']['feature_parameters'].update({"use_SVM_to_remove_outliers": True})
freqai_conf['freqai']['data_split_parameters'].update({'shuffle': True})
if 'test_4ac' in model:
freqai_conf["freqaimodel_path"] = str(Path(__file__).parents[1] / "freqai" / "test_models")
if 'ReinforcementLearner' in model:
model_save_ext = 'zip'
freqai_conf = make_rl_config(freqai_conf)
@ -68,7 +68,7 @@ def test_extract_data_and_train_model_Standard(mocker, freqai_conf, model, pca,
freqai_conf['freqai']['feature_parameters'].update({"use_SVM_to_remove_outliers": True})
freqai_conf['freqai']['data_split_parameters'].update({'shuffle': True})
if 'test_4ac' in model:
if 'test_3ac' in model or 'test_4ac' in model:
freqai_conf["freqaimodel_path"] = str(Path(__file__).parents[1] / "freqai" / "test_models")
strategy = get_patched_freqai_strategy(mocker, freqai_conf)
@ -77,6 +77,7 @@ def test_extract_data_and_train_model_Standard(mocker, freqai_conf, model, pca,
strategy.freqai_info = freqai_conf.get("freqai", {})
freqai = strategy.freqai
freqai.live = True
freqai.can_short = can_short
freqai.dk = FreqaiDataKitchen(freqai_conf)
freqai.dk.set_paths('ADA/BTC', 10000)
timerange = TimeRange.parse_timerange("20180110-20180130")

View File

@ -0,0 +1,65 @@
import logging
import numpy as np
from freqtrade.freqai.prediction_models.ReinforcementLearner import ReinforcementLearner
from freqtrade.freqai.RL.Base3ActionRLEnv import Actions, Base3ActionRLEnv, Positions
logger = logging.getLogger(__name__)
class ReinforcementLearner_test_3ac(ReinforcementLearner):
"""
User created Reinforcement Learning Model prediction model.
"""
class MyRLEnv(Base3ActionRLEnv):
"""
User can override any function in BaseRLEnv and gym.Env. Here the user
sets a custom reward based on profit and trade duration.
"""
def calculate_reward(self, action: int) -> float:
# first, penalize if the action is not valid
if not self._is_valid(action):
return -2
pnl = self.get_unrealized_profit()
rew = np.sign(pnl) * (pnl + 1)
factor = 100.
# reward agent for entering trades
if (action in (Actions.Buy.value, Actions.Sell.value)
and self._position == Positions.Neutral):
return 25
# discourage agent from not entering trades
if action == Actions.Neutral.value and self._position == Positions.Neutral:
return -1
max_trade_duration = self.rl_config.get('max_trade_duration_candles', 300)
trade_duration = self._current_tick - self._last_trade_tick # type: ignore
if trade_duration <= max_trade_duration:
factor *= 1.5
elif trade_duration > max_trade_duration:
factor *= 0.5
# discourage sitting in position
if self._position in (Positions.Short, Positions.Long) and (
action == Actions.Neutral.value
or (action == Actions.Sell.value and self._position == Positions.Short)
or (action == Actions.Buy.value and self._position == Positions.Long)
):
return -1 * trade_duration / max_trade_duration
# close position
if (action == Actions.Buy.value and self._position == Positions.Short) or (
action == Actions.Sell.value and self._position == Positions.Long
):
if pnl > self.profit_aim * self.rr:
factor *= self.rl_config["model_reward_parameters"].get("win_reward_factor", 2)
return float(rew * factor)
return 0.
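For reference, selecting this test model presumably mirrors the `--freqaimodel` pattern noted earlier, e.g. `freqtrade trade --freqaimodel ReinforcementLearner_test_3ac`, with `freqaimodel_path` pointed at the test_models directory as the test above does.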

View File

@ -1,5 +1,6 @@
import pytest
from freqtrade.exceptions import OperationalException
from freqtrade.leverage import interest
from freqtrade.util import FtPrecise
@ -29,3 +30,13 @@ def test_interest(exchange, interest_rate, hours, expected):
rate=FtPrecise(interest_rate),
hours=hours
))) == expected
def test_interest_exception():
with pytest.raises(OperationalException, match=r"Leverage not available on .* with freqtrade"):
interest(
exchange_name='bitmex',
borrowed=FtPrecise(60.0),
rate=FtPrecise(0.0005),
hours=ten_mins
)
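And for contrast, a hedged happy-path sketch (exchange name and values are illustrative; binance is assumed to be supported by this helper):
# Hypothetical: 60 units borrowed at a 0.05% hourly rate for one hour.
owed = interest(exchange_name='binance', borrowed=FtPrecise(60.0),
                rate=FtPrecise(0.0005), hours=1)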

View File

@ -710,6 +710,7 @@ def test_backtest_one(default_conf, fee, mocker, testdatadir) -> None:
expected = pd.DataFrame(
{'pair': [pair, pair],
'stake_amount': [0.001, 0.001],
'max_stake_amount': [0.001, 0.001],
'amount': [0.00957442, 0.0097064],
'open_date': pd.to_datetime([Arrow(2018, 1, 29, 18, 40, 0).datetime,
Arrow(2018, 1, 30, 3, 30, 0).datetime], utc=True

View File

@ -50,6 +50,7 @@ def test_backtest_position_adjustment(default_conf, fee, mocker, testdatadir) ->
expected = pd.DataFrame(
{'pair': [pair, pair],
'stake_amount': [500.0, 100.0],
'max_stake_amount': [500.0, 100],
'amount': [4806.87657523, 970.63960782],
'open_date': pd.to_datetime([Arrow(2018, 1, 29, 18, 40, 0).datetime,
Arrow(2018, 1, 30, 3, 30, 0).datetime], utc=True

View File

@ -308,7 +308,7 @@ def test_generate_pair_metrics():
def test_generate_daily_stats(testdatadir):
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
res = generate_daily_stats(bt_data)
assert isinstance(res, dict)
@ -328,7 +328,7 @@ def test_generate_daily_stats(testdatadir):
def test_generate_trading_stats(testdatadir):
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
res = generate_trading_stats(bt_data)
assert isinstance(res, dict)
@ -444,7 +444,7 @@ def test_generate_edge_table():
def test_generate_periodic_breakdown_stats(testdatadir):
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename).to_dict(orient='records')
res = generate_periodic_breakdown_stats(bt_data, 'day')
@ -472,7 +472,7 @@ def test__get_resample_from_period():
def test_show_sorted_pairlist(testdatadir, default_conf, capsys):
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_stats(filename)
default_conf['backtest_show_pair_list'] = True

View File

@ -0,0 +1,412 @@
# pragma pylint: disable=missing-docstring, C0103
import logging
from pathlib import Path
from unittest.mock import MagicMock
import pytest
from sqlalchemy import create_engine, text
from freqtrade.constants import DEFAULT_DB_PROD_URL
from freqtrade.enums import TradingMode
from freqtrade.exceptions import OperationalException
from freqtrade.persistence import Trade, init_db
from freqtrade.persistence.migrations import get_last_sequence_ids, set_sequence_ids
from freqtrade.persistence.models import PairLock
from tests.conftest import log_has
spot, margin, futures = TradingMode.SPOT, TradingMode.MARGIN, TradingMode.FUTURES
def test_init_create_session(default_conf):
# Check if init creates a session
init_db(default_conf['db_url'])
assert hasattr(Trade, '_session')
assert 'scoped_session' in type(Trade._session).__name__
def test_init_custom_db_url(default_conf, tmpdir):
# Update path to a value other than default, using a file-based database
filename = f"{tmpdir}/freqtrade2_test.sqlite"
assert not Path(filename).is_file()
default_conf.update({'db_url': f'sqlite:///{filename}'})
init_db(default_conf['db_url'])
assert Path(filename).is_file()
r = Trade._session.execute(text("PRAGMA journal_mode"))
assert r.first() == ('wal',)
def test_init_invalid_db_url():
# An unknown scheme and a malformed sqlite URL should both raise
with pytest.raises(OperationalException, match=r'.*no valid database URL*'):
init_db('unknown:///some.url')
with pytest.raises(OperationalException, match=r'Bad db-url.*For in-memory database, pl.*'):
init_db('sqlite:///')
def test_init_prod_db(default_conf, mocker):
default_conf.update({'dry_run': False})
default_conf.update({'db_url': DEFAULT_DB_PROD_URL})
create_engine_mock = mocker.patch('freqtrade.persistence.models.create_engine', MagicMock())
init_db(default_conf['db_url'])
assert create_engine_mock.call_count == 1
assert create_engine_mock.mock_calls[0][1][0] == 'sqlite:///tradesv3.sqlite'
def test_init_dryrun_db(default_conf, tmpdir):
filename = f"{tmpdir}/freqtrade2_prod.sqlite"
assert not Path(filename).is_file()
default_conf.update({
'dry_run': True,
'db_url': f'sqlite:///{filename}'
})
init_db(default_conf['db_url'])
assert Path(filename).is_file()
def test_migrate_new(mocker, default_conf, fee, caplog):
"""
Test Database migration (starting with new pairformat)
"""
caplog.set_level(logging.DEBUG)
amount = 103.223
# Always create all columns apart from the last!
create_table_old = """CREATE TABLE IF NOT EXISTS "trades" (
id INTEGER NOT NULL,
exchange VARCHAR NOT NULL,
pair VARCHAR NOT NULL,
is_open BOOLEAN NOT NULL,
fee FLOAT NOT NULL,
open_rate FLOAT,
close_rate FLOAT,
close_profit FLOAT,
stake_amount FLOAT NOT NULL,
amount FLOAT,
open_date DATETIME NOT NULL,
close_date DATETIME,
open_order_id VARCHAR,
stop_loss FLOAT,
initial_stop_loss FLOAT,
max_rate FLOAT,
sell_reason VARCHAR,
strategy VARCHAR,
ticker_interval INTEGER,
stoploss_order_id VARCHAR,
PRIMARY KEY (id),
CHECK (is_open IN (0, 1))
);"""
create_table_order = """CREATE TABLE orders (
id INTEGER NOT NULL,
ft_trade_id INTEGER,
ft_order_side VARCHAR(25) NOT NULL,
ft_pair VARCHAR(25) NOT NULL,
ft_is_open BOOLEAN NOT NULL,
order_id VARCHAR(255) NOT NULL,
status VARCHAR(255),
symbol VARCHAR(25),
order_type VARCHAR(50),
side VARCHAR(25),
price FLOAT,
amount FLOAT,
filled FLOAT,
remaining FLOAT,
cost FLOAT,
order_date DATETIME,
order_filled_date DATETIME,
order_update_date DATETIME,
PRIMARY KEY (id)
);"""
insert_table_old = """INSERT INTO trades (exchange, pair, is_open, fee,
open_rate, stake_amount, amount, open_date,
stop_loss, initial_stop_loss, max_rate, ticker_interval,
open_order_id, stoploss_order_id)
VALUES ('binance', 'ETC/BTC', 1, {fee},
0.00258580, {stake}, {amount},
'2019-11-28 12:44:24.000000',
0.0, 0.0, 0.0, '5m',
'buy_order', 'dry_stop_order_id222')
""".format(fee=fee.return_value,
stake=default_conf.get("stake_amount"),
amount=amount
)
insert_orders = f"""
insert into orders (
ft_trade_id,
ft_order_side,
ft_pair,
ft_is_open,
order_id,
status,
symbol,
order_type,
side,
price,
amount,
filled,
remaining,
cost)
values (
1,
'buy',
'ETC/BTC',
0,
'dry_buy_order',
'closed',
'ETC/BTC',
'limit',
'buy',
0.00258580,
{amount},
{amount},
0,
{amount * 0.00258580}
),
(
1,
'buy',
'ETC/BTC',
1,
'dry_buy_order22',
'canceled',
'ETC/BTC',
'limit',
'buy',
0.00258580,
{amount},
{amount},
0,
{amount * 0.00258580}
),
(
1,
'stoploss',
'ETC/BTC',
1,
'dry_stop_order_id11X',
'canceled',
'ETC/BTC',
'limit',
'sell',
0.00258580,
{amount},
{amount},
0,
{amount * 0.00258580}
),
(
1,
'stoploss',
'ETC/BTC',
1,
'dry_stop_order_id222',
'open',
'ETC/BTC',
'limit',
'sell',
0.00258580,
{amount},
{amount},
0,
{amount * 0.00258580}
)
"""
engine = create_engine('sqlite://')
mocker.patch('freqtrade.persistence.models.create_engine', lambda *args, **kwargs: engine)
# Create table using the old format
with engine.begin() as connection:
connection.execute(text(create_table_old))
connection.execute(text(create_table_order))
connection.execute(text("create index ix_trades_is_open on trades(is_open)"))
connection.execute(text("create index ix_trades_pair on trades(pair)"))
connection.execute(text(insert_table_old))
connection.execute(text(insert_orders))
# fake previous backup
connection.execute(text("create table trades_bak as select * from trades"))
connection.execute(text("create table trades_bak1 as select * from trades"))
# Run init to test migration
init_db(default_conf['db_url'])
assert len(Trade.query.filter(Trade.id == 1).all()) == 1
trade = Trade.query.filter(Trade.id == 1).first()
assert trade.fee_open == fee.return_value
assert trade.fee_close == fee.return_value
assert trade.open_rate_requested is None
assert trade.close_rate_requested is None
assert trade.is_open == 1
assert trade.amount == amount
assert trade.amount_requested == amount
assert trade.stake_amount == default_conf.get("stake_amount")
assert trade.pair == "ETC/BTC"
assert trade.exchange == "binance"
assert trade.max_rate == 0.0
assert trade.min_rate is None
assert trade.stop_loss == 0.0
assert trade.initial_stop_loss == 0.0
assert trade.exit_reason is None
assert trade.strategy is None
assert trade.timeframe == '5m'
assert trade.stoploss_order_id == 'dry_stop_order_id222'
assert trade.stoploss_last_update is None
assert log_has("trying trades_bak1", caplog)
assert log_has("trying trades_bak2", caplog)
assert log_has("Running database migration for trades - backup: trades_bak2, orders_bak0",
caplog)
assert log_has("Database migration finished.", caplog)
assert pytest.approx(trade.open_trade_value) == trade._calc_open_trade_value(
trade.amount, trade.open_rate)
assert trade.close_profit_abs is None
assert trade.stake_amount == trade.max_stake_amount
orders = trade.orders
assert len(orders) == 4
assert orders[0].order_id == 'dry_buy_order'
assert orders[0].ft_order_side == 'buy'
assert orders[-1].order_id == 'dry_stop_order_id222'
assert orders[-1].ft_order_side == 'stoploss'
assert orders[-1].ft_is_open is True
assert orders[1].order_id == 'dry_buy_order22'
assert orders[1].ft_order_side == 'buy'
assert orders[1].ft_is_open is False
assert orders[2].order_id == 'dry_stop_order_id11X'
assert orders[2].ft_order_side == 'stoploss'
assert orders[2].ft_is_open is False
def test_migrate_too_old(mocker, default_conf, fee, caplog):
"""
Test that a database too old to migrate raises OperationalException
"""
caplog.set_level(logging.DEBUG)
amount = 103.223
create_table_old = """CREATE TABLE IF NOT EXISTS "trades" (
id INTEGER NOT NULL,
exchange VARCHAR NOT NULL,
pair VARCHAR NOT NULL,
is_open BOOLEAN NOT NULL,
fee_open FLOAT NOT NULL,
fee_close FLOAT NOT NULL,
open_rate FLOAT,
close_rate FLOAT,
close_profit FLOAT,
stake_amount FLOAT NOT NULL,
amount FLOAT,
open_date DATETIME NOT NULL,
close_date DATETIME,
open_order_id VARCHAR,
PRIMARY KEY (id),
CHECK (is_open IN (0, 1))
);"""
insert_table_old = """INSERT INTO trades (exchange, pair, is_open, fee_open, fee_close,
open_rate, stake_amount, amount, open_date)
VALUES ('binance', 'ETC/BTC', 1, {fee}, {fee},
0.00258580, {stake}, {amount},
'2019-11-28 12:44:24.000000')
""".format(fee=fee.return_value,
stake=default_conf.get("stake_amount"),
amount=amount
)
engine = create_engine('sqlite://')
mocker.patch('freqtrade.persistence.models.create_engine', lambda *args, **kwargs: engine)
# Create table using the old format
with engine.begin() as connection:
connection.execute(text(create_table_old))
connection.execute(text(insert_table_old))
# Run init to test migration
with pytest.raises(OperationalException, match=r'Your database seems to be very old'):
init_db(default_conf['db_url'])
def test_migrate_get_last_sequence_ids():
engine = MagicMock()
engine.begin = MagicMock()
engine.name = 'postgresql'
get_last_sequence_ids(engine, 'trades_bak', 'orders_bak')
assert engine.begin.call_count == 2
engine.reset_mock()
engine.begin.reset_mock()
engine.name = 'somethingelse'
get_last_sequence_ids(engine, 'trades_bak', 'orders_bak')
assert engine.begin.call_count == 0
def test_migrate_set_sequence_ids():
engine = MagicMock()
engine.begin = MagicMock()
engine.name = 'postgresql'
set_sequence_ids(engine, 22, 55, 5)
assert engine.begin.call_count == 1
engine.reset_mock()
engine.begin.reset_mock()
engine.name = 'somethingelse'
set_sequence_ids(engine, 22, 55, 6)
assert engine.begin.call_count == 0
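A minimal sketch of the engine guard that the two sequence-id tests above pin down: sequence counters only exist on PostgreSQL, so both helpers are expected to do nothing for any other engine. The function body and SQL below are assumptions for illustration, not the actual freqtrade.persistence.migrations code:
from sqlalchemy import text

def set_sequence_ids(engine, trade_id, order_id, pairlock_id=None):
    # Only PostgreSQL uses sequences; skip entirely for sqlite, mysql, etc.
    if engine.name == 'postgresql':
        with engine.begin() as connection:
            # Sequence names and statements are illustrative assumptions.
            connection.execute(text(f"ALTER SEQUENCE trades_id_seq RESTART WITH {trade_id}"))
            connection.execute(text(f"ALTER SEQUENCE orders_id_seq RESTART WITH {order_id}"))
            if pairlock_id:
                connection.execute(text(
                    f"ALTER SEQUENCE pairlocks_id_seq RESTART WITH {pairlock_id}"))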
def test_migrate_pairlocks(mocker, default_conf, fee, caplog):
"""
Test Database migration of the pairlocks table (old schema without the 'side' column)
"""
caplog.set_level(logging.DEBUG)
# Always create all columns apart from the last!
create_table_old = """CREATE TABLE pairlocks (
id INTEGER NOT NULL,
pair VARCHAR(25) NOT NULL,
reason VARCHAR(255),
lock_time DATETIME NOT NULL,
lock_end_time DATETIME NOT NULL,
active BOOLEAN NOT NULL,
PRIMARY KEY (id)
)
"""
create_index1 = "CREATE INDEX ix_pairlocks_pair ON pairlocks (pair)"
create_index2 = "CREATE INDEX ix_pairlocks_lock_end_time ON pairlocks (lock_end_time)"
create_index3 = "CREATE INDEX ix_pairlocks_active ON pairlocks (active)"
insert_table_old = """INSERT INTO pairlocks (
id, pair, reason, lock_time, lock_end_time, active)
VALUES (1, 'ETH/BTC', 'Auto lock', '2021-07-12 18:41:03', '2021-07-11 18:45:00', 1)
"""
insert_table_old2 = """INSERT INTO pairlocks (
id, pair, reason, lock_time, lock_end_time, active)
VALUES (2, '*', 'Lock all', '2021-07-12 18:41:03', '2021-07-12 19:00:00', 1)
"""
engine = create_engine('sqlite://')
mocker.patch('freqtrade.persistence.models.create_engine', lambda *args, **kwargs: engine)
# Create table using the old format
with engine.begin() as connection:
connection.execute(text(create_table_old))
connection.execute(text(insert_table_old))
connection.execute(text(insert_table_old2))
connection.execute(text(create_index1))
connection.execute(text(create_index2))
connection.execute(text(create_index3))
init_db(default_conf['db_url'])
assert len(PairLock.query.all()) == 2
assert len(PairLock.query.filter(PairLock.pair == '*').all()) == 1
pairlocks = PairLock.query.filter(PairLock.pair == 'ETH/BTC').all()
assert len(pairlocks) == 1
assert pairlocks[0].pair == 'ETH/BTC'
assert pairlocks[0].side == '*'

View File

@ -1,78 +1,20 @@
# pragma pylint: disable=missing-docstring, C0103
import logging
from datetime import datetime, timedelta, timezone
from pathlib import Path
from types import FunctionType
from unittest.mock import MagicMock
import arrow
import pytest
from sqlalchemy import create_engine, text
from freqtrade.constants import DATETIME_PRINT_FORMAT, DEFAULT_DB_PROD_URL
from freqtrade.constants import DATETIME_PRINT_FORMAT
from freqtrade.enums import TradingMode
from freqtrade.exceptions import DependencyException, OperationalException
from freqtrade.exceptions import DependencyException
from freqtrade.persistence import LocalTrade, Order, Trade, init_db
from freqtrade.persistence.migrations import get_last_sequence_ids, set_sequence_ids
from freqtrade.persistence.models import PairLock
from tests.conftest import create_mock_trades, create_mock_trades_with_leverage, log_has, log_has_re
spot, margin, futures = TradingMode.SPOT, TradingMode.MARGIN, TradingMode.FUTURES
def test_init_create_session(default_conf):
# Check if init create a session
init_db(default_conf['db_url'])
assert hasattr(Trade, '_session')
assert 'scoped_session' in type(Trade._session).__name__
def test_init_custom_db_url(default_conf, tmpdir):
# Update path to a value other than default, but still in-memory
filename = f"{tmpdir}/freqtrade2_test.sqlite"
assert not Path(filename).is_file()
default_conf.update({'db_url': f'sqlite:///{filename}'})
init_db(default_conf['db_url'])
assert Path(filename).is_file()
r = Trade._session.execute(text("PRAGMA journal_mode"))
assert r.first() == ('wal',)
def test_init_invalid_db_url():
# Update path to a value other than default, but still in-memory
with pytest.raises(OperationalException, match=r'.*no valid database URL*'):
init_db('unknown:///some.url')
with pytest.raises(OperationalException, match=r'Bad db-url.*For in-memory database, pl.*'):
init_db('sqlite:///')
def test_init_prod_db(default_conf, mocker):
default_conf.update({'dry_run': False})
default_conf.update({'db_url': DEFAULT_DB_PROD_URL})
create_engine_mock = mocker.patch('freqtrade.persistence.models.create_engine', MagicMock())
init_db(default_conf['db_url'])
assert create_engine_mock.call_count == 1
assert create_engine_mock.mock_calls[0][1][0] == 'sqlite:///tradesv3.sqlite'
def test_init_dryrun_db(default_conf, tmpdir):
filename = f"{tmpdir}/freqtrade2_prod.sqlite"
assert not Path(filename).is_file()
default_conf.update({
'dry_run': True,
'db_url': f'sqlite:///{filename}'
})
init_db(default_conf['db_url'])
assert Path(filename).is_file()
@pytest.mark.parametrize('is_short', [False, True])
@pytest.mark.usefixtures("init_persistence")
def test_enter_exit_side(fee, is_short):
@ -316,8 +258,7 @@ def test_interest(fee, exchange, is_short, lev, minutes, rate, interest,
(True, 3.0, 30.0, margin),
])
@pytest.mark.usefixtures("init_persistence")
def test_borrowed(limit_buy_order_usdt, limit_sell_order_usdt, fee,
caplog, is_short, lev, borrowed, trading_mode):
def test_borrowed(fee, is_short, lev, borrowed, trading_mode):
"""
10 minute limit trade on Binance/Kraken at 1x, 3x leverage
fee: 0.25% quote
@ -1204,347 +1145,6 @@ def test_calc_profit(
trade.open_rate)) == round(profit_ratio, 8)
def test_migrate_new(mocker, default_conf, fee, caplog):
"""
Test Database migration (starting with new pairformat)
"""
caplog.set_level(logging.DEBUG)
amount = 103.223
# Always create all columns apart from the last!
create_table_old = """CREATE TABLE IF NOT EXISTS "trades" (
id INTEGER NOT NULL,
exchange VARCHAR NOT NULL,
pair VARCHAR NOT NULL,
is_open BOOLEAN NOT NULL,
fee FLOAT NOT NULL,
open_rate FLOAT,
close_rate FLOAT,
close_profit FLOAT,
stake_amount FLOAT NOT NULL,
amount FLOAT,
open_date DATETIME NOT NULL,
close_date DATETIME,
open_order_id VARCHAR,
stop_loss FLOAT,
initial_stop_loss FLOAT,
max_rate FLOAT,
sell_reason VARCHAR,
strategy VARCHAR,
ticker_interval INTEGER,
stoploss_order_id VARCHAR,
PRIMARY KEY (id),
CHECK (is_open IN (0, 1))
);"""
create_table_order = """CREATE TABLE orders (
id INTEGER NOT NULL,
ft_trade_id INTEGER,
ft_order_side VARCHAR(25) NOT NULL,
ft_pair VARCHAR(25) NOT NULL,
ft_is_open BOOLEAN NOT NULL,
order_id VARCHAR(255) NOT NULL,
status VARCHAR(255),
symbol VARCHAR(25),
order_type VARCHAR(50),
side VARCHAR(25),
price FLOAT,
amount FLOAT,
filled FLOAT,
remaining FLOAT,
cost FLOAT,
order_date DATETIME,
order_filled_date DATETIME,
order_update_date DATETIME,
PRIMARY KEY (id)
);"""
insert_table_old = """INSERT INTO trades (exchange, pair, is_open, fee,
open_rate, stake_amount, amount, open_date,
stop_loss, initial_stop_loss, max_rate, ticker_interval,
open_order_id, stoploss_order_id)
VALUES ('binance', 'ETC/BTC', 1, {fee},
0.00258580, {stake}, {amount},
'2019-11-28 12:44:24.000000',
0.0, 0.0, 0.0, '5m',
'buy_order', 'dry_stop_order_id222')
""".format(fee=fee.return_value,
stake=default_conf.get("stake_amount"),
amount=amount
)
insert_orders = f"""
insert into orders (
ft_trade_id,
ft_order_side,
ft_pair,
ft_is_open,
order_id,
status,
symbol,
order_type,
side,
price,
amount,
filled,
remaining,
cost)
values (
1,
'buy',
'ETC/BTC',
0,
'dry_buy_order',
'closed',
'ETC/BTC',
'limit',
'buy',
0.00258580,
{amount},
{amount},
0,
{amount * 0.00258580}
),
(
1,
'buy',
'ETC/BTC',
1,
'dry_buy_order22',
'canceled',
'ETC/BTC',
'limit',
'buy',
0.00258580,
{amount},
{amount},
0,
{amount * 0.00258580}
),
(
1,
'stoploss',
'ETC/BTC',
1,
'dry_stop_order_id11X',
'canceled',
'ETC/BTC',
'limit',
'sell',
0.00258580,
{amount},
{amount},
0,
{amount * 0.00258580}
),
(
1,
'stoploss',
'ETC/BTC',
1,
'dry_stop_order_id222',
'open',
'ETC/BTC',
'limit',
'sell',
0.00258580,
{amount},
{amount},
0,
{amount * 0.00258580}
)
"""
engine = create_engine('sqlite://')
mocker.patch('freqtrade.persistence.models.create_engine', lambda *args, **kwargs: engine)
# Create table using the old format
with engine.begin() as connection:
connection.execute(text(create_table_old))
connection.execute(text(create_table_order))
connection.execute(text("create index ix_trades_is_open on trades(is_open)"))
connection.execute(text("create index ix_trades_pair on trades(pair)"))
connection.execute(text(insert_table_old))
connection.execute(text(insert_orders))
# fake previous backup
connection.execute(text("create table trades_bak as select * from trades"))
connection.execute(text("create table trades_bak1 as select * from trades"))
# Run init to test migration
init_db(default_conf['db_url'])
assert len(Trade.query.filter(Trade.id == 1).all()) == 1
trade = Trade.query.filter(Trade.id == 1).first()
assert trade.fee_open == fee.return_value
assert trade.fee_close == fee.return_value
assert trade.open_rate_requested is None
assert trade.close_rate_requested is None
assert trade.is_open == 1
assert trade.amount == amount
assert trade.amount_requested == amount
assert trade.stake_amount == default_conf.get("stake_amount")
assert trade.pair == "ETC/BTC"
assert trade.exchange == "binance"
assert trade.max_rate == 0.0
assert trade.min_rate is None
assert trade.stop_loss == 0.0
assert trade.initial_stop_loss == 0.0
assert trade.exit_reason is None
assert trade.strategy is None
assert trade.timeframe == '5m'
assert trade.stoploss_order_id == 'dry_stop_order_id222'
assert trade.stoploss_last_update is None
assert log_has("trying trades_bak1", caplog)
assert log_has("trying trades_bak2", caplog)
assert log_has("Running database migration for trades - backup: trades_bak2, orders_bak0",
caplog)
assert log_has("Database migration finished.", caplog)
assert pytest.approx(trade.open_trade_value) == trade._calc_open_trade_value(
trade.amount, trade.open_rate)
assert trade.close_profit_abs is None
orders = trade.orders
assert len(orders) == 4
assert orders[0].order_id == 'dry_buy_order'
assert orders[0].ft_order_side == 'buy'
assert orders[-1].order_id == 'dry_stop_order_id222'
assert orders[-1].ft_order_side == 'stoploss'
assert orders[-1].ft_is_open is True
assert orders[1].order_id == 'dry_buy_order22'
assert orders[1].ft_order_side == 'buy'
assert orders[1].ft_is_open is False
assert orders[2].order_id == 'dry_stop_order_id11X'
assert orders[2].ft_order_side == 'stoploss'
assert orders[2].ft_is_open is False
def test_migrate_too_old(mocker, default_conf, fee, caplog):
"""
Test Database migration (starting with new pairformat)
"""
caplog.set_level(logging.DEBUG)
amount = 103.223
create_table_old = """CREATE TABLE IF NOT EXISTS "trades" (
id INTEGER NOT NULL,
exchange VARCHAR NOT NULL,
pair VARCHAR NOT NULL,
is_open BOOLEAN NOT NULL,
fee_open FLOAT NOT NULL,
fee_close FLOAT NOT NULL,
open_rate FLOAT,
close_rate FLOAT,
close_profit FLOAT,
stake_amount FLOAT NOT NULL,
amount FLOAT,
open_date DATETIME NOT NULL,
close_date DATETIME,
open_order_id VARCHAR,
PRIMARY KEY (id),
CHECK (is_open IN (0, 1))
);"""
insert_table_old = """INSERT INTO trades (exchange, pair, is_open, fee_open, fee_close,
open_rate, stake_amount, amount, open_date)
VALUES ('binance', 'ETC/BTC', 1, {fee}, {fee},
0.00258580, {stake}, {amount},
'2019-11-28 12:44:24.000000')
""".format(fee=fee.return_value,
stake=default_conf.get("stake_amount"),
amount=amount
)
engine = create_engine('sqlite://')
mocker.patch('freqtrade.persistence.models.create_engine', lambda *args, **kwargs: engine)
# Create table using the old format
with engine.begin() as connection:
connection.execute(text(create_table_old))
connection.execute(text(insert_table_old))
# Run init to test migration
with pytest.raises(OperationalException, match=r'Your database seems to be very old'):
init_db(default_conf['db_url'])
def test_migrate_get_last_sequence_ids():
engine = MagicMock()
engine.begin = MagicMock()
engine.name = 'postgresql'
get_last_sequence_ids(engine, 'trades_bak', 'orders_bak')
assert engine.begin.call_count == 2
engine.reset_mock()
engine.begin.reset_mock()
engine.name = 'somethingelse'
get_last_sequence_ids(engine, 'trades_bak', 'orders_bak')
assert engine.begin.call_count == 0
def test_migrate_set_sequence_ids():
engine = MagicMock()
engine.begin = MagicMock()
engine.name = 'postgresql'
set_sequence_ids(engine, 22, 55, 5)
assert engine.begin.call_count == 1
engine.reset_mock()
engine.begin.reset_mock()
engine.name = 'somethingelse'
set_sequence_ids(engine, 22, 55, 6)
assert engine.begin.call_count == 0
def test_migrate_pairlocks(mocker, default_conf, fee, caplog):
"""
Test Database migration (starting with new pairformat)
"""
caplog.set_level(logging.DEBUG)
# Always create all columns apart from the last!
create_table_old = """CREATE TABLE pairlocks (
id INTEGER NOT NULL,
pair VARCHAR(25) NOT NULL,
reason VARCHAR(255),
lock_time DATETIME NOT NULL,
lock_end_time DATETIME NOT NULL,
active BOOLEAN NOT NULL,
PRIMARY KEY (id)
)
"""
create_index1 = "CREATE INDEX ix_pairlocks_pair ON pairlocks (pair)"
create_index2 = "CREATE INDEX ix_pairlocks_lock_end_time ON pairlocks (lock_end_time)"
create_index3 = "CREATE INDEX ix_pairlocks_active ON pairlocks (active)"
insert_table_old = """INSERT INTO pairlocks (
id, pair, reason, lock_time, lock_end_time, active)
VALUES (1, 'ETH/BTC', 'Auto lock', '2021-07-12 18:41:03', '2021-07-11 18:45:00', 1)
"""
insert_table_old2 = """INSERT INTO pairlocks (
id, pair, reason, lock_time, lock_end_time, active)
VALUES (2, '*', 'Lock all', '2021-07-12 18:41:03', '2021-07-12 19:00:00', 1)
"""
engine = create_engine('sqlite://')
mocker.patch('freqtrade.persistence.models.create_engine', lambda *args, **kwargs: engine)
# Create table using the old format
with engine.begin() as connection:
connection.execute(text(create_table_old))
connection.execute(text(insert_table_old))
connection.execute(text(insert_table_old2))
connection.execute(text(create_index1))
connection.execute(text(create_index2))
connection.execute(text(create_index3))
init_db(default_conf['db_url'])
assert len(PairLock.query.all()) == 2
assert len(PairLock.query.filter(PairLock.pair == '*').all()) == 1
pairlocks = PairLock.query.filter(PairLock.pair == 'ETH/BTC').all()
assert len(pairlocks) == 1
pairlocks[0].pair == 'ETH/BTC'
pairlocks[0].side == '*'
def test_adjust_stop_loss(fee):
trade = Trade(
pair='ADA/USDT',
@ -1758,6 +1358,7 @@ def test_to_json(fee):
'amount': 123.0,
'amount_requested': 123.0,
'stake_amount': 0.001,
'max_stake_amount': None,
'trade_duration': None,
'trade_duration_s': None,
'realized_profit': 0.0,
@ -1767,7 +1368,6 @@ def test_to_json(fee):
'profit_ratio': None,
'profit_pct': None,
'profit_abs': None,
'sell_reason': None,
'exit_reason': None,
'exit_order_status': None,
'stop_loss_abs': None,
@ -1782,7 +1382,6 @@ def test_to_json(fee):
'min_rate': None,
'max_rate': None,
'strategy': None,
'buy_tag': None,
'enter_tag': None,
'timeframe': None,
'exchange': 'binance',
@ -1826,6 +1425,7 @@ def test_to_json(fee):
'amount': 100.0,
'amount_requested': 101.0,
'stake_amount': 0.001,
'max_stake_amount': None,
'trade_duration': 60,
'trade_duration_s': 3600,
'stop_loss_abs': None,
@ -1857,11 +1457,9 @@ def test_to_json(fee):
'open_order_id': None,
'open_rate_requested': None,
'open_trade_value': 12.33075,
'sell_reason': None,
'exit_reason': None,
'exit_order_status': None,
'strategy': None,
'buy_tag': 'buys_signal_001',
'enter_tag': 'buys_signal_001',
'timeframe': None,
'exchange': 'binance',

View File

@ -22,6 +22,11 @@ from tests.conftest import (create_mock_trades_usdt, get_patched_exchange, get_p
log_has, log_has_re, num_log_has)
# Exclude RemotePairList from tests.
# It has a mandatory parameter, and requires special handling, which happens in test_remotepairlist.
TESTABLE_PAIRLISTS = [p for p in AVAILABLE_PAIRLISTS if p not in ['RemotePairList']]
@pytest.fixture(scope="function")
def whitelist_conf(default_conf):
default_conf['stake_currency'] = 'BTC'
@ -824,7 +829,7 @@ def test_pair_whitelist_not_supported_Spread(mocker, default_conf, tickers) -> N
get_patched_freqtradebot(mocker, default_conf)
@pytest.mark.parametrize("pairlist", AVAILABLE_PAIRLISTS)
@pytest.mark.parametrize("pairlist", TESTABLE_PAIRLISTS)
def test_pairlist_class(mocker, whitelist_conf, markets, pairlist):
whitelist_conf['pairlists'][0]['method'] = pairlist
mocker.patch.multiple('freqtrade.exchange.Exchange',
@ -839,7 +844,7 @@ def test_pairlist_class(mocker, whitelist_conf, markets, pairlist):
assert isinstance(freqtrade.pairlists.blacklist, list)
@pytest.mark.parametrize("pairlist", AVAILABLE_PAIRLISTS)
@pytest.mark.parametrize("pairlist", TESTABLE_PAIRLISTS)
@pytest.mark.parametrize("whitelist,log_message", [
(['ETH/BTC', 'TKN/BTC'], ""),
# TRX/ETH not in markets
@ -872,7 +877,7 @@ def test__whitelist_for_active_markets(mocker, whitelist_conf, markets, pairlist
assert log_message in caplog.text
@pytest.mark.parametrize("pairlist", AVAILABLE_PAIRLISTS)
@pytest.mark.parametrize("pairlist", TESTABLE_PAIRLISTS)
def test__whitelist_for_active_markets_empty(mocker, whitelist_conf, pairlist, tickers):
whitelist_conf['pairlists'][0]['method'] = pairlist

View File

@ -0,0 +1,185 @@
import json
from unittest.mock import MagicMock
import pytest
import requests
from freqtrade.exceptions import OperationalException
from freqtrade.plugins.pairlist.RemotePairList import RemotePairList
from freqtrade.plugins.pairlistmanager import PairListManager
from tests.conftest import get_patched_exchange, get_patched_freqtradebot, log_has
@pytest.fixture(scope="function")
def rpl_config(default_conf):
default_conf['stake_currency'] = 'USDT'
default_conf['exchange']['pair_whitelist'] = [
'ETH/USDT',
'BTC/USDT',
]
default_conf['exchange']['pair_blacklist'] = [
'BLK/USDT'
]
return default_conf
def test_gen_pairlist_with_local_file(mocker, rpl_config):
mock_file = MagicMock()
mock_file.read.return_value = '{"pairs": ["TKN/USDT","ETH/USDT","NANO/USDT"]}'
mocker.patch('freqtrade.plugins.pairlist.RemotePairList.open', return_value=mock_file)
mock_file_path = mocker.patch('freqtrade.plugins.pairlist.RemotePairList.Path')
mock_file_path.exists.return_value = True
jsonparse = json.loads(mock_file.read.return_value)
mocker.patch('freqtrade.plugins.pairlist.RemotePairList.json.load', return_value=jsonparse)
rpl_config['pairlists'] = [
{
"method": "RemotePairList",
'number_assets': 2,
'refresh_period': 1800,
'keep_pairlist_on_failure': True,
'pairlist_url': 'file:///pairlist.json',
'bearer_token': '',
'read_timeout': 60
}
]
exchange = get_patched_exchange(mocker, rpl_config)
pairlistmanager = PairListManager(exchange, rpl_config)
remote_pairlist = RemotePairList(exchange, pairlistmanager, rpl_config,
rpl_config['pairlists'][0], 0)
result = remote_pairlist.gen_pairlist([])
assert result == ['TKN/USDT', 'ETH/USDT']
def test_fetch_pairlist_mock_response_html(mocker, rpl_config):
mock_response = MagicMock()
mock_response.headers = {'content-type': 'text/html'}
rpl_config['pairlists'] = [
{
"method": "RemotePairList",
"pairlist_url": "http://example.com/pairlist",
"number_assets": 10,
"read_timeout": 10,
"keep_pairlist_on_failure": True,
}
]
exchange = get_patched_exchange(mocker, rpl_config)
pairlistmanager = PairListManager(exchange, rpl_config)
mocker.patch("freqtrade.plugins.pairlist.RemotePairList.requests.get",
return_value=mock_response)
remote_pairlist = RemotePairList(exchange, pairlistmanager, rpl_config,
rpl_config['pairlists'][0], 0)
with pytest.raises(OperationalException, match='RemotePairList is not of type JSON, abort.'):
remote_pairlist.fetch_pairlist()
def test_fetch_pairlist_timeout_keep_last_pairlist(mocker, rpl_config, caplog):
rpl_config['pairlists'] = [
{
"method": "RemotePairList",
"pairlist_url": "http://example.com/pairlist",
"number_assets": 10,
"read_timeout": 10,
"keep_pairlist_on_failure": True,
}
]
exchange = get_patched_exchange(mocker, rpl_config)
pairlistmanager = PairListManager(exchange, rpl_config)
mocker.patch("freqtrade.plugins.pairlist.RemotePairList.requests.get",
side_effect=requests.exceptions.RequestException)
remote_pairlist = RemotePairList(exchange, pairlistmanager, rpl_config,
rpl_config['pairlists'][0], 0)
remote_pairlist._last_pairlist = ["BTC/USDT", "ETH/USDT", "LTC/USDT"]
pairs, time_elapsed = remote_pairlist.fetch_pairlist()
assert log_has(f"Was not able to fetch pairlist from: {remote_pairlist._pairlist_url}", caplog)
assert log_has("Keeping last fetched pairlist", caplog)
assert pairs == ["BTC/USDT", "ETH/USDT", "LTC/USDT"]
def test_remote_pairlist_init_no_pairlist_url(mocker, rpl_config):
rpl_config['pairlists'] = [
{
"method": "RemotePairList",
"number_assets": 10,
"keep_pairlist_on_failure": True,
}
]
get_patched_exchange(mocker, rpl_config)
with pytest.raises(OperationalException, match=r'`pairlist_url` not specified.'
r' Please check your configuration for "pairlist.config.pairlist_url"'):
get_patched_freqtradebot(mocker, rpl_config)
def test_remote_pairlist_init_no_number_assets(mocker, rpl_config):
rpl_config['pairlists'] = [
{
"method": "RemotePairList",
"pairlist_url": "http://example.com/pairlist",
"keep_pairlist_on_failure": True,
}
]
get_patched_exchange(mocker, rpl_config)
with pytest.raises(OperationalException, match=r'`number_assets` not specified. '
'Please check your configuration for "pairlist.config.number_assets"'):
get_patched_freqtradebot(mocker, rpl_config)
def test_fetch_pairlist_mock_response_valid(mocker, rpl_config):
rpl_config['pairlists'] = [
{
"method": "RemotePairList",
"pairlist_url": "http://example.com/pairlist",
"number_assets": 10,
"refresh_period": 10,
"read_timeout": 10,
"keep_pairlist_on_failure": True,
}
]
mock_response = MagicMock()
mock_response.json.return_value = {
"pairs": ["ETH/USDT", "XRP/USDT", "LTC/USDT", "EOS/USDT"],
"refresh_period": 60
}
mock_response.headers = {
"content-type": "application/json"
}
mock_response.elapsed.total_seconds.return_value = 0.4
mocker.patch("freqtrade.plugins.pairlist.RemotePairList.requests.get",
return_value=mock_response)
exchange = get_patched_exchange(mocker, rpl_config)
pairlistmanager = PairListManager(exchange, rpl_config)
remote_pairlist = RemotePairList(exchange, pairlistmanager, rpl_config,
rpl_config['pairlists'][0], 0)
pairs, time_elapsed = remote_pairlist.fetch_pairlist()
assert pairs == ["ETH/USDT", "XRP/USDT", "LTC/USDT", "EOS/USDT"]
assert time_elapsed == 0.4
assert remote_pairlist._refresh_period == 60
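Taken together, the mocks above imply the contract a pairlist source has to satisfy (an inference from this test file, not plugin documentation): remote URLs must answer with content-type application/json, local file:// sources are read as JSON, and the payload looks roughly like this, where refresh_period is optional:
# Illustrative payload, assembled from the mocked responses above.
pairlist_payload = {
    "pairs": ["ETH/USDT", "XRP/USDT", "LTC/USDT", "EOS/USDT"],
    "refresh_period": 60,  # optional; the test asserts it overrides the configured 10s
}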

View File

@ -46,13 +46,11 @@ def test_rpc_trade_status(default_conf, ticker, fee, mocker) -> None:
'open_rate_requested': ANY,
'open_trade_value': 0.0010025,
'close_rate_requested': ANY,
'sell_reason': ANY,
'exit_reason': ANY,
'exit_order_status': ANY,
'min_rate': ANY,
'max_rate': ANY,
'strategy': ANY,
'buy_tag': ANY,
'enter_tag': ANY,
'timeframe': 5,
'open_order_id': ANY,
@ -64,6 +62,7 @@ def test_rpc_trade_status(default_conf, ticker, fee, mocker) -> None:
'amount': 91.07468123,
'amount_requested': 91.07468124,
'stake_amount': 0.001,
'max_stake_amount': ANY,
'trade_duration': None,
'trade_duration_s': None,
'close_profit': None,

View File

@ -985,6 +985,7 @@ def test_api_status(botclient, mocker, ticker, fee, markets, is_short,
'base_currency': 'ETH',
'quote_currency': 'BTC',
'stake_amount': 0.001,
'max_stake_amount': ANY,
'stop_loss_abs': ANY,
'stop_loss_pct': ANY,
'stop_loss_ratio': ANY,
@ -1014,11 +1015,9 @@ def test_api_status(botclient, mocker, ticker, fee, markets, is_short,
'open_order_id': open_order_id,
'open_rate_requested': ANY,
'open_trade_value': open_trade_value,
'sell_reason': None,
'exit_reason': None,
'exit_order_status': None,
'strategy': CURRENT_TEST_STRATEGY,
'buy_tag': None,
'enter_tag': None,
'timeframe': 5,
'exchange': 'binance',
@ -1188,6 +1187,7 @@ def test_api_force_entry(botclient, mocker, fee, endpoint):
'base_currency': 'ETH',
'quote_currency': 'BTC',
'stake_amount': 1,
'max_stake_amount': ANY,
'stop_loss_abs': None,
'stop_loss_pct': None,
'stop_loss_ratio': None,
@ -1218,11 +1218,9 @@ def test_api_force_entry(botclient, mocker, fee, endpoint):
'open_order_id': '123456',
'open_rate_requested': None,
'open_trade_value': 0.24605460,
'sell_reason': None,
'exit_reason': None,
'exit_order_status': None,
'strategy': CURRENT_TEST_STRATEGY,
'buy_tag': None,
'enter_tag': None,
'timeframe': 5,
'exchange': 'binance',
@ -1488,6 +1486,44 @@ def test_api_strategy(botclient):
assert_response(rc, 500)
def test_api_freqaimodels(botclient, tmpdir, mocker):
ftbot, client = botclient
ftbot.config['user_data_dir'] = Path(tmpdir)
mocker.patch(
"freqtrade.resolvers.freqaimodel_resolver.FreqaiModelResolver.search_all_objects",
return_value=[
{'name': 'LightGBMClassifier'},
{'name': 'LightGBMClassifierMultiTarget'},
{'name': 'LightGBMRegressor'},
{'name': 'LightGBMRegressorMultiTarget'},
{'name': 'ReinforcementLearner'},
{'name': 'ReinforcementLearner_multiproc'},
{'name': 'XGBoostClassifier'},
{'name': 'XGBoostRFClassifier'},
{'name': 'XGBoostRFRegressor'},
{'name': 'XGBoostRegressor'},
{'name': 'XGBoostRegressorMultiTarget'},
])
rc = client_get(client, f"{BASE_URI}/freqaimodels")
assert_response(rc)
assert rc.json() == {'freqaimodels': [
'LightGBMClassifier',
'LightGBMClassifierMultiTarget',
'LightGBMRegressor',
'LightGBMRegressorMultiTarget',
'ReinforcementLearner',
'ReinforcementLearner_multiproc',
'XGBoostClassifier',
'XGBoostRFClassifier',
'XGBoostRFRegressor',
'XGBoostRegressor',
'XGBoostRegressorMultiTarget'
]}
def test_list_available_pairs(botclient):
ftbot, client = botclient
@ -1671,7 +1707,7 @@ def test_api_backtest_history(botclient, mocker, testdatadir):
mocker.patch('freqtrade.data.btanalysis._get_backtest_files',
return_value=[
testdatadir / 'backtest_results/backtest-result_multistrat.json',
testdatadir / 'backtest_results/backtest-result_new.json'
testdatadir / 'backtest_results/backtest-result.json'
])
rc = client_get(client, f"{BASE_URI}/backtest/history")

View File

@ -83,6 +83,7 @@ def test_emc_init(patched_emc):
def test_emc_handle_producer_message(patched_emc, caplog, ohlcv_history):
test_producer = {"name": "test", "url": "ws://test", "ws_token": "test"}
producer_name = test_producer['name']
invalid_msg = r"Invalid message .+"
caplog.set_level(logging.DEBUG)
@ -94,7 +95,7 @@ def test_emc_handle_producer_message(patched_emc, caplog, ohlcv_history):
assert log_has(
f"Consumed message from `{producer_name}` of type `RPCMessageType.WHITELIST`", caplog)
# Test handle analyzed_df message
# Test handle analyzed_df single candle message
df_message = {
"type": "analyzed_df",
"data": {
@ -106,8 +107,7 @@ def test_emc_handle_producer_message(patched_emc, caplog, ohlcv_history):
patched_emc.handle_producer_message(test_producer, df_message)
assert log_has(f"Received message of type `analyzed_df` from `{producer_name}`", caplog)
assert log_has(
f"Consumed message from `{producer_name}` of type `RPCMessageType.ANALYZED_DF`", caplog)
assert log_has_re(r"Holes in data or no existing df,.+", caplog)
# Test unhandled message
unhandled_message = {"type": "status", "data": "RUNNING"}
@ -120,7 +120,8 @@ def test_emc_handle_producer_message(patched_emc, caplog, ohlcv_history):
malformed_message = {"type": "whitelist", "data": {"pair": "BTC/USDT"}}
patched_emc.handle_producer_message(test_producer, malformed_message)
assert log_has_re(r"Invalid message .+", caplog)
assert log_has_re(invalid_msg, caplog)
caplog.clear()
malformed_message = {
"type": "analyzed_df",
@ -133,13 +134,30 @@ def test_emc_handle_producer_message(patched_emc, caplog, ohlcv_history):
patched_emc.handle_producer_message(test_producer, malformed_message)
assert log_has(f"Received message of type `analyzed_df` from `{producer_name}`", caplog)
assert log_has_re(r"Invalid message .+", caplog)
assert log_has_re(invalid_msg, caplog)
caplog.clear()
# Empty dataframe
malformed_message = {
"type": "analyzed_df",
"data": {
"key": ("BTC/USDT", "5m", "spot"),
"df": ohlcv_history.loc[ohlcv_history['open'] < 0],
"la": datetime.now(timezone.utc)
}
}
patched_emc.handle_producer_message(test_producer, malformed_message)
assert log_has(f"Received message of type `analyzed_df` from `{producer_name}`", caplog)
assert not log_has_re(invalid_msg, caplog)
assert log_has_re(r"Received Empty Dataframe for.+", caplog)
caplog.clear()
malformed_message = {"some": "stuff"}
patched_emc.handle_producer_message(test_producer, malformed_message)
assert log_has_re(r"Invalid message .+", caplog)
assert log_has_re(invalid_msg, caplog)
caplog.clear()
caplog.clear()
malformed_message = {"type": "whitelist", "data": None}
@ -183,7 +201,7 @@ async def test_emc_create_connection_success(default_conf, caplog, mocker):
async with websockets.serve(eat, _TEST_WS_HOST, _TEST_WS_PORT):
await emc._create_connection(test_producer, lock)
assert log_has_re(r"Producer connection success.+", caplog)
assert log_has_re(r"Connected to channel.+", caplog)
finally:
emc.shutdown()
@ -212,7 +230,8 @@ async def test_emc_create_connection_invalid_url(default_conf, caplog, mocker, h
dp = DataProvider(default_conf, None, None, None)
# Handle start explicitly to avoid messing with threading in tests
mocker.patch("freqtrade.rpc.external_message_consumer.ExternalMessageConsumer.start",)
mocker.patch("freqtrade.rpc.external_message_consumer.ExternalMessageConsumer.start")
mocker.patch("freqtrade.rpc.api_server.ws.channel.create_channel")
emc = ExternalMessageConsumer(default_conf, dp)
try:
@ -248,7 +267,7 @@ async def test_emc_create_connection_error(default_conf, caplog, mocker):
emc = ExternalMessageConsumer(default_conf, dp)
try:
await asyncio.sleep(0.01)
await asyncio.sleep(0.05)
assert log_has("Unexpected error has occurred:", caplog)
finally:
emc.shutdown()
@ -390,6 +409,8 @@ async def test_emc_receive_messages_timeout(default_conf, caplog, mocker):
try:
change_running(emc)
loop.call_soon(functools.partial(change_running, emc=emc))
with pytest.raises(asyncio.TimeoutError):
await emc._receive_messages(TestChannel(), test_producer, lock)
assert log_has_re(r"Ping error.+", caplog)

View File

@ -12,6 +12,7 @@ from unittest.mock import ANY, MagicMock
import arrow
import pytest
import time_machine
from pandas import DataFrame
from telegram import Chat, Message, ReplyKeyboardMarkup, Update
from telegram.error import BadRequest, NetworkError, TelegramError
@ -1906,6 +1907,7 @@ def test_send_msg_entry_fill_notification(default_conf, mocker, message_type, en
def test_send_msg_sell_notification(default_conf, mocker) -> None:
with time_machine.travel("2022-09-01 05:00:00 +00:00", tick=False):
telegram, _, msg_mock = get_telegram_testobject(mocker, default_conf)
old_convamount = telegram._rpc._fiat_converter.convert_amount
@ -2065,6 +2067,7 @@ def test_send_msg_sell_fill_notification(default_conf, mocker, direction,
default_conf['telegram']['notification_settings']['exit_fill'] = 'on'
telegram, _, msg_mock = get_telegram_testobject(mocker, default_conf)
with time_machine.travel("2022-09-01 05:00:00 +00:00", tick=False):
telegram.send_msg({
'type': RPCMessageType.EXIT_FILL,
'trade_id': 1,

View File

@ -1046,8 +1046,13 @@ def test__validate_freqai_include_timeframes(default_conf, caplog) -> None:
# Validation pass
conf.update({'timeframe': '1m'})
validate_config_consistency(conf)
conf.update({'analyze_per_epoch': True})
# Ensure base timeframe is in include_timeframes
conf['freqai']['feature_parameters']['include_timeframes'] = ["5m", "15m"]
validate_config_consistency(conf)
assert conf['freqai']['feature_parameters']['include_timeframes'] == ["1m", "5m", "15m"]
conf.update({'analyze_per_epoch': True})
with pytest.raises(OperationalException,
match=r"Using analyze-per-epoch .* not supported with a FreqAI strategy."):
validate_config_consistency(conf)

View File

@ -88,6 +88,18 @@ def test_bot_cleanup(mocker, default_conf_usdt, caplog) -> None:
assert coo_mock.call_count == 1
def test_bot_cleanup_db_errors(mocker, default_conf_usdt, caplog) -> None:
mocker.patch('freqtrade.freqtradebot.Trade.commit',
side_effect=OperationalException())
mocker.patch('freqtrade.freqtradebot.FreqtradeBot.check_for_open_trades',
side_effect=OperationalException())
freqtrade = get_patched_freqtradebot(mocker, default_conf_usdt)
freqtrade.emc = MagicMock()
freqtrade.emc.shutdown = MagicMock()
freqtrade.cleanup()
assert freqtrade.emc.shutdown.call_count == 1
@pytest.mark.parametrize('runmode', [
RunMode.DRY_RUN,
RunMode.LIVE
@ -2366,7 +2378,7 @@ def test_close_trade(
trade.is_short = is_short
assert trade
oobj = Order.parse_from_ccxt_object(enter_order, enter_order['symbol'], trade.enter_side)
oobj = Order.parse_from_ccxt_object(enter_order, enter_order['symbol'], trade.entry_side)
trade.update_trade(oobj)
oobj = Order.parse_from_ccxt_object(exit_order, exit_order['symbol'], trade.exit_side)
trade.update_trade(oobj)

View File

@ -46,7 +46,7 @@ def test_init_plotscript(default_conf, mocker, testdatadir):
default_conf['trade_source'] = "file"
default_conf['timeframe'] = "5m"
default_conf["datadir"] = testdatadir
default_conf['exportfilename'] = testdatadir / "backtest-result_new.json"
default_conf['exportfilename'] = testdatadir / "backtest-result.json"
supported_markets = ["TRX/BTC", "ADA/BTC"]
ret = init_plotscript(default_conf, supported_markets)
assert "ohlcv" in ret
@ -158,7 +158,7 @@ def test_plot_trades(testdatadir, caplog):
assert fig == fig1
assert log_has("No trades found.", caplog)
pair = "ADA/BTC"
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
trades = load_backtest_data(filename)
trades = trades.loc[trades['pair'] == pair]
@ -299,7 +299,7 @@ def test_generate_plot_file(mocker, caplog):
def test_add_profit(testdatadir):
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
timerange = TimeRange.parse_timerange("20180110-20180112")
@ -319,7 +319,7 @@ def test_add_profit(testdatadir):
def test_generate_profit_graph(testdatadir):
filename = testdatadir / "backtest_results/backtest-result_new.json"
filename = testdatadir / "backtest_results/backtest-result.json"
trades = load_backtest_data(filename)
timerange = TimeRange.parse_timerange("20180110-20180112")
pairs = ["TRX/BTC", "XLM/BTC"]
@ -354,7 +354,7 @@ def test_generate_profit_graph(testdatadir):
profit = find_trace_in_fig_data(figure.data, "Profit")
assert isinstance(profit, go.Scatter)
drawdown = find_trace_in_fig_data(figure.data, "Max drawdown 35.69%")
drawdown = find_trace_in_fig_data(figure.data, "Max drawdown 73.89%")
assert isinstance(drawdown, go.Scatter)
parallel = find_trace_in_fig_data(figure.data, "Parallel trades")
assert isinstance(parallel, go.Scatter)
@ -395,7 +395,7 @@ def test_load_and_plot_trades(default_conf, mocker, caplog, testdatadir):
default_conf['trade_source'] = 'file'
default_conf["datadir"] = testdatadir
default_conf['exportfilename'] = testdatadir / "backtest-result_new.json"
default_conf['exportfilename'] = testdatadir / "backtest-result.json"
default_conf['indicators1'] = ["sma5", "ema10"]
default_conf['indicators2'] = ["macd"]
default_conf['pairs'] = ["ETH/BTC", "LTC/BTC"]
@ -466,7 +466,7 @@ def test_plot_profit(default_conf, mocker, testdatadir):
match=r"No trades found, cannot generate Profit-plot.*"):
plot_profit(default_conf)
default_conf['exportfilename'] = testdatadir / "backtest_results/backtest-result_new.json"
default_conf['exportfilename'] = testdatadir / "backtest_results/backtest-result.json"
plot_profit(default_conf)

Some files were not shown because too many files have changed in this diff.