Merge branch 'develop' into db_keep_orders

commit 4ecb67d1d1

.github/workflows/ci.yml (vendored, 4 changes)

@@ -89,7 +89,7 @@ jobs:
run: |
cp config.json.example config.json
freqtrade create-userdir --userdir user_data
freqtrade hyperopt --datadir tests/testdata -e 5 --strategy SampleStrategy --hyperopt SampleHyperOpt
freqtrade hyperopt --datadir tests/testdata -e 5 --strategy SampleStrategy --hyperopt SampleHyperOpt --print-all

- name: Flake8
run: |
@@ -151,7 +151,7 @@ jobs:
run: |
cp config.json.example config.json
freqtrade create-userdir --userdir user_data
freqtrade hyperopt --datadir tests/testdata -e 5 --strategy SampleStrategy --hyperopt SampleHyperOpt
freqtrade hyperopt --datadir tests/testdata -e 5 --strategy SampleStrategy --hyperopt SampleHyperOpt --print-all

- name: Flake8
run: |

@@ -1,7 +1,7 @@
FROM --platform=linux/arm/v7 python:3.7.7-slim-buster

RUN apt-get update \
&& apt-get -y install curl build-essential libssl-dev libatlas3-base libgfortran5 sqlite3 \
&& apt-get -y install curl build-essential libssl-dev libffi-dev libatlas3-base libgfortran5 sqlite3 \
&& apt-get clean \
&& pip install --upgrade pip \
&& echo "[global]\nextra-index-url=https://www.piwheels.org/simple" > /etc/pip.conf

@@ -157,17 +157,32 @@ A backtesting result will look like that:
| ADA/BTC | 1 | 0.89 | 0.89 | 0.00004434 | 0.44 | 6:00:00 | 1 | 0 | 0 |
| LTC/BTC | 1 | 0.68 | 0.68 | 0.00003421 | 0.34 | 2:00:00 | 1 | 0 | 0 |
| TOTAL | 2 | 0.78 | 1.57 | 0.00007855 | 0.78 | 4:00:00 | 2 | 0 | 0 |
=============== SUMMARY METRICS ===============
| Metric | Value |
|-----------------------+---------------------|
| Backtesting from | 2019-01-01 00:00:00 |
| Backtesting to | 2019-05-01 00:00:00 |
| Total trades | 429 |
| First trade | 2019-01-01 18:30:00 |
| First trade Pair | EOS/USDT |
| Total Profit % | 152.41% |
| Trades per day | 3.575 |
| Best day | 25.27% |
| Worst day | -30.67% |
| Avg. Duration Winners | 4:23:00 |
| Avg. Duration Loser | 6:55:00 |
| | |
| Max Drawdown | 50.63% |
| Drawdown Start | 2019-02-15 14:10:00 |
| Drawdown End | 2019-04-11 18:15:00 |
| Market change | -5.88% |
===============================================
```

### Backtesting report table

The 1st table contains all trades the bot made, including "left open trades".

The 2nd table contains a recap of sell reasons.
This table can tell you which area needs some additional work (i.e. all `sell_signal` trades are losses, so we should disable the sell-signal or work on improving that).

The 3rd table contains all trades the bot had to `forcesell` at the end of the backtest period to present a full picture.
This is necessary to simulate realistic behaviour, since the backtest period has to end at some point, while realistically, you could leave the bot running forever.
These trades are also included in the first table, but are extracted separately for clarity.

The last line will give you the overall performance of your strategy,
here:

@@ -196,6 +211,58 @@ On the other hand, if you set a too high `minimal_roi` like `"0": 0.55`
(55%), there is almost no chance that the bot will ever reach this profit.
Hence, keep in mind that your performance is an integral mix of all different elements of the strategy, your configuration, and the crypto-currency pairs you have set up.
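
For reference, `minimal_roi` maps a trade's age in minutes to the profit ratio at which the bot takes profit. A sketch of a typical shape, as it would appear in a strategy file (the values below are illustrative only, not a recommendation):

```python
# Illustrative minimal_roi: keys are minutes since the trade opened,
# values are profit ratios at which the trade is sold.
minimal_roi = {
    "0": 0.04,    # from the start, sell at 4% profit
    "30": 0.02,   # after 30 minutes, accept 2%
    "120": 0.01,  # after 2 hours, accept 1%
}
```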

### Sell reasons table

The 2nd table contains a recap of sell reasons.
This table can tell you which area needs some additional work (e.g. all or many of the `sell_signal` trades are losses, so you should work on improving the sell signal, or consider disabling it).

### Left open trades table

The 3rd table contains all trades the bot had to `forcesell` at the end of the backtesting period to present you the full picture.
This is necessary to simulate realistic behavior, since the backtest period has to end at some point, while realistically, you could leave the bot running forever.
These trades are also included in the first table, but are also shown separately in this table for clarity.

### Summary metrics

The last element of the backtest report is the summary metrics table.
It contains some useful key metrics about the performance of your strategy on backtesting data.

```
=============== SUMMARY METRICS ===============
| Metric | Value |
|-----------------------+---------------------|
| Backtesting from | 2019-01-01 00:00:00 |
| Backtesting to | 2019-05-01 00:00:00 |
| Total trades | 429 |
| First trade | 2019-01-01 18:30:00 |
| First trade Pair | EOS/USDT |
| Total Profit % | 152.41% |
| Trades per day | 3.575 |
| Best day | 25.27% |
| Worst day | -30.67% |
| Avg. Duration Winners | 4:23:00 |
| Avg. Duration Loser | 6:55:00 |
| | |
| Max Drawdown | 50.63% |
| Drawdown Start | 2019-02-15 14:10:00 |
| Drawdown End | 2019-04-11 18:15:00 |
| Market change | -5.88% |
===============================================

```

- `Total trades`: Identical to the total trades of the backtest output table.
- `First trade`: First trade entered.
- `First trade pair`: Which pair was part of the first trade.
- `Backtesting from` / `Backtesting to`: Backtesting range (usually defined with the `--timerange` option).
- `Total Profit %`: Total profit per stake amount. Aligned to the TOTAL column of the first table.
- `Trades per day`: Total trades divided by the backtesting duration in days (this will give you information about how many trades to expect from the strategy).
- `Best day` / `Worst day`: Best and worst day based on daily profit.
- `Avg. Duration Winners` / `Avg. Duration Loser`: Average durations for winning and losing trades.
- `Max Drawdown`: Maximum drawdown experienced. For example, a value of 50% means that from the highest to the subsequent lowest point, a 50% drop was experienced.
- `Drawdown Start` / `Drawdown End`: Start and end datetimes for this largest drawdown (can also be visualized via the `plot-dataframe` sub-command).
- `Market change`: Change of the market during the backtest period. Calculated as the average of all pairs' changes from the first to the last candle using the "close" column.
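
As a quick sanity check (an illustrative sketch, not part of freqtrade's report code), the `Trades per day` value from the example table above can be reproduced from the backtest range and the trade count:

```python
from datetime import datetime

# Values taken from the example summary metrics above
backtest_from = datetime(2019, 1, 1)
backtest_to = datetime(2019, 5, 1)
total_trades = 429

backtest_days = (backtest_to - backtest_from).days   # 120 days
print(round(total_trades / backtest_days, 3))        # 3.575 trades per day
```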

### Assumptions made by backtesting

Since backtesting lacks some detailed information about what happens within a candle, it needs to make a few assumptions:

@@ -55,9 +55,9 @@ Mandatory parameters are marked as **Required**, which means that they are requi
| `process_only_new_candles` | Enable processing of indicators only when new candles arrive. If false, each loop populates the indicators; this means the same candle is processed many times, creating system load, but it can be useful if your strategy depends on tick data, not only on candles. [Strategy Override](#parameters-in-the-strategy). <br>*Defaults to `false`.* <br> **Datatype:** Boolean
| `minimal_roi` | **Required.** Set the threshold as ratio the bot will use to sell a trade. [More information below](#understand-minimal_roi). [Strategy Override](#parameters-in-the-strategy). <br> **Datatype:** Dict
| `stoploss` | **Required.** Value as ratio of the stoploss used by the bot. More details in the [stoploss documentation](stoploss.md). [Strategy Override](#parameters-in-the-strategy). <br> **Datatype:** Float (as ratio)
| `trailing_stop` | Enables trailing stoploss (based on `stoploss` in either configuration or strategy file). More details in the [stoploss documentation](stoploss.md). [Strategy Override](#parameters-in-the-strategy). <br> **Datatype:** Boolean
| `trailing_stop_positive` | Changes stoploss once profit has been reached. More details in the [stoploss documentation](stoploss.md). [Strategy Override](#parameters-in-the-strategy). <br> **Datatype:** Float
| `trailing_stop_positive_offset` | Offset on when to apply `trailing_stop_positive`. Percentage value which should be positive. More details in the [stoploss documentation](stoploss.md). [Strategy Override](#parameters-in-the-strategy). <br>*Defaults to `0.0` (no offset).* <br> **Datatype:** Float
| `trailing_stop` | Enables trailing stoploss (based on `stoploss` in either configuration or strategy file). More details in the [stoploss documentation](stoploss.md#trailing-stop-loss). [Strategy Override](#parameters-in-the-strategy). <br> **Datatype:** Boolean
| `trailing_stop_positive` | Changes stoploss once profit has been reached. More details in the [stoploss documentation](stoploss.md#trailing-stop-loss-custom-positive-loss). [Strategy Override](#parameters-in-the-strategy). <br> **Datatype:** Float
| `trailing_stop_positive_offset` | Offset on when to apply `trailing_stop_positive`. Percentage value which should be positive. More details in the [stoploss documentation](stoploss.md#trailing-stop-loss-only-once-the-trade-has-reached-a-certain-offset). [Strategy Override](#parameters-in-the-strategy). <br>*Defaults to `0.0` (no offset).* <br> **Datatype:** Float
| `trailing_only_offset_is_reached` | Only apply trailing stoploss when the offset is reached. [stoploss documentation](stoploss.md). [Strategy Override](#parameters-in-the-strategy). <br>*Defaults to `false`.* <br> **Datatype:** Boolean
| `unfilledtimeout.buy` | **Required.** How long (in minutes) the bot will wait for an unfilled buy order to complete, after which the order will be cancelled. [Strategy Override](#parameters-in-the-strategy).<br> **Datatype:** Integer
| `unfilledtimeout.sell` | **Required.** How long (in minutes) the bot will wait for an unfilled sell order to complete, after which the order will be cancelled. [Strategy Override](#parameters-in-the-strategy).<br> **Datatype:** Integer

@@ -278,24 +278,13 @@ This allows to buy using limit orders, sell using
limit-orders, and create stoplosses using market orders. It also allows to set the
stoploss "on exchange" which means stoploss order would be placed immediately once
the buy order is fulfilled.
If `stoploss_on_exchange` and `trailing_stop` are both set, then the bot will use `stoploss_on_exchange_interval` to check and update the stoploss on exchange periodically.
`order_types` can be set in the configuration file or in the strategy.

`order_types` set in the configuration file overwrites values set in the strategy as a whole, so you need to configure the whole `order_types` dictionary in one place.

If this is configured, the following 4 values (`buy`, `sell`, `stoploss` and
`stoploss_on_exchange`) need to be present, otherwise the bot will fail to start.

`emergencysell` is an optional value, which defaults to `market` and is used when creating stoploss on exchange orders fails.
The below is the default which is used if this is not configured in either strategy or configuration file.

Not all Exchanges support `stoploss_on_exchange`. If an exchange supports both limit and market stoploss orders, then the value of `stoploss` will be used to determine the stoploss type.

If `stoploss_on_exchange` uses limit orders, the exchange needs 2 prices, the stoploss_price and the Limit price.
`stoploss` defines the stop-price - and limit should be slightly below this.

This defaults to 0.99 / 1% (configurable via `stoploss_on_exchange_limit_ratio`).
Calculation example: we bought the asset at 100$.
Stop-price is 95$, then limit would be `95 * 0.99 = 94.05$` - so the stoploss will happen between 95$ and 94.05$.
For information on (`emergencysell`,`stoploss_on_exchange`,`stoploss_on_exchange_interval`,`stoploss_on_exchange_limit_ratio`) please see stop loss documentation [stop loss on exchange](stoploss.md)

Syntax for Strategy:

@@ -663,24 +652,28 @@ Filters low-value coins which would not allow setting stoplosses.
#### PriceFilter

The `PriceFilter` allows filtering of pairs by price. Currently the following price filters are supported:

* `min_price`
* `max_price`
* `low_price_ratio`

The `min_price` setting removes pairs where the price is below the specified price. This is useful if you wish to avoid trading very low-priced pairs.
This option is disabled by default, and will only apply if set to <> 0.
This option is disabled by default, and will only apply if set to > 0.

The `max_price` setting removes pairs where the price is above the specified price. This is useful if you wish to trade only low-priced pairs.
This option is disabled by default, and will only apply if set to <> 0.
This option is disabled by default, and will only apply if set to > 0.

The `low_price_ratio` setting removes pairs where a raise of 1 price unit (pip) is above the `low_price_ratio` ratio.
This option is disabled by default, and will only apply if set to <> 0.
This option is disabled by default, and will only apply if set to > 0.

For `PriceFilter` at least one of its `min_price`, `max_price` or `low_price_ratio` settings must be applied.

Calculation example:

Min price precision is 8 decimals. If price is 0.00000011 - one step would be 0.00000012 - which is almost 10% higher than the previous value.
Min price precision for SHITCOIN/BTC is 8 decimals. If its price is 0.00000011 - one price step above would be 0.00000012, which is ~9% higher than the previous price value. You may filter out this pair by using PriceFilter with `low_price_ratio` set to 0.09 (9%) or with `min_price` set to 0.00000011, correspondingly.
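
The same numbers can be checked with a couple of lines of Python (an illustrative sketch, not freqtrade code):

```python
price = 0.00000011
tick = 0.00000001                 # one price step at 8 decimal places
one_pip_ratio = tick / price      # ~0.0909, i.e. ~9% per single price step
# 0.0909 > 0.09, so the pair is removed when low_price_ratio is set to 0.09
print(f"{one_pip_ratio:.4f}")
```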

These pairs are dangerous since it may be impossible to place the desired stoploss - and often result in high losses.
!!! Warning "Low priced pairs"
    Low priced pairs with high "1 pip movements" are dangerous since they are often illiquid and it may also be impossible to place the desired stoploss, which can often result in high losses since price needs to be rounded to the next tradable price - so instead of having a stoploss of -5%, you could end up with a stoploss of -9% simply due to price rounding.

#### ShuffleFilter

docs/edge.md (11 changes)

@@ -6,7 +6,8 @@ This page explains how to use Edge Positioning module in your bot in order to en
Edge positioning is not compatible with dynamic (volume-based) whitelist.

!!! Note
    Edge does not consider anything else than buy/sell/stoploss signals. So trailing stoploss, ROI, and everything else are ignored in its calculation.
    Edge does not consider anything other than *its own* buy/sell/stoploss signals. It ignores the stoploss, trailing stoploss, and ROI settings in the strategy configuration file.
    Therefore, it is important to understand that Edge can improve the performance of some trading strategies but *decrease* the performance of others.

## Introduction

@@ -89,7 +90,7 @@ You can also use this value to evaluate the effectiveness of modifications to th

## How does it work?

If enabled in config, Edge will go through historical data with a range of stoplosses in order to find buy and sell/stoploss signals. It then calculates win rate and expectancy over *N* trades for each stoploss. Here is an example:
Edge combines dynamic stoploss, dynamic positions, and whitelist generation into one isolated module which is then applied to the trading strategy. If enabled in config, Edge will go through historical data with a range of stoplosses in order to find buy and sell/stoploss signals. It then calculates win rate and expectancy over *N* trades for each stoploss. Here is an example:

| Pair | Stoploss | Win Rate | Risk Reward Ratio | Expectancy |
|----------|:-------------:|-------------:|------------------:|-----------:|
@@ -186,6 +187,12 @@ An example of its output:
| APPC/BTC | -0.02 | 0.44 | 2.28 | 1.27 | 0.44 | 25 | 43 |
| NEBL/BTC | -0.03 | 0.63 | 1.29 | 0.58 | 0.44 | 19 | 59 |

Edge produced the above table by comparing `calculate_since_number_of_days` to `minimum_expectancy` to find `min_trade_number` historical information based on the config file. The timerange Edge uses for its comparisons can be further limited by using the `--timerange` switch.

In live and dry-run modes, after the `process_throttle_secs` has passed, Edge will again process `calculate_since_number_of_days` against `minimum_expectancy` to find `min_trade_number`. If no `min_trade_number` is found, the bot will return "whitelist empty". Depending on the trade strategy being deployed, "whitelist empty" may be returned much of the time - or *all* of the time. The use of Edge may also cause trading to occur in bursts, though this is rare.

If you encounter "whitelist empty" a lot, consider tuning `calculate_since_number_of_days`, `minimum_expectancy` and `min_trade_number` to align to the trading frequency of your strategy.

### Update cached pairs with the latest data

Edge requires historic data the same way as backtesting does.

@@ -370,6 +370,9 @@ By default, hyperopt prints colorized results -- epochs with positive profit are

You can use the `--print-all` command line option if you would like to see all results in the hyperopt output, not only the best ones. When `--print-all` is used, current best results are also colorized by default -- they are printed in bold (bright) style. This can also be switched off with the `--no-color` command line option.

!!! Note "Windows and color output"
    Windows does not support color output natively, therefore it is automatically disabled. To have color output for hyperopt running under Windows, please consider using WSL.

### Understand Hyperopt ROI results

If you are optimizing ROI (i.e. if optimization search-space contains 'all', 'default' or 'roi'), your result will look as follows and include a ROI table:

@@ -224,7 +224,8 @@ Possible options for the `freqtrade plot-profit` subcommand:

```
usage: freqtrade plot-profit [-h] [-v] [--logfile FILE] [-V] [-c PATH]
[-d PATH] [--userdir PATH] [-p PAIRS [PAIRS ...]]
[-d PATH] [--userdir PATH] [-s NAME]
[--strategy-path PATH] [-p PAIRS [PAIRS ...]]
[--timerange TIMERANGE] [--export EXPORT]
[--export-filename PATH] [--db-url PATH]
[--trade-source {DB,file}] [-i TIMEFRAME]
@@ -270,6 +271,11 @@ Common arguments:
--userdir PATH, --user-data-dir PATH
Path to userdata directory.

Strategy arguments:
-s NAME, --strategy NAME
Specify strategy class name which will be used by the
bot.
--strategy-path PATH Specify additional strategy lookup path.
```

The `-p/--pairs` argument can be used to limit the pairs that are considered for this calculation.
@@ -279,7 +285,7 @@ Examples:

Use custom backtest-export file

``` bash
freqtrade plot-profit -p LTC/BTC --export-filename user_data/backtest_results/backtest-result-Strategy005.json
freqtrade plot-profit -p LTC/BTC --export-filename user_data/backtest_results/backtest-result.json
```

Use custom database

@@ -1,2 +1,2 @@
mkdocs-material==5.5.3
mkdocs-material==5.5.7
mdx_truly_sane_lists==1.2

@@ -110,7 +110,7 @@ SET is_open=0,
close_date=<close_date>,
close_rate=<close_rate>,
close_profit = close_rate / open_rate - 1,
close_profit_abs = (amount * <close_rate> * (1 - fee_close) - (amount * (open_rate * 1 - fee_open))),
close_profit_abs = (amount * <close_rate> * (1 - fee_close) - (amount * (open_rate * (1 - fee_open)))),
sell_reason=<sell_reason>
WHERE id=<trade_ID_to_update>;
```
@@ -123,7 +123,7 @@ SET is_open=0,
close_date='2020-06-20 03:08:45.103418',
close_rate=0.19638016,
close_profit=0.0496,
close_profit_abs = (amount * 0.19638016 * (1 - fee_close) - (amount * open_rate * (1 - fee_open))),
close_profit_abs = (amount * 0.19638016 * (1 - fee_close) - (amount * (open_rate * (1 - fee_open)))),
sell_reason='force_sell'
WHERE id=31;
```
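
To see what the two expressions compute, here is a rough Python equivalent with made-up trade values (the `open_rate` below is hypothetical, chosen so the relative profit matches the 0.0496 used in the example; amount and fees are invented for illustration):

```python
amount = 100.0          # hypothetical asset amount
open_rate = 0.18710000  # hypothetical open rate
close_rate = 0.19638016
fee_open = 0.0025
fee_close = 0.0025

close_profit = close_rate / open_rate - 1        # relative profit, ~0.0496
close_profit_abs = (amount * close_rate * (1 - fee_close)
                    - amount * open_rate * (1 - fee_open))   # absolute profit in stake currency
print(round(close_profit, 4), close_profit_abs)
```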

docs/stoploss.md (151 changes)

@@ -6,7 +6,63 @@ For example, value `-0.10` will cause immediate sell if the profit dips below -1
Most of the strategy files already include the optimal `stoploss` value.

!!! Info
    All stoploss properties mentioned in this file can be set in the Strategy, or in the configuration. Configuration values will override the strategy values.
    All stoploss properties mentioned in this file can be set in the Strategy, or in the configuration.
    <ins>Configuration values will override the strategy values.</ins>

## Stop Loss On-Exchange/Freqtrade

Those stoploss modes can be *on exchange* or *off exchange*.

These modes can be configured with these values:

``` python
'emergencysell': 'market',
'stoploss_on_exchange': False,
'stoploss_on_exchange_interval': 60,
'stoploss_on_exchange_limit_ratio': 0.99
```

!!! Note
    Stoploss on exchange is only supported for Binance (stop-loss-limit), Kraken (stop-loss-market) and FTX (stop limit and stop-market) as of now.
    <ins>Do not set too low a stoploss value if using stop loss on exchange!</ins>
    If set too low/tight, you will have a greater risk of missing a fill on the order, and the stoploss will not work.

### stoploss_on_exchange and stoploss_on_exchange_limit_ratio
Enable or Disable stop loss on exchange.
If the stoploss is *on exchange* it means a stoploss limit order is placed on the exchange immediately after the buy order fills successfully. This will protect you against sudden crashes in the market, as the order will be in the queue immediately, and if the market goes down, the order has a better chance of being filled.

If `stoploss_on_exchange` uses limit orders, the exchange needs 2 prices, the stoploss_price and the Limit price.
`stoploss` defines the stop-price where the limit order is placed - and limit should be slightly below this.
If an exchange supports both limit and market stoploss orders, then the value of `stoploss` will be used to determine the stoploss type.

Calculation example: we bought the asset at 100$.
Stop-price is 95$, then limit would be `95 * 0.99 = 94.05$` - so the limit order fill can happen between 95$ and 94.05$.
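
The same calculation, written out (a sketch of the arithmetic above, not freqtrade internals; the asset was bought at 100$):

```python
stoploss_on_exchange_limit_ratio = 0.99

stop_price = 95.0
limit_price = stop_price * stoploss_on_exchange_limit_ratio
# The stop triggers at 95$ and the limit order can fill anywhere down to 94.05$.
print(round(limit_price, 2))   # 94.05
```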

For example, assuming the stoploss is on exchange, and trailing stoploss is enabled, and the market is going up, then the bot automatically cancels the previous stoploss order and puts a new one with a stop value higher than the previous stoploss order.

### stoploss_on_exchange_interval
In case of stoploss on exchange there is another parameter called `stoploss_on_exchange_interval`. This configures the interval in seconds at which the bot will check the stoploss and update it if necessary.
The bot cannot do this every 5 seconds (at each iteration), otherwise it would get banned by the exchange.
So this parameter will tell the bot how often it should update the stoploss order. The default value is 60 (1 minute).
This same logic will reapply a stoploss order on the exchange should you cancel it accidentally.

### emergencysell
`emergencysell` is an optional value, which defaults to `market` and is used when creating stop loss on exchange orders fails.
The below is the default, which is used if it is not changed in the strategy or configuration file.

Example from strategy file:

``` python
order_types = {
    'buy': 'limit',
    'sell': 'limit',
    'emergencysell': 'market',
    'stoploss': 'market',
    'stoploss_on_exchange': True,
    'stoploss_on_exchange_interval': 60,
    'stoploss_on_exchange_limit_ratio': 0.99
}
```

## Stop Loss Types

@@ -17,29 +73,29 @@ At this stage the bot contains the following stoploss support modes:
3. Trailing stop loss, custom positive loss.
4. Trailing stop loss only once the trade has reached a certain offset.

Those stoploss modes can be *on exchange* or *off exchange*. If the stoploss is *on exchange* it means a stoploss limit order is placed on the exchange immediately after buy order happens successfully. This will protect you against sudden crashes in market as the order will be in the queue immediately and if market goes down then the order has more chance of being fulfilled.

In case of stoploss on exchange there is another parameter called `stoploss_on_exchange_interval`. This configures the interval in seconds at which the bot will check the stoploss and update it if necessary.

For example, assuming the stoploss is on exchange, and trailing stoploss is enabled, and the market is going up, then the bot automatically cancels the previous stoploss order and puts a new one with a stop value higher than the previous stoploss order.
The bot cannot do this every 5 seconds (at each iteration), otherwise it would get banned by the exchange.
So this parameter will tell the bot how often it should update the stoploss order. The default value is 60 (1 minute).
This same logic will reapply a stoploss order on the exchange should you cancel it accidentally.

!!! Note
    Stoploss on exchange is only supported for Binance (stop-loss-limit), Kraken (stop-loss-market) and FTX (stop limit and stop-market) as of now.

## Static Stop Loss
### Static Stop Loss

This is very simple: you define a stop loss of x (as a ratio of price, i.e. x * 100% of price). This will try to sell the asset once the loss exceeds the defined loss.

## Trailing Stop Loss
Example of stop loss:

``` python
stoploss = -0.10
```

For example, simplified math:
* the bot buys an asset at a price of 100$
* the stop loss is defined at -10%
* the stop loss would get triggered once the asset drops below 90$

### Trailing Stop Loss

The initial value for this is `stoploss`, just as you would define your static Stop loss.
To enable trailing stoploss:

``` python
trailing_stop = True
stoploss = -0.10
trailing_stop = True
```

This will now activate an algorithm, which automatically moves the stop loss up every time the price of your asset increases.
@@ -47,35 +103,43 @@ This will now activate an algorithm, which automatically moves the stop loss up
For example, simplified math:

* the bot buys an asset at a price of 100$
* the stop loss is defined at 2%
* the stop loss would get triggered once the asset dropps below 98$
* the stop loss is defined at -10%
* the stop loss would get triggered once the asset drops below 90$
* assuming the asset now increases to 102$
* the stop loss will now be 2% of 102$ or 99.96$
* now the asset drops in value to 101$, the stop loss will still be 99.96$ and would trigger at 99.96$.
* the stop loss will now be -10% of 102$ = 91.8$
* now the asset drops in value to 101$, the stop loss will still be 91.8$ and would trigger at 91.8$.

In summary: The stoploss will be adjusted to be always be 2% of the highest observed price.
In summary: The stoploss will be adjusted to always be -10% of the highest observed price.
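
The trailing behaviour can be sketched in a few lines (illustrative only; this is not how freqtrade implements it internally):

```python
stoploss = -0.10
prices = [100, 102, 101]           # observed prices from the example above

highest = float("-inf")
for price in prices:
    highest = max(highest, price)
    stop_price = highest * (1 + stoploss)
    print(price, round(stop_price, 2))   # 90.0, 91.8, 91.8 -- the stop only ever moves up
```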

### Custom positive stoploss
### Trailing stop loss, custom positive loss

It is also possible to have a default stop loss, when you are in the red with your buy, but once your profit surpasses a certain percentage, the system will utilize a new stop loss, which can have a different value.
For example your default stop loss is 5%, but once you have 1.1% profit, it will be changed to be only a 1% stop loss, which trails the green candles until it goes below them.
It is also possible to have a default stop loss, when you are in the red with your buy (buy - fee), but once you hit positive result the system will utilize a new stop loss, which can have a different value.
For example, your default stop loss is -10%, but once you have more than 0% profit (example 0.1%) a different trailing stoploss will be used.

Both values require `trailing_stop` to be set to true.
!!! Note
    If you want the stoploss to only be changed when you break even or make a profit (what most users want), please refer to the next section with [offset enabled](#Trailing-stop-loss-only-once-the-trade-has-reached-a-certain-offset).

Both values require `trailing_stop` to be set to true and `trailing_stop_positive` to be set to a value.

``` python
trailing_stop_positive = 0.01
trailing_stop_positive_offset = 0.011
stoploss = -0.10
trailing_stop = True
trailing_stop_positive = 0.02
```

The 0.01 would translate to a 1% stop loss, once you hit 1.1% profit.
For example, simplified math:

* the bot buys an asset at a price of 100$
* the stop loss is defined at -10%
* the stop loss would get triggered once the asset drops below 90$
* assuming the asset now increases to 102$
* the stop loss will now be -2% of 102$ = 99.96$ (the 99.96$ stop loss will be locked in and will follow the asset price as it increases, staying 2% below it)
* now the asset drops in value to 101$, the stop loss will still be 99.96$ and would trigger at 99.96$

The 0.02 would translate to a -2% stop loss.
Before this, `stoploss` is used for the trailing stoploss.

Read the [next section](#trailing-only-once-offset-is-reached) to keep stoploss at 5% of the entry point.

!!! Tip
    Make sure to have this value (`trailing_stop_positive_offset`) lower than minimal ROI, otherwise minimal ROI will apply first and sell the trade.

### Trailing only once offset is reached
### Trailing stop loss only once the trade has reached a certain offset

It is also possible to use a static stoploss until the offset is reached, and then trail the trade to take profits once the market turns.

@@ -87,17 +151,28 @@ This option can be used with or without `trailing_stop_positive`, but uses `trai
trailing_only_offset_is_reached = True
```

Simplified example:
Configuration (offset is buy price + 3%):

``` python
stoploss = 0.05
stoploss = -0.10
trailing_stop = True
trailing_stop_positive = 0.02
trailing_stop_positive_offset = 0.03
trailing_only_offset_is_reached = True
```

For example, simplified math:

* the bot buys an asset at a price of 100$
* the stop loss is defined at 5%
* the stop loss will remain at 95% until profit reaches +3%
* the stop loss is defined at -10%
* the stop loss would get triggered once the asset drops below 90$
* stoploss will remain at 90$ unless the asset increases to or above our configured offset
* assuming the asset now increases to 103$ (where we have the offset configured)
* the stop loss will now be -2% of 103$ = 100.94$
* now the asset drops in value to 101$, the stop loss will still be 100.94$ and would trigger at 100.94$

!!! Tip
    Make sure to have this value (`trailing_stop_positive_offset`) lower than minimal ROI, otherwise minimal ROI will apply first and sell the trade.

## Changing stoploss on open trades

@@ -199,3 +199,24 @@ class Awesomestrategy(IStrategy):
        return True

```

## Derived strategies

The strategies can be derived from other strategies. This avoids duplication of your custom strategy code. You can use this technique to override small parts of your main strategy, leaving the rest untouched:

``` python
class MyAwesomeStrategy(IStrategy):
    ...
    stoploss = 0.13
    trailing_stop = False
    # All other attributes and methods are here as they
    # should be in any custom strategy...
    ...

class MyAwesomeStrategy2(MyAwesomeStrategy):
    # Override something
    stoploss = 0.08
    trailing_stop = True
```

Both attributes and methods may be overridden, altering behavior of the original strategy in a way you need.
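
For example, overriding a method while reusing the parent's logic could look like this (a sketch; `MyAwesomeStrategy3` and the extra indicator are made up for illustration):

```python
class MyAwesomeStrategy3(MyAwesomeStrategy):
    # Methods can be overridden too; call the parent implementation
    # if you only want to extend it.
    def populate_indicators(self, dataframe, metadata):
        dataframe = super().populate_indicators(dataframe, metadata)
        dataframe['sma20'] = dataframe['close'].rolling(20).mean()
        return dataframe
```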

@@ -58,12 +58,12 @@ file as reference.**

!!! Note "Strategies and Backtesting"
    To avoid problems and unexpected differences between Backtesting and dry/live modes, please be aware
    that during backtesting the full time-interval is passed to the `populate_*()` methods at once.
    that during backtesting the full time range is passed to the `populate_*()` methods at once.
    It is therefore best to use vectorized operations (across the whole dataframe, not loops) and
    avoid index referencing (`df.iloc[-1]`), but instead use `df.shift()` to get to the previous candle.
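
For example (a minimal sketch of the difference, inside `populate_*()`-style code):

```python
# Vectorized and backtest-safe: compare each candle to the one before it.
dataframe['prev_close'] = dataframe['close'].shift(1)
dataframe.loc[dataframe['close'] > dataframe['prev_close'], 'buy'] = 1

# Avoid absolute index references such as dataframe['close'].iloc[-1]:
# during backtesting that is the last candle of the *whole* range, i.e. future data.
```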

!!! Warning "Warning: Using future data"
    Since backtesting passes the full time interval to the `populate_*()` methods, the strategy author
    Since backtesting passes the full time range to the `populate_*()` methods, the strategy author
    needs to take care to avoid having the strategy utilize data from the future.
    Some common patterns for this are listed in the [Common Mistakes](#common-mistakes-when-developing-strategies) section of this document.

@@ -251,7 +251,7 @@ minimal_roi = {
While technically not completely disabled, this would sell once the trade reaches 10000% Profit.

To use times based on candle duration (timeframe), the following snippet can be handy.
This will allow you to change the ticket_interval for the strategy, and ROI times will still be set as candles (e.g. after 3 candles ...)
This will allow you to change the timeframe for the strategy, and ROI times will still be set as candles (e.g. after 3 candles ...)

``` python
from freqtrade.exchange import timeframe_to_minutes

@@ -285,7 +285,7 @@ If your exchange supports it, it's recommended to also set `"stoploss_on_exchang

For more information on order_types please look [here](configuration.md#understand-order_types).

### Timeframe (ticker interval)
### Timeframe (formerly ticker interval)

This is the set of candles the bot should download and use for the analysis.
Common values are `"1m"`, `"5m"`, `"15m"`, `"1h"`, however all values supported by your exchange should work.
@@ -328,15 +328,15 @@ class Awesomestrategy(IStrategy):

***

### Additional data (informative_pairs)
## Additional data (informative_pairs)

#### Get data for non-tradeable pairs
### Get data for non-tradeable pairs

Data for additional, informative pairs (reference pairs) can be beneficial for some strategies.
Ohlcv data for these pairs will be downloaded as part of the regular whitelist refresh process and is available via `DataProvider` just as other pairs (see below).
OHLCV data for these pairs will be downloaded as part of the regular whitelist refresh process and is available via `DataProvider` just as other pairs (see below).
These pairs will **not** be traded unless they are also specified in the pair whitelist, or have been selected by Dynamic Whitelisting.

The pairs need to be specified as tuples in the format `("pair", "interval")`, with pair as the first and time interval as the second argument.
The pairs need to be specified as tuples in the format `("pair", "timeframe")`, with pair as the first and timeframe as the second argument.

Sample:

@@ -347,15 +347,17 @@ def informative_pairs(self):
]
```

A full sample can be found [in the DataProvider section](#complete-data-provider-sample).

!!! Warning
    As these pairs will be refreshed as part of the regular whitelist refresh, it's best to keep this list short.
    All intervals and all pairs can be specified as long as they are available (and active) on the used exchange.
    It is however better to use resampling to longer time-intervals when possible
    All timeframes and all pairs can be specified as long as they are available (and active) on the used exchange.
    It is however better to use resampling to longer timeframes whenever possible
    to avoid hammering the exchange with too many requests and risk being blocked.

***

### Additional data (DataProvider)
## Additional data (DataProvider)

The strategy provides access to the `DataProvider`. This allows you to get additional data to use in your strategy.

@@ -363,10 +365,14 @@ All methods return `None` in case of failure (do not raise an exception).

Please always check the mode of operation to select the correct method to get data (samples see below).

#### Possible options for DataProvider
!!! Warning "Hyperopt"
    Dataprovider is available during hyperopt, however it can only be used in `populate_indicators()` within a strategy.
    It is not available in `populate_buy()` and `populate_sell()` methods, nor in `populate_indicators()`, if this method is located in the hyperopt file.

- [`available_pairs`](#available_pairs) - Property with tuples listing cached pairs with their intervals (pair, interval).
- [`current_whitelist()`](#current_whitelist) - Returns a current list of whitelisted pairs. Useful for accessing dynamic whitelists (ie. VolumePairlist)
### Possible options for DataProvider

- [`available_pairs`](#available_pairs) - Property with tuples listing cached pairs with their timeframe (pair, timeframe).
- [`current_whitelist()`](#current_whitelist) - Returns a current list of whitelisted pairs. Useful for accessing dynamic whitelists (i.e. VolumePairlist)
- [`get_pair_dataframe(pair, timeframe)`](#get_pair_dataframepair-timeframe) - This is a universal method, which returns either historical data (for backtesting) or cached live data (for the Dry-Run and Live-Run modes).
- [`get_analyzed_dataframe(pair, timeframe)`](#get_analyzed_dataframepair-timeframe) - Returns the analyzed dataframe (after calling `populate_indicators()`, `populate_buy()`, `populate_sell()`) and the time of the latest analysis.
- `historic_ohlcv(pair, timeframe)` - Returns historical data stored on disk.
@@ -376,9 +382,9 @@ Please always check the mode of operation to select the correct method to get da
- [`ticker(pair)`](#tickerpair) - Returns current ticker data for the pair. See [ccxt documentation](https://github.com/ccxt/ccxt/wiki/Manual#price-tickers) for more details on the Ticker data structure.
- `runmode` - Property containing the current runmode.

#### Example Usages:
### Example Usages

#### *available_pairs*
### *available_pairs*

``` python
if self.dp:
@@ -386,7 +392,7 @@ if self.dp:
print(f"available {pair}, {timeframe}")
```

#### *current_whitelist()*
### *current_whitelist()*

Imagine you've developed a strategy that trades the `5m` timeframe using signals generated from a `1d` timeframe on the top 10 volume pairs by volume.

@@ -400,6 +406,82 @@ Since we can't resample our data we will have to use an informative pair; and si

This is where calling `self.dp.current_whitelist()` comes in handy.

```python
def informative_pairs(self):

# get access to all pairs available in whitelist.
pairs = self.dp.current_whitelist()
# Assign tf to each pair so they can be downloaded and cached for strategy.
informative_pairs = [(pair, '1d') for pair in pairs]
return informative_pairs
```

### *get_pair_dataframe(pair, timeframe)*

``` python
# fetch live / historical candle (OHLCV) data for the first informative pair
if self.dp:
inf_pair, inf_timeframe = self.informative_pairs()[0]
informative = self.dp.get_pair_dataframe(pair=inf_pair,
timeframe=inf_timeframe)
```

!!! Warning "Warning about backtesting"
    Be careful when using dataprovider in backtesting. `historic_ohlcv()` (and `get_pair_dataframe()`
    for the backtesting runmode) provides the full time-range in one go,
    so please be aware of it and make sure to not "look into the future" to avoid surprises when running in dry/live mode.

### *get_analyzed_dataframe(pair, timeframe)*

This method is used by freqtrade internally to determine the last signal.
It can also be used in specific callbacks to get the signal that caused the action (see [Advanced Strategy Documentation](strategy-advanced.md) for more details on available callbacks).

``` python
# fetch current dataframe
if self.dp:
dataframe, last_updated = self.dp.get_analyzed_dataframe(pair=metadata['pair'],
timeframe=self.timeframe)
```

!!! Note "No data available"
    Returns an empty dataframe if the requested pair was not cached.
    This should not happen when using whitelisted pairs.

### *orderbook(pair, maximum)*

``` python
if self.dp:
if self.dp.runmode.value in ('live', 'dry_run'):
ob = self.dp.orderbook(metadata['pair'], 1)
dataframe['best_bid'] = ob['bids'][0][0]
dataframe['best_ask'] = ob['asks'][0][0]
```

!!! Warning
    The order book is not part of the historic data which means backtesting and hyperopt will not work correctly if this method is used.

### *ticker(pair)*

``` python
if self.dp:
if self.dp.runmode.value in ('live', 'dry_run'):
ticker = self.dp.ticker(metadata['pair'])
dataframe['last_price'] = ticker['last']
dataframe['volume24h'] = ticker['quoteVolume']
dataframe['vwap'] = ticker['vwap']
```

!!! Warning
    Although the ticker data structure is a part of the ccxt Unified Interface, the values returned by this method can
    vary for different exchanges. For instance, many exchanges do not return `vwap` values, the FTX exchange
    does not always fill in the `last` field (so it can be None), etc. So you need to carefully verify the ticker
    data returned from the exchange and add appropriate error handling / defaults.

!!! Warning "Warning about backtesting"
    This method will always return up-to-date values - so usage during backtesting / hyperopt will lead to wrong results.

### Complete Data-provider sample

```python
class SampleStrategy(IStrategy):
# strategy init stuff...
@@ -414,13 +496,20 @@ class SampleStrategy(IStrategy):
pairs = self.dp.current_whitelist()
# Assign tf to each pair so they can be downloaded and cached for strategy.
informative_pairs = [(pair, '1d') for pair in pairs]
# Optionally Add additional "static" pairs
informative_pairs += [("ETH/USDT", "5m"),
("BTC/TUSD", "15m"),
]
return informative_pairs

def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
if not self.dp:
# Don't do anything if DataProvider is not available.
return dataframe

inf_tf = '1d'
# Get the informative pair
informative = self.dp.get_pair_dataframe(pair=metadata['pair'], timeframe='1d')
informative = self.dp.get_pair_dataframe(pair=metadata['pair'], timeframe=inf_tf)
# Get the 14 day rsi
informative['rsi'] = ta.RSI(informative, timeperiod=14)

@@ -435,6 +524,7 @@ class SampleStrategy(IStrategy):
# FFill to have the 1d value available in every row throughout the day.
# Without this, comparisons would only work once per day.
dataframe = dataframe.ffill()

# Calculate rsi of the original dataframe (5m timeframe)
dataframe['rsi'] = ta.RSI(dataframe, timeperiod=14)

@@ -455,77 +545,9 @@ class SampleStrategy(IStrategy):

```

#### *get_pair_dataframe(pair, timeframe)*

``` python
# fetch live / historical candle (OHLCV) data for the first informative pair
if self.dp:
inf_pair, inf_timeframe = self.informative_pairs()[0]
informative = self.dp.get_pair_dataframe(pair=inf_pair,
timeframe=inf_timeframe)
```

!!! Warning "Warning about backtesting"
    Be careful when using dataprovider in backtesting. `historic_ohlcv()` (and `get_pair_dataframe()`
    for the backtesting runmode) provides the full time-range in one go,
    so please be aware of it and make sure to not "look into the future" to avoid surprises when running in dry/live mode).

!!! Warning "Warning in hyperopt"
    This option cannot currently be used during hyperopt.

#### *get_analyzed_dataframe(pair, timeframe)*

This method is used by freqtrade internally to determine the last signal.
It can also be used in specific callbacks to get the signal that caused the action (see [Advanced Strategy Documentation](strategy-advanced.md) for more details on available callbacks).

``` python
# fetch current dataframe
if self.dp:
dataframe, last_updated = self.dp.get_analyzed_dataframe(pair=metadata['pair'],
timeframe=self.ticker_interval)
```

!!! Note "No data available"
    Returns an empty dataframe if the requested pair was not cached.
    This should not happen when using whitelisted pairs.

!!! Warning "Warning in hyperopt"
    This option cannot currently be used during hyperopt.

#### *orderbook(pair, maximum)*

``` python
if self.dp:
if self.dp.runmode.value in ('live', 'dry_run'):
ob = self.dp.orderbook(metadata['pair'], 1)
dataframe['best_bid'] = ob['bids'][0][0]
dataframe['best_ask'] = ob['asks'][0][0]
```

!!! Warning
    The order book is not part of the historic data which means backtesting and hyperopt will not work if this
    method is used.

#### *ticker(pair)*

``` python
if self.dp:
if self.dp.runmode.value in ('live', 'dry_run'):
ticker = self.dp.ticker(metadata['pair'])
dataframe['last_price'] = ticker['last']
dataframe['volume24h'] = ticker['quoteVolume']
dataframe['vwap'] = ticker['vwap']
```

!!! Warning
    Although the ticker data structure is a part of the ccxt Unified Interface, the values returned by this method can
    vary for different exchanges. For instance, many exchanges do not return `vwap` values, the FTX exchange
    does not always fills in the `last` field (so it can be None), etc. So you need to carefully verify the ticker
    data returned from the exchange and add appropriate error handling / defaults.

***

### Additional data (Wallets)
## Additional data (Wallets)

The strategy provides access to the `Wallets` object. This contains the current balances on the exchange.

@@ -541,7 +563,7 @@ if self.wallets:
total_eth = self.wallets.get_total('ETH')
```

#### Possible options for Wallets
### Possible options for Wallets

- `get_free(asset)` - currently available balance to trade
- `get_used(asset)` - currently tied up balance (open orders)
@@ -549,7 +571,7 @@ if self.wallets:

***

### Additional data (Trades)
## Additional data (Trades)

A history of Trades can be retrieved in the strategy by querying the database.

@@ -595,13 +617,13 @@ Sample return value: ETH/BTC had 5 trades, with a total profit of 1.5% (ratio of
!!! Warning
    Trade history is not available during backtesting or hyperopt.

### Prevent trades from happening for a specific pair
## Prevent trades from happening for a specific pair

Freqtrade locks pairs automatically for the current candle (until that candle is over) when a pair is sold, preventing an immediate re-buy of that pair.

Locked pairs will show the message `Pair <pair> is currently locked.`.

#### Locking pairs from within the strategy
### Locking pairs from within the strategy

Sometimes it may be desired to lock a pair after certain events happen (e.g. multiple losing trades in a row).

@@ -618,7 +640,7 @@ To verify if a pair is currently locked, use `self.is_pair_locked(pair)`.
!!! Warning
    Locking pairs does not work during backtesting.

##### Pair locking example
#### Pair locking example

``` python
from freqtrade.persistence import Trade
@@ -640,7 +662,7 @@ if self.config['runmode'].value in ('live', 'dry_run'):
self.lock_pair(metadata['pair'], until=datetime.now(timezone.utc) + timedelta(hours=12))
```

### Print created dataframe
## Print created dataframe

To inspect the created dataframe, you can issue a print-statement in either `populate_buy_trend()` or `populate_sell_trend()`.
You may also want to print the pair so it's clear what data is currently shown.
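
A minimal sketch of such a print statement (assuming the usual `populate_buy_trend()` signature inside your strategy class; the documentation's own full example sits outside this diff hunk):

```python
def populate_buy_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
    # ... buy signal logic ...
    print(f"Pair: {metadata['pair']}")
    print(dataframe.tail())   # only the last few candles, to keep the output readable
    return dataframe
```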

@@ -664,36 +686,7 @@ def populate_buy_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:

Printing more than a few rows is also possible (simply use `print(dataframe)` instead of `print(dataframe.tail())`), however not recommended, as that will be very verbose (~500 lines per pair every 5 seconds).

### Specify custom strategy location

If you want to use a strategy from a different directory you can pass `--strategy-path`

```bash
freqtrade trade --strategy AwesomeStrategy --strategy-path /some/directory
```

### Derived strategies

The strategies can be derived from other strategies. This avoids duplication of your custom strategy code. You can use this technique to override small parts of your main strategy, leaving the rest untouched:

``` python
class MyAwesomeStrategy(IStrategy):
...
stoploss = 0.13
trailing_stop = False
# All other attributes and methods are here as they
# should be in any custom strategy...
...

class MyAwesomeStrategy2(MyAwesomeStrategy):
# Override something
stoploss = 0.08
trailing_stop = True
```

Both attributes and methods may be overriden, altering behavior of the original strategy in a way you need.

### Common mistakes when developing strategies
## Common mistakes when developing strategies

Backtesting analyzes the whole time-range at once for performance reasons. Because of this, strategy authors need to make sure that strategies do not look-ahead into the future.
This is a common pain-point, which can cause huge differences between backtesting and dry/live run methods, since they all use data which is not available during dry/live runs, so these strategies will perform well during backtesting, but will fail / perform badly in real conditions.
@@ -705,7 +698,7 @@ The following lists some common patterns which should be avoided to prevent frus
- don't use `dataframe['volume'].mean()`. This uses the full DataFrame for backtesting, including data from the future. Use `dataframe['volume'].rolling(<window>).mean()` instead
- don't use `.resample('1h')`. This uses the left border of the interval, so moves data from an hour to the start of the hour. Use `.resample('1h', label='right')` instead.

### Further strategy ideas
## Further strategy ideas

To get additional Ideas for strategies, head over to our [strategy repository](https://github.com/freqtrade/freqtrade-strategies). Feel free to use them as they are - but results will depend on the current market situation, pairs used etc. - therefore please backtest the strategy for your exchange/desired pairs first, evaluate carefully, use at your own risk.
Feel free to use any of them as inspiration for your own strategies.

@@ -85,10 +85,44 @@ Analyze a trades dataframe (also used below for plotting)

```python
from freqtrade.data.btanalysis import load_backtest_data
from freqtrade.data.btanalysis import load_backtest_data, load_backtest_stats

# Load backtest results
trades = load_backtest_data(config["user_data_dir"] / "backtest_results/backtest-result.json")
# if backtest_dir points to a directory, it'll automatically load the last backtest file.
backtest_dir = config["user_data_dir"] / "backtest_results"
# backtest_dir can also point to a specific file
# backtest_dir = config["user_data_dir"] / "backtest_results/backtest-result-2020-07-01_20-04-22.json"
```

```python
# You can get the full backtest statistics by using the following command.
# This contains all information used to generate the backtest result.
stats = load_backtest_stats(backtest_dir)

strategy = 'SampleStrategy'
# All statistics are available per strategy, so if `--strategy-list` was used during backtest, this will be reflected here as well.
# Example usages:
print(stats['strategy'][strategy]['results_per_pair'])
# Get pairlist used for this backtest
print(stats['strategy'][strategy]['pairlist'])
# Get market change (average change of all pairs from start to end of the backtest period)
print(stats['strategy'][strategy]['market_change'])
# Maximum drawdown
print(stats['strategy'][strategy]['max_drawdown'])
# Maximum drawdown start and end
print(stats['strategy'][strategy]['drawdown_start'])
print(stats['strategy'][strategy]['drawdown_end'])

# Get strategy comparison (only relevant if multiple strategies were compared)
print(stats['strategy_comparison'])

```

```python
# Load backtested trades as dataframe
trades = load_backtest_data(backtest_dir)

# Show value-counts per pair
trades.groupby("pair")["sell_reason"].value_counts()

@@ -366,7 +366,7 @@ class Arguments:
plot_profit_cmd = subparsers.add_parser(
'plot-profit',
help='Generate plot showing profits.',
parents=[_common_parser],
parents=[_common_parser, _strategy_parser],
)
plot_profit_cmd.set_defaults(func=start_plot_profit)
self._build_args(optionlist=ARGS_PLOT_PROFIT, parser=plot_profit_cmd)

@@ -14,7 +14,7 @@ from freqtrade.configuration import setup_utils_configuration
from freqtrade.constants import USERPATH_HYPEROPTS, USERPATH_STRATEGIES
from freqtrade.exceptions import OperationalException
from freqtrade.exchange import (available_exchanges, ccxt_exchanges,
market_is_active, symbol_is_pair)
market_is_active)
from freqtrade.misc import plural
from freqtrade.resolvers import ExchangeResolver, StrategyResolver
from freqtrade.state import RunMode
@@ -163,7 +163,7 @@ def start_list_markets(args: Dict[str, Any], pairs_only: bool = False) -> None:
tabular_data.append({'Id': v['id'], 'Symbol': v['symbol'],
'Base': v['base'], 'Quote': v['quote'],
'Active': market_is_active(v),
**({'Is pair': symbol_is_pair(v['symbol'])}
**({'Is pair': exchange.market_is_tradable(v)}
if not pairs_only else {})})

if (args.get('print_one_column', False) or

@@ -199,7 +199,7 @@ class Configuration:
config['exportfilename'] = Path(config['exportfilename'])
else:
config['exportfilename'] = (config['user_data_dir']
/ 'backtest_results/backtest-result.json')
/ 'backtest_results')

def _process_optimize_options(self, config: Dict[str, Any]) -> None:

@ -26,12 +26,15 @@ AVAILABLE_PAIRLISTS = ['StaticPairList', 'VolumePairList',
|
||||
'ShuffleFilter', 'SpreadFilter']
|
||||
AVAILABLE_DATAHANDLERS = ['json', 'jsongz']
|
||||
DRY_RUN_WALLET = 1000
|
||||
DATETIME_PRINT_FORMAT = '%Y-%m-%d %H:%M:%S'
|
||||
MATH_CLOSE_PREC = 1e-14 # Precision used for float comparisons
|
||||
DEFAULT_DATAFRAME_COLUMNS = ['date', 'open', 'high', 'low', 'close', 'volume']
|
||||
# Don't modify sequence of DEFAULT_TRADES_COLUMNS
|
||||
# it has wide consequences for stored trades files
|
||||
DEFAULT_TRADES_COLUMNS = ['timestamp', 'id', 'type', 'side', 'price', 'amount', 'cost']
|
||||
|
||||
LAST_BT_RESULT_FN = '.last_result.json'
|
||||
|
||||
USERPATH_HYPEROPTS = 'hyperopts'
|
||||
USERPATH_STRATEGIES = 'strategies'
|
||||
USERPATH_NOTEBOOKS = 'notebooks'
|
||||
|
@ -3,52 +3,123 @@ Helpers when analyzing backtest data
|
||||
"""
|
||||
import logging
|
||||
from pathlib import Path
|
||||
from typing import Dict, Union, Tuple
|
||||
from typing import Dict, Union, Tuple, Any, Optional
|
||||
|
||||
import numpy as np
|
||||
import pandas as pd
|
||||
from datetime import timezone
|
||||
|
||||
from freqtrade import persistence
|
||||
from freqtrade.constants import LAST_BT_RESULT_FN
|
||||
from freqtrade.misc import json_load
|
||||
from freqtrade.persistence import Trade
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# must align with columns in backtest.py
|
||||
BT_DATA_COLUMNS = ["pair", "profit_percent", "open_time", "close_time", "index", "duration",
|
||||
BT_DATA_COLUMNS = ["pair", "profit_percent", "open_date", "close_date", "index", "trade_duration",
|
||||
"open_rate", "close_rate", "open_at_end", "sell_reason"]
|
||||
|
||||
|
||||
def load_backtest_data(filename: Union[Path, str]) -> pd.DataFrame:
|
||||
def get_latest_backtest_filename(directory: Union[Path, str]) -> str:
|
||||
"""
|
||||
Load backtest data file.
|
||||
:param filename: pathlib.Path object, or string pointing to the file.
|
||||
:return: a dataframe with the analysis results
|
||||
Get latest backtest export based on '.last_result.json'.
|
||||
:param directory: Directory to search for last result
|
||||
:return: string containing the filename of the latest backtest result
|
||||
:raises: ValueError in the following cases:
|
||||
* Directory does not exist
|
||||
* `directory/.last_result.json` does not exist
|
||||
* `directory/.last_result.json` has the wrong content
|
||||
"""
|
||||
if isinstance(filename, str):
|
||||
filename = Path(filename)
|
||||
if isinstance(directory, str):
|
||||
directory = Path(directory)
|
||||
if not directory.is_dir():
|
||||
raise ValueError(f"Directory '{directory}' does not exist.")
|
||||
filename = directory / LAST_BT_RESULT_FN
|
||||
|
||||
if not filename.is_file():
|
||||
raise ValueError(f"File {filename} does not exist.")
|
||||
raise ValueError(
|
||||
f"Directory '{directory}' does not seem to contain backtest statistics yet.")
|
||||
|
||||
with filename.open() as file:
|
||||
data = json_load(file)
|
||||
|
||||
if 'latest_backtest' not in data:
|
||||
raise ValueError(f"Invalid '{LAST_BT_RESULT_FN}' format.")
|
||||
|
||||
return data['latest_backtest']
|
||||
|
||||
|
||||
def load_backtest_stats(filename: Union[Path, str]) -> Dict[str, Any]:
|
||||
"""
|
||||
Load backtest statistics file.
|
||||
:param filename: pathlib.Path object, or string pointing to the file.
|
||||
:return: a dictionary containing the resulting file.
|
||||
"""
|
||||
if isinstance(filename, str):
|
||||
filename = Path(filename)
|
||||
if filename.is_dir():
|
||||
filename = filename / get_latest_backtest_filename(filename)
|
||||
if not filename.is_file():
|
||||
raise ValueError(f"File {filename} does not exist.")
|
||||
logger.info(f"Loading backtest result from {filename}")
|
||||
with filename.open() as file:
|
||||
data = json_load(file)
|
||||
|
||||
return data
|
||||
|
||||
|
||||
def load_backtest_data(filename: Union[Path, str], strategy: Optional[str] = None) -> pd.DataFrame:
|
||||
"""
|
||||
Load backtest data file.
|
||||
:param filename: pathlib.Path object, or string pointing to a file or directory
|
||||
:param strategy: Strategy to load - mainly relevant for multi-strategy backtests
|
||||
Can also serve as protection to load the correct result.
|
||||
:return: a dataframe with the analysis results
|
||||
:raise: ValueError if loading goes wrong.
|
||||
"""
|
||||
data = load_backtest_stats(filename)
|
||||
if not isinstance(data, list):
|
||||
# new, nested format
|
||||
if 'strategy' not in data:
|
||||
raise ValueError("Unknown dataformat.")
|
||||
|
||||
if not strategy:
|
||||
if len(data['strategy']) == 1:
|
||||
strategy = list(data['strategy'].keys())[0]
|
||||
else:
|
||||
raise ValueError("Detected backtest result with more than one strategy. "
|
||||
"Please specify a strategy.")
|
||||
|
||||
if strategy not in data['strategy']:
|
||||
raise ValueError(f"Strategy {strategy} not available in the backtest result.")
|
||||
|
||||
data = data['strategy'][strategy]['trades']
|
||||
df = pd.DataFrame(data)
|
||||
df['open_date'] = pd.to_datetime(df['open_date'],
|
||||
utc=True,
|
||||
infer_datetime_format=True
|
||||
)
|
||||
df['close_date'] = pd.to_datetime(df['close_date'],
|
||||
utc=True,
|
||||
infer_datetime_format=True
|
||||
)
|
||||
else:
|
||||
# old format - only with lists.
|
||||
df = pd.DataFrame(data, columns=BT_DATA_COLUMNS)
|
||||
|
||||
df['open_time'] = pd.to_datetime(df['open_time'],
|
||||
df['open_date'] = pd.to_datetime(df['open_date'],
|
||||
unit='s',
|
||||
utc=True,
|
||||
infer_datetime_format=True
|
||||
)
|
||||
df['close_time'] = pd.to_datetime(df['close_time'],
|
||||
df['close_date'] = pd.to_datetime(df['close_date'],
|
||||
unit='s',
|
||||
utc=True,
|
||||
infer_datetime_format=True
|
||||
)
|
||||
df['profit'] = df['close_rate'] - df['open_rate']
|
||||
df = df.sort_values("open_time").reset_index(drop=True)
|
||||
df['profit_abs'] = df['close_rate'] - df['open_rate']
|
||||
df = df.sort_values("open_date").reset_index(drop=True)
|
||||
return df
|
||||
|
||||
|
||||
@ -62,9 +133,9 @@ def analyze_trade_parallelism(results: pd.DataFrame, timeframe: str) -> pd.DataF
|
||||
"""
|
||||
from freqtrade.exchange import timeframe_to_minutes
|
||||
timeframe_min = timeframe_to_minutes(timeframe)
|
||||
dates = [pd.Series(pd.date_range(row[1].open_time, row[1].close_time,
|
||||
dates = [pd.Series(pd.date_range(row[1]['open_date'], row[1]['close_date'],
|
||||
freq=f"{timeframe_min}min"))
|
||||
for row in results[['open_time', 'close_time']].iterrows()]
|
||||
for row in results[['open_date', 'close_date']].iterrows()]
|
||||
deltas = [len(x) for x in dates]
|
||||
dates = pd.Series(pd.concat(dates).values, name='date')
|
||||
df2 = pd.DataFrame(np.repeat(results.values, deltas, axis=0), columns=results.columns)
|
||||
@ -90,21 +161,26 @@ def evaluate_result_multi(results: pd.DataFrame, timeframe: str,
|
||||
return df_final[df_final['open_trades'] > max_open_trades]
|
||||
|
||||
|
||||
def load_trades_from_db(db_url: str) -> pd.DataFrame:
|
||||
def load_trades_from_db(db_url: str, strategy: Optional[str] = None) -> pd.DataFrame:
|
||||
"""
|
||||
Load trades from a DB (using dburl)
|
||||
:param db_url: Sqlite url (default format sqlite:///tradesv3.dry-run.sqlite)
|
||||
:param strategy: Strategy to load - mainly relevant for multi-strategy backtests
|
||||
Can also serve as protection to load the correct result.
|
||||
:return: Dataframe containing Trades
|
||||
"""
|
||||
trades: pd.DataFrame = pd.DataFrame([], columns=BT_DATA_COLUMNS)
|
||||
persistence.init(db_url, clean_open_orders=False)
|
||||
|
||||
columns = ["pair", "open_time", "close_time", "profit", "profit_percent",
|
||||
"open_rate", "close_rate", "amount", "duration", "sell_reason",
|
||||
columns = ["pair", "open_date", "close_date", "profit", "profit_percent",
|
||||
"open_rate", "close_rate", "amount", "trade_duration", "sell_reason",
|
||||
"fee_open", "fee_close", "open_rate_requested", "close_rate_requested",
|
||||
"stake_amount", "max_rate", "min_rate", "id", "exchange",
|
||||
"stop_loss", "initial_stop_loss", "strategy", "timeframe"]
|
||||
|
||||
filters = []
|
||||
if strategy:
|
||||
filters.append(Trade.strategy == strategy)
|
||||
|
||||
trades = pd.DataFrame([(t.pair,
|
||||
t.open_date.replace(tzinfo=timezone.utc),
|
||||
t.close_date.replace(tzinfo=timezone.utc) if t.close_date else None,
|
||||
@ -123,14 +199,14 @@ def load_trades_from_db(db_url: str) -> pd.DataFrame:
|
||||
t.stop_loss, t.initial_stop_loss,
|
||||
t.strategy, t.timeframe
|
||||
)
|
||||
for t in Trade.get_trades().all()],
|
||||
for t in Trade.get_trades(filters).all()],
|
||||
columns=columns)
|
||||
|
||||
return trades
|
||||
|
||||
|
||||
def load_trades(source: str, db_url: str, exportfilename: Path,
|
||||
no_trades: bool = False) -> pd.DataFrame:
|
||||
no_trades: bool = False, strategy: Optional[str] = None) -> pd.DataFrame:
|
||||
"""
|
||||
Based on configuration option "trade_source":
|
||||
* loads data from DB (using `db_url`)
|
||||
@ -148,7 +224,7 @@ def load_trades(source: str, db_url: str, exportfilename: Path,
|
||||
if source == "DB":
|
||||
return load_trades_from_db(db_url)
|
||||
elif source == "file":
|
||||
return load_backtest_data(exportfilename)
|
||||
return load_backtest_data(exportfilename, strategy)
|
||||
|
||||
|
||||
def extract_trades_of_period(dataframe: pd.DataFrame, trades: pd.DataFrame,
|
||||
@ -163,11 +239,31 @@ def extract_trades_of_period(dataframe: pd.DataFrame, trades: pd.DataFrame,
|
||||
else:
|
||||
trades_start = dataframe.iloc[0]['date']
|
||||
trades_stop = dataframe.iloc[-1]['date']
|
||||
trades = trades.loc[(trades['open_time'] >= trades_start) &
|
||||
(trades['close_time'] <= trades_stop)]
|
||||
trades = trades.loc[(trades['open_date'] >= trades_start) &
|
||||
(trades['close_date'] <= trades_stop)]
|
||||
return trades
|
||||
|
||||
|
||||
def calculate_market_change(data: Dict[str, pd.DataFrame], column: str = "close") -> float:
|
||||
"""
|
||||
Calculate market change based on "column".
|
||||
Calculation is done by taking the first non-null and the last non-null element of each column
|
||||
and calculating the pctchange as "(last - first) / first".
|
||||
Then the results per pair are combined as mean.
|
||||
|
||||
:param data: Dict of Dataframes, dict key should be pair.
|
||||
:param column: Column in the original dataframes to use
|
||||
:return: Mean relative change across all pairs, as a float (e.g. 0.01 for +1%)
|
||||
"""
|
||||
tmp_means = []
|
||||
for pair, df in data.items():
|
||||
start = df[column].dropna().iloc[0]
|
||||
end = df[column].dropna().iloc[-1]
|
||||
tmp_means.append((end - start) / start)
|
||||
|
||||
return np.mean(tmp_means)
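# Rough worked example with hypothetical numbers: if one pair's close moves
# 100 -> 110 (+10%) and another's moves 50 -> 45 (-10%), the per-pair changes
# are 0.10 and -0.10, so the combined market change is their mean, 0.0.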
|
||||
|
||||
|
||||
def combine_dataframes_with_mean(data: Dict[str, pd.DataFrame],
|
||||
column: str = "close") -> pd.DataFrame:
|
||||
"""
|
||||
@ -190,7 +286,7 @@ def create_cum_profit(df: pd.DataFrame, trades: pd.DataFrame, col_name: str,
|
||||
"""
|
||||
Adds a column `col_name` with the cumulative profit for the given trades array.
|
||||
:param df: DataFrame with date index
|
||||
:param trades: DataFrame containing trades (requires columns close_time and profit_percent)
|
||||
:param trades: DataFrame containing trades (requires columns close_date and profit_percent)
|
||||
:param col_name: Column name that will be assigned the results
|
||||
:param timeframe: Timeframe used during the operations
|
||||
:return: Returns df with one additional column, col_name, containing the cumulative profit.
|
||||
@ -201,7 +297,7 @@ def create_cum_profit(df: pd.DataFrame, trades: pd.DataFrame, col_name: str,
|
||||
from freqtrade.exchange import timeframe_to_minutes
|
||||
timeframe_minutes = timeframe_to_minutes(timeframe)
|
||||
# Resample to timeframe to make sure trades match candles
|
||||
_trades_sum = trades.resample(f'{timeframe_minutes}min', on='close_time'
|
||||
_trades_sum = trades.resample(f'{timeframe_minutes}min', on='close_date'
|
||||
)[['profit_percent']].sum()
|
||||
df.loc[:, col_name] = _trades_sum.cumsum()
|
||||
# Set first value to 0
|
||||
@ -211,13 +307,13 @@ def create_cum_profit(df: pd.DataFrame, trades: pd.DataFrame, col_name: str,
|
||||
return df
|
||||
|
||||
|
||||
def calculate_max_drawdown(trades: pd.DataFrame, *, date_col: str = 'close_time',
|
||||
def calculate_max_drawdown(trades: pd.DataFrame, *, date_col: str = 'close_date',
|
||||
value_col: str = 'profit_percent'
|
||||
) -> Tuple[float, pd.Timestamp, pd.Timestamp]:
|
||||
"""
|
||||
Calculate max drawdown and the corresponding close dates
|
||||
:param trades: DataFrame containing trades (requires columns close_time and profit_percent)
|
||||
:param date_col: Column in DataFrame to use for dates (defaults to 'close_time')
|
||||
:param trades: DataFrame containing trades (requires columns close_date and profit_percent)
|
||||
:param date_col: Column in DataFrame to use for dates (defaults to 'close_date')
|
||||
:param value_col: Column in DataFrame to use for values (defaults to 'profit_percent')
|
||||
:return: Tuple (float, highdate, lowdate) with absolute max drawdown, high and low time
|
||||
:raise: ValueError if trade-dataframe was found empty.
|
||||
|
@ -9,7 +9,7 @@ import utils_find_1st as utf1st
|
||||
from pandas import DataFrame
|
||||
|
||||
from freqtrade.configuration import TimeRange
|
||||
from freqtrade.constants import UNLIMITED_STAKE_AMOUNT
|
||||
from freqtrade.constants import UNLIMITED_STAKE_AMOUNT, DATETIME_PRINT_FORMAT
|
||||
from freqtrade.exceptions import OperationalException
|
||||
from freqtrade.data.history import get_timerange, load_data, refresh_data
|
||||
from freqtrade.strategy.interface import SellType
|
||||
@ -121,12 +121,9 @@ class Edge:
|
||||
|
||||
# Print timeframe
|
||||
min_date, max_date = get_timerange(preprocessed)
|
||||
logger.info(
|
||||
'Measuring data from %s up to %s (%s days) ...',
|
||||
min_date.isoformat(),
|
||||
max_date.isoformat(),
|
||||
(max_date - min_date).days
|
||||
)
|
||||
logger.info(f'Measuring data from {min_date.strftime(DATETIME_PRINT_FORMAT)} '
|
||||
f'up to {max_date.strftime(DATETIME_PRINT_FORMAT)} '
|
||||
f'({(max_date - min_date).days} days)..')
|
||||
headers = ['date', 'buy', 'open', 'close', 'sell', 'high', 'low']
|
||||
|
||||
trades: list = []
|
||||
@ -240,7 +237,7 @@ class Edge:
|
||||
# All returned values are relative, they are defined as ratios.
|
||||
stake = 0.015
|
||||
|
||||
result['trade_duration'] = result['close_time'] - result['open_time']
|
||||
result['trade_duration'] = result['close_date'] - result['open_date']
|
||||
|
||||
result['trade_duration'] = result['trade_duration'].map(
|
||||
lambda x: int(x.total_seconds() / 60))
|
||||
@ -430,10 +427,8 @@ class Edge:
|
||||
'stoploss': stoploss,
|
||||
'profit_ratio': '',
|
||||
'profit_abs': '',
|
||||
'open_time': date_column[open_trade_index],
|
||||
'close_time': date_column[exit_index],
|
||||
'open_index': start_point + open_trade_index,
|
||||
'close_index': start_point + exit_index,
|
||||
'open_date': date_column[open_trade_index],
|
||||
'close_date': date_column[exit_index],
|
||||
'trade_duration': '',
|
||||
'open_rate': round(open_price, 15),
|
||||
'close_rate': round(exit_price, 15),
|
||||
|
@ -12,8 +12,7 @@ from freqtrade.exchange.exchange import (timeframe_to_seconds,
|
||||
timeframe_to_msecs,
|
||||
timeframe_to_next_date,
|
||||
timeframe_to_prev_date)
|
||||
from freqtrade.exchange.exchange import (market_is_active,
|
||||
symbol_is_pair)
|
||||
from freqtrade.exchange.exchange import (market_is_active)
|
||||
from freqtrade.exchange.kraken import Kraken
|
||||
from freqtrade.exchange.binance import Binance
|
||||
from freqtrade.exchange.bibox import Bibox
|
||||
|
@ -223,7 +223,7 @@ class Exchange:
|
||||
if quote_currencies:
|
||||
markets = {k: v for k, v in markets.items() if v['quote'] in quote_currencies}
|
||||
if pairs_only:
|
||||
markets = {k: v for k, v in markets.items() if symbol_is_pair(v['symbol'])}
|
||||
markets = {k: v for k, v in markets.items() if self.market_is_tradable(v)}
|
||||
if active_only:
|
||||
markets = {k: v for k, v in markets.items() if market_is_active(v)}
|
||||
return markets
|
||||
@ -247,6 +247,19 @@ class Exchange:
|
||||
"""
|
||||
return self.markets.get(pair, {}).get('base', '')
|
||||
|
||||
def market_is_tradable(self, market: Dict[str, Any]) -> bool:
|
||||
"""
|
||||
Check if the market symbol is tradable by Freqtrade.
|
||||
By default, checks if it's splittable by `/` and both sides correspond to base / quote
|
||||
"""
|
||||
symbol_parts = market['symbol'].split('/')
|
||||
return (len(symbol_parts) == 2 and
|
||||
len(symbol_parts[0]) > 0 and
|
||||
len(symbol_parts[1]) > 0 and
|
||||
symbol_parts[0] == market.get('base') and
|
||||
symbol_parts[1] == market.get('quote')
|
||||
)
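# Illustrative example (hypothetical market dict): {'symbol': 'ETH/BTC',
# 'base': 'ETH', 'quote': 'BTC'} passes this check, while a symbol that is
# not of the form '<base>/<quote>' (or whose parts do not match the market's
# base/quote fields) does not.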
|
||||
|
||||
def klines(self, pair_interval: Tuple[str, str], copy: bool = True) -> DataFrame:
|
||||
if pair_interval in self._klines:
|
||||
return self._klines[pair_interval].copy() if copy else self._klines[pair_interval]
|
||||
@ -1271,20 +1284,6 @@ def timeframe_to_next_date(timeframe: str, date: datetime = None) -> datetime:
|
||||
return datetime.fromtimestamp(new_timestamp, tz=timezone.utc)
|
||||
|
||||
|
||||
def symbol_is_pair(market_symbol: str, base_currency: str = None,
|
||||
quote_currency: str = None) -> bool:
|
||||
"""
|
||||
Check if the market symbol is a pair, i.e. that its symbol consists of the base currency and the
|
||||
quote currency separated by '/' character. If base_currency and/or quote_currency is passed,
|
||||
it also checks that the symbol contains appropriate base and/or quote currency part before
|
||||
and after the separating character correspondingly.
|
||||
"""
|
||||
symbol_parts = market_symbol.split('/')
|
||||
return (len(symbol_parts) == 2 and
|
||||
(symbol_parts[0] == base_currency if base_currency else len(symbol_parts[0]) > 0) and
|
||||
(symbol_parts[1] == quote_currency if quote_currency else len(symbol_parts[1]) > 0))
|
||||
|
||||
|
||||
def market_is_active(market: Dict) -> bool:
|
||||
"""
|
||||
Return True if the market is active.
|
||||
|
@ -1,6 +1,6 @@
|
||||
""" FTX exchange subclass """
|
||||
import logging
|
||||
from typing import Dict
|
||||
from typing import Any, Dict
|
||||
|
||||
import ccxt
|
||||
|
||||
@ -20,6 +20,16 @@ class Ftx(Exchange):
|
||||
"ohlcv_candle_limit": 1500,
|
||||
}
|
||||
|
||||
def market_is_tradable(self, market: Dict[str, Any]) -> bool:
|
||||
"""
|
||||
Check if the market symbol is tradable by Freqtrade.
|
||||
Default checks + check if pair is spot pair (no futures trading yet).
|
||||
"""
|
||||
parent_check = super().market_is_tradable(market)
|
||||
|
||||
return (parent_check and
|
||||
market.get('spot', False) is True)
|
||||
|
||||
def stoploss_adjust(self, stop_loss: float, order: Dict) -> bool:
|
||||
"""
|
||||
Verify stop_loss against stoploss-order value (limit or price)
|
||||
|
@ -1,6 +1,6 @@
|
||||
""" Kraken exchange subclass """
|
||||
import logging
|
||||
from typing import Dict
|
||||
from typing import Any, Dict
|
||||
|
||||
import ccxt
|
||||
|
||||
@ -22,6 +22,16 @@ class Kraken(Exchange):
|
||||
"trades_pagination_arg": "since",
|
||||
}
|
||||
|
||||
def market_is_tradable(self, market: Dict[str, Any]) -> bool:
|
||||
"""
|
||||
Check if the market symbol is tradable by Freqtrade.
|
||||
Default checks + check if pair is darkpool pair.
|
||||
"""
|
||||
parent_check = super().market_is_tradable(market)
|
||||
|
||||
return (parent_check and
|
||||
market.get('darkpool', False) is False)
|
||||
|
||||
@retrier
|
||||
def get_balances(self) -> dict:
|
||||
if self._config['dry_run']:
|
||||
|
@ -13,6 +13,7 @@ from pandas import DataFrame
|
||||
|
||||
from freqtrade.configuration import (TimeRange, remove_credentials,
|
||||
validate_config_consistency)
|
||||
from freqtrade.constants import DATETIME_PRINT_FORMAT
|
||||
from freqtrade.data import history
|
||||
from freqtrade.data.converter import trim_dataframe
|
||||
from freqtrade.data.dataprovider import DataProvider
|
||||
@ -20,11 +21,10 @@ from freqtrade.exceptions import OperationalException
|
||||
from freqtrade.exchange import timeframe_to_minutes, timeframe_to_seconds
|
||||
from freqtrade.optimize.optimize_reports import (generate_backtest_stats,
|
||||
show_backtest_results,
|
||||
store_backtest_result)
|
||||
store_backtest_stats)
|
||||
from freqtrade.pairlist.pairlistmanager import PairListManager
|
||||
from freqtrade.persistence import Trade
|
||||
from freqtrade.resolvers import ExchangeResolver, StrategyResolver
|
||||
from freqtrade.state import RunMode
|
||||
from freqtrade.strategy.interface import IStrategy, SellCheckTuple, SellType
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@ -37,14 +37,15 @@ class BacktestResult(NamedTuple):
|
||||
pair: str
|
||||
profit_percent: float
|
||||
profit_abs: float
|
||||
open_time: datetime
|
||||
close_time: datetime
|
||||
open_index: int
|
||||
close_index: int
|
||||
open_date: datetime
|
||||
open_rate: float
|
||||
open_fee: float
|
||||
close_date: datetime
|
||||
close_rate: float
|
||||
close_fee: float
|
||||
amount: float
|
||||
trade_duration: float
|
||||
open_at_end: bool
|
||||
open_rate: float
|
||||
close_rate: float
|
||||
sell_reason: SellType
|
||||
|
||||
|
||||
@ -65,9 +66,8 @@ class Backtesting:
|
||||
self.strategylist: List[IStrategy] = []
|
||||
self.exchange = ExchangeResolver.load_exchange(self.config['exchange']['name'], self.config)
|
||||
|
||||
if self.config.get('runmode') != RunMode.HYPEROPT:
|
||||
self.dataprovider = DataProvider(self.config, self.exchange)
|
||||
IStrategy.dp = self.dataprovider
|
||||
dataprovider = DataProvider(self.config, self.exchange)
|
||||
IStrategy.dp = dataprovider
|
||||
|
||||
if self.config.get('strategy_list', None):
|
||||
for strat in list(self.config['strategy_list']):
|
||||
@ -137,10 +137,10 @@ class Backtesting:
|
||||
|
||||
min_date, max_date = history.get_timerange(data)
|
||||
|
||||
logger.info(
|
||||
'Loading data from %s up to %s (%s days)..',
|
||||
min_date.isoformat(), max_date.isoformat(), (max_date - min_date).days
|
||||
)
|
||||
logger.info(f'Loading data from {min_date.strftime(DATETIME_PRINT_FORMAT)} '
|
||||
f'up to {max_date.strftime(DATETIME_PRINT_FORMAT)} '
|
||||
f'({(max_date - min_date).days} days)..')
|
||||
|
||||
# Adjust startts forward if not enough data is available
|
||||
timerange.adjust_start_if_necessary(timeframe_to_seconds(self.timeframe),
|
||||
self.required_startup, min_date)
|
||||
@ -225,7 +225,7 @@ class Backtesting:
|
||||
open_rate=buy_row.open,
|
||||
open_date=buy_row.date,
|
||||
stake_amount=stake_amount,
|
||||
amount=stake_amount / buy_row.open,
|
||||
amount=round(stake_amount / buy_row.open, 8),
|
||||
fee_open=self.fee,
|
||||
fee_close=self.fee,
|
||||
is_open=True,
|
||||
@ -246,14 +246,15 @@ class Backtesting:
|
||||
return BacktestResult(pair=pair,
|
||||
profit_percent=trade.calc_profit_ratio(rate=closerate),
|
||||
profit_abs=trade.calc_profit(rate=closerate),
|
||||
open_time=buy_row.date,
|
||||
close_time=sell_row.date,
|
||||
trade_duration=trade_dur,
|
||||
open_index=buy_row.Index,
|
||||
close_index=sell_row.Index,
|
||||
open_at_end=False,
|
||||
open_date=buy_row.date,
|
||||
open_rate=buy_row.open,
|
||||
open_fee=self.fee,
|
||||
close_date=sell_row.date,
|
||||
close_rate=closerate,
|
||||
close_fee=self.fee,
|
||||
amount=trade.amount,
|
||||
trade_duration=trade_dur,
|
||||
open_at_end=False,
|
||||
sell_reason=sell.sell_type
|
||||
)
|
||||
if partial_ohlcv:
|
||||
@ -262,15 +263,16 @@ class Backtesting:
|
||||
bt_res = BacktestResult(pair=pair,
|
||||
profit_percent=trade.calc_profit_ratio(rate=sell_row.open),
|
||||
profit_abs=trade.calc_profit(rate=sell_row.open),
|
||||
open_time=buy_row.date,
|
||||
close_time=sell_row.date,
|
||||
open_date=buy_row.date,
|
||||
open_rate=buy_row.open,
|
||||
open_fee=self.fee,
|
||||
close_date=sell_row.date,
|
||||
close_rate=sell_row.open,
|
||||
close_fee=self.fee,
|
||||
amount=trade.amount,
|
||||
trade_duration=int((
|
||||
sell_row.date - buy_row.date).total_seconds() // 60),
|
||||
open_index=buy_row.Index,
|
||||
close_index=sell_row.Index,
|
||||
open_at_end=True,
|
||||
open_rate=buy_row.open,
|
||||
close_rate=sell_row.open,
|
||||
sell_reason=SellType.FORCE_SELL
|
||||
)
|
||||
logger.debug(f"{pair} - Force selling still open trade, "
|
||||
@ -356,8 +358,8 @@ class Backtesting:
|
||||
|
||||
if trade_entry:
|
||||
logger.debug(f"{pair} - Locking pair till "
|
||||
f"close_time={trade_entry.close_time}")
|
||||
lock_pair_until[pair] = trade_entry.close_time
|
||||
f"close_date={trade_entry.close_date}")
|
||||
lock_pair_until[pair] = trade_entry.close_date
|
||||
trades.append(trade_entry)
|
||||
else:
|
||||
# Set lock_pair_until to end of testing period if trade could not be closed
|
||||
@ -400,10 +402,9 @@ class Backtesting:
|
||||
preprocessed[pair] = trim_dataframe(df, timerange)
|
||||
min_date, max_date = history.get_timerange(preprocessed)
|
||||
|
||||
logger.info(
|
||||
'Backtesting with data from %s up to %s (%s days)..',
|
||||
min_date.isoformat(), max_date.isoformat(), (max_date - min_date).days
|
||||
)
|
||||
logger.info(f'Backtesting with data from {min_date.strftime(DATETIME_PRINT_FORMAT)} '
|
||||
f'up to {max_date.strftime(DATETIME_PRINT_FORMAT)} '
|
||||
f'({(max_date - min_date).days} days)..')
|
||||
# Execute backtest and print results
|
||||
all_results[self.strategy.get_strategy_name()] = self.backtest(
|
||||
processed=preprocessed,
|
||||
@ -414,8 +415,10 @@ class Backtesting:
|
||||
position_stacking=position_stacking,
|
||||
)
|
||||
|
||||
stats = generate_backtest_stats(self.config, data, all_results,
|
||||
min_date=min_date, max_date=max_date)
|
||||
if self.config.get('export', False):
|
||||
store_backtest_result(self.config['exportfilename'], all_results)
|
||||
store_backtest_stats(self.config['exportfilename'], stats)
|
||||
|
||||
# Show backtest results
|
||||
stats = generate_backtest_stats(self.config, data, all_results)
|
||||
show_backtest_results(self.config, stats)
|
||||
|
@ -4,27 +4,28 @@
|
||||
This module contains the hyperopt logic
|
||||
"""
|
||||
|
||||
import io
|
||||
import locale
|
||||
import logging
|
||||
import random
|
||||
import warnings
|
||||
from math import ceil
|
||||
from collections import OrderedDict
|
||||
from math import ceil
|
||||
from operator import itemgetter
|
||||
from pathlib import Path
|
||||
from pprint import pformat
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
import progressbar
|
||||
import rapidjson
|
||||
import tabulate
|
||||
from colorama import Fore, Style
|
||||
from colorama import init as colorama_init
|
||||
from joblib import (Parallel, cpu_count, delayed, dump, load,
|
||||
wrap_non_picklable_objects)
|
||||
from pandas import DataFrame, json_normalize, isna
|
||||
import progressbar
|
||||
import tabulate
|
||||
from os import path
|
||||
import io
|
||||
from pandas import DataFrame, isna, json_normalize
|
||||
|
||||
from freqtrade.constants import DATETIME_PRINT_FORMAT
|
||||
from freqtrade.data.converter import trim_dataframe
|
||||
from freqtrade.data.history import get_timerange
|
||||
from freqtrade.exceptions import OperationalException
|
||||
@ -32,9 +33,11 @@ from freqtrade.misc import plural, round_dict
|
||||
from freqtrade.optimize.backtesting import Backtesting
|
||||
# Import IHyperOpt and IHyperOptLoss to allow unpickling classes from these modules
|
||||
from freqtrade.optimize.hyperopt_interface import IHyperOpt # noqa: F401
|
||||
from freqtrade.optimize.hyperopt_loss_interface import IHyperOptLoss # noqa: F401
|
||||
from freqtrade.optimize.hyperopt_loss_interface import \
|
||||
IHyperOptLoss # noqa: F401
|
||||
from freqtrade.resolvers.hyperopt_resolver import (HyperOptLossResolver,
|
||||
HyperOptResolver)
|
||||
from freqtrade.strategy import IStrategy
|
||||
|
||||
# Suppress scikit-learn FutureWarnings from skopt
|
||||
with warnings.catch_warnings():
|
||||
@ -395,7 +398,7 @@ class Hyperopt:
|
||||
return
|
||||
|
||||
# Verification for overwrite
|
||||
if path.isfile(csv_file):
|
||||
if Path(csv_file).is_file():
|
||||
logger.error(f"CSV file already exists: {csv_file}")
|
||||
return
|
||||
|
||||
@ -641,15 +644,17 @@ class Hyperopt:
|
||||
preprocessed[pair] = trim_dataframe(df, timerange)
|
||||
min_date, max_date = get_timerange(data)
|
||||
|
||||
logger.info(
|
||||
'Hyperopting with data from %s up to %s (%s days)..',
|
||||
min_date.isoformat(), max_date.isoformat(), (max_date - min_date).days
|
||||
)
|
||||
logger.info(f'Hyperopting with data from {min_date.strftime(DATETIME_PRINT_FORMAT)} '
|
||||
f'up to {max_date.strftime(DATETIME_PRINT_FORMAT)} '
|
||||
f'({(max_date - min_date).days} days)..')
|
||||
|
||||
dump(preprocessed, self.data_pickle_file)
|
||||
|
||||
# We don't need exchange instance anymore while running hyperopt
|
||||
self.backtesting.exchange = None # type: ignore
|
||||
self.backtesting.pairlists = None # type: ignore
|
||||
self.backtesting.strategy.dp = None # type: ignore
|
||||
IStrategy.dp = None # type: ignore
|
||||
|
||||
self.epochs = self.load_previous_results(self.results_file)
|
||||
|
||||
@ -660,6 +665,10 @@ class Hyperopt:
|
||||
|
||||
self.dimensions: List[Dimension] = self.hyperopt_space()
|
||||
self.opt = self.get_optimizer(self.dimensions, config_jobs)
|
||||
|
||||
if self.print_colorized:
|
||||
colorama_init(autoreset=True)
|
||||
|
||||
try:
|
||||
with Parallel(n_jobs=config_jobs) as parallel:
|
||||
jobs = parallel._effective_n_jobs()
|
||||
|
@ -43,7 +43,7 @@ class SharpeHyperOptLossDaily(IHyperOptLoss):
|
||||
normalize=True)
|
||||
|
||||
sum_daily = (
|
||||
results.resample(resample_freq, on='close_time').agg(
|
||||
results.resample(resample_freq, on='close_date').agg(
|
||||
{"profit_percent_after_slippage": sum}).reindex(t_index).fillna(0)
|
||||
)
|
||||
|
||||
|
@ -45,7 +45,7 @@ class SortinoHyperOptLossDaily(IHyperOptLoss):
|
||||
normalize=True)
|
||||
|
||||
sum_daily = (
|
||||
results.resample(resample_freq, on='close_time').agg(
|
||||
results.resample(resample_freq, on='close_date').agg(
|
||||
{"profit_percent_after_slippage": sum}).reindex(t_index).fillna(0)
|
||||
)
|
||||
|
||||
|
@ -1,46 +1,40 @@
|
||||
import logging
|
||||
from datetime import timedelta
|
||||
from datetime import datetime, timedelta, timezone
|
||||
from pathlib import Path
|
||||
from typing import Any, Dict, List
|
||||
|
||||
from arrow import Arrow
|
||||
from pandas import DataFrame
|
||||
from numpy import int64
|
||||
from tabulate import tabulate
|
||||
|
||||
from freqtrade.constants import DATETIME_PRINT_FORMAT, LAST_BT_RESULT_FN
|
||||
from freqtrade.data.btanalysis import calculate_max_drawdown, calculate_market_change
|
||||
from freqtrade.misc import file_dump_json
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def store_backtest_result(recordfilename: Path, all_results: Dict[str, DataFrame]) -> None:
|
||||
def store_backtest_stats(recordfilename: Path, stats: Dict[str, DataFrame]) -> None:
|
||||
"""
|
||||
Stores backtest results to file (one file per strategy)
|
||||
:param recordfilename: Destination filename
|
||||
:param all_results: Dict of Dataframes, one results dataframe per strategy
|
||||
Stores backtest results
|
||||
:param recordfilename: Path object, which can either be a filename or a directory.
|
||||
Filenames will be appended with a timestamp right before the suffix
|
||||
while for directories, <directory>/backtest-result-<datetime>.json will be used as filename
|
||||
:param stats: Dict containing the backtesting statistics
|
||||
"""
|
||||
for strategy, results in all_results.items():
|
||||
records = backtest_result_to_list(results)
|
||||
|
||||
if records:
|
||||
filename = recordfilename
|
||||
if len(all_results) > 1:
|
||||
# Inject strategy to filename
|
||||
if recordfilename.is_dir():
|
||||
filename = (recordfilename /
|
||||
f'backtest-result-{datetime.now().strftime("%Y-%m-%d_%H-%M-%S")}.json')
|
||||
else:
|
||||
filename = Path.joinpath(
|
||||
recordfilename.parent,
|
||||
f'{recordfilename.stem}-{strategy}').with_suffix(recordfilename.suffix)
|
||||
logger.info(f'Dumping backtest results to {filename}')
|
||||
file_dump_json(filename, records)
|
||||
f'{recordfilename.stem}-{datetime.now().strftime("%Y-%m-%d_%H-%M-%S")}'
|
||||
).with_suffix(recordfilename.suffix)
|
||||
file_dump_json(filename, stats)
|
||||
|
||||
|
||||
def backtest_result_to_list(results: DataFrame) -> List[List]:
|
||||
"""
|
||||
Converts a list of Backtest-results to list
|
||||
:param results: Dataframe containing results for one strategy
|
||||
:return: List of Lists containing the trades
|
||||
"""
|
||||
return [[t.pair, t.profit_percent, t.open_time.timestamp(),
|
||||
t.close_time.timestamp(), t.open_index - 1, t.trade_duration,
|
||||
t.open_rate, t.close_rate, t.open_at_end, t.sell_reason.value]
|
||||
for index, t in results.iterrows()]
|
||||
latest_filename = Path.joinpath(filename.parent, LAST_BT_RESULT_FN)
|
||||
file_dump_json(latest_filename, {'latest_backtest': str(filename.name)})
|
||||
|
||||
|
||||
def _get_line_floatfmt() -> List[str]:
|
||||
@ -66,11 +60,12 @@ def _generate_result_line(result: DataFrame, max_open_trades: int, first_column:
|
||||
return {
|
||||
'key': first_column,
|
||||
'trades': len(result),
|
||||
'profit_mean': result['profit_percent'].mean(),
|
||||
'profit_mean_pct': result['profit_percent'].mean() * 100.0,
|
||||
'profit_mean': result['profit_percent'].mean() if len(result) > 0 else 0.0,
|
||||
'profit_mean_pct': result['profit_percent'].mean() * 100.0 if len(result) > 0 else 0.0,
|
||||
'profit_sum': result['profit_percent'].sum(),
|
||||
'profit_sum_pct': result['profit_percent'].sum() * 100.0,
|
||||
'profit_total_abs': result['profit_abs'].sum(),
|
||||
'profit_total': result['profit_percent'].sum() / max_open_trades,
|
||||
'profit_total_pct': result['profit_percent'].sum() * 100.0 / max_open_trades,
|
||||
'duration_avg': str(timedelta(
|
||||
minutes=round(result['trade_duration'].mean()))
|
||||
@ -141,7 +136,7 @@ def generate_sell_reason_stats(max_open_trades: int, results: DataFrame) -> List
|
||||
'profit_sum': profit_sum,
|
||||
'profit_sum_pct': round(profit_sum * 100, 2),
|
||||
'profit_total_abs': result['profit_abs'].sum(),
|
||||
'profit_pct_total': profit_percent_tot,
|
||||
'profit_total_pct': profit_percent_tot,
|
||||
}
|
||||
)
|
||||
return tabular_data
|
||||
@ -189,18 +184,58 @@ def generate_edge_table(results: dict) -> str:
|
||||
floatfmt=floatfmt, tablefmt="orgtbl", stralign="right") # type: ignore
|
||||
|
||||
|
||||
def generate_daily_stats(results: DataFrame) -> Dict[str, Any]:
|
||||
if len(results) == 0:
|
||||
return {
|
||||
'backtest_best_day': 0,
|
||||
'backtest_worst_day': 0,
|
||||
'winning_days': 0,
|
||||
'draw_days': 0,
|
||||
'losing_days': 0,
|
||||
'winner_holding_avg': timedelta(),
|
||||
'loser_holding_avg': timedelta(),
|
||||
}
|
||||
daily_profit = results.resample('1d', on='close_date')['profit_percent'].sum()
|
||||
worst = min(daily_profit)
|
||||
best = max(daily_profit)
|
||||
winning_days = sum(daily_profit > 0)
|
||||
draw_days = sum(daily_profit == 0)
|
||||
losing_days = sum(daily_profit < 0)
|
||||
|
||||
winning_trades = results.loc[results['profit_percent'] > 0]
|
||||
losing_trades = results.loc[results['profit_percent'] < 0]
|
||||
|
||||
return {
|
||||
'backtest_best_day': best,
|
||||
'backtest_worst_day': worst,
|
||||
'winning_days': winning_days,
|
||||
'draw_days': draw_days,
|
||||
'losing_days': losing_days,
|
||||
'winner_holding_avg': (timedelta(minutes=round(winning_trades['trade_duration'].mean()))
|
||||
if not winning_trades.empty else timedelta()),
|
||||
'loser_holding_avg': (timedelta(minutes=round(losing_trades['trade_duration'].mean()))
|
||||
if not losing_trades.empty else timedelta()),
|
||||
}
|
||||
|
||||
|
||||
def generate_backtest_stats(config: Dict, btdata: Dict[str, DataFrame],
|
||||
all_results: Dict[str, DataFrame]) -> Dict[str, Any]:
|
||||
all_results: Dict[str, DataFrame],
|
||||
min_date: Arrow, max_date: Arrow
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
:param config: Configuration object used for backtest
|
||||
:param btdata: Backtest data
|
||||
:param all_results: backtest result - dictionary with { Strategy: results}.
|
||||
:param min_date: Backtest start date
|
||||
:param max_date: Backtest end date
|
||||
:return:
|
||||
Dictionary containing results per strategy and a strategy summary.
|
||||
"""
|
||||
stake_currency = config['stake_currency']
|
||||
max_open_trades = config['max_open_trades']
|
||||
result: Dict[str, Any] = {'strategy': {}}
|
||||
market_change = calculate_market_change(btdata, 'close')
|
||||
|
||||
for strategy, results in all_results.items():
|
||||
|
||||
pair_results = generate_pair_metrics(btdata, stake_currency=stake_currency,
|
||||
@ -212,14 +247,58 @@ def generate_backtest_stats(config: Dict, btdata: Dict[str, DataFrame],
|
||||
max_open_trades=max_open_trades,
|
||||
results=results.loc[results['open_at_end']],
|
||||
skip_nan=True)
|
||||
daily_stats = generate_daily_stats(results)
|
||||
|
||||
results['open_timestamp'] = results['open_date'].astype(int64) // 1e6
|
||||
results['close_timestamp'] = results['close_date'].astype(int64) // 1e6
|
||||
|
||||
backtest_days = (max_date - min_date).days
|
||||
strat_stats = {
|
||||
'trades': backtest_result_to_list(results),
|
||||
'trades': results.to_dict(orient='records'),
|
||||
'results_per_pair': pair_results,
|
||||
'sell_reason_summary': sell_reason_stats,
|
||||
'left_open_trades': left_open_results,
|
||||
'total_trades': len(results),
|
||||
'profit_mean': results['profit_percent'].mean() if len(results) > 0 else 0,
|
||||
'profit_total': results['profit_percent'].sum(),
|
||||
'profit_total_abs': results['profit_abs'].sum(),
|
||||
'backtest_start': min_date.datetime,
|
||||
'backtest_start_ts': min_date.timestamp * 1000,
|
||||
'backtest_end': max_date.datetime,
|
||||
'backtest_end_ts': max_date.timestamp * 1000,
|
||||
'backtest_days': backtest_days,
|
||||
|
||||
'trades_per_day': round(len(results) / backtest_days, 2) if backtest_days > 0 else 0,
|
||||
'market_change': market_change,
|
||||
'pairlist': list(btdata.keys()),
|
||||
'stake_amount': config['stake_amount'],
|
||||
'stake_currency': config['stake_currency'],
|
||||
'max_open_trades': (config['max_open_trades']
|
||||
if config['max_open_trades'] != float('inf') else -1),
|
||||
'timeframe': config['timeframe'],
|
||||
**daily_stats,
|
||||
}
|
||||
result['strategy'][strategy] = strat_stats
|
||||
|
||||
try:
|
||||
max_drawdown, drawdown_start, drawdown_end = calculate_max_drawdown(
|
||||
results, value_col='profit_percent')
|
||||
strat_stats.update({
|
||||
'max_drawdown': max_drawdown,
|
||||
'drawdown_start': drawdown_start,
|
||||
'drawdown_start_ts': drawdown_start.timestamp() * 1000,
|
||||
'drawdown_end': drawdown_end,
|
||||
'drawdown_end_ts': drawdown_end.timestamp() * 1000,
|
||||
})
|
||||
except ValueError:
|
||||
strat_stats.update({
|
||||
'max_drawdown': 0.0,
|
||||
'drawdown_start': datetime(1970, 1, 1, tzinfo=timezone.utc),
|
||||
'drawdown_start_ts': 0,
|
||||
'drawdown_end': datetime(1970, 1, 1, tzinfo=timezone.utc),
|
||||
'drawdown_end_ts': 0,
|
||||
})
|
||||
|
||||
strategy_results = generate_strategy_metrics(stake_currency=stake_currency,
|
||||
max_open_trades=max_open_trades,
|
||||
all_results=all_results)
|
||||
@ -273,7 +352,7 @@ def text_table_sell_reason(sell_reason_stats: List[Dict[str, Any]], stake_curren
|
||||
|
||||
output = [[
|
||||
t['sell_reason'], t['trades'], t['wins'], t['draws'], t['losses'],
|
||||
t['profit_mean_pct'], t['profit_sum_pct'], t['profit_total_abs'], t['profit_pct_total'],
|
||||
t['profit_mean_pct'], t['profit_sum_pct'], t['profit_total_abs'], t['profit_total_pct'],
|
||||
] for t in sell_reason_stats]
|
||||
return tabulate(output, headers=headers, tablefmt="orgtbl", stralign="right")
|
||||
|
||||
@ -298,6 +377,35 @@ def text_table_strategy(strategy_results, stake_currency: str) -> str:
|
||||
floatfmt=floatfmt, tablefmt="orgtbl", stralign="right")
|
||||
|
||||
|
||||
def text_table_add_metrics(strat_results: Dict) -> str:
|
||||
if len(strat_results['trades']) > 0:
|
||||
min_trade = min(strat_results['trades'], key=lambda x: x['open_date'])
|
||||
metrics = [
|
||||
('Backtesting from', strat_results['backtest_start'].strftime(DATETIME_PRINT_FORMAT)),
|
||||
('Backtesting to', strat_results['backtest_end'].strftime(DATETIME_PRINT_FORMAT)),
|
||||
('Total trades', strat_results['total_trades']),
|
||||
('First trade', min_trade['open_date'].strftime(DATETIME_PRINT_FORMAT)),
|
||||
('First trade Pair', min_trade['pair']),
|
||||
('Total Profit %', f"{round(strat_results['profit_total'] * 100, 2)}%"),
|
||||
('Trades per day', strat_results['trades_per_day']),
|
||||
('Best day', f"{round(strat_results['backtest_best_day'] * 100, 2)}%"),
|
||||
('Worst day', f"{round(strat_results['backtest_worst_day'] * 100, 2)}%"),
|
||||
('Days win/draw/lose', f"{strat_results['winning_days']} / "
|
||||
f"{strat_results['draw_days']} / {strat_results['losing_days']}"),
|
||||
('Avg. Duration Winners', f"{strat_results['winner_holding_avg']}"),
|
||||
('Avg. Duration Loser', f"{strat_results['loser_holding_avg']}"),
|
||||
('', ''), # Empty line to improve readability
|
||||
('Max Drawdown', f"{round(strat_results['max_drawdown'] * 100, 2)}%"),
|
||||
('Drawdown Start', strat_results['drawdown_start'].strftime(DATETIME_PRINT_FORMAT)),
|
||||
('Drawdown End', strat_results['drawdown_end'].strftime(DATETIME_PRINT_FORMAT)),
|
||||
('Market change', f"{round(strat_results['market_change'] * 100, 2)}%"),
|
||||
]
|
||||
|
||||
return tabulate(metrics, headers=["Metric", "Value"], tablefmt="orgtbl")
|
||||
else:
|
||||
return ''
|
||||
|
||||
|
||||
def show_backtest_results(config: Dict, backtest_stats: Dict):
|
||||
stake_currency = config['stake_currency']
|
||||
|
||||
@ -312,15 +420,21 @@ def show_backtest_results(config: Dict, backtest_stats: Dict):
|
||||
|
||||
table = text_table_sell_reason(sell_reason_stats=results['sell_reason_summary'],
|
||||
stake_currency=stake_currency)
|
||||
if isinstance(table, str):
|
||||
if isinstance(table, str) and len(table) > 0:
|
||||
print(' SELL REASON STATS '.center(len(table.splitlines()[0]), '='))
|
||||
print(table)
|
||||
|
||||
table = text_table_bt_results(results['left_open_trades'], stake_currency=stake_currency)
|
||||
if isinstance(table, str):
|
||||
if isinstance(table, str) and len(table) > 0:
|
||||
print(' LEFT OPEN TRADES REPORT '.center(len(table.splitlines()[0]), '='))
|
||||
print(table)
|
||||
if isinstance(table, str):
|
||||
|
||||
table = text_table_add_metrics(results)
|
||||
if isinstance(table, str) and len(table) > 0:
|
||||
print(' SUMMARY METRICS '.center(len(table.splitlines()[0]), '='))
|
||||
print(table)
|
||||
|
||||
if isinstance(table, str) and len(table) > 0:
|
||||
print('=' * len(table.splitlines()[0]))
|
||||
print()
|
||||
|
||||
|
@ -26,12 +26,11 @@ class AgeFilter(IPairList):
|
||||
self._min_days_listed = pairlistconfig.get('min_days_listed', 10)
|
||||
|
||||
if self._min_days_listed < 1:
|
||||
raise OperationalException("AgeFilter requires min_days_listed must be >= 1")
|
||||
raise OperationalException("AgeFilter requires min_days_listed to be >= 1")
|
||||
if self._min_days_listed > exchange.ohlcv_candle_limit:
|
||||
raise OperationalException("AgeFilter requires min_days_listed must not exceed "
|
||||
raise OperationalException("AgeFilter requires min_days_listed to not exceed "
|
||||
"exchange max request size "
|
||||
f"({exchange.ohlcv_candle_limit})")
|
||||
self._enabled = self._min_days_listed >= 1
|
||||
|
||||
@property
|
||||
def needstickers(self) -> bool:
|
||||
|
@ -162,6 +162,11 @@ class IPairList(ABC):
|
||||
f"{self._exchange.name}. Removing it from whitelist..")
|
||||
continue
|
||||
|
||||
if not self._exchange.market_is_tradable(markets[pair]):
|
||||
logger.warning(f"Pair {pair} is not tradable with Freqtrade."
|
||||
"Removing it from whitelist..")
|
||||
continue
|
||||
|
||||
if self._exchange.get_pair_quote_currency(pair) != self._config['stake_currency']:
|
||||
logger.warning(f"Pair {pair} is not compatible with your stake currency "
|
||||
f"{self._config['stake_currency']}. Removing it from whitelist..")
|
||||
|
@ -4,6 +4,7 @@ Price pair list filter
|
||||
import logging
|
||||
from typing import Any, Dict
|
||||
|
||||
from freqtrade.exceptions import OperationalException
|
||||
from freqtrade.pairlist.IPairList import IPairList
|
||||
|
||||
|
||||
@ -18,11 +19,17 @@ class PriceFilter(IPairList):
|
||||
super().__init__(exchange, pairlistmanager, config, pairlistconfig, pairlist_pos)
|
||||
|
||||
self._low_price_ratio = pairlistconfig.get('low_price_ratio', 0)
|
||||
if self._low_price_ratio < 0:
|
||||
raise OperationalException("PriceFilter requires low_price_ratio to be >= 0")
|
||||
self._min_price = pairlistconfig.get('min_price', 0)
|
||||
if self._min_price < 0:
|
||||
raise OperationalException("PriceFilter requires min_price to be >= 0")
|
||||
self._max_price = pairlistconfig.get('max_price', 0)
|
||||
self._enabled = ((self._low_price_ratio != 0) or
|
||||
(self._min_price != 0) or
|
||||
(self._max_price != 0))
|
||||
if self._max_price < 0:
|
||||
raise OperationalException("PriceFilter requires max_price to be >= 0")
|
||||
self._enabled = ((self._low_price_ratio > 0) or
|
||||
(self._min_price > 0) or
|
||||
(self._max_price > 0))
|
||||
|
||||
@property
|
||||
def needstickers(self) -> bool:
|
||||
|
@ -276,7 +276,7 @@ class Trade(_DECL_BASE):
|
||||
|
||||
'open_date_hum': arrow.get(self.open_date).humanize(),
|
||||
'open_date': self.open_date.strftime("%Y-%m-%d %H:%M:%S"),
|
||||
'open_timestamp': int(self.open_date.timestamp() * 1000),
|
||||
'open_timestamp': int(self.open_date.replace(tzinfo=timezone.utc).timestamp() * 1000),
|
||||
'open_rate': self.open_rate,
|
||||
'open_rate_requested': self.open_rate_requested,
|
||||
'open_trade_price': round(self.open_trade_price, 8),
|
||||
@ -285,7 +285,8 @@ class Trade(_DECL_BASE):
|
||||
if self.close_date else None),
|
||||
'close_date': (self.close_date.strftime("%Y-%m-%d %H:%M:%S")
|
||||
if self.close_date else None),
|
||||
'close_timestamp': int(self.close_date.timestamp() * 1000) if self.close_date else None,
|
||||
'close_timestamp': int(self.close_date.replace(
|
||||
tzinfo=timezone.utc).timestamp() * 1000) if self.close_date else None,
|
||||
'close_rate': self.close_rate,
|
||||
'close_rate_requested': self.close_rate_requested,
|
||||
'close_profit': self.close_profit,
|
||||
@ -300,8 +301,8 @@ class Trade(_DECL_BASE):
|
||||
'stoploss_order_id': self.stoploss_order_id,
|
||||
'stoploss_last_update': (self.stoploss_last_update.strftime("%Y-%m-%d %H:%M:%S")
|
||||
if self.stoploss_last_update else None),
|
||||
'stoploss_last_update_timestamp': (int(self.stoploss_last_update.timestamp() * 1000)
|
||||
if self.stoploss_last_update else None),
|
||||
'stoploss_last_update_timestamp': int(self.stoploss_last_update.replace(
|
||||
tzinfo=timezone.utc).timestamp() * 1000) if self.stoploss_last_update else None,
|
||||
'initial_stop_loss': self.initial_stop_loss, # Deprecated - should not be used
|
||||
'initial_stop_loss_abs': self.initial_stop_loss,
|
||||
'initial_stop_loss_ratio': (self.initial_stop_loss_pct
|
||||
|
@ -8,7 +8,8 @@ from freqtrade.configuration import TimeRange
|
||||
from freqtrade.data.btanalysis import (calculate_max_drawdown,
|
||||
combine_dataframes_with_mean,
|
||||
create_cum_profit,
|
||||
extract_trades_of_period, load_trades)
|
||||
extract_trades_of_period,
|
||||
load_trades)
|
||||
from freqtrade.data.converter import trim_dataframe
|
||||
from freqtrade.data.dataprovider import DataProvider
|
||||
from freqtrade.data.history import load_data
|
||||
@ -53,19 +54,22 @@ def init_plotscript(config):
|
||||
)
|
||||
|
||||
no_trades = False
|
||||
filename = config.get('exportfilename')
|
||||
if config.get('no_trades', False):
|
||||
no_trades = True
|
||||
elif not config['exportfilename'].is_file() and config['trade_source'] == 'file':
|
||||
elif config['trade_source'] == 'file':
|
||||
if not filename.is_dir() and not filename.is_file():
|
||||
logger.warning("Backtest file is missing skipping trades.")
|
||||
no_trades = True
|
||||
|
||||
trades = load_trades(
|
||||
config['trade_source'],
|
||||
db_url=config.get('db_url'),
|
||||
exportfilename=config.get('exportfilename'),
|
||||
no_trades=no_trades
|
||||
exportfilename=filename,
|
||||
no_trades=no_trades,
|
||||
strategy=config.get("strategy"),
|
||||
)
|
||||
trades = trim_dataframe(trades, timerange, 'open_time')
|
||||
trades = trim_dataframe(trades, timerange, 'open_date')
|
||||
|
||||
return {"ohlcv": data,
|
||||
"trades": trades,
|
||||
@ -165,10 +169,11 @@ def plot_trades(fig, trades: pd.DataFrame) -> make_subplots:
|
||||
if trades is not None and len(trades) > 0:
|
||||
# Create description for sell summarizing the trade
|
||||
trades['desc'] = trades.apply(lambda row: f"{round(row['profit_percent'] * 100, 1)}%, "
|
||||
f"{row['sell_reason']}, {row['duration']} min",
|
||||
f"{row['sell_reason']}, "
|
||||
f"{row['trade_duration']} min",
|
||||
axis=1)
|
||||
trade_buys = go.Scatter(
|
||||
x=trades["open_time"],
|
||||
x=trades["open_date"],
|
||||
y=trades["open_rate"],
|
||||
mode='markers',
|
||||
name='Trade buy',
|
||||
@ -183,7 +188,7 @@ def plot_trades(fig, trades: pd.DataFrame) -> make_subplots:
|
||||
)
|
||||
|
||||
trade_sells = go.Scatter(
|
||||
x=trades.loc[trades['profit_percent'] > 0, "close_time"],
|
||||
x=trades.loc[trades['profit_percent'] > 0, "close_date"],
|
||||
y=trades.loc[trades['profit_percent'] > 0, "close_rate"],
|
||||
text=trades.loc[trades['profit_percent'] > 0, "desc"],
|
||||
mode='markers',
|
||||
@ -196,7 +201,7 @@ def plot_trades(fig, trades: pd.DataFrame) -> make_subplots:
|
||||
)
|
||||
)
|
||||
trade_sells_loss = go.Scatter(
|
||||
x=trades.loc[trades['profit_percent'] <= 0, "close_time"],
|
||||
x=trades.loc[trades['profit_percent'] <= 0, "close_date"],
|
||||
y=trades.loc[trades['profit_percent'] <= 0, "close_rate"],
|
||||
text=trades.loc[trades['profit_percent'] <= 0, "desc"],
|
||||
mode='markers',
|
||||
@ -510,7 +515,7 @@ def plot_profit(config: Dict[str, Any]) -> None:
|
||||
# Remove open pairs - we don't know the profit yet so can't calculate profit for these.
|
||||
# Also, If only one open pair is left, then the profit-generation would fail.
|
||||
trades = trades[(trades['pair'].isin(plot_elements["pairs"]))
|
||||
& (~trades['close_time'].isnull())
|
||||
& (~trades['close_date'].isnull())
|
||||
]
|
||||
if len(trades) == 0:
|
||||
raise OperationalException("No trades found, cannot generate Profit-plot without "
|
||||
|
@ -23,7 +23,7 @@ class HyperOptResolver(IResolver):
|
||||
object_type = IHyperOpt
|
||||
object_type_str = "Hyperopt"
|
||||
user_subdir = USERPATH_HYPEROPTS
|
||||
initial_search_path = Path(__file__).parent.parent.joinpath('optimize').resolve()
|
||||
initial_search_path = None
|
||||
|
||||
@staticmethod
|
||||
def load_hyperopt(config: Dict) -> IHyperOpt:
|
||||
|
@ -16,6 +16,7 @@ from werkzeug.security import safe_str_cmp
|
||||
from werkzeug.serving import make_server
|
||||
|
||||
from freqtrade.__init__ import __version__
|
||||
from freqtrade.constants import DATETIME_PRINT_FORMAT
|
||||
from freqtrade.rpc.rpc import RPC, RPCException
|
||||
from freqtrade.rpc.fiat_convert import CryptoToFiatConverter
|
||||
|
||||
@ -32,7 +33,7 @@ class ArrowJSONEncoder(JSONEncoder):
|
||||
elif isinstance(obj, date):
|
||||
return obj.strftime("%Y-%m-%d")
|
||||
elif isinstance(obj, datetime):
|
||||
return obj.strftime("%Y-%m-%d %H:%M:%S")
|
||||
return obj.strftime(DATETIME_PRINT_FORMAT)
|
||||
iterable = iter(obj)
|
||||
except TypeError:
|
||||
pass
|
||||
|
@ -224,22 +224,20 @@ class RPC:
|
||||
]).order_by(Trade.close_date).all()
|
||||
curdayprofit = sum(trade.close_profit_abs for trade in trades)
|
||||
profit_days[profitday] = {
|
||||
'amount': f'{curdayprofit:.8f}',
|
||||
'amount': curdayprofit,
|
||||
'trades': len(trades)
|
||||
}
|
||||
|
||||
data = [
|
||||
{
|
||||
'date': key,
|
||||
'abs_profit': f'{float(value["amount"]):.8f}',
|
||||
'fiat_value': '{value:.3f}'.format(
|
||||
value=self._fiat_converter.convert_amount(
|
||||
'abs_profit': value["amount"],
|
||||
'fiat_value': self._fiat_converter.convert_amount(
|
||||
value['amount'],
|
||||
stake_currency,
|
||||
fiat_display_currency
|
||||
) if self._fiat_converter else 0,
|
||||
),
|
||||
'trade_count': f'{value["trades"]}',
|
||||
'trade_count': value["trades"],
|
||||
}
|
||||
for key, value in profit_days.items()
|
||||
]
|
||||
|
@ -305,8 +305,8 @@ class Telegram(RPC):
|
||||
)
|
||||
stats_tab = tabulate(
|
||||
[[day['date'],
|
||||
f"{day['abs_profit']} {stats['stake_currency']}",
|
||||
f"{day['fiat_value']} {stats['fiat_display_currency']}",
|
||||
f"{day['abs_profit']:.8f} {stats['stake_currency']}",
|
||||
f"{day['fiat_value']:.3f} {stats['fiat_display_currency']}",
|
||||
f"{day['trade_count']} trades"] for day in stats['data']],
|
||||
headers=[
|
||||
'Day',
|
||||
|
@ -44,6 +44,10 @@ class SellType(Enum):
|
||||
EMERGENCY_SELL = "emergency_sell"
|
||||
NONE = ""
|
||||
|
||||
def __str__(self):
|
||||
# explicitly convert to String to help with exporting data.
|
||||
return self.value
|
||||
|
||||
|
||||
class SellCheckTuple(NamedTuple):
|
||||
"""
|
||||
|
@ -34,7 +34,7 @@
|
||||
"# config = Configuration.from_files([\"config.json\"])\n",
|
||||
"\n",
|
||||
"# Define some constants\n",
|
||||
"config[\"ticker_interval\"] = \"5m\"\n",
|
||||
"config[\"timeframe\"] = \"5m\"\n",
|
||||
"# Name of the strategy class\n",
|
||||
"config[\"strategy\"] = \"SampleStrategy\"\n",
|
||||
"# Location of the data\n",
|
||||
@ -53,7 +53,7 @@
|
||||
"from freqtrade.data.history import load_pair_history\n",
|
||||
"\n",
|
||||
"candles = load_pair_history(datadir=data_location,\n",
|
||||
" timeframe=config[\"ticker_interval\"],\n",
|
||||
" timeframe=config[\"timeframe\"],\n",
|
||||
" pair=pair)\n",
|
||||
"\n",
|
||||
"# Confirm success\n",
|
||||
@ -136,10 +136,51 @@
"metadata": {},
"outputs": [],
"source": [
"from freqtrade.data.btanalysis import load_backtest_data\n",
"from freqtrade.data.btanalysis import load_backtest_data, load_backtest_stats\n",
"\n",
"# Load backtest results\n",
"trades = load_backtest_data(config[\"user_data_dir\"] / \"backtest_results/backtest-result.json\")\n",
"# if backtest_dir points to a directory, it'll automatically load the last backtest file.\n",
"backtest_dir = config[\"user_data_dir\"] / \"backtest_results\"\n",
"# backtest_dir can also point to a specific file \n",
"# backtest_dir = config[\"user_data_dir\"] / \"backtest_results/backtest-result-2020-07-01_20-04-22.json\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can get the full backtest statistics by using the following command.\n",
"# This contains all information used to generate the backtest result.\n",
"stats = load_backtest_stats(backtest_dir)\n",
"\n",
"strategy = 'SampleStrategy'\n",
"# All statistics are available per strategy, so if `--strategy-list` was used during backtest, this will be reflected here as well.\n",
"# Example usages:\n",
"print(stats['strategy'][strategy]['results_per_pair'])\n",
"# Get pairlist used for this backtest\n",
"print(stats['strategy'][strategy]['pairlist'])\n",
"# Get market change (average change of all pairs from start to end of the backtest period)\n",
"print(stats['strategy'][strategy]['market_change'])\n",
"# Maximum drawdown ()\n",
"print(stats['strategy'][strategy]['max_drawdown'])\n",
"# Maximum drawdown start and end\n",
"print(stats['strategy'][strategy]['drawdown_start'])\n",
"print(stats['strategy'][strategy]['drawdown_end'])\n",
"\n",
"\n",
"# Get strategy comparison (only relevant if multiple strategies were compared)\n",
"print(stats['strategy_comparison'])\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load backtested trades as dataframe\n",
"trades = load_backtest_data(backtest_dir)\n",
"\n",
"# Show value-counts per pair\n",
"trades.groupby(\"pair\")[\"sell_reason\"].value_counts()"

@ -1,6 +1,6 @@
# requirements without requirements installable via conda
# mainly used for Raspberry pi installs
ccxt==1.32.88
ccxt==1.33.18
SQLAlchemy==1.3.18
python-telegram-bot==12.8
arrow==0.15.8

@ -32,4 +32,4 @@ flask-cors==3.0.8
colorama==0.4.3
# Building config files interactively
questionary==1.5.2
prompt-toolkit==3.0.5
prompt-toolkit==3.0.6

@ -3,14 +3,14 @@
-r requirements-plot.txt
-r requirements-hyperopt.txt

coveralls==2.1.1
coveralls==2.1.2
flake8==3.8.3
flake8-type-annotations==0.1.0
flake8-tidy-imports==4.1.0
mypy==0.782
pytest==6.0.1
pytest-asyncio==0.14.0
pytest-cov==2.10.0
pytest-cov==2.10.1
pytest-mock==3.2.0
pytest-random-order==1.0.4

@ -667,7 +667,7 @@ def test_start_list_hyperopts(mocker, caplog, capsys):
args = [
"list-hyperopts",
"--hyperopt-path",
str(Path(__file__).parent.parent / "optimize"),
str(Path(__file__).parent.parent / "optimize" / "hyperopts"),
"-1"
]
pargs = get_args(args)

@ -683,7 +683,7 @@ def test_start_list_hyperopts(mocker, caplog, capsys):
args = [
"list-hyperopts",
"--hyperopt-path",
str(Path(__file__).parent.parent / "optimize"),
str(Path(__file__).parent.parent / "optimize" / "hyperopts"),
]
pargs = get_args(args)
# pargs['config'] = None

@ -692,7 +692,6 @@ def test_start_list_hyperopts(mocker, caplog, capsys):
assert "TestHyperoptLegacy" not in captured.out
assert "legacy_hyperopt.py" not in captured.out
assert "DefaultHyperOpt" in captured.out
assert "test_hyperopt.py" in captured.out


def test_start_test_pairlist(mocker, caplog, tickers, default_conf, capsys):

@ -181,7 +181,8 @@ def create_mock_trades(fee):
|
||||
fee_close=fee.return_value,
|
||||
open_rate=0.123,
|
||||
exchange='bittrex',
|
||||
open_order_id='dry_run_buy_12345'
|
||||
open_order_id='dry_run_buy_12345',
|
||||
strategy='DefaultStrategy',
|
||||
)
|
||||
Trade.session.add(trade)
|
||||
|
||||
@ -197,7 +198,8 @@ def create_mock_trades(fee):
|
||||
close_profit=0.005,
|
||||
exchange='bittrex',
|
||||
is_open=False,
|
||||
open_order_id='dry_run_sell_12345'
|
||||
open_order_id='dry_run_sell_12345',
|
||||
strategy='DefaultStrategy',
|
||||
)
|
||||
Trade.session.add(trade)
|
||||
|
||||
@ -225,7 +227,8 @@ def create_mock_trades(fee):
|
||||
fee_close=fee.return_value,
|
||||
open_rate=0.123,
|
||||
exchange='bittrex',
|
||||
open_order_id='prod_buy_12345'
|
||||
open_order_id='prod_buy_12345',
|
||||
strategy='DefaultStrategy',
|
||||
)
|
||||
Trade.session.add(trade)
|
||||
|
||||
|
@ -6,24 +6,48 @@ from arrow import Arrow
|
||||
from pandas import DataFrame, DateOffset, Timestamp, to_datetime
|
||||
|
||||
from freqtrade.configuration import TimeRange
|
||||
from freqtrade.constants import LAST_BT_RESULT_FN
|
||||
from freqtrade.data.btanalysis import (BT_DATA_COLUMNS,
|
||||
analyze_trade_parallelism,
|
||||
calculate_market_change,
|
||||
calculate_max_drawdown,
|
||||
combine_dataframes_with_mean,
|
||||
create_cum_profit,
|
||||
extract_trades_of_period,
|
||||
get_latest_backtest_filename,
|
||||
load_backtest_data, load_trades,
|
||||
load_trades_from_db)
|
||||
from freqtrade.data.history import load_data, load_pair_history
|
||||
from freqtrade.optimize.backtesting import BacktestResult
|
||||
from tests.conftest import create_mock_trades
|
||||
|
||||
|
||||
def test_load_backtest_data(testdatadir):
|
||||
def test_get_latest_backtest_filename(testdatadir, mocker):
|
||||
with pytest.raises(ValueError, match=r"Directory .* does not exist\."):
|
||||
get_latest_backtest_filename(testdatadir / 'does_not_exist')
|
||||
|
||||
with pytest.raises(ValueError,
|
||||
match=r"Directory .* does not seem to contain .*"):
|
||||
get_latest_backtest_filename(testdatadir.parent)
|
||||
|
||||
res = get_latest_backtest_filename(testdatadir)
|
||||
assert res == 'backtest-result_new.json'
|
||||
|
||||
res = get_latest_backtest_filename(str(testdatadir))
|
||||
assert res == 'backtest-result_new.json'
|
||||
|
||||
mocker.patch("freqtrade.data.btanalysis.json_load", return_value={})
|
||||
|
||||
with pytest.raises(ValueError, match=r"Invalid '.last_result.json' format."):
|
||||
get_latest_backtest_filename(testdatadir)
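
The new `test_get_latest_backtest_filename` above fixes the lookup contract: resolving a directory reads the `.last_result.json` marker (`LAST_BT_RESULT_FN`) and returns the referenced result file, raising `ValueError` for a missing directory, a directory without the marker, or a malformed marker. A rough sketch of that lookup follows; the `latest_backtest` key is an assumption, since the diff only shows that an empty marker is rejected:

```python
# Rough sketch of the directory resolution exercised above - not the library's exact
# implementation. The 'latest_backtest' marker key is an assumption.
import json
from pathlib import Path

LAST_BT_RESULT_FN = '.last_result.json'


def get_latest_backtest_filename_sketch(directory) -> str:
    directory = Path(directory)
    if not directory.is_dir():
        raise ValueError(f"Directory '{directory}' does not exist.")
    marker = directory / LAST_BT_RESULT_FN
    if not marker.is_file():
        raise ValueError(
            f"Directory '{directory}' does not seem to contain backtest statistics yet.")
    data = json.loads(marker.read_text())
    if 'latest_backtest' not in data:
        raise ValueError(f"Invalid '{LAST_BT_RESULT_FN}' format.")
    return data['latest_backtest']
```
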
def test_load_backtest_data_old_format(testdatadir):
|
||||
|
||||
filename = testdatadir / "backtest-result_test.json"
|
||||
bt_data = load_backtest_data(filename)
|
||||
assert isinstance(bt_data, DataFrame)
|
||||
assert list(bt_data.columns) == BT_DATA_COLUMNS + ["profit"]
|
||||
assert list(bt_data.columns) == BT_DATA_COLUMNS + ["profit_abs"]
|
||||
assert len(bt_data) == 179
|
||||
|
||||
# Test loading from string (must yield same result)
|
||||
@ -34,6 +58,49 @@ def test_load_backtest_data(testdatadir):
|
||||
load_backtest_data(str("filename") + "nofile")
|
||||
|
||||
|
||||
def test_load_backtest_data_new_format(testdatadir):
|
||||
|
||||
filename = testdatadir / "backtest-result_new.json"
|
||||
bt_data = load_backtest_data(filename)
|
||||
assert isinstance(bt_data, DataFrame)
|
||||
assert set(bt_data.columns) == set(list(BacktestResult._fields) + ["profit_abs"])
|
||||
assert len(bt_data) == 179
|
||||
|
||||
# Test loading from string (must yield same result)
|
||||
bt_data2 = load_backtest_data(str(filename))
|
||||
assert bt_data.equals(bt_data2)
|
||||
|
||||
# Test loading from folder (must yield same result)
|
||||
bt_data3 = load_backtest_data(testdatadir)
|
||||
assert bt_data.equals(bt_data3)
|
||||
|
||||
with pytest.raises(ValueError, match=r"File .* does not exist\."):
|
||||
load_backtest_data(str("filename") + "nofile")
|
||||
|
||||
with pytest.raises(ValueError, match=r"Unknown dataformat."):
|
||||
load_backtest_data(testdatadir / LAST_BT_RESULT_FN)
|
||||
|
||||
|
||||
def test_load_backtest_data_multi(testdatadir):
|
||||
|
||||
filename = testdatadir / "backtest-result_multistrat.json"
|
||||
for strategy in ('DefaultStrategy', 'TestStrategy'):
|
||||
bt_data = load_backtest_data(filename, strategy=strategy)
|
||||
assert isinstance(bt_data, DataFrame)
|
||||
assert set(bt_data.columns) == set(list(BacktestResult._fields) + ["profit_abs"])
|
||||
assert len(bt_data) == 179
|
||||
|
||||
# Test loading from string (must yield same result)
|
||||
bt_data2 = load_backtest_data(str(filename), strategy=strategy)
|
||||
assert bt_data.equals(bt_data2)
|
||||
|
||||
with pytest.raises(ValueError, match=r"Strategy XYZ not available in the backtest result\."):
|
||||
load_backtest_data(filename, strategy='XYZ')
|
||||
|
||||
with pytest.raises(ValueError, match=r"Detected backtest result with more than one strategy.*"):
|
||||
load_backtest_data(filename)
|
||||
|
||||
|
||||
@pytest.mark.usefixtures("init_persistence")
|
||||
def test_load_trades_from_db(default_conf, fee, mocker):
|
||||
|
||||
@ -46,12 +113,16 @@ def test_load_trades_from_db(default_conf, fee, mocker):
|
||||
assert len(trades) == 4
|
||||
assert isinstance(trades, DataFrame)
|
||||
assert "pair" in trades.columns
|
||||
assert "open_time" in trades.columns
|
||||
assert "open_date" in trades.columns
|
||||
assert "profit_percent" in trades.columns
|
||||
|
||||
for col in BT_DATA_COLUMNS:
|
||||
if col not in ['index', 'open_at_end']:
|
||||
assert col in trades.columns
|
||||
trades = load_trades_from_db(db_url=default_conf['db_url'], strategy='DefaultStrategy')
|
||||
assert len(trades) == 3
|
||||
trades = load_trades_from_db(db_url=default_conf['db_url'], strategy='NoneStrategy')
|
||||
assert len(trades) == 0
|
||||
|
||||
|
||||
def test_extract_trades_of_period(testdatadir):
|
||||
@ -66,13 +137,13 @@ def test_extract_trades_of_period(testdatadir):
|
||||
{'pair': [pair, pair, pair, pair],
|
||||
'profit_percent': [0.0, 0.1, -0.2, -0.5],
|
||||
'profit_abs': [0.0, 1, -2, -5],
|
||||
'open_time': to_datetime([Arrow(2017, 11, 13, 15, 40, 0).datetime,
|
||||
'open_date': to_datetime([Arrow(2017, 11, 13, 15, 40, 0).datetime,
|
||||
Arrow(2017, 11, 14, 9, 41, 0).datetime,
|
||||
Arrow(2017, 11, 14, 14, 20, 0).datetime,
|
||||
Arrow(2017, 11, 15, 3, 40, 0).datetime,
|
||||
], utc=True
|
||||
),
|
||||
'close_time': to_datetime([Arrow(2017, 11, 13, 16, 40, 0).datetime,
|
||||
'close_date': to_datetime([Arrow(2017, 11, 13, 16, 40, 0).datetime,
|
||||
Arrow(2017, 11, 14, 10, 41, 0).datetime,
|
||||
Arrow(2017, 11, 14, 15, 25, 0).datetime,
|
||||
Arrow(2017, 11, 15, 3, 55, 0).datetime,
|
||||
@ -81,10 +152,10 @@ def test_extract_trades_of_period(testdatadir):
|
||||
trades1 = extract_trades_of_period(data, trades)
|
||||
# First and last trade are dropped as they are out of range
|
||||
assert len(trades1) == 2
|
||||
assert trades1.iloc[0].open_time == Arrow(2017, 11, 14, 9, 41, 0).datetime
|
||||
assert trades1.iloc[0].close_time == Arrow(2017, 11, 14, 10, 41, 0).datetime
|
||||
assert trades1.iloc[-1].open_time == Arrow(2017, 11, 14, 14, 20, 0).datetime
|
||||
assert trades1.iloc[-1].close_time == Arrow(2017, 11, 14, 15, 25, 0).datetime
|
||||
assert trades1.iloc[0].open_date == Arrow(2017, 11, 14, 9, 41, 0).datetime
|
||||
assert trades1.iloc[0].close_date == Arrow(2017, 11, 14, 10, 41, 0).datetime
|
||||
assert trades1.iloc[-1].open_date == Arrow(2017, 11, 14, 14, 20, 0).datetime
|
||||
assert trades1.iloc[-1].close_date == Arrow(2017, 11, 14, 15, 25, 0).datetime
|
||||
|
||||
|
||||
def test_analyze_trade_parallelism(default_conf, mocker, testdatadir):
|
||||
@ -105,7 +176,8 @@ def test_load_trades(default_conf, mocker):
|
||||
load_trades("DB",
|
||||
db_url=default_conf.get('db_url'),
|
||||
exportfilename=default_conf.get('exportfilename'),
|
||||
no_trades=False
|
||||
no_trades=False,
|
||||
strategy="DefaultStrategy",
|
||||
)
|
||||
|
||||
assert db_mock.call_count == 1
|
||||
@ -135,6 +207,14 @@ def test_load_trades(default_conf, mocker):
|
||||
assert bt_mock.call_count == 0
|
||||
|
||||
|
||||
def test_calculate_market_change(testdatadir):
|
||||
pairs = ["ETH/BTC", "ADA/BTC"]
|
||||
data = load_data(datadir=testdatadir, pairs=pairs, timeframe='5m')
|
||||
result = calculate_market_change(data)
|
||||
assert isinstance(result, float)
|
||||
assert pytest.approx(result) == 0.00955514
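
The new `test_calculate_market_change` above documents the helper behind the "Market change" line of the summary table: given per-pair OHLCV data from `load_data`, it returns a single float with the average relative change of all pairs over the loaded period. A short usage sketch; the data directory is a placeholder:

```python
# Usage sketch for calculate_market_change, mirroring the test above.
# 'user_data/data/binance' is a placeholder - point it at your own data directory.
from pathlib import Path

from freqtrade.data.btanalysis import calculate_market_change
from freqtrade.data.history import load_data

data = load_data(datadir=Path('user_data/data/binance'),
                 pairs=["ETH/BTC", "ADA/BTC"], timeframe='5m')
market_change = calculate_market_change(data)
print(f"Market change over the loaded period: {market_change:.2%}")
```
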
def test_combine_dataframes_with_mean(testdatadir):
|
||||
pairs = ["ETH/BTC", "ADA/BTC"]
|
||||
data = load_data(datadir=testdatadir, pairs=pairs, timeframe='5m')
|
||||
@ -165,7 +245,7 @@ def test_create_cum_profit1(testdatadir):
|
||||
filename = testdatadir / "backtest-result_test.json"
|
||||
bt_data = load_backtest_data(filename)
|
||||
# Move close-time to "off" the candle, to make sure the logic still works
|
||||
bt_data.loc[:, 'close_time'] = bt_data.loc[:, 'close_time'] + DateOffset(seconds=20)
|
||||
bt_data.loc[:, 'close_date'] = bt_data.loc[:, 'close_date'] + DateOffset(seconds=20)
|
||||
timerange = TimeRange.parse_timerange("20180110-20180112")
|
||||
|
||||
df = load_pair_history(pair="TRX/BTC", timeframe='5m',
|
||||
@ -204,11 +284,11 @@ def test_calculate_max_drawdown2():
|
||||
-0.033961, 0.010680, 0.010886, -0.029274, 0.011178, 0.010693, 0.010711]
|
||||
|
||||
dates = [Arrow(2020, 1, 1).shift(days=i) for i in range(len(values))]
|
||||
df = DataFrame(zip(values, dates), columns=['profit', 'open_time'])
|
||||
df = DataFrame(zip(values, dates), columns=['profit', 'open_date'])
|
||||
# sort by profit and reset index
|
||||
df = df.sort_values('profit').reset_index(drop=True)
|
||||
df1 = df.copy()
|
||||
drawdown, h, low = calculate_max_drawdown(df, date_col='open_time', value_col='profit')
|
||||
drawdown, h, low = calculate_max_drawdown(df, date_col='open_date', value_col='profit')
|
||||
# Ensure df has not been altered.
|
||||
assert df.equals(df1)
|
||||
|
||||
@ -217,6 +297,6 @@ def test_calculate_max_drawdown2():
|
||||
assert h < low
|
||||
assert drawdown == 0.091755
|
||||
|
||||
df = DataFrame(zip(values[:5], dates[:5]), columns=['profit', 'open_time'])
|
||||
df = DataFrame(zip(values[:5], dates[:5]), columns=['profit', 'open_date'])
|
||||
with pytest.raises(ValueError, match='No losing trade, therefore no drawdown.'):
|
||||
calculate_max_drawdown(df, date_col='open_time', value_col='profit')
|
||||
calculate_max_drawdown(df, date_col='open_date', value_col='profit')
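
With the column rename above, `calculate_max_drawdown` is now fed `open_date` instead of `open_time`. A small sketch of calling it directly, mirroring the shapes used in the test (it returns the drawdown plus the high and low points, and raises `ValueError` when no trade loses money):

```python
# Sketch of calling calculate_max_drawdown on a small trades frame, using the same
# column names and DataFrame construction as the test above. Values are made up.
from arrow import Arrow
from pandas import DataFrame

from freqtrade.data.btanalysis import calculate_max_drawdown

values = [0.01, -0.02, 0.015, -0.03, 0.02]
dates = [Arrow(2020, 1, 1).shift(days=i) for i in range(len(values))]
df = DataFrame(zip(values, dates), columns=['profit', 'open_date'])

drawdown, high_point, low_point = calculate_max_drawdown(
    df, date_col='open_date', value_col='profit')
print(drawdown, high_point, low_point)
```
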
@ -36,7 +36,7 @@ def _backup_file(file: Path, copy_file: bool = False) -> None:
|
||||
"""
|
||||
Backup existing file to avoid deleting the user file
|
||||
:param file: complete path to the file
|
||||
:param touch_file: create an empty file in replacement
|
||||
:param copy_file: keep file in place too.
|
||||
:return: None
|
||||
"""
|
||||
file_swp = str(file) + '.swp'
|
||||
|
@ -163,8 +163,8 @@ def test_edge_results(edge_conf, mocker, caplog, data) -> None:
|
||||
for c, trade in enumerate(data.trades):
|
||||
res = results.iloc[c]
|
||||
assert res.exit_type == trade.sell_reason
|
||||
assert res.open_time == _get_frame_time_from_offset(trade.open_tick).replace(tzinfo=None)
|
||||
assert res.close_time == _get_frame_time_from_offset(trade.close_tick).replace(tzinfo=None)
|
||||
assert res.open_date == _get_frame_time_from_offset(trade.open_tick).replace(tzinfo=None)
|
||||
assert res.close_date == _get_frame_time_from_offset(trade.close_tick).replace(tzinfo=None)
|
||||
|
||||
|
||||
def test_adjust(mocker, edge_conf):
|
||||
@ -354,10 +354,8 @@ def test_process_expectancy(mocker, edge_conf, fee, risk_reward_ratio, expectanc
|
||||
'stoploss': -0.9,
|
||||
'profit_percent': '',
|
||||
'profit_abs': '',
|
||||
'open_time': np.datetime64('2018-10-03T00:05:00.000000000'),
|
||||
'close_time': np.datetime64('2018-10-03T00:10:00.000000000'),
|
||||
'open_index': 1,
|
||||
'close_index': 1,
|
||||
'open_date': np.datetime64('2018-10-03T00:05:00.000000000'),
|
||||
'close_date': np.datetime64('2018-10-03T00:10:00.000000000'),
|
||||
'trade_duration': '',
|
||||
'open_rate': 17,
|
||||
'close_rate': 17,
|
||||
@ -367,10 +365,8 @@ def test_process_expectancy(mocker, edge_conf, fee, risk_reward_ratio, expectanc
|
||||
'stoploss': -0.9,
|
||||
'profit_percent': '',
|
||||
'profit_abs': '',
|
||||
'open_time': np.datetime64('2018-10-03T00:20:00.000000000'),
|
||||
'close_time': np.datetime64('2018-10-03T00:25:00.000000000'),
|
||||
'open_index': 4,
|
||||
'close_index': 4,
|
||||
'open_date': np.datetime64('2018-10-03T00:20:00.000000000'),
|
||||
'close_date': np.datetime64('2018-10-03T00:25:00.000000000'),
|
||||
'trade_duration': '',
|
||||
'open_rate': 20,
|
||||
'close_rate': 20,
|
||||
@ -380,10 +376,8 @@ def test_process_expectancy(mocker, edge_conf, fee, risk_reward_ratio, expectanc
|
||||
'stoploss': -0.9,
|
||||
'profit_percent': '',
|
||||
'profit_abs': '',
|
||||
'open_time': np.datetime64('2018-10-03T00:30:00.000000000'),
|
||||
'close_time': np.datetime64('2018-10-03T00:40:00.000000000'),
|
||||
'open_index': 6,
|
||||
'close_index': 7,
|
||||
'open_date': np.datetime64('2018-10-03T00:30:00.000000000'),
|
||||
'close_date': np.datetime64('2018-10-03T00:40:00.000000000'),
|
||||
'trade_duration': '',
|
||||
'open_rate': 26,
|
||||
'close_rate': 34,
|
||||
@ -424,8 +418,8 @@ def test_process_expectancy_remove_pumps(mocker, edge_conf, fee,):
|
||||
'stoploss': -0.9,
|
||||
'profit_percent': '',
|
||||
'profit_abs': '',
|
||||
'open_time': np.datetime64('2018-10-03T00:05:00.000000000'),
|
||||
'close_time': np.datetime64('2018-10-03T00:10:00.000000000'),
|
||||
'open_date': np.datetime64('2018-10-03T00:05:00.000000000'),
|
||||
'close_date': np.datetime64('2018-10-03T00:10:00.000000000'),
|
||||
'open_index': 1,
|
||||
'close_index': 1,
|
||||
'trade_duration': '',
|
||||
@ -437,8 +431,8 @@ def test_process_expectancy_remove_pumps(mocker, edge_conf, fee,):
|
||||
'stoploss': -0.9,
|
||||
'profit_percent': '',
|
||||
'profit_abs': '',
|
||||
'open_time': np.datetime64('2018-10-03T00:20:00.000000000'),
|
||||
'close_time': np.datetime64('2018-10-03T00:25:00.000000000'),
|
||||
'open_date': np.datetime64('2018-10-03T00:20:00.000000000'),
|
||||
'close_date': np.datetime64('2018-10-03T00:25:00.000000000'),
|
||||
'open_index': 4,
|
||||
'close_index': 4,
|
||||
'trade_duration': '',
|
||||
@ -449,8 +443,8 @@ def test_process_expectancy_remove_pumps(mocker, edge_conf, fee,):
|
||||
'stoploss': -0.9,
|
||||
'profit_percent': '',
|
||||
'profit_abs': '',
|
||||
'open_time': np.datetime64('2018-10-03T00:20:00.000000000'),
|
||||
'close_time': np.datetime64('2018-10-03T00:25:00.000000000'),
|
||||
'open_date': np.datetime64('2018-10-03T00:20:00.000000000'),
|
||||
'close_date': np.datetime64('2018-10-03T00:25:00.000000000'),
|
||||
'open_index': 4,
|
||||
'close_index': 4,
|
||||
'trade_duration': '',
|
||||
@ -461,8 +455,8 @@ def test_process_expectancy_remove_pumps(mocker, edge_conf, fee,):
|
||||
'stoploss': -0.9,
|
||||
'profit_percent': '',
|
||||
'profit_abs': '',
|
||||
'open_time': np.datetime64('2018-10-03T00:20:00.000000000'),
|
||||
'close_time': np.datetime64('2018-10-03T00:25:00.000000000'),
|
||||
'open_date': np.datetime64('2018-10-03T00:20:00.000000000'),
|
||||
'close_date': np.datetime64('2018-10-03T00:25:00.000000000'),
|
||||
'open_index': 4,
|
||||
'close_index': 4,
|
||||
'trade_duration': '',
|
||||
@ -473,8 +467,8 @@ def test_process_expectancy_remove_pumps(mocker, edge_conf, fee,):
|
||||
'stoploss': -0.9,
|
||||
'profit_percent': '',
|
||||
'profit_abs': '',
|
||||
'open_time': np.datetime64('2018-10-03T00:20:00.000000000'),
|
||||
'close_time': np.datetime64('2018-10-03T00:25:00.000000000'),
|
||||
'open_date': np.datetime64('2018-10-03T00:20:00.000000000'),
|
||||
'close_date': np.datetime64('2018-10-03T00:25:00.000000000'),
|
||||
'open_index': 4,
|
||||
'close_index': 4,
|
||||
'trade_duration': '',
|
||||
@ -486,8 +480,8 @@ def test_process_expectancy_remove_pumps(mocker, edge_conf, fee,):
|
||||
'stoploss': -0.9,
|
||||
'profit_percent': '',
|
||||
'profit_abs': '',
|
||||
'open_time': np.datetime64('2018-10-03T00:30:00.000000000'),
|
||||
'close_time': np.datetime64('2018-10-03T00:40:00.000000000'),
|
||||
'open_date': np.datetime64('2018-10-03T00:30:00.000000000'),
|
||||
'close_date': np.datetime64('2018-10-03T00:40:00.000000000'),
|
||||
'open_index': 6,
|
||||
'close_index': 7,
|
||||
'trade_duration': '',
|
||||
|
@ -15,7 +15,7 @@ from freqtrade.exceptions import (DDosProtection, DependencyException,
|
||||
from freqtrade.exchange import Binance, Exchange, Kraken
|
||||
from freqtrade.exchange.common import (API_RETRY_COUNT, API_FETCH_ORDER_RETRY_COUNT,
|
||||
calculate_backoff)
|
||||
from freqtrade.exchange.exchange import (market_is_active, symbol_is_pair,
|
||||
from freqtrade.exchange.exchange import (market_is_active,
|
||||
timeframe_to_minutes,
|
||||
timeframe_to_msecs,
|
||||
timeframe_to_next_date,
|
||||
@ -2245,25 +2245,42 @@ def test_timeframe_to_next_date():
|
||||
assert timeframe_to_next_date("5m") > date
|
||||
|
||||
|
||||
@pytest.mark.parametrize("market_symbol,base_currency,quote_currency,expected_result", [
|
||||
("BTC/USDT", None, None, True),
|
||||
("USDT/BTC", None, None, True),
|
||||
("BTCUSDT", None, None, False),
|
||||
("BTC/USDT", None, "USDT", True),
|
||||
("USDT/BTC", None, "USDT", False),
|
||||
("BTCUSDT", None, "USDT", False),
|
||||
("BTC/USDT", "BTC", None, True),
|
||||
("USDT/BTC", "BTC", None, False),
|
||||
("BTCUSDT", "BTC", None, False),
|
||||
("BTC/USDT", "BTC", "USDT", True),
|
||||
("BTC/USDT", "USDT", "BTC", False),
|
||||
("BTC/USDT", "BTC", "USD", False),
|
||||
("BTCUSDT", "BTC", "USDT", False),
|
||||
("BTC/", None, None, False),
|
||||
("/USDT", None, None, False),
|
||||
@pytest.mark.parametrize("market_symbol,base,quote,exchange,add_dict,expected_result", [
|
||||
("BTC/USDT", 'BTC', 'USDT', "binance", {}, True),
|
||||
("USDT/BTC", 'USDT', 'BTC', "binance", {}, True),
|
||||
("USDT/BTC", 'BTC', 'USDT', "binance", {}, False), # Reversed currencies
|
||||
("BTCUSDT", 'BTC', 'USDT', "binance", {}, False), # No seperating /
|
||||
("BTCUSDT", None, "USDT", "binance", {}, False), #
|
||||
("USDT/BTC", "BTC", None, "binance", {}, False),
|
||||
("BTCUSDT", "BTC", None, "binance", {}, False),
|
||||
("BTC/USDT", "BTC", "USDT", "binance", {}, True),
|
||||
("BTC/USDT", "USDT", "BTC", "binance", {}, False), # reversed currencies
|
||||
("BTC/USDT", "BTC", "USD", "binance", {}, False), # Wrong quote currency
|
||||
("BTC/", "BTC", 'UNK', "binance", {}, False),
|
||||
("/USDT", 'UNK', 'USDT', "binance", {}, False),
|
||||
("BTC/EUR", 'BTC', 'EUR', "kraken", {"darkpool": False}, True),
|
||||
("EUR/BTC", 'EUR', 'BTC', "kraken", {"darkpool": False}, True),
|
||||
("EUR/BTC", 'BTC', 'EUR', "kraken", {"darkpool": False}, False), # Reversed currencies
|
||||
("BTC/EUR", 'BTC', 'USD', "kraken", {"darkpool": False}, False), # wrong quote currency
|
||||
("BTC/EUR", 'BTC', 'EUR', "kraken", {"darkpool": True}, False), # no darkpools
|
||||
("BTC/EUR.d", 'BTC', 'EUR', "kraken", {"darkpool": True}, False), # no darkpools
|
||||
("BTC/USD", 'BTC', 'USD', "ftx", {'spot': True}, True),
|
||||
("USD/BTC", 'USD', 'BTC', "ftx", {'spot': True}, True),
|
||||
("BTC/USD", 'BTC', 'USDT', "ftx", {'spot': True}, False), # Wrong quote currency
|
||||
("BTC/USD", 'USD', 'BTC', "ftx", {'spot': True}, False), # Reversed currencies
|
||||
("BTC/USD", 'BTC', 'USD', "ftx", {'spot': False}, False), # Can only trade spot markets
|
||||
("BTC-PERP", 'BTC', 'USD', "ftx", {'spot': False}, False), # Can only trade spot markets
|
||||
])
|
||||
def test_symbol_is_pair(market_symbol, base_currency, quote_currency, expected_result) -> None:
|
||||
assert symbol_is_pair(market_symbol, base_currency, quote_currency) == expected_result
|
||||
def test_market_is_tradable(mocker, default_conf, market_symbol, base,
|
||||
quote, add_dict, exchange, expected_result) -> None:
|
||||
ex = get_patched_exchange(mocker, default_conf, id=exchange)
|
||||
market = {
|
||||
'symbol': market_symbol,
|
||||
'base': base,
|
||||
'quote': quote,
|
||||
**(add_dict),
|
||||
}
|
||||
assert ex.market_is_tradable(market) == expected_result
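
The rewritten parametrisation above swaps the old `symbol_is_pair` helper for `Exchange.market_is_tradable`, which looks at the full ccxt market dict and adds exchange-specific rules (Kraken darkpool markets and non-spot FTX markets are rejected). The sketch below only illustrates the behaviour the test cases encode; it is not freqtrade's actual implementation:

```python
# Illustration of the behaviour encoded by the parametrized cases above - not the
# real Exchange.market_is_tradable implementation.
from typing import Any, Dict


def market_is_tradable_sketch(market: Dict[str, Any], exchange: str) -> bool:
    symbol = market.get('symbol') or ''
    base, quote = market.get('base'), market.get('quote')
    # The symbol must be a proper "BASE/QUOTE" pair matching the market's own base/quote.
    if not base or not quote or symbol != f"{base}/{quote}":
        return False
    # Exchange-specific restrictions visible in the cases above:
    if exchange == 'kraken' and market.get('darkpool'):
        return False
    if exchange == 'ftx' and not market.get('spot', True):
        return False
    return True


assert market_is_tradable_sketch({'symbol': 'BTC/USDT', 'base': 'BTC', 'quote': 'USDT'}, 'binance')
assert not market_is_tradable_sketch(
    {'symbol': 'BTC/EUR', 'base': 'BTC', 'quote': 'EUR', 'darkpool': True}, 'kraken')
```
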
@pytest.mark.parametrize("market,expected_result", [
|
||||
|
@ -395,5 +395,5 @@ def test_backtest_results(default_conf, fee, mocker, caplog, data) -> None:
|
||||
for c, trade in enumerate(data.trades):
|
||||
res = results.iloc[c]
|
||||
assert res.sell_reason == trade.sell_reason
|
||||
assert res.open_time == _get_frame_time_from_offset(trade.open_tick)
|
||||
assert res.close_time == _get_frame_time_from_offset(trade.close_tick)
|
||||
assert res.open_date == _get_frame_time_from_offset(trade.open_tick)
|
||||
assert res.close_date == _get_frame_time_from_offset(trade.close_tick)
|
||||
|
@ -354,8 +354,8 @@ def test_backtesting_start(default_conf, mocker, testdatadir, caplog) -> None:
|
||||
exists = [
|
||||
'Using stake_currency: BTC ...',
|
||||
'Using stake_amount: 0.001 ...',
|
||||
'Backtesting with data from 2017-11-14T21:17:00+00:00 '
|
||||
'up to 2017-11-14T22:59:00+00:00 (0 days)..'
|
||||
'Backtesting with data from 2017-11-14 21:17:00 '
|
||||
'up to 2017-11-14 22:59:00 (0 days)..'
|
||||
]
|
||||
for line in exists:
|
||||
assert log_has(line, caplog)
|
||||
@ -464,28 +464,29 @@ def test_backtest(default_conf, fee, mocker, testdatadir) -> None:
|
||||
{'pair': [pair, pair],
|
||||
'profit_percent': [0.0, 0.0],
|
||||
'profit_abs': [0.0, 0.0],
|
||||
'open_time': pd.to_datetime([Arrow(2018, 1, 29, 18, 40, 0).datetime,
|
||||
'open_date': pd.to_datetime([Arrow(2018, 1, 29, 18, 40, 0).datetime,
|
||||
Arrow(2018, 1, 30, 3, 30, 0).datetime], utc=True
|
||||
),
|
||||
'close_time': pd.to_datetime([Arrow(2018, 1, 29, 22, 35, 0).datetime,
|
||||
'open_rate': [0.104445, 0.10302485],
|
||||
'open_fee': [0.0025, 0.0025],
|
||||
'close_date': pd.to_datetime([Arrow(2018, 1, 29, 22, 35, 0).datetime,
|
||||
Arrow(2018, 1, 30, 4, 10, 0).datetime], utc=True),
|
||||
'open_index': [78, 184],
|
||||
'close_index': [125, 192],
|
||||
'close_rate': [0.104969, 0.103541],
|
||||
'close_fee': [0.0025, 0.0025],
|
||||
'amount': [0.00957442, 0.0097064],
|
||||
'trade_duration': [235, 40],
|
||||
'open_at_end': [False, False],
|
||||
'open_rate': [0.104445, 0.10302485],
|
||||
'close_rate': [0.104969, 0.103541],
|
||||
'sell_reason': [SellType.ROI, SellType.ROI]
|
||||
})
|
||||
pd.testing.assert_frame_equal(results, expected)
|
||||
data_pair = processed[pair]
|
||||
for _, t in results.iterrows():
|
||||
ln = data_pair.loc[data_pair["date"] == t["open_time"]]
|
||||
ln = data_pair.loc[data_pair["date"] == t["open_date"]]
|
||||
# Check open trade rate aligns to open rate
assert ln is not None
assert round(ln.iloc[0]["open"], 6) == round(t["open_rate"], 6)
# Check close trade rate aligns to close rate or is between high and low
|
||||
ln = data_pair.loc[data_pair["date"] == t["close_time"]]
|
||||
ln = data_pair.loc[data_pair["date"] == t["close_date"]]
|
||||
assert (round(ln.iloc[0]["open"], 6) == round(t["close_rate"], 6) or
|
||||
round(ln.iloc[0]["low"], 6) < round(
|
||||
t["close_rate"], 6) < round(ln.iloc[0]["high"], 6))
|
||||
@ -677,10 +678,10 @@ def test_backtest_start_timerange(default_conf, mocker, caplog, testdatadir):
|
||||
f'Using data directory: {testdatadir} ...',
|
||||
'Using stake_currency: BTC ...',
|
||||
'Using stake_amount: 0.001 ...',
|
||||
'Loading data from 2017-11-14T20:57:00+00:00 '
|
||||
'up to 2017-11-14T22:58:00+00:00 (0 days)..',
|
||||
'Backtesting with data from 2017-11-14T21:17:00+00:00 '
|
||||
'up to 2017-11-14T22:58:00+00:00 (0 days)..',
|
||||
'Loading data from 2017-11-14 20:57:00 '
|
||||
'up to 2017-11-14 22:58:00 (0 days)..',
|
||||
'Backtesting with data from 2017-11-14 21:17:00 '
|
||||
'up to 2017-11-14 22:58:00 (0 days)..',
|
||||
'Parameter --enable-position-stacking detected ...'
|
||||
]
|
||||
|
||||
@ -707,6 +708,7 @@ def test_backtest_start_multi_strat(default_conf, mocker, caplog, testdatadir):
|
||||
generate_pair_metrics=MagicMock(),
|
||||
generate_sell_reason_stats=sell_reason_mock,
|
||||
generate_strategy_metrics=strat_summary,
|
||||
generate_daily_stats=MagicMock(),
|
||||
)
|
||||
patched_configuration_load_config_file(mocker, default_conf)
|
||||
|
||||
@ -740,10 +742,10 @@ def test_backtest_start_multi_strat(default_conf, mocker, caplog, testdatadir):
|
||||
f'Using data directory: {testdatadir} ...',
|
||||
'Using stake_currency: BTC ...',
|
||||
'Using stake_amount: 0.001 ...',
|
||||
'Loading data from 2017-11-14T20:57:00+00:00 '
|
||||
'up to 2017-11-14T22:58:00+00:00 (0 days)..',
|
||||
'Backtesting with data from 2017-11-14T21:17:00+00:00 '
|
||||
'up to 2017-11-14T22:58:00+00:00 (0 days)..',
|
||||
'Loading data from 2017-11-14 20:57:00 '
|
||||
'up to 2017-11-14 22:58:00 (0 days)..',
|
||||
'Backtesting with data from 2017-11-14 21:17:00 '
|
||||
'up to 2017-11-14 22:58:00 (0 days)..',
|
||||
'Parameter --enable-position-stacking detected ...',
|
||||
'Running backtesting for Strategy DefaultStrategy',
|
||||
'Running backtesting for Strategy TestStrategyLegacy',
|
||||
@ -761,13 +763,11 @@ def test_backtest_start_multi_strat_nomock(default_conf, mocker, caplog, testdat
|
||||
pd.DataFrame({'pair': ['XRP/BTC', 'LTC/BTC'],
|
||||
'profit_percent': [0.0, 0.0],
|
||||
'profit_abs': [0.0, 0.0],
|
||||
'open_time': pd.to_datetime(['2018-01-29 18:40:00',
|
||||
'open_date': pd.to_datetime(['2018-01-29 18:40:00',
|
||||
'2018-01-30 03:30:00', ], utc=True
|
||||
),
|
||||
'close_time': pd.to_datetime(['2018-01-29 20:45:00',
|
||||
'close_date': pd.to_datetime(['2018-01-29 20:45:00',
|
||||
'2018-01-30 05:35:00', ], utc=True),
|
||||
'open_index': [78, 184],
|
||||
'close_index': [125, 192],
|
||||
'trade_duration': [235, 40],
|
||||
'open_at_end': [False, False],
|
||||
'open_rate': [0.104445, 0.10302485],
|
||||
@ -777,15 +777,13 @@ def test_backtest_start_multi_strat_nomock(default_conf, mocker, caplog, testdat
|
||||
pd.DataFrame({'pair': ['XRP/BTC', 'LTC/BTC', 'ETH/BTC'],
|
||||
'profit_percent': [0.03, 0.01, 0.1],
|
||||
'profit_abs': [0.01, 0.02, 0.2],
|
||||
'open_time': pd.to_datetime(['2018-01-29 18:40:00',
|
||||
'open_date': pd.to_datetime(['2018-01-29 18:40:00',
|
||||
'2018-01-30 03:30:00',
|
||||
'2018-01-30 05:30:00'], utc=True
|
||||
),
|
||||
'close_time': pd.to_datetime(['2018-01-29 20:45:00',
|
||||
'close_date': pd.to_datetime(['2018-01-29 20:45:00',
|
||||
'2018-01-30 05:35:00',
|
||||
'2018-01-30 08:30:00'], utc=True),
|
||||
'open_index': [78, 184, 185],
|
||||
'close_index': [125, 224, 205],
|
||||
'trade_duration': [47, 40, 20],
|
||||
'open_at_end': [False, False, False],
|
||||
'open_rate': [0.104445, 0.10302485, 0.122541],
|
||||
@ -823,10 +821,10 @@ def test_backtest_start_multi_strat_nomock(default_conf, mocker, caplog, testdat
|
||||
f'Using data directory: {testdatadir} ...',
|
||||
'Using stake_currency: BTC ...',
|
||||
'Using stake_amount: 0.001 ...',
|
||||
'Loading data from 2017-11-14T20:57:00+00:00 '
|
||||
'up to 2017-11-14T22:58:00+00:00 (0 days)..',
|
||||
'Backtesting with data from 2017-11-14T21:17:00+00:00 '
|
||||
'up to 2017-11-14T22:58:00+00:00 (0 days)..',
|
||||
'Loading data from 2017-11-14 20:57:00 '
|
||||
'up to 2017-11-14 22:58:00 (0 days)..',
|
||||
'Backtesting with data from 2017-11-14 21:17:00 '
|
||||
'up to 2017-11-14 22:58:00 (0 days)..',
|
||||
'Parameter --enable-position-stacking detected ...',
|
||||
'Running backtesting for Strategy DefaultStrategy',
|
||||
'Running backtesting for Strategy TestStrategyLegacy',
|
||||
|
@ -105,3 +105,17 @@ def test_edge_init_fee(mocker, edge_conf) -> None:
|
||||
edge_cli = EdgeCli(edge_conf)
|
||||
assert edge_cli.edge.fee == 0.1234
|
||||
assert fee_mock.call_count == 0
|
||||
|
||||
|
||||
def test_edge_start(mocker, edge_conf) -> None:
|
||||
mock_calculate = mocker.patch('freqtrade.edge.edge_positioning.Edge.calculate',
|
||||
return_value=True)
|
||||
table_mock = mocker.patch('freqtrade.optimize.edge_cli.generate_edge_table')
|
||||
|
||||
patch_exchange(mocker)
|
||||
edge_conf['stake_amount'] = 20
|
||||
|
||||
edge_cli = EdgeCli(edge_conf)
|
||||
edge_cli.start()
|
||||
assert mock_calculate.call_count == 1
|
||||
assert table_mock.call_count == 1
|
||||
|
@ -3,6 +3,7 @@ import locale
|
||||
import logging
|
||||
from datetime import datetime
|
||||
from pathlib import Path
|
||||
from copy import deepcopy
|
||||
from typing import Dict, List
|
||||
from unittest.mock import MagicMock, PropertyMock
|
||||
|
||||
@ -16,7 +17,6 @@ from freqtrade.commands.optimize_commands import (setup_optimize_configuration,
|
||||
start_hyperopt)
|
||||
from freqtrade.data.history import load_data
|
||||
from freqtrade.exceptions import DependencyException, OperationalException
|
||||
from freqtrade.optimize.default_hyperopt import DefaultHyperOpt
|
||||
from freqtrade.optimize.default_hyperopt_loss import DefaultHyperOptLoss
|
||||
from freqtrade.optimize.hyperopt import Hyperopt
|
||||
from freqtrade.resolvers.hyperopt_resolver import (HyperOptLossResolver,
|
||||
@ -26,15 +26,28 @@ from freqtrade.strategy.interface import SellType
|
||||
from tests.conftest import (get_args, log_has, log_has_re, patch_exchange,
|
||||
patched_configuration_load_config_file)
|
||||
|
||||
from .hyperopts.default_hyperopt import DefaultHyperOpt
|
||||
|
||||
|
||||
@pytest.fixture(scope='function')
|
||||
def hyperopt(default_conf, mocker):
|
||||
default_conf.update({
|
||||
'spaces': ['default'],
|
||||
def hyperopt_conf(default_conf):
|
||||
hyperconf = deepcopy(default_conf)
|
||||
hyperconf.update({
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'hyperopt_path': str(Path(__file__).parent / 'hyperopts'),
|
||||
'epochs': 1,
|
||||
'timerange': None,
|
||||
'spaces': ['default'],
|
||||
'hyperopt_jobs': 1,
|
||||
})
|
||||
return hyperconf
|
||||
|
||||
|
||||
@pytest.fixture(scope='function')
|
||||
def hyperopt(hyperopt_conf, mocker):
|
||||
|
||||
patch_exchange(mocker)
|
||||
return Hyperopt(default_conf)
|
||||
return Hyperopt(hyperopt_conf)
|
||||
|
||||
|
||||
@pytest.fixture(scope='function')
|
||||
@ -46,7 +59,7 @@ def hyperopt_results():
|
||||
'profit_abs': [-0.2, 0.4, 0.6],
|
||||
'trade_duration': [10, 30, 10],
|
||||
'sell_reason': [SellType.STOP_LOSS, SellType.ROI, SellType.ROI],
|
||||
'close_time':
|
||||
'close_date':
|
||||
[
|
||||
datetime(2019, 1, 1, 9, 26, 3, 478039),
|
||||
datetime(2019, 2, 1, 9, 26, 3, 478039),
|
||||
@ -160,7 +173,7 @@ def test_setup_hyperopt_configuration_with_arguments(mocker, default_conf, caplo
|
||||
assert log_has('Parameter --print-all detected ...', caplog)
|
||||
|
||||
|
||||
def test_setup_hyperopt_configuration_unlimited_stake_amount(mocker, default_conf, caplog) -> None:
|
||||
def test_setup_hyperopt_configuration_unlimited_stake_amount(mocker, default_conf) -> None:
|
||||
default_conf['stake_amount'] = constants.UNLIMITED_STAKE_AMOUNT
|
||||
|
||||
patched_configuration_load_config_file(mocker, default_conf)
|
||||
@ -201,7 +214,7 @@ def test_hyperoptresolver(mocker, default_conf, caplog) -> None:
|
||||
assert hasattr(x, "timeframe")
|
||||
|
||||
|
||||
def test_hyperoptresolver_wrongname(mocker, default_conf, caplog) -> None:
|
||||
def test_hyperoptresolver_wrongname(default_conf) -> None:
|
||||
default_conf.update({'hyperopt': "NonExistingHyperoptClass"})
|
||||
|
||||
with pytest.raises(OperationalException, match=r'Impossible to load Hyperopt.*'):
|
||||
@ -216,7 +229,7 @@ def test_hyperoptresolver_noname(default_conf):
|
||||
HyperOptResolver.load_hyperopt(default_conf)
|
||||
|
||||
|
||||
def test_hyperoptlossresolver(mocker, default_conf, caplog) -> None:
|
||||
def test_hyperoptlossresolver(mocker, default_conf) -> None:
|
||||
|
||||
hl = DefaultHyperOptLoss
|
||||
mocker.patch(
|
||||
@ -227,14 +240,14 @@ def test_hyperoptlossresolver(mocker, default_conf, caplog) -> None:
|
||||
assert hasattr(x, "hyperopt_loss_function")
|
||||
|
||||
|
||||
def test_hyperoptlossresolver_wrongname(mocker, default_conf, caplog) -> None:
|
||||
def test_hyperoptlossresolver_wrongname(default_conf) -> None:
|
||||
default_conf.update({'hyperopt_loss': "NonExistingLossClass"})
|
||||
|
||||
with pytest.raises(OperationalException, match=r'Impossible to load HyperoptLoss.*'):
|
||||
HyperOptLossResolver.load_hyperoptloss(default_conf)
|
||||
|
||||
|
||||
def test_start_not_installed(mocker, default_conf, caplog, import_fails) -> None:
|
||||
def test_start_not_installed(mocker, default_conf, import_fails) -> None:
|
||||
start_mock = MagicMock()
|
||||
patched_configuration_load_config_file(mocker, default_conf)
|
||||
|
||||
@ -245,6 +258,8 @@ def test_start_not_installed(mocker, default_conf, caplog, import_fails) -> None
|
||||
'hyperopt',
|
||||
'--config', 'config.json',
|
||||
'--hyperopt', 'DefaultHyperOpt',
|
||||
'--hyperopt-path',
|
||||
str(Path(__file__).parent / "hyperopts"),
|
||||
'--epochs', '5'
|
||||
]
|
||||
pargs = get_args(args)
|
||||
@ -253,9 +268,9 @@ def test_start_not_installed(mocker, default_conf, caplog, import_fails) -> None
|
||||
start_hyperopt(pargs)
|
||||
|
||||
|
||||
def test_start(mocker, default_conf, caplog) -> None:
|
||||
def test_start(mocker, hyperopt_conf, caplog) -> None:
|
||||
start_mock = MagicMock()
|
||||
patched_configuration_load_config_file(mocker, default_conf)
|
||||
patched_configuration_load_config_file(mocker, hyperopt_conf)
|
||||
mocker.patch('freqtrade.optimize.hyperopt.Hyperopt.start', start_mock)
|
||||
patch_exchange(mocker)
|
||||
|
||||
@ -272,8 +287,8 @@ def test_start(mocker, default_conf, caplog) -> None:
|
||||
assert start_mock.call_count == 1
|
||||
|
||||
|
||||
def test_start_no_data(mocker, default_conf, caplog) -> None:
|
||||
patched_configuration_load_config_file(mocker, default_conf)
|
||||
def test_start_no_data(mocker, hyperopt_conf) -> None:
|
||||
patched_configuration_load_config_file(mocker, hyperopt_conf)
|
||||
mocker.patch('freqtrade.data.history.load_pair_history', MagicMock(return_value=pd.DataFrame))
|
||||
mocker.patch(
|
||||
'freqtrade.optimize.hyperopt.get_timerange',
|
||||
@ -293,9 +308,9 @@ def test_start_no_data(mocker, default_conf, caplog) -> None:
|
||||
start_hyperopt(pargs)
|
||||
|
||||
|
||||
def test_start_filelock(mocker, default_conf, caplog) -> None:
|
||||
start_mock = MagicMock(side_effect=Timeout(Hyperopt.get_lock_filename(default_conf)))
|
||||
patched_configuration_load_config_file(mocker, default_conf)
|
||||
def test_start_filelock(mocker, hyperopt_conf, caplog) -> None:
|
||||
start_mock = MagicMock(side_effect=Timeout(Hyperopt.get_lock_filename(hyperopt_conf)))
|
||||
patched_configuration_load_config_file(mocker, hyperopt_conf)
|
||||
mocker.patch('freqtrade.optimize.hyperopt.Hyperopt.start', start_mock)
|
||||
patch_exchange(mocker)
|
||||
|
||||
@ -519,7 +534,7 @@ def test_roi_table_generation(hyperopt) -> None:
|
||||
assert hyperopt.custom_hyperopt.generate_roi_table(params) == {0: 6, 15: 3, 25: 1, 30: 0}
|
||||
|
||||
|
||||
def test_start_calls_optimizer(mocker, default_conf, caplog, capsys) -> None:
|
||||
def test_start_calls_optimizer(mocker, hyperopt_conf, capsys) -> None:
|
||||
dumper = mocker.patch('freqtrade.optimize.hyperopt.dump', MagicMock())
|
||||
mocker.patch('freqtrade.optimize.backtesting.Backtesting.load_bt_data',
|
||||
MagicMock(return_value=(MagicMock(), None)))
|
||||
@ -545,15 +560,9 @@ def test_start_calls_optimizer(mocker, default_conf, caplog, capsys) -> None:
|
||||
)
|
||||
patch_exchange(mocker)
|
||||
# Co-test loading timeframe from strategy
|
||||
del default_conf['timeframe']
|
||||
default_conf.update({'config': 'config.json.example',
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'epochs': 1,
|
||||
'timerange': None,
|
||||
'spaces': 'default',
|
||||
'hyperopt_jobs': 1, })
|
||||
del hyperopt_conf['timeframe']
|
||||
|
||||
hyperopt = Hyperopt(default_conf)
|
||||
hyperopt = Hyperopt(hyperopt_conf)
|
||||
hyperopt.backtesting.strategy.ohlcvdata_to_dataframe = MagicMock()
|
||||
hyperopt.custom_hyperopt.generate_roi_table = MagicMock(return_value={})
|
||||
|
||||
@ -569,7 +578,7 @@ def test_start_calls_optimizer(mocker, default_conf, caplog, capsys) -> None:
|
||||
assert hasattr(hyperopt.backtesting.strategy, "advise_sell")
|
||||
assert hasattr(hyperopt.backtesting.strategy, "advise_buy")
|
||||
assert hasattr(hyperopt, "max_open_trades")
|
||||
assert hyperopt.max_open_trades == default_conf['max_open_trades']
|
||||
assert hyperopt.max_open_trades == hyperopt_conf['max_open_trades']
|
||||
assert hasattr(hyperopt, "position_stacking")
|
||||
|
||||
|
||||
@ -686,11 +695,34 @@ def test_buy_strategy_generator(hyperopt, testdatadir) -> None:
|
||||
assert 1 in result['buy']
|
||||
|
||||
|
||||
def test_generate_optimizer(mocker, default_conf) -> None:
|
||||
default_conf.update({'config': 'config.json.example',
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'timerange': None,
|
||||
'spaces': 'all',
|
||||
def test_sell_strategy_generator(hyperopt, testdatadir) -> None:
|
||||
data = load_data(testdatadir, '1m', ['UNITTEST/BTC'], fill_up_missing=True)
|
||||
dataframes = hyperopt.backtesting.strategy.ohlcvdata_to_dataframe(data)
|
||||
dataframe = hyperopt.custom_hyperopt.populate_indicators(dataframes['UNITTEST/BTC'],
|
||||
{'pair': 'UNITTEST/BTC'})
|
||||
|
||||
populate_sell_trend = hyperopt.custom_hyperopt.sell_strategy_generator(
|
||||
{
|
||||
'sell-adx-value': 20,
|
||||
'sell-fastd-value': 75,
|
||||
'sell-mfi-value': 80,
|
||||
'sell-rsi-value': 20,
|
||||
'sell-adx-enabled': True,
|
||||
'sell-fastd-enabled': True,
|
||||
'sell-mfi-enabled': True,
|
||||
'sell-rsi-enabled': True,
|
||||
'sell-trigger': 'sell-bb_upper'
|
||||
}
|
||||
)
|
||||
result = populate_sell_trend(dataframe, {'pair': 'UNITTEST/BTC'})
|
||||
# Check if some indicators are generated. We will not test all of them
|
||||
print(result)
|
||||
assert 'sell' in result
|
||||
assert 1 in result['sell']
|
||||
|
||||
|
||||
def test_generate_optimizer(mocker, hyperopt_conf) -> None:
|
||||
hyperopt_conf.update({'spaces': 'all',
|
||||
'hyperopt_min_trades': 1,
|
||||
})
|
||||
|
||||
@ -790,48 +822,35 @@ def test_generate_optimizer(mocker, default_conf) -> None:
|
||||
'total_profit': 0.00023300
|
||||
}
|
||||
|
||||
hyperopt = Hyperopt(default_conf)
|
||||
hyperopt = Hyperopt(hyperopt_conf)
|
||||
hyperopt.dimensions = hyperopt.hyperopt_space()
|
||||
generate_optimizer_value = hyperopt.generate_optimizer(list(optimizer_param.values()))
|
||||
assert generate_optimizer_value == response_expected
|
||||
|
||||
|
||||
def test_clean_hyperopt(mocker, default_conf, caplog):
|
||||
def test_clean_hyperopt(mocker, hyperopt_conf, caplog):
|
||||
patch_exchange(mocker)
|
||||
default_conf.update({'config': 'config.json.example',
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'epochs': 1,
|
||||
'timerange': None,
|
||||
'spaces': 'default',
|
||||
'hyperopt_jobs': 1,
|
||||
})
|
||||
|
||||
mocker.patch("freqtrade.optimize.hyperopt.Path.is_file", MagicMock(return_value=True))
|
||||
unlinkmock = mocker.patch("freqtrade.optimize.hyperopt.Path.unlink", MagicMock())
|
||||
h = Hyperopt(default_conf)
|
||||
h = Hyperopt(hyperopt_conf)
|
||||
|
||||
assert unlinkmock.call_count == 2
|
||||
assert log_has(f"Removing `{h.data_pickle_file}`.", caplog)
|
||||
|
||||
|
||||
def test_continue_hyperopt(mocker, default_conf, caplog):
|
||||
def test_continue_hyperopt(mocker, hyperopt_conf, caplog):
|
||||
patch_exchange(mocker)
|
||||
default_conf.update({'config': 'config.json.example',
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'epochs': 1,
|
||||
'timerange': None,
|
||||
'spaces': 'default',
|
||||
'hyperopt_jobs': 1,
|
||||
'hyperopt_continue': True
|
||||
})
|
||||
hyperopt_conf.update({'hyperopt_continue': True})
|
||||
mocker.patch("freqtrade.optimize.hyperopt.Path.is_file", MagicMock(return_value=True))
|
||||
unlinkmock = mocker.patch("freqtrade.optimize.hyperopt.Path.unlink", MagicMock())
|
||||
Hyperopt(default_conf)
|
||||
Hyperopt(hyperopt_conf)
|
||||
|
||||
assert unlinkmock.call_count == 0
|
||||
assert log_has("Continuing on previous hyperopt results.", caplog)
|
||||
|
||||
|
||||
def test_print_json_spaces_all(mocker, default_conf, caplog, capsys) -> None:
|
||||
def test_print_json_spaces_all(mocker, hyperopt_conf, capsys) -> None:
|
||||
dumper = mocker.patch('freqtrade.optimize.hyperopt.dump', MagicMock())
|
||||
mocker.patch('freqtrade.optimize.backtesting.Backtesting.load_bt_data',
|
||||
MagicMock(return_value=(MagicMock(), None)))
|
||||
@ -862,16 +881,12 @@ def test_print_json_spaces_all(mocker, default_conf, caplog, capsys) -> None:
|
||||
)
|
||||
patch_exchange(mocker)
|
||||
|
||||
default_conf.update({'config': 'config.json.example',
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'epochs': 1,
|
||||
'timerange': None,
|
||||
'spaces': 'all',
|
||||
hyperopt_conf.update({'spaces': 'all',
|
||||
'hyperopt_jobs': 1,
|
||||
'print_json': True,
|
||||
})
|
||||
|
||||
hyperopt = Hyperopt(default_conf)
|
||||
hyperopt = Hyperopt(hyperopt_conf)
|
||||
hyperopt.backtesting.strategy.ohlcvdata_to_dataframe = MagicMock()
|
||||
hyperopt.custom_hyperopt.generate_roi_table = MagicMock(return_value={})
|
||||
|
||||
@ -890,7 +905,7 @@ def test_print_json_spaces_all(mocker, default_conf, caplog, capsys) -> None:
|
||||
assert dumper.call_count == 2
|
||||
|
||||
|
||||
def test_print_json_spaces_default(mocker, default_conf, caplog, capsys) -> None:
|
||||
def test_print_json_spaces_default(mocker, hyperopt_conf, capsys) -> None:
|
||||
dumper = mocker.patch('freqtrade.optimize.hyperopt.dump', MagicMock())
|
||||
mocker.patch('freqtrade.optimize.backtesting.Backtesting.load_bt_data',
|
||||
MagicMock(return_value=(MagicMock(), None)))
|
||||
@ -920,16 +935,9 @@ def test_print_json_spaces_default(mocker, default_conf, caplog, capsys) -> None
|
||||
)
|
||||
patch_exchange(mocker)
|
||||
|
||||
default_conf.update({'config': 'config.json.example',
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'epochs': 1,
|
||||
'timerange': None,
|
||||
'spaces': 'default',
|
||||
'hyperopt_jobs': 1,
|
||||
'print_json': True,
|
||||
})
|
||||
hyperopt_conf.update({'print_json': True})
|
||||
|
||||
hyperopt = Hyperopt(default_conf)
|
||||
hyperopt = Hyperopt(hyperopt_conf)
|
||||
hyperopt.backtesting.strategy.ohlcvdata_to_dataframe = MagicMock()
|
||||
hyperopt.custom_hyperopt.generate_roi_table = MagicMock(return_value={})
|
||||
|
||||
@ -944,7 +952,7 @@ def test_print_json_spaces_default(mocker, default_conf, caplog, capsys) -> None
|
||||
assert dumper.call_count == 2
|
||||
|
||||
|
||||
def test_print_json_spaces_roi_stoploss(mocker, default_conf, caplog, capsys) -> None:
|
||||
def test_print_json_spaces_roi_stoploss(mocker, hyperopt_conf, capsys) -> None:
|
||||
dumper = mocker.patch('freqtrade.optimize.hyperopt.dump', MagicMock())
|
||||
mocker.patch('freqtrade.optimize.backtesting.Backtesting.load_bt_data',
|
||||
MagicMock(return_value=(MagicMock(), None)))
|
||||
@ -970,16 +978,12 @@ def test_print_json_spaces_roi_stoploss(mocker, default_conf, caplog, capsys) ->
|
||||
)
|
||||
patch_exchange(mocker)
|
||||
|
||||
default_conf.update({'config': 'config.json.example',
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'epochs': 1,
|
||||
'timerange': None,
|
||||
'spaces': 'roi stoploss',
|
||||
hyperopt_conf.update({'spaces': 'roi stoploss',
|
||||
'hyperopt_jobs': 1,
|
||||
'print_json': True,
|
||||
})
|
||||
|
||||
hyperopt = Hyperopt(default_conf)
|
||||
hyperopt = Hyperopt(hyperopt_conf)
|
||||
hyperopt.backtesting.strategy.ohlcvdata_to_dataframe = MagicMock()
|
||||
hyperopt.custom_hyperopt.generate_roi_table = MagicMock(return_value={})
|
||||
|
||||
@ -994,7 +998,7 @@ def test_print_json_spaces_roi_stoploss(mocker, default_conf, caplog, capsys) ->
|
||||
assert dumper.call_count == 2
|
||||
|
||||
|
||||
def test_simplified_interface_roi_stoploss(mocker, default_conf, caplog, capsys) -> None:
|
||||
def test_simplified_interface_roi_stoploss(mocker, hyperopt_conf, capsys) -> None:
|
||||
dumper = mocker.patch('freqtrade.optimize.hyperopt.dump', MagicMock())
|
||||
mocker.patch('freqtrade.optimize.backtesting.Backtesting.load_bt_data',
|
||||
MagicMock(return_value=(MagicMock(), None)))
|
||||
@ -1019,14 +1023,9 @@ def test_simplified_interface_roi_stoploss(mocker, default_conf, caplog, capsys)
|
||||
)
|
||||
patch_exchange(mocker)
|
||||
|
||||
default_conf.update({'config': 'config.json.example',
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'epochs': 1,
|
||||
'timerange': None,
|
||||
'spaces': 'roi stoploss',
|
||||
'hyperopt_jobs': 1, })
|
||||
hyperopt_conf.update({'spaces': 'roi stoploss'})
|
||||
|
||||
hyperopt = Hyperopt(default_conf)
|
||||
hyperopt = Hyperopt(hyperopt_conf)
|
||||
hyperopt.backtesting.strategy.ohlcvdata_to_dataframe = MagicMock()
|
||||
hyperopt.custom_hyperopt.generate_roi_table = MagicMock(return_value={})
|
||||
|
||||
@ -1047,11 +1046,11 @@ def test_simplified_interface_roi_stoploss(mocker, default_conf, caplog, capsys)
|
||||
assert hasattr(hyperopt.backtesting.strategy, "advise_sell")
|
||||
assert hasattr(hyperopt.backtesting.strategy, "advise_buy")
|
||||
assert hasattr(hyperopt, "max_open_trades")
|
||||
assert hyperopt.max_open_trades == default_conf['max_open_trades']
|
||||
assert hyperopt.max_open_trades == hyperopt_conf['max_open_trades']
|
||||
assert hasattr(hyperopt, "position_stacking")
|
||||
|
||||
|
||||
def test_simplified_interface_all_failed(mocker, default_conf, caplog, capsys) -> None:
|
||||
def test_simplified_interface_all_failed(mocker, hyperopt_conf) -> None:
|
||||
mocker.patch('freqtrade.optimize.hyperopt.dump', MagicMock())
|
||||
mocker.patch('freqtrade.optimize.backtesting.Backtesting.load_bt_data',
|
||||
MagicMock(return_value=(MagicMock(), None)))
|
||||
@ -1062,14 +1061,9 @@ def test_simplified_interface_all_failed(mocker, default_conf, caplog, capsys) -
|
||||
|
||||
patch_exchange(mocker)
|
||||
|
||||
default_conf.update({'config': 'config.json.example',
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'epochs': 1,
|
||||
'timerange': None,
|
||||
'spaces': 'all',
|
||||
'hyperopt_jobs': 1, })
|
||||
hyperopt_conf.update({'spaces': 'all', })
|
||||
|
||||
hyperopt = Hyperopt(default_conf)
|
||||
hyperopt = Hyperopt(hyperopt_conf)
|
||||
hyperopt.backtesting.strategy.ohlcvdata_to_dataframe = MagicMock()
|
||||
hyperopt.custom_hyperopt.generate_roi_table = MagicMock(return_value={})
|
||||
|
||||
@ -1082,7 +1076,7 @@ def test_simplified_interface_all_failed(mocker, default_conf, caplog, capsys) -
|
||||
hyperopt.start()
|
||||
|
||||
|
||||
def test_simplified_interface_buy(mocker, default_conf, caplog, capsys) -> None:
|
||||
def test_simplified_interface_buy(mocker, hyperopt_conf, capsys) -> None:
|
||||
dumper = mocker.patch('freqtrade.optimize.hyperopt.dump', MagicMock())
|
||||
mocker.patch('freqtrade.optimize.backtesting.Backtesting.load_bt_data',
|
||||
MagicMock(return_value=(MagicMock(), None)))
|
||||
@ -1107,14 +1101,9 @@ def test_simplified_interface_buy(mocker, default_conf, caplog, capsys) -> None:
|
||||
)
|
||||
patch_exchange(mocker)
|
||||
|
||||
default_conf.update({'config': 'config.json.example',
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'epochs': 1,
|
||||
'timerange': None,
|
||||
'spaces': 'buy',
|
||||
'hyperopt_jobs': 1, })
|
||||
hyperopt_conf.update({'spaces': 'buy'})
|
||||
|
||||
hyperopt = Hyperopt(default_conf)
|
||||
hyperopt = Hyperopt(hyperopt_conf)
|
||||
hyperopt.backtesting.strategy.ohlcvdata_to_dataframe = MagicMock()
|
||||
hyperopt.custom_hyperopt.generate_roi_table = MagicMock(return_value={})
|
||||
|
||||
@ -1135,11 +1124,11 @@ def test_simplified_interface_buy(mocker, default_conf, caplog, capsys) -> None:
|
||||
assert hasattr(hyperopt.backtesting.strategy, "advise_sell")
|
||||
assert hasattr(hyperopt.backtesting.strategy, "advise_buy")
|
||||
assert hasattr(hyperopt, "max_open_trades")
|
||||
assert hyperopt.max_open_trades == default_conf['max_open_trades']
|
||||
assert hyperopt.max_open_trades == hyperopt_conf['max_open_trades']
|
||||
assert hasattr(hyperopt, "position_stacking")
|
||||
|
||||
|
||||
def test_simplified_interface_sell(mocker, default_conf, caplog, capsys) -> None:
|
||||
def test_simplified_interface_sell(mocker, hyperopt_conf, capsys) -> None:
|
||||
dumper = mocker.patch('freqtrade.optimize.hyperopt.dump', MagicMock())
|
||||
mocker.patch('freqtrade.optimize.backtesting.Backtesting.load_bt_data',
|
||||
MagicMock(return_value=(MagicMock(), None)))
|
||||
@ -1164,14 +1153,9 @@ def test_simplified_interface_sell(mocker, default_conf, caplog, capsys) -> None
|
||||
)
|
||||
patch_exchange(mocker)
|
||||
|
||||
default_conf.update({'config': 'config.json.example',
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'epochs': 1,
|
||||
'timerange': None,
|
||||
'spaces': 'sell',
|
||||
'hyperopt_jobs': 1, })
|
||||
hyperopt_conf.update({'spaces': 'sell', })
|
||||
|
||||
hyperopt = Hyperopt(default_conf)
|
||||
hyperopt = Hyperopt(hyperopt_conf)
|
||||
hyperopt.backtesting.strategy.ohlcvdata_to_dataframe = MagicMock()
|
||||
hyperopt.custom_hyperopt.generate_roi_table = MagicMock(return_value={})
|
||||
|
||||
@ -1192,7 +1176,7 @@ def test_simplified_interface_sell(mocker, default_conf, caplog, capsys) -> None
|
||||
assert hasattr(hyperopt.backtesting.strategy, "advise_sell")
|
||||
assert hasattr(hyperopt.backtesting.strategy, "advise_buy")
|
||||
assert hasattr(hyperopt, "max_open_trades")
|
||||
assert hyperopt.max_open_trades == default_conf['max_open_trades']
|
||||
assert hyperopt.max_open_trades == hyperopt_conf['max_open_trades']
|
||||
assert hasattr(hyperopt, "position_stacking")
|
||||
|
||||
|
||||
@ -1202,7 +1186,7 @@ def test_simplified_interface_sell(mocker, default_conf, caplog, capsys) -> None
|
||||
('sell_strategy_generator', 'sell'),
|
||||
('sell_indicator_space', 'sell'),
|
||||
])
|
||||
def test_simplified_interface_failed(mocker, default_conf, caplog, capsys, method, space) -> None:
|
||||
def test_simplified_interface_failed(mocker, hyperopt_conf, method, space) -> None:
|
||||
mocker.patch('freqtrade.optimize.hyperopt.dump', MagicMock())
|
||||
mocker.patch('freqtrade.optimize.backtesting.Backtesting.load_bt_data',
|
||||
MagicMock(return_value=(MagicMock(), None)))
|
||||
@ -1213,14 +1197,9 @@ def test_simplified_interface_failed(mocker, default_conf, caplog, capsys, metho
|
||||
|
||||
patch_exchange(mocker)
|
||||
|
||||
default_conf.update({'config': 'config.json.example',
|
||||
'hyperopt': 'DefaultHyperOpt',
|
||||
'epochs': 1,
|
||||
'timerange': None,
|
||||
'spaces': space,
|
||||
'hyperopt_jobs': 1, })
|
||||
hyperopt_conf.update({'spaces': space})
|
||||
|
||||
hyperopt = Hyperopt(default_conf)
|
||||
hyperopt = Hyperopt(hyperopt_conf)
|
||||
hyperopt.backtesting.strategy.ohlcvdata_to_dataframe = MagicMock()
|
||||
hyperopt.custom_hyperopt.generate_roi_table = MagicMock(return_value={})
|
||||
|
||||
|
@ -1,16 +1,29 @@
|
||||
import re
|
||||
from datetime import timedelta
|
||||
from pathlib import Path
|
||||
|
||||
import pandas as pd
|
||||
import pytest
|
||||
from arrow import Arrow
|
||||
|
||||
from freqtrade.configuration import TimeRange
|
||||
from freqtrade.constants import LAST_BT_RESULT_FN
|
||||
from freqtrade.data import history
|
||||
from freqtrade.data.btanalysis import (get_latest_backtest_filename,
|
||||
load_backtest_data)
|
||||
from freqtrade.edge import PairInfo
|
||||
from freqtrade.optimize.optimize_reports import (
|
||||
generate_pair_metrics, generate_edge_table, generate_sell_reason_stats,
|
||||
text_table_bt_results, text_table_sell_reason, generate_strategy_metrics,
|
||||
text_table_strategy, store_backtest_result)
|
||||
from freqtrade.optimize.optimize_reports import (generate_backtest_stats,
|
||||
generate_daily_stats,
|
||||
generate_edge_table,
|
||||
generate_pair_metrics,
|
||||
generate_sell_reason_stats,
|
||||
generate_strategy_metrics,
|
||||
store_backtest_stats,
|
||||
text_table_bt_results,
|
||||
text_table_sell_reason,
|
||||
text_table_strategy)
|
||||
from freqtrade.strategy.interface import SellType
|
||||
from tests.conftest import patch_exchange
|
||||
from tests.data.test_history import _backup_file, _clean_test_file
|
||||
|
||||
|
||||
def test_text_table_bt_results(default_conf, mocker):
|
||||
@ -43,6 +56,115 @@ def test_text_table_bt_results(default_conf, mocker):
    assert text_table_bt_results(pair_results, stake_currency='BTC') == result_str


def test_generate_backtest_stats(default_conf, testdatadir):
    results = {'DefStrat': pd.DataFrame({"pair": ["UNITTEST/BTC", "UNITTEST/BTC",
                                                  "UNITTEST/BTC", "UNITTEST/BTC"],
                                         "profit_percent": [0.003312, 0.010801, 0.013803, 0.002780],
                                         "profit_abs": [0.000003, 0.000011, 0.000014, 0.000003],
                                         "open_date": [Arrow(2017, 11, 14, 19, 32, 00).datetime,
                                                       Arrow(2017, 11, 14, 21, 36, 00).datetime,
                                                       Arrow(2017, 11, 14, 22, 12, 00).datetime,
                                                       Arrow(2017, 11, 14, 22, 44, 00).datetime],
                                         "close_date": [Arrow(2017, 11, 14, 21, 35, 00).datetime,
                                                        Arrow(2017, 11, 14, 22, 10, 00).datetime,
                                                        Arrow(2017, 11, 14, 22, 43, 00).datetime,
                                                        Arrow(2017, 11, 14, 22, 58, 00).datetime],
                                         "open_rate": [0.002543, 0.003003, 0.003089, 0.003214],
                                         "close_rate": [0.002546, 0.003014, 0.003103, 0.003217],
                                         "trade_duration": [123, 34, 31, 14],
                                         "open_at_end": [False, False, False, True],
                                         "sell_reason": [SellType.ROI, SellType.STOP_LOSS,
                                                         SellType.ROI, SellType.FORCE_SELL]
                                         })}
    timerange = TimeRange.parse_timerange('1510688220-1510700340')
    min_date = Arrow.fromtimestamp(1510688220)
    max_date = Arrow.fromtimestamp(1510700340)
    btdata = history.load_data(testdatadir, '1m', ['UNITTEST/BTC'], timerange=timerange,
                               fill_up_missing=True)

    stats = generate_backtest_stats(default_conf, btdata, results, min_date, max_date)
    assert isinstance(stats, dict)
    assert 'strategy' in stats
    assert 'DefStrat' in stats['strategy']
    assert 'strategy_comparison' in stats
    strat_stats = stats['strategy']['DefStrat']
    assert strat_stats['backtest_start'] == min_date.datetime
    assert strat_stats['backtest_end'] == max_date.datetime
    assert strat_stats['total_trades'] == len(results['DefStrat'])
    # Above sample had no losing trade
    assert strat_stats['max_drawdown'] == 0.0

    results = {'DefStrat': pd.DataFrame(
        {"pair": ["UNITTEST/BTC", "UNITTEST/BTC", "UNITTEST/BTC", "UNITTEST/BTC"],
         "profit_percent": [0.003312, 0.010801, -0.013803, 0.002780],
         "profit_abs": [0.000003, 0.000011, -0.000014, 0.000003],
         "open_date": [Arrow(2017, 11, 14, 19, 32, 00).datetime,
                       Arrow(2017, 11, 14, 21, 36, 00).datetime,
                       Arrow(2017, 11, 14, 22, 12, 00).datetime,
                       Arrow(2017, 11, 14, 22, 44, 00).datetime],
         "close_date": [Arrow(2017, 11, 14, 21, 35, 00).datetime,
                        Arrow(2017, 11, 14, 22, 10, 00).datetime,
                        Arrow(2017, 11, 14, 22, 43, 00).datetime,
                        Arrow(2017, 11, 14, 22, 58, 00).datetime],
         "open_rate": [0.002543, 0.003003, 0.003089, 0.003214],
         "close_rate": [0.002546, 0.003014, 0.0032903, 0.003217],
         "trade_duration": [123, 34, 31, 14],
         "open_at_end": [False, False, False, True],
         "sell_reason": [SellType.ROI, SellType.STOP_LOSS,
                         SellType.ROI, SellType.FORCE_SELL]
         })}

    assert strat_stats['max_drawdown'] == 0.0
    assert strat_stats['drawdown_start'] == Arrow.fromtimestamp(0).datetime
    assert strat_stats['drawdown_end'] == Arrow.fromtimestamp(0).datetime
    assert strat_stats['drawdown_end_ts'] == 0
    assert strat_stats['drawdown_start_ts'] == 0
    assert strat_stats['pairlist'] == ['UNITTEST/BTC']

    # Test storing stats
    filename = Path(testdatadir / 'btresult.json')
    filename_last = Path(testdatadir / LAST_BT_RESULT_FN)
    _backup_file(filename_last, copy_file=True)
    assert not filename.is_file()

    store_backtest_stats(filename, stats)

    # get real Filename (it's btresult-<date>.json)
    last_fn = get_latest_backtest_filename(filename_last.parent)
    assert re.match(r"btresult-.*\.json", last_fn)

    filename1 = (testdatadir / last_fn)
    assert filename1.is_file()
    content = filename1.read_text()
    assert 'max_drawdown' in content
    assert 'strategy' in content
    assert 'pairlist' in content

    assert filename_last.is_file()

    _clean_test_file(filename_last)
    filename1.unlink()


def test_store_backtest_stats(testdatadir, mocker):

    dump_mock = mocker.patch('freqtrade.optimize.optimize_reports.file_dump_json')

    store_backtest_stats(testdatadir, {})

    assert dump_mock.call_count == 2
    assert isinstance(dump_mock.call_args_list[0][0][0], Path)
    assert str(dump_mock.call_args_list[0][0][0]).startswith(str(testdatadir/'backtest-result'))

    dump_mock.reset_mock()
    filename = testdatadir / 'testresult.json'
    store_backtest_stats(filename, {})
    assert dump_mock.call_count == 2
    assert isinstance(dump_mock.call_args_list[0][0][0], Path)
    # result will be testdatadir / testresult-<timestamp>.json
    assert str(dump_mock.call_args_list[0][0][0]).startswith(str(testdatadir / 'testresult'))


def test_generate_pair_metrics(default_conf, mocker):

    results = pd.DataFrame(
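Taken together, the two new tests above describe the reworked reporting flow: `generate_backtest_stats()` turns the per-strategy trade DataFrames into a stats dict (keyed by `strategy` and `strategy_comparison`), and `store_backtest_stats()` writes it twice, once as a timestamped `<name>-<date>.json` and once as the `.last_result.json` pointer (hence `call_count == 2`). The sketch below is illustrative only and is not part of this commit; it reuses the functions and call signatures shown in the hunk and assumes a freqtrade installation with the bundled test data.

```python
from pathlib import Path

from arrow import Arrow

from freqtrade.data import history
from freqtrade.data.btanalysis import get_latest_backtest_filename
from freqtrade.optimize.optimize_reports import (generate_backtest_stats,
                                                 store_backtest_stats)


def report_and_store(config: dict, results: dict, datadir: Path) -> dict:
    """Hypothetical helper: build and persist backtest statistics.

    `config` is a loaded freqtrade configuration and `results` maps
    strategy name -> trades DataFrame, as constructed in the test above.
    """
    min_date = Arrow.fromtimestamp(1510688220)
    max_date = Arrow.fromtimestamp(1510700340)
    btdata = history.load_data(datadir, '1m', ['UNITTEST/BTC'],
                               fill_up_missing=True)

    stats = generate_backtest_stats(config, btdata, results, min_date, max_date)

    # Writes <stem>-<timestamp>.json next to the requested name and updates
    # the .last_result.json pointer (two file_dump_json calls in total).
    store_backtest_stats(datadir / 'btresult.json', stats)
    print("latest result:", get_latest_backtest_filename(datadir))
    return stats
```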
@ -68,6 +190,29 @@ def test_generate_pair_metrics(default_conf, mocker):
        pytest.approx(pair_results[-1]['profit_sum_pct']) == pair_results[-1]['profit_sum'] * 100)


def test_generate_daily_stats(testdatadir):

    filename = testdatadir / "backtest-result_new.json"
    bt_data = load_backtest_data(filename)
    res = generate_daily_stats(bt_data)
    assert isinstance(res, dict)
    assert round(res['backtest_best_day'], 4) == 0.1796
    assert round(res['backtest_worst_day'], 4) == -0.1468
    assert res['winning_days'] == 14
    assert res['draw_days'] == 4
    assert res['losing_days'] == 3
    assert res['winner_holding_avg'] == timedelta(seconds=1440)
    assert res['loser_holding_avg'] == timedelta(days=1, seconds=21420)

    # Select empty dataframe!
    res = generate_daily_stats(bt_data.loc[bt_data['open_date'] == '2000-01-01', :])
    assert isinstance(res, dict)
    assert round(res['backtest_best_day'], 4) == 0.0
    assert res['winning_days'] == 0
    assert res['draw_days'] == 0
    assert res['losing_days'] == 0


def test_text_table_sell_reason(default_conf):

    results = pd.DataFrame(
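`test_generate_daily_stats` pins the per-day aggregation down to concrete numbers from `backtest-result_new.json`. As a rough mental model only (not the commit's implementation), the keys it asserts can be approximated by grouping closed trades by the calendar day of their close and summing `profit_percent`; the expected timedeltas suggest `trade_duration` is recorded in minutes. A hypothetical sketch:

```python
from datetime import timedelta

import pandas as pd


def daily_stats_sketch(trades: pd.DataFrame) -> dict:
    """Hypothetical approximation of the values asserted above."""
    if len(trades) == 0:
        return {'backtest_best_day': 0.0, 'backtest_worst_day': 0.0,
                'winning_days': 0, 'draw_days': 0, 'losing_days': 0,
                'winner_holding_avg': timedelta(), 'loser_holding_avg': timedelta()}

    # Daily profit: sum of profit_percent per calendar day of the close.
    daily = trades.resample('1d', on='close_date')['profit_percent'].sum()
    winners = trades.loc[trades['profit_percent'] > 0, 'trade_duration']
    losers = trades.loc[trades['profit_percent'] < 0, 'trade_duration']
    return {
        'backtest_best_day': daily.max(),
        'backtest_worst_day': daily.min(),
        'winning_days': int((daily > 0).sum()),
        'draw_days': int((daily == 0).sum()),
        'losing_days': int((daily < 0).sum()),
        # trade_duration appears to be minutes, judging by the expected timedeltas
        'winner_holding_avg': timedelta(minutes=round(winners.mean())) if len(winners) else timedelta(),
        'loser_holding_avg': timedelta(minutes=round(losers.mean())) if len(losers) else timedelta(),
    }
```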
@ -188,77 +333,3 @@ def test_generate_edge_table(edge_conf, mocker):
    assert generate_edge_table(results).count('| ETH/BTC |') == 1
    assert generate_edge_table(results).count(
        '| Risk Reward Ratio | Required Risk Reward | Expectancy |') == 1


def test_backtest_record(default_conf, fee, mocker):
    names = []
    records = []
    patch_exchange(mocker)
    mocker.patch('freqtrade.exchange.Exchange.get_fee', fee)
    mocker.patch(
        'freqtrade.optimize.optimize_reports.file_dump_json',
        new=lambda n, r: (names.append(n), records.append(r))
    )

    results = {'DefStrat': pd.DataFrame({"pair": ["UNITTEST/BTC", "UNITTEST/BTC",
                                                  "UNITTEST/BTC", "UNITTEST/BTC"],
                                         "profit_percent": [0.003312, 0.010801, 0.013803, 0.002780],
                                         "profit_abs": [0.000003, 0.000011, 0.000014, 0.000003],
                                         "open_time": [Arrow(2017, 11, 14, 19, 32, 00).datetime,
                                                       Arrow(2017, 11, 14, 21, 36, 00).datetime,
                                                       Arrow(2017, 11, 14, 22, 12, 00).datetime,
                                                       Arrow(2017, 11, 14, 22, 44, 00).datetime],
                                         "close_time": [Arrow(2017, 11, 14, 21, 35, 00).datetime,
                                                        Arrow(2017, 11, 14, 22, 10, 00).datetime,
                                                        Arrow(2017, 11, 14, 22, 43, 00).datetime,
                                                        Arrow(2017, 11, 14, 22, 58, 00).datetime],
                                         "open_rate": [0.002543, 0.003003, 0.003089, 0.003214],
                                         "close_rate": [0.002546, 0.003014, 0.003103, 0.003217],
                                         "open_index": [1, 119, 153, 185],
                                         "close_index": [118, 151, 184, 199],
                                         "trade_duration": [123, 34, 31, 14],
                                         "open_at_end": [False, False, False, True],
                                         "sell_reason": [SellType.ROI, SellType.STOP_LOSS,
                                                         SellType.ROI, SellType.FORCE_SELL]
                                         })}
    store_backtest_result(Path("backtest-result.json"), results)
    # Assert file_dump_json was only called once
    assert names == [Path('backtest-result.json')]
    records = records[0]
    # Ensure records are of correct type
    assert len(records) == 4

    # reset test to test with strategy name
    names = []
    records = []
    results['Strat'] = results['DefStrat']
    results['Strat2'] = results['DefStrat']
    store_backtest_result(Path("backtest-result.json"), results)
    assert names == [
        Path('backtest-result-DefStrat.json'),
        Path('backtest-result-Strat.json'),
        Path('backtest-result-Strat2.json'),
    ]
    records = records[0]
    # Ensure records are of correct type
    assert len(records) == 4

    # ('UNITTEST/BTC', 0.00331158, '1510684320', '1510691700', 0, 117)
    # Below follows just a typecheck of the schema/type of trade-records
    oix = None
    for (pair, profit, date_buy, date_sell, buy_index, dur,
         openr, closer, open_at_end, sell_reason) in records:
        assert pair == 'UNITTEST/BTC'
        assert isinstance(profit, float)
        # FIX: buy/sell should be converted to ints
        assert isinstance(date_buy, float)
        assert isinstance(date_sell, float)
        assert isinstance(openr, float)
        assert isinstance(closer, float)
        assert isinstance(open_at_end, bool)
        assert isinstance(sell_reason, str)
        isinstance(buy_index, pd._libs.tslib.Timestamp)
        if oix:
            assert buy_index > oix
        oix = buy_index
        assert dur > 0
@ -468,7 +468,9 @@ def test_pairlist_class(mocker, whitelist_conf, markets, pairlist):
    # BCH/BTC not available
    (['ETH/BTC', 'TKN/BTC', 'BCH/BTC'], "is not compatible with exchange"),
    # BTT/BTC is inactive
    (['ETH/BTC', 'TKN/BTC', 'BTT/BTC'], "Market is not active")
    (['ETH/BTC', 'TKN/BTC', 'BTT/BTC'], "Market is not active"),
    # XLTCUSDT is not a valid pair
    (['ETH/BTC', 'TKN/BTC', 'XLTCUSDT'], "is not tradable with Freqtrade"),
])
def test__whitelist_for_active_markets(mocker, whitelist_conf, markets, pairlist, whitelist, caplog,
                                       log_message, tickers):
@ -547,7 +549,7 @@ def test_agefilter_min_days_listed_too_small(mocker, default_conf, markets, tick
    )

    with pytest.raises(OperationalException,
                       match=r'AgeFilter requires min_days_listed must be >= 1'):
                       match=r'AgeFilter requires min_days_listed to be >= 1'):
        get_patched_freqtradebot(mocker, default_conf)

@ -562,7 +564,7 @@ def test_agefilter_min_days_listed_too_large(mocker, default_conf, markets, tick
    )

    with pytest.raises(OperationalException,
                       match=r'AgeFilter requires min_days_listed must not exceed '
                       match=r'AgeFilter requires min_days_listed to not exceed '
                             r'exchange max request size \([0-9]+\)'):
        get_patched_freqtradebot(mocker, default_conf)

@ -590,34 +592,58 @@ def test_agefilter_caching(mocker, markets, whitelist_conf_3, tickers, ohlcv_his
    assert freqtrade.exchange.get_historic_ohlcv.call_count == previous_call_count


@pytest.mark.parametrize("pairlistconfig,expected", [
@pytest.mark.parametrize("pairlistconfig,desc_expected,exception_expected", [
    ({"method": "PriceFilter", "low_price_ratio": 0.001, "min_price": 0.00000010,
      "max_price": 1.0}, "[{'PriceFilter': 'PriceFilter - Filtering pairs priced below "
                         "0.1% or below 0.00000010 or above 1.00000000.'}]"
      "max_price": 1.0},
     "[{'PriceFilter': 'PriceFilter - Filtering pairs priced below "
     "0.1% or below 0.00000010 or above 1.00000000.'}]",
     None
     ),
    ({"method": "PriceFilter", "low_price_ratio": 0.001, "min_price": 0.00000010},
     "[{'PriceFilter': 'PriceFilter - Filtering pairs priced below 0.1% or below 0.00000010.'}]"
     "[{'PriceFilter': 'PriceFilter - Filtering pairs priced below 0.1% or below 0.00000010.'}]",
     None
     ),
    ({"method": "PriceFilter", "low_price_ratio": 0.001, "max_price": 1.00010000},
     "[{'PriceFilter': 'PriceFilter - Filtering pairs priced below 0.1% or above 1.00010000.'}]"
     "[{'PriceFilter': 'PriceFilter - Filtering pairs priced below 0.1% or above 1.00010000.'}]",
     None
     ),
    ({"method": "PriceFilter", "min_price": 0.00002000},
     "[{'PriceFilter': 'PriceFilter - Filtering pairs priced below 0.00002000.'}]"
     "[{'PriceFilter': 'PriceFilter - Filtering pairs priced below 0.00002000.'}]",
     None
     ),
    ({"method": "PriceFilter"},
     "[{'PriceFilter': 'PriceFilter - No price filters configured.'}]"
     "[{'PriceFilter': 'PriceFilter - No price filters configured.'}]",
     None
     ),
    ({"method": "PriceFilter", "low_price_ratio": -0.001},
     None,
     "PriceFilter requires low_price_ratio to be >= 0"
     ), # OperationalException expected
    ({"method": "PriceFilter", "min_price": -0.00000010},
     None,
     "PriceFilter requires min_price to be >= 0"
     ), # OperationalException expected
    ({"method": "PriceFilter", "max_price": -1.00010000},
     None,
     "PriceFilter requires max_price to be >= 0"
     ), # OperationalException expected
])
def test_pricefilter_desc(mocker, whitelist_conf, markets, pairlistconfig, expected):
def test_pricefilter_desc(mocker, whitelist_conf, markets, pairlistconfig,
                          desc_expected, exception_expected):
    mocker.patch.multiple('freqtrade.exchange.Exchange',
                          markets=PropertyMock(return_value=markets),
                          exchange_has=MagicMock(return_value=True)
                          )
    whitelist_conf['pairlists'] = [pairlistconfig]

    if desc_expected is not None:
        freqtrade = get_patched_freqtradebot(mocker, whitelist_conf)
        short_desc = str(freqtrade.pairlists.short_desc())
        assert short_desc == expected
        assert short_desc == desc_expected
    else: # OperationalException expected
        with pytest.raises(OperationalException,
                           match=exception_expected):
            freqtrade = get_patched_freqtradebot(mocker, whitelist_conf)


def test_pairlistmanager_no_pairlist(mocker, markets, whitelist_conf, caplog):
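The parametrization now carries both the expected `short_desc()` string and the expected validation error, so each PriceFilter option (`low_price_ratio`, `min_price`, `max_price`) is checked against the new "to be >= 0" guard. Outside the parametrized form, the same behaviour could be exercised as in the hypothetical sketch below; it assumes the imports and fixtures already used by the surrounding tests (`OperationalException` is imported at the top of the test module, which is not shown in this hunk).

```python
def test_pricefilter_usage_sketch(mocker, whitelist_conf, markets):
    # Same exchange mocking as test_pricefilter_desc above.
    mocker.patch.multiple('freqtrade.exchange.Exchange',
                          markets=PropertyMock(return_value=markets),
                          exchange_has=MagicMock(return_value=True))

    # A well-formed PriceFilter produces a human-readable description.
    whitelist_conf['pairlists'] = [{"method": "PriceFilter",
                                    "low_price_ratio": 0.001,
                                    "min_price": 0.00000010,
                                    "max_price": 1.0}]
    freqtrade = get_patched_freqtradebot(mocker, whitelist_conf)
    assert 'PriceFilter' in str(freqtrade.pairlists.short_desc())

    # Any negative option now raises at bot construction time.
    whitelist_conf['pairlists'] = [{"method": "PriceFilter", "low_price_ratio": -0.001}]
    with pytest.raises(OperationalException, match="low_price_ratio to be >= 0"):
        get_patched_freqtradebot(mocker, whitelist_conf)
```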
@ -255,11 +255,11 @@ def test_rpc_daily_profit(default_conf, update, ticker, fee,
    assert days['fiat_display_currency'] == default_conf['fiat_display_currency']
    for day in days['data']:
        # [datetime.date(2018, 1, 11), '0.00000000 BTC', '0.000 USD']
        assert (day['abs_profit'] == '0.00000000' or
                day['abs_profit'] == '0.00006217')
        assert (day['abs_profit'] == 0.0 or
                day['abs_profit'] == 0.00006217)

        assert (day['fiat_value'] == '0.000' or
                day['fiat_value'] == '0.767')
        assert (day['fiat_value'] == 0.0 or
                day['fiat_value'] == 0.76748865)
    # ensure first day is current date
    assert str(days['data'][0]['date']) == str(datetime.utcnow().date())

@ -321,7 +321,7 @@ def test_edge_overrides_stoploss(limit_buy_order, fee, caplog, mocker, edge_conf

    # stoploss should be hit
    assert freqtrade.handle_trade(trade) is True
    assert log_has('Executing Sell for NEO/BTC. Reason: SellType.STOP_LOSS', caplog)
    assert log_has('Executing Sell for NEO/BTC. Reason: stop_loss', caplog)
    assert trade.sell_reason == SellType.STOP_LOSS.value

@ -267,7 +267,7 @@ def test_generate_profit_graph(testdatadir):
    trades = load_backtest_data(filename)
    timerange = TimeRange.parse_timerange("20180110-20180112")
    pairs = ["TRX/BTC", "XLM/BTC"]
    trades = trades[trades['close_time'] < pd.Timestamp('2018-01-12', tz='UTC')]
    trades = trades[trades['close_date'] < pd.Timestamp('2018-01-12', tz='UTC')]

    data = history.load_data(datadir=testdatadir,
                             pairs=pairs,
1
tests/testdata/.last_result.json
vendored
Normal file
@ -0,0 +1 @@
{"latest_backtest":"backtest-result_new.json"}
1
tests/testdata/backtest-result_multistrat.json
vendored
Normal file
File diff suppressed because one or more lines are too long
1
tests/testdata/backtest-result_new.json
vendored
Normal file
File diff suppressed because one or more lines are too long